Existing research has made impressive strides in reconstructing human facial shapes and textures from images with well-illuminated faces and minimal external occlusions. Nevertheless, it remains challenging to recover accurate facial textures under complicated illumination affected by external occlusions, e.g., a face partially obscured by items such as a hat. Existing works built on the assumption of a single, uniform illumination cannot handle such data correctly. In this work, we introduce a novel approach to modeling 3D facial textures under such unnatural illumination. Instead of assuming a single light source, our framework, named Light Decoupling, learns to imitate the unnatural illumination as a composition of multiple separate light conditions combined with learned neural representations. Experiments on both single images and video sequences demonstrate the effectiveness of our approach in modeling facial textures under challenging illumination affected by occlusions.
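To make the decoupling idea concrete, below is a minimal sketch of how multiple light conditions might be blended by learned per-pixel weights. This is our own illustrative reading of the abstract, not the authors' implementation: all names (LightDecoupler, weight_net, num_lights) are hypothetical, and we assume Lambertian shading with second-order spherical harmonics per light condition.

import torch
import torch.nn as nn

class LightDecoupler(nn.Module):
    """Illustrative sketch: blend K separate SH light conditions with
    per-pixel weights predicted by a small network (not the paper's code)."""
    def __init__(self, num_lights: int = 4, sh_dim: int = 9):
        super().__init__()
        # One set of spherical-harmonics coefficients per light condition.
        self.sh_coeffs = nn.Parameter(torch.zeros(num_lights, sh_dim, 3))
        # Small CNN that predicts per-pixel blending weights over the lights.
        self.weight_net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_lights, 3, padding=1),
        )

    def forward(self, image, sh_basis):
        # image:    (B, 3, H, W) input face image
        # sh_basis: (B, 9, H, W) SH basis evaluated at surface normals
        weights = torch.softmax(self.weight_net(image), dim=1)  # (B, K, H, W)
        # Shading from each light: sum over basis terms, per RGB channel.
        shading = torch.einsum('bnhw,knc->bkchw', sh_basis, self.sh_coeffs)
        # Blend the K shading maps with the learned spatial weights.
        return (weights.unsqueeze(2) * shading).sum(dim=1)  # (B, 3, H, W)

# Example: blend 4 light conditions over a 256x256 face crop.
model = LightDecoupler(num_lights=4)
img = torch.rand(1, 3, 256, 256)
basis = torch.rand(1, 9, 256, 256)
shading = model(img, basis)  # (1, 3, 256, 256)

Under the Lambertian assumption above, a clean texture could then be recovered by dividing the observed image by the blended shading; again, this is a sketch of the idea, not the paper's exact pipeline.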
In the comparisons, face textures extracted from source images with noticeable external occlusions are used to synthesize target images free of occlusions. The results confirm that our method effectively recovers clean face textures from images affected by shadows cast by external objects or self-occlusions.
Comparisons on Single Images.
Comparisons on Video Sequences.
@inproceedings{huanglearning,
title={Learning to Decouple the Lights for 3D Face Texture Modeling},
author={Huang, Tianxin and Zhang, Zhenyu and Tai, Ying and Lee, Gim Hee},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024}
}