Recent years have witnessed great progress in creating vivid audio-driven
portraits from monocular videos. However, how to seamlessly adapt the created
video avatars to other scenarios with different backgrounds and lighting
conditions remains unsolved. On the other hand, existing relighting studies
mostly rely on dynamically lit or multi-view data, which are too expensive
for creating video portraits. To bridge this gap, we propose ReliTalk, a novel
framework for relightable audio-driven talking portrait generation from
monocular videos. Our key insight is to decompose the portrait's reflectance
from implicitly learned audio-driven facial normals and images. Specifically,
we involve 3D facial priors derived from audio features to predict delicate
normal maps through implicit functions. These initially predicted normals then
play a crucial role in reflectance decomposition by dynamically estimating the
lighting condition of the given video. Moreover, the stereoscopic face
representation is refined using an identity-consistency loss under multiple
simulated lighting conditions, addressing the ill-posed problem caused by
limited views available from a single monocular video. Extensive experiments
validate the superiority of our proposed framework on both real and synthetic
datasets. Our code is released at https://github.com/arthur-qiu/ReliTalk.