Video Face Re-Aging: Toward Temporally Consistent Face Re-Aging

Teaser

De-aging

Abstract

Video face re-aging alters the apparent age of a person in a video to a target age. The problem is challenging due to the lack of paired video datasets that maintain temporal consistency in identity and age. Most re-aging methods process each frame individually, ignoring the temporal consistency of the video. Some existing works address temporal coherence by manipulating video facial attributes in latent space, but they often deliver unsatisfactory age transformation. To tackle these issues, we propose (1) a novel synthetic video dataset featuring subjects across a diverse range of age groups; (2) a baseline architecture designed to validate the effectiveness of the proposed dataset; and (3) three novel metrics tailored explicitly to evaluating the temporal consistency of video re-aging methods. Comprehensive experiments on public datasets such as VFHQ and CelebV-HQ show that our method outperforms existing approaches in both age transformation and temporal consistency.
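As a rough illustration of what a temporal-consistency metric for re-aged video can look like, the sketch below measures the frame-to-frame stability of a pretrained age estimator's predictions. This is a generic construction for illustration only, not necessarily one of the three metrics proposed here; `age_estimator` is a hypothetical per-frame age predictor.

```python
# Illustrative temporal-consistency measure for a re-aged video: the mean
# absolute change in predicted age between consecutive frames. Lower values
# indicate a more stable apparent age across the video. This is a generic
# sketch; `age_estimator` is a hypothetical pretrained age predictor.

def temporal_age_consistency(frames, age_estimator):
    """Return the average frame-to-frame fluctuation in predicted age."""
    if len(frames) < 2:
        return 0.0
    ages = [age_estimator(frame) for frame in frames]
    diffs = [abs(a - b) for a, b in zip(ages[:-1], ages[1:])]
    return sum(diffs) / len(diffs)
```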

Method

Firstly, high-resolution synthetic facial images are created using StyleGAN. Subsequently, images of the same individuals at different target ages are generated using existing image re-aging methods. Next, key frames are produced with reenactment methods, which alter the pose and expression of these synthetic images. Finally, motion is added between the key frames using frame-interpolation methods, yielding smooth, high-fidelity videos of subjects at different ages. The sketch below summarizes this pipeline.
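The following is a minimal sketch of the four-stage dataset-generation pipeline, assuming generic model interfaces. All names (`generator`, `reager`, `reenactor`, `interpolator`) are hypothetical placeholders standing in for a StyleGAN generator, an image re-aging model, a face reenactment model, and a frame-interpolation model; they are not APIs from this work.

```python
# Hypothetical sketch of the dataset-generation pipeline described above.
# The four model arguments are placeholders, not actual APIs from the paper.

import torch

def build_synthetic_videos(generator, reager, reenactor, interpolator,
                           target_ages, driving_frames, frames_between=4):
    """Generate one synthetic video clip per target age for a single identity."""
    # Step 1: sample a high-resolution synthetic face with StyleGAN.
    latent = torch.randn(1, generator.z_dim)
    source_face = generator(latent)

    videos = {}
    for age in target_ages:
        # Step 2: transform the synthetic face to the target age.
        aged_face = reager(source_face, target_age=age)

        # Step 3: produce key frames by driving pose and expression
        # changes with a reenactment model.
        key_frames = [reenactor(aged_face, driving) for driving in driving_frames]

        # Step 4: interpolate between consecutive key frames to obtain
        # smooth, high-fidelity motion.
        clip = []
        for prev, nxt in zip(key_frames[:-1], key_frames[1:]):
            clip.append(prev)
            clip.extend(interpolator(prev, nxt, num_frames=frames_between))
        clip.append(key_frames[-1])
        videos[age] = clip
    return videos
```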

More results

Our method generalizes to a wide range of Internet videos, demonstrating its efficacy under diverse and challenging conditions.