FastFaceCLIP: A lightweight text-driven high-quality face image manipulation

JQ Ren and JP Qin and QL Ma and Y Cao, IET COMPUTER VISION, 18, 950-967 (2024).

DOI: 10.1049/cvi2.12295

Although many new methods have emerged for text-driven image manipulation, the large computational power required for model training makes their training slow. These methods also consume considerable video random access memory (VRAM) during training; when generating high-resolution images, VRAM is often insufficient, making high-resolution generation impossible. Meanwhile, recent advances in Vision Transformers (ViTs) have demonstrated strong image classification and recognition capabilities. Unlike traditional Convolutional Neural Network-based methods, ViTs have a Transformer-based architecture and leverage attention mechanisms to capture comprehensive global information; their inherent long-range dependencies enable a stronger global understanding of images, yielding more robust features and comparable results at reduced computational cost. The adaptability of ViTs to text-driven image manipulation was investigated. Specifically, existing image generation methods were refined and the FastFaceCLIP method was proposed, combining the image-text semantic alignment capability of the pre-trained CLIP model with the high-resolution image generation capability of the proposed FastFace. Additionally, a Multi-Axis Nested Transformer module was incorporated for advanced feature extraction from the latent space, generating higher-resolution images that are further enhanced with the Real-ESRGAN algorithm. Finally, extensive face-manipulation tests on the CelebA-HQ dataset compare the proposed method against related schemes, demonstrating that FastFaceCLIP generates semantically accurate, visually realistic, and sharp images using fewer parameters and less time.
A novel image control method was developed that synergises the robust generative capacity of FastFace, rooted in the ViT model architecture, with the strong visual-text encoding of CLIP. The proposed scheme is effective at editing a variety of real and cartoon portraits, achieving some manipulations unattainable with current annotation-dependent methods. Moreover, CLIP enables fine-grained editing controls, such as specifying desired hairstyles. The method also allows manipulating the intensity of image features and editing images with text prompts that carry multiple semantic meanings.
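To illustrate the kind of CLIP-based image-text semantic alignment described above, a commonly used formulation is a directional loss: the edit direction in CLIP image-embedding space is encouraged to align with the direction between source and target text embeddings. The sketch below is a minimal, hypothetical illustration with toy numpy vectors standing in for CLIP embeddings; it is not the paper's actual training objective, and the function names are assumptions for this example only.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two non-zero vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def directional_clip_loss(img_emb, img_emb_edited, txt_emb_src, txt_emb_tgt):
    """CLIP-style directional loss: the change in the image embedding
    (edited minus original) should point the same way as the change in
    the text embedding (target prompt minus source prompt).
    Returns 1 - cosine similarity: 0 when perfectly aligned, 2 when opposed."""
    d_img = img_emb_edited - img_emb
    d_txt = txt_emb_tgt - txt_emb_src
    return 1.0 - cosine_sim(d_img, d_txt)

# Toy example: an edit that moves the image embedding in the same
# direction as the text edit yields a loss near zero.
img = np.array([1.0, 0.0])
edited = np.array([1.0, 1.0])         # edit direction: [0, 1]
src_txt = np.array([0.0, 0.0])
tgt_txt = np.array([0.0, 2.0])        # text direction: [0, 2], same direction
loss = directional_clip_loss(img, edited, src_txt, tgt_txt)
```

In a real pipeline the four vectors would come from CLIP's image and text encoders, and the loss would be minimised with respect to the generator's latent code while the encoders stay frozen.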