Dissecting and Mitigating Semantic Discrepancy in Stable Diffusion for Image-to-Image Translation
YF Yuan and GQ Yang and JZ Wang and H Zhang and HM Shan and FY Wang and JP Zhang, IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 12, 705-718 (2025).
DOI: 10.1109/JAS.2024.124800
Finding suitable initial noise that retains the original image's information is crucial for image-to-image (I2I) translation using text-to-image (T2I) diffusion models. A common approach is to add random noise directly to the original image, as in SDEdit. However, we have observed that this can result in "semantic discrepancy" issues, wherein T2I diffusion models misinterpret the semantic relationships and generate content not present in the original image. We identify that the noise introduced by SDEdit disrupts the semantic integrity of the image, leading to unintended associations between unrelated regions after U-Net upsampling. Building on the widely used latent diffusion model, Stable Diffusion, we propose a training-free, plug-and-play method to alleviate semantic discrepancy and enhance the fidelity of the translated image. By leveraging the deterministic nature of denoising diffusion implicit model (DDIM) inversion, we correct the erroneous features and correlations from the original generative process with accurate ones from DDIM inversion. This approach alleviates semantic discrepancy and surpasses recent DDIM-inversion-based methods such as PnP with fewer priors, achieving a speedup of 11.2 times in experiments conducted on the COCO, ImageNet, and ImageNet-R datasets across multiple I2I translation tasks.
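The two initialization strategies contrasted in the abstract can be sketched in a few lines. This is a minimal illustration of the general concepts, not the paper's method: `sdedit_init` adds random Gaussian noise to the original latent (stochastic, as in SDEdit), while `ddim_inversion_step` is one deterministic DDIM inversion step; the noise prediction `eps_pred` is a stand-in for a real U-Net's output, and the schedule values are placeholders.

```python
import numpy as np

def sdedit_init(x0, alpha_bar_t, rng):
    """SDEdit-style initialization: noise the original latent x0 to
    timestep t by sampling fresh Gaussian noise (stochastic)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps

def ddim_inversion_step(x_t, eps_pred, alpha_bar_t, alpha_bar_next):
    """One deterministic DDIM inversion step from t to the next (noisier)
    timestep, driven by the model's noise prediction eps_pred."""
    # Predict the clean latent implied by the current state and eps_pred.
    x0_pred = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_bar_t)
    # Re-noise deterministically toward the next timestep: no sampling,
    # so the same input always maps to the same output.
    return np.sqrt(alpha_bar_next) * x0_pred + np.sqrt(1.0 - alpha_bar_next) * eps_pred
```

The key contrast is that `sdedit_init` discards information (a different random draw yields a different starting latent), whereas DDIM inversion is invertible, which is what lets the paper recover accurate features and correlations to correct the generative process.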