UniEdit-Flow: Unleashing Inversion and Editing in the Era of Flow Models

Guanlong Jiao1,3  Biqing Huang1  Kuan-Chieh Wang2  Renjie Liao3 

1Tsinghua University  2Snap Inc.  3The University of British Columbia 



UniEdit-Flow for image inversion and editing. Our approach is a highly accurate, efficient, model-agnostic, and training- and tuning-free sampling strategy for flow models that tackles image inversion and editing. Cluttered scenes are difficult to invert and reconstruct, causing various methods to fail, whereas our Uni-Inv achieves exact reconstruction even in such complex situations (1st row). Furthermore, existing flow-based editing methods often suffer from undesirable side effects; our region-aware, sampling-based Uni-Edit delivers excellent performance in both editing quality and background preservation (2nd row).
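For context, inversion for a flow model amounts to integrating the learned velocity field's ODE in the reverse direction of sampling. The sketch below shows a plain Euler integrator for that ODE. It is a minimal, generic illustration only, not the paper's Uni-Inv or Uni-Edit algorithm; the velocity-model interface v_theta(x, t, cond), the time convention, and the step count are assumptions.

import torch

@torch.no_grad()
def euler_flow_integrate(v_theta, x, cond, t_start, t_end, num_steps):
    # Plain Euler integration of the flow ODE dx/dt = v_theta(x, t, cond).
    # Generic illustration only -- NOT Uni-Inv/Uni-Edit. The integration
    # direction (t_start -> t_end) determines whether this denoises or
    # inverts, depending on the model's time convention.
    ts = torch.linspace(t_start, t_end, num_steps + 1)
    for i in range(num_steps):
        t, t_next = ts[i], ts[i + 1]
        v = v_theta(x, t, cond)       # assumed velocity-model interface
        x = x + (t_next - t) * v      # Euler step
    return x

# Naive inversion/reconstruction round trip (assuming data at t = 0, noise at t = 1):
# latent = euler_flow_integrate(v_theta, image, cond, 0.0, 1.0, num_steps=15)
# recon  = euler_flow_integrate(v_theta, latent, cond, 1.0, 0.0, num_steps=15)

A naive round trip like this accumulates discretization error, which is exactly the reconstruction failure the teaser highlights; Uni-Inv is the paper's remedy for it.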


Quantitative Comparison


Text-driven image editing comparison on PIE-Bench. We report the results published in each baseline's peer-reviewed paper and evaluate our proposed Uni-Edit with the relatively lightweight Stable Diffusion 3 (SD3) and with FLUX to demonstrate its effectiveness. The best and second-best results are bolded and underlined, respectively. Cells are shaded from worse to better.

Text-driven image editing comparison on PIE-Bench based on diffusion models. We evaluate our proposed Uni-Edit using SDXL (\(\texttt{RealVisXL\_V4.0}\)). We keep the same hyper-parameter settings as in our main experiments (i.e., \(\alpha = 0.6\) and \(\omega = 5\)) and use 50 and 15 sampling steps. Apart from tuning-based methods, which are marked in gray, the best and second-best results are bolded and underlined, respectively.
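For concreteness, the snippet below collects the reported SDXL evaluation settings into a single configuration. The uni_edit entry point and its argument names are hypothetical, and the Hugging Face repo ID is an assumption; only the numeric values come from the caption above.

# Hypothetical configuration mirroring the reported SDXL setting.
# `uni_edit` and its arguments are NOT an official API; only the
# numeric values (alpha, omega, step counts) come from the caption.
sdxl_config = {
    "base_model": "SG161222/RealVisXL_V4.0",  # assumed Hugging Face repo ID
    "alpha": 0.6,      # hyper-parameter alpha, same as the main experiments
    "omega": 5,        # hyper-parameter omega, same as the main experiments
    "num_steps": 50,   # the table also reports a 15-step variant
}

# edited = uni_edit(image, source_prompt, target_prompt, **sdxl_config)  # hypothetical call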


Qualitative Comparison






BibTeX

@misc{jiao2025unieditflowunleashinginversionediting,
    title={UniEdit-Flow: Unleashing Inversion and Editing in the Era of Flow Models}, 
    author={Guanlong Jiao and Biqing Huang and Kuan-Chieh Wang and Renjie Liao},
    year={2025},
    eprint={2504.13109},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2504.13109}, 
}