GuideFlow3D
Optimization-Guided Rectified Flow For 3D Appearance Transfer
Sayan Deb Sarkar 1
Sinisa Stekovic 2
Vincent Lepetit 2
Iro Armeni 1
1 Stanford University
2 ENPC, IP Paris
NeurIPS 2025
TL;DR: A training-free method that steers pre-trained generative rectified flow with differentiable guidance for robust, geometry-aware 3D appearance transfer across shapes and modalities.
Transferring appearance to 3D assets using different representations of the appearance object, such as images or text, has garnered interest due to its wide range of applications in industries like gaming, augmented reality, and digital content creation. However, state-of-the-art methods still fail when the geometry of the input and appearance objects differs significantly. A straightforward approach is to directly apply a 3D generative model, but we show that this ultimately fails to produce appealing results. Instead, we propose a principled approach inspired by universal guidance. Given a pretrained rectified flow model conditioned on image or text, our training-free method interacts with the sampling process by periodically adding guidance. This guidance can be modeled as a differentiable loss function, and we experiment with two types of guidance: part-aware losses for appearance and self-similarity. Our experiments show that our approach successfully transfers texture and geometric details to the input 3D asset, outperforming baselines both qualitatively and quantitatively. We also show that traditional metrics are unsuitable for evaluating this task, since, in the absence of ground-truth data, they cannot focus on local details or compare dissimilar inputs. We therefore evaluate appearance transfer quality with a GPT-based system that objectively ranks outputs, ensuring robust and human-like assessment, as further confirmed by our user study. Beyond the showcased scenarios, our method is general and could be extended to other types of diffusion models and guidance functions.
Appearance Based Transfer

Transfers fine-grained texture and material details through part-aware correspondence between shapes.

Self-Similarity Based Transfer

Captures coherent structure and detail by guiding transfer through internal self-similarity cues.

Application | 3D Scene Editing

Seamlessly stylize objects while preserving their geometry and spatial layout for interactive 3D context-aware scene restyling.

Methodology

Overview of the method

We introduce GuideFlow3D, a training-free framework for 3D appearance transfer that enables fine-grained control over both geometry and texture, even across objects with large shape differences. Unlike prior 3D style transfer methods that require retraining or rely on multi-view diffusion, our approach directly steers a pretrained 3D generative model during inference through guided rectified flow sampling. This mechanism interleaves latent flow updates with differentiable optimization, allowing the model to adaptively incorporate new guidance objectives without modifying its learned parameters.
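The interleaving of flow updates and optimization can be sketched as a guided Euler sampler in the style of universal guidance. This is an illustrative toy, not our implementation: the velocity field, guidance loss, step counts, and learning rate below are stand-ins for the pretrained rectified flow model and our actual guidance objectives.

```python
import numpy as np

def guided_rf_sample(x0, velocity, loss_grad, steps=50, guide_every=5, guide_lr=0.1):
    """Euler sampling of a rectified flow, periodically nudged by a guidance gradient.

    x0         : initial latent (noise)
    velocity   : velocity(x, t) -> learned flow field (stand-in here)
    loss_grad  : loss_grad(x) -> gradient of a differentiable guidance loss w.r.t. x
    """
    x = x0.copy()
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + velocity(x, t) * dt          # standard rectified-flow Euler step
        if (i + 1) % guide_every == 0:
            x = x - guide_lr * loss_grad(x)  # periodic guidance: gradient step on the loss
    return x

# Toy example: the flow pulls the latent toward 2.0, the guidance loss toward 3.0,
# so the final sample lands between the two.
x0 = np.zeros(1)
out = guided_rf_sample(x0, lambda x, t: 2.0 - x, lambda x: 2.0 * (x - 3.0))
```

Because the guidance is expressed purely as a gradient of a differentiable loss, the pretrained model's parameters are never touched; only the sampling trajectory is steered.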

During the denoising process, we apply two complementary guidance strategies: (i) a part-aware appearance loss that co-segments the input and appearance objects to align textures and geometry across corresponding parts, and (ii) a self-similarity loss that enforces internal consistency in local regions when appearance cues are derived from text or images. This unified design allows our framework to seamlessly transfer material and structural details from diverse modalities such as 3D meshes, images, or natural language, onto new geometries. As a result, it bridges the gap between geometric and perceptual style transfer in 3D, producing coherent, detailed assets robust to large geometric variations. Its training-free formulation and structured latent design make it efficient, versatile, and easily extendable to new forms of guidance for controllable 3D generation.
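As a toy illustration of the two guidance objectives above, the sketch below computes (i) a part-aware loss that matches mean features across co-segmented parts sharing a label, and (ii) a self-similarity loss that compares internal cosine-similarity structure. The function names, feature shapes, and exact loss forms are simplified assumptions, not our actual formulation.

```python
import numpy as np

def self_similarity(feats):
    """Cosine self-similarity matrix of per-region feature vectors (rows)."""
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    return f @ f.T

def self_similarity_loss(feats_gen, feats_ref):
    """MSE between the self-similarity matrices of generated and reference features."""
    d = self_similarity(feats_gen) - self_similarity(feats_ref)
    return float(np.mean(d ** 2))

def part_aware_loss(feats_gen, feats_ref, parts_gen, parts_ref):
    """Match mean features of co-segmented parts that share a part label."""
    shared = set(parts_gen.tolist()) & set(parts_ref.tolist())
    loss = 0.0
    for p in shared:
        mu_g = feats_gen[parts_gen == p].mean(axis=0)   # mean feature of part p (generated)
        mu_r = feats_ref[parts_ref == p].mean(axis=0)   # mean feature of part p (appearance)
        loss += float(np.sum((mu_g - mu_r) ** 2))
    return loss / max(len(shared), 1)

rng = np.random.default_rng(0)
a = rng.standard_normal((5, 8))     # 5 regions, 8-dim features
b = rng.standard_normal((5, 8))
parts = np.array([0, 0, 1, 1, 2])   # hypothetical co-segmentation labels
```

Both losses are differentiable in the features, so in the full method their gradients can flow back to the latent during guided sampling.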

Concurrent Works

Several concurrent works explore related directions in 3D appearance transfer and generation. Check them out!

Citation

If you find our work useful, please consider citing:

@inproceedings{sayandsarkar_2025_guideflow3d,
  author    = {Deb Sarkar, Sayan and Stekovic, Sinisa and Lepetit, Vincent and Armeni, Iro},
  title     = {GuideFlow3D: Optimization-Guided Rectified Flow For 3D Appearance Transfer},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2025},
}

Ethical Considerations
Alongside the exciting possibilities, there are considerable risks that should be addressed, including manipulation and deepfakes for spreading misinformation, concerns regarding intellectual property, and bias amplification. Ethical usage of our method includes disclosing when 3D content is generated using AI, respecting and attributing source-content licenses, and building systems for understanding biases.

Acknowledgements
We thank Nicolas Dufour and Arijit Ghosh from Imagine Labs for helpful discussions on universal guidance, and Liyuan Zhu and Jianhao Zheng from Gradient Spaces Research Group for help with conducting the user study. Website template inspired by TRELLIS.

Contact · Apache License 2.0 © 2025