NeuralFur Teaser From multi-view captures, our method NeuralFur reconstructs detailed animal geometry with a mesh-based body and strand-based fur. The reconstructions can be integrated into computer graphics frameworks and rendered with artist-defined colors.

Abstract

Reconstructing realistic animal fur geometry from images is a challenging task due to the fine-scale details, self-occlusion, and view-dependent appearance of fur. Moreover, in contrast to human hairstyle reconstruction, there are no datasets that could be leveraged to learn a fur prior for different animals.

In this work, we present the first multi-view-based method for high-fidelity 3D fur modeling of animals using a strand-based representation, leveraging the general knowledge of a vision-language model (VLM). Given calibrated multi-view RGB images, we first reconstruct a coarse surface geometry using traditional multi-view stereo techniques. We then use a visual question answering (VQA) system to retrieve information about the realistic length and structure of the fur for each part of the body. We use this knowledge to construct the animal’s furless geometry and grow strands atop it. The fur reconstruction is supervised with both geometric and photometric losses computed from the multi-view images. To mitigate orientation ambiguities stemming from the Gabor filters applied to the input images, we additionally use the VQA system to guide the strands' growth direction and their relation to the gravity vector, which we incorporate as a loss.
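As a rough illustration of the gravity-guidance idea, a direction loss of this kind could be sketched as follows. This is a minimal numpy sketch, not the paper's actual formulation: the `target_cos` value stands in for a hypothetical VQA-derived answer (e.g. "the fur hangs downward") mapped to a target cosine between strand segments and the gravity vector.

```python
import numpy as np

def gravity_alignment_loss(strand_points,
                           gravity=np.array([0.0, -1.0, 0.0]),
                           target_cos=1.0):
    """Penalize deviation of strand segment directions from a target
    alignment with gravity.

    strand_points: (S, P, 3) array of S strands with P points each.
    target_cos: hypothetical VQA-derived target cosine (assumption).
    """
    # Unit direction of each strand segment: (S, P-1, 3)
    seg = strand_points[:, 1:] - strand_points[:, :-1]
    seg = seg / (np.linalg.norm(seg, axis=-1, keepdims=True) + 1e-8)
    # Cosine between each segment and the (unit) gravity vector
    cos = seg @ gravity
    return float(np.mean((cos - target_cos) ** 2))
```

A strand hanging straight down yields zero loss for `target_cos=1.0`; because the loss is quadratic in the cosine rather than the raw direction, it is insensitive to the 180° flip ambiguity of Gabor-filter orientations only up to the sign of `target_cos`, which is exactly where the VQA guidance would disambiguate.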

With this new scheme of using a VQA model to guide 3D reconstruction from multi-view inputs, we demonstrate generalization across a variety of animals with different fur types.

Video Presentation

Main idea

Our method, NeuralFur, consists of two stages: (i) extracting a furless mesh geometry by shrinking the full mesh reconstructed from multi-view images, and (ii) reconstructing strand-based fur with roots initialized on the furless mesh. Both stages leverage external knowledge from a VLM: based on the depicted animal, the VLM provides information about fur thickness, length, and orientation. This guidance is then used to train a neural fur strand representation (an MLP), which can be queried at any mesh surface location to generate fur strands suitable for rendering.
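The query interface of such a neural strand representation could look like the following sketch. Everything here is an assumption for illustration: the network is a toy two-layer MLP with random (untrained) weights, and the `length` argument stands in for the hypothetical per-body-part fur length suggested by the VQA system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy strand field: maps a root location on the furless mesh to a fixed
# number of 3D offsets forming one strand polyline. Weights are random
# here; in the paper they would be trained with geometric and
# photometric losses (this is a structural sketch, not the real model).
N_POINTS = 16
W1 = rng.normal(0.0, 0.1, (3, 64))
W2 = rng.normal(0.0, 0.1, (64, N_POINTS * 3))

def query_strand(root, length=1.0):
    """Return an (N_POINTS, 3) strand grown from `root`, scaled by a
    hypothetical VQA-suggested fur `length` for that body part."""
    h = np.tanh(root @ W1)                    # hidden features
    offsets = (h @ W2).reshape(N_POINTS, 3)   # per-point local offsets
    # Accumulate offsets along the strand so consecutive points connect
    return root + length * np.cumsum(offsets, axis=0)
```

Because the strand is a cumulative sum of offsets scaled by `length`, editing the per-part length value rescales the whole strand about its root, which matches the kind of per-body-part control described in the fur editing application below.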

Defurring Results

Image
NeuS Geometry
Furless Geometry
Overlay

Comparison

Qualitative results of our reconstruction method compared with existing baselines: SMAL, NeuS, GenZoo, and GaussianHaircut.

Image
SMAL
NeuS
GenZoo
GaussianHaircut
NeuralFur (Ours)

Comparison with GaussianHaircut

GaussianHaircut
NeuralFur (Ours)

Applications

Physics simulation

The fur reconstructed by NeuralFur is compatible with physics-based simulation and can be used to synthesize effects such as wind motion.

Retargeting

The fur reconstructed by NeuralFur can be retargeted to different poses using the SMAL model.

Fur Editing

The fur reconstructed by NeuralFur is editable: the reconstruction controls how much fur grows on each annotated body part.

Acknowledgements and Disclosure

Vanessa Sklyarova is supported by the Max Planck ETH Center for Learning Systems. Berna Kabadayi is supported by the International Max Planck Research School for Intelligent Systems (IMPRS-IS). Justus Thies is supported by the ERC Starting Grant 101162081 "LeMo" and the DFG Excellence Strategy (EXC-3057). The authors would like to thank Peter Kulits and Silvia Zuffi for discussions on the project, Tomasz Niewiadomski for providing results on GenZoo, and Benjamin Pellkofer for IT support.


MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB is a consultant for Meshcapade, his research in this project was performed solely at, and funded solely by, the Max Planck Society.

BibTeX

@article{sklyarova_kabadayi_2025neuralfur,
    title={NeuralFur: Animal Fur Reconstruction from Multi-view Images},
    author={Sklyarova, Vanessa and Kabadayi, Berna and Yiannakidis, Anastasios and Becherini, Giorgio and Black, Michael J. and Thies, Justus},
    journal={ArXiv},
    month={January}, 
    year={2026} 
}