Depth estimation and 3D reconstruction are core, extensively studied topics in computer vision. Starting from rigid objects with relatively simple geometry, such as vehicles, research has expanded to general objects, including challenging deformable subjects such as humans and animals. For animals in particular, however, most existing models are trained on datasets that lack metric scale, which would otherwise allow image-only models to be validated against real-world distances. To address this limitation, we present WildDepth, a multimodal dataset and benchmark suite for depth estimation, behavior detection, and 3D reconstruction, covering diverse animal categories from domestic to wild environments with synchronized RGB and LiDAR. Experimental results show that multimodal data reduces depth-estimation RMSE by up to 10%, while RGB–LiDAR fusion improves 3D reconstruction fidelity by 12% in Chamfer distance. By releasing WildDepth and its benchmarks, we aim to foster robust multimodal perception systems that generalize across domains.
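For concreteness, the two evaluation metrics quoted above can be sketched in a few lines of Python. This is a minimal illustration assuming NumPy arrays and SciPy's k-d tree, with one common formulation of Chamfer distance (mean nearest-neighbour distance in each direction, summed); the function names are illustrative and not part of the WildDepth toolkit.

import numpy as np
from scipy.spatial import cKDTree

def depth_rmse(pred, gt, valid=None):
    """Root-mean-square error between predicted and ground-truth depth maps."""
    if valid is None:
        valid = gt > 0  # LiDAR ground truth is sparse; evaluate only where it exists
    diff = pred[valid] - gt[valid]
    return float(np.sqrt(np.mean(diff ** 2)))

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between (N, 3) and (M, 3) point clouds."""
    d_ab, _ = cKDTree(b).query(a)  # nearest neighbour in b for each point of a
    d_ba, _ = cKDTree(a).query(b)  # nearest neighbour in a for each point of b
    return float(d_ab.mean() + d_ba.mean())

Masking the RMSE to pixels with valid LiDAR returns mirrors standard practice when the ground-truth depth is sparse.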
Browse synchronized RGB and LiDAR recordings, 3D point clouds, and depth maps interactively in our live data viewer.
Open Data Viewer

@misc{wilddepth2025,
title = {WildDepth: A Multimodal Dataset for 3D Wildlife Perception and Depth Estimation},
author = {Aamir, Muhammad and Muramatsu, Naoya and Shin, Sangyun and Wijers, Matthew and Jhong, Jiaxing and Hou, Xinyu and Patel, Amir and Markham, Andrew},
year = {2025},
note = {Preprint}
}