VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction

Jiaqi Lin1*   Zhihao Li2*   Xiao Tang2   Jianzhuang Liu3   Shiyong Liu2   Jiayue Liu1  
Yangdi Lu2   Xiaofei Wu2   Songcen Xu2   Youliang Yan2   Wenming Yang1†
1Tsinghua University     2Huawei Noah's Ark Lab     3Shenzhen Institute of Advanced Technology
linjq22@mails.tsinghua.edu.cn   zhihao.li@huawei.com   yang.wenming@sz.tsinghua.edu.cn
* Equal contribution     † Corresponding author
CVPR 2024

Abstract

Existing NeRF-based methods for large scene reconstruction often have limitations in visual quality and rendering speed. While the recent 3D Gaussian Splatting works well on small-scale and object-centric scenes, scaling it up to large scenes poses challenges due to limited video memory, long optimization time, and noticeable appearance variations. To address these challenges, we present VastGaussian, the first method for high-quality reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting. We propose a progressive partitioning strategy to divide a large scene into multiple cells, where the training cameras and point cloud are properly distributed with an airspace-aware visibility criterion. These cells are merged into a complete scene after parallel optimization. We also introduce decoupled appearance modeling into the optimization process to reduce appearance variations in the rendered images. Our approach outperforms existing NeRF-based methods and achieves state-of-the-art results on multiple large scene datasets, enabling fast optimization and high-fidelity real-time rendering.
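As a rough illustration of the progressive partitioning step described above, the sketch below divides a scene's ground plane into a grid of cells and assigns training cameras to each cell with a small boundary expansion. This is a hypothetical simplification, not the authors' implementation: the class names, the grid layout, and the margin-based assignment are assumptions, and the margin merely stands in for the paper's airspace-aware visibility criterion, which instead keeps a camera if it sees enough of the cell's airspace.

```python
# Hypothetical sketch of grid-based scene partitioning (names and the
# margin-based camera assignment are illustrative assumptions; the paper
# uses an airspace-aware visibility criterion instead of a fixed margin).
from dataclasses import dataclass, field

@dataclass
class Camera:
    x: float  # ground-plane position of the camera center
    y: float

@dataclass
class Cell:
    x_range: tuple          # (lo, hi) bounds on the ground plane
    y_range: tuple
    cameras: list = field(default_factory=list)

def partition(cameras, n_x=2, n_y=2, margin=0.1):
    """Split the cameras' bounding box into an n_x-by-n_y grid of cells and
    assign each camera to every cell whose slightly expanded bounds contain
    it, so neighboring cells share boundary cameras before being optimized
    in parallel and merged."""
    xs = [c.x for c in cameras]
    ys = [c.y for c in cameras]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    dx = (x1 - x0) / n_x
    dy = (y1 - y0) / n_y
    cells = []
    for i in range(n_x):
        for j in range(n_y):
            lo_x = x0 + i * dx - margin * dx
            hi_x = x0 + (i + 1) * dx + margin * dx
            lo_y = y0 + j * dy - margin * dy
            hi_y = y0 + (j + 1) * dy + margin * dy
            cell = Cell((lo_x, hi_x), (lo_y, hi_y))
            cell.cameras = [c for c in cameras
                            if lo_x <= c.x <= hi_x and lo_y <= c.y <= hi_y]
            cells.append(cell)
    return cells
```

In the full method, each cell's point cloud and cameras are refined with the visibility criterion, the cells are optimized independently (fitting within per-GPU memory limits), and the resulting 3D Gaussians are merged back into one seamless scene.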

Comparison With SOTA

Architecture


BibTeX

@inproceedings{lin2024vastgaussian,
  title     = {VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction},
  author    = {Lin, Jiaqi and Li, Zhihao and Tang, Xiao and Liu, Jianzhuang and Liu, Shiyong and Liu, Jiayue and Lu, Yangdi and Wu, Xiaofei and Xu, Songcen and Yan, Youliang and Yang, Wenming},
  booktitle = {CVPR},
  year      = {2024}
}