3D point clouds captured directly from objects by sensors are often incomplete due to self-occlusion. Conventional methods for completing such partial point clouds rely on manually organized training sets and are usually limited to the object categories seen during training. In this work, we propose a test-time framework that completes partial point clouds across unseen categories without any training. Leveraging point rendering via Gaussian Splatting, we develop Partial Gaussian Initialization, Zero-shot Fractal Completion, and Point Cloud Extraction, techniques that exploit priors from pre-trained 2D diffusion models to infer missing regions and extract uniform completed point clouds. Experimental results on both synthetic and real-world scanned point clouds demonstrate that our approach outperforms existing methods in completing a variety of objects.
Illustration of our framework. In Partial Gaussian Initialization (PGI), Reference Viewpoint Estimation estimates a camera pose Vp under which Pin can be most completely observed. We initialize 3D Gaussians Gin from Pin and render the reference image Iin under Vp. In Zero-shot Fractal Completion (ZFC), 3D Gaussians Gm are initialized from a noisy point set PN and optimized under view-dependent guidance from the diffusion model fZ of Zero 1-to-3, conditioned on a randomly chosen camera pose Vi; a Preservation Constraint computed with respect to Vp is also imposed. Gin is mixed with Gm to form Gall, which introduces the partial geometry. After ZFC, we propose Point Cloud Extraction (PCE) to extract surface points and convert them into a uniform output with Grid Pulling.
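As a concrete illustration of Reference Viewpoint Estimation in PGI, the numpy sketch below scores candidate camera positions on a sphere with a coarse z-buffer visibility test and keeps the pose under which the most points of Pin remain visible. This is a minimal sketch under our own assumptions: the Fibonacci sampling scheme, the grid resolution, and the helper names (look_at, visible_count, estimate_reference_view) are illustrative, not the paper's implementation.

```python
import numpy as np

def look_at(cam_pos, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """World-to-camera rotation for a camera at cam_pos looking at target."""
    fwd = target - cam_pos
    fwd /= np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right /= np.linalg.norm(right)
    return np.stack([right, np.cross(right, fwd), fwd])  # rows: camera axes

def visible_count(points, cam_pos, res=64, eps=1e-2):
    """Count points that survive a coarse z-buffer test from cam_pos."""
    pc = (points - cam_pos) @ look_at(cam_pos).T  # to camera frame
    z = pc[:, 2]
    valid = z > 1e-6                              # keep points in front of camera
    if not np.any(valid):
        return 0
    uv, z = pc[valid, :2] / z[valid, None], z[valid]   # pinhole projection
    uv = (uv - uv.min(0)) / (np.ptp(uv, 0) + 1e-9)     # normalize to [0, 1]
    pix = np.clip((uv * (res - 1)).astype(int), 0, res - 1)
    flat = pix[:, 0] * res + pix[:, 1]
    zbuf = np.full(res * res, np.inf)
    np.minimum.at(zbuf, flat, z)                  # nearest depth per pixel
    return int(np.sum(z <= zbuf[flat] + eps))     # points at/near the front

def estimate_reference_view(points, n_views=64, radius=2.0):
    """Pick the camera position whose view of the partial cloud is most complete."""
    golden = (1 + 5 ** 0.5) / 2
    best, best_pos = -1, None
    for i in range(n_views):                      # Fibonacci sphere of candidates
        phi = 2 * np.pi * i / golden
        cz = 1 - 2 * (i + 0.5) / n_views
        s = np.sqrt(1 - cz * cz)
        pos = radius * np.array([s * np.cos(phi), s * np.sin(phi), cz])
        count = visible_count(points, pos)
        if count > best:
            best, best_pos = count, pos
    return best_pos                               # V_p = look_at pose from here
```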
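The ZFC stage can be read as alternating two losses: diffusion guidance on renderings from random poses Vi, and the Preservation Constraint on the reference view Vp. The PyTorch sketch below shows only that loop structure; the renderer and the Zero 1-to-3 guidance are stubs (a real pipeline needs a differentiable 3DGS rasterizer and the pretrained fZ), and every hyperparameter here is a placeholder, not a value from the paper.

```python
import math
import torch
import torch.nn.functional as F

def render(xyz, pose, res=64):
    # Toy differentiable stand-in for the Gaussian Splatting rasterizer:
    # it merely pools projected centers so gradients reach the Gaussians.
    cam = xyz @ pose[:3, :3].T + pose[:3, 3]
    return torch.tanh(cam.mean(0)).view(3, 1, 1).expand(3, res, res)

def sds_guidance(f_z, img, ref_img, rel_pose, t):
    # Stub for the Zero 1-to-3 score-distillation term: a real version adds
    # noise at timestep t, has f_z predict it conditioned on the reference
    # image and the relative pose, and backpropagates (eps_hat - eps).
    return F.mse_loss(img, ref_img)

def random_pose(radius=2.0):
    # Random azimuthal camera pose V_i around the object (illustrative).
    a = torch.rand(()).item() * 2 * math.pi
    pose = torch.eye(4)
    pose[:3, :3] = torch.tensor([[math.cos(a), -math.sin(a), 0.0],
                                 [math.sin(a),  math.cos(a), 0.0],
                                 [0.0,          0.0,         1.0]])
    pose[2, 3] = radius
    return pose

g_in = torch.randn(1024, 3)                     # frozen Gaussians from P_in
g_m = torch.randn(2048, 3, requires_grad=True)  # G_m, seeded from noisy P_N
v_p = torch.eye(4)                              # reference pose V_p
i_in = render(g_in, v_p).detach()               # reference image I_in
opt = torch.optim.Adam([g_m], lr=1e-2)

for step in range(200):
    v_i = random_pose()
    g_all = torch.cat([g_in, g_m])              # mix G_in into G_all
    loss = sds_guidance(None, render(g_all, v_i), i_in, v_i, step)
    loss = loss + 1.0 * F.mse_loss(render(g_all, v_p), i_in)  # Preservation Constraint
    opt.zero_grad(); loss.backward(); opt.step()
```

Only Gm receives gradients; Gin stays frozen so the observed partial geometry is carried through Gall unchanged.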
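For PCE, one plausible reading of Grid Pulling is to seed a regular grid over the completed shape and pull each grid point onto the nearest surface, which resamples the surface roughly uniformly. The sketch below implements that reading with a scipy KD-tree; the function name grid_pulling and the res/steps/tau values are our illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def grid_pulling(surface_pts, res=32, steps=5, step_size=0.5, tau=0.02):
    # Seed a regular grid over the bounding box, then iteratively pull each
    # grid point toward its nearest surface point; points that converge
    # close to the surface form a roughly uniform resampling of it.
    lo, hi = surface_pts.min(0), surface_pts.max(0)
    axes = [np.linspace(l, h, res) for l, h in zip(lo, hi)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)
    tree = cKDTree(surface_pts)
    for _ in range(steps):
        _, idx = tree.query(grid)
        grid += step_size * (surface_pts[idx] - grid)  # pull toward surface
    dist, _ = tree.query(grid)
    return grid[dist < tau]                            # keep near-surface points

# Example: Gaussian centers extracted from G_all stand in for surface points.
centers = np.random.rand(4096, 3).astype(np.float32)
uniform_pts = grid_pulling(centers)
```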
@article{huang2024zero,
  title={Zero-shot Point Cloud Completion Via 2D Priors},
  author={Huang, Tianxin and Yan, Zhiwen and Zhao, Yuyang and Lee, Gim Hee},
  journal={arXiv preprint arXiv:2404.06814},
  year={2024}
}