Learning Self-supervised Part-aware 3D Hybrid Representations of 2D Gaussians and Superquadrics

National University of Defense Technology
ICCV 2025

PartGS enables both block-level and point-level part-aware reconstruction, preserving part decomposition while maintaining reconstruction precision.

PartGS is a self-supervised part-aware reconstruction framework that integrates 2D Gaussians and superquadrics to parse objects and scenes into an interpretable decomposition, leveraging multi-view image inputs to uncover 3D structural information.

Abstract

Low-level 3D representations, such as point clouds, meshes, NeRFs and 3D Gaussians, are commonly used for modeling 3D objects and scenes. However, cognitive studies indicate that human perception operates at higher levels and interprets 3D environments by decomposing them into meaningful structural parts, rather than low-level elements like points or voxels. Structured geometric decomposition enhances scene interpretability and facilitates downstream tasks requiring component-level manipulation.

In this work, we introduce PartGS, a self-supervised part-aware reconstruction framework that integrates 2D Gaussians and superquadrics to parse objects and scenes into an interpretable decomposition, leveraging multi-view image inputs to uncover 3D structural information. Our method jointly optimizes superquadric meshes and Gaussians by coupling their parameters within a hybrid representation. On one hand, superquadrics can represent a wide range of shape primitives, enabling flexible and meaningful decompositions. On the other hand, 2D Gaussians capture fine-grained texture and geometry, ensuring high-fidelity appearance and surface reconstruction.
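Superquadrics are attractive as optimizable part primitives because they have a compact analytic form. As a hedged illustration only (this is not the paper's code; the function names are ours, and the axis-aligned, untransformed setup omits each primitive's learned rotation and translation), the standard superquadric inside-outside function and its angular surface parametrization can be sketched as:

```python
import numpy as np

def superquadric_F(points, scale, eps1, eps2):
    """Inside-outside function of an axis-aligned superquadric.

    F < 1 inside the surface, F == 1 on it, F > 1 outside.
    points: (N, 3) array; scale: (3,) semi-axis lengths (a1, a2, a3);
    eps1, eps2: shape exponents (1 gives an ellipsoid, -> 0 gives a box).
    """
    x, y, z = (np.abs(points) / scale).T
    return (x ** (2.0 / eps2) + y ** (2.0 / eps2)) ** (eps2 / eps1) + z ** (2.0 / eps1)

def superquadric_surface(scale, eps1, eps2, n=32):
    """Sample surface points via the standard angular parametrization."""
    # Stay slightly away from the poles/seam to avoid degenerate angles.
    eta = np.linspace(-np.pi / 2 + 1e-3, np.pi / 2 - 1e-3, n)   # latitude
    omega = np.linspace(-np.pi + 1e-3, np.pi - 1e-3, n)         # longitude
    eta, omega = [a.ravel() for a in np.meshgrid(eta, omega)]

    def spow(t, e):
        # Signed power: sign(t) * |t|^e, keeps the surface symmetric.
        return np.sign(t) * np.abs(t) ** e

    x = scale[0] * spow(np.cos(eta), eps1) * spow(np.cos(omega), eps2)
    y = scale[1] * spow(np.cos(eta), eps1) * spow(np.sin(omega), eps2)
    z = scale[2] * spow(np.sin(eta), eps1)
    return np.stack([x, y, z], axis=-1)
```

Every point produced by the parametrization satisfies F = 1, which is what makes gradient-based fitting of the scale and shape exponents from multi-view supervision tractable.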

Operating in a self-supervised manner, our approach demonstrates superior performance compared to state-of-the-art methods across extensive experiments on the DTU, ShapeNet, and real-world datasets.

Method Overview


PartGS takes multi-view images as input and learns a parametric hybrid representation of superquadrics and 2D Gaussians. Starting from randomly initialized superquadrics, the representation is gradually optimized during training to obtain a block-level reconstruction. We then relax the constraints on the Gaussians so they can model detailed geometry, yielding a point-level reconstruction. The last column demonstrates one application, physics simulation, where one part is picked out and thrown at the other parts.

Experiments on the DTU and ShapeNet datasets

Qualitative comparisons to state-of-the-art 3D part-aware reconstruction approaches: EMS, MonteBoxFinder, PartNeRF, and DBW. Our method is the only one that provides a reasonable 3D part decomposition while capturing detailed geometry.

Reconstruction

DTU dataset

ShapeNet dataset

Application

This section illustrates three applications of our method that original 3DGS and NeRF-based approaches do not support. First, the part decomposition facilitates easy editing of specific object or scene components, e.g., adding, moving, removing, or scaling parts. Second, our method enables the creation of new high-quality 3D content. Third, it allows easy manipulation of objects within a scene, supporting physics-based simulations.

Video demonstration of the simulation capabilities

BibTeX


@misc{gao2025selfsupervisedlearninghybridpartaware,
  title={Self-supervised Learning of Hybrid Part-aware 3D Representation of 2D Gaussians and Superquadrics},
  author={Zhirui Gao and Renjiao Yi and Yuhang Huang and Wei Chen and Chenyang Zhu and Kai Xu},
  year={2025},
  eprint={2408.10789},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2408.10789},
}