Unsupervised 3D Shape Completion through GAN Inversion


Junzhe Zhang
Xinyi Chen
Zhongang Cai
Liang Pan
Haiyu Zhao

Shuai Yi
Chai Kiat Yeo
Bo Dai
Chen Change Loy

Nanyang Technological University
SenseTime Research
Shanghai AI Laboratory

In CVPR 2021



Abstract

Most 3D shape completion approaches rely heavily on partial-complete shape pairs and learn in a fully supervised manner. Despite their impressive performance on in-domain data, when generalizing to partial shapes in other forms or to real-world partial scans, they often produce unsatisfactory results due to domain gaps. In contrast to previous fully supervised approaches, in this paper we present ShapeInversion, which introduces Generative Adversarial Network (GAN) inversion to shape completion for the first time. ShapeInversion uses a GAN pre-trained on complete shapes and searches for a latent code that gives a complete shape that best reconstructs the given partial input. In this way, ShapeInversion no longer needs paired training data, and is capable of incorporating the rich prior captured in a well-trained generative model. On the ShapeNet benchmark, the proposed ShapeInversion outperforms the SOTA unsupervised method, and is comparable with supervised methods that are learned with paired data. It also demonstrates remarkable generalization ability, giving robust results for real-world scans and partial inputs of various forms and incompleteness levels. Importantly, ShapeInversion naturally enables a series of additional abilities thanks to the involvement of a pre-trained GAN, such as producing multiple valid complete shapes for an ambiguous partial input, as well as shape manipulation and interpolation.


Method Overview

Fig. 1 We formally introduce GAN inversion to 3D shape completion. A latent vector z is fed to the pre-trained generator G to reconstruct a complete shape x_c. A degradation function M then transforms x_c into a partial shape x_p. The supervision signal consists of the Chamfer Distance and the Feature Distance between x_p and the input partial shape x_in. Via gradient descent, ShapeInversion searches for the latent vector z, and finetunes the parameters of G, that best reconstruct the complete shape corresponding to x_in. The key insight is that we address intrinsic challenges arising from the nature of 3D data. Please refer to our paper for details.
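
To make the optimization concrete, below is a minimal sketch of the inversion loop, not the authors' released implementation. It assumes a pre-trained point-cloud generator G that maps a latent code to an (N, 3) complete shape, jointly updates the latent code z and the generator parameters as in Fig. 1, uses a toy nearest-neighbour masking in place of the paper's degradation function M, and omits the Feature Distance term for brevity; the names shape_inversion, chamfer_distance, degrade, and latent_dim are illustrative.

import torch

def chamfer_distance(a, b):
    # Symmetric Chamfer Distance between point sets a (N, 3) and b (M, 3).
    d = torch.cdist(a, b)                        # pairwise distances, shape (N, M)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def degrade(x_complete, x_partial):
    # Toy stand-in for the degradation function M: keep the generated points
    # that lie closest to the partial scan, so x_p mimics the observed region.
    d = torch.cdist(x_complete, x_partial)       # shape (N, M)
    keep = d.min(dim=1).values.argsort()[: x_partial.shape[0]]
    return x_complete[keep]

def shape_inversion(G, x_in, latent_dim=128, steps=200, lr=1e-2):
    # Jointly optimize the latent code z and the generator parameters so that
    # the degraded reconstruction matches the partial input x_in (K, 3).
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z] + list(G.parameters()), lr=lr)
    for _ in range(steps):
        x_c = G(z).squeeze(0)                    # complete shape, assumed (N, 3)
        x_p = degrade(x_c, x_in)                 # masked to mimic the partial view
        loss = chamfer_distance(x_p, x_in)       # the paper also uses a Feature Distance term
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(z).squeeze(0).detach()              # completed shape for x_in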

Results

ShapeInversion demonstrates compelling performance for shape completion in different scenarios. First, on a common benchmark derived from ShapeNet, it outperforms the SOTA unsupervised methods by a significant margin, and is comparable to various supervised methods. Second, our method shows considerable generalization ability and robustness when it comes to real-world scans or variation in partial forms and incompleteness levels, whereas supervised methods exhibit significant performance drops due to domain mismatches. Third, given more extreme incompleteness that causes ambiguity, our method is able to provide multiple valid complete shapes, all of which remain faithful to the visible parts presented in the partial input. Fourth, we qualitatively show that our method achieves novel shape manipulation with plausible results for shape jittering and morphing.

Tab. 1 On a common benchmark derived from ShapeNet, ShapeInversion outperforms the SOTA unsupervised methods by a significant margin, and is comparable to various supervised methods.


Fig. 2 While existing methods are biased towards a certain form of incomplete shapes, our method generalizes well across various domains of partial forms. In-domain results are in dark green, whereas out-of-domain ones are in purple.


Fig. 3 ShapeInversion gives robust results on real-world partial scans.


Fig. 4 ShapeInversion can give multiple valid outputs when a higher incompleteness level of the partial shape introduces ambiguity.


Fig. 5 ShapeInversion enables manipulation of complete shapes: (a) changing an object into other plausible shapes of different geometries; (b) producing a smooth and plausible transition from one shape to another.
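
For illustration, the jittering and morphing in Fig. 5 can be viewed as simple operations on inverted latent codes. The sketch below is a hypothetical example reusing the assumed generator G from the sketch above (z, z_a, z_b denote inverted latent codes), not the authors' code: jittering perturbs a code with small noise, and morphing decodes linear interpolations between two codes.

import torch

def jitter(G, z, sigma=0.1, n=5):
    # Decode n plausible variants by perturbing the inverted latent code.
    return [G(z + sigma * torch.randn_like(z)) for _ in range(n)]

def morph(G, z_a, z_b, steps=8):
    # Decode a smooth transition from shape A to shape B in latent space.
    return [G((1 - t) * z_a + t * z_b) for t in torch.linspace(0, 1, steps)]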



Paper and Code




[Paper]  

[Code]



Citation

@inproceedings{zhang2021unsupervised,
    title = {Unsupervised 3D Shape Completion through GAN Inversion},
    author = {Zhang, Junzhe and Chen, Xinyi and Cai, Zhongang and Pan, Liang and Zhao, Haiyu and Yi, Shuai and Yeo, Chai Kiat and Dai, Bo and Loy, Chen Change},
    booktitle = {CVPR},
    year = {2021}
}