Generative Occupancy Fields for 3D Surface-Aware Image Synthesis

Xudong Xu         Xingang Pan         Dahua Lin         Bo Dai
Advances in Neural Information Processing Systems (NeurIPS) 2021

Abstract


The advent of generative radiance fields has significantly advanced 3D-aware image synthesis. The cumulative rendering process in radiance fields makes these generative models much easier to train, since gradients are distributed over the entire volume, but it leads to diffused object surfaces. Meanwhile, compared to radiance fields, occupancy representations inherently ensure deterministic surfaces. However, if occupancy representations are applied directly to generative models, they receive only sparse gradients located on object surfaces during training and eventually suffer from convergence problems. In this paper, we propose Generative Occupancy Fields (GOF), a novel model based on generative radiance fields that can learn compact object surfaces without impeding training convergence. The key insight of GOF is a dedicated transition from the cumulative rendering of radiance fields to rendering with only the surface points as the learned surface becomes increasingly accurate. In this way, GOF combines the merits of the two representations in a unified framework. In practice, this training-time transition, which starts from radiance fields and marches toward occupancy representations, is achieved by gradually shrinking the sampling region in the rendering process from the entire volume to a minimal neighboring region around the surface. Through comprehensive experiments on multiple datasets, we demonstrate that GOF can synthesize high-quality images with 3D consistency and simultaneously learn compact and smooth object surfaces.
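For intuition, the sketch below (PyTorch-style Python; the function and variable names are ours, not taken from the released code) shows how cumulative rendering can be driven by occupancy values: each sample's predicted occupancy in [0, 1] is used directly as its alpha in standard alpha compositing, in place of the density-derived alpha of radiance fields.

import torch

def composite_with_occupancy(occupancy, colors):
    """Alpha-composite per-ray samples, using the predicted occupancy
    in [0, 1] directly as the alpha of each sample.

    occupancy: (num_rays, num_samples)      occupancy at each sample
    colors:    (num_rays, num_samples, 3)   RGB at each sample
    returns:   (num_rays, 3)                rendered pixel colors
    """
    alpha = occupancy
    # Transmittance T_i = prod_{j<i} (1 - alpha_j), via an exclusive
    # cumulative product along the sample dimension.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1,
    )[:, :-1]
    weights = alpha * trans                      # w_i = alpha_i * T_i
    return (weights.unsqueeze(-1) * colors).sum(dim=1)

Early in training these weights are spread along the whole ray, which is what distributes gradients over the volume; as the occupancy sharpens into a 0/1 step at the object boundary, the weights collapse onto the samples at the surface.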

Demo

GOF can synthesize high-quality images with 3D consistency and simultaneously learn compact and smooth object surfaces.





Method Overview




Our method, GOF, learns 3D models from only a collection of unposed images. In contrast to generative radiance fields (GRAFs), GOF predicts occupancy values directly instead of volume densities. During training, the sampling interval along each ray shrinks gradually as training proceeds. At inference time, we can either use cumulative rendering or render with only the surface points. A sketch of the shrinking-interval sampling is given below.
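The following is a minimal sketch of the interval-shrinking step (PyTorch-style Python; the function name, the linear annealing schedule, and the min_half_width parameter are illustrative assumptions rather than the exact schedule used in the paper). Given the current surface estimate along each ray, e.g. the first point whose predicted occupancy crosses 0.5, the sampling window is annealed from the full [near, far] range down to a narrow neighborhood around that estimate.

import torch

def shrinking_sample_interval(surface_depth, near, far, progress, min_half_width=0.02):
    """Anneal the per-ray sampling interval from the full [near, far]
    range to a narrow window centered on the current surface estimate.

    surface_depth:  (num_rays,) depth of the estimated surface point
    near, far:      scalars defining the full rendering range
    progress:       training progress in [0, 1]; 0 = start, 1 = end
    min_half_width: half-width of the final window around the surface
    returns:        per-ray interval bounds (t_near, t_far)
    """
    # Linearly interpolate the window half-width from the full range
    # down to a minimal neighborhood around the predicted surface.
    full_half_width = 0.5 * (far - near)
    half_width = (1.0 - progress) * full_half_width + progress * min_half_width

    t_near = torch.clamp(surface_depth - half_width, min=near)
    t_far = torch.clamp(surface_depth + half_width, max=far)
    return t_near, t_far

New samples are then drawn inside [t_near, t_far] and composited as in the earlier sketch; as the window collapses, rendering effectively relies only on points near the surface, realizing the transition from radiance-field-style cumulative rendering to surface rendering.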

Qualitative Comparison




Materials




Citation

@inproceedings{xu2021generative,
  title={Generative Occupancy Fields for 3D Surface-Aware Image Synthesis},
  author={Xu, Xudong and Pan, Xingang and Lin, Dahua and Dai, Bo},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2021}
}


Related Works


Katja Schwarz, Yiyi Liao, Michael Niemeyer, Andreas Geiger. GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis. NeurIPS 2020.
Eric Chan*, Marco Monteiro*, Petr Kellnhofer, Jiajun Wu, Gordon Wetzstein. pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis. CVPR 2021.
Xingang Pan, Xudong Xu, Chen Change Loy, Christian Theobalt, Bo Dai. A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis. NeurIPS 2021.
Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, Wenping Wang. NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction. NeurIPS 2021.
Lior Yariv, Jiatao Gu, Yoni Kasten, Yaron Lipman. Volume Rendering of Neural Implicit Surfaces. arXiv 2021.