NeSF: Neural Semantic Fields for Generalizable Semantic Segmentation of 3D Scenes

1Google Research, 2University of Toronto
*Denotes equal contribution

Abstract

We present NeSF, a method for producing 3D semantic fields from posed RGB images alone. In place of classical 3D representations, our method builds on recent work in implicit neural scene representations, wherein 3D structure is captured by point-wise functions. We leverage this methodology to recover 3D density fields, upon which we then train a 3D semantic segmentation model supervised by posed 2D semantic maps. Despite being trained on 2D signals alone, our method is able to generate 3D-consistent semantic maps from novel camera poses and can be queried at arbitrary 3D points. Notably, NeSF is compatible with any method producing a density field, and its accuracy improves as the quality of the density field improves. Our empirical analysis demonstrates quality comparable to competitive 2D and 3D semantic segmentation baselines on complex, realistically rendered synthetic scenes. Our method is the first to offer truly dense 3D scene segmentations requiring only 2D supervision for training, and it requires no semantic input for inference on novel scenes.

Overview


Given a pre-trained NeRF model, we sample its density field on a regular 3D grid to obtain the scene representation. This grid is converted into a semantic-feature grid by a fully convolutional volume-to-volume network (a 3D UNet), allowing for geometric reasoning. The semantic-feature grid is in turn translated into semantic probability distributions using the volumetric rendering equation. Note that the semantic 3D UNet is trained across all scenes in the TrainScenes set, though this is not explicitly depicted for the sake of simplicity. Additionally, note that NeSF is trained solely with 2D supervisory signals and that no segmentation maps are provided at test time.
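The last step of the pipeline above, rendering per-point semantic predictions into a 2D probability map, follows the standard NeRF volumetric rendering equation, with class distributions composited in place of colors. The sketch below illustrates this for a single ray; it is a minimal NumPy illustration, not the paper's implementation, and the function name and the choice to softmax per point before compositing are assumptions for clarity.

```python
import numpy as np

def render_semantics(sigmas, deltas, logits):
    """Volumetric rendering of per-point semantic logits along one ray.

    sigmas: (N,) densities sampled from the pre-trained NeRF
    deltas: (N,) distances between adjacent samples along the ray
    logits: (N, C) per-point class logits from the semantic-feature grid
    Returns a (C,) class probability distribution for the ray.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)       # per-sample opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)      # accumulated transmittance
    trans = np.concatenate([[1.0], trans[:-1]])   # T_i: transmittance *before* sample i
    weights = alphas * trans                      # standard NeRF rendering weights

    # Per-point softmax over classes, then alpha-composite along the ray
    # (an assumption of this sketch; compositing raw logits is also possible).
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    ray_probs = (weights[:, None] * probs).sum(axis=0)
    return ray_probs / max(ray_probs.sum(), 1e-10)  # renormalize to a distribution
```

Because only the density field (frozen, from NeRF) and these rendered 2D maps enter the loss, the 3D UNet can be supervised entirely by posed 2D semantic maps, which is what lets NeSF train without any 3D labels.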


BibTeX

@misc{vora2021nesf,
      title={NeSF: Neural Semantic Fields for Generalizable Semantic Segmentation of 3D Scenes}, 
      author={Suhani Vora and Noha Radwan and Klaus Greff and Henning Meyer and Kyle Genova and Mehdi S. M. Sajjadi and Etienne Pot and Andrea Tagliasacchi and Daniel Duckworth},
      year={2021},
      eprint={2111.13260},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}