VR-NeRF brings high-fidelity walkable spaces to real-time virtual reality. Our “Eyeful Tower” multi-camera rig captures spaces at an image resolution and dynamic range that approach the limits of the human visual system.

Please note: These videos are encoded using HEVC with 10-bit HDR colors and are best viewed on an HDR-capable display, e.g., recent Apple devices.

Abstract

We present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields. To this end, we designed and built a custom multi-camera rig to densely capture walkable spaces in high fidelity and with multi-view high dynamic range images in unprecedented quality and density. We extend instant neural graphics primitives with a novel perceptual color space for learning accurate HDR appearance, and an efficient mip-mapping mechanism for level-of-detail rendering with anti-aliasing, while carefully optimizing the trade-off between quality and speed. Our multi-GPU renderer enables high-fidelity volume rendering of our neural radiance field model at the full VR resolution of dual 2K×2K at 36 Hz on our custom demo machine. We demonstrate the quality of our results on our challenging high-fidelity datasets, and compare our method and datasets to existing baselines.
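The paper's central training change is learning HDR appearance in a perceptual color space rather than in linear RGB. The sketch below illustrates the general idea with the SMPTE ST 2084 perceptual quantizer (PQ) transfer function as a stand-in: HDR radiance is mapped to a perceptually uniform signal before the photometric loss is computed. The exact transform, constants, and loss used by VR-NeRF may differ, and the names `pq_encode` and `hdr_photometric_loss` are illustrative, not the authors' API.

```python
import torch

# SMPTE ST 2084 (PQ) constants; VR-NeRF's actual perceptual transform may differ.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32


def pq_encode(linear_rgb: torch.Tensor, peak_nits: float = 10000.0) -> torch.Tensor:
    """Map linear HDR radiance (in nits) to a perceptually uniform [0, 1] signal."""
    y = (linear_rgb / peak_nits).clamp(min=0.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2


def hdr_photometric_loss(pred_linear: torch.Tensor, gt_linear: torch.Tensor) -> torch.Tensor:
    """L2 loss in the perceptual domain, so errors in dark and bright regions
    are weighted closer to how the human visual system perceives them."""
    return torch.mean((pq_encode(pred_linear) - pq_encode(gt_linear)) ** 2)
```

Training in such a perceptual space avoids the alternative of a plain L2 loss on linear HDR values, which would be dominated by the brightest pixels and under-fit dark regions.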

Data Capture

VR demo

BibTeX

      @InProceedings{VRNeRF,
        author    = {Linning Xu and
                     Vasu Agrawal and
                     William Laney and
                     Tony Garcia and
                     Aayush Bansal and
                     Changil Kim and
                     Samuel {Rota Bulò} and
                     Lorenzo Porzi and
                     Peter Kontschieder and
                     Aljaž Božič and
                     Dahua Lin and
                     Michael Zollhöfer and
                     Christian Richardt},
        title     = {{VR-NeRF}: High-Fidelity Virtualized Walkable Spaces},
        booktitle = {SIGGRAPH Asia Conference Proceedings},
        year      = {2023},
        doi       = {10.1145/3610548.3618139},
        url       = {https://vr-nerf.github.io},
      }