We present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields. To this end, we designed and built a custom multi-camera rig that densely captures walkable spaces with multi-view high dynamic range images of unprecedented quality and density. We extend instant neural graphics primitives with a novel perceptual color space for learning accurate HDR appearance, and an efficient mip-mapping mechanism for level-of-detail rendering with anti-aliasing, while carefully balancing quality and speed. Our multi-GPU renderer enables high-fidelity volume rendering of our neural radiance field model at the full VR resolution of dual 2K×2K at 36 Hz on our custom demo machine. We demonstrate the quality of our results on our challenging high-fidelity datasets, and compare our method and datasets to existing baselines.
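The key idea behind learning HDR appearance in a perceptual color space is to compute the photometric loss on perceptually encoded values rather than raw linear radiance, so that dark and bright image regions contribute more evenly to training. The sketch below is a minimal illustration of that idea, not the paper's actual color space: it stands in the standard PQ (SMPTE ST 2084) transfer function as the perceptual encoding, and the function names, the `peak_nits` parameter, and the loss choice are all hypothetical.

```python
import torch
import torch.nn.functional as F

# Standard PQ (SMPTE ST 2084) constants; used here only as an illustrative
# stand-in for the paper's own perceptual color space.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(linear_hdr: torch.Tensor, peak_nits: float = 10000.0) -> torch.Tensor:
    """Map linear HDR radiance to a roughly perceptually uniform value in [0, 1].

    `peak_nits` (hypothetical parameter) normalizes radiance to the PQ peak
    luminance before applying the transfer function.
    """
    y = (linear_hdr / peak_nits).clamp(min=0.0)
    y_m1 = y.pow(M1)
    return ((C1 + C2 * y_m1) / (1.0 + C3 * y_m1)).pow(M2)

def perceptual_hdr_loss(pred_linear: torch.Tensor, target_linear: torch.Tensor) -> torch.Tensor:
    """L2 loss in the perceptual space instead of on raw linear HDR values,
    so low-luminance detail is not drowned out by bright highlights."""
    return F.mse_loss(pq_encode(pred_linear), pq_encode(target_linear))
```

Under this assumption, the encoding compresses the huge dynamic range of linear HDR radiance before the loss is taken, which is what lets a single photometric objective supervise both shadowed and brightly lit parts of a scene.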
@InProceedings{VRNeRF,
  author    = {Linning Xu and Vasu Agrawal and William Laney and Tony Garcia and Aayush Bansal and Changil Kim and Rota Bulò, Samuel and Lorenzo Porzi and Peter Kontschieder and Aljaž Božič and Dahua Lin and Michael Zollhöfer and Christian Richardt},
  title     = {{VR-NeRF}: High-Fidelity Virtualized Walkable Spaces},
  booktitle = {SIGGRAPH Asia Conference Proceedings},
  year      = {2023},
  doi       = {10.1145/3610548.3618139},
  url       = {https://vr-nerf.github.io},
}