benattal/neural-radiance-caching

Introduction

This repository contains the code for two papers:

  1. Flash Cache: Reducing Bias in Radiance Cache Based Inverse Rendering
  2. Neural Inverse Rendering from Propagating Light

Flash Cache: Reducing Bias in Radiance Cache Based Inverse Rendering

The goal of Flash Cache is to recover the geometry, materials, and potentially unknown lighting of a scene from a set of conventional images. It models light transport using radiance caching, a technique that accelerates physically-based rendering, and introduces several techniques that improve speed and reduce bias in cache-based rendering.

Neural Inverse Rendering from Propagating Light

Neural Inverse Rendering from Propagating Light, or InvProp, is an extension of this idea and of the paper Flying with Photons: Rendering Novel Views of Propagating Light. It models time-resolved light transport via a time-resolved radiance cache, and performs inverse rendering from ultrafast videos that capture light in flight.

Installation

To install all required dependencies, run

bash install_environment.sh

Datasets

To use the dataset, download it from the Dropbox link below:

What to download and where to put it

From the Dropbox folder, download:

  • the vignette folder
  • pulse.npy
  • any scene .zip you want to train on

Place these into your repository’s data/ folder so you end up with something like:

data/
  vignette/
  pulse.npy
  <scene_name>.zip
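
As a sanity check, the expected layout can be verified with a small shell helper (hypothetical, not part of this repository):

```shell
# Hypothetical helper: verify the data layout described above.
check_data_layout() {
  local data_dir="$1"
  local missing=0
  # vignette/ and pulse.npy are required for every scene.
  for p in "$data_dir/vignette" "$data_dir/pulse.npy"; do
    if [ ! -e "$p" ]; then
      echo "missing: $p"
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "data layout looks OK"
  fi
  return 0
}
```

Run `check_data_layout data` from the repository root before training.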

Unzip the scene

Unzip the downloaded scene zip (placed in data/ above) into data/:

unzip data/<scene_name>.zip -d data/

Verify config paths

In your config, verify the following entries point to the correct files/directories:

  • Config.calib_checkpoint → path to your vignette/ (or the vignette checkpoint within it, depending on your config layout)
  • Config.impulse_response → path to pulse.npy
  • Config.data_dir → path to the unzipped scene directory
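
For example, assuming a Gin-style configuration (common in multinerf-derived codebases; the exact binding names and path granularity depend on your config layout) and a scene unzipped into data/, the bindings might look like:

```
Config.calib_checkpoint = 'data/vignette'
Config.impulse_response = 'data/pulse.npy'
Config.data_dir = 'data/<scene_name>'
```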

Quick Start

At a high level, this system works by:

  1. Training a "Cache", or NeRF, of the scene, which gives an initial estimate of geometry and of the radiance leaving every point.
  2. Training a physically-based "Material model", which predicts outgoing illumination by integrating the cache against a Disney-GGX BRDF.

To train both models simultaneously, run

bash scripts/train.sh --scene <scene_name> --stage material_light_from_scratch_resample --batch_size 4096 --render_chunk_size 1024

Intermediate images will be written, by default, to ~/checkpoints/yobo_results/synthetic/<scene_name>_<stage>.

Try running the above with the scene name set to hotdog. You should see results in ~/checkpoints/yobo_results/synthetic/hotdog_material_light_from_scratch_resample.
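
The default output location follows a simple `<scene_name>_<stage>` pattern; a small helper (hypothetical, not part of the repository) makes that explicit:

```shell
# Hypothetical helper: build the default results directory for a scene/stage
# pair, mirroring the layout described above. An optional third argument
# overrides the base checkpoint directory.
results_dir() {
  local scene="$1" stage="$2"
  local base="${3:-$HOME/checkpoints/yobo_results/synthetic}"
  echo "$base/${scene}_${stage}"
}
```

For example, `ls "$(results_dir hotdog material_light_from_scratch_resample)"` lists the intermediate renders for the hotdog run above.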

Running Flash Cache

We train and evaluate Flash Cache on the following scenes from the TensoIR-synthetic dataset: hotdog, lego, armadillo, and ficus. We also train and evaluate on the following scenes from the open illumination dataset: obj_02_egg, obj_04_stone, obj_05_bird, obj_17_box, obj_26_pumpkin, obj_29_hat, obj_35_cup, obj_36_sponge, obj_42_banana, obj_48_bucket.

In order to perform evaluation for a specific scene, run:

bash scripts/eval.sh --scene <scene_name> --stage material_light_from_scratch_resample --render_chunk_size 1024 --render_repeats N

where N is the number of runs over which the physically-based renderings are averaged (to reduce Monte Carlo noise).
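
To evaluate every Flash Cache scene in one pass, the command above can be wrapped in a loop. The sketch below prints each command rather than running it (drop the `echo` to execute); `scripts/eval.sh` and its flags are from the instructions above, while the wrapper and the repeat count of 8 are hypothetical:

```shell
# Print the eval command for each scene; remove `echo` to actually run them.
eval_scenes() {
  local repeats="$1"; shift
  local scene
  for scene in "$@"; do
    echo bash scripts/eval.sh --scene "$scene" \
      --stage material_light_from_scratch_resample \
      --render_chunk_size 1024 --render_repeats "$repeats"
  done
}

eval_scenes 8 hotdog lego armadillo ficus \
  obj_02_egg obj_04_stone obj_05_bird obj_17_box obj_26_pumpkin \
  obj_29_hat obj_35_cup obj_36_sponge obj_42_banana obj_48_bucket
```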

Running Neural Inverse Rendering from Propagating Light

We train and evaluate InvProp on the following scenes from our synthetic dataset: cornell, pots, peppers, and kitchen. We also train and evaluate on the following captured scenes: statue, spheres, globe, house.

In order to perform evaluation for a specific scene, run:

bash scripts/eval.sh --scene <scene_name> --stage material_light_from_scratch_resample --render_chunk_size 1024 --render_repeats N

where N is the number of runs over which the physically-based renderings are averaged (to reduce Monte Carlo noise).

Citation

@inproceedings{attal2024flash,
  title={Flash cache: Reducing bias in radiance cache based inverse rendering},
  author={Attal, Benjamin and Verbin, Dor and Mildenhall, Ben and Hedman, Peter and Barron, Jonathan T and O’Toole, Matthew and Srinivasan, Pratul P},
  booktitle={European Conference on Computer Vision},
  pages={20--36},
  year={2024},
  organization={Springer}
}
@inproceedings{malik2025neural,
  title={Neural Inverse Rendering from Propagating Light},
  author={Malik, Anagh and Attal, Benjamin and Xie, Andrew and O’Toole, Matthew and Lindell, David},
  booktitle={CVPR},
  year={2025},
}
@inproceedings{malik2024flying,
  title={Flying with Photons: Rendering Novel Views of Propagating Light},
  author={Malik, Anagh and Juravsky, Noah and Po, Ryan and Wetzstein, Gordon and Kutulakos, Kiriakos N. and Lindell, David B.},
  booktitle={ECCV},
  year={2024}
}

About

Code for projects on inverse rendering via radiance caching, at ECCV 2024 and CVPR 2025
