This repository contains the code for two papers:
- Flash Cache: Reducing Bias in Radiance Cache Based Inverse Rendering
- Neural Inverse Rendering from Propagating Light
The goal of Flash Cache is to recover the geometry, materials, and potentially unknown lighting of a scene from a set of conventional images. It models light transport using radiance caching, a technique that accelerates physically-based rendering, and introduces several techniques that improve speed and reduce bias in the cache.
Neural Inverse Rendering from Propagating Light, or InvProp, extends this idea, as well as the paper Flying with Photons: Rendering Novel Views of Propagating Light. It models time-resolved light transport via time-resolved radiance caching, and performs inverse rendering from ultrafast videos that capture light in flight.
To install all required dependencies, run:

bash install_environment.sh

To use the dataset, download it from the Dropbox link below:
From the Dropbox folder, download:
- the `vignette` folder
- `pulse.npy`
- any scene `.zip` you want to train on
Place these into your repository’s data/ folder so you end up with something like:
data/
vignette/
pulse.npy
<scene_name>.zip
Unzip the downloaded scene zip into data/:
unzip downloaded_zip.zip

In your config, verify the following entries point to the correct files/directories:
- `Config.calib_checkpoint` → path to your `vignette/` (or the vignette checkpoint within it, depending on your config layout)
- `Config.impulse_response` → path to `pulse.npy`
- `Config.data_dir` → path to the unzipped scene directory
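The exact syntax depends on your config layout, but assuming gin-style configuration files (common in JAX NeRF codebases), the overrides might look like the following sketch, where all paths are placeholders:

```
Config.calib_checkpoint = 'data/vignette'
Config.impulse_response = 'data/pulse.npy'
Config.data_dir = 'data/<scene_name>'
```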
At a high level, this system works by:
- Training a "Cache", or NeRF, of a scene, which gives an initial estimate of the geometry and of the radiance leaving every point.
- Training a physically-based "Material model", which predicts outgoing radiance by integrating the cache against a Disney-GGX BRDF.
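To illustrate the material step, here is a minimal NumPy sketch (not the repo's implementation) of estimating outgoing radiance by Monte Carlo integration of a cached incoming-radiance function against a simplified Lambertian-plus-GGX BRDF. The `cache` callable, the cosine-weighted sampling scheme, and all parameter names are hypothetical stand-ins:

```python
import numpy as np

def ggx_brdf(n, v, l, albedo, roughness, f0=0.04):
    """Simplified BRDF: Lambertian diffuse + GGX specular (a sketch; the
    full Disney model used in the paper has more terms)."""
    h = (v + l) / np.linalg.norm(v + l)
    nh, nv, nl = n @ h, n @ v, n @ l
    a2 = roughness ** 4  # Disney remapping: alpha = roughness^2
    d = a2 / (np.pi * (nh ** 2 * (a2 - 1) + 1) ** 2 + 1e-9)  # GGX normal distribution
    k = (roughness + 1) ** 2 / 8
    g = (nv / (nv * (1 - k) + k)) * (nl / (nl * (1 - k) + k))  # Smith masking-shadowing
    f = f0 + (1 - f0) * (1 - max(v @ h, 0.0)) ** 5             # Schlick Fresnel
    spec = d * g * f / (4 * nv * nl + 1e-9)
    return albedo / np.pi + spec

def render_outgoing(cache, n, v, albedo, roughness, n_samples=1024, rng=None):
    """Monte Carlo estimate of L_o = ∫ f(v, l) * cache(l) * (n·l) dl,
    using cosine-weighted hemisphere sampling. Assumes n = +z for brevity
    (a real implementation would rotate samples into the normal's frame)."""
    rng = rng or np.random.default_rng(0)
    u1, u2 = rng.random(n_samples), rng.random(n_samples)
    r, phi = np.sqrt(u1), 2 * np.pi * u2
    dirs = np.stack([r * np.cos(phi), r * np.sin(phi), np.sqrt(1 - u1)], axis=-1)
    total = 0.0
    for l in dirs:
        cos_t = max(l @ n, 1e-6)
        pdf = cos_t / np.pi  # cosine-weighted pdf
        total += ggx_brdf(n, v, l, albedo, roughness) * cache(l) * cos_t / pdf
    return total / n_samples
```

With a constant white cache, the diffuse term integrates to exactly the albedo, and the GGX lobe adds a small Fresnel-scaled contribution on top.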
To train both models simultaneously, run
bash scripts/train.sh --scene <scene_name> --stage material_light_from_scratch_resample --batch_size 4096 --render_chunk_size 1024
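To queue training over several scenes, a simple shell loop works. This sketch only prints the commands (drop the leading `echo` to actually launch them):

```shell
# Dry-run: print the training command for each Flash Cache synthetic scene.
for scene in hotdog lego armadillo ficus; do
  echo bash scripts/train.sh --scene "$scene" \
    --stage material_light_from_scratch_resample \
    --batch_size 4096 --render_chunk_size 1024
done
```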
Intermediate images will be written, by default, to ~/checkpoints/yobo_results/synthetic/<scene_name>_<stage>.
Try running the above with the scene name set to hotdog. You should see results in ~/checkpoints/yobo_results/synthetic/hotdog_material_light_from_scratch_resample.
We train and evaluate Flash Cache on the following scenes from the TensoIR-synthetic dataset: hotdog, lego, armadillo, and ficus. We also train and evaluate on the following scenes from the open illumination dataset: obj_02_egg, obj_04_stone, obj_05_bird, obj_17_box, obj_26_pumpkin, obj_29_hat, obj_35_cup, obj_36_sponge, obj_42_banana, obj_48_bucket.
In order to perform evaluation for a specific scene, run:
bash scripts/eval.sh --scene <scene_name> --stage material_light_from_scratch_resample --render_chunk_size 1024 --render_repeats N
where the physically-based renderings are averaged over N runs to reduce Monte Carlo noise.
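Conceptually, the repeats help because each physically-based rendering is a stochastic Monte Carlo estimate, so averaging N independent renders shrinks the noise roughly as 1/sqrt(N). A toy NumPy sketch (the render function here is a hypothetical stand-in, not the repo's API):

```python
import numpy as np

def noisy_render(rng, shape=(32, 32)):
    """Stand-in for one stochastic physically-based render of a view."""
    truth = 0.5  # hypothetical ground-truth pixel value
    return truth + 0.1 * rng.standard_normal(shape)

def averaged_render(n_repeats, rng):
    """Average n_repeats independent renders, as --render_repeats does conceptually."""
    return np.mean([noisy_render(rng) for _ in range(n_repeats)], axis=0)

rng = np.random.default_rng(0)
err_1 = np.abs(averaged_render(1, rng) - 0.5).mean()
err_16 = np.abs(averaged_render(16, rng) - 0.5).mean()
# err_16 is roughly 4x smaller than err_1 (1/sqrt(16) scaling)
```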
We train and evaluate InvProp on the following scenes from our synthetic dataset: cornell, pots, peppers, and kitchen. We also train and evaluate on the following captured scenes: statue, spheres, globe, house.
In order to perform evaluation for a specific scene, run:
bash scripts/eval.sh --scene <scene_name> --stage material_light_from_scratch_resample --render_chunk_size 1024 --render_repeats N
where the physically-based renderings are averaged over N runs to reduce Monte Carlo noise.
@inproceedings{attal2024flash,
title={Flash cache: Reducing bias in radiance cache based inverse rendering},
author={Attal, Benjamin and Verbin, Dor and Mildenhall, Ben and Hedman, Peter and Barron, Jonathan T and O’Toole, Matthew and Srinivasan, Pratul P},
booktitle={European Conference on Computer Vision},
pages={20--36},
year={2024},
organization={Springer}
}

@inproceedings{malik2025neural,
title={Neural Inverse Rendering from Propagating Light},
author={Malik, Anagh and Attal, Benjamin and Xie, Andrew and O’Toole, Matthew and Lindell, David},
booktitle={CVPR},
year={2025},
}

@article{malik2024flying,
author = {Malik, Anagh and Juravsky, Noah and Po, Ryan and Wetzstein, Gordon and Kutulakos, Kiriakos N. and Lindell, David B.},
title = {Flying with Photons: Rendering Novel Views of Propagating Light},
journal = {ECCV},
year = {2024}
}