EDUS: Efficient Depth-Guided Urban View Synthesis (ECCV 2024)

paper

Sheng Miao*, Jiaxin Huang*, Dongfeng Bai, Weichao Qiu, Bingbing Liu, Andreas Geiger and Yiyi Liao

Our project page is available here.

📖 Datasets

We evaluate our model on KITTI-360 and Waymo. You can download the validation data directly from 🤗 Hugging Face.

  • We provide preprocessed data for inference on KITTI-360, containing 5 validation scenes.
  • We use Metric3D for metric depth prediction and hierarchical-multi-scale-attention for sky mask segmentation.
  • We pre-voxelize the accumulated global point cloud xxx.ply into a numpy array stored in the voxel folder, using a voxel size of [0.2m, 0.2m, 0.2m] as described in the main paper.
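The pre-voxelization step above can be sketched as follows. This is a minimal illustration with numpy, not the repo's actual preprocessing code; the function name and the use of a plain occupancy grid (rather than per-voxel features) are assumptions.

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.2) -> np.ndarray:
    """Quantize an (N, 3) point cloud into a dense occupancy grid.

    Each cell is voxel_size metres on a side (0.2 m, as in the paper).
    """
    origin = points.min(axis=0)                       # grid origin at the min corner
    idx = np.floor((points - origin) / voxel_size).astype(np.int64)
    dims = idx.max(axis=0) + 1                        # grid resolution per axis
    grid = np.zeros(dims, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0       # mark occupied cells
    return grid

points = np.array([[0.0, 0.0, 0.0],
                   [0.5, 0.1, 0.0],
                   [0.5, 0.5, 0.5]])
grid = voxelize(points)
print(grid.shape)  # (3, 3, 3)
```

The resulting array can then be saved with `np.save` into the `voxel` folder per scene.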

The dataset should have a structure as follows:

├── $PATH_TO_YOUR_DATASET
    ├── $SCENE_0
        ├── depth
        ├── semantic
        ├── mask
        ├── voxel
        ├── *.png
        ...
        ├── transforms.json
    ...
    ├── $SCENE_N
        ├── depth
        ├── semantic
        ├── mask
        ├── voxel
        ├── *.png
        ...
        ├── transforms.json
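A quick sanity check of the layout above can be sketched like this. The helper names are hypothetical, and the required entries assume the folder names shown plus nerfstudio's standard transforms.json:

```python
from pathlib import Path

# Entries every scene folder is expected to contain (an assumption
# based on the dataset tree in this README).
REQUIRED = ["depth", "semantic", "mask", "voxel", "transforms.json"]

def check_scene(scene_dir: Path) -> list:
    """Return the required entries missing from one scene folder."""
    return [name for name in REQUIRED if not (scene_dir / name).exists()]

def check_dataset(root: Path) -> dict:
    """Map each scene folder under the dataset root to its missing entries."""
    return {s.name: check_scene(s) for s in sorted(root.iterdir()) if s.is_dir()}
```

Running `check_dataset(Path("$PATH_TO_YOUR_DATASET"))` before training makes a missing depth or voxel folder obvious up front.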

🏠 Installation

EDUS is built on nerfstudio. You can follow the nerfstudio installation guide to set up the prerequisites for our code.

Create environment

conda create --name EDUS -y python=3.8
conda activate EDUS
pip install --upgrade pip

Dependencies

Install PyTorch with CUDA (this repo has been tested with CUDA 11.8).

pip install torch==2.1.2+cu118 torchvision==0.16.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit

After installing PyTorch, install tiny-cuda-nn:

pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

Installing EDUS

Install EDUS from source:

git clone https://github.com/XDimLab/EDUS.git
cd EDUS
pip install --upgrade pip setuptools
pip install -e .
Troubleshooting
  • The inplace-abn library does not check your CUDA version at install time. If you encounter the following error at runtime:
libtorch_cuda_cu.so: cannot open shared object file: No such file or directory

Please reinstall inplace-abn so that it is built against your CUDA version:

pip uninstall inplace-abn
rm -r ~/.cache/pip
pip install inplace-abn

📈 Evaluation & Checkpoint

We provide pretrained models trained on KITTI-360 and Waymo; you can download them from here. We recommend the checkpoint pretrain_kitti360.pth, which is trained on KITTI-360.

Place the downloaded checkpoints in the checkpoint folder so they can be loaded at test time.

Feed-forward Inference

We provide different sparsity levels (50%, 80%) to validate our method, where a higher drop rate corresponds to a sparser set of reference images. Replace $Data_Dir$ with your data path.

python scripts/infere_zeroshot.py edus \
  --config_file config/test_GVS_nerf.yaml \
  zeronpt-data \
  --data $Data_Dir$ \
  --drop50=True \
  --include_depth_map=True

Replace --drop50=True with --drop80=True to run inference under the Drop80 setting.
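Conceptually, the drop settings subsample the reference views before inference. A minimal sketch of that idea follows; the function name and the deterministic, evenly spaced selection are assumptions, and the repo may sample views differently:

```python
def drop_reference_views(views: list, drop_rate: float) -> list:
    """Keep a (1 - drop_rate) fraction of reference views, evenly spaced.

    drop_rate=0.5 corresponds to the Drop50 setting, 0.8 to Drop80.
    """
    keep = max(1, round(len(views) * (1.0 - drop_rate)))  # always keep >= 1 view
    stride = len(views) / keep                            # spacing between kept views
    return [views[int(i * stride)] for i in range(keep)]

views = list(range(10))
print(drop_reference_views(views, 0.5))  # [0, 2, 4, 6, 8]
print(drop_reference_views(views, 0.8))  # [0, 5]
```

Fewer surviving reference views means larger gaps between observed viewpoints, which is what makes the Drop80 setting harder than Drop50.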

📋 Citation

If our work is useful for your research, please give us a star and consider citing:

@inproceedings{miao2025efficient,
  title={Efficient Depth-Guided Urban View Synthesis},
  author={Miao, Sheng and Huang, Jiaxin and Bai, Dongfeng and Qiu, Weichao and Liu, Bingbing and Geiger, Andreas and Liao, Yiyi},
  booktitle={European Conference on Computer Vision},
  pages={90--107},
  year={2025},
  organization={Springer}
}

✨ Acknowledgement
