LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry

[CVPR 2024] This repository contains the official implementation of LEAP-VO. LEAP-VO leverages temporal context via long-term point tracking for motion estimation, with explicit occlusion handling and track-probability modeling.

LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry
Weirong Chen, Le Chen, Rui Wang, Marc Pollefeys
CVPR 2024

[Paper] [Project Page]

Installation

Requirements

The code was tested on Ubuntu 20.04 with PyTorch 1.12.0, CUDA 11.3, and a single NVIDIA RTX A4000 GPU.

Clone the repo

git clone https://github.com/chiaki530/leapvo.git
cd leapvo 

Create a conda environment

conda env create -f environment.yml
conda activate leapvo

Install the LEAP-VO package

wget https://gitlab.com/libeigen/eigen/-/archive/3.4.0/eigen-3.4.0.zip
unzip eigen-3.4.0.zip -d thirdparty

pip install .

Alternative Installation

This fork additionally contains a Docker setup. The start script docker/start.sh runs as soon as the Docker container is created/started. For a personal project, this repository adds an API, which the start script launches. If you just want to run the LEAP-VO project normally, comment out the corresponding line in start.sh or remove the script from the ENTRYPOINT altogether.

First some necessary setup steps:

  • Run the following command:
    wget https://gitlab.com/libeigen/eigen/-/archive/3.4.0/eigen-3.4.0.zip
    
  • Download the model checkpoint into the leapvo repository as described in the "Demos" section below.

Once these setup steps are done, you can build and run the Docker setup:

  • To build the project run:

    docker buildx build --platform=linux/amd64 -t leapvo-anaconda . --load
    
    • Building the project will take some time due to the size of the Docker image (~28 GB).
  • Afterwards you can run the project with an interactive shell:

    docker run -v <your-local-project-directory>:/workspace -it leapvo-anaconda /bin/bash
    
    • To avoid rebuilding the image after editing project files, this command uses -v to bind-mount the local project directory into the container.

    • Every time a container is newly created, it runs pip install . to build the LEAP-VO wheel. This step is part of the start script docker/start.sh and runs automatically; building the wheel can take up to ~250 seconds.

  • If you removed the start script docker/start.sh you need to manually run pip install . in the interactive shell.
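The start script itself is not reproduced in this README; based on the behavior described above (wheel build plus a fork-specific API launch), a hypothetical reconstruction of docker/start.sh might look like the sketch below. The API script name is invented for illustration; consult the actual file in the repository.

```shell
#!/bin/bash
# Hypothetical sketch of docker/start.sh (see the repo for the real script).
pip install .          # build the LEAP-VO wheel; can take ~250 s on container start
# bash run_api.sh      # fork-specific API (name is illustrative) -- comment out
                       # this line if you only want plain LEAP-VO
exec "$@"              # hand over to the container command (e.g. /bin/bash)
```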

Once this is done, you can run the demo below in the interactive shell.

Demos

Our method requires an RGB video and camera intrinsics as input. We provide the model checkpoint and example data on Google Drive. Please download leap_kernel.pth and place it in the weights folder, and download samples and place them in the data folder.
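The calibration file holds the pinhole camera intrinsics. As a rough sketch, assuming the single-line `fx fy cx cy` layout common in DPVO-style pipelines (check the downloaded `data/samples/*/calib.txt` for the authoritative format), such a file can be written with:

```python
# Write a pinhole calibration file in a single-line "fx fy cx cy" layout.
# NOTE: the exact layout LEAP-VO expects is an assumption here; compare
# against the calib.txt shipped with the sample data before relying on it.
fx, fy, cx, cy = 575.0, 575.0, 512.0, 218.0  # illustrative intrinsics

with open("calib.txt", "w") as f:
    f.write(f"{fx} {fy} {cx} {cy}\n")
```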

The demo can be run using:

python main/eval.py \
  --config-path=../configs \
  --config-name=demo \                                  # config file
  data.imagedir=data/samples/sintel_market_5/frames \   # path to image directory or video
  data.calib=data/samples/sintel_market_5/calib.txt \   # calibration file
  data.savedir=logs/sintel_market_5 \                   # save directory
  data.name=sintel_market_5 \                           # scene name
  save_trajectory=true \                                # save trajectory in TUM format
  save_video=true \                                     # save video visualization
  save_plot=true                                        # save trajectory plot
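With `save_trajectory=true`, the estimated trajectory is written in TUM format: one pose per line as `timestamp tx ty tz qx qy qz qw`, with `#` lines treated as comments. A minimal loader for post-processing the result might look like this sketch (the file name below is illustrative; the actual name depends on `data.name` and `data.savedir`):

```python
def load_tum_trajectory(path):
    """Parse a TUM-format trajectory file into a list of 8-tuples:
    (timestamp, tx, ty, tz, qx, qy, qz, qw). '#' lines are comments."""
    poses = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            poses.append(tuple(float(x) for x in line.split()))
    return poses

# Illustrative usage on a tiny hand-written trajectory file:
with open("traj_example.txt", "w") as f:
    f.write("# timestamp tx ty tz qx qy qz qw\n")
    f.write("0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0\n")
    f.write("0.1 0.05 0.0 0.0 0.0 0.0 0.0 1.0\n")

poses = load_tum_trajectory("traj_example.txt")
```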

Evaluations

We provide evaluation scripts for MPI-Sintel, TartanAir-Shibuya, and Replica.

MPI-Sintel

Follow the MPI-Sintel instructions and download the dataset into the data folder. For evaluation, you also need to download the ground-truth camera pose data. The folder structure should look like:

MPI-Sintel-complete
└── training
    ├── final
    └── camdata_left

Then run the evaluation script after setting the DATASET variable to your dataset location:

bash scripts/eval_sintel.sh
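If you prefer not to edit the script by hand, the DATASET assignment can be rewritten from the command line. This sketch assumes the script assigns DATASET on a line of its own (the exact layout of scripts/eval_sintel.sh may differ); a stand-in script is created here purely to demonstrate the edit:

```shell
# Stand-in for scripts/eval_sintel.sh (the real script ships with the repo);
# created here only to demonstrate the sed edit below.
printf 'DATASET=data/MPI-Sintel-complete\nbash run_eval "$DATASET"\n' > eval_sintel_demo.sh

# Rewrite the DATASET assignment to point at a custom location:
sed -i 's|^DATASET=.*|DATASET=/path/to/MPI-Sintel-complete|' eval_sintel_demo.sh
```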

TartanAir-Shibuya

Follow the TartanAir-Shibuya instructions and download the dataset into the data folder. Then run the evaluation script after setting the DATASET variable to your dataset location:

bash scripts/eval_shibuya.sh

Replica

Follow Semantic-NeRF and download the Replica dataset into the data folder. Then run the evaluation script after setting the DATASET variable to your dataset location:

bash scripts/eval_replica.sh

Citations

If you find our repository useful, please consider citing our paper in your work:

@InProceedings{chen2024leap,
  title={LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry},
  author={Chen, Weirong and Chen, Le and Wang, Rui and Pollefeys, Marc},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
}

Acknowledgement

We adapted code from several awesome repositories, including CoTracker, DPVO, and ParticleSfM. We sincerely thank the authors for open-sourcing their work, and we follow the licenses of CoTracker, DPVO, and ParticleSfM.
