# LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry

**Weirong Chen, Le Chen, Rui Wang, Marc Pollefeys**

CVPR 2024

[Paper] [Project Page]

This repository contains the official implementation of LEAP-VO, which leverages temporal context with long-term point tracking to achieve motion estimation, occlusion handling, and track probability modeling.
## Installation

The code was tested on Ubuntu 20.04 with PyTorch 1.12.0 and CUDA 11.3 on a single NVIDIA RTX A4000 GPU.

```bash
git clone https://github.com/chiaki530/leapvo.git
cd leapvo
conda env create -f environment.yml
conda activate leapvo

# download Eigen headers into thirdparty
wget https://gitlab.com/libeigen/eigen/-/archive/3.4.0/eigen-3.4.0.zip
unzip eigen-3.4.0.zip -d thirdparty

pip install .
```
## Docker Setup

This fork additionally contains a Docker setup.

The start script `docker/start.sh` runs as soon as the Docker container is created/run. For a personal project, this repository includes an added API, which the start script launches. If you just want to run the LEAP-VO project normally, comment out the related line in `start.sh` or remove the script from the `ENTRYPOINT` altogether.
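As a rough orientation, the start script behaves like the sketch below. This is a hedged illustration only: the `pip install .` and API-launch lines are commented out so the sketch runs anywhere, and the API entry-point name is an assumption, not the actual file in this repository.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of docker/start.sh; see the real script in this repo.
set -e

msg_install="Building leapvo wheel (can take ~250 s)..."
echo "$msg_install"
# pip install .            # the real script runs this; commented out here

msg_api="Starting personal-project API (comment this out for plain LEAP-VO)..."
echo "$msg_api"
# python api/main.py &     # hypothetical API entry point (assumed name)
```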
First, some necessary setup steps:

- Download Eigen:

  ```bash
  wget https://gitlab.com/libeigen/eigen/-/archive/3.4.0/eigen-3.4.0.zip
  ```

- Load the model weights into the leapvo repository as described in the "Demo" section below.
When the setup steps are done, you can build and run the Docker setup:

- Build the project:

  ```bash
  docker buildx build --platform=linux/amd64 -t leapvo-anaconda . --load
  ```

  Building the project will take some time due to the size of the Docker image (~28 GB).

- Afterwards, you can run the project with an interactive shell:

  ```bash
  docker run -v <your-local-project-directory>:/workspace -it leapvo-anaconda /bin/bash
  ```

  To avoid rebuilding after editing project files, this command uses `-v` to bind-mount the local project directory.

- Every time a container is newly created, it builds a wheel with `pip install .`. This step is already in the start script `docker/start.sh` and runs automatically. Creating the wheel can take up to ~250 seconds.

- If you removed the start script `docker/start.sh`, you need to manually run `pip install .` in the interactive shell.
After completing these steps, you can run the demo below in the interactive shell.
## Demo

Our method requires an RGB video and camera intrinsics as input. We provide the model checkpoint and example data on Google Drive. Please download `leap_kernel.pth` and place it in the `weights` folder, and download the samples and place them in the `data` folder.
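The expected layout of the calibration file is defined by LEAP-VO's data loader; as a hedged illustration, pinhole intrinsics are commonly stored as a single line `fx fy cx cy`. The scene directory and all numeric values below are made up:

```shell
# Sketch only: a common single-line pinhole calibration layout (fx fy cx cy).
# The actual expected format is defined by LEAP-VO's data loader;
# the path and values here are made up for illustration.
mkdir -p data/samples/demo_scene
printf "600.0 600.0 320.0 240.0\n" > data/samples/demo_scene/calib.txt
cat data/samples/demo_scene/calib.txt
```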
The demo can be run using:

```bash
python main/eval.py \
    --config-path=../configs \
    --config-name=demo \                                 # config file
    data.imagedir=data/samples/sintel_market_5/frames \  # path to image directory or video
    data.calib=data/samples/sintel_market_5/calib.txt \  # calibration file
    data.savedir=logs/sintel_market_5 \                  # save directory
    data.name=sintel_market_5 \                          # scene name
    save_trajectory=true \                               # save trajectory in TUM format
    save_video=true \                                    # save video visualization
    save_plot=true                                       # save trajectory plot
```
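With `save_trajectory=true`, the estimated poses are written in TUM trajectory format, i.e. one `timestamp tx ty tz qx qy qz qw` line per frame. A quick shell sanity check on that layout (the pose values below are made up):

```shell
# TUM trajectory format: timestamp tx ty tz qx qy qz qw (one pose per line).
# The sample poses below are made up for illustration.
cat > traj_sample.txt <<'EOF'
0.0 0.1 0.2 0.3 0.0 0.0 0.0 1.0
1.0 0.2 0.3 0.4 0.0 0.0 0.0 1.0
EOF

# Every TUM pose line must have exactly 8 whitespace-separated columns.
awk '{ print NF }' traj_sample.txt
```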
## Evaluation

We provide evaluation scripts for MPI-Sintel, TartanAir-Shibuya, and Replica.
### MPI-Sintel

Follow MPI-Sintel and download it to the `data` folder. For evaluation, we also need to download the ground-truth camera pose data. The folder structure should look like:

```
MPI-Sintel-complete
└── training
    ├── final
    └── camdata_left
```
Then run the evaluation script after setting the `DATASET` variable to your custom location:

```bash
bash scripts/eval_sintel.sh
```
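How `DATASET` is picked up depends on `scripts/eval_sintel.sh` itself, so check the script; a common shell pattern for such scripts (an assumption here, including the default path) is an environment-variable override with a fallback:

```shell
# Common pattern for dataset paths in eval scripts (an assumption; check
# scripts/eval_sintel.sh): use the DATASET environment variable if set,
# otherwise fall back to a default location.
DATASET="${DATASET:-data/MPI-Sintel-complete/training}"
echo "Evaluating on: $DATASET"
```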
### TartanAir-Shibuya

Follow TartanAir-Shibuya and download it to the `data` folder. Then run the evaluation script after setting the `DATASET` variable to your custom location:

```bash
bash scripts/eval_shibuya.sh
```
### Replica

Follow Semantic-NeRF and download the Replica dataset into the `data` folder. Then run the evaluation script after setting the `DATASET` variable to your custom location:

```bash
bash scripts/eval_replica.sh
```
## Citation

If you find our repository useful, please consider citing our paper in your work:

```bibtex
@InProceedings{chen2024leap,
  title={LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry},
  author={Chen, Weirong and Chen, Le and Wang, Rui and Pollefeys, Marc},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
}
```
## Acknowledgements

We adapted some code from awesome repositories including CoTracker, DPVO, and ParticleSfM. We sincerely thank the authors for open-sourcing their work, and we follow the licenses of CoTracker, DPVO, and ParticleSfM.