Python scripts for performing stereo depth estimation using the MobileStereoNet model in ONNX.
Stereo depth estimation on the cones images from the Middlebury dataset (https://vision.middlebury.edu/stereo/data/scenes2003/)
Requirements
- Check the requirements.txt file. Additionally, pafy and youtube-dl are required for YouTube video inference.
- DrivingStereo dataset, ONLY for the driving_sereo_test.py script. Link: https://drivingstereo-dataset.github.io/
Installation
pip install -r requirements.txt
pip install pafy youtube-dl
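pafy and youtube-dl are only needed for YouTube video inference. As a rough illustration (not the repository's exact code), a YouTube stream can be opened with pafy and read with OpenCV roughly as follows; the video URL below is only a placeholder:

```python
import cv2
import pafy

# Placeholder URL: replace with the video you want to run inference on.
video_url = "https://youtu.be/VIDEO_ID"

# pafy resolves a direct stream URL, which OpenCV can then open.
stream = pafy.new(video_url).getbest(preftype="mp4")
cap = cv2.VideoCapture(stream.url)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("Frame", frame)  # the stereo depth model would run on this frame
    if cv2.waitKey(1) == ord('q'):
        break
cap.release()
```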
ONNX model
Download the ONNX model from Google Drive and save it into the models folder.
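Once saved, the model can be loaded with ONNX Runtime. The snippet below is a minimal sketch; the filename models/model_float32.onnx is an assumed example, so adjust it to the actual name of the downloaded file:

```python
import onnxruntime

# Assumed filename; change it to match the downloaded model.
model_path = "models/model_float32.onnx"
session = onnxruntime.InferenceSession(model_path,
                                       providers=["CPUExecutionProvider"])

# Inspect the expected input names and shapes so the preprocessing
# can be matched to the exported model.
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)
```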
Original PyTorch model
The pretrained PyTorch model was taken from the original repository.
Examples
- Image inference (see the ONNX Runtime sketch after this list):
python image_depth_estimation.py
- Video inference:
python video_depth_estimation.py
- DrivingStereo dataset inference:
python driving_sereo_test.py
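As referenced in the image inference item above, the following is a rough sketch of what stereo inference with the ONNX model looks like. The input names, the 256x512 resolution, and the ImageNet normalization are assumptions, not the repository's confirmed preprocessing; check image_depth_estimation.py for the exact pipeline.

```python
import cv2
import numpy as np
import onnxruntime

def preprocess(img, width=512, height=256):
    # Assumed preprocessing: resize, convert to RGB, apply ImageNet
    # normalization, and rearrange to NCHW layout.
    img = cv2.resize(img, (width, height))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    img = (img - mean) / std
    return img.transpose(2, 0, 1)[np.newaxis]

# Assumed model filename; adjust to the downloaded file.
session = onnxruntime.InferenceSession("models/model_float32.onnx",
                                       providers=["CPUExecutionProvider"])

left = preprocess(cv2.imread("left.png"))
right = preprocess(cv2.imread("right.png"))

# Feed both views; the output is a disparity map that can be colorized
# for visualization.
input_names = [inp.name for inp in session.get_inputs()]
disparity = session.run(None, {input_names[0]: left,
                               input_names[1]: right})[0].squeeze()

disp_vis = cv2.applyColorMap(
    cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8),
    cv2.COLORMAP_MAGMA)
cv2.imwrite("disparity.png", disp_vis)
```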
Inference video example
References:
- MobileStereoNet model: https://github.com/cogsys-tuebingen/mobilestereonet
- PINTO0309’s model zoo: https://github.com/PINTO0309/PINTO_model_zoo
- PINTO0309’s model conversion tool: https://github.com/PINTO0309/openvino2tensorflow
- DrivingStereo dataset: https://drivingstereo-dataset.github.io/
- Original paper: https://arxiv.org/pdf/2108.09770.pdf