In a previous post I outlined a workflow for using VisualSFM for photogrammetry. In this post, I’d like to highlight some recent work I’ve done for creating an isolated, replicable VisualSFM environment using Docker.
## Using a Local Docker Host
First, follow the Docker installation guide for your platform (though on a Mac, you may prefer to follow this Homebrew-based guide).
After you’ve set up docker and your shell’s environment variables (using e.g. `$(boot2docker shellinit)`), run:

```
docker run -i -t ryanfb/visualsfm /bin/bash
```
This should download my image from Docker Hub and drop you into a shell with a `VisualSFM` command (as well as `youtube-dl` and the `ffmpeg` replacement `avconv`).[^1]
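Once you’re at the container shell, a quick sanity check (my own addition, not something the image requires) is to confirm those tools are actually on `PATH`:

```shell
# Report whether each tool mentioned above is available in the container.
# `command -v` succeeds only if the command exists on PATH.
report=""
for tool in VisualSFM youtube-dl avconv; do
  if command -v "$tool" >/dev/null 2>&1; then
    report="$report$tool: found\n"
  else
    report="$report$tool: missing\n"
  fi
done
printf "%b" "$report"
```

On the stock image all three should report `found`; a `missing` line means the image didn’t build or download correctly.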
## Using an AWS GPU-Enabled Docker Host
Because my VisualSFM image builds on work by Traun Leyden to build a CUDA-enabled Ubuntu install with Docker, you can run the `cuda` tag/branch of it in a GPU-enabled environment to take advantage of SiftGPU during the SIFT feature recognition stage of VisualSFM processing (with no GPU/CUDA support detected, it will fall back to the CPU-based VLFeat SIFT implementation).[^2]
This means you can also use his instructions and AMI for building a CUDA-enabled AWS EC2 instance, and then run my VisualSFM image inside it.
To do this you’ll need to:
- Launch an EC2 instance with instance type `g2.2xlarge`, community AMI `ami-2cbf3e44`, and 20+ GB of storage
- Connect to your EC2 instance
- Install `docker` inside your EC2 instance:

  ```
  sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
  sudo sh -c "echo deb https://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
  sudo apt-get update
  sudo apt-get install lxc-docker
  ```
- Run a GPU-enabled VisualSFM docker image:
  - Build the CUDA samples and run `deviceQuery` inside your Docker host (this seems to be necessary to init the nvidia devices in `/dev`):

    ```
    cd ~/nvidia_installers
    sudo ./cuda-samples-linux-6.5.14-18745345.run -noprompt -cudaprefix=/usr/local/cuda-6.5/
    cd /usr/local/cuda/samples/1_Utilities/deviceQuery
    sudo make
    ./deviceQuery
    ```

  - Find your nvidia devices with:

    ```
    ls -la /dev | grep nvidia
    ```

  - Set these as `--device` arguments in a variable you’ll pass to the `docker run` command (note the `:cuda` tag specifier here):

    ```
    export DOCKER_NVIDIA_DEVICES="--device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm"
    sudo docker run -ti $DOCKER_NVIDIA_DEVICES ryanfb/visualsfm:cuda /bin/bash
    ```

  - Follow the instructions here for more explanation and to verify CUDA access inside the container
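Rather than hard-coding the device list, you could also assemble `DOCKER_NVIDIA_DEVICES` from whatever nvidia device nodes actually exist. This is my own sketch, not from the original setup; `DEV_DIR` is parameterized only so the loop is easy to exercise against a scratch directory:

```shell
# Build a --device flag for every nvidia device node present, so the list
# stays correct even if e.g. /dev/nvidia-uvm hasn't been created yet.
DEV_DIR="${DEV_DIR:-/dev}"
DOCKER_NVIDIA_DEVICES=""
for dev in "$DEV_DIR"/nvidia*; do
  [ -e "$dev" ] || continue
  DOCKER_NVIDIA_DEVICES="$DOCKER_NVIDIA_DEVICES --device $dev:$dev"
done
echo "device flags:$DOCKER_NVIDIA_DEVICES"
```

If the flags come out empty, the nvidia devices haven’t been initialized yet — re-run `deviceQuery` as above and check `ls -la /dev | grep nvidia`.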
You should now be inside a docker container in your EC2 instance, with a `VisualSFM` command which will use SiftGPU for feature recognition.
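To get frames into the container and results back out, I’d extend the `docker run` invocation above with a volume mount. The `/data` mount point and `HOST_DATA` path here are my assumptions, not part of the original image; this sketch only assembles and prints the command so you can inspect it before running:

```shell
# Assemble a GPU-enabled docker run command with a bind mount, reusing the
# DOCKER_NVIDIA_DEVICES variable built earlier (defaulted here if unset).
DOCKER_NVIDIA_DEVICES="${DOCKER_NVIDIA_DEVICES:---device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm}"
HOST_DATA="${HOST_DATA:-$HOME/frames}"
RUN_CMD="sudo docker run -ti $DOCKER_NVIDIA_DEVICES -v $HOST_DATA:/data ryanfb/visualsfm:cuda /bin/bash"
echo "$RUN_CMD"
```

Anything VisualSFM writes under `/data` inside the container will then persist in `$HOST_DATA` on the EC2 host after the container exits.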
## Footnotes

[^1]: Note that if you’re getting segmentation faults during reconstruction when you try to run VisualSFM inside Docker, you may need to increase your boot2docker memory allocation.

[^2]: As of this writing, on a `g2.2xlarge` instance processing frames from the example in the previous post, SiftGPU takes approximately 0.05-0.15 sec/frame vs. 1-2 sec/frame with the CPU-based VLFeat implementation.