tags: docker, photogrammetry
Originally Published: 2015-01-13

In a previous post I outlined a workflow for using VisualSFM for photogrammetry. In this post, I’d like to highlight some recent work I’ve done for creating an isolated, replicable VisualSFM environment using Docker.

Using a Local Docker Host

First, follow the Docker installation guide for your platform (though on a Mac, you may prefer to follow this Homebrew-based guide).

After you’ve set up Docker and your shell’s environment variables (e.g. with eval "$(boot2docker shellinit)"), run:

docker run -i -t ryanfb/visualsfm /bin/bash

This should download my image from Docker Hub and drop you into a shell with a VisualSFM command available (as well as youtube-dl and the ffmpeg replacement avconv).1
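If you want frames and reconstruction output to survive the container, one approach is to bind-mount a host directory when starting it. The sketch below is my own addition (the /data mount point and the visualsfm_run_cmd wrapper are arbitrary choices, not something the image requires); it builds the invocation as a string so you can review it before running it by hand:

```shell
#!/bin/sh
# Hypothetical wrapper (my own naming): build the docker invocation for
# running the VisualSFM image with a host directory bind-mounted at
# /data, so input frames and reconstruction output persist on the host.
visualsfm_run_cmd() {
  # $1: absolute path to the host directory to mount
  printf 'docker run -i -t -v %s:/data ryanfb/visualsfm /bin/bash\n' "$1"
}

# Print the command for review, then run it by hand:
visualsfm_run_cmd "$(pwd)"
```

Running the printed command then drops you into the same shell as before, with your host directory visible at /data inside the container.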

Using an AWS GPU-Enabled Docker Host

Because my VisualSFM image builds on Traun Leyden’s work on a CUDA-enabled Ubuntu install under Docker, you can run the cuda tag/branch of the image in a GPU-enabled environment to take advantage of SiftGPU during the SIFT feature-detection stage of VisualSFM processing. (If no GPU/CUDA support is detected, VisualSFM falls back to the CPU-based VLFeat SIFT implementation.)2

This means you can also use his instructions and AMI for building a CUDA-enabled AWS EC2 instance, and then run my VisualSFM image inside it.

To do this you’ll need to:

  1. Follow his instructions to launch a GPU instance (e.g. g2.2xlarge) from the CUDA-enabled AMI
  2. SSH into the running instance
  3. Run the cuda tag of my image: docker run -i -t ryanfb/visualsfm:cuda /bin/bash

You should now be inside a Docker container on your EC2 instance, with a VisualSFM command which will use SiftGPU for feature detection.
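Once inside, a quick way to confirm the GPU was actually passed through to the container is to look for the NVIDIA device nodes. A minimal sketch, assuming the standard /dev/nvidia* driver device files (the exact set varies by driver version, and the check_nvidia_devices helper is my own naming):

```shell
#!/bin/sh
# Hypothetical helper: count the NVIDIA device nodes visible in a
# directory. Inside a correctly configured container, /dev should show
# nodes such as /dev/nvidia0 and /dev/nvidiactl; a count of zero means
# SiftGPU has no device and VisualSFM will fall back to CPU SIFT.
check_nvidia_devices() {
  # $1: directory to inspect (normally /dev)
  ls "$1"/nvidia* 2>/dev/null | wc -l
}

# Usage inside the container:
check_nvidia_devices /dev
```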


  1. Note that if you’re getting segmentation faults during reconstruction when you try to run VisualSFM inside Docker, you may need to increase your boot2docker memory allocation.
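One way to do that is through the boot2docker profile file; a sketch, assuming the default profile location (the 4096 figure is just an example, pick a value your host can spare):

```
# ~/.boot2docker/profile
# Memory for the boot2docker VM, in MB (4096 is an arbitrary example)
Memory = 4096
```

After changing this you’ll need to recreate the VM (e.g. boot2docker delete followed by boot2docker init) for the new allocation to take effect.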

  2. As of this writing, on a g2.2xlarge instance processing frames from the example in the previous post, SiftGPU takes approximately 0.05-0.15 sec/frame vs. 1-2 sec/frame with the CPU-based VLFeat implementation.