Meet Stephen Mather: drone-software extraordinaire and co-founder of the OpenDroneMap ecosystem. Stephen has always had a passion for where biology and geography meet. In university, he gained the GIS knowledge that would eventually lead him to support the creation of OpenDroneMap during his time with Cleveland Metroparks.
Since its creation, the software's range has expanded, and so has Stephen's. Today, he largely focuses on growing the capabilities of the project, while also serving as a Systems Administrator for Oberlin College.
What Is OpenDroneMap?
OpenDroneMap is an open-source application whose goal is to make processing, viewing, and manipulating aerially collected imagery more accessible and cohesive.
In the field, a drone collects a large series of images over an area. OpenDroneMap takes on the next step of creating usable GIS products from this imagery. Some examples of the products you can create are 3D textured meshes, seamless orthophotos, elevation models, and point clouds.
OpenDroneMap began as, and remains, a command-line application. To advance that core mission of accessibility, it has since been wrapped in a more user-friendly GUI, allowing easier adoption for those of us who quiver at the idea of a Linux deployment.
The application is largely browser-based. Some of the components that make this possible are NodeODM, WebODM, and ODM itself, all of which depend on Docker and Python 3. In terms of recommended hardware, for a decent-sized project (more than 150 quality images) you will want at least 16GB of RAM and a sizable chunk of free disk space to give the application the power it needs. The good news is that OpenDroneMap does not currently need to leverage a GPU, so if you are operating off a CPU alone, you are not at a disadvantage. Further details on hardware recommendations, and step-by-step guidance on installing OpenDroneMap, are available in the project's documentation.
The choice of going browser-based for such a heavy-hitting program may seem questionable on the surface, but the deeper you go, the more it just makes sense. Being tied to a desktop creates barriers by limiting both physical and digital movement. Physically, drone imagery collection still largely takes place in the field. A mobile deployment lets pilots take their software with them as they travel and supervise ongoing projects, monitoring and tweaking them as the need arises. Digitally, the current OpenDroneMap infrastructure makes it possible to share models with others as easily as sending a link. This level of mobility is unmatched in purely desktop-based applications.
Getting Started in Photogrammetry
Drones are flashy, fun, and gradually becoming more attainable for the Average GI(S) Joe.
Photogrammetry, however, is anything but new technology. Images have been collected and stitched together since the days when hot air balloons were considered a viable form of transportation. While the hardware has become much more compact, it is still quite expensive and poses one of the main barriers to entry for new pilots.
If you are committed to taking the plunge, do your research before purchasing a drone to make sure the camera/sensor supports the type and quality of imagery that you need. The most important consideration is that the drone/camera collects GPS information for each image. Manufacturers vary widely in the formats they use to store captured imagery, so it is crucial to make sure your camera has the right specs. If you are planning to capture multispectral imagery (most common for agricultural work), be extra careful selecting a sensor, as multispectral sensors have the most limited support. Currently, OpenDroneMap supports only the Sentera 6X and the MicaSense RedEdge-MX and Altum, but the team is working to expand its offerings.
Growing the available selection is largely dependent on receiving sample datasets, so feel free to share the wealth with the team!
Once you have the drone, check your local or national regulations on how and where you can fly it. In the United States, you will want to check the Federal Aviation Administration’s website to learn more. In Europe, the EU provides guidance on registration and regulation.
After navigating the bureaucratic red tape, you are almost ready to fly! You will want to seek out a flight planning app and a compatible remote controller. A flight planning app will make it drastically easier to collect higher-quality imagery that is more likely to process successfully when you return to the office. To further increase the accuracy of your products, consider setting ground control points (GCPs). Flight planning will allow you to easily set your flight path, overlap, and altitude. We will see in a bit why fine-tuning these knobs is so important, and the right settings will not be the same in every situation.
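To get a feel for why the altitude knob matters, it helps to look at ground sample distance (GSD): the real-world width of ground covered by a single pixel. The sketch below uses the standard photogrammetry relation; the sensor numbers are hypothetical, loosely resembling a small consumer drone camera, so substitute your own camera's specs.

```python
# GSD (ground sample distance): the real-world width of one pixel.
# Standard photogrammetry relations; the sensor values used below are
# hypothetical, roughly in the range of a small consumer drone camera.

def gsd_cm_per_px(sensor_width_mm, focal_length_mm, altitude_m, image_width_px):
    """GSD in cm/pixel for a nadir (straight-down) shot."""
    return (sensor_width_mm * altitude_m * 100) / (focal_length_mm * image_width_px)

def footprint_m(gsd_cm, image_width_px):
    """Ground width covered by one image, in meters."""
    return gsd_cm * image_width_px / 100

gsd = gsd_cm_per_px(sensor_width_mm=13.2, focal_length_mm=8.8,
                    altitude_m=100, image_width_px=5472)
print(f"GSD at 100 m: {gsd:.2f} cm/px")                          # ~2.74 cm/px
print(f"Image footprint: {footprint_m(gsd, 5472):.0f} m wide")   # ~150 m
```

Halve the altitude and the GSD halves too: you capture finer detail, but each image covers less ground, which is exactly the trade-off a flight planning app is juggling for you.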
How Does Photogrammetry Work?
Put most simply, photogrammetry is the process of stitching images together into one large fabric based on their overlap. The software completes this process in two general stages. First is the matching stage. In this step, the software filters out images that are least representative of the rest of the dataset. For example, if you have 1,000 overhead pictures of your neighborhood and two close-ups of your neighbor wielding a baseball bat, it will know to remove those images, and you will know to limit your flight plan and altitude near his house.
The next stage your images go through is blending. The software computes common features between your images in their areas of overlap; these shared features are called tie points. The tie points are used to calculate how your images will be stitched together. Now that the software knows which images are related, it can start to decide how to relate them. Blending aims to select the best possible source image to represent each pixel, using information embedded in the images (metadata) such as the distance of the camera from the ground, camera angle, and direction.
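The matching idea can be sketched with a toy example. Real pipelines such as OpenSfM detect thousands of local image features and match their descriptors; here each image is reduced to a handful of hypothetical, human-readable feature labels, and two images become a candidate pair when they share enough features to form tie points.

```python
# Toy sketch of the matching stage. Real software matches numeric feature
# descriptors; these named features and images are purely hypothetical.

images = {
    "img_001": {"barn_roof", "oak_tree", "pond_edge", "fence_corner"},
    "img_002": {"oak_tree", "pond_edge", "fence_corner", "silo"},
    "img_003": {"silo", "fence_corner", "driveway"},
    "img_004": {"neighbor_with_bat"},  # the outlier: shares nothing
}

MIN_TIE_POINTS = 2  # require at least this many shared features

def candidate_pairs(images, min_shared=MIN_TIE_POINTS):
    """Return (image_a, image_b, shared_features) for overlapping pairs."""
    names = sorted(images)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = images[a] & images[b]
            if len(shared) >= min_shared:
                pairs.append((a, b, shared))
    return pairs

for a, b, shared in candidate_pairs(images):
    print(f"{a} <-> {b}: {len(shared)} tie points")
# img_004 never pairs with anything, so it gets filtered out of the model.
```

The real geometry is far more involved (descriptors, epipolar constraints, bundle adjustment), but the core intuition holds: no shared features, no tie points, no place in the reconstruction.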
Matching and blending are not too difficult for a human to conceptualize, but when this process runs through software, it is a bit of a black box as to how decisions are being made. Some of the packages supporting OpenDroneMap's capabilities are OpenSfM and OpenMVS. Creation of point cloud products is also supported by the integration of PDAL and Entwine.
The purpose of OpenDroneMap is to create your GIS products. Once you are ready to analyze, you will be best off moving your outputs to your preferred GIS software for further processing.
Planning Inputs for the Best Outputs
Running your images through OpenDroneMap, you can expect to get digital terrain models (DTMs), digital surface models (DSMs), point clouds, orthophotos, and 3D textured meshes. Will you get a perfect product every time? No, especially while you are still learning what works and what doesn't.
One of the most important things to consider when building your flight path is overlap. If there is not suitable overlap between images, the software will not be able to build the tie points it needs to understand each image's place in space. Shared and discernible features for creating these ties are essential if you want to avoid holes in your products.
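As a rough sketch of what an overlap setting means on the ground: the front overlap percentage determines how far the drone travels between exposures. The 150 m footprint below is hypothetical (roughly what a 100 m flight with a small consumer camera might give).

```python
# Overlap -> shot spacing. With 80% front overlap, each new photo
# advances only 20% of an image footprint. The 150 m footprint is a
# hypothetical example value, not a fixed property of any drone.

def shot_spacing_m(footprint_m, overlap_pct):
    """Distance the drone travels between exposures, in meters."""
    return footprint_m * (1 - overlap_pct / 100)

for overlap in (60, 80, 90):
    spacing = shot_spacing_m(150, overlap)
    print(f"{overlap}% overlap -> trigger a photo every {spacing:.0f} m")
```

Notice how quickly the image count grows: going from 60% to 90% overlap quadruples the number of photos along each flight line, which is why overlap settings and the hardware recommendations above are so tightly linked.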
But, I flew at 90% overlap for my whole area, why is my output patchy?
Well, even if you do everything right, some things just are not good candidates for capturing with photogrammetry. These things are:
- Shiny, reflective things. The shimmer and glare do not provide any feature to match on. To work around this, try different angles, or fly at a different time of day to avoid the sun's attempted sabotage.
- Repeating patterns. While humans love identifying patterns, computers hate them. They are perfectionists. When every image looks the same, it is impossible for the computer to place them in space. To work around this, fly at different altitudes until you find a scale at which the pattern no longer repeats. This has the side benefit of giving you more levels of detail to work with later.
- Featureless areas. If there are no features to use as tie points, you are not going to get tie points. There really is no workaround for this limitation except finding somewhere more interesting to fly.
Reflecting on these limits, there is one infamous element that hits all three: water. Do not attempt to collect imagery to reconstruct a water body; you will just be wasting everyone's time and battery packs. If water will be present in your study area, limit how much of it you include in your flight plan. Seriously, it will ruin everything.
Something else to consider when planning a flight is temporal resolution. Is there movement in your scene that will confuse the calculation of tie points (e.g., cars)? If you are splitting your flights over multiple days, are you flying at a similar time of day, in similar weather conditions? Variations in sunlight and cloud cover can cause issues in the blending of your final output. Try to be as consistent as possible across image collections to avoid compromising on quality.
What’s Next For OpenDroneMap?
If all this was not cutting edge enough for you, don't worry: there is always more in the pipeline with open-source projects. 3D reconstruction has captured the imagination of engineers everywhere, from Google to Facebook. Apple has even integrated a LiDAR scanner into its most recent phone.
In the future, OpenDroneMap plans to further enhance its capabilities for working with point clouds and LiDAR data, helping to relieve the infamous processing bottleneck that has formed as such data becomes more widely available. How we collect this data is also evolving: in 5-10 years, we may begin to see the fruits of replacing the spinning mirrors used for collection with arrays of photon detectors. What does this mean? Essentially, a camera/LiDAR hybrid capture, hypothetically allowing you to record geometry and texture together.
To summarize, drones are cool, OpenDroneMap is cool. When you put them together, the results are awesome.