Collecting And Processing Aerial Imagery At Scale

Our guest today is Dr. Mike Bewley, who is the Senior Director, AI Systems at Nearmap. Mike studied engineering and physics at the University of Sydney before beginning work at a company which built implants for the hearing impaired. This prompted him to return to school, where he earned his PhD in robotics and machine learning. He entered the geospatial realm by applying this knowledge to an underwater autonomous vehicle survey, automating the image processing work required in the project. Today, he helps engineer the huge data pipeline Nearmap has constructed, collecting and processing petabytes of images of our earth and its contents.

Industrial Scale Imagery as a Business

Imagery collection is a stage of the GIS process that has been taken for granted for some time. Commonly, consumers are limited to coarse government-provided satellite imagery, or must commission a UAV/UAS collection of the area they are interested in to obtain high quality data products that suit their business needs. This second option can be expensive, and if multiple collections are needed from different providers, there is a real possibility of inconsistencies between datasets, such as misalignment or differences in the sensors used.

At the end of the day, what matters most for a consumer is that they are working with the highest quality data accessible to them. In order to be competitive in the market, one must devise a way to meet that need.

There are really two objectives here: creating very high quality data that meets the need at hand, and making it accessible and attractive to the client

In order to focus on collecting the highest quality data, it is beneficial to make your collections as uniform as possible to allow optimization throughout processing. This makes it easier to identify outliers and streamline post-processing raster workflows, as you do not need to spend time recalibrating your system to match the nuances of new imagery. This is especially important when employing machine learning models to extract information down the line, as consistency in inputs is key to effective training data.

In terms of creating attractive and accessible data products, we can look to the market to figure out where the need lies. For Nearmap, a large portion of their customers are insurance companies or local governments. These industries need high quality, detailed data in order to interpret impact at an individual scale. Satellite imagery is unable to provide that level of granularity, so the competition is commissioned UAS/UAV missions. Being attractive and accessible here means being cheaper and more reliable than these methods, which can be accomplished by scaling operations to serve large markets and appeal to more customers.

Once all this data is collected, it is sent through deep learning models which extract meaningful features. As capitalism demands, exactly which features are extracted is dictated by the needs of the customer.

Aerial Imagery Collection at Scale

When operating an imagery business at country-level scales, efficiency is the name of the game. In order to optimize profits, it is necessary to maximize the usable deliverables while minimizing the costs of collection, also known as creating an economy of scale. Nearmap accomplishes this by using sensors that are 4x more efficient than others, and by collecting near complete scans of their target markets multiple times a year.

The more efficient sensors are key and enable planned, general purpose collections rather than following the traditional method of hiring planes for specific missions.

Adding to this already efficient process, Nearmap employs highly skilled meteorologists who help determine the optimal times to collect imagery with their fleet. They can plan for minimal cloud cover and the best on-the-ground conditions for collection, such as early spring, when the trees are bare, which produces better elevation products.

Additionally, Nearmap only collects RGB imagery, not using any thermal or LiDAR scanners. This of course reduces costs in terms of purchasing sensors and managing payload on aircraft, but it also saves resources in the processing stage.

LiDAR data can be resource intensive to process, and frequently the same questions customers have about the space can be answered with traditional RGB imagery

Photogrammetric workflows for deriving 3D information from 2D data have existed for almost 100 years, so RGB can get the job done. Structure from motion uses overlapping 2D images to reconstruct the 3D structure of a scene, producing digital surface models (DSMs) and digital terrain models (DTMs), so it is not strictly necessary to collect LiDAR.
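As a rough illustration of the elevation products mentioned above, the sketch below (not Nearmap's actual pipeline) shows how a normalized DSM, i.e. the height of trees and buildings above the ground, can be derived once structure from motion has produced a DSM and a DTM. The file names are hypothetical placeholders.

```python
# Minimal sketch: derive a normalized DSM (object heights above ground) as the
# difference between a DSM and a DTM. File names are hypothetical placeholders.
import numpy as np
import rasterio

with rasterio.open("dsm.tif") as dsm_src, rasterio.open("dtm.tif") as dtm_src:
    dsm = dsm_src.read(1).astype("float32")   # surface heights (buildings, trees)
    dtm = dtm_src.read(1).astype("float32")   # bare-earth terrain heights
    profile = dsm_src.profile                 # reuse georeferencing for the output

ndsm = np.clip(dsm - dtm, 0, None)            # height above ground, negatives removed

profile.update(dtype="float32", count=1)
with rasterio.open("ndsm.tif", "w", **profile) as dst:
    dst.write(ndsm, 1)
```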

Scaling Artificial Intelligence and Object Detection 

When a company like Nearmap has access to seemingly endless amounts of imagery, the next logical step is to turn it into useful and profitable spatial data products. Considering they are working with very high resolution data (5-7 cm), there are few limits on what can be done, especially with machine learning and deep learning in their toolbox.

Using artificial intelligence workflows, data can be turned into more meaningful data. Consistent, high resolution imagery is the ideal foundation for this: Nearmap has an enormous pool of inputs to draw training data from, and their models will later be run on imagery collected in the same way, increasing the accuracy of the models.

Considering the resolution of the images, they can even extract information on textures, as these come through as slight variations in color in the imagery.

The secret to improving deep learning models is to tweak the training data, not the algorithm

Nearmap creates 35 different feature classes, all from the same source imagery they collect, to provide to their customers. To keep the workflow as hands-off as possible once the pipeline from data collection to processing to object detection has been automated, it is important to follow very strict guidelines when labeling training data.

Human annotators are highly trained and are provided the spatial ontology for each of the feature classes they are labeling, to ensure consistency.

Semantic segmentation is the most precise of the AI labeling methods, as it assigns a value to every pixel in the image. 

A process called multi-labeling can be employed, which allows a pixel that a human would recognize as representing two things to be treated the same way by the model

Essentially, one global model exists, but each output product is derived from it independently of the others. For example, in a scenario where a tree limb hangs over a swimming pool, that limb represents a tree, but it is equally true that that space is also a swimming pool. Using multi-labeling, the unique tree and pool vector feature classes distributed to customers will both claim that pixel in their associated footprints, giving the most realistic real-world representation of the space.
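A minimal sketch of the multi-labeling idea, not Nearmap's actual model: where standard semantic segmentation forces each pixel into a single winning class, multi-labeling gives each class an independent probability, so one pixel can belong to both the tree and swimming pool layers. The class names, threshold, and toy probabilities below are illustrative assumptions.

```python
# Multi-label vs single-label segmentation on a tiny 2x2 image patch.
# Class names, threshold, and probabilities are illustrative assumptions.
import numpy as np

CLASSES = ["tree", "swimming_pool"]
THRESHOLD = 0.5

# Per-class probability maps, shape (classes, H, W). In practice these would
# come from a segmentation network with per-class sigmoid outputs.
probs = np.array([
    [[0.9, 0.1],
     [0.8, 0.2]],                              # tree probabilities
    [[0.7, 0.0],
     [0.9, 0.1]],                              # swimming pool probabilities
])

# Single-label segmentation: exactly one winning class per pixel.
single_label = probs.argmax(axis=0)

# Multi-label segmentation: each class gets its own boolean mask, so a pixel
# (e.g. a tree limb over a pool) can be claimed by both layers at once.
multi_label = {name: probs[i] >= THRESHOLD for i, name in enumerate(CLASSES)}

print(single_label)
print(multi_label["tree"] & multi_label["swimming_pool"])  # pixels counted in both
```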

Artificial intelligence workflows have become more and more popular in the geospatial industry in recent years, and they will continue to prove their value and drive future innovation. Combined with the huge leaps forward in how we collect imagery, the future of GIS will likely see even further adoption and marriage of these technologies.

About the Author
I'm Daniel O'Donohue, the voice and creator behind The MapScaping Podcast ( A podcast for the geospatial community ). With a professional background as a geospatial specialist, I've spent years harnessing the power of spatial to unravel the complexities of our world, one layer at a time.