Aerial Imagery: The State Of The Art

The Importance of AI in Aerial Imagery

This episode is about the state of the art in how we approach aerial imagery today: the equipment used to capture the imagery, how it is processed, the business models behind capturing it, and what consumers are doing with the data. The guest is from Nearmap, a company that has been capturing aerial imagery for over 15 years.

About The Guest

Michael Bewley is the Senior Director of AI Systems at Nearmap. He runs a team of machine learning engineers and data scientists who use AI (Artificial Intelligence) to turn aerial imagery captures into useful insights and information. In this episode, he shares the state-of-the-art technology that is used at Nearmap to capture and process aerial imagery.

What Is The State Of The Art In Aerial Imagery Today?

With over 15 years in the aerial imagery industry, Nearmap has developed some of the best technologies for capturing and processing aerial imagery. A quick look at the technology they use gives clues to the latest developments in the world of aerial imagery.

Nearmap has adopted a highly automated and systematic approach to aerial imagery, most recently with the launch of its state-of-the-art HyperCamera3 (HC3) in 2022.

Designed as an upgrade in specifications, the HC3 is efficient and captures high-quality imagery at a larger scale without increasing costs. From the imagery, one can clearly identify corrugated roofs and even see the grid patterns in solar panels. But HC3 imagery is not limited to RGB channels; the camera also captures NIR (near infrared). With this addition, NIR-based calculations such as NDVI can be leveraged not only to recognize the presence of vegetation in the imagery but also to assess the vegetation's health.
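To make the NDVI point concrete, here is a minimal sketch of the calculation with NumPy. The four-band layout is an assumption for illustration, not a description of Nearmap's actual pipeline.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Values near +1 suggest healthy vegetation; values near zero or below
    suggest bare ground, water, or built surfaces.
    """
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    # Only divide where the denominator is non-zero, to avoid warnings.
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Hypothetical 4-band capture (R, G, B, NIR) stored as a (4, H, W) array.
bands = np.random.randint(0, 256, size=(4, 512, 512))
vegetation_index = ndvi(red=bands[0], nir=bands[3])
```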

What Is The Importance Of AI In Aerial Imagery?

With the amount of imagery captured on the rise and the need to process data fast, relying on human labor to identify features in aerial imagery is impractical. Instead, AI models are trained to recognize features in RGB images as a human would, and can do so quickly at scale. Nearmap is now on its 5th-generation AI system, which can recognize over 70 features in HC3 imagery. Built around a deep learning model with about 78 layers, it is an elegant, state-of-the-art, single-model, many-outputs solution.
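A "single model, many outputs" design usually means one shared backbone with a lightweight prediction head per feature. The PyTorch sketch below illustrates that pattern only; the layer sizes and feature names are placeholders, not Nearmap's actual architecture.

```python
import torch
import torch.nn as nn

class MultiOutputSegmenter(nn.Module):
    """Shared encoder, one segmentation head per feature class."""

    def __init__(self, feature_names, in_channels=4):  # RGB + NIR
        super().__init__()
        # Stand-in backbone; a production system would be far deeper.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # One lightweight head per output, all sharing the same features.
        self.heads = nn.ModuleDict({
            name: nn.Conv2d(64, 1, kernel_size=1) for name in feature_names
        })

    def forward(self, x):
        features = self.backbone(x)
        return {name: torch.sigmoid(head(features))
                for name, head in self.heads.items()}

model = MultiOutputSegmenter(["roof", "solar_panel", "vegetation", "pool"])
masks = model(torch.randn(1, 4, 256, 256))  # one mask per feature
```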

The Importance of Accurate Labelling When Training Models

Models are trained to consistently recognize what a human can recognize in an RGB image, and the state of the art is that human-level performance is reachable with the right labelling and clean data. Even when automated methods are used to generate training labels, the results should still be verified by human experts, because label errors are a problem not only for model training but also for reporting how accurate the model really is.
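One practical safeguard is to audit automatically generated labels against a human-verified subset before training on them or quoting accuracy figures. A minimal sketch, assuming binary labels and scikit-learn:

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical labels for the same sample: automated vs. expert-verified.
auto_labels = np.random.randint(0, 2, size=10_000)
expert_labels = np.random.randint(0, 2, size=10_000)

agreement = accuracy_score(expert_labels, auto_labels)
kappa = cohen_kappa_score(expert_labels, auto_labels)  # corrects for chance

# If agreement is low, the automated labels will mislead both the
# training and any accuracy figures reported against them.
if agreement < 0.95:
    print(f"Label audit failed: {agreement:.1%} agreement (kappa={kappa:.2f})")
```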

What Is The Business Model In Aerial Imagery?

Consumers of aerial imagery can be broadly split into two groups: those who consume raw imagery directly and those who are more interested in final map products. "Imagery as a service" customers use aerial imagery directly in a map browser tool or use APIs to pull the imagery into their own custom applications. On the other hand, some consumers prefer a more finished product, either because they lack the in-house capability to handle raw imagery, they don't have the time to process it themselves, or it is more cost-effective for them to consume a final product. "Vector maps as a service" serves these customers with final map products that have been prepared for them.
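On the "imagery as a service" side, the typical integration is a simple HTTP request against the provider's imagery API. The endpoint, parameters, and key below are entirely hypothetical; a real integration would follow the provider's published API documentation.

```python
import requests

# Hypothetical imagery endpoint; real providers publish their own URL schemes.
API_URL = "https://api.example.com/imagery"
params = {
    "bbox": "-122.45,37.76,-122.43,37.78",  # lon/lat bounding box
    "date": "latest",
    "format": "geotiff",
    "apikey": "YOUR_KEY",
}

response = requests.get(API_URL, params=params, timeout=60)
response.raise_for_status()
with open("capture.tif", "wb") as f:
    f.write(response.content)
```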

Data Fusion and Its Challenges in Machine Learning Datasets

Fusing datasets from different sources is a common practice among providers of machine learning datasets. But it becomes a problem when trying to verify the truthfulness of the results by comparing the imagery to another source of truth, e.g. a satellite image of the same location.

Since the data is a mix of different sources, there is often no way to go back through the stack and prove where the data actually came from. With growing concerns about how easily fake imagery can be manufactured, this remains a big challenge for companies that rely on multiple sources for their machine learning datasets. Additionally, the quality of the final results may vary depending on errors in any of the sources being relied on.
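One minimal defence is to carry provenance metadata alongside every fused layer, so a result can be traced back to its raw inputs. A sketch of one possible record structure; the field names are assumptions, not an industry standard:

```python
from dataclasses import dataclass, field
from datetime import date
import hashlib

def file_checksum(path: str) -> str:
    """Hash of the raw file, so later tampering is detectable."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

@dataclass
class SourceRecord:
    """Provenance for one input layer in a fused dataset."""
    provider: str
    capture_date: date
    checksum: str

@dataclass
class FusedSample:
    """A sample plus the lineage of every source it was built from."""
    sample_id: str
    sources: list[SourceRecord] = field(default_factory=list)
```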

What Is The Importance Of Post Disaster AI?

Post-catastrophe imagery is captured just after a disaster such as a fire or a hurricane. The imagery is then quickly processed by AI models to identify the various types of damage that may have occurred. Post-disaster AI is powerful in supporting rapid response operations by governments and charity organizations. Insurers can also leverage the results to anticipate where claims are likely to come from.

Post-disaster AI is not only useful in post-disaster response but can feed predictive models as well. Comparing post-catastrophe imagery of an area with imagery from before the disaster reveals how different things were affected. That information can then be used to predict what may happen if a similar disaster hits another area.
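At its simplest, this comparison is change detection between pre- and post-event model outputs. A hedged sketch over two per-pixel classification rasters; the class codes are invented for illustration:

```python
import numpy as np

BUILDING = 1  # hypothetical class codes in the AI model's output raster

def damaged_building_mask(pre: np.ndarray, post: np.ndarray) -> np.ndarray:
    """Pixels classified as building before the event but not after."""
    return (pre == BUILDING) & (post != BUILDING)

pre_event = np.random.randint(0, 8, size=(1024, 1024))
post_event = np.random.randint(0, 8, size=(1024, 1024))

mask = damaged_building_mask(pre_event, post_event)
print(f"Potentially damaged building area: {mask.mean():.1%} of pixels")
```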

How to Promote Transparency in Machine Learning Performance

Models perform differently on different datasets. One of the most demanding tests for a machine learning provider is running a model on an arbitrary, customer-chosen location. Cherry-picking a model's best results to show a customer can easily mislead them into inferring that the model performs that way everywhere, which is not the case.

To get a clearer picture of a model's performance on a certain dataset, it is important to use samples that are meaningful for the particular use case. For instance, if the customer is a local government, use samples from within that local government area. Choosing random locations within an area of interest promotes transparency about how the model can be expected to perform on that dataset.
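Random sampling inside the customer's own area of interest is straightforward to implement. A minimal sketch with Shapely, using a placeholder polygon in place of a real boundary:

```python
import random
from shapely.geometry import Point, Polygon

def random_points_in_aoi(aoi: Polygon, n: int, seed: int = 42) -> list:
    """Rejection-sample n uniform random points inside the polygon."""
    rng = random.Random(seed)  # fixed seed: the sample is reproducible/auditable
    minx, miny, maxx, maxy = aoi.bounds
    points = []
    while len(points) < n:
        p = Point(rng.uniform(minx, maxx), rng.uniform(miny, maxy))
        if aoi.contains(p):
            points.append(p)
    return points

# Placeholder local-government boundary; in practice, load the real one.
aoi = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
test_sites = random_points_in_aoi(aoi, n=20)
```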

The Benefit of Long-Term Technology Development

Building bespoke solutions for customer problems is effective for meeting specific needs. But problems arise when you later want to solve a similar yet different problem: in most instances, you have to rebuild the whole solution to suit it.

It is more beneficial to adopt a long-term technology development approach, one that builds a foundation for efficiently providing a given kind of solution at scale. With that foundation in place, you can stack in new layers to serve all sorts of different customers without having to rebuild the whole stack.
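In code terms, the "foundation plus layers" idea often looks like a shared core pipeline with thin products stacked on top. A very rough sketch of the pattern; the class names and outputs are illustrative, not Nearmap's architecture:

```python
from abc import ABC, abstractmethod

class Foundation:
    """Shared pipeline: the expensive, reusable part of the stack."""
    def process(self, raw_capture: dict) -> dict:
        # Stand-in for ingest, rectification, and core model inference.
        return {"roof_area_m2": raw_capture.get("roof_area_m2", 0.0),
                "tree_cover_pct": raw_capture.get("tree_cover_pct", 0.0)}

class ProductLayer(ABC):
    """A thin product stacked on the foundation's outputs."""
    @abstractmethod
    def build(self, core_outputs: dict) -> str: ...

class RoofReport(ProductLayer):
    def build(self, core_outputs: dict) -> str:
        return f"Roof area: {core_outputs['roof_area_m2']} m2"

class VegetationReport(ProductLayer):
    def build(self, core_outputs: dict) -> str:
        return f"Tree cover: {core_outputs['tree_cover_pct']}%"

# New customers get new layers; the foundation is never rebuilt.
outputs = Foundation().process({"roof_area_m2": 182.0, "tree_cover_pct": 34.0})
for layer in (RoofReport(), VegetationReport()):
    print(layer.build(outputs))
```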


About the Author
I'm Daniel O'Donohue, the voice and creator behind The MapScaping Podcast (a podcast for the geospatial community). With a professional background as a geospatial specialist, I've spent years harnessing the power of spatial data to unravel the complexities of our world, one layer at a time.