
Computer Vision and GeoAI

Computer Vision Vs. AI In Earth Observation

In this episode, the discussion aims to build a clearer understanding of the differences between computer vision and the AI used in the Earth observation world.

About the Guest

Jordi Inglada is the Senior Expert Scientist for Machine Learning and Earth Observation at the French Space Agency (CNES). His main responsibility involves developing algorithms to extract useful information from satellite image time series for applications such as land cover mapping and geophysical variable extraction.

Jordi's background in signal processing, image processing, and physical modeling, coupled with over 20 years of work experience, uniquely equips him to apply machine learning to Earth observation.

Computer Vision Vs. AI in Earth Observation

Computer vision and AI in Earth observation might seem similar, but they carry essential differences. In computer vision, the goal is semantic interpretation – recognizing faces or objects in an image using machine learning algorithms.

The accurate location of the pixels in space or data in time is not essential. Conversely, AI in Earth observation hinges on physical measures accurately located in space and time.

In essence, computer vision is often data-driven, leaning heavily on learning from the data to interpret and understand images. On the other hand, Earth observation is driven by a deep understanding of physical characteristics.

For instance, Earth observation uses established models and equations describing light interaction with vegetation or light’s travel through the atmosphere. This knowledge, combined with an understanding of the sensor’s operations, allows researchers to extract accurate information from the images of observed surfaces.

There is also a difference in reflectance measurement in computer vision and in Earth observation. While RGB imagery in computer vision involves a form of light reflectance measurement, the focus is not on the accurate measurement of the level of light across the three channels. 

On the contrary, Earth observation requires more accurate light measurements across various spectral bands. Remote sensing satellites like Landsat or Sentinel-2 provide precise measurements not only in the RGB spectrum but in the infrared as well. Obtaining accurate measures from these bands is necessary for the interpretation of physical phenomena occurring on the Earth’s surface.
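As a minimal sketch of why those calibrated bands matter, the widely used NDVI combines red and near-infrared reflectance to characterize vegetation. The reflectance values below are illustrative, not real measurements; the band names in the comment assume Sentinel-2's conventions.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and near-infrared
    surface reflectance (e.g. Sentinel-2 bands B4 and B8)."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    # Small epsilon avoids division by zero on dark pixels.
    return (nir - red) / (nir + red + 1e-10)

# Healthy vegetation absorbs red light and reflects strongly in the NIR,
# so it yields NDVI values near 1; bare soil sits much closer to 0.
vegetation = ndvi(0.05, 0.45)
soil = ndvi(0.20, 0.25)
```

An index like this only makes physical sense if the per-band reflectance is accurately measured, which is exactly the calibration requirement that separates Earth observation data from ordinary RGB imagery.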

Is It Possible to Use Computer Vision Algorithms in Earth Observation?

Although AI applications in Earth observation borrow from computer vision technology, the transition from computer vision to Earth observation is not as straightforward as it might seem. While some applications, such as identifying cars in a parking lot or mapping buildings in a city, might find success with traditional computer vision algorithms, those algorithms might not serve the more intricate applications of remote sensing.

In remote sensing, the focus is often on monitoring physical magnitudes that are not directly observable, such as forest biomass for climate studies. These magnitudes can only be inferred through proxies such as reflectance in the infrared spectrum or radar images in the microwave spectrum, and physical models are required to connect the target magnitude to what the sensors observe.

It is also difficult to apply computer vision algorithms to remote sensing due to a difference in time series. In computer vision, time series data are typically in the form of video content, rich with numerous timestamps. In contrast, remote sensing operates on a different, less dense timescale, with data capture events typically happening daily, weekly, or perhaps every ten days. This difference in scale necessitates the development of bespoke algorithms to effectively handle remote sensing data.
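One common way to cope with that sparse, irregular timescale is to resample the acquisitions onto a regular grid before feeding them to a model. The NDVI values and acquisition days below are invented for illustration; real series would come from actual satellite passes with cloud-induced gaps.

```python
import numpy as np

# Simulated sparse NDVI time series: one cloud-free acquisition roughly
# every 10-20 days over a growing season (day-of-year, NDVI value).
days = np.array([10, 20, 30, 50, 70, 90, 110])
values = np.array([0.2, 0.25, 0.4, 0.7, 0.8, 0.6, 0.3])

# Resample to a regular daily grid by linear interpolation so that
# downstream algorithms see a gap-free, evenly spaced series.
daily_grid = np.arange(10, 111)
daily_ndvi = np.interp(daily_grid, days, values)
```

Video-oriented computer vision models assume dense, evenly spaced frames; preprocessing like this (or bespoke architectures that handle irregular sampling directly) is what makes satellite time series tractable.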

Here is a table illustrating some of the key differences between traditional computer vision and computer vision applied to Earth observation data:

| Aspect | Traditional Computer Vision | Computer Vision in Earth Observation |
|---|---|---|
| Primary Goal | Recognizing patterns, objects, and faces in images. | Interpreting physical measures accurately located in space and time. |
| Data Type | Typically uses RGB images. | Uses multi-spectral and hyper-spectral imagery. |
| Focus | Semantic interpretation. | Deep understanding of physical characteristics. |
| Reflectance Measurement | Not focused on accurate measurement of light across channels. | Requires accurate light measurements across various spectral bands. |
| Models | Primarily data-driven; learns from the data to interpret and understand images. | Utilizes established models and equations describing physical properties and interactions, such as light interaction with vegetation or light’s travel through the atmosphere. |
| Time Series Analysis | Typically involves video content with many timestamps. | Works on a less dense timescale, with data capture events typically happening daily, weekly, or perhaps every ten days. |
| Application of Super Resolution | Can be used to enhance image details or generate new, plausible images. | Utilized to enhance image quality, but not to generate information that the sensor has not captured. |
| Use of Deep Learning | Widely used for various tasks like image recognition, object detection, etc. | Despite its potential, use is limited due to the high costs and resources required. |
| Image Generation | Techniques like GANs are used to generate new, realistic images. | The objective is to faithfully represent a specific geographical location at a particular point in time, not to generate new, plausible images. |

Super Resolution Vs. GANs

Super-resolution is a technique used in remote sensing to enhance image quality. It draws from correlations within the data, spatial or temporal, to generate high-resolution content. However, while super-resolution algorithms can create highly detailed, realistic images, the goal is not to generate information that the sensor has not captured. 

This emphasizes a fundamental difference with computer vision applications, where the goal of techniques such as GANs (Generative Adversarial Networks) is to generate new, realistic images. In remote sensing, the objective is to faithfully represent a specific geographical location at a particular point in time, not to generate new, plausible images.

Here is a table illustrating some of the key differences between Super Resolution and Generative Adversarial Networks (GANs):

| Aspect | Super Resolution | GANs |
|---|---|---|
| Primary Purpose | Enhancing the quality of an image to reveal finer details. | Generating new, plausible images or data from existing data. |
| Methodology | Uses correlations within the data to generate high-resolution content. | Uses two neural networks (the generator and the discriminator) in competition with each other to generate new data. |
| Data Requirement | Requires a single low-resolution input image (single-image SR) or multiple aligned low-resolution images (multi-frame SR). | Requires a larger dataset to train both the generator and discriminator. |
| Output | A higher-resolution version of the input image. | New, realistic data (images, text, etc.) that are not present in the original dataset but resemble it. |
| Applications | Used to enhance image quality in remote sensing, medical imaging, video streaming, etc. | Used in image synthesis, text-to-image translation, data augmentation, style transfer, etc. |
| Limitations | May introduce artifacts or inaccuracies if the low-resolution input lacks certain details. | Can be difficult to train due to the challenge of balancing the generator and discriminator, and can sometimes produce unrealistic results. |
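To make the "enhance without inventing" constraint concrete, here is a deliberately simple upsampling sketch using plain bilinear interpolation. Real super-resolution methods use learned priors or multiple acquisitions and recover far more detail; this toy version is chosen precisely because its output is guaranteed to stay within the radiometry the sensor actually measured.

```python
import numpy as np

def upsample_bilinear(img, factor):
    """Upsample a 2-D image by an integer factor with bilinear
    interpolation. Every output value is a weighted average of measured
    pixels, so no radiometry is invented beyond what the sensor captured."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

img = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "scene"
upsampled = upsample_bilinear(img, 2)           # 8x8 output
```

A GAN-based upsampler, by contrast, is free to hallucinate plausible high-frequency texture, which is acceptable for image synthesis but dangerous when the image must faithfully represent a real location at a real time.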

Deep Learning: Promise Vs. Practicality

Deep learning has been hailed as the golden solution for turning unstructured data into meaningful knowledge. But despite this, many operational systems in Earth Observation today rely more on traditional machine learning for tasks such as land cover classifications. A major reason arises from the high costs associated with deep learning. 

Training deep learning algorithms is computationally expensive and requires large volumes of data. In many instances, the resources needed to deploy such algorithms often outweigh their benefits – especially when compared to less resource-intensive alternatives such as random forests.
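A sketch of that less resource-intensive alternative: a random forest classifying land cover from per-pixel spectral features. The two "classes" and their reflectance statistics below are entirely synthetic; in an operational system the features would be band reflectances (often whole time series per pixel) and the labels would come from ground-truth land-cover data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Two synthetic classes described by (red, NIR) reflectance:
# "vegetation" (low red, high NIR) and "bare soil" (moderate in both).
X = np.vstack([
    rng.normal([0.05, 0.45], 0.05, (n, 2)),  # vegetation pixels
    rng.normal([0.20, 0.25], 0.05, (n, 2)),  # bare-soil pixels
])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Training this takes a fraction of a second on a laptop and needs no GPU, which illustrates why random forests remain attractive for operational land-cover mapping when their accuracy is close enough to deep models.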

Synthetic Data as Training Data

One application of GANs is the creation of synthetic data to train deep learning algorithms in remote sensing. Given the vast amounts of data required to train these algorithms, synthetic data generation can be a helpful approach.

Synthetic data may help fill gaps for phenomena that are not fully understood or lack physical models. Here, GANs can produce realistic images that help train and calibrate models. But in cases where extensive prior knowledge and physical models exist, such as vegetation growth or deforestation, synthetic data may not be necessary. 

Remote sensing has long employed physical models to generate images and understand sensor behavior. If the physics is known and the necessary information is available, synthetic images may not hold significant additional value.

Vectors in Machine Learning

In machine learning, feature vectors are produced by transforming the spectral bands of the input data into a different representation that retains the meaningful information from the original images. As a summarized dataset, these vectors can easily be consumed by downstream machine learning models.

In many ways, feature embeddings can be likened to Principal Component Analysis (PCA) which simplifies data by projecting it into a different space, with each component being orthogonal to the others and holding maximum variance. However, feature embeddings go beyond PCA by taking into account additional correlations and complex information in the data – which PCA might miss.
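The PCA analogy can be made concrete with a few lines of linear algebra: center the per-pixel spectral vectors, diagonalize their covariance, and project onto the orthogonal axes of maximum variance. The 500-pixel, 4-band dataset here is random placeholder data, not real imagery.

```python
import numpy as np

rng = np.random.default_rng(0)
bands = rng.random((500, 4))              # 500 pixels x 4 spectral bands

# Center the data, then diagonalize the band-to-band covariance matrix.
centered = bands - bands.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order

# Sort components by explained variance (largest first) and project.
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order]
scores = centered @ components            # pixels in the new orthogonal space
```

A learned embedding plays the same role as `scores` (a compact representation per pixel), but because it comes from a nonlinear model it can encode correlations and structure that this purely linear projection cannot.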

Is Coding Knowledge Necessary in Earth Observation?

A future where professionals in Earth Observation cannot create their own software seems unlikely. The landscape of data science can be visualized as a Venn diagram with three crucial components: computer science (coding, software engineering), applied math (statistics, AI), and domain expertise. To succeed in Earth observation, it’s beneficial to have knowledge in each of these areas. 

Even if one’s primary expertise lies in the geosciences, it is beneficial to learn to code in order to implement ideas. Conversely, someone who comes from computer vision and understands statistics and AI should also engage with the physics of the phenomena they are observing.

For those starting out, Python is highly recommended as the language of choice. With a multitude of libraries available for remote sensing, as well as AI and statistics, Python serves as a versatile and powerful tool in the field.

Unexplored Potential: Higher Resolution Time Series

High-resolution time series, despite their immense wealth of information, are not fully exploited. Programs like Landsat and the Sentinels provide extensive data, sometimes at a revisit frequency of five or six days. However, often only a few data points are used for applications like mapping, detection, or variable estimation.

This is often due to the sheer size of the datasets. For instance, monitoring agricultural fields over an agricultural season with high-frequency, high-resolution data can quickly amass massive datasets. The resources required to download, process, and extract valuable information from such data can be extensive and expensive.

Luckily, platforms like Google Earth Engine, Microsoft’s Planetary Computer, or Sentinel Hub offer interesting possibilities, providing access to these massive datasets along with the computational resources to process them. These platforms present an opportunity to conduct studies using long-time series data, which were previously limited due to data and computational constraints.

Sponsored by Sinergise, as part of Copernicus Data Space Ecosystem knowledge sharing. http://dataspace.copernicus.eu/

Related Podcast Episodes

Super Resolution

Fake Satellite Imagery

Sentinel Hub

Google Earth Engine 

Microsoft's Planetary Computer

About the Author
I'm Daniel O'Donohue, the voice and creator behind The MapScaping Podcast (a podcast for the geospatial community). With a professional background as a geospatial specialist, I've spent years harnessing the power of spatial data to unravel the complexities of our world, one layer at a time.