Introduction to Synthetic-aperture Radar (SAR)

Eric Jensen is the president of ICEYE — they build and operate their own commercial constellation of satellites.

Eric’s background is in mechanical engineering. He spent the last decade working on everything from crew capsules for NASA, to reusable space planes and satellites.

His remote sensing awakening happened several years ago. He realized how wonderful it is to build “space things” that have such an immense and direct impact on human advancement.

WHAT IS SAR?

Radar, the technology that underpins synthetic aperture radar (SAR), was developed in secret during World War II.

It was used to detect objects using rangefinder techniques by transmitting pulses of microwaves and waiting for them to bounce off something.

They measured the time between when the pulse was sent and when it was received and calculated the distance from the transmitter to whatever the pulse bounced off.

If you stand at the top of a well and yell “Hello” down into it, then count one-two-three seconds, you’ll hear a “Hello” back at you.

Now you know the time it took for your voice to travel down the well, bounce off whatever’s at the bottom, and come back to you at the top. And because you know that time, and you know that the speed of sound multiplied by time equals distance (halved, since the sound travels down and back), you can figure out how deep the well is.
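
To make the arithmetic concrete, here is a minimal sketch of that echo-ranging calculation, assuming the speed of sound in air is roughly 343 m/s (the three-second figure is just the example above):

```python
# Echo ranging: estimate depth from a round-trip echo time.
# Assumes the speed of sound in air is roughly 343 m/s; the travel time is
# halved because the sound goes down the well and back up.
speed_of_sound = 343.0   # metres per second
round_trip_time = 3.0    # seconds, as in the example above

well_depth = speed_of_sound * (round_trip_time / 2)
print(f"Estimated well depth: {well_depth:.0f} m")  # about 514 m
```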

The same principle applies to radar, in any domain, and also from space.

Back in World War II, the Royal Air Force was developing radar techniques to figure out if they could look across the English Channel and “sense” when some adversarial thing was heading in their direction.

They’d send out a signal and have it bounce off a boat or an aircraft. The reflection travelled back to the transmitting station on the British side, and the time of flight gave the distance between the British coast and the thing coming towards them.

The idea was to create an advanced warning system that would aid in preparedness for an oncoming attack.

Many SAR people know the story of RAF Air Marshal Tedder (1890 to 1967). When the Brits were first developing this system, their first test failed in front of the Air Marshal, who was the senior officer in charge. His famous quote about this radar was,

“The absolute ingenuity of this idea almost blinds one to its worthlessness.”

And so SAR almost died on the vine a long time ago.

All light — visible and invisible to our human eyes — comprises electromagnetic waves. The electromagnetic spectrum defines the type of light available in the universe based on frequency and wavelength characteristics.

Visible light, or the rainbow of light we see, sits between the infrared and ultraviolet portions of the electromagnetic spectrum. When we see things, it’s light coming into our retinas with wavelengths on the order of hundreds of nanometers; roughly 400 to 700 nanometers define the different colors of the light.

Radar sits in the frequency and wavelength regime characterized by microwaves, with wavelengths much longer than visible light, on the order of centimeters.

That’s just the basics of where you’d find radar and why it’s different from optical — as we call it in the GIS community.

SO WHY IS THE TITLE “SYNTHETIC APERTURE” ADDED TO THE RADAR? WHY CAN’T WE JUST USE A RADAR?

That really gets down to what needs to be done with this instrument.

A typical term we hear across the community is resolution. In layperson’s terms, resolution defines the image’s quality or how clear it is.

When you have a radar instrument, the radar data’s spatial resolution is directly related to the sensor’s wavelength and the length of its antenna.

Suppose you have a single C band radar, which operates at around five centimeters in wavelength. To get a 10-meter resolution picture on the ground (which means you can distinguish something on the ground that’s 10 meters from something else), you’d need a roughly 4-km-long antenna.

It’d be too massive to be structurally sound and impractical to fly up and deploy in space.
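
As a rough sanity check of that claim, here is a minimal sketch using the standard real-aperture azimuth-resolution approximation (resolution ≈ wavelength × range ÷ antenna length), with an assumed slant range of about 800 km for a low Earth orbit; the exact numbers depend on the orbit and geometry:

```python
# Real-aperture azimuth resolution: resolution ~= wavelength * range / antenna_length.
# Rearranged to find the antenna length needed for a target ground resolution.
wavelength = 0.05          # m, C band is roughly 5 cm
slant_range = 800_000.0    # m, an assumed low-Earth-orbit slant range
target_resolution = 10.0   # m

antenna_length = wavelength * slant_range / target_resolution
print(f"Required real-aperture antenna: {antenna_length / 1000:.0f} km")  # ~4 km
```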

The moniker synthetic or synthetic aperture is used because we can create an effective sensor that operates like a very long antenna but is much smaller.

Imagine sending one pulse out from a super long antenna. That pulse goes down to the ground, reflects off a car, and goes back to this super long antenna, which catches the reflections bouncing off the car at different angles.

We can process that product and make an image of it.

Super long antennas are not practical to fly in space. Space-based antennas send many pulses in rapid succession and then collect those reflected microwaves from various angles at the various times they bounce off an object.

Instead of one long antenna that sends one big pulse at one time and then collects the pulses that come back, you have a much smaller antenna that sends lots of pulses in quick succession over time as the satellite goes through space.

It “listens” to the pulses that come back to it when it moves through its orbit.

That’s why the name synthetic is applied to the radar.
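
A commonly cited stripmap SAR rule of thumb (not stated in the interview, and glossing over processing details) is that once the aperture is synthesized, the best achievable azimuth resolution is roughly half the physical antenna length, independent of range:

```python
# Stripmap SAR rule of thumb: best azimuth resolution ~= physical antenna length / 2,
# independent of range. The antenna length here is an assumed, illustrative value.
antenna_length = 3.2   # m, a hypothetical small-satellite SAR antenna
azimuth_resolution = antenna_length / 2
print(f"Theoretical stripmap azimuth resolution: {azimuth_resolution:.1f} m")  # 1.6 m
```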

IS SAR AN ACTIVE OR PASSIVE SENSOR?

SAR is an active sensor because it has its own power source. It uses that power to send out pulses of microwaves, then collects them when they bounce back from whatever they’ve reflected off.

A camera, on the other hand, is a passive sensor. It focuses incoming light onto a detector array with millions of tiny pixels on a 2D grid. It captures what’s visible in the direction we’re looking. The visible light reaches it because ambient light from the Sun illuminates the Earth.

SAR is different because it sends its own signals.

WHAT CAN YOU DO WITH SAR?

SAR is excellent at measuring distance.

It’s a range-finding tool.

That’s it.

Using various frequency bands in the microwave portion of the electromagnetic spectrum allows us to detect “different things” out there and measure distance based on the differences in frequency and wavelength.

L band, S band, C band, and X band are all microwaves but with different frequencies and wavelengths. Our ability to determine things like roughness or surface properties comes down to the distance characteristics measured by each one of those bands.
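
For orientation, here is a small sketch listing the approximate wavelength ranges usually quoted for these microwave bands (rounded values, intended only as a rough reference, not a spec):

```python
# Approximate radar band wavelength ranges in centimetres (rounded, for orientation only).
RADAR_BANDS_CM = {
    "Ka":  (0.75, 1.1),    # ~27-40 GHz; high resolution, easily absorbed by moisture
    "X":   (2.5, 3.75),    # ~8-12 GHz
    "C":   (3.75, 7.5),    # ~4-8 GHz
    "S":   (7.5, 15.0),    # ~2-4 GHz
    "L":   (15.0, 30.0),   # ~1-2 GHz; penetrates vegetation
    "UHF": (30.0, 100.0),  # ~0.3-1 GHz; penetrates vegetation and some soil
}

for band, (short_cm, long_cm) in RADAR_BANDS_CM.items():
    print(f"{band:>3} band: {short_cm}-{long_cm} cm")
```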

The higher the frequency, the higher the resolution potential for an image, but the worse the penetration power.

For example, Ka band is used in communication systems. It’s also a microwave. It’s rarely used for SAR because, while its very high frequency might offer great resolution, it’s easily absorbed by water moisture in clouds.

If you had a SAR sensor operating in Ka band from space, you could get high resolution when clouds weren’t present.

But if you had immense cloud cover, your signals would be attenuated by it, and you wouldn’t be able to get the signals to reach their target or get them back from where they’re reflected.

Bands with lower frequencies and longer wavelengths, like X band, L band, or even UHF band, offer a desirable balance: you can still get good resolution pictures, and the signals penetrate through things like clouds.

L band and UHF band can penetrate vegetation. They can tell us things about biomass or moisture in the soil. All these insights derive from the basic principle of SAR, which is the distance between where I am and where something else is.

We can do that with a building, or with water molecules inside or just underneath Earth’s surface. It’s only a matter of how sensitive your sensors are and which bands you use.

SAR is excellent at measuring the surface of the Earth from space with millimeter precision.

CAN WE MAKE ACCURATE DIGITAL ELEVATION MODELS?

SAR naturally lends itself to creating digital elevation models.

What do we need today to create an elevation model from electro-optical imagery?

You need to collect images of the same target from multiple angles, ideally around the same time of year or season, and fuse them together to create something that looks like a 3D product.

Many companies already do that efficiently, creating incredible derived datasets and products.

The cool thing about SAR is that because these systems can collect imagery day and night and in any weather, they naturally lend themselves to what’s called coherent collections.

You can have a spacecraft collecting an image over the same area of interest from the same point in space, so the same imaging geometry is repeated again and again.

SAR can create interferograms rapidly. You can quickly discern the difference between two sequential pictures with advanced tools.
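
As a minimal sketch of why interferograms are so sensitive, the standard repeat-pass relation maps a phase difference between two acquisitions to a line-of-sight displacement of roughly wavelength × phase ÷ 4π (the wavelength and phase values below are illustrative, and sign conventions vary between processors):

```python
import math

# Repeat-pass InSAR: interferometric phase difference -> line-of-sight displacement.
# displacement ~= wavelength * delta_phase / (4 * pi); sign conventions vary by processor.
wavelength = 0.031          # m, an assumed X-band sensor (~3.1 cm)
delta_phase = math.pi / 2   # radians of phase change between the two acquisitions

displacement_m = wavelength * delta_phase / (4 * math.pi)
print(f"Line-of-sight displacement: {displacement_m * 1000:.1f} mm")  # ~3.9 mm
```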

Modern systems flying in low Earth orbit can do this type of collection to create an interferometric product, and soon they’ll be able to do it multiple times per day.

That is something that’s never been done before from commercial systems in space at that kind of cadence.

As a result, distances and elevation models can be created and updated rapidly based on the change detected on a daily or even an hourly basis.

SAR is uniquely positioned to do that because it can collect imagery at any time through any weather.

Other systems have to wait for a perfect shot with no cloud cover. It might be weeks before that happens. And when it does, you have to make sure that the orbital parameters are such that the collection’s geometry is precisely how it was before.

In terms of where machine-to-machine interfacing might go, this entire field of study around interferometry will be more vibrant because of a proliferation of SAR systems that provide efficient, commercialized data.

HOW DOES SAR PENETRATE THROUGH CLOUDS?

That really is what’s different about SAR and the images formed by synthetic aperture radar instruments versus images collected by passive optical or electro-optical sensors.

When you have a passive sensor, you can only take a picture of what you see — what you see with your eyes is a good analogy. If you’re up in space looking down, about 60% of the Earth is covered in clouds at any given time. The places you really care about — where people live — can be obscured as much as 80% of the time by clouds, smoke, or volcanic ash.

They hinder your ability to collect images over certain areas of interest.

SAR is an active sensor. Given the choice of a certain band employed with a certain frequency and wavelength characteristic, it can penetrate clouds.

When you form an image, the microwaves go through the clouds, bounce off things on Earth, and are detected by the sensor with complete disregard for the clouds they pass through.

That is a unique characteristic of SAR.

And that’s not the only thing.

SAR can also collect imagery at night. You don’t need any background illumination source for an active sensor. A system of SAR satellites can form imagery day and night and in any weather.

WHAT HAPPENS IF A LOT OF SAR SENSORS ARE WORKING IN THE SAME GEOGRAPHIC AREA?

There’s a saturation point to all this.

Imagine you have a hundred SAR sensors pointed at the same thing from the same location, all operating in the exact same frequency regime, say C band. There is no simple way to tell the difference between the pulses coming back.

Which C band pulse? At which time? Whose is being collected at what time?

That’s what we typically call interference. It happens all the time in telecommunication systems where you create a link between two points. There are many terminals on the ground looking back up at some spacecraft in geostationary orbit trying to connect to it and get a signal.

Sometimes, if you have a high density of those terminals on the ground, you can’t get the signal easily — you get poor-quality, garbled pictures, or you can’t find the right channels for your satellite dish.

Early communication systems dealt with exactly that type of problem, and things have gotten much better over time.

The same applies to new microwave-based sensors or RF-type systems with signals sent right on top of each other. The same sensors try to receive those packets, and if they’re co-located, it can be problematic.

That’s why organizations like the ITU (International Telecommunication Union) or the FCC (Federal Communications Commission) help regulate and promote harmony in the ecosystem to de-conflict, or at least plan not to have those interferences.

WHAT ABOUT RETURN TIMES, ORBITS, AND THE HEIGHT OF THE SATELLITE?

They’re accounted for in the basic operating charter of organizations like the FCC, the ITU, and similar bodies worldwide.

They grant operating licenses to companies that build and fly communication systems or remote sensing systems.

Everyone needs a license to operate in a specific country for certain frequency regimes — both for the sensor and for the radio frequency used to communicate between the ground and the spacecraft. These organizations keep a giant database of which satellites operate in which frequency regimes and what they do.

Where are they flying? What altitude? What orbits? What inclination?

When you submit your plans to get a license, these governing bodies can help de-conflict by saying, “We know you want to do this, but this other company already operates in this specific space. Either talk to them, or we’ll help you figure out a way to change your system such that it can operate without interference.”

There is also an international governing body helping adjudicate between countries. But each country has its own unique protocols and rules. Depending on the number of systems, the innovation, and the industry’s vibrancy, they may have more or fewer restrictions in place for who can operate, where they can downlink data to, and the essential things to be very mindful of.

For example, take an airport like London Heathrow. The British satellite licensing authority ensures that people operating satellites that might send signals down over the UK do so in a manner that does not interfere with what’s going on at the airport.

These organizations and insights are necessary for societal interaction and greasing the wheels of economic efficiency. We wouldn’t otherwise know what satellite systems are up there and which ones are planned.

WHAT ARE SOME USE CASES FOR SAR? EARTHQUAKES? FIRES?

The real challenge with fires, like with anything else, is responsiveness.

How quickly can you get assets up in the air, on target, and figure out what to do about this thing?

SAR instruments can be used and operated in frequency wavelength regimes to form imagery through smoke, clouds, and even ash.

SAR can collect imagery over a fire and inform teams on the ground of the extent of it.

On top of your typical data source, you layer on SAR data to see through the smoke and down to where it’s burning. It can determine the difference between foliage and trees that have burned and those that haven’t, based on the way they look in the image.

This is analogous to how you take a picture with an optical sensor, but the SAR data tells the difference in distance between the trees that have fallen down or burned versus the trees that are still standing.

That’s how you’d notice where a fire line is.

SO SAR LOOKS AT SHAPES

Exactly.

Take that a step further. Let’s say you’ve collected one image over this given fire. The next image you collect will show you how the fire line progressed from one period to the next.

The shorter the window and the higher the temporal resolution — the frequency with which you made those images over a fire — the better you’ll be able to address where it’s gone and where it’s going in a predictive way.

Hopefully, you can stop the fire much faster.

WHAT ABOUT OIL SPILLS?

It’s possible with some interesting and complex mathematics.

Why are we able to see the difference between oil and water on the surface of the ocean?

In layperson’s terms, it’s because of the different densities. The liquid’s reflectivity is also slightly different — the time it takes for a signal to come back when it hits pure water versus when it hits oil may also be different.

In that case, you “see” the difference between both substances on the surface of the water. SAR is very good at detecting oil spills on the surface.

SAR can look at things day and night, in any weather, and on the surface of the ocean. It can sense if there’s an oil spill or an oil leak, and you can monitor it because it’s typically not moving quickly unless it’s coming from a vessel.

IS THIS NEAR REAL-TIME?

It’s minutes to hours between when an image is collected over an object of interest and when it’s delivered in a usable format to someone.

The exact timing depends on a lot of things.

How many assets are flying and in what orbit? Can they collect over a particular place and with what frequency?

After that, it’s down to the engineering of the signal chain. The data is stored onboard and then downlinked to some ground station, and from there it goes over a ground-based network or through the cloud to the facility or cloud-based processing tool where your processor is.

After some complex mathematics, the raw signals are converted into something that looks like what you and I would call a picture, which can be provided to a customer.

Most systems’ responsiveness is minutes to hours, depending on the use cases, circumstances, and boundary conditions.

WITH REAL-TIME DATA, ARE YOU DELIVERING A REAL-TIME PROBLEM TO SOLVE, OR CAN YOU IMAGINE A WORLD WHERE YOU’RE DELIVERING AN ANSWER?

I think that’s the world we live in today. For users of SAR and electro-optical data worldwide, there are a lot of data sources, which is a good thing.

The question is, are there a lot of answers to hard questions?

We have an excellent ability to provide near real-time insights and answers to certain questions. But there are other, larger questions around environmental monitoring, such as natural catastrophes, that are difficult to answer.

Many teams are working on tools that reduce the latency between when an image is taken and when an answer can be provided.

Imagery processing has developed rapidly in the past several years. In the 90s, you had PhDs doing complex theoretical mathematics with complex numbers, and with SAR it took weeks to complete the calculations for a single image.

With the vibrancy of cloud-based storage and processing, we can now distribute petabytes of data and rapid computational power across the world. We can embed software tools that convert raw data automatically into something that’s rapidly usable as an image.

I’m talking minutes to single-digit hours.

On top of that, teams and sections of the value chain create tools that plug into those data sources and might add a bit of latency to the overall process flow before any answers are provided. They ingest the initial imagery products rapidly and add them to databases for pattern recognition and tool building. They do analysis and spit out something that looks nothing like an image but is a derived product.

The final product tells us something we want to know about our infrastructure or society. The pace of the whole process has sped up a lot over the last several years.

IS THIS GEOSPATIAL 2.0? MACHINES COMMUNICATING WITH MACHINES?

The future is one of many machine-to-machine interfaces across the whole TCPED chain (task, collect, process, exploit to determine what’s going on, and then disseminate).

There are still many humans in the loop, depending on the type of sensor, the company, and the system’s design.

It’s going to be more automated across the board which will help address some major problems, particularly natural catastrophes. You could have an early warning system based on a live feed of electro-optical, SAR, and various other data sources.

It would be resolved through some system and end up as a little light on the desk of someone who has the button to push to deploy a rapid response team to a flood.

Think about every time you do one of those reCAPTCHAs on your phone or your computer when you sign into a website — you identify a walkway, a bicycle, or traffic lights. You’re aiding a neural network in developing an algorithm around recognizing that thing. The things you’re being presented with are too complex for typical machines to determine today.

But because we’ve found some fantastic ways to democratize human-in-the-loop training of algorithms through reCAPTCHA, you’ve now got tools being developed with hundreds of thousands of inputs that have statistical significance to them when aggregated. They can train a machine to recognize a streetlight, a bicycle, or a walkway.

That’s advanced and it’s happening while you sleep.

The same applies in the GIS world.

It’s already here in a lot of different ways. Given the vibrancy of multiple data sources, including the rise of SAR sensors, cost efficiencies, the miniaturization of electronics, and entrepreneurs’ productivity, we’ll see that filtered to help with automated detection and resolution of challenges.

SAR is a unique data set in that mix. It’s rich with data we’ve only scratched the surface of for what we can do on the machine-to-machine side of things.

That’s what I’m looking forward to witnessing and being part of.

CAN YOU FUSE SAR WITH RECAPTCHA OPEN-SOURCE STYLE?

I’m sure a savvy entrepreneur will figure out how to mesh the two together for algorithmic training.

We already do that in a way… reCAPTCHA uses imagery, not aerial or space-based all the time, but it’s like an extension of the remote sensing ecosystem in a bite-size capacity.

ICEYE IS BUILDING ITS OWN SPACE PLATFORM. IS THERE A TREND TOWARDS SINGLE-PURPOSE PLATFORMS OR A MULTI-SENSOR APPROACH?

Depends on the mission.

Super high-resolution imaging sensors combined with communication payloads, combined with something else detecting RF signals, all on a single satellite… does the market bear this type of strategy?

All that governs the set of design decisions that justify a satellite of a particular purpose, size, weight, and power.

The trend is towards smaller satellites with more focused missions built more efficiently and designed with shorter orbital lives.

Technology is asymptotic — the technological refresh cycles we see here on Earth lag in their application to space because the environment is much harsher there.

But in the future, we’ll see more analogues to the smartphone in space.

For example, a platform whose hardware and firmware are updated automatically. It will host apps for multiple things and sensors that provide different phenomenologies or collection and communication capabilities.

All of that will end up more miniaturized and efficient in the future. We’re already seeing competent platforms that are much smaller than the platforms that came before.

ARE SATELLITE PLATFORMS TASK-BASED, OR DO THEY TAKE A REAL-TIME FEED APPROACH?

A task-based approach versus a real-time feed is like a proactive approach versus a reactive one.

A proactive approach is when you know the things you want to look at. You task your system, or schedule the tasking in advance, to look at those things. Maybe you’re interested in deforestation or crop health monitoring.

You know where your soybeans are, they’re unlikely to move, and the cornfields are still where they were yesterday. You look at those as frequently as you can over time or over a season.

A real-time feed, the reactive approach, is when you have an API that someone can use to task a system in near real-time and change the way it’s operating.

We have the tools and technology to allow for both operational concepts, but the task-based approach is the one most people are familiar with — it’s more widely used and easier to implement.

Modern coding techniques and the efficiency of software systems allow us to run real-time feeds — dynamic tasking of systems.
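
As a purely illustrative sketch of what such dynamic tasking might look like from a user’s side (the field names, imaging mode, and workflow below are hypothetical, not any operator’s real API):

```python
import json
from datetime import datetime, timedelta, timezone

# Hypothetical near-real-time tasking request; in a real system this payload would be
# sent to the operator's tasking API, validated, prioritised, and uplinked to a satellite.
window_start = datetime.now(timezone.utc)
tasking_request = {
    "area_of_interest": {  # GeoJSON polygon over the target area
        "type": "Polygon",
        "coordinates": [[
            [24.90, 60.10], [25.10, 60.10], [25.10, 60.30], [24.90, 60.30], [24.90, 60.10]
        ]],
    },
    "imaging_mode": "spotlight",  # hypothetical mode name
    "window_start": window_start.isoformat(),
    "window_end": (window_start + timedelta(hours=6)).isoformat(),
    "priority": "standard",       # hypothetical priority tier
}

print(json.dumps(tasking_request, indent=2))
```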

The key thing is, who needs it?

How does a company effectively adjudicate between people who want to task in real-time? How do they prioritize the collections in their scheduling so everyone’s happy with what they want to see?

Remote sensing companies have priority users and lesser priority users. How do they make sure that everyone’s satisfied and understands when they’re going to get the data? At what time?

We’ll see more and more systems architected so that they’re open and flexible from a software perspective. As tools develop in the future, the systems will respond in a more automated fashion and in more real-time.

It may not be that anybody can open-source task a system to collect imagery. Still, it will ease some of the roadblocks we have today in the manual chain of events — somebody sending in an order form to be adjusted by a team, then scheduling the tasking, uplinking it to satellites at certain locations in their orbit, performing the collection, and then disseminating it as rapidly as possible.

That will become more and more efficient as we go.

WHEN DO YOU THINK WE’RE GOING TO FIGURE OUT THAT WE CANNOT LIVE WITHOUT SAR?

18 months from now.

The SAR revolution in aerospace is here today. I know I’m a bit of a zealot, but it’s all being perpetuated by multiple factors happening at once.

There’s a great TED talk by Bill Gross about the five factors renowned venture capitalists have seen add up to success for companies.

The core elements are the team, the timing of the ecosystem and its receptivity to whatever’s going on, the idea, the business model, and the funding.

I would say the timing is the least in our control — that’s why SAR’s time to shine is now.

SAR systems to date have been mainly exquisite class — they’ve been developed and operated by companies and governments that built highly capable large systems.

A few things happened in the last decade or so — miniaturization of electronics, reduction in the cost of spacefaring materials, and modern manufacturing efficiencies. Software advancements and coding languages allow more efficient control of the spacecraft elements and let us update them in a repeatable fashion.

We update the spacecraft from a software perspective like we update code for anything else — a smartphone or an application. We can write code for spacecraft in an agile way.

That kind of advancement has been latent in the aerospace industry, but it’s happening now in a major way.

Processing the data that comes off of these platforms is faster. We’re using the cloud to store and process with incredible speed. We’ve got a global network of ground stations.

All these things are bringing SAR out of the shadows into the light. It’s been in the shadows for so long because it operates in the shadows — it’s part of the amazing nature of SAR.

It’s also because the data was so complex; figuring out anything useful to do with it required immense computational horsepower and a team of expertly trained Ph.D.-level analysts.

Companies are making it easier and more natural for us to use the same “language” of remote sensing to talk about SAR and its benefits. These companies devise systems that are producing SAR imagery much more frequently and commercially.

The data is more shareable and open. It’s easier to license and to promote its use amongst the unique ecosystems, environments, sectors, and verticals of the remote sensing value chain.

It’s happening now, in real-time, in a big way. 

About the Author
I'm Daniel O'Donohue, the voice and creator behind The MapScaping Podcast (a podcast for the geospatial community). With a professional background as a geospatial specialist, I've spent years harnessing the power of spatial data to unravel the complexities of our world, one layer at a time.