Samuel Greenaway is the Commanding Officer of the NOAA Ship Rainier, a hydrographic mapping ship currently based on the West Coast of the United States.
Cmdr. Greenaway has been involved with ocean mapping for about 16 years as a commissioned officer in the National Oceanic and Atmospheric Administration Commissioned Officer Corps (NOAA Corps), which operates ships and aircraft supporting NOAA’s missions, including hydrographic mapping and fisheries research.
He is the definition of an accidental ocean-mapper. He was looking at the weather one day and found an ad for the NOAA Corps, which he had never heard of before. He clicked the link, applied a couple of months later, and soon found himself piloting ships.
It’s an international project with a simple goal — to map the deep oceans (200m or deeper) of the earth by 2030.
It’s an ambitious goal and maybe a surprising one to many.
Turns out it’s not.
People have been mapping the oceans for a long time.
In the United States, NOAA’s predecessor agency, the Survey of the Coast, goes back to 1807 — Thomas Jefferson’s time. We’ve been hard at it ever since.
There’s been a tremendous amount of mapping effort not only here in the United States but around the world.
It’s a huge problem.
The earth’s big, the oceans are deep, and it’s hard to make measurements.
There’s an impression that this is already done. Sure, you open up Google Earth, and it looks like we’ve got a complete map. But most of that data isn’t from the measurement of the seafloor’s depths — it’s from inversion from gravity.
They measure gravity, rather ingeniously, using satellite altimetry and looking at the slope of the sea water’s surface. Imagine a flat chunk of ocean with a perfectly flat sea bottom. If you stick a sea mountain on the bottom, gravity pulls the water a little closer to it, and there’ll be a little hill of water sitting over that sea mountain.
You can see that with satellite altimetry.
It’s gravity inversion from satellite altimetry that forms the bulk of that data you might see in the oceans in Google Earth, combined with global maps, research, and hydrographic mapping cruises.
Find your favorite place in Google Earth and zoom in. You’ll often see the higher resolution data popping through, looking like a little snail trail across the ocean, or some tiny pinpricks that are point soundings.
The mapping work that I do supports commerce and safe navigation.
Our primary goal is to update the navigational charts to support safe navigation and support commerce — a vast percentage of all international trade still travels by sea. It’s been the case, and it’s going to be the case for a long time in the future. Safe navigation and commerce are a big part of that.
Some areas just haven’t been surveyed very well.
The next goal is resource management. Fisheries or habitats, oil and gas, submerged minerals or lands, and even siting of offshore wind farms. A base map of what’s down there is critical if you need to manage or exploit those resources or manage the exploitation.
Then there are hazards. The sea is a wonderful thing to have on the earth, but it also comes with its dangers, like tsunamis. It’s important to understand the process driving tsunamis; the slumps or the uplifts behind the generation of tsunamis, and the propagation — a tsunami is a wave that extends to the seabed. It’s channeled, and it’s steered by the bathymetry.
With a better understanding of the seabed terrain, we can do a better job at modeling the propagation of tsunamis and issuing warnings to the right places.
Another reason we need to map is that a vital part of the ocean circulation model is the seabed. We need this data to understand ocean circulation, global tides, and ocean dynamics. Bathymetry, the frictional drag, and the roughness are a big part.
Last, and fundamentally, we need to map because we don’t know what’s on the bottom of the ocean.
We have some idea — a 10-kilometer kind of idea of what’s on the bottom of the ocean. What we learn from the stuff that we don’t yet know is probably the most exciting thing we’re going to learn.
Think back to Marie Tharp and Bruce Heezen in the 50s. They penciled down a couple of acoustic transects across the Mid-Atlantic Ridge that directly led to the development of plate tectonics, which has revolutionized our understanding of the earth.
We have an obligation to understand the earth we live on.
It depends on what equipment you use.
A 10-kilometer resolution is about as good as you get with gravity-derived measurements.
Typical deployed deep-water mapping equipment has a resolution of one to two degrees. That’s tens to hundreds of meters. It gets a lot better in shallow areas, where we might even get resolutions of half a meter. We could do even better, if we needed to, for higher-resolution products.
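That depth dependence follows from geometry: a beam of fixed angular width projects a footprint on the seabed that grows linearly with range. A small sketch (the 1-degree beamwidth and the depths are illustrative assumptions, not values from the interview):

```python
import math

def beam_footprint(depth_m, beamwidth_deg):
    """Across-track footprint of a nadir beam of a given angular width.

    The footprint is the chord subtended by the beam at the seabed:
    2 * depth * tan(beamwidth / 2).
    """
    return 2.0 * depth_m * math.tan(math.radians(beamwidth_deg) / 2.0)

# A 1-degree beam in 4000 m of water spreads to roughly 70 m;
# the same beam in 10 m of water resolves about 17 cm.
for depth in (4000, 10):
    print(f"{depth} m deep: {beam_footprint(depth, 1.0):.2f} m footprint")
```

The same geometry explains why putting the sensor closer to the seabed, on an AUV or a towed body, recovers resolution: the footprint shrinks with range, not with the sonar itself.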
The fundamental resolution gets worse or coarser the further you are from the seabed. To improve it, you put the instrumentation closer to the seabed, either through an autonomous underwater vehicle or a deep-towing instrument close to the seabed — a challenge on its own. Plus, it’s slow going.
But if you look for small things in deep water, you need sensors down closer to the seabed.
Because the ocean’s in the way.
We’re already using satellites to look at the slopes of the surface of the ocean. We derive a gravity field and then invert it to get the bathymetry — a lot of assumptions go into that.
Water is mostly opaque to electromagnetic energy. You just can’t see through water from a satellite or an aerial platform.
Radio, lasers, and light don’t penetrate well — only limited methods can be applied in shallow water, which is a big part of solving the mapping problem.
In shallow water, we use aerial LIDAR, which is laser-based ranging measurements. There has been a lot of work in using imagery.
If you’ve looked out the window of an airplane as you’re flying somewhere, you may have noticed you can see through the water at places or that it looks deeper in one place and shallower in another. We can derive depths from either satellite imagery or aerial imagery. Still, for the most part, you just can’t see through the ocean.
The primary tool for mapping the ocean floor and the ocean is acoustics, using sound.
Not surprisingly, sound propagates well through the ocean. It’s what animals in the ocean use to navigate, communicate, and live their lives.
The frequencies we use for ocean mapping systems are on the low end.
A low-frequency system in the acoustic multi-beam mapping community is something like 12 kHz, and yes, you can hear those.
The upper range of human hearing is somewhere around 20 kHz, which is a trippy sound.
I was out doing a mapping cruise on a US Coast Guard icebreaker in the Arctic Ocean. I was new to the 12 kHz system. I could hear something clicking away. Sounded like a cricket because there was a frequency modulation in the pulse.
It was driving me nuts. I got up in the middle of the night to find this cricket. I didn’t immediately connect that I was in the Arctic Ocean, and it’d be doubtful to find a cricket. At the bottom of the ladder on the way to the bottom of the ship, I realized this is the sonar I’m hearing.
So yes, humans can hear what’s within the human hearing range. Most of the systems we use, for anything other than the very deep ocean, are higher than human hearing — 100 to 500 kHz for some of the very shallow water systems.
It’s a concern and an area of extensive research.
The anthropogenic sound in the ocean is much louder than it used to be. It’s a concern in itself.
The scientific consensus is that nothing is listening at the high frequencies, for example, the over-200 kHz systems we use in shallow water.
It’s down in the 12 to 30 kHz range where animals communicate and can certainly hear it.
A lot of work’s been done looking at the effects of the instruments we use, which is a very direct beam or targeted piece of sound. Looking at the animal response, it’s difficult to say categorically that this humpback whale over there is or is not bothered by what I’m doing over here.
You can see interference from dolphins operating in similar frequencies with some higher frequency systems, say a 100 kHz system. They echolocate away, follow the ship around, and don’t seem bothered.
But again, that’s just observational — not hard science.
The ocean’s primary anthropogenic noise is a big concern from my perspective. Pile driving is incredibly loud and broadband because it’s an impulsive sound. Other prominent ocean noise sources are ship traffic, cavitation, and propellers.
Other types of surveys are also loud, like the air guns and sparkers used for seismic surveys.
Getting the science right is essential — all sound in the ocean isn’t the same, and all anthropogenic sound in the ocean isn’t the same.
Positioning is a big part of the mapping challenge.
Let’s back up a little and talk about some of the specific technology we use.
There is sonar like you see in an old WWII movie where there’s the ping, and then you hear an echo back.
Or like a recreational echo sounder on a small boat that gives you depths. It sends out a pulse of energy through the water, which bounces off something. We measure the time it takes for it to come back and convert that to a distance based on a good guess of what the sound speed in the water is.
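The arithmetic behind that pulse is straightforward: half the round-trip time times an assumed sound speed. A minimal sketch (the 1500 m/s nominal speed is an assumption standing in for a measured profile):

```python
def depth_from_echo(two_way_time_s, sound_speed_mps=1500.0):
    """Convert a two-way acoustic travel time to a depth.

    The pulse travels down and back, so the round-trip time is
    halved before multiplying by the assumed speed of sound.
    """
    return (two_way_time_s / 2.0) * sound_speed_mps

# A 2-second round trip at a nominal 1500 m/s implies 1500 m of water.
print(depth_from_echo(2.0))  # 1500.0
```

The "good guess" of sound speed is the weak link: a 1% error in the assumed speed is a 1% error in the depth, which is why the water-column profiling discussed later matters so much.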
That technology’s been around for about 100 years and matured since WWI and WWII, primarily spurred on by the threat of submarines.
Single beam sonars look in one direction. You make an echo-ranging measurement, say straight down beneath the boat. That’s how a commercial off-the-shelf recreational echo sounder works.
We’ve had modern systems since the 70s, but multibeam echo sounders only came into widespread use in the last 20 years.
Instead of making a direct measurement beneath the boat, they look simultaneously off to the sides, roughly 60 degrees to either side. Rather than just making one measurement, they make a swath of measurements as you drive the ship along.
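That swath geometry can be sketched in a few lines (a flat seabed and a straight, unrefracted ray path are simplifying assumptions):

```python
import math

def swath_width(depth_m, max_angle_deg=60.0):
    """Total multibeam swath width for beams steered max_angle_deg
    to each side, assuming a flat seabed and no refraction."""
    return 2.0 * depth_m * math.tan(math.radians(max_angle_deg))

# At 60 degrees either side, the swath is about 3.5x the water depth,
# matching the three-to-four-times rule of thumb quoted later.
print(swath_width(1000.0))
```

Each ping yields a whole across-track line of soundings inside that swath, which is what makes the multibeam so much more efficient than a single-beam sounder.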
That’s probably one of the primary tools we’ll use to complete this mapping of the ocean.
Positioning is critical to any measurement you make, and you need to reference that back to some other framework.
We need to know the latitude, longitude, and height for positioning — three of the six degrees of freedom. We also need to orient those sensors in the other three degrees of freedom: the roll, the pitch, and the yaw axes.
Exactly like positioning an aerial camera system.
Measurement and integration of all those sensors are critical in doing an excellent mapping job.
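Why that orientation measurement is so critical can be sketched numerically: an uncorrected roll offset tips every beam, and on the outer beams the across-track sounding error grows quickly with depth (a pure-Python sketch; the angles and depth are illustrative assumptions):

```python
import math

def roll_error_m(beam_angle_deg, roll_deg, depth_m):
    """Across-track position error from an uncompensated roll offset.

    A beam steered beam_angle_deg off nadir actually points at
    beam_angle_deg + roll_deg when the roll is not corrected, so the
    sounding lands at the wrong across-track distance.
    """
    true_x = depth_m * math.tan(math.radians(beam_angle_deg))
    measured_x = depth_m * math.tan(math.radians(beam_angle_deg + roll_deg))
    return measured_x - true_x

# A mere 0.1-degree roll error on a 60-degree outer beam in 1000 m of
# water shifts the sounding by about 7 m across-track.
print(round(roll_error_m(60.0, 0.1, 1000.0), 1))
```

This is why survey-grade inertial motion sensors are integrated with the sonar: tenth-of-a-degree attitude errors already matter at full ocean depths.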
The datum question is important.
How deep is the ocean? That’s relative to the water’s surface.
Say, it’s 2000 meters in this place. But the water surface is dynamic; it goes up and down with tides, there are annual cycles, and the sea levels are on the rise. Defining the datums is a technical challenge all of its own.
As you get into the shallow parts of the oceans, datums turn into a big part of your error budget when you make seabed maps.
Traditionally, we’d measure that relative to the sea level. We’d install tide gauges and determine local sea level datums, like Mean Sea Level or Mean Lower Low Water, or in other areas, lowest astronomical tide.
There’s an entire group in our organization that works on datums and water level issues.
Many before me talked about satellite-based augmentation services for GPS, real-time kinematic approaches for GPS, and post-processing kinematic — those are all techniques we use to get accurate 3D positioning.
We’ve also made a tremendous amount of progress in understanding the datum relationships between, say, the ellipsoid that you might access with GPS and the datums we need for our products, such as mean sea level or mean lower low water.
We need time, distance, and speed measurements.
We need the speed for two reasons.
One, even if you’re looking straight down through the water column, you need the speed and the time to do the distance math. The oceans are predominantly horizontally stratified; they get colder as you get deeper, and the salinity varies. Both impact the speed of sound.
Two, as we look off nadir — not just straight down, but off to the sides — we have to correct for the refraction of that sound in the water column.
That’s a big deal.
We measure the salinity and the temperature, both as a function of depth, to correct for the speed of sound in water and for the refraction caused by changes in sound speed through the profile.
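As a concrete sketch of both corrections: Mackenzie’s (1981) empirical formula gives sound speed from temperature, salinity, and depth, and Snell’s law gives the ray bending between layers of different sound speed (the two-layer profile values here are illustrative assumptions, not a real cast):

```python
import math

def mackenzie_sound_speed(T, S, D):
    """Sound speed in seawater (m/s), Mackenzie (1981) nine-term formula.
    T: temperature (deg C), S: salinity (PSU), D: depth (m)."""
    return (1448.96 + 4.591 * T - 5.304e-2 * T**2 + 2.374e-4 * T**3
            + 1.340 * (S - 35) + 1.630e-2 * D + 1.675e-7 * D**2
            - 1.025e-2 * T * (S - 35) - 7.139e-13 * T * D**3)

def refract(theta1_deg, c1, c2):
    """Snell's law across a layer boundary: sin(t1)/c1 = sin(t2)/c2."""
    return math.degrees(math.asin(math.sin(math.radians(theta1_deg)) * c2 / c1))

c_surface = mackenzie_sound_speed(T=15.0, S=35.0, D=0.0)   # warm surface layer
c_deep = mackenzie_sound_speed(T=4.0, S=35.0, D=1000.0)    # cold deep layer
print(c_surface, c_deep)

# An outer beam launched 45 degrees off nadir bends toward nadir as it
# enters the slower, colder layer below:
print(refract(45.0, c_surface, c_deep))
```

Without that refraction correction, the outer beams of a swath would place soundings meters away from where the seabed actually is.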
Another focus of development is what else we can see with these acoustic systems.
We can infer some properties of the seabed by looking at the acoustic reflection we get back.
The goal of an acoustic inversion is to discern what is on the seabed by making acoustic measurements alone.
Is it gravel or sand? If it’s sand, how coarse is it?
That’s still an elusive goal. But you can partition different areas and tell if stuff A differs from stuff B. And that stuff C sure looks like stuff D we saw over here, and that stuff D we took a sample of was coarse sand.
That’s what we call acoustic backscatter. It’s looking at the intensity and the properties of what’s returned from the seabed along with the bathymetry and the structure of the seabed. They’re helpful signals if you’re interested in partitioning up the bottom.
Perhaps you make a habitat or a resource map, delineating areas — say, of sand — that you might be interested in for resource extraction.
The seabed is an important source of the acoustic backscatter signal, but it’s not the only one.
The acoustic signals may bounce off all sorts of stuff in the water column.
Fish and plankton, for sure. Sometimes an enormous “cloud” of fish can ruin the return from the seabed. The energy we send out either bounces off stuff we’re interested in or gets scattered by stuff we’re not.
Absolutely. We have imagery of whales showing up in the acoustic beam, although it’s rare.
We do. That’s a big part of what we do for the classic hydrographic survey.
We grab samples of the seabed and then either return that sample or analyze it out in the field. Besides samples, we’re also moving towards getting imagery of the seabed to help ground truth and constrain what we think might be down there. We use drop cameras and autonomous underwater vehicles with cameras.
All this is an active area of research.
We have global models of the parameters we care about — salinity and temperature. Those models have been fed for many years by people taking CTD (conductivity, temperature, depth) casts or profiles of the water column, not only for mapping work but also as a fundamental measurement on an oceanographic cruise.
We’ve done many of these measurements over the decades. They’ve been used to feed the development of both global and regional models. They are sometimes adequate to then use operationally — we know enough about this region; we don’t need to take another profile here. This model is good enough.
It’s a virtuous cycle of making measurements and contributing them back to a database to inform models.
The modeling effort is another thing. It’s a significant effort, both globally and regionally. When we plan a survey operation, we look at expected variability in the water column and how challenging it’ll be to capture the correct, or a correct, or a correct-enough profile.
Some places are just difficult to survey because of the variability in the sound speed — the gradients are steep, or they’re variable. As an example, surveying in the mouth of the Chesapeake Bay is problematic at the height of summer because it’s so stratified. It’s like looking across a tarmac on a hot sunny day with waves of refraction coming off the surface.
Satellite is still a key part of the ocean mapping effort — doing bathymetric inversion from gravity determined by satellite altimetry (Smith and Sandwell).
It’s still a useful data set, but it’s fundamentally low resolution.
It’s a resolution of 10 kilometers. If we want to do better than that, we have to put sensors in the water.
Autonomous systems are a big part of how we’re going to get this done. We can, and we do, have autonomous systems in use right now. It’s an active area of research and development.
Underwater technology is a little more mature for autonomous systems. AUVs (Autonomous Underwater Vehicles) have been in commercial and academic use for a couple of decades now.
You’d think that a deep underwater environment would be more difficult. It certainly has its challenges. Pressure and positioning are two, and engineering challenges come along with them.
But once you get under the water, the environment is a lot more benign. You don’t have to contend with waves, and the currents are a lot less in the deep ocean than they might be in the coastal shallow water areas.
Surface autonomous systems are being fielded now, and it’s an active area of research.
Getting a boat to drive around on the surface is relatively accessible. But it’s not easy to train someone to drive a boat well.
The other option besides putting a sensor underwater is to build a submarine. Those are expensive and complicated, too.
The question of what tools are at our disposal is probably what’s driven the advances in the underwater field.
Surface vehicles are coming. There’s been a tremendous diversity of approaches to fielding autonomous systems on the surface, such as wave gliders — wave-powered vehicles. There have also been significant advances in sail, or wind-powered, vehicles from a company called Saildrone (Alameda, California), which has made some incredible advances in robust operation.
The issue is more complicated than “Can a robot drive a boat?”
Robots can drive boats.
Can they drive boats safely, though? Can they maintain the engines? Can the robots tailor the sensor systems to the environmental conditions?
The driving is only a piece of the whole mapping challenge. It’s bigger than just driving a boat around.
A lot of work has been done on crowdsourcing bathymetry.
Companies, such as Olex (Norway), do just that — they provide a service to commercial fishing fleets operating in lesser charted areas. They synthesize data folks collect when they’re out in the ocean, combine that with everybody else’s, and provide that back to their clients.
In a governmental effort, we’ve looked for crowdsourcing applications. Take flagging — we’ve long had a history of looking to our users, familiar with the nautical chart, to flag discrepancies such as wrecks or shallow position approximates. That’s long been part of our mapping strategy.
We’re always looking at what we can do with crowdsourcing bathymetry — it’s an active area, although somewhat limited. The crowd, particularly in the deep ocean, goes in the same place. The shipping lanes between major ports are well established.
The improvements in the acoustic systems are tremendous. The systems are far more reliable, and the resolutions are way better. So are the signal-to-noise ratios, the systems’ stability, and the ancillary products.
There have also been tremendous improvements in aerial LIDAR. That’s made a big dent in mapping shallow waters, which are, somewhat paradoxically, often the most problematic areas to survey because it’s slow with acoustic sensors.
Advances in bathymetry derived from imagery have also been significant.
Is there something out there that’s going to replace large arrays on the bottom of ships as our primary acoustic sensors?
I don’t know. I suppose there could be, but perhaps folks reading this will start working on the problem.
It’s a big problem.
Because you have to run so many more lines to cover the same area.
A modern multibeam system looks out a fixed angular distance to the side — say, 60 degrees either way. As you drive along, it maps a swath of coverage around three to four times the water depth.
If you’re in a mile of water, that swath width is three to four miles wide. If you’re in 10 meters of water, that’s only 30 to 40 meters wide. It makes a big difference.
The depth is the primary driver of survey efficiency.
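That depth-driven efficiency can be put into rough numbers — coverage rate is simply swath width times ship speed (the 10-knot speed and the 3.5x-depth swath factor are illustrative assumptions):

```python
def coverage_rate_km2_per_day(depth_m, speed_knots=10.0, swath_factor=3.5):
    """Approximate multibeam mapping coverage rate.

    Swath width scales with water depth, so the daily coverage is
    swath width (km) times distance steamed per day (km).
    """
    swath_km = swath_factor * depth_m / 1000.0
    distance_km_per_day = speed_knots * 1.852 * 24.0  # knots to km/day
    return swath_km * distance_km_per_day

# In 4000 m of water a ship maps thousands of km^2 per day;
# in 10 m of water, only a dozen or so.
for depth in (4000, 10):
    print(f"{depth} m: {coverage_rate_km2_per_day(depth):.0f} km^2/day")
```

The two printed figures differ by a factor of 400 — the same factor as the depths — which is exactly why shallow water, paradoxically, is the slow part.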
Simply put, we need to do it.
I’ll be impressed if it’s completed by 2030. But even if it’s not complete by then, it’s not pass-fail: something you either succeeded at or didn’t.
Any progress toward the goal is valuable.
It’s a resource question, really.
If you say you want to go to the Moon in 10 years and throw money at it, you can do amazing things.
There have been some estimates of how much effort and how many ship miles it will take — 150 to 200 ship-years of continuous steaming.
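A back-of-envelope version of that estimate (the ocean area, unmapped fraction, and per-ship coverage rate below are rough assumptions of mine, not figures from the interview):

```python
def ship_years(area_km2, rate_km2_per_day):
    """Ship-years of continuous steaming needed to map a given area."""
    return area_km2 / (rate_km2_per_day * 365.0)

OCEAN_AREA_KM2 = 360e6    # total ocean surface, rough round figure
UNMAPPED_FRACTION = 0.8   # order-of-magnitude assumption
RATE_KM2_PER_DAY = 5000.0 # assumed average deep-water coverage per ship

# About 158 ship-years under these assumptions, consistent with the
# 150-to-200 range quoted above.
print(round(ship_years(OCEAN_AREA_KM2 * UNMAPPED_FRACTION, RATE_KM2_PER_DAY)))
```

The sensitivity is obvious from the formula: double the fleet, or double the average coverage rate, and the calendar time halves — which is why the project hinges on how many ships can be brought to it.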
It’s a matter of how many ships you can get out there. How many people can you cooperate with? How much data sharing can you manage?
Solving the problem requires international cooperation.
The oceans cover approximately 70% of the earth’s surface. It’s the largest livable space on our planet, and there’s more life there than anywhere else on earth.
Last year, less than 20% of the global seafloor had been mapped with modern high-resolution technology. This may be the last frontier.
I sincerely hope the Seabed 2030 project is a massive success. I can see tremendous benefits from knowing more about this vast space and our planet. It seems incredible that we know more about the topography of the Moon’s or Mars’s surface than we do about this substantial livable space on our own planet.