Anita Graser’s GIS journey started off in applied research. She spent her days analyzing traffic speeds and movement data, and she quickly realized that everything happened not only in space but also in time. She couldn’t keep setting filters by hand just to see whether the data contained any patterns. She needed tools that could manage time as well as space.
As a stop-gap solution, she started developing her Time Manager plugin for QGIS back in 2011.
“Everything happens somewhere,” and therefore everything happens at some time. Why, then, have GIS standards such as the Simple Features standard ignored the temporal aspect of data? Could it be because very little temporal data used to be available? Software was written to those standards and didn’t have time in mind.
Anyone who’s worked with timestamps, even in an Excel sheet, will know that there are dozens of conventions for writing them, and working with them is painful. Every time you needed to do something with time, things got complicated, so people simply stayed away from it.
Most storage formats can carry time information, though some support it more natively than others. In the vector case, you can simply add timestamps to the attributes, put them in the metadata, or encode them in the layer name.
With rasters, time information is usually put into the file name. In Time Manager, raster support parses the information from the layer name automatically, or you can input it manually.
NetCDF, a more sophisticated file format with proper time information associated with its layers, is also supported by Time Manager. It is great for GIS users because the tools can read the time information automatically and animate the layers.
Mesh data, a format I haven’t personally worked with, is popular in hydrological and meteorological applications. Lutra Consulting creates fancy animations with it in QGIS, such as showing the movement of the wind or how hurricanes form.
There are tons of formats out there to work with, and we should support them all if we want to create software that covers spatial-temporal data.
Recent developments in remote sensing technology have provided us with large amounts of imagery time series, which can be sliced both spatially and temporally. This is a big challenge, and we need tools that can efficiently handle large rasters.
The difficulty lies in the lack of standards. Every developer decides for themselves how to handle the time element. Users also need to figure out how to do this by themselves because there is no common understanding of how things should be modeled or what they should be called.
Typically, you’d have an encoding standard that guides how features should be stored. There’s also an access standard that specifies which functions are applied to the features and what their names should be. As a result, you can calculate intersections or check whether features touch, because all those operations have been specified and standardized. Once analysts understand these functions, they know what to expect and can use them across GIS tools.
Temporal data is lagging in this respect. Only now have people started thinking about a Moving Features standard, which is currently under development. Note that this is not an extension of Simple Features, but at least it is part of the OGC standards, and it will represent spatial-temporal phenomena. The proposed standard has some shortcomings, and it has received minimal interest from developers, which makes implementing it difficult.
Everyone working on this has done so independently, from scratch, and had to figure things out by themselves. You may find functionality in one tool but not the same in another.
Going into practical issues: spatial data changes over time, such as country boundaries or plot boundaries in a city. A plot may be split into multiple parts, ending up with multiple owners, only for some of the parts to merge again later. If you need to track the evolution of the ownership and what the spatial and temporal relationships with neighboring plots look like, it gets complicated.
We’re not quite at the “pack it all into one feature” stage yet, and I don’t know of anyone who is trying to track such changes as a single object.
For administrative areas, boundary polygons come in certain time slices. If these administrative areas change because of a new law or a change in regulations, usually a new set of areas is created with very little history attached. Time tracking becomes tricky.
How can you tell what happened to Area X, which is part of Administrative Unit A in 2020 but ten years ago was part of some other unit? You would need to apply spatial joins to figure out the history of these units because they don’t have their previous history stored. Even adding the history isn’t straightforward; you can’t always just split something into two new subdivisions.
In the US, for redistricting, areas are often merged and split at the same time. There is no clean history. That’s why we have those time slices. Every couple of years, you get a new set of administrative units to work with.
Ownership of plots of land changes continuously. The geometry is stored multiple times, each copy with a new owner attribute attached to it.
With the higher-frequency data that IoT devices are producing, we’ll have to rethink how we do things: how we store features and how we represent them.
For a subset of spatial-temporal features, this is already less of a problem, and we already see promising approaches.
If you’re into tracking moving features, especially simple moving point features, you can consider this one feature an entity by itself with its own properties. Then you can track it over a certain period. This also applies to moving polygon features, to some extent. If you want to model a storm, it’s possible to model the extent of the storm as a polygon over time.
Space-time cubes are one of the visualizations GIS tools provide for movement data, using 3D rendering capabilities.
You have x y on the plane, and then the time dimension is put on the z-axis. With these, you can do a visual exploration of smaller movement data sets. If you only see a vertical line, it’s easy to tell that there is no change in the x and y dimensions, and the object is stationary.
If you see a steep diagonal, you can tell that it’s traveling and how fast or slow it is. For multiple objects, it is possible to know if they meet somewhere along the way, or spend time together in space and time.
It’s a visualization for exploratory analysis. It doesn’t come with automatic tests. Did they really meet in the same space at the same time? If you need to know that, you can have a look and do it yourself if you have something to plot in 3D. Once you have four or more objects, it’s difficult to see the patterns because it gets messy, quickly. This is the challenge of exploring larger movement data sets.
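The check the cube leaves to your eyes (did two objects really meet in the same space at the same time?) can be automated. Below is a minimal plain-Python sketch with hypothetical trajectories and thresholds; real movement data tools use spatial indexes instead of this brute-force pairwise scan.

```python
from datetime import datetime, timedelta

# Hypothetical trajectories: (timestamp, x, y) fixes in a projected CRS (meters).
traj_a = [
    (datetime(2020, 7, 1, 10, 0), 0.0, 0.0),
    (datetime(2020, 7, 1, 10, 5), 100.0, 0.0),
    (datetime(2020, 7, 1, 10, 10), 200.0, 0.0),
]
traj_b = [
    (datetime(2020, 7, 1, 10, 4), 90.0, 10.0),
    (datetime(2020, 7, 1, 10, 9), 195.0, 5.0),
]

def meets(a, b, max_dist=20.0, max_gap=timedelta(minutes=2)):
    """Return True if any pair of fixes is within max_dist meters AND
    max_gap of each other, i.e. the objects meet in both space and time."""
    for t1, x1, y1 in a:
        for t2, x2, y2 in b:
            close_in_time = abs(t1 - t2) <= max_gap
            close_in_space = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= max_dist
            if close_in_time and close_in_space:
                return True
    return False

print(meets(traj_a, traj_b))  # → True
```

Note that both conditions must hold at once: two vertical lines in the cube can overlap in x and y yet never meet, because the objects were there at different times.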
That’s something I’ve also been working on in my research recently, with the results due to be published soon. We looked at the movements of thousands of ships in the open sea, and that’s not something you can do with a space-time cube; you need to introduce another concept to succeed.
It’s not black and white, and we don’t have a ton of visualizations for spatial-temporal data either to go on.
Most of them, if they are not space-time cubes, are animations, which are notoriously bad for detecting patterns reliably. The change-blindness effect has been demonstrated multiple times: we’re just not good at seeing certain changes in animations. We are much better at spotting changes in small multiples, with maps placed next to each other, which gives more reliable results, admittedly not as fancy.
It’s decision-makers and news channels who love spatial-temporal data visualization. It’s great for communication and for bringing a story alive, especially if combined with text and context information.
Data scientists should be careful about using visualization for their analytical work because of this problem of change blindness and its related issues.
On the other extreme, you’ve got analytical tools, where you might end up relying too much on summary statistics. Mathematicians and statisticians like statistics and significance tests. Besides running those tests, they rarely look at a large amount of raw data or create spatial-temporal visualizations to get a feel for the data.
The middle ground is understanding the raw data before running tests. If you need to know how long certain entities have been within 20 meters of each other, you can test that. But was the data reliable enough to narrow it down to 20-minute intervals, or do you have to be satisfied with the nearest hour? That kind of understanding you can’t get from applying statistical tests; it requires a thorough understanding of the data.
If you are a QGIS user, there is already a plugin for that. It’s called Time Manager.
It started off as a side hustle when I was working on monitoring traffic data, looking into how speed changes on road network links over time and how vehicles move through the network. That’s vector data: moving points and lines with changing attribute values. And if it supports points and lines, it also supports polygons. On the methodology side, it puts a filter on those attribute values in the vector layer’s attribute table, filtering out all the features that are not within the current time frame of interest.
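The filtering idea above is simple enough to sketch in a few lines. This is not Time Manager’s actual code, just a plain-Python illustration with hypothetical features: keep only the features whose timestamp falls inside the current animation frame.

```python
from datetime import datetime

# Hypothetical features, as they might appear in a vector layer's attribute table.
features = [
    {"id": 1, "timestamp": datetime(2020, 6, 1, 10, 0), "speed": 42},
    {"id": 2, "timestamp": datetime(2020, 6, 1, 10, 30), "speed": 37},
    {"id": 3, "timestamp": datetime(2020, 6, 1, 11, 15), "speed": 55},
]

def visible_features(features, frame_start, frame_end):
    """Keep only features whose timestamp falls in the current time frame,
    mirroring the attribute filter Time Manager applies to a layer."""
    return [f for f in features if frame_start <= f["timestamp"] < frame_end]

frame = visible_features(features,
                         datetime(2020, 6, 1, 10, 0),
                         datetime(2020, 6, 1, 11, 0))
print([f["id"] for f in frame])  # → [1, 2]
```

Stepping the frame window forward and redrawing after each step is what produces the animation effect.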
There is also a tool for animating temperature and other raster time series. It collects the year or the timestamp information from the raster layer name, and it can turn the layer on or off. You load up all the different layers with your time information, and they get auto-synced. It looks like an animation. You get a play button that you press and play the animation in your GIS tool. You can also export it as a series of images that you can put in your PowerPoint presentations, or stitch them together as a GIF or a video. Over time, many features have been added.
For web services, Time Manager supports things like the Web Map Service standard. There is an aspect of temporal support in there which is rarely used but works well, especially in GeoServer. Whenever you request a map from the web server, you can also send it a timestamp or a timeframe for which you want the map drawn. Combine this with Time Manager, and you get an animation on your desktop GIS that fetches the individual maps from the web server dynamically. It’s neat if you have massive data sets on the server side that you want to explore in this fashion.
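The temporal aspect of WMS boils down to one extra request parameter. Here is a sketch of building such a GetMap request; the server URL and layer name are hypothetical, but the `TIME` parameter (an ISO 8601 instant or interval) is the piece of the WMS standard that GeoServer honors and that an animation varies per frame.

```python
from urllib.parse import urlencode

# Hypothetical GeoServer endpoint and layer name.
base_url = "https://example.com/geoserver/wms"
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "demo:temperature",
    "CRS": "EPSG:4326",
    "BBOX": "-90,-180,90,180",
    "WIDTH": "800",
    "HEIGHT": "400",
    "FORMAT": "image/png",
    # One map per animation frame: change the TIME value and re-request.
    "TIME": "2020-06-01T10:00:00Z/2020-06-01T11:00:00Z",
}
url = base_url + "?" + urlencode(params)
print(url)
```

The client only ever holds one frame’s image at a time, which is why this works well for massive server-side data sets.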
Some people take Time Manager and its functions and run with it. They use the configurations in the Project Settings of QGIS. They publish them on the web and have Time Manager functionality in their web GIS. If you use a web map service, you can replay it in the web browser.
Time Manager is retiring, but it’s not dead. It will stay around, and the code will remain online in the plugin repository.
The temporal support is moving to QGIS Core. No more plugins! I’m expecting the spatial-temporal filtering to be just as performant as the spatial filtering has been for a long time already. Core developers are adding capabilities for different layer types to have a spatial-temporal setting in the properties of raster, vector, or mesh layers.
The soon-to-be-released QGIS 3.14 will have what’s called a Temporal Controller. It has a similar feel and flexibility to the old Time Manager, with the same video buttons for playing forward and backward, jumping ahead, and going back to the start. Plus, it is now integrated with QGIS Core, a dream come true, as this was impossible to realize when it was a plugin.
You can now script all those configuration steps. In Time Manager, you still needed to do a lot of manual work: adding layers, defining the start and end timestamps, and setting your preferred intervals. With the Temporal Controller, you do all this once, then write a Python script to reproduce the configuration for as many layers as needed. You can load an entire folder of individual files and press play.
There are way too many conventions for how to store time information. Think of the American convention, for example: month, day, year. This format can mess up a lot of things. I recommend sticking to the ISO standard timestamp format: year, month, day. It is the only format in which a plain text sort also sorts the dates chronologically.
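A tiny demonstration of why ISO 8601 is the safe choice: sorting date strings as plain text only works when the most significant part (the year) comes first.

```python
# The same three dates in the US month/day/year convention and in ISO format.
us_dates = ["06/01/2021", "12/31/2020", "01/15/2021"]
iso_dates = ["2021-06-01", "2020-12-31", "2021-01-15"]

# Text sort of US-style dates puts December 2020 last: wrong chronologically.
print(sorted(us_dates))   # → ['01/15/2021', '06/01/2021', '12/31/2020']

# Text sort of ISO dates is also the correct chronological order.
print(sorted(iso_dates))  # → ['2020-12-31', '2021-01-15', '2021-06-01']
```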
In any case, you should document what format was used, because people will have to look it up and verify that they parsed the data correctly.
Timezones can be tricky, too, especially if you have global data sets. It’s better to also have local timestamps even if everything is stored in UTC. It makes an enormous difference if you have tracking data and need to calculate movement speeds and directions.
If you store your data in a database like PostGIS, the database server has a specific time zone setting: either your local timezone or UTC. Add some clever application logic on top, and it may try to transform from UTC to local time without telling you. You end up wondering why there’s a two-hour gap between the data you’re looking at and what you see in your raw database. Just be aware of this, so it doesn’t sneak up on you and surprise you. The result is ugly and mind-bending.
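The two-hour gap is easy to reproduce. In this sketch, a timestamp stored as UTC is rendered in a local timezone by some application layer; Europe/Vienna is a hypothetical choice (UTC+2 in summer), not anything the source specifies.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A timestamp as stored in the database, in UTC.
stored_utc = datetime(2020, 7, 1, 10, 0, tzinfo=timezone.utc)

# The same instant, silently converted to the server's local timezone
# by application logic (Europe/Vienna is UTC+2 during summer time).
shown_local = stored_utc.astimezone(ZoneInfo("Europe/Vienna"))

print(stored_utc.isoformat())   # → 2020-07-01T10:00:00+00:00
print(shown_local.isoformat())  # → 2020-07-01T12:00:00+02:00
```

Both values denote the same instant, but comparing the wall-clock hours (10 vs. 12) is exactly the confusing gap described above.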
Spatial-temporal data sets in GIS are proliferating. Weather and climate-related data sets stored as meshes are on the rise. Series of remote sensing images, taken multiple times a day by satellites, are coming in at high speed. We will need tools that can handle this kind of data and provide access and analytical capabilities.
There is also the never-ending need to track things: GPS tracking of animals, vehicles, and goods, or tracking via other location systems like Wi-Fi or Bluetooth. They help us understand mobility so that we can improve mobility systems.
My focus is now on a new project called MovingPandas, built on Pandas, a library with a background in finance and time series analysis. As a follow-on, GeoPandas incorporated spatial support into Pandas. MovingPandas will add the spatial and temporal elements together and provide value for movement data.
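A typical value MovingPandas derives from raw tracking data is speed between consecutive GPS fixes. The sketch below is plain Python with hypothetical fixes, not MovingPandas’ actual API; it just shows the kind of computation (great-circle distance over elapsed time) such a library performs.

```python
import math
from datetime import datetime

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical GPS fixes: (timestamp, latitude, longitude), 10 seconds apart.
fixes = [
    (datetime(2020, 7, 1, 10, 0, 0), 48.2000, 16.3000),
    (datetime(2020, 7, 1, 10, 0, 10), 48.2001, 16.3002),
]

(t1, la1, lo1), (t2, la2, lo2) = fixes
speed = haversine_m(la1, lo1, la2, lo2) / (t2 - t1).total_seconds()
print(round(speed, 2))  # meters per second, roughly walking pace here
```

Note how the timezone discussion above matters here: a hidden UTC-to-local shift in the timestamps would wreck every derived speed and direction.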
It already has visualization options and background maps integrated. The only thing missing is 3D visualization on a map; GeoPandas’ plotting libraries are still 2D, so we’ll need to wait a little while before we get to the 3D stuff.
Other folks are also independently building on Pandas with very similar ideas to add functionality for movement data. We’ll see how these develop and if we can converge at some point. This is undoubtedly the right time to start doing this work.
Have you attempted to manage your temporal data, given up on your Excel sheets, and can’t be bothered to spend more time on it? Did Anita convince you to at least try the ISO standard time format: year, month, day? What challenges are you facing with spatial-temporal data that you need help with?
Be sure to subscribe to our podcast for weekly episodes that connect the geospatial community.
For more exclusive content, join our email list. No spam! Just insightful content about the geospatial industry.