
First, we introduce the type of resolution most commonly used to describe the capabilities of a sensor: the spatial resolution. It determines which surface characteristics are combined in the signal return of an individual pixel.

What is spatial resolution?

Generally, when we talk about spatial resolution, we describe the smallest angular (or linear) separation between two objects that a sensor can still resolve. From a remote sensing perspective, this translates into pixels or cells, whose size is usually given in metres and arranged in a square or rectangular grid.
A remote sensing instrument with a spatial resolution of 10 m will consequently collect information from pixels that each cover an area of 10 by 10 m (such as Sentinel-2).

Dependencies of spatial resolution

How much spatial detail a sensor can provide depends on several parameters. Generally, if a platform is at a greater distance from (higher altitude above) the Earth's surface, it can view larger areas (greater swath width) while the spatial resolution decreases. For example, ESA's Sentinel-2 monitors our planet at up to 10 m resolution with a swath width of 290 km. In contrast, NASA's MODIS instrument reaches cell sizes of at best 250 m with a much greater swath width of 2,330 km.
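To make this trade-off tangible, here is a minimal Python sketch that compares the two sensors using only the figures quoted above (the numbers of pixels per swath line and the areas per pixel are simple derived arithmetic, not official specifications):

```python
# Illustrative comparison of the resolution/swath trade-off, based on the
# pixel sizes and swath widths quoted in the text above.

sensors = {
    "Sentinel-2": {"pixel_size_m": 10, "swath_km": 290},
    "MODIS":      {"pixel_size_m": 250, "swath_km": 2330},
}

for name, s in sensors.items():
    # Ground area covered by a single resolution cell
    pixel_area_m2 = s["pixel_size_m"] ** 2
    # Number of pixels needed to cover one line across the swath
    pixels_per_swath_line = s["swath_km"] * 1000 / s["pixel_size_m"]
    print(f"{name}: {pixel_area_m2:>7,.0f} m² per pixel, "
          f"{pixels_per_swath_line:>7,.0f} pixels across the swath")
```

The coarser MODIS pixels each cover 625 times more ground area than a Sentinel-2 pixel, but far fewer of them are needed to span a much wider swath.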

The Instantaneous Field of View (IFOV) determines the spatial resolution of passive sensors. It is often described as a cone, characterized by the angle under which the sensor observes a certain portion of the surface. Multiplying this angle by the distance between ground and sensor gives the size of the resolution cell.
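A minimal sketch of this small-angle calculation is shown below; the IFOV and altitude values are purely hypothetical example figures, chosen so that the result matches a 10 m resolution cell:

```python
def resolution_cell_size(ifov_rad: float, altitude_m: float) -> float:
    """Approximate ground footprint of one resolution cell at nadir.

    For small angles, the footprint is simply the IFOV (in radians)
    multiplied by the distance between sensor and ground.
    """
    return ifov_rad * altitude_m

# Hypothetical example: a sensor with an IFOV of 0.0125 mrad at 800 km altitude
ifov = 0.0125e-3      # radians
altitude = 800_000    # metres
print(f"Ground resolution ≈ {resolution_cell_size(ifov, altitude):.1f} m")
# -> Ground resolution ≈ 10.0 m
```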


Comparison of spatial resolutions

In the interactive element below, you can slide through the resolutions of some commonly used sensors. They visualize part of the Skukuza rest camp, which is located in the Kruger National Park, South Africa. Explore how details get mixed or disappear completely.

In recent years, more high-resolution data has become freely available, and this trend will continue in the upcoming decades. While low-resolution (coarse) data sets have dominated the past, high-resolution (fine) data will be the go-to input for state-of-the-art remote sensing science, also taking into account the steadily improving processing capabilities of personal computers and cloud servers.


Mixed pixel problem

Mixed pixels are problematic in satellite and airborne imagery. The coarser the spatial resolution, the more objects are combined into one resolution cell. Have a look at the image slider below and see how lowering the spatial resolution leads to mixed pixels. Note how some of the small buildings get mixed up with the surrounding trees in the 30 m imagery.


If objects are larger than or at least equal to the spatial resolution of the respective sensor, they will be mapped in our data sets. There is also a chance that an object smaller than the cell size (e.g. a bright building) contributes enough brightness to impact the cell statistics. If this is the case, analysis can be carried out at the sub-pixel level to identify the individual parts of a resolution cell.
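One common sub-pixel approach is linear spectral unmixing: the spectrum of a mixed pixel is modelled as a weighted sum of "pure" endmember spectra, and solving for the weights estimates the fraction of each cover type inside the cell. The sketch below uses made-up two-band reflectance values for two endmembers (building and tree), purely for illustration:

```python
import numpy as np

# Endmember spectra (rows: bands, columns: endmembers), e.g. red and NIR
# reflectance for "building" and "tree" -- values are purely illustrative.
endmembers = np.array([
    [0.30, 0.05],   # red reflectance:  building, tree
    [0.35, 0.45],   # NIR reflectance:  building, tree
])

# Observed spectrum of one mixed 30 m pixel (hypothetical)
mixed_pixel = np.array([0.15, 0.41])

# Least-squares estimate of the endmember fractions
fractions, *_ = np.linalg.lstsq(endmembers, mixed_pixel, rcond=None)
fractions = np.clip(fractions, 0, 1)
fractions /= fractions.sum()    # normalise so the fractions sum to 1

print(f"Estimated building fraction: {fractions[0]:.2f}")  # -> 0.40
print(f"Estimated tree fraction:     {fractions[1]:.2f}")  # -> 0.60
```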


In this topic you learned about spatial resolution. Now you are ready to move to the next topic: XXX