
Basic elements of image analysis

Learning objectives of this topic

  • Importance of visual image analysis
  • Shape & size in remote sensing imagery
  • Brightness & shadows and their physical meaning
  • Typical patterns in remote sensing imagery

A first step in assessing whether remote sensing data sets can be applied to a specific problem is to visually interpret an image. The ability to locate certain features and interpret them from a remote sensing perspective needs to be trained, just like the models that are used to retrieve higher-level information from remote sensing data.

One example of this is given below. You can see how different types of dunes appear in theory (left) and from above in remotely sensed data sets (right). Depending on the illumination geometry, the shape can appear altered in the remote sensing images and thus be misinterpreted. By visually inspecting these objects we can, for example, derive the direction of the wind that formed them.

Dunes are complex, unstable features of dry landscapes.

How can we describe what we see in the data?

To describe the visual impression that we get from remote sensing imagery, we can look at a variety of parameters that can, like most things we identify with our eyes, also be detected automatically. Whether it is optical or microwave data, here are a number of attributes that you can look at when visually inspecting remote sensing data:

1) Shape (size)

The shape as well as the size of remote sensing targets is very distinctive to the human eye. Man-made structures in particular, such as buildings or agricultural fields, are characterized by sharp edges and regular shapes, whereas natural features such as forests usually create more irregular shapes. With experience, the combination of shape and size forms distinctive pictures that remote sensing scientists can use to analyze imagery visually.

Sentinel-2 composite (01/12/2017) of an agricultural area north of Detroit, USA bordering Lake Huron (ESA 2017).

2) Brightness (intensity)

The reflectance at the ground, which we can calculate from the top-of-atmosphere reflectance for optical data, and the measured backscatter for microwave data deliver a value for each pixel describing the intensity or, as we see it, the brightness. Using this parameter we can, for example, easily distinguish between a wet and a dry surface. You can see this in the change in backscatter intensity between two Sentinel-1 C-band images acquired during the dry season (May 2016) and the wet season (November 2016). At the edges of the reservoir in the centre of the image, you can observe how the water level, or the moisture level of the soil, leads to a significant change in brightness.
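Such a change in brightness can be quantified directly from the pixel values. The following is a minimal sketch in Python of how the difference between two acquisitions could be computed; the file names are hypothetical placeholders for two co-registered Sentinel-1 scenes, and we assume the bands hold linear backscatter values.

import numpy as np
import rasterio

# Hypothetical file names: two co-registered Sentinel-1 intensity scenes
with rasterio.open("s1_dry_may2016.tif") as src:
    dry = src.read(1).astype("float64")
with rasterio.open("s1_wet_nov2016.tif") as src:
    wet = src.read(1).astype("float64")

# Convert linear backscatter to decibels; guard against non-positive values
eps = 1e-10
dry_db = 10 * np.log10(np.maximum(dry, eps))
wet_db = 10 * np.log10(np.maximum(wet, eps))

# Positive values mark pixels that became brighter in the wet season
diff_db = wet_db - dry_db
print("Mean change: %.2f dB" % diff_db.mean())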

3) Shadow

Shadows are inherent to all remote sensing acquisitions with certain illumination angles. Whether they are cast by topography or by objects such as buildings or trees, shadows not only help to locate these features but can also hinder the interpretation of satellite imagery. Sensors that acquire data with a nadir viewing angle (i.e., looking straight down) are less prone to shadow effects. However, especially in areas with differences in surface height (mountainous or desert areas), shadows will appear due to the illumination of elevated structures. The dunes of the Rub’ al Khali desert are a good display of shadows causing difficulties in image interpretation, but they also indicate what we see: dunes with a height of up to 250 m.

Rolling sand dunes in the expansive Rub’ al Khali desert on the southern Arabian Peninsula are pictured in this radar image from the Sentinel-1A satellite (ESA 2014).

4) Pattern recognition

How objects or “targets” on the ground are arranged spatially strongly influences the way we perceive them. Recurring shapes that appear at more or less regular spacing are recognized by the human eye as patterns. Using image analysis techniques, these spatial relations can be used to classify pixels. By relating objects that are close to our ‘target’ either spatially or in shape, patterns are characterized by proximity, which is key to every type of classification. A classic example is pivot irrigation (PI) systems, which are used to grow crops in drier ecosystems.
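Such recurring spatial arrangements can also be quantified automatically. As one illustration, the following minimal Python sketch derives texture measures from a gray-level co-occurrence matrix (GLCM) using scikit-image; the random patch stands in for a window of a real scene, and all parameter values are illustrative.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Toy 8-bit single-band patch (in practice a window of a real scene)
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, (64, 64), dtype=np.uint8)

# GLCM: how often pairs of pixel values occur at a given offset -- one
# classic way to quantify recurring spatial patterns (texture)
glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)

# Texture measures such as contrast or homogeneity can then feed a classifier
contrast = graycoprops(glcm, "contrast")[0, 0]
homogeneity = graycoprops(glcm, "homogeneity")[0, 0]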


Sources & further reading

Jensen, J.R. (2007²). Remote Sensing of the Environment. An Earth Resource Perspective. Upper Saddle River, USA: Pearson Prentice Hall.

FIS Portal (2018). Visual Image Interpretation. <https://www.fis.uni-bonn.de/en/recherchetools/infobox/professionals/image-analysis/visual-image-interpretation>.

Lillesand, T.M., Kiefer, R.W. & Chipman, J.W. (2008⁶). Remote Sensing and Image Interpretation. Hoboken, USA: Wiley & Sons Inc.



Objects and Pixels

Learning objectives of this topic

  • Pixel-based vs. object-based remote sensing analysis
  • Pixel assignment to form clusters/objects
  • Why use object-oriented algorithms?
  • Introduction to segmentation procedures

In this topic we will take a look at the difference between analyzing single pixels and analyzing clusters of image cells. While pixel-based analysis solely takes into account the values of the respective image cell, object-based analysis quantifies the statistics inherent to a group of, usually, neighboring cells.

One pixel, one class

A Landsat image represented through individual pixels and their digital numbers. Note that the size of the image cells is not representative of the given scale of the Landsat scene. Source: Phiri & Morgenroth 2017.

Looking at the graphic above, we can see that every pixel carries its own value. Whether it is a digital number (DN) or a quantity derived from complex time series data, such as a multi-temporal standard deviation, we can use this value to classify the image cell and consequently assign it to its designated class.


The assignment of a pixel to its designated class depends on the values it features for certain parameters/predictors: a certain combination of parameters (e.g., NDVI and brightness) defines the class membership.
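To make this concrete, here is a minimal sketch in Python of such a manual, rule-based pixel classification; the thresholds and class labels are purely illustrative assumptions.

import numpy as np

# Toy predictor layers (in practice these are derived from the image bands)
rng = np.random.default_rng(0)
ndvi = rng.uniform(-1, 1, (100, 100))
brightness = rng.uniform(0, 1, (100, 100))

# Manual decision rules; the thresholds are purely illustrative
classes = np.zeros(ndvi.shape, dtype=np.uint8)      # 0 = unclassified
classes[ndvi > 0.4] = 1                             # 1 = vegetation
classes[(ndvi <= 0.4) & (brightness > 0.6)] = 2     # 2 = bare soil / built-up
classes[(ndvi <= 0.0) & (brightness < 0.2)] = 3     # 3 = water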

Which class is chosen depends on the method being used, whether it is based on manual definitions or completely automated (e.g., an unsupervised classification or a deep-learning-based algorithm). Pixel-based analysis enables a detailed picture of the remote sensing data set we are looking at. While the researcher can exploit the data at its full cell resolution, several important properties are not taken into account, such as:

  • The shape of objects or ‘targets’, which makes certain areas of interest (AOI) easily distinguishable, such as buildings or regularly shaped pivot irrigation systems.
  • The edges that define the transition from, e.g., one land use type to another. A typical example is a road crossing open countryside while sharply separating two types of land use/cover, as displayed in the image below.
A Sentinel-2A true color composite (30th June 2021) displaying the sharp transition, along a road, from the city of Lulekani, South Africa, to open woodlands. Source: EO Browser.
  • The neighborhood of a pixel, i.e., the statistics of the cells that surround it. Following Tobler’s first law of geography (“Everything is related to everything else, but near things are more related than distant things.“), we can assume that a pixel’s brightness, or whichever quantity it may be, is influenced by its surroundings to a certain degree (see the short sketch after this list).
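One simple way to express this neighborhood relation numerically is a focal statistic. The following minimal Python sketch computes the mean over a 3 × 3 window around each pixel; the random band is a stand-in for a real image band.

import numpy as np
from scipy.ndimage import uniform_filter

# Toy brightness band (in practice: one band of a satellite image)
rng = np.random.default_rng(42)
band = rng.random((100, 100))

# Focal mean over a 3x3 neighborhood around each pixel
focal_mean = uniform_filter(band, size=3)

# Deviation of each pixel from its neighborhood mean: one simple measure
# of how strongly a pixel resembles its surroundings
deviation = band - focal_mean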

Creating objects from pixels

Objects, or ‘clusters’ as they are often referred to, are groups of pixels that are analyzed as a whole. The algorithms that can be applied to object-oriented analysis are pretty much identical to those used for pixel-oriented approaches. However, object-oriented classifiers make use of two domains:

  1. Spectral characteristics (the same as in pixel-oriented approaches)
  2. Neighborhood relations/statistics

The spatial patterns represented by the neighborhood of a pixel provide extra information that is not available at the pixel level; as explained earlier, it is assumed that the surroundings of a pixel are likely to resemble the image cell itself. If pixels are clustered together to form classification objects, they create coherent or homogeneous areas with similar spectral and/or spatial patterns. Such clusters can be created using segmentation algorithms.

Segmentation – Bringing pixels together

Segments are clusters of pixels that are similar to each other in the domains mentioned earlier. The process of creating them is called ‘segmentation’ and can be carried out using any commonly used software/programming language, such as QuantumGIS, R, Python or GDAL. Today, segmentation is widely used in computer science and image analysis. The difference between a classification, a detection and, finally, an image segmentation is visualized below.

(a) The human eye can thematically separate different targets.
(b) Clusters are detected.
(c) Based on pixel clusters, a segmentation is carried out.

The differences in image analysis based on pixel grouping. Source: Facebook Research 2016.

In essence, a segmentation produces results similar to what the human eye does all the time: detecting patterns and patches that look similar for some reason, grouping them, and making them visible as entire objects. For example, if you look at a field of grass next to a road, you will instantly group the grass as one object and the paved road as the other. One typical example of the use of segmentation is the identification of individual tree crowns, as in the image below. This case also showcases how pixel clustering provides more meaningful information (e.g., from the view of ecological research) through the analysis of entities that can be attributed as a whole. Single pixels that only represent parts of a tree, or that are a mixture of the tree and its surroundings, are much harder to relate to the characteristics of a complete object.

Consequently, if the spectral proximity and the neighborhood criteria are matched, a segmentation will cluster pixels and create segments usable for remote sensing analysis.

Different objects with identical shape but different spectral characteristics will be grouped/segmented to form clusters.
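As one example of a segmentation procedure in practice, the following minimal Python sketch applies the SLIC algorithm from scikit-image; the random array stands in for a real 3-band composite, and the parameter values are illustrative.

import numpy as np
from skimage.segmentation import slic

# Hypothetical 3-band image (rows, cols, bands), e.g. an optical composite
image = np.random.random((200, 200, 3))

# SLIC groups pixels by spectral similarity AND spatial proximity --
# the two domains object-oriented classifiers rely on
segments = slic(image, n_segments=250, compactness=10, start_label=1)

# Mean spectral value per segment: each object now carries one feature
# vector instead of many individual pixel values
for label in np.unique(segments):
    mask = segments == label
    mean_spectrum = image[mask].mean(axis=0)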

Sources & further reading

Elachi, C. & van Zyl, J. (2015²). Introduction to the Physics and Techniques of Remote Sensing. Hoboken, USA: John Wiley & Sons, Inc.

Facebook Research (2016). Learning to segment. <https://research.fb.com/blog/2016/08/learning-to-segment/>.

Jensen, J.R. (2007²). Remote Sensing of the Environment. An Earth Resource Perspective. Upper Saddle River, USA: Pearson Prentice Hall.

Phiri, D. & Morgenroth, J. (2017). Developments in Landsat Land Cover Classification Methods: A Review. In: Remote Sensing 9(9), 967. https://doi.org/10.3390/rs9090967.

Rees, W.G. (2010²). Physical Principles of Remote Sensing. Cambridge, UK: Cambridge University Press.

Schowengerdt, R.A. (2007³). Remote Sensing. Models and Methods for Image Processing. San Diego, USA: Academic Press.


In this topic you learned why it can be an advantage to use clusters instead of single pixels in some remote sensing applications. Now move on to the next topic of this lesson to dive into classification procedures with some practical applications.