Lesson 1, Topic 1

Radar Backscatter and Backscatter Mechanisms

What is Radar Backscatter?

In this topic you will learn about the main concepts that apply in every radar imaging system. 

In remote sensing applications, backscatter is usually quantified by the ‘backscatter coefficient’ (σ0). It describes the amount of radiation an active microwave remote sensing system receives back from a given illuminated location on the Earth’s surface.

Radar images generally consist of pixels, which represent the amount of backscattered energy from a specific area on the ground. As with passive microwave energy, the physical properties of objects on the Earth’s surface determine the amount and characteristics of the backscatter returned to the sensor.

If you look at the image below, you will see that the brightness varies significantly between different land cover/use types. Darker areas in the image generally represent low backscatter, while brighter areas feature high backscatter. Water surfaces usually appear in the darkest tones due to specular reflection leading to no or very little radiation being returned to the sensor. Urban structures are often displayed in bright pixels as the artificial structures intensify the microwave radiation due to dihedral scattering. These scattering mechanisms will be introduced later in the course.

Variations of backscatter: Urban areas appear bright (high backscatter), while water bodies display the lowest backscatter signal due to specular reflection.
Source: Teaching materials of Dr. Thomas Jagdhuber (DLR)

Which factors influence the amount of backscatter?

The main factors that impact the strength of a backscatter signal are viewing geometry, wavelength, surface roughness, dielectric characteristics and the orientation of the observed surface. Throughout this course you will gain more insights into how each of these variables shapes the radar signal.

Viewing Geometry

Source: NASA (2010).

Wavelength of the Electromagnetic Radiation

Surface Roughness of the Observed Area

Dielectric Characteristics & Orientation

Description of radar backscatter

The term used to describe the backscatter signal depends on its state and on whether it has been preprocessed. The most common ways of describing backscatter are given in the table below.

Table displaying the most relevant terms describing radar backscatter signals.

Sigma Naught (σ0) is the most commonly used backscatter expression and often referred to as the ‘backscatter coefficient’. However, other terms are frequently used in remote sensing applications to describe backscatter, such as Beta Naught (β0) and Gamma Naught (γ0).

The main difference between these naming conventions is the reference area to which the backscatter value is normalized. Brightness variations that depend on the incidence angle are strongest in β0, partly corrected but still present in σ0, and further reduced in γ0. To reduce this dependence, the backscatter coefficient σ0 can be calculated using the local incidence angle instead of the look angle.

The figure below visualizes the respective reference area for all three naming conventions.

Normalization areas for SAR backscatter.
Source: Small (2011).

Backscatter variables and their reference areas

  • β0 – slant range plane
  • σ0 – ground modelled using an ellipsoidal Earth surface
  • γ0 – plane perpendicular to the local look direction
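The standard conversions between these quantities follow directly from their reference areas; a minimal sketch, assuming linear (non-dB) power values and a known incidence angle, could look like this:

```python
import math

def beta0_to_sigma0(beta0, incidence_deg):
    """sigma0 = beta0 * sin(theta): project from the slant range plane
    onto the (ellipsoidal) ground."""
    return beta0 * math.sin(math.radians(incidence_deg))

def sigma0_to_gamma0(sigma0, incidence_deg):
    """gamma0 = sigma0 / cos(theta): normalize to the plane
    perpendicular to the local look direction."""
    return sigma0 / math.cos(math.radians(incidence_deg))

def to_db(linear):
    """Backscatter coefficients are usually reported in decibels."""
    return 10.0 * math.log10(linear)

# Example: a sigma0 of 0.05 (linear power) at 35 degrees incidence
sigma0 = 0.05
gamma0 = sigma0_to_gamma0(sigma0, 35.0)
print(round(to_db(sigma0), 2))   # -13.01 dB
print(round(to_db(gamma0), 2))   # -12.14 dB
```

Note that at nadir-to-steep geometries the cosine correction is small, while at shallow incidence angles the difference between σ0 and γ0 grows noticeably.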

Backscatter Mechanisms

Scattering mechanisms?

Microwaves interact with objects on the ground. In fact, that is what we are trying to measure with our radar satellites, in order to distinguish various materials and objects on the ground. To be able to do that and to fully understand the backscatter signal, we have to understand the different types of scattering that can occur on the Earth’s surface.


The scattering mechanisms in more detail

The concept of scattering mechanisms is at the heart of understanding the signals that are returned to the sensor, after the microwave pulses hit the Earth’s surface. Let’s go through these mechanisms again to internalize them further. You will need to remember those in order to understand principles explained later on during this course.
In the following, we will go through them one by one and you will get the chance to explore these mechanisms interactively.

Click on the animations below to see them full screen.

Surface scattering

Single bounce, or surface scattering, occurs when the microwave hits a ‘rough’, heterogeneous surface. Part of the energy is scattered back to the sensor, the rest away from it. Whether a surface is considered ‘rough’ is determined by the wavelength, the incidence angle and the spatial resolution of the system. These variables are also referred to as ‘sensor parameters’; we will learn more about them in the following topics.

The strength of surface scattering depends on the ground properties (e.g. surface roughness, dielectric constant, topography) and sensor parameters (e.g. wavelength, incidence angle). An example of the relationship between surface backscatter and surface roughness is shown in the following figure. The simplest form of surface scattering is the specular reflection introduced previously. As surface roughness increases, so does the diffuse component of the scattering. Furthermore, longer wavelengths make a surface appear smoother to the sensor.

Relationship between surface backscatter and surface roughness.
Source: Richards (2009)
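Whether a surface acts ‘smooth’ or ‘rough’ can be estimated with the well-known Rayleigh criterion. As a sketch (the threshold value λ/8·cos θ is the common rule of thumb; stricter variants such as the Fraunhofer criterion use a smaller fraction):

```python
import math

def is_smooth_rayleigh(rms_height_m, wavelength_m, incidence_deg):
    """Rayleigh criterion: a surface acts 'smooth' for the radar if its
    RMS height variation is below wavelength / (8 * cos(incidence))."""
    threshold = wavelength_m / (8.0 * math.cos(math.radians(incidence_deg)))
    return rms_height_m < threshold

# A field with 2 cm RMS height variation at 35 degrees incidence:
# rough for C-band (~5.6 cm wavelength), smooth for L-band (~23.5 cm)
print(is_smooth_rayleigh(0.02, 0.056, 35.0))   # False
print(is_smooth_rayleigh(0.02, 0.235, 35.0))   # True
```

This illustrates the point made above: the same surface appears smoother to a sensor with a longer wavelength.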

Specular reflection

If the radar pulse hits a smooth, flat surface (on the scale of the wavelength), most of the energy is scattered away in a specular direction. These areas will appear very dark in the radar image as no energy (or extremely little amounts) is returned to the radar instrument. Typical examples for specular reflection are smooth water surfaces or tarmac (e.g. roads, parking lots).

Double bounce

Double bounce, or dihedral scattering, occurs when the radar pulse hits two relatively smooth surfaces that are perpendicular to each other. The returned signal is particularly strong, because the two successive reflections redirect much of the energy back towards the sensor.
Typical examples where double bounce occurs are buildings and other artificial, i.e. anthropogenic, structures.

Volume scattering

Volume scattering occurs when the radar pulse penetrates into a three-dimensional body and interacts with particles sensitive to the respective wavelength. The energy is scattered multiple times in various directions before parts of it are eventually returned to the sensor.
Typical examples of volume scattering occur in dry snowpacks, tree canopies and vegetated fields.

What do we see here?

The figure below visualises the use of a more advanced SAR methodology called ‘polarimetry‘ to locate different backscatter processes from remotely sensed data (in this case quad-polarimetric L-band data acquired by DLR’s E-SAR system).

On the lower left, you can see a Freeman and Durden decomposition which can be utilized to display different types of scattering mechanisms. This is a popular model-based decomposition that uses canonical scatter models to describe surface scattering (s), double-bounce scattering (d) and volume scattering (v), which correspond to blue, red and green in our example respectively.

Based on this knowledge it is then possible to perform a classification (Wishart) as a next step. The result is shown on the right side. Thus, it is possible not only to say which scattering process took place but also to quantify its intensity.

(a) Freeman and Durden decomposition (L-band, Oberpfaffenhofen test site). (b) 16-class Wishart classification initialized by using the three classes derived by the dominant Freeman and Durden amplitudes.
Source: Moreira et al. (2013)

Scattering models

We have developed three scenarios for radar backscatter. These explorable explanations are designed to help you understand the scatter mechanisms. Play around with the sliders and menus to see how radar pulses behave as they hit certain surfaces.

Field scenario

City scenario

Snow scenario

The Radar Equation

What does the radar equation describe?

The performance of every radar system is based on how well it can measure the echo returning from an object to the sensor. More precisely, it is important to know whether the echo can be quantified given the system’s measurement noise. To determine the proportion of energy returned from our target, we can use the radar equation (which can be found in various forms, all describing the same processes). The intensity of radiation received at the sensor is proportional to the amount of energy initially emitted by the radar system: the relationship between transmitted and received radiation is linear.

In this video Prof. Dr. Iain Woodhouse will explain the basic concepts concerning the radar equation in remote sensing applications.


In the image below you can see the radar equation and its main variables.

If you want to find out how the radar equation can be derived and where these parameters come from please visit Radartutorial.eu – The Radar Equation

The radar equation

Variables affecting the radar equation…

Emitted Energy

Active radar systems emit energy at a given wavelength and the related frequency. Since the energy density of the emitted radiation drops steeply with distance (proportional to r to the power of 4), the signal returning to our radar instrument is extremely small.
Example: for a distance r of 100 m, the returning echo is 100,000,000 times weaker than the originally emitted energy.

Antenna Gain

The antenna gain, referred to as G, defines the so-called ‘antenna effective area’ and describes how well an antenna collects the echoed energy. This variable can be affected by instrument noise, caused by temperature changes or other effects occurring over time. To account for this, internal calibrations are carried out for changes in the electronics gain, and the antenna model is validated. The resulting calibration data can then be used in ground processing to correct the image data.

Wavelength

The wavelength is an important factor affecting how much energy is returned to the sensor from a target on the Earth’s surface. Due to the relationship between wavelength and object geometry, the amount of returned radiation strongly depends on what the sensor is imaging.

Normalized radar cross section

The normalized radar cross section is a unitless measure of the reflective strength of a distributed radar target (scatterer). It is defined as the proportion of energy transmitted back to the sensor relative to the return of an idealized isotropic scatterer (which rarely exists in nature). In extreme double-bounce scenarios, σ0 can even be greater than 1 (i.e. above 0 dB).

Spatial Resolution

This term relates to the pixel size used by a radar sensor. More specifically, it describes the area over which the received energy is integrated. Larger resolution cells contain more scattering targets and therefore return more energy to the sensing instrument.
By normalizing the received signal to a reference area on the ground, we can compare backscatter intensities of radar sensors with different spatial resolutions.

Change in Energy Density

The lower part of our radar equation describes the rate at which the energy density drops over a given distance. The basis for this calculation is the surface of a sphere, defined as 4π times the square of the radius r (the distance to the sensing instrument). The radius corresponds to the spread of the energy and can be calculated by multiplying the speed of light c by the time t an emitted pulse needs to reach a target on the ground.
Since the signal has to not only reach the target but also return to the sensor, this term has to be considered twice. Consequently, the loss of energy is proportional to r to the power of 4.
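Putting these variables together, a common point-target form of the radar equation can be sketched as follows (the specific numbers in the example are illustrative, not taken from any real sensor):

```python
import math

def received_power(p_t, gain, wavelength, sigma, r):
    """Point-target radar equation:
    P_r = P_t * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4)
    G appears squared because the same antenna transmits and receives,
    and the R^4 term captures the two-way spherical spreading loss."""
    return (p_t * gain**2 * wavelength**2 * sigma) / ((4 * math.pi)**3 * r**4)

# Doubling the range cuts the echo by a factor of 2^4 = 16
p_near = received_power(1.0, 1000.0, 0.056, 1.0, 100e3)
p_far  = received_power(1.0, 1000.0, 0.056, 1.0, 200e3)
print(round(p_near / p_far))   # 16
```

The example confirms the point made above: because the spreading loss applies on the way out and on the way back, received power falls off with the fourth power of range.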

What is Speckle?

Salt’n pepper?

No, this is not a cooking lesson, but speckle certainly makes the interpretation of SAR imagery more spicy. If you look closely at any radar image you will find that, even for a single surface type, grey level variations may occur between adjacent resolution cells. These variations create a heterogeneous texture, characteristic of any radar image. This effect, caused by the coherent radiation used by radar systems, is called ‘speckle’. Due to the significant changes in brightness between neighbouring pixels, it is also referred to as the ‘salt and pepper’ effect.


Where does speckle originate from?

The effect of speckle in SAR imagery seems somewhat ‘random’ in its distribution and appearance. In fact it is closely related to principles underlying each and every SAR measurement. It may appear as noise, but it is not.

Each resolution cell of a SAR image contains a great number of scatterers (their number depending on the wavelength). This leads to an integrated response in amplitude and phase for each pixel that is not deterministic and appears random in its distribution. The observed backscatter in each pixel can thus be seen as the result of the convolution of the SAR impulse response with the coherent contribution of all scattering targets in that resolution cell.
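The coherent summation described above can be simulated directly: add up many unit-amplitude scatterers with random phases and look at the resulting intensity. A minimal sketch (the scatterer count and pixel count are arbitrary choices for the simulation):

```python
import cmath, math, random

random.seed(42)

def speckle_intensity(n_scatterers):
    """Coherent sum of many unit-amplitude scatterers with random phases.
    The resulting intensity |E|^2 fluctuates strongly from pixel to pixel."""
    field = sum(cmath.exp(1j * random.uniform(0, 2 * math.pi))
                for _ in range(n_scatterers))
    return abs(field) ** 2

n = 50
pixels = [speckle_intensity(n) for _ in range(2000)]
mean_intensity = sum(pixels) / len(pixels)
# The mean intensity approaches the number of scatterers, but individual
# pixels scatter widely around it -- that spread is the speckle.
print(round(mean_intensity / n, 2))
```

Even though every simulated pixel represents the ‘same’ surface, the intensities vary strongly, which is exactly why speckle appears random yet is not noise in the usual sense.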


How do we tackle speckle effects in SAR imagery?

To mitigate the effects of speckle in our SAR data there are numerous methods available.

Firstly, the choice of the spatial resolution influences the amount of speckle. As cell size decreases, the number of elemental scatterers decreases likewise.

Secondly, independent from our ground resolution we can use filter techniques to remove portions of the ‘salt’n pepper’ effects. An overview of the available options for data correction is given below.


Multi-looking

The first option to reduce speckle is the process of multi-looking. Here, observations of the same target are averaged to smooth out speckle effects. The so-called ‘looks’ basically represent smaller sub-apertures of the original aperture, which are processed separately and then averaged. However, the factor by which the speckle variance is reduced comes with a corresponding loss in spatial resolution. Multi-looking is therefore always a trade-off between spatial detail and the minimization of grey level variations, as you can see in the figures below.

Original Sentinel-1 VH intensity
Multi-looked (2) Sentinel-1 VH intensity
Multi-looked (4) Sentinel-1 VH intensity
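The variance reduction achieved by multi-looking can be verified with a small simulation. Assuming fully developed speckle over a homogeneous area (exponentially distributed intensity, a standard model), averaging L independent looks reduces the relative fluctuation by roughly √L:

```python
import random, statistics

random.seed(0)

def multilook(intensities, looks):
    """Average non-overlapping groups of `looks` samples -- incoherent
    averaging of independent looks, the core of multi-looking."""
    return [sum(intensities[i:i + looks]) / looks
            for i in range(0, len(intensities) - looks + 1, looks)]

# Fully developed speckle over a homogeneous area: exponential intensity
single_look = [random.expovariate(1.0) for _ in range(40000)]
four_look = multilook(single_look, 4)

cv1 = statistics.stdev(single_look) / statistics.mean(single_look)
cv4 = statistics.stdev(four_look) / statistics.mean(four_look)
# The coefficient of variation drops by roughly sqrt(looks) = 2,
# at the cost of 4x fewer (coarser) samples.
print(round(cv1 / cv4, 1))
```

The trade-off mentioned above is visible in the code: four-look averaging halves the grey-level fluctuation but leaves only a quarter of the samples.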

Speckle filtering

To preserve the original resolution of a SAR image, filters applying a moving window are often used. Such a filter changes the intensity of the central pixel depending on the intensities of all pixels within the window.

Many speckle filters rely on two basic assumptions:

  • spatial averaging of smaller elements over a large homogeneous region is equivalent to averaging many measurements of the whole surface,
  • the SAR image being filtered consists of large areas with homogeneous radar cross sections.

Different algorithms with varying degrees of complexity have been proposed to properly shape the impulse response of the filter within the window. Some of the most common single date speckle filters are explained below.

Try it yourself!

See the effects of different speckle filters on SAR data with this interactive tool.
Use the Buttons below to change the filter type. Drag the slider to change the window size of the filter.



Median filter

The median filter calculates the median value within a pre-defined window around each cell. It preserves edges quite well and is known for its applicability to speckle. While it visually improves the SAR image, it does not provide an estimate of the mean radar cross section; it therefore does not influence the speckle and image statistics in a physically meaningful way.
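A median filter is easy to sketch in plain Python (edges are handled here by clamping the window to the image, one of several common choices):

```python
import statistics

def median_filter(image, size=3):
    """Replace each pixel with the median of its size x size
    neighbourhood (the window is clamped at the image borders)."""
    h, w, k = len(image), len(image[0]), size // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            window = [image[j][i]
                      for j in range(max(0, y - k), min(h, y + k + 1))
                      for i in range(max(0, x - k), min(w, x + k + 1))]
            row.append(statistics.median(window))
        out.append(row)
    return out

# A homogeneous field with a single bright speckle spike
img = [[10, 10, 10],
       [10, 99, 10],
       [10, 10, 10]]
print(median_filter(img)[1][1])   # 10 -- the spike is removed
```

Because the median ignores extreme values entirely, the isolated bright pixel disappears while the surrounding values stay untouched, which is why this filter keeps edges sharper than simple averaging.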


Lee filter

The Lee filter, named after J.S. Lee, overcomes some of the limitations of boxcar filters. Based on a linear speckle model, this adaptive filter estimates the unspeckled intensity by minimizing the mean squared error. To preserve relatively sharp edges, the Lee filter utilizes directional masks to locate the most homogeneous parts of the respective sliding window, from which the local statistics are calculated.


Boxcar filter

Simple boxcar filters are a direct application of incoherent averaging. For homogeneous areas this method performs well, enhancing contrast between regions and reducing the randomization of grey level values. Its main limitations are blurring along sharp edges and the averaging of pixels irrespective of their role within the SAR scene. Unlike other common filtering methods, it also degrades the spatial resolution of the image cells.
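The edge-blurring limitation of the boxcar filter is easy to demonstrate; the following sketch averages each pixel with its neighbourhood (window clamped at the borders, as in the median example above):

```python
def boxcar_filter(image, size=3):
    """Replace each pixel with the mean of its size x size
    neighbourhood (the window is clamped at the image borders)."""
    h, w, k = len(image), len(image[0]), size // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            window = [image[j][i]
                      for j in range(max(0, y - k), min(h, y + k + 1))
                      for i in range(max(0, x - k), min(w, x + k + 1))]
            row.append(sum(window) / len(window))
        out.append(row)
    return out

# A sharp edge between a dark (0) and a bright (90) region
img = [[0, 0, 90, 90],
       [0, 0, 90, 90],
       [0, 0, 90, 90]]
filtered = boxcar_filter(img)
print(filtered[1][1], filtered[1][2])   # 30.0 60.0 -- the edge is blurred
```

Where the median filter left the edge intact, the boxcar mean smears intensity across it, turning a sharp 0-to-90 transition into a gradual ramp.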


Frost filter

This filtering method is quite similar to the Lee filter, as it is also based on the minimum mean square error criterion. However, the Frost filter does not use a linearized weighting model. Its performance is mainly controlled by a tuning factor that defines how strongly the filter suppresses speckle; better speckle reduction therefore often comes at the cost of edge preservation.