Pan Sharpening



Satellite imagery usually does not come as a plain RGB image. Instead, it comes as multiple intensity bands, where each band covers a certain narrow frequency range. The bands typically range from thermal infrared through near infrared to visible red, green and blue. Most satellites also carry a panchromatic (b/w) detector, which delivers better spatial resolution at the expense of a wider frequency spectrum. For example, the Landsat 7 (ETM+) satellite provides the visible bands at 30m resolution (spectral width ca. 100nm), whereas the panchromatic channel is provided at 15m resolution and spans the near infrared to blue spectrum (spectral width ca. 400nm).

Pansharpening is the task of merging color imagery with high-resolution panchromatic imagery of the same region, so that the resolution of the color imagery is lifted to the level of the panchromatic imagery; the color imagery is thus panchromatically sharpened. The procedure is as follows:

Let r, g, b and n be the red, green, blue and near infrared (NIR) intensities of the respective imagery bands, and let p be the panchromatic intensity. In the simplest case we just perform the so-called Brovey transformation, dividing the r, g and b intensities by the total intensity i=(r+g+b)/3 and multiplying by p.
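The Brovey transformation can be sketched per pixel as follows. This is a minimal illustration, not the merger tool's actual code; the function name is hypothetical.

```python
def brovey(r, g, b, p):
    """Brovey transform for a single pixel (hypothetical helper).

    Each visible band is divided by the total intensity i = (r+g+b)/3
    and multiplied by the panchromatic intensity p.
    """
    i = (r + g + b) / 3.0
    return (r * p / i, g * p / i, b * p / i)
```

Note that the ratio p/i is the same for all three bands, so hue is preserved while brightness is taken from the panchromatic channel.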

In the case of Landsat imagery, however, this yields unnatural colors. This is because the panchromatic channel spans not only the visible spectrum but also the near infrared spectrum, so the total intensity needs to be calculated as i=(r+g+b+n)/4. Even then the results are unsatisfactory, because the bands do not contribute evenly to the panchromatic channel and may exhibit additional scale and bias. To find the exact contribution of each channel we assume a linear relationship given as

$\displaystyle{ p = \alpha \cdot n + \beta \cdot r + \gamma \cdot g + \delta \cdot b + \kappa }$

The constants alpha, beta, gamma, delta and kappa can be determined by multiple regression. With those, the pansharpened intensities are computed as

$\displaystyle{ r'|g'|b' = r|g|b \cdot \frac{p}{\alpha \cdot n + \beta \cdot r + \gamma \cdot g + \delta \cdot b + \kappa} }$
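The regression and the sharpening step can be sketched as follows. This is an illustrative sketch, not the merger tool's implementation; the function names are hypothetical, and the least-squares fit is done via plain normal equations, where real code would use a linear-algebra library.

```python
def fit_pan_model(samples):
    """Least-squares fit of p ~ alpha*n + beta*r + gamma*g + delta*b + kappa.

    samples is a list of (n, r, g, b, p) tuples, one per pixel.
    Returns (alpha, beta, gamma, delta, kappa).
    """
    rows = [[n, r, g, b, 1.0] for (n, r, g, b, _p) in samples]
    ps = [s[4] for s in samples]
    m = 5
    # normal equations: (A^T A) x = A^T y
    a = [[sum(row[i] * row[j] for row in rows) for j in range(m)] for i in range(m)]
    y = [sum(row[i] * p for row, p in zip(rows, ps)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for c in range(m):
        piv = max(range(c, m), key=lambda k: abs(a[k][c]))
        a[c], a[piv] = a[piv], a[c]
        y[c], y[piv] = y[piv], y[c]
        for k in range(c + 1, m):
            f = a[k][c] / a[c][c]
            for j in range(c, m):
                a[k][j] -= f * a[c][j]
            y[k] -= f * y[c]
    # back substitution
    x = [0.0] * m
    for i in range(m - 1, -1, -1):
        x[i] = (y[i] - sum(a[i][j] * x[j] for j in range(i + 1, m))) / a[i][i]
    return tuple(x)


def pansharpen_pixel(n, r, g, b, p, coeffs):
    """Scale the visible bands by pan over the modelled pan intensity."""
    alpha, beta, gamma, delta, kappa = coeffs
    i = alpha * n + beta * r + gamma * g + delta * b + kappa
    return (r * p / i, g * p / i, b * p / i)
```

When the measured p agrees with the modelled intensity, the scale factor is 1 and the colors pass through unchanged; where the panchromatic channel carries extra detail, the visible bands are scaled accordingly.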

The merger application applies the above principle. The five input channels pan, nir, red, green and blue are specified on the command line, and the pansharpened imagery is produced as output. For convenience, the data directory contains sample imagery in the data/landsat directory. The sample channels are small crops of the original full-size Landsat 7 bands of Bavaria, Germany. The original imagery is available at the Global Land Cover Facility (GLCF, path=193 row=26 year=1999, FTP). In Landsat notation, the channels panchro, nir, red, green and blue correspond to the bands 80, 40, 30, 20 and 10. For the particular sample data, multiple regression analysis of the bands reveals that

$\displaystyle{ p = 0.363 \cdot n + 0.181 \cdot r + 0.062 \cdot g + 0.124 \cdot b - 0.021 }$

The command line to perform the corresponding pansharpening operation with the merger tool is:

grid/merger pan.tif nir.tif red.tif green.tif blue.tif pansharpened.tif

The resulting pansharpened output image still does not look natural, because the input channels are not normalized and are rather dark. To obtain bright natural colors the output needs to be processed further with a standard image manipulation tool such as GIMP or ImageMagick. Typically, a brightness/gamma correction together with a tonal adjustment suffices.
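The effect of such a gamma correction on a single intensity value can be sketched as below. This is a hypothetical helper for illustration, not part of the merger tool; in practice the correction would be applied by the image editor across the whole image.

```python
def gamma_correct(v, gamma, vmax=255.0):
    """Apply gamma correction to one intensity value (hypothetical helper).

    The value is normalized to [0, 1], raised to the power 1/gamma and
    rescaled; gamma > 1 brightens dark pixels while leaving 0 and vmax fixed.
    """
    return vmax * (v / vmax) ** (1.0 / gamma)
```

A dark mid-tone such as 64 is lifted substantially for gamma = 2.2, while black and white stay in place, which is why the correction brightens the unnormalized Landsat channels without clipping.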

If a true-color background image is available, the pansharpened image can be matched against the background to yield exactly the same natural-looking colors. This is described on the ColorMatching page.
