MGEM Mixed Pixel Lab

Author

Ramon Melser

Introduction

Welcome to the Malcolm Knapp Research Forest! During your time in the MGEM program, you will be exposed to a wide range of remote sensing and GIS technologies, data sets and workflows that equip you to answer questions about our environment. As you will learn/have learned in GEM 520, remote sensing data sets can typically be characterized by three core elements: temporal resolution, spatial resolution, and spectral resolution. To review:

Temporal resolution refers to the revisit time of a sensor - that is, how long it takes a satellite-based sensor to complete full coverage of the earth. How quickly a satellite completes full coverage is determined by its orbit - the higher the satellite, the longer it takes to complete an orbit. There are 3 key types of earth orbits: low earth orbits, medium earth orbits, and high/geosynchronous orbits. Low Earth Orbit satellites are commonly 160-2,000 km above the earth, and complete a full orbit in 90 minutes to 2 hours. In a 24-hour period, low orbit satellites tend to cover the earth twice, providing daytime and nighttime observations. As a result, low earth orbits are ideal for scientific and weather programs that require high temporal resolutions. The altitude of Medium Earth Orbit satellites typically falls between 2,000 and 35,000 km. Most famously, the satellites of the Global Positioning System (GPS) are in a medium earth orbit called the ‘semi-synchronous’ orbit (an orbital radius of roughly 26,560 km, or about 20,200 km above the earth), which takes 12 hours to complete. Finally, High Earth Orbits are characterized by an altitude of roughly 35,786 km or more above the earth. At this altitude, the orbital period of the satellite matches the rotation of the Earth. Since this results in a relatively consistent observation swath on the earth’s surface, these orbits are commonly referred to as ‘geosynchronous’ orbits. If you are curious about the orbits of certain satellites, I highly encourage you to spend some time on SatelliteXplorer, where you can visualize the orbit of specific missions and track satellites’ live locations!
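The altitude-period relationship described above follows directly from Kepler's third law, T = 2π√(a³/μ). A quick sketch in Python (the constants are standard published values, not part of the lab data) reproduces the periods quoted for each orbit type:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000        # mean Earth radius, m

def orbital_period_s(altitude_m: float) -> float:
    """Period of a circular orbit from Kepler's third law: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_m  # semi-major axis = Earth radius + altitude
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH)

# A 400 km low earth orbit completes in roughly 92-93 minutes
print(orbital_period_s(400_000) / 60)
# GPS 'semi-synchronous' orbit (~20,200 km altitude): roughly 12 hours
print(orbital_period_s(20_200_000) / 3600)
# Geosynchronous altitude (~35,786 km): roughly one day
print(orbital_period_s(35_786_000) / 3600)
```

Notice that the period depends on the orbital radius (Earth radius plus altitude), which is why the GPS orbit is often quoted by its ~26,560 km radius rather than its altitude.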

The spatial resolution of a sensor refers to the dimensions of a pixel captured by that sensor (the higher the resolution, the smaller the pixel). Depending on instrument design and orbit, satellite-based remote sensing platforms may provide data with resolutions ranging from coarse (i.e. 250m - 1km MODIS pixels) to fine scales (i.e. 3m Planetscope). As a general rule of thumb, high spatial resolutions typically come at the cost of temporal resolutions. For example, the instruments aboard MODIS capture 36 bands at spatial resolutions ranging from 250m to 1km, with a global revisit time of 1-2 days. Landsat-8, on the other hand, delivers observations of 8 bands at 30m (as well as 1 band at 15m and 2 bands at 100m). The global revisit time of a single Landsat satellite is 16 days. Since these satellites operate as a constellation, we are able to obtain full global coverage every 8 days.

Finally, spectral resolution refers to the unique portions of the electromagnetic spectrum captured by a sensor, quantifying both the number and width of spectral bands. You may recall that multispectral imagery typically refers to sensors capturing 3-10 bands, whilst hyperspectral sensors can capture hundreds of bands. Using these bands, we can extract valuable information about land cover, moisture, vegetation vigor, etc. In this lab, we will use the Red, Green, and Blue bands to visualize ‘True Colour’ images of the MKRF research forest. In addition, we will use the Normalized Difference Vegetation Index (NDVI), which uses the Red and Near-Infrared bands to quantify vegetation ‘greenness’. In brief, healthy vegetation absorbs most visible light (Red band) and reflects most Near-Infrared light, with the inverse true for unhealthy or sparse vegetation.
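The NDVI described above is computed as (NIR - Red) / (NIR + Red), yielding values between -1 and 1. A minimal NumPy sketch, using illustrative (not measured) reflectance values for a healthy forest pixel and a bare soil pixel:

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red + 1e-12)  # tiny epsilon avoids divide-by-zero

# Toy reflectances: healthy forest absorbs red and reflects NIR strongly;
# bare soil reflects both moderately (illustrative numbers only)
red = np.array([0.04, 0.25])
nir = np.array([0.50, 0.30])
print(ndvi(red, nir))  # forest yields a high NDVI, soil a value near zero
```

Dense, healthy vegetation typically produces NDVI values approaching 1, while bare ground sits near 0 and open water is often negative.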

Mixed Pixel Problem

Regardless of the spatial resolution of a chosen remote sensing data product, it is always important to remember that each pixel represents an aggregated spectral response based on all land cover within the pixel. For example, a 30m Landsat pixel may capture the spectral signature of multiple landscape features, such as a river, a building, and a forest edge. This phenomenon is commonly referred to as the ‘mixed pixel problem’, and is an important consideration in remote sensing applications.

Landscape-level analysis of satellite data often requires that pixels be classified using comprehensive categories or descriptors. In the example shown above, we may wish to classify water (blue), buildings (red), grass (green), and sand (yellow). In this exercise, you will simulate the spatial resolutions of three popular satellite remote sensing platforms: PlanetScope, Sentinel-2, and Landsat. (Some basic information for these satellites is summarized in the table below, with additional information available via the links provided under the table of contents.) By mapping out “pixels” on the landscape at MKRF, you will investigate the effect of the mixed pixel problem on your ability to classify the landscape into meaningful categories. The main goals for the day are a) to experience what the spatial resolution of some global satellite data sets look like on the ground, and b) to understand the limitations of representing complex land cover through the classification of satellite data pixels.

Pixel Mapping

The first part of this exercise involves mapping out your own ‘pixels’ in the MKRF research forest, and observing the landscape features that each of these pixels contains. For this exercise, you will form 9 groups; each group will be provided with a compass and transect tape. You will also need to assign 1 note-taker to mark down your observations in the field. To guide you in this exercise, we have laid out 9 points that will serve as your ‘field sites’ - these sites are marked on the interactive map below. Each group will be responsible for analyzing 6 sites (group 1: sites 1-6, group 2: sites 2-7, … group 9: sites 9, 1, 2, 3, 4 & 5). Before heading out, take a few minutes to explore the map and its layers. You will notice that you can ‘slide’ between true colour and NDVI visualizations: you will use this functionality later in the exercise, but don’t need to focus on it while you’re in the field.

When you are ready:

  1. Locate your first study site on the interactive map in Part 2. In the field, these sites will be marked with a cone. You can also enable your live GPS location on the map in case you are not sure if you are in the right place.
  2. Map out a 3-meter PlanetScope pixel around the cone, using the compass and transect tape provided. Orient your imaginary grid towards true north, and mark the corners of the pixel with your group members. (HINT: the magnetic declination at Loon Lake is +16°, so you will have to adjust your compass accordingly. If you are using a compass app on your phone, make sure that true north is enabled.)
  3. Repeat step 2 for a 10-meter Sentinel-2 pixel and a 30-meter Landsat pixel.
  4. Decide if the pixel is mixed or homogeneous and note down your response.
  5. As a group, discuss and record the features visible on the landscape.
  6. Based on the recorded features, come up with a land cover class to assign to the pixel for each platform. This step is somewhat subjective; you can disagree with your group members!
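The compass correction and pixel layout in the steps above can be sketched in a few lines. A minimal sketch, assuming an east-positive declination of +16° (from the hint) and expressing the corners of each north-aligned pixel as metre offsets east/north of the cone:

```python
DECLINATION_DEG = 16.0  # magnetic declination at Loon Lake (east-positive)

def magnetic_from_true(true_bearing_deg: float) -> float:
    """Compass (magnetic) bearing to set in order to walk a given true bearing."""
    return (true_bearing_deg - DECLINATION_DEG) % 360

def pixel_corners(side_m: float):
    """Corner offsets (east_m, north_m) of a north-aligned square pixel
    centred on the cone."""
    h = side_m / 2
    return [(-h, -h), (-h, h), (h, h), (h, -h)]

print(magnetic_from_true(0))   # to face true north, set the compass to 344 degrees
for side in (3, 10, 30):       # PlanetScope, Sentinel-2, Landsat
    print(side, pixel_corners(side))
```

In practice you would walk half the side length from the cone along true north, east, south, and west to establish the grid, then pace out the corners.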

Discussion Questions

  1. Were there any sites dominated by one particular land cover class across all three resolutions? Discuss with your group in context of the mixed pixel problem.
  2. Imagine each pixel in the year 2000. Look for clues about the site’s history. Do you think that you would have assigned it to a different land cover class 20 years ago?

Once you have filled out the table, at the end of the lab, click the ‘pdf’ button to export your table.

Imagery Comparison

Now that you have taken detailed field notes for each of the sites, we will return to the classroom. There, you will compare your observations to the Landsat, Sentinel-2 and Planetscope satellite imagery of MKRF displayed in the interactive map above. In your groups:

  1. Locate each site on the images of the study area and identify the pixel in the imagery corresponding to the site.
  2. Describe the pixel in the datasheet (pixel characteristics column). What is its color? Does it have high or low reflectance? (If you’re color blind, don’t worry about wavelength. Just consider how much light is being reflected.)
  3. Look at the NDVI images and estimate the value for the pixel at each site.

Discussion Questions

  1. Why do you think that the range of NDVI values differs so much between sensors?

  2. Do you see much difference in NDVI values between sparsely and densely vegetated areas? Why do you think this is?

Unsupervised Classification

Now that we have some understanding of the mixed pixel problem across various pixel resolutions, let’s investigate how these principles impact our ability to classify remote sensing data into meaningful classes. As you will learn in GEM 520, there are two core classification approaches: supervised and unsupervised classification. In brief, supervised classification leverages a set of training data to classify pixels. For example, you may attribute some point data with land cover classes based on a field survey or photo interpretation, and then train a model which assigns forested vs. non-forested classes based on NDVI values. Unsupervised classification instead groups pixels that share spectral properties, and the resulting clusters are assigned labels afterwards. In the example below, we have performed an unsupervised classification on the MKRF Planetscope data (RGB & NIR bands). In your groups, compare the classification to the RGB imagery, and assign some meaningful names to each identified class in the provided table. Once you have completed this step, zoom in to the plots we visited yesterday, and answer the discussion questions.
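To see how an unsupervised classifier groups pixels before any labels exist, here is a minimal k-means sketch on synthetic 4-band (RGB + NIR) data. The spectra, cluster count, and the bare-bones k-means loop are illustrative stand-ins, not the actual workflow used to produce the MKRF classification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 4-band pixels (R, G, B, NIR): two spectrally distinct groups,
# standing in for e.g. forest vs. water -- illustrative values, not real data
forest = rng.normal([0.03, 0.06, 0.04, 0.50], 0.02, size=(500, 4))
water = rng.normal([0.05, 0.07, 0.09, 0.02], 0.02, size=(500, 4))
pixels = np.vstack([forest, water])

def kmeans(X, k, iters=20):
    """Minimal k-means: cluster pixels by spectral similarity; the analyst
    names the clusters afterwards by inspecting their mean spectra."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest cluster centre
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        # Recompute each centre as the mean of its assigned pixels
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(pixels, k=2)
print(np.round(centers, 2))  # one cluster is NIR-bright (vegetation-like),
                             # the other NIR-dark (water-like)
```

The key point mirrors the exercise: the algorithm only separates spectral groups; deciding that the NIR-bright cluster is "forest" is the human labelling step you perform against the RGB imagery.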

Discussion Questions

  1. Did the class names you assigned correspond to the land cover notes you took at your plots?
  2. Do you think the unsupervised classification accurately represents the key land cover types of MKRF? Why or why not?
  3. Would you retroactively change any of the land cover notes you took in the field, now that you have seen the classified map?