MGEM Mixed Pixel Lab
Introduction
Welcome to the Malcolm Knapp Research Forest! During your time in the MGEM program, you will be exposed to a wide range of remote sensing and GIS technologies, data sets, and workflows that will equip you to answer questions about our environment. As you will learn (or have learned) in GEM 520, remote sensing data sets can typically be characterized by three core elements: temporal resolution, spatial resolution, and spectral resolution. To review:
Temporal resolution refers to the revisit time of a sensor, i.e., how long it takes a satellite-based sensor to complete full coverage of the earth. How quickly a satellite completes full coverage of the earth is determined by its orbit - the higher the satellite, the longer it takes to complete an orbit. There are 3 key types of earth orbits: low earth orbits, medium earth orbits, and high/geosynchronous orbits. Low Earth Orbit satellites are commonly 160-2,000 km above the earth, and complete a full orbit in 90 minutes to 2 hours. In a 24-hour period, low orbit satellites tend to cover the earth twice, providing daytime and nighttime observations. As a result, low earth orbits are ideal for scientific and weather programs that require high temporal resolutions. The altitude of Medium Earth Orbit satellites typically falls between 2,000 and 35,786 km. Most famously, the satellites of the Global Positioning System (GPS) are in a medium earth orbit called the ‘semi-synchronous’ orbit (26,560 km from the earth’s centre), which takes 12 hours to complete. Finally, High Earth Orbits are characterized by an altitude of 35,786 km or more above the earth. At this altitude, the orbital period of the satellite matches the rotation of the Earth. Since this results in a relatively consistent observation swath on the earth’s surface, these orbits are commonly referred to as ‘geosynchronous’ orbits. If you are curious about the orbits of certain satellites, I highly encourage you to spend some time on SatelliteXplorer, where you can visualize the orbit of specific missions, and track satellites’ live locations!
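The relationship between altitude and orbital period described above follows from Kepler's third law: for a circular orbit, the period depends only on the orbit's radius (Earth's radius plus altitude). The short sketch below illustrates this for the three orbit classes; the specific altitudes chosen are illustrative assumptions, not values tied to any particular mission.

```python
import math

MU_EARTH = 398_600.4418  # Earth's standard gravitational parameter, km^3/s^2
R_EARTH = 6_371.0        # mean Earth radius, km

def orbital_period_hours(altitude_km: float) -> float:
    """Period of a circular orbit at the given altitude, via Kepler's third law."""
    a = R_EARTH + altitude_km  # orbital radius (semi-major axis), km
    period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
    return period_s / 3600.0

# Illustrative altitudes for the three orbit classes discussed in the text.
# Note: the GPS orbit's 26,560 km is measured from the earth's centre,
# so its altitude is 26,560 km minus Earth's radius.
for name, alt_km in [("Low Earth Orbit (~400 km)", 400),
                     ("GPS semi-synchronous", 26_560 - R_EARTH),
                     ("Geosynchronous", 35_786)]:
    print(f"{name}: ~{orbital_period_hours(alt_km):.1f} h")
```

Running this reproduces the numbers in the text: roughly 1.5 hours for a low orbit, about 12 hours for GPS, and about 24 hours for a geosynchronous orbit.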
The spatial resolution of a sensor refers to the dimensions of a pixel captured by that sensor (the higher the resolution, the smaller the pixel). Depending on instrument design and orbit, satellite-based remote sensing platforms may provide data with resolutions ranging from coarse (i.e. 250m - 1km MODIS pixels) to fine scales (i.e. 3m PlanetScope). As a general rule of thumb, high spatial resolution typically comes at the cost of temporal resolution. For example, the instruments aboard MODIS capture 36 bands at spatial resolutions ranging from 250m to 1km, with a global revisit time of 1-2 days. Landsat-8, on the other hand, delivers observations of 8 bands at 30m (as well as 1 band at 15m and 2 bands at 100m). The global revisit time of a single Landsat satellite is 16 days. Since these satellites operate in a constellation, we are able to obtain full global coverage every 8 days.
Finally, the spectral resolution refers to the unique portions of the electromagnetic spectrum captured by a sensor, quantifying both the number and width of spectral bands. You may recall that multispectral imagery typically refers to sensors capturing 3-10 bands, whilst hyperspectral sensors can capture hundreds of bands. Using these bands, we can extract valuable information about land cover, moisture, vegetation vigor, etc. In this lab, we will use the Red, Green, and Blue bands to visualize ‘True Colour’ images of the MKRF research forest. In addition, we will use the Normalized Difference Vegetation Index (NDVI), which uses the Red and Near-Infrared bands to quantify vegetation ‘greenness’. In brief, healthy vegetation absorbs most of the visible light (Red band) and reflects most of the Near-Infrared light, with the inverse true for unhealthy or sparse vegetation.
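The NDVI described above is a simple ratio of the Red and Near-Infrared bands. A minimal sketch of the calculation follows; the reflectance values used are illustrative assumptions, not measurements from any specific sensor.

```python
def ndvi(nir: float, red: float) -> float:
    """NDVI = (NIR - Red) / (NIR + Red); ranges from -1 to +1."""
    return (nir - red) / (nir + red)

# Illustrative reflectance values (assumed, not sensor-derived):
print(ndvi(nir=0.50, red=0.05))  # dense, healthy vegetation -> ~0.82
print(ndvi(nir=0.30, red=0.25))  # sparse or stressed vegetation -> ~0.09
```

Because healthy vegetation reflects strongly in the NIR and absorbs in the Red, dense canopy pushes NDVI toward +1, while bare ground and water sit near or below zero.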
Mixed Pixel Problem
Regardless of the spatial resolution of a chosen remote sensing data product, it is always important to remember that each pixel represents an aggregated spectral response based on all land cover within the pixel. For example, a 30m Landsat pixel may capture the spectral signature of multiple landscape features like a river, building and forest edge. This phenomenon is commonly referred to as the ‘mixed pixel problem’, and is an important consideration in remote sensing applications.
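One common way to reason about the mixed pixel problem is the linear mixing model: the pixel's recorded value in each band is approximated as the area-weighted average of the 'pure' signatures of the cover types inside it. The sketch below assumes some illustrative Red and NIR reflectance values for three cover types; neither the values nor the cover fractions come from real data.

```python
# Illustrative 'pure' signatures (Red and NIR reflectance) for three cover
# types. These values are assumptions for demonstration, not measured spectra.
ENDMEMBERS = {
    "water":    {"red": 0.03, "nir": 0.01},
    "forest":   {"red": 0.04, "nir": 0.45},
    "building": {"red": 0.25, "nir": 0.30},
}

def mixed_pixel(fractions: dict) -> dict:
    """Linear mixing: the pixel's response in each band is the
    area-weighted sum of the signatures of the covers it contains."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return {
        band: sum(frac * ENDMEMBERS[cover][band]
                  for cover, frac in fractions.items())
        for band in ("red", "nir")
    }

# A 30m pixel spanning a river (40%), forest edge (50%), and a building (10%):
pixel = mixed_pixel({"water": 0.4, "forest": 0.5, "building": 0.1})
print(pixel)
```

Under these assumed values, the mixed pixel's spectral response resembles none of its three components: a classifier forced to assign it a single label must misrepresent more than half of the ground it covers.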
Landscape-level analysis of satellite data often requires that pixels be classified using comprehensive categories or descriptors. In the example shown above, we may wish to classify water (blue), buildings (red), grass (green), and sand (yellow). In this exercise, you will simulate the spatial resolutions of three popular satellite remote sensing platforms: PlanetScope, Sentinel-2, and Landsat. (Some basic information for these satellites is summarized in the table below, with additional information available via the links provided under the table of contents.) By mapping out “pixels” on the landscape at MKRF, you will investigate the effect of the mixed pixel problem on your ability to classify the landscape into meaningful categories. The main goals for the day are a) to experience what the spatial resolution of some global satellite data sets look like on the ground, and b) to understand the limitations of representing complex land cover through the classification of satellite data pixels.
Pixel Mapping
The first part of this exercise involves mapping out your own ‘pixels’ in the MKRF research forest, and observing the landscape features that each of these pixels contains. For this exercise, you need to form 9 groups, each of which will be provided with a compass and transect tape. You will also need to assign 1 note-taker to record your observations in the field. To guide you in this exercise, we have laid out 9 points that will serve as your ‘field sites’ - these sites are marked on the interactive map below. Each group will be responsible for analyzing 6 sites (group 1: sites 1-6, group 2: sites 2-7, … group 9: sites 9, 1, 2, 3, 4 & 5). Before heading out, take a few minutes to explore the map and its layers. You will notice that you can ‘slide’ between true colour and NDVI visualizations: you will use this functionality later in the exercise, but don’t need to focus on this when you’re in the field.