<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wonkpedia.org/mediawiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Vruba</id>
	<title>Wonkpedia - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wonkpedia.org/mediawiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Vruba"/>
	<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php/Special:Contributions/Vruba"/>
	<updated>2026-04-30T01:55:53Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.13</generator>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2218</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2218"/>
		<updated>2022-07-21T01:58:55Z</updated>

		<summary type="html">&lt;p&gt;Vruba: /* Processing levels */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this is the heritage of Cold War IMINT workers wanting shadows to estimate structure heights. (If you image around noon, shadows in the tropics are nearly vertical. This gives you depth perception problems, like the ones you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to do one full orbit. This is another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
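&lt;br /&gt;
As a trivial sketch of that half-phasing arithmetic (the function name is ours, and it assumes evenly phased satellites in identical orbits):&lt;br /&gt;
&lt;br /&gt;
```python
# Effective revisit of an evenly phased constellation: the satellites
# divide one repeat cycle between them. Illustrative sketch only.
def effective_revisit(repeat_days, n_satellites):
    return repeat_days / n_satellites

print(effective_revisit(16, 2))  # Landsat 8 + 9: 8.0 days
print(effective_revisit(10, 2))  # Sentinel-2A + 2B: 5.0 days
```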
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the reach within 30° of nadir is quite large: about 400 km or 250 mi to either side of the ground track, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
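&lt;br /&gt;
The light trig, as a flat-Earth sketch (the true reach is slightly larger because of Earth’s curvature):&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Reach of a satellite at a Landsat-like 705 km altitude when it is
# allowed to look up to 30 degrees off-nadir. Flat-Earth approximation.
altitude_km = 705
max_off_nadir = math.radians(30)

reach_km = altitude_km * math.tan(max_off_nadir)
print(round(reach_km))  # about 407 km from the ground track
```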
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimensions we think of most often are spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
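&lt;br /&gt;
A linear-mixing sketch of the napkin case (all reflectance values below are invented for illustration, not sensor specs):&lt;br /&gt;
&lt;br /&gt;
```python
# Why a 10 cm napkin can show up in 25 cm pixels: the pixel's value is
# roughly the area-weighted mix of everything inside it.
gsd = 0.25            # pixel edge, meters
napkin = 0.10         # napkin edge, meters
r_napkin = 0.80       # bright white (assumed reflectance)
r_asphalt = 0.05      # fresh dark asphalt (assumed reflectance)

# Best case: the napkin falls entirely inside one pixel.
area_fraction = (napkin / gsd) ** 2   # 0.16 of the pixel's area
pixel = area_fraction * r_napkin + (1 - area_fraction) * r_asphalt
print(round(pixel, 3), round(pixel / r_asphalt, 1))  # 0.17, 3.4x background
```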
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor”, we understand this by default to mean spatial resolution. This is also called ground sample distance (GSD) or ground resolved distance (GRD): the size of the pixels of the data. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length dimension: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. If we think about standard optical instruments, which are basically telescopes with CCDs, they do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – the fraction of arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth at a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) and looking at nadir (straight down). That’s a best case. When looking to the side, at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked consideration about spatial resolution is that pixel area is the square of pixel side length, and area is what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is more than 3× as clear as 30 m imagery, all else being equal.&lt;br /&gt;
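&lt;br /&gt;
The arithmetic, spelled out (the function name is ours):&lt;br /&gt;
&lt;br /&gt;
```python
# Pixel count over a fixed area, not pixel edge length, is what scales
# interpretability. Square pixels assumed.
def pixels_per_square_meter(gsd_m):
    return (1.0 / gsd_m) ** 2

print(pixels_per_square_meter(1.0))    # 1.0
print(pixels_per_square_meter(0.5))    # 4.0 -- not 2x
print(pixels_per_square_meter(0.25))   # 16.0 -- not 4x
# 10 m vs 30 m imagery: 9x the pixels, not 3x.
print(round(pixels_per_square_meter(10) / pixels_per_square_meter(30), 6))  # 9.0
```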
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered by the atmosphere less than visible light, and far less than blue light in particular. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These are called the panchromatic (pan) band and, collectively, the multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, for example, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits (256 to 1024 levels); newer and better ones are typically 12 to 14 (4096 to 16384 levels).&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks on the Sahara, despite the tracks being made out of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford much imagery at the highest spatial resolutions. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
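&lt;br /&gt;
A quick Monte Carlo check of the 1/sqrt(n) rule under those idealizing assumptions (independent, unit-variance Gaussian noise; all numbers here are illustrative):&lt;br /&gt;
&lt;br /&gt;
```python
import math
import random
import statistics

# Average n noisy measurements many times and measure the spread of the
# averages; it should land near 1/sqrt(n).
random.seed(0)
n = 4
means = [
    statistics.fmean(random.gauss(0.0, 1.0) for _ in range(n))
    for _ in range(20000)
]
print(round(statistics.pstdev(means), 2))  # close to 1/sqrt(4) = 0.5
```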
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding errors or aliasing: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of talking about effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
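&lt;br /&gt;
A sketch of the quantization-noise idea: rounding a signal in [0, 1] to b bits adds noise with an RMS of about q/sqrt(12), where q is the step size (a standard result for uniform rounding error; the simulation below is illustrative):&lt;br /&gt;
&lt;br /&gt;
```python
import math
import random

# Quantize random values to 8 bits and compare the measured RMS error
# against the theoretical q / sqrt(12).
random.seed(0)
bits = 8
levels = 2 ** bits
q = 1.0 / (levels - 1)  # step size for a signal spanning [0, 1]

errors = []
for _ in range(20000):
    v = random.random()
    quantized = round(v * (levels - 1)) * q
    errors.append(quantized - v)

rms = math.sqrt(sum(e * e for e in errors) / len(errors))
ratio = rms / (q / math.sqrt(12))
print(round(ratio, 1))  # close to 1.0
```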
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is using air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh scattering from aerosols, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. (Equivalently, 1.567 to 1.651 μm.) || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
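&lt;br /&gt;
As a minimal sketch of band math, here is NDVI computed from red and NIR reflectances (the input values are invented for illustration, not from a real scene):&lt;br /&gt;
&lt;br /&gt;
```python
# NDVI: normalized difference of NIR and red. Healthy vegetation is
# bright in NIR and dark in red, so it scores high.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

print(round(ndvi(0.50, 0.05), 2))  # dense healthy vegetation: 0.82
print(round(ndvi(0.25, 0.20), 2))  # bare soil: 0.11
print(round(ndvi(0.05, 0.10), 2))  # open water: -0.33
```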
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue bands of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, and therefore you should favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will be (typically) some multiple of 2 larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
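&lt;br /&gt;
As one example of a very basic automated method, a Brovey-style transform scales each multispectral band by the ratio of pan to the multispectral mean. A single-pixel sketch (values invented; real pipelines operate on whole resampled rasters):&lt;br /&gt;
&lt;br /&gt;
```python
# Brovey-style pansharpening for one pixel: keep the band ratios from
# the multispectral data, take overall brightness from the pan band.
def brovey(r, g, b, pan):
    mean = (r + g + b) / 3.0
    scale = pan / mean
    return (r * scale, g * scale, b * scale)

sharp = brovey(0.30, 0.40, 0.20, 0.45)
print(tuple(round(v, 2) for v in sharp))  # (0.45, 0.6, 0.3)
```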
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;br /&gt;
&lt;br /&gt;
==== Georeferencing and orthorectification ====&lt;br /&gt;
&lt;br /&gt;
''Much of this applies outside optical as well – move?''&lt;br /&gt;
&lt;br /&gt;
A raw satellite image of land is an angled view of a rough surface. (Even nominally nadir-pointing satellites acquire imagery that is off-nadir toward the edges of each scene.) If you imagine riding on a satellite and looking off to, say, the west, you will see the eastern sides of hills and buildings at flatter angles than you see the western sides – if you can see them at all. To turn a raw image into something that is projected orthographically, like a map, you have to use a terrain model – a 3D map of the planet’s surface. You can then use information about where the satellite was and the angle its sensor was pointing to project each pixel out and see at what latitude and longitude its line of sight must have intersected the ground. Finally, you move all the pixels to their coordinates in some convenient projection, and you’ve essentially taken the image out of perspective and made it orthographic.&lt;br /&gt;
&lt;br /&gt;
Except:&lt;br /&gt;
&lt;br /&gt;
* Earth’s surface is rough at every scale, and even “porous” or multiply defined in the sense that there are features like leafless trees that make it hard to define where the optical surface actually ''is'' at any given scale.&lt;br /&gt;
* There is no perfectly accurate, precise,&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Accuracy_and_precision&amp;lt;/ref&amp;gt; global, completely up-to-date terrain model of the Earth, let alone at a reasonable price. SRTM is pretty good but it’s only about 30 m, stops short of the Arctic, and is 20+ years out of date: there are entire lakes, highway cuts, and reclaimed islands that don’t exist in it.&lt;br /&gt;
* Satellites typically only know where they’re pointing to within the equivalent of about 10 pixels (which, to be fair, is usually an extremely small fraction of a degree), so the pointing data can only narrow things down, not actually tell you where you are.&lt;br /&gt;
* Continental drift means that a continent can move by easily 1 px over the lifetime of a high-end commercial satellite; a major earthquake can discontinuously distort a small region by several m.&lt;br /&gt;
* To properly pin down an image (i.e., to check the reported pointing angle), you need to know the exact 3D location of 3 visible points within it, and realistically more like 10.&lt;br /&gt;
* All these errors can combine.&lt;br /&gt;
* No matter what, you can’t recover occluded features, i.e. things you can’t see in the original data. If you want a high-res satellite image of something like a canyon, you realistically need half a dozen images at very specific angles, which is extremely hard.&lt;br /&gt;
&lt;br /&gt;
We could go on! Georeferencing and orthorectification is a difficult problem. It’s easier for lower-resolution satellites, because a given angular error comes out to fewer pixels. Also, survey-mode satellites like Landsat and Sentinel-2, which are nadir-pointing anyway, put a lot of effort into doing this well. Two Landsat scenes will almost always coregister to well within a pixel. Sentinel-2 is a little less reliable, especially toward the poles. Commercial imagery is often displaced by far more than you would think. One way to see this is to step back in Google Earth Pro’s history tool, especially somewhere relatively remote and rugged.&lt;br /&gt;
&lt;br /&gt;
Here’s a farm in Nepal: 28.553, 84.2415. Just step back in time and watch it jump around underneath the pin. If you really want to be scared, watch the cliff to its north. This is why imagery analysts who understand imagery pipelines rarely use a whole lot of significant digits in their coordinates! If all you have to go on is a satellite image, you don’t really know where anything on Earth is, in absolute terms, to better than a few meters at best.&lt;br /&gt;
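&lt;br /&gt;
As a rule of thumb for how many digits are actually meaningful, each decimal place of a degree is a factor of ten in ground distance. A back-of-the-envelope sketch (the ~111,320 m per degree of latitude is a round approximation, not a geodetic constant):&lt;br /&gt;
&lt;br /&gt;
```python
import math

def degree_precision_m(decimal_places, latitude_deg=0.0):
    """Approximate ground distance represented by the last digit of a
    lat/lon coordinate with the given number of decimal places.
    Returns (north-south meters, east-west meters)."""
    m_per_deg_lat = 111_320.0  # rough global figure
    step = 10.0 ** -decimal_places
    ns = step * m_per_deg_lat
    ew = step * m_per_deg_lat * math.cos(math.radians(latitude_deg))
    return ns, ew

# Four decimal places (like 28.5530) is already ~11 m north-south,
# which is about the limit of what uncontrolled imagery supports.
ns, ew = degree_precision_m(4)
```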
&lt;br /&gt;
==== Atmospheric correction ====&lt;br /&gt;
&lt;br /&gt;
Over long distances, even in clear weather, the atmosphere scatters and absorbs light. This is why distant hills are low-contrast and blueish (blue light is scattered more). What a satellite actually measures is called top-of-atmosphere radiance, or TOA. This is a measurement of nothing more than the amount of energy received per second, per pixel, per band. It can be measured pretty objectively. However, it’s often not what you want. For one thing, it’s too blue. For another, the amount of blueness and related effects will vary semi-randomly with atmospheric conditions (humidity, maybe dust storms or wildfire smoke, etc.) and predictably with season (sun distance and angle).&lt;br /&gt;
&lt;br /&gt;
Therefore, a reasonable desire is to basically normalize the sun and remove the effects of the atmosphere. What we’re trying to get to here is called surface reflectance (SR). The main issue is that we don’t know the true state of the atmosphere at the moment the image was acquired. The best we can do is to model it and subtract it out. This is one of ''the'' problems in remote sensing, and you could earn a PhD by improving [https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes#Table_of_models one of the major models] by a few percent.&lt;br /&gt;
&lt;br /&gt;
The good news is there’s a brutally simple method that works pretty well most of the time. Dark object subtraction means assuming that the darkest pixel in the image should be pure black. Therefore, if you subtract out however much blue (and green, and so on) signal is present in the darkest pixel, you will have canceled out all the haze. It’s annoying how well this works considering how basic it is. It’s roughly equivalent to the automatic contrast adjustment tool in an image editor like Photoshop, or, to be a little more exact, like using the eyedropper in the Levels tool to set the black point to the darkest pixel.&lt;br /&gt;
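&lt;br /&gt;
The whole method fits in a few lines. A minimal numpy sketch, with made-up pixel values standing in for a hazy band:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def dark_object_subtraction(bands):
    """Crude haze removal: assume the darkest pixel in each band should
    be pure black, and subtract that band's minimum from every pixel.
    `bands` has shape (n_bands, height, width)."""
    minima = bands.min(axis=(1, 2), keepdims=True)
    return bands - minima

# Toy example: a hazy band whose darkest pixel reads 40 instead of 0.
band = np.array([[[40, 120], [200, 90]]], dtype=np.int32)
corrected = dark_object_subtraction(band)
```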
&lt;br /&gt;
Correction to reflectance may or may not attempt to correct for terrain effects (i.e., relighting the scene). Different pipelines have different conventions for how far to correct or what to call different kinds of correction.&lt;br /&gt;
&lt;br /&gt;
Atmospheric correction is usually not key for OSINT purposes, but any time you find yourself taking exact measurements of pixel values, you should at least know whether you’re working in TOA or in SR, and if SR, you should have a sense of what the pipeline was.&lt;br /&gt;
&lt;br /&gt;
==== Common optical sensor types ====&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# Pushbroom&lt;br /&gt;
# Whiskbroom&lt;br /&gt;
# Full-frame&lt;br /&gt;
&lt;br /&gt;
=== Thermal ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
=== Synthetic aperture radar ===&lt;br /&gt;
&lt;br /&gt;
Synthetic aperture radar, or SAR, creates images with radio waves in wavelengths around 1 cm to 1 m.&lt;br /&gt;
&lt;br /&gt;
As a very first approximation, a SAR image is comparable to an optical image that shows objects that reflect radio waves instead of those that reflect visible light.&lt;br /&gt;
&lt;br /&gt;
==== SAR is not just a regular camera but for radar instead of light ====&lt;br /&gt;
&lt;br /&gt;
Beyond the fact that both modalities create images, SAR works on completely different principles from standard optical imaging, and understanding it requires understanding those principles.&lt;br /&gt;
&lt;br /&gt;
This page will only lightly outline how SAR works; for the math, please refer to SERVIR’s [https://servirglobal.net/Global/Articles/Article/2674/sar-handbook-comprehensive-methodologies-for-forest-monitoring-and-biomass-estimation SAR Handbook] (forest-oriented but with solid fundamentals), the NOAA/NESDIS [https://www.sarusersmanual.com/ Synthetic Aperture Radar Marine User’s Manual], or another good text. Here we will only point out some key ideas. If you want to get full value out of SAR, you should expect to invest at least a few hours in learning how it actually works. There’s a reason SAR experts tend to be a bit snobbish about it: it’s complex, subtle, and highly rewarding.&lt;br /&gt;
&lt;br /&gt;
==== SAR is active ====&lt;br /&gt;
&lt;br /&gt;
Optical sensors are almost all passive: they use energy that objects are already reflecting (usually from the sun) or producing (for example, in the thermal infrared). In contrast, SAR is active: it sends out a pulse of radar energy, roughly analogous to the flash on a camera.&lt;br /&gt;
&lt;br /&gt;
==== SAR resolves space with time, not with focus ====&lt;br /&gt;
&lt;br /&gt;
SAR’s spatial resolution is based on the timing of returning signals. It does not pass the energy it senses through a focusing lens or mirror the way an optical sensor does. This leads to properties that are highly unintuitive if you think of it as merely “optical but in a different frequency” – for example, it does not lose resolution with distance; there is no exact equivalent of perspective. SAR is more like side-scan sonar than like an everyday camera.&lt;br /&gt;
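&lt;br /&gt;
To make the distance-independence concrete, here is the textbook slant-range resolution of a pulse-compressed radar, Δr = c / 2B, which depends only on the chirp bandwidth B, not on how far away the target is. (The 100 MHz bandwidth below is an illustrative number, not any particular sensor’s.)&lt;br /&gt;
&lt;br /&gt;
```python
C = 299_792_458.0  # speed of light, m/s

def slant_range_resolution_m(bandwidth_hz):
    """Textbook slant-range resolution of a pulse-compressed radar:
    delta_r = c / (2 * B). Note the absence of any distance term."""
    return C / (2.0 * bandwidth_hz)

# A 100 MHz chirp resolves about 1.5 m in slant range, whether the
# target is 10 km or 800 km away.
res = slant_range_resolution_m(100e6)
```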
&lt;br /&gt;
==== Speckle ====&lt;br /&gt;
&lt;br /&gt;
Like a laser beam, a SAR signal interferes with itself. At a given moment and a given point, its waves may be canceling out or adding up. This means that a SAR image is intrinsically grainy or stippled-looking. This is not the same as sensor noise, because the effect is physically real and not a problem of errors in measurement. It can be mitigated by downsampling, averaging images from different “pings”, or applying despeckling filters. (A simple local median works reasonably well, but there’s a range of sophistication all the way up to sensor-specific filters based on physical models, extra inputs, fancy machine learning, etc.)&lt;br /&gt;
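&lt;br /&gt;
The benefit of averaging independent looks is easy to demonstrate with an idealized speckle model – multiplicative exponential noise on a uniform scene. This is a common first-order model, not a claim about any particular sensor:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(0)

# Uniform scene of true backscatter 1.0, observed through 16 independent
# speckle realizations ("looks").
true_backscatter = 1.0
looks = [true_backscatter * rng.exponential(1.0, size=(64, 64))
         for _ in range(16)]

single = looks[0]
multilook = np.mean(looks, axis=0)  # average over independent looks

# The scene hasn't changed, but the grain (standard deviation) drops
# roughly as 1/sqrt(number of looks), at the cost of resolution if the
# looks come from spatial downsampling.
```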
&lt;br /&gt;
==== Retroreflection and multiple reflection ====&lt;br /&gt;
&lt;br /&gt;
One consequence of SAR being active sensing is that it sees very bright returns from concave right angles made out of metal, which act as [https://en.wikipedia.org/wiki/Corner_reflector corner reflectors]. (Notice how road signs and markers seem to glow disproportionately in headlights – it’s because those are [https://en.wikipedia.org/wiki/Retroreflector retroreflectors] in the optical range.) Highly developed cities, for example, are very retroreflective to radar. This shows up especially where the angle of the sensor’s view aligns to a street grid, when it’s called the cardinal effect. (See, for example, [https://www.mdpi.com/2072-4292/12/7/1187/htm this academic paper], where they propose using retroreflection specifically to classify urban landcover. In general, there are very few radio-frequency corner reflectors in nature, and retroreflection is a good sign that you’re looking at a building, vehicle, etc.)&lt;br /&gt;
&lt;br /&gt;
Where the reflection is separated enough from the first reflecting surface that you can see both independently, we use the term multiple reflection (or mirroring or ghosting). This most often happens where tall buildings or bridges are next to or over water. A radio wave may hit the water, then a bridge, then return to the sensor; another may hit a bridge, then the water, and return to the sensor, and so on, and you’ll see images of multiple bridges.&lt;br /&gt;
&lt;br /&gt;
==== Layover and shadowing ====&lt;br /&gt;
&lt;br /&gt;
Layover (a.k.a. relief displacement) is an effect that makes objects at higher elevations appear closer to the sensor. This happens because the radio waves from the top of a vertical object arrive back at the sensor (which is above and to the side of the object) before the radio waves from its base. This is most obvious with truly vertical objects like radio towers and skyscrapers, but surfaces that have any vertical component (hills, for example) will show some degree of layover. Ultimately, layover comes from the difference between slant range, which is what the sensor actually measures – distance from the sensor – and ground range, which is what we tend to intuitively want or expect when we look at a map-like image.&lt;br /&gt;
&lt;br /&gt;
The painfully counterintuitive aspect, if you’re looking at a SAR image as if it were an ordinary optical image, is that layover goes in the opposite direction – buildings, for example, lean toward the sensor. For example, if you take a normal photo of a tall building from the south, it will cover the ground to its north. This feels normal because cameras, telescopes, etc., work on the same basic principle as the eye. But if you collect a SAR image of the same building from the south, it will cover the ground to its south. (Also, it won’t actually mask that ground, it will just add its signal in.)&lt;br /&gt;
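&lt;br /&gt;
For intuition about the size of the effect, the idealized flat-terrain geometry says the top of a vertical object of height h appears shifted toward the sensor by roughly h / tan(θ) in ground range, where θ is the incidence angle measured from vertical. A sketch of that idealization (real sensors and terrain complicate it):&lt;br /&gt;
&lt;br /&gt;
```python
import math

def layover_shift_m(height_m, incidence_deg):
    """Idealized flat-terrain layover: ground-range displacement toward
    the sensor of the top of a vertical object, for an incidence angle
    measured from vertical."""
    theta = math.radians(incidence_deg)
    return height_m / math.tan(theta)

# A 100 m tower imaged at 35 degrees incidence appears to lean on the
# order of 140 m toward the sensor.
shift = layover_shift_m(100.0, 35.0)
```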
&lt;br /&gt;
Shadowing is the lack of data returned from surfaces facing away from the sensor. The shadowed side of terrain is stretched out as part of layover.&lt;br /&gt;
&lt;br /&gt;
SAR imagery can be terrain corrected. Basically, this is a process that uses (1) the satellite’s position and the characteristics of its instrument and (2) a DEM or other model of the terrain it was looking at, and uses these to warp the SAR imagery into map coordinates and account for shadowing. Whether this is worthwhile will depend on the quality of the terrain correction algorithm and the data you can give it, and on what you need to analyze.&lt;br /&gt;
&lt;br /&gt;
In general, be cautious with terrain correction, because it can never fully correct for all effects (e.g., the BRDF of different landcovers), and it can magnify small problems in input data. Sometimes it’s better to have a strange-looking image that you know how to interpret than a “normalized” one with subtle errors.&lt;br /&gt;
&lt;br /&gt;
==== Clouds and many other materials are generally transparent to SAR ====&lt;br /&gt;
&lt;br /&gt;
SAR frequencies are typically chosen to cut through weather. While this is a massive advantage of SAR over optical (the average place on Earth is cloudy roughly half the time), it’s also not absolute. Heavy rain, for example, can show up as ghostly features in some bands, so be on the lookout for it. If you see something you can’t interpret that might be weather-related, check the weather for the place at the time of image acquisition!&lt;br /&gt;
&lt;br /&gt;
More generally – beyond the specific case of water vapor in air – SAR interacts with materials differently than light does. For example, it reflects more off liquid water, so you can’t see into shallows with SAR the way you can with optical. On the other hand, it interacts less with certain very dry materials, so it can cut through loose sand, dead vegetation, and so on. (For example, SAR is used to map ancient river systems under the Sahara [https://www.mdpi.com/2073-4441/9/3/194/htm because it can image bedrock under loose, dry sand].) The details of SAR signal interaction depend on wavelength, angle, and other factors; if you’re doing more than casual interpretation of data from a given sensor, it’s a good idea to look it up and familiarize yourself.&lt;br /&gt;
&lt;br /&gt;
==== Polarimetry and interferometry ====&lt;br /&gt;
&lt;br /&gt;
Thus far we have only considered backscatter images: maps of the intensity of reflected radio energy. But a good deal of SAR’s value is beyond this kind of data. As well as recording how much energy is in reflected radio waves, SAR sensors characterize the radio waves themselves.&lt;br /&gt;
&lt;br /&gt;
Let’s use Sentinel-1 as an example for polarimetry. S1 sends radio waves in the vertical polarization, abbreviated V, and records them in both vertical and horizontal, or H, polarizations. In practice, this means that when you download an S1 frame in the usual way, you see two images, labeled VV (where the sensor transmitted V and measured V) and VH (where it transmitted V and measured H). The ratio of the two bands therefore tells you (in a general, statistical way, within the constraints of speckling) how much the surface at a given pixel tends to return a radio signal at that frequency and angle in the same polarization.&lt;br /&gt;
&lt;br /&gt;
Why do we care? Because direct reflection and corner reflectors tend to return waves at the same polarization (for Sentinel-1, always VV), while volumes that scatter waves return proportionally more cross-polarized (VH) waves. The second category is mainly vegetation and soil, while the first is corner reflectors, metal, and so on – proportionally more artificial surfaces. You can literally get a PhD in the nuances of SAR polarimetry, but at the most basic level, it tells you something about surface properties that no other sensor would.&lt;br /&gt;
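&lt;br /&gt;
A minimal sketch of comparing the two bands, with made-up linear-power pixel values (a real pipeline would calibrate and despeckle first):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def cross_pol_ratio_db(vv, vh, eps=1e-10):
    """Per-pixel VV/VH ratio in decibels from two linear-power
    backscatter bands. Higher values suggest direct or double-bounce
    scattering (built surfaces); lower values suggest volume
    scattering (vegetation)."""
    return 10.0 * np.log10((vv + eps) / (vh + eps))

# Toy pixels: a "building-like" pixel returns mostly VV; a
# "canopy-like" pixel returns relatively more VH.
vv = np.array([0.50, 0.08])
vh = np.array([0.01, 0.04])
ratio = cross_pol_ratio_db(vv, vh)
# ratio[0] is ~17 dB, ratio[1] is ~3 dB
```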
&lt;br /&gt;
Interferometry with SAR, or inSAR, compares wave phase between observations. The phase of a wave is where it is in its cycle when received. Using sound as an example, measuring the phase of a sound at a given moment means not just its volume and pitch but that the sound wave is, say, 23% of the way into its high pressure half, or exactly at the lowest-pressure point.&lt;br /&gt;
&lt;br /&gt;
Suppose we make a SAR image of an area and record not only the amplitude but also the phase of the signal at every pixel. Now, after some time, the satellite’s orbit repeats, and at exactly the same moment in this new orbit (and therefore at exactly the same point in space relative to Earth), we take the same image again. There’s been some change over time that might represent, say, the soil drying out, a road being built, or a tree falling over. But the change in phase over relatively large, coherent regions can be interpreted as the surface getting nearer or farther away by (potentially) very small fractions of a wavelength – on the order of cm. This is an idealized version of inSAR.&lt;br /&gt;
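&lt;br /&gt;
The idealized phase-to-displacement conversion is simple: a phase change Δφ corresponds to a line-of-sight displacement of Δφ·λ/4π – the factor is 4π rather than 2π because the signal travels out and back. A sketch, using Sentinel-1’s roughly 5.5 cm C-band wavelength as the example:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def los_displacement_m(phase_change_rad, wavelength_m):
    """Idealized repeat-pass inSAR: line-of-sight displacement implied
    by a phase change, delta_phi * wavelength / (4 * pi)."""
    return phase_change_rad * wavelength_m / (4.0 * math.pi)

# One full 2*pi fringe at a ~5.55 cm wavelength corresponds to about
# 2.8 cm of line-of-sight motion.
fringe = los_displacement_m(2 * math.pi, 0.0555)
```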
&lt;br /&gt;
Geologists use this to [https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021GL093043 map earthquakes], but you can also use it for drought (because dry land sags), [https://site.tre-altamira.com/long-term-satellite-study-over-the-london-basin/ tunneling], [https://www.researchgate.net/figure/InSAR-measured-subsidence-rates-on-the-Mosul-dam-Iraq-Negative-values-indicate-motion_fig1_311451580 dam] and [https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-1/Satellites_confirm_sinking_of_San_Francisco_tower building] subsidence, [https://www.nature.com/articles/s41598-020-74957-2 underground explosion monitoring], and so on – in theory, anything that changes the distance between the satellite and the surface. You can even use decoherence (the breakdown of continuity between observations, which makes inSAR hard) for [https://www.academia.edu/44939771/Damage_detection_using_SAR_coherence_statistical_analysis_application_to_Beirut_Lebanon damage detection].&lt;br /&gt;
&lt;br /&gt;
When inSAR works, it’s like magic. You can pick up extremely subtle effects over large areas. It does have limits, like that you can only measure displacement towards or away from the satellite(s), which for SAR is always at least somewhat to the side, which is not necessarily in the direction you actually care about (say, up/down). And as you would expect, it tends to require a lot of very good data (because, for example, satellite orbits are never absolutely perfect repeats), expertise, and minutes to days of fine-tuning.&lt;br /&gt;
&lt;br /&gt;
=== LIDAR ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# 3D (survey-style) lidar&lt;br /&gt;
# 2D (transect-style) lidar&lt;br /&gt;
&lt;br /&gt;
== Image delivery ==&lt;br /&gt;
&lt;br /&gt;
=== Processing levels ===&lt;br /&gt;
&lt;br /&gt;
In theory, data processing levels are standard across the industry. In practice, different providers tend to make up their own definitions as necessary, and you should refer to source-specific documentation. But as a sketch, processing levels are something like:&lt;br /&gt;
&lt;br /&gt;
'''Level 0''': Unprocessed data, more or less as downlinked to the ground station. Generally not sold or publicly released.&lt;br /&gt;
&lt;br /&gt;
'''Level 1''': Basic data in sensor units, for example TOA radiance. Often has a letter suffix with source-specific meaning, e.g., T to indicate a terrain-corrected version.&lt;br /&gt;
&lt;br /&gt;
'''Level 2''': Derived data in geophysical units, for example surface reflectance. Has been through high-level processing (e.g., atmospheric correction) that contains estimation or modeling.&lt;br /&gt;
&lt;br /&gt;
As a rule of thumb, use level 1 if the imagery itself is the focus and you want to analyze the data in a custom way; use level 2 if you just want something that works out of the box to achieve some further goal. But again, the practical meaning of the levels depends on the dataset, so check to make sure you’re getting what you want.&lt;br /&gt;
&lt;br /&gt;
=== Formats and projections ===&lt;br /&gt;
&lt;br /&gt;
Bands are just 2D arrays of numbers. A typical RGB image from a phone, for example, can be defined as a stack of 3 × 2D arrays of brightness values, one each for red, green, and blue. In virtually all cases, a larger number means more energy was detected at the point represented by that pixel.&lt;br /&gt;
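&lt;br /&gt;
In numpy terms, for example (a toy 2×2-pixel image):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Three 2D band arrays (R, G, B) of brightness values...
red   = np.array([[255,   0], [ 10, 200]], dtype=np.uint8)
green = np.array([[  0, 255], [ 10, 200]], dtype=np.uint8)
blue  = np.array([[  0,   0], [250, 200]], dtype=np.uint8)

# ...stacked into one image of shape (bands, rows, cols).
image = np.stack([red, green, blue])
```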
&lt;br /&gt;
Satellite image data usually comes in formats optimized for large payloads and good metadata. These include HDF, NetCDF, NITF, JPEG2000, TIFF, and GeoTIFF. The GDAL library, which is included in QGIS and has command-line tools, can read virtually any reasonable format. If you have a choice, GeoTIFF is a good option – it’s an open standard&amp;lt;ref&amp;gt;http://docs.opengeospatial.org/is/19-008r4/19-008r4.html&amp;lt;/ref&amp;gt; that defines a way of using TIFF metadata to encode geolocation.&lt;br /&gt;
&lt;br /&gt;
A geographic projection is basically an invertible function – a reversible, one-to-one relationship – from a sphere to a plane. (If you know enough about geodesy to mutter “the spheroid, actually”, go read something more appropriate to your level of expertise 😉.) In other words, for a longitude and a latitude on Earth, a given projection gives you a corresponding x and y that you use to store and display the data in a 2D way.&lt;br /&gt;
&lt;br /&gt;
Typical georeferencing metadata says: (1) here is the projection of the data in this file, and (2) here is where the rectangle of data in this file lies on the abstract 2D plane defined by that projection.&lt;br /&gt;
&lt;br /&gt;
Many data deliveries specify a value (such as 0, the largest possible value, or a floating point NaN) as a “nodata value”, meaning it’s a fill for undefined values and should not be interpreted as a real reading. This can also be done with an alpha channel, for example. It allows any shape of data to be delivered within a rectangle, as virtually all image and multidimensional array formats require. The nodata value is usually obvious even without referring to the metadata; if it’s present at all, it’s usually in a sensible configuration like along the edges. In rare cases, you might see a data delivery with random, noncontiguous nodata pixels (for example, where there’s a data error, or where things that should overlap don’t), and you’ll want to recognize them as such instead of thinking “Huh, there’s a black [or white, or hot pink] patch here! How fascinating! I must investigate more deeply!”&lt;br /&gt;
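&lt;br /&gt;
A minimal numpy sketch of masking a nodata value so it doesn’t pollute statistics – the fill value of 0 and the pixel values are invented for illustration:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

NODATA = 0  # assumed fill value, as declared in a delivery's metadata

band = np.array([[  0,   0, 412],
                 [  0, 398, 405],
                 [387, 401, 399]], dtype=np.uint16)

# Mask the fill so statistics ignore it instead of treating it as a
# real (very dark) reading.
masked = np.ma.masked_equal(band, NODATA)
mean_with_fill = band.mean()   # dragged down by the zeros
mean_masked = masked.mean()    # uses only the 6 valid pixels
```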
&lt;br /&gt;
You may also encounter data that is not projected in any strictly defined way. This might be as simple as a photo taken with a phone out a plane window. In theory you could define a projection for it if you knew parameters like the exact 3D location and angle of the phone, its camera’s field of view, and the small distortions introduced by its lens. But in practice it’s usually easier to find known points in the image and “tie down” or georeference the image based on those points. Given at least 3 but ideally more known points, you(r software) can warp the image into some standard projection. It’s deriving an arbitrary projection from pixel space to geographical coordinates by running a regression on the pixel-to-location pairs you provide. These known points are called ground control points, or GCPs. Some data, like Sentinel-1 SAR, is provided unprojected but with GCPs. This leaves more work for the user – even if it’s as simple as telling GDAL or QGIS to project it – but also more flexibility if you want to adjust the GCPs.&lt;br /&gt;
&lt;br /&gt;
There are several standard ways to represent projections, notably WKT, proj, and EPSG codes. We’ll use EPSG codes here.&lt;br /&gt;
&lt;br /&gt;
Probably the most common projection you will see for raw data is Universal Transverse Mercator, or UTM. It’s actually a family of projections with the same formula but different parameters, each adapted to a different meridional slice of Earth’s surface. These UTM zones are named with numbers and north/south hemispheres: Paris is in UTM zone 31N, Geneva is in 32N, and Sydney is in 56S. (If you’ve used the MGRS grid system, this should sound familiar, but it’s not identical.) Within a zone, UTM is very close to equal-area and conformal, which are the most important properties for a projection if you want to do analytical work. Equal-area means 1 km² is the same number of pixels at any point in the projection, and conformal means that 1 km is the same number of pixels in every direction from any given point within the projection. (On a non-conformal map, circles appear as ovals, squares are rectangles, etc. This is a massive pain in the ass.) UTM is EPSG:32XYY, where X is 6 for N and 7 for S, and YY is the zone number, so for example 13S is EPSG:32713.&lt;br /&gt;
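&lt;br /&gt;
The zone arithmetic is easy to automate. A sketch that ignores the handful of irregular zones around Norway and Svalbard:&lt;br /&gt;
&lt;br /&gt;
```python
def utm_epsg(longitude_deg, latitude_deg):
    """EPSG code of the local UTM zone: 326YY in the northern
    hemisphere, 327YY in the southern, where YY is the zone number.
    Zones are 6 degrees wide, numbered east from 180W."""
    zone = int((longitude_deg + 180.0) // 6.0) + 1
    return (32600 if latitude_deg >= 0 else 32700) + zone

# Sydney (~151.2E, 33.9S) is in zone 56S, i.e. EPSG:32756.
code = utm_epsg(151.2, -33.9)
```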
&lt;br /&gt;
For display on standard web maps, people often use web Mercator, a.k.a. spherical Mercator, which is not equal-area at the global scale, but is conformal. This is why web maps make Greenland far too big, but it remains approximately the right shape. For local analysis, web Mercator is fine (essentially equivalent to UTM, actually), and can be a decent choice if you understand the issues with scale across large areas. EPSG:3857.&lt;br /&gt;
&lt;br /&gt;
The other projection you’ll see the most is equirectangular or plate carrée, which uses longitude and latitude directly as x and y coordinates on a plane. It is neither equal-area nor conformal, and basically only exists because the math is easy. It’s often used by people who should know better. Its non-conformality means that any time you’re working near the poles, everything is squeezed, and you’re either overzooming one dimension, losing data in the other, or both. If you just want to scatterplot some points as quickly as possible, equirectangular is fine, but avoid it when doing anything with imagery. EPSG:4326. (Note that this is the EPSG of WGS84, the geodetic standard used by GPS and which defines things like the prime meridian. Many, many other projections refer to WGS84 in their definitions. But using WGS84 as a projection itself, instead of as an ingredient in a projection, is the equirectangular projection.)&lt;br /&gt;
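&lt;br /&gt;
The squeeze is just the cosine of latitude: one degree of longitude covers cos(latitude) times as many ground meters as one degree of latitude, but plate carrée gives both the same number of pixels. For example:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def aspect_distortion(latitude_deg):
    """East-west squeeze factor of plate carree: how many ground meters
    a degree of longitude covers relative to a degree of latitude."""
    return math.cos(math.radians(latitude_deg))

# At the equator a pixel of this grid is roughly square on the ground;
# at 60 degrees latitude it is squeezed 2:1.
squeeze = aspect_distortion(60.0)
```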
&lt;br /&gt;
The details of projections are notoriously tricky; it’s hard to work with them in a strictly correct and optimal way at all times. It’s the kind of topic that attracts pedantry and flamewars, unfortunately. Here’s some advice, none of it ironclad:&lt;br /&gt;
&lt;br /&gt;
# Most imagery data, if it’s projected at all, is already in a reasonable projection as it arrives from the data provider. If reasonably possible, leave it as-is. Every reprojection involves resampling the data, which generally loses information.&lt;br /&gt;
# You should rarely have to explicitly think about projections. The whole point of a projection is to let you think in terms of pixels and/or meters, and if that’s not happening, something is wrong. Make sure you’re taking full advantage of your tools’ ability to handle these things automatically. Fighting your projection is a cue to take a step back and think about what you’re doing.&lt;br /&gt;
# If you’re working on a multi-source project, choose a suitable projection at the start and project all data into it ''once'', when you import it.&lt;br /&gt;
# Most pain around projections comes from accidentally mixing projections. Don’t do that.&lt;br /&gt;
# The local UTM is usually a reasonable choice.&lt;br /&gt;
&lt;br /&gt;
=== Bundles ===&lt;br /&gt;
&lt;br /&gt;
Imagery is most often supplied in bundles, which are basically directories with image data files, usually separated by band or polarization (at least at level 1), and metadata files (XML, json, etc.). Some analysis tools will have plugins that will open specific types of bundles as single objects, automatically applying calibration data found in the metadata and so forth. In other situations you might open the image file and have to parse the metadata with your own code or by hand. If you’re getting to know a new imagery source, going through and understanding the purpose of everything delivered in a bundle is a great way to start.&lt;br /&gt;
&lt;br /&gt;
=== DN and PN ===&lt;br /&gt;
&lt;br /&gt;
Image formats generally store integers, since they losslessly compress better and are often easier to work with than floating point numbers. However, this presents a problem if, for example, the units being represented are reflectance, which ranges from 0 to 1. If we round every reflectance value to either 0 or 1, we’re delivering 1-bit data that’s probably close to totally useless. To address this, we might scale up to, say, 0 through 100 and say that instead of recording reflectance fraction, we’re recording reflectance percentage – fraction × 100. That still leaves us with less than 7 bits of radiometric resolution, though. Really, we’d like to be able to scale our values into an arbitrary range, maybe 0 through 65,535 to make full use of a 16-bit image, and send it with some metadata that tells how to get it back into some absolute or physically meaningful unit. You could even change the scaling factor per scene to optimize for bright v. dark, for example. And this is what providers generally do. The values actually stored in the image format are called digital numbers, or DN, and the values after scaling (typically with a multiplicative and an additive coefficient) are physical numbers, or PN.&lt;br /&gt;
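&lt;br /&gt;
The scaling itself is a one-liner; the coefficients below are invented for illustration and would really come from the scene’s metadata:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def dn_to_pn(dn, gain, offset):
    """Scale stored digital numbers (DN) to physical numbers (PN) with
    the multiplicative and additive coefficients from the metadata:
    pn = gain * dn + offset."""
    return gain * np.asarray(dn, dtype=np.float64) + offset

# Hypothetical coefficients mapping 16-bit DN onto reflectance in [0, 1].
gain, offset = 1.0 / 65535.0, 0.0
reflectance = dn_to_pn([0, 32768, 65535], gain, offset)
```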
&lt;br /&gt;
Not all providers do this. For example, Sentinel-2 level 1C data has a globally constant scaling factor, which means different bands have a defined relationship even if you read raw, unscaled pixels out of them, which is great. However, custom coefficients are the most common approach. Basically, don’t assume that pixels actually mean anything with an absolute definition, especially compared to pixels from another band or scene, unless you know that they’re PN.&lt;br /&gt;
&lt;br /&gt;
For most OSINT-relevant analysis, working in DN is a [https://en.wikipedia.org/wiki/Venial_sin venial sin] at worst and often justifiable. But it is useful to know what it means and to recognize situations where you should convert to PN. Any tool designed to work with remote sensing data will at least have some affordance for DN to PN scaling, and, again, may be able to parse the parameters out of a bundle (or in-image-file metadata) and apply them transparently so you never have to think about it.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2217</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2217"/>
		<updated>2022-07-21T01:57:40Z</updated>

		<summary type="html">&lt;p&gt;Vruba: /* Georeferencing and orthorectification */ one inline ref done, and only 11,324 to go&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this is the heritage of Cold War IMINT workers wanting shadows to estimate structure heights. (Imaging around noon gives you near-vertical shadows in the tropics, which causes depth-perception problems, like the ones you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all commercial satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
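&lt;br /&gt;
If you want to check the globe analogy yourself, the scaling is one line of arithmetic (assumed round numbers: a 12,742 km mean Earth diameter and a 300 mm globe):&lt;br /&gt;
&lt;br /&gt;
```python
# Scale Landsat 9's 705 km altitude down to a 30 cm (300 mm) desktop globe.
# Assumed round number: mean Earth diameter about 12,742 km.
EARTH_DIAMETER_KM = 12742
GLOBE_DIAMETER_MM = 300

scale = GLOBE_DIAMETER_MM / EARTH_DIAMETER_KM  # mm of globe per km of Earth
altitude_mm = 705 * scale
print(f"Landsat 9's orbit above the desktop globe: {altitude_mm:.1f} mm")
```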
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to complete one full orbit. This is another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: about 400 km or 250 mi to each side of the ground track, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
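&lt;br /&gt;
The light trig in question, as a sketch (flat-ground approximation; Earth’s curvature stretches the far edge somewhat):&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Ground distance from the nadir point to the 30-degree off-nadir line,
# for a satellite at 705 km altitude (flat-ground approximation).
altitude_km = 705
half_width_km = altitude_km * math.tan(math.radians(30))
print(f"distance to the 30-degree line: {half_width_km:.0f} km")
```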
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimensions we think of most often are spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor” we understand this by default to mean spatial resolution. This is also called ground sample distance (GSD) or ground resolved distance (GRD): the dimensions of the data’s pixels as projected onto the ground. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length dimension: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. If we think about standard optical instruments, which are basically telescopes with CCDs, they do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – a fraction of the arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth at a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) and looking on nadir (straight down). That’s a best case. When looking to the side, at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked consideration on spatial resolution is that pixel area scales as the square of pixel side length, and area is what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is more than 3× as clear as 30 m imagery, all else being equal.&lt;br /&gt;
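&lt;br /&gt;
To make the arithmetic concrete (toy numbers only, nothing sensor-specific):&lt;br /&gt;
&lt;br /&gt;
```python
# How many pixels cover one square meter of ground at several GSDs.
for gsd_m in (1.0, 0.5, 0.25):
    pixels = (1.0 / gsd_m) ** 2
    print(f"{gsd_m} m GSD: {pixels:.0f} pixel(s) per square meter")

# 30 m vs 10 m imagery: the pixel-count ratio is 9x, not 3x.
print(f"pixel-count ratio: {(30 / 10) ** 2:.0f}x")
```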
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered less by the atmosphere than visible light, and especially less than blue light. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
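&lt;br /&gt;
As a concrete illustration of reason 2, NDVI is just a per-pixel band ratio. This sketch uses made-up reflectance values, not data from any real sensor:&lt;br /&gt;
&lt;br /&gt;
```python
# NDVI = (NIR - red) / (NIR + red), computed per pixel.
# The reflectance values below are toy numbers, not from any real scene.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

print(round(ndvi(0.45, 0.05), 2))  # healthy vegetation: high NIR, low red
print(round(ndvi(0.20, 0.18), 2))  # bare soil: red and NIR nearly equal
```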
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These will be called the panchromatic or pan and (collectively) multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, for example, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits (256 to 1024 levels); newer and better ones are typically 12 to 14 (4096 to 16384 levels).&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks in the Sahara, despite the tracks being made out of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford a lot of highest spatial resolution imagery. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of the averaged band is 1/sqrt(n), where n is the number of input bands, each with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
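&lt;br /&gt;
A quick simulation of the idealized case (synthetic Gaussian noise, nothing sensor-specific) shows the averaged noise approaching 1/sqrt(n):&lt;br /&gt;
&lt;br /&gt;
```python
import random
import statistics

# Average n bands of synthetic unit-stdev Gaussian noise and check that
# the averaged noise approaches the idealized 1/sqrt(n) prediction.
random.seed(0)
n = 4
averages = []
for _ in range(20000):
    noise = [random.gauss(0.0, 1.0) for _ in range(n)]
    averages.append(sum(noise) / n)

measured = statistics.stdev(averages)
print(f"stdev of {n}-band average: {measured:.3f} (idealized prediction: 0.5)")
```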
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding errors or aliasing: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of talking about effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
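&lt;br /&gt;
A sketch of quantization noise in isolation (synthetic ramp of values, no real imagery): the worst rounding error is half of one quantization step, so it shrinks as bit depth grows.&lt;br /&gt;
&lt;br /&gt;
```python
# Quantization noise in isolation: round a smooth ramp of values to 8-bit
# and 12-bit levels and compare the worst rounding error.
def quantize(value, bits):
    levels = 2 ** bits - 1
    return round(value * levels) / levels

signal = [i / 1000 for i in range(1001)]  # a ramp of values from 0 to 1
for bits in (8, 12):
    worst = max(abs(v - quantize(v, bits)) for v in signal)
    print(f"{bits}-bit: worst rounding error {worst:.6f}")
```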
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is using air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh scattering from aerosols, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. (Equivalently, 1.567 to 1.651 μm.) || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
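&lt;br /&gt;
One standard way to compare multispectral pixels as vectors is the spectral angle. A minimal sketch with made-up [blue, green, red, NIR] reflectances:&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Spectral angle between two multispectral pixels treated as vectors.
# Small angles mean similar spectral shape regardless of overall brightness.
def spectral_angle_deg(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    mag = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return math.degrees(math.acos(min(1.0, dot / mag)))

# Toy [blue, green, red, NIR] reflectances; the second pixel is the first
# at half brightness, so the angle between them is essentially zero.
bright_grass = [0.04, 0.08, 0.05, 0.50]
shaded_grass = [0.02, 0.04, 0.025, 0.25]
print(round(spectral_angle_deg(bright_grass, shaded_grass), 4))
```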
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue bands of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, and therefore you should favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
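&lt;br /&gt;
The channel assignment itself is trivial, which is part of why the sensor-specific numbering trips people up. A toy sketch (single made-up values stand in for whole rasters; the names are illustrative, not any sensor’s numbering):&lt;br /&gt;
&lt;br /&gt;
```python
# Assigning named bands to RGB output channels, longest wavelength in red.
# Single toy values stand in for whole rasters; names are illustrative.
bands = {"red": 0.30, "nir": 0.60, "swir1": 0.45}

def composite(band_values, r, g, b):
    """Return (R, G, B) channel data for the chosen band combination."""
    return (band_values[r], band_values[g], band_values[b])

# A common vegetation-style false color: SWIR1, NIR, red in spectral order.
print(composite(bands, "swir1", "nir", "red"))
```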
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will be (typically) 2 or 4 times larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
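&lt;br /&gt;
The overlay intuition can be sketched as brightness substitution: replicate the coarse color under each sharp pan pixel, then rescale each pixel so its brightness matches the pan value. Toy 2×2 arrays, not a production algorithm:&lt;br /&gt;
&lt;br /&gt;
```python
# Brightness-substitution pansharpening sketch: replicate the coarse color
# pixel under each sharp pan pixel, then rescale so the mean of the three
# channels matches the pan brightness. Toy 2x2 data, not a real algorithm.
pan = [[0.6, 0.4], [0.5, 0.7]]   # sharp grayscale pan values, 2x2
rgb_coarse = (0.3, 0.5, 0.2)     # one coarse color pixel covering all four

mean_rgb = sum(rgb_coarse) / 3
sharpened = [
    [tuple(c * (p / mean_rgb) for c in rgb_coarse) for p in row]
    for row in pan
]
for row in sharpened:
    print([tuple(round(c, 2) for c in px) for px in row])
```
&lt;br /&gt;
Note that every output pixel keeps the same hue as the one coarse input pixel – the algorithm sharpens brightness, not color, which is exactly the caveat discussed below.&lt;br /&gt;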
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;br /&gt;
&lt;br /&gt;
==== Georeferencing and orthorectification ====&lt;br /&gt;
&lt;br /&gt;
''Much of this applies outside optical as well – move?''&lt;br /&gt;
&lt;br /&gt;
A raw satellite image of land is an angled view of a rough surface. (Even nominally nadir-pointing satellites acquire imagery that is off-nadir toward its edges.) If you imagine riding on a satellite and looking off to, say, the west, you will see the eastern sides of hills and buildings at flatter angles than you see the western sides – if you can see them at all. To turn a raw image into something that is projected orthographically, like a map, you have to use a terrain model – a 3D map of the planet’s surface. Then you can use information about where the satellite was and the angle its sensor was pointing, and for each pixel in the output image, you can project it out to see at what latitude and longitude it must have intersected the ground. Then you move all the pixels to their coordinates in some convenient projection, and you’ve essentially taken the image out of perspective and made it orthographic.&lt;br /&gt;
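&lt;br /&gt;
One way to feel the stakes: if the terrain model is wrong about a surface’s height, an off-nadir view puts that surface in the wrong place by roughly the height error times the tangent of the off-nadir angle. A sketch with assumed numbers:&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Relief displacement sketch: a feature whose height the terrain model gets
# wrong lands in the wrong place by about height_error * tan(off_nadir).
def displacement_m(height_error_m, off_nadir_deg):
    return height_error_m * math.tan(math.radians(off_nadir_deg))

# Assumed numbers: a 30 m building missing from the DEM, seen 25 degrees
# off-nadir, shifts by roughly 14 m on the ground.
print(round(displacement_m(30, 25), 1))
```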
&lt;br /&gt;
Except:&lt;br /&gt;
&lt;br /&gt;
* Earth’s surface is rough at every scale, and even “porous” or multiply defined in the sense that there are features like leafless trees that make it hard to define where the optical surface actually ''is'' at any given scale.&lt;br /&gt;
* There is no perfectly accurate, precise,&amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Accuracy_and_precision&amp;lt;/ref&amp;gt; global, completely up-to-date terrain model of the Earth, let alone at a reasonable price. SRTM is pretty good but it’s only about 30 m, stops short of the arctic, and is 20+ years out of date: there are entire lakes, highway cuts, and reclaimed islands that don’t exist in it.&lt;br /&gt;
* Satellites typically only know where they’re pointing to within the equivalent of about 10 pixels (which, to be fair, is usually an extremely small fraction of a degree), so the pointing data can only narrow things down, not actually tell you where you are.&lt;br /&gt;
* Continental drift means that a continent can move by easily 1 px over the lifetime of a high-end commercial satellite; a major earthquake can discontinuously distort a small region by several m.&lt;br /&gt;
* To properly pin down an image (i.e., to check the reported pointing angle), you need to know the exact 3D location of 3 visible points within it, and realistically more like 10.&lt;br /&gt;
* All these errors can combine.&lt;br /&gt;
* No matter what, you can’t recover occluded features, i.e. things you can’t see in the original data. If you want a high-res satellite image of something like a canyon, you realistically need half a dozen images at very specific angles, which is extremely hard.&lt;br /&gt;
&lt;br /&gt;
We could go on! Georeferencing and orthorectification is a difficult problem. It’s easier for lower-resolution satellites, because a given angular error comes out to fewer pixels. Also, survey-mode satellites like Landsat and Sentinel-2, which are nadir-pointing anyway, put a lot of effort into doing this well. Two Landsat scenes will almost always coregister to well within a pixel. Sentinel-2 is a little less reliable, especially toward the poles. Commercial imagery is often displaced by far more than you would think. One way to see this is to step back in Google Earth Pro’s history tool, especially somewhere relatively remote and rugged.&lt;br /&gt;
&lt;br /&gt;
Here’s a farm in Nepal: 28.553, 84.2415. Just step back in time and watch it jump around underneath the pin. If you really want to be scared, watch the cliff to its north. This is why imagery analysts who understand imagery pipelines rarely use a whole lot of significant digits in their coordinates! You don’t really know where anything on Earth is, in absolute terms, to better than a few meters at best if all you have to go on is a satellite image.&lt;br /&gt;
&lt;br /&gt;
==== Atmospheric correction ====&lt;br /&gt;
&lt;br /&gt;
Over long distances, even in clear weather, the atmosphere scatters and absorbs light. This is why distant hills are low-contrast and blueish (blue light is scattered more). What a satellite actually measures is called top-of-atmosphere radiance, or TOA. This is a measurement of nothing more than the amount of energy received per second, per pixel, per band. It can be measured pretty objectively. However, it’s often not what you want. For one thing, it’s too blue. For another, the amount of blueness and related effects will vary semi-randomly with atmospheric conditions (humidity, maybe dust storms or wildfire smoke, etc.) and predictably with season (sun distance and angle).&lt;br /&gt;
&lt;br /&gt;
Therefore, a reasonable desire is to basically normalize the sun and remove the effects of the atmosphere. What we’re trying to get to here is called surface reflectance (SR). The main issue is that we don’t know the true state of the atmosphere at the moment the image was acquired. The best we can do is to model it and subtract it out. This is one of ''the'' problems in remote sensing, and you could earn a PhD by improving [https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes#Table_of_models one of the major models] by a few percent.&lt;br /&gt;
&lt;br /&gt;
The good news is there’s a brutally simple method that works pretty well most of the time. Dark object subtraction means assuming that the darkest pixel in the image should be pure black. Therefore, if you subtract out however much blue (and green, and so on) signal is present in the darkest pixel, you will have canceled out all the haze. It’s annoying how well this works considering how basic it is. It’s roughly equivalent to the automatic contrast adjustment tool in an image editor like Photoshop, or, to be a little more exact, like using the eyedropper in the Levels tool to set the black point to the darkest pixel.&lt;br /&gt;
&lt;br /&gt;
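As a sketch of the idea in numpy (the band-first array shape here is an assumption, not any particular product’s layout):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def dark_object_subtraction(bands):
    """Subtract each band's darkest pixel value, on the assumption that
    the darkest pixel in the scene should be pure black. This cancels
    out a roughly uniform haze signal per band."""
    bands = np.asarray(bands, dtype=float)  # shape: (n_bands, rows, cols)
    minima = bands.min(axis=(1, 2), keepdims=True)  # per-band darkest value
    return np.clip(bands - minima, 0.0, None)
```
&lt;br /&gt;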
Correction to reflectance may or may not attempt to correct for terrain effects (i.e., relighting the scene). Different pipelines have different conventions for how far to correct or what to call different kinds of correction.&lt;br /&gt;
&lt;br /&gt;
Atmospheric correction is usually not key for OSINT purposes, but any time you find yourself taking exact measurements of pixel values, you should at least know whether you’re working in TOA or in SR, and if SR, you should have a sense of what the pipeline was.&lt;br /&gt;
&lt;br /&gt;
==== Common optical sensor types ====&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# Pushbroom&lt;br /&gt;
# Whiskbroom&lt;br /&gt;
# Full-frame&lt;br /&gt;
&lt;br /&gt;
=== Thermal ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
=== Synthetic aperture radar ===&lt;br /&gt;
&lt;br /&gt;
Synthetic aperture radar, or SAR, creates images with radio waves in wavelengths around 1 cm to 1 m.&lt;br /&gt;
&lt;br /&gt;
As a very first approximation, a SAR image is comparable to an optical image that shows objects that reflect radio waves instead of those that reflect visible light.&lt;br /&gt;
&lt;br /&gt;
==== SAR is not just a regular camera but for radar instead of light ====&lt;br /&gt;
&lt;br /&gt;
Beyond the fact that both modalities create images, SAR works on completely different principles from standard optical imaging, and understanding it requires understanding those principles.&lt;br /&gt;
&lt;br /&gt;
This page will only lightly outline how SAR works; for the math, please refer to SERVIR’s [https://servirglobal.net/Global/Articles/Article/2674/sar-handbook-comprehensive-methodologies-for-forest-monitoring-and-biomass-estimation SAR Handbook] (forest-oriented but with solid fundamentals), the NOAA/NESDIS [https://www.sarusersmanual.com/ Synthetic Aperture Radar Marine User’s Manual], or another good text. Here we will only point out some key ideas. If you want to get full value out of SAR, you should expect to invest at least a few hours in learning how it actually works. There’s a reason SAR experts tend to be a bit snobbish about it: it’s complex, subtle, and highly rewarding.&lt;br /&gt;
&lt;br /&gt;
==== SAR is active ====&lt;br /&gt;
&lt;br /&gt;
Optical sensors are almost all passive: they use energy that objects are already reflecting (usually from the sun) or producing (for example, in the thermal infrared). In contrast, SAR is active: it sends out a pulse of radar energy, roughly analogous to the flash on a camera.&lt;br /&gt;
&lt;br /&gt;
==== SAR resolves space with time, not with focus ====&lt;br /&gt;
&lt;br /&gt;
SAR’s spatial resolution is based on the timing of returning signals. It does not pass the energy it senses through a focusing lens or mirror the way an optical sensor does. This leads to properties that are highly unintuitive if you think of it as merely “optical but in a different frequency” – for example, it does not lose resolution with distance; there is no exact equivalent of perspective. SAR is more like side-scan sonar than like an everyday camera.&lt;br /&gt;
&lt;br /&gt;
==== Speckle ====&lt;br /&gt;
&lt;br /&gt;
Like a laser beam, a SAR signal interferes with itself. At a given moment and a given point, its waves may be canceling out or adding up. This means that a SAR image is intrinsically grainy or stippled-looking. This is not the same as sensor noise, because the effect is physically real and not a problem of errors in measurement. It can be mitigated by downsampling, averaging images from different “pings”, or applying despeckling filters. (A simple local median works reasonably well, but there’s a range of sophistication all the way up to sensor-specific filters based on physical models, extra inputs, fancy machine learning, etc.)&lt;br /&gt;
&lt;br /&gt;
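A minimal sketch of the simple local-median approach, in pure numpy (the 3×3 window and reflect-mode edge handling are arbitrary choices here, not any tool’s defaults):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def median_despeckle(img, size=3):
    """Naive local-median despeckling for a 2D backscatter image.
    Edges are handled by reflecting the image at its borders."""
    img = np.asarray(img, dtype=float)
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    # Gather each pixel's size-by-size neighborhood, then take the median.
    windows = np.stack([
        padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        for dy in range(size)
        for dx in range(size)
    ])
    return np.median(windows, axis=0)
```
&lt;br /&gt;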
==== Retroreflection and multiple reflection ====&lt;br /&gt;
&lt;br /&gt;
One consequence of SAR being active sensing is that it sees very bright returns from concave right angles made out of metal, which act as [https://en.wikipedia.org/wiki/Corner_reflector corner reflectors]. (Notice how road signs and markers seem to glow disproportionately in headlights – it’s because those are [https://en.wikipedia.org/wiki/Retroreflector retroreflectors] in the optical range.) Highly developed cities, for example, are very retroreflective to radar. This shows up especially where the angle of the sensor’s view aligns with a street grid; this is called the cardinal effect. (See, for example, [https://www.mdpi.com/2072-4292/12/7/1187/htm this academic paper], where they propose using retroreflection specifically to classify urban landcover. In general, there are very few radio-frequency corner reflectors in nature, and retroreflection is a good sign that you’re looking at a building, vehicle, etc.)&lt;br /&gt;
&lt;br /&gt;
Where the reflection is separated enough from the first reflecting surface that you can see both independently, we use the term multiple reflection (or mirroring or ghosting). This most often happens where tall buildings or bridges are next to or over water. A radio wave may hit the water, then a bridge, then return to the sensor; another may hit a bridge, then the water, and return to the sensor, and so on, and you’ll see images of multiple bridges.&lt;br /&gt;
&lt;br /&gt;
==== Layover and shadowing ====&lt;br /&gt;
&lt;br /&gt;
Layover (a.k.a. relief displacement) is an effect that makes objects at higher elevations appear closer to the sensor. This happens because the radio waves from the top of a vertical object arrive back at the sensor (which is above and to the side of the object) before the radio waves from its base. This is most obvious with truly vertical objects like radio towers and skyscrapers, but surfaces that have any vertical component (hills, for example) will show some degree of layover. Ultimately, layover comes from the difference between slant range, which is what the sensor actually measures – distance from the sensor – and ground range, which is what we tend to intuitively want or expect when we look at a map-like image.&lt;br /&gt;
&lt;br /&gt;
The painfully counterintuitive aspect, if you’re looking at a SAR image as if it were an ordinary optical image, is that layover goes in the opposite direction – buildings, for example, lean toward the sensor. For example, if you take a normal photo of a tall building from the south, it will cover the ground to its north. This feels normal because cameras, telescopes, etc., work on the same basic principle as the eye. But if you collect a SAR image of the same building from the south, it will cover the ground to its south. (Also, it won’t actually mask that ground, it will just add its signal in.)&lt;br /&gt;
&lt;br /&gt;
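The slant-range arithmetic behind layover can be sketched with a flat-Earth toy model (all the numbers here are hypothetical, chosen only to be roughly satellite-like):&lt;br /&gt;
&lt;br /&gt;
```python
from math import hypot

def slant_range_m(sensor_alt_m, ground_dist_m, target_height_m=0.0):
    """Slant range from a side-looking sensor to a target standing some
    height above the ground, in a flat-Earth toy model."""
    return hypot(sensor_alt_m - target_height_m, ground_dist_m)

# A hypothetical 300 m tower seen from 700 km up and 400 km to the side:
base = slant_range_m(700_000, 400_000)
top = slant_range_m(700_000, 400_000, target_height_m=300)
# The top's echo comes back first (shorter slant range), so it is mapped
# closer to the sensor than the base: the tower leans toward the sensor.
assert top < base
```
&lt;br /&gt;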
Shadowing is the lack of data returned from surfaces facing away from the sensor. The shadowed side of terrain is stretched out as part of layover.&lt;br /&gt;
&lt;br /&gt;
SAR imagery can be terrain corrected. Basically, this is a process that takes (1) the satellite’s position and the characteristics of its instrument and (2) a DEM or other model of the terrain it was looking at, and uses these to warp the SAR imagery into map coordinates and account for shadowing. Whether this is worthwhile will depend on the quality of the terrain correction algorithm and the data you can give it, and on what you need to analyze.&lt;br /&gt;
&lt;br /&gt;
In general, be cautious with terrain correction, because it can never fully correct for all effects (e.g., BRDF of different landcovers), and it can magnify small problems in input data. Sometimes it’s better to have a strange-looking image that you know how to interpret than a “normalized” one with subtle errors.&lt;br /&gt;
&lt;br /&gt;
==== Clouds and many other materials are generally transparent to SAR ====&lt;br /&gt;
&lt;br /&gt;
SAR frequencies are typically chosen to cut through weather. While this is a massive advantage of SAR over optical (the average place on Earth is cloudy roughly half the time), it’s also not absolute. Heavy rain, for example, can show up as ghostly features in some bands, so be on the lookout for it. If you see something you can’t interpret that might be weather-related, check the weather for the place at the time of image acquisition!&lt;br /&gt;
&lt;br /&gt;
More generally – beyond the specific case of water vapor in air – SAR interacts with materials differently than light does. For example, it reflects more off liquid water, so you can’t see into shallows with SAR the way you can with optical. On the other hand, it interacts less with certain very dry materials, so it can cut through loose sand, dead vegetation, and so on. (For example, SAR is used to map ancient river systems under the Sahara [https://www.mdpi.com/2073-4441/9/3/194/htm because it can image bedrock under loose, dry sand].) The details of SAR signal interaction depend on wavelength, angle, and other factors; if you’re doing more than casual interpretation of data from a given sensor, it’s a good idea to look it up and familiarize yourself.&lt;br /&gt;
&lt;br /&gt;
==== Polarimetry and interferometry ====&lt;br /&gt;
&lt;br /&gt;
Thus far we have only considered backscatter images: maps of the intensity of reflected radio energy. But a good deal of SAR’s value is beyond this kind of data. As well as recording how much energy is in reflected radio waves, SAR sensors characterize the radio waves themselves.&lt;br /&gt;
&lt;br /&gt;
Let’s use Sentinel-1 as an example for polarimetry. S1 sends radio waves in the vertical polarization, abbreviated V, and records them in both vertical and horizontal, or H, polarizations. In practice, this means that when you download an S1 frame in the usual way, you see two images, labeled VV (where the sensor transmitted V and measured V) and VH (where it transmitted V and measured H). The ratio of the two bands therefore tells you (in a general, statistical way, within the constraints of speckling) how much the surface at a given pixel tends to return a radio signal at that frequency and angle in the same polarization.&lt;br /&gt;
&lt;br /&gt;
Why do we care? Because direct reflection and corner reflectors tend to return waves at the same polarization (for Sentinel-1, always VV), while volumes that scatter waves return proportionally more cross-polarized (VH) waves. The second category is mainly vegetation and soil, while the first is corner reflectors, metal, and so on – proportionally more artificial surfaces. You can literally get a PhD in the nuances of SAR polarimetry, but at the most basic level, it tells you something about surface properties that no other sensor would.&lt;br /&gt;
&lt;br /&gt;
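As a sketch, the VV/VH comparison is often expressed as a ratio in decibels (the epsilon guard here is just a convenience, not any standard’s convention):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def cross_pol_ratio_db(vv, vh, eps=1e-10):
    """Ratio of co-polarized (VV) to cross-polarized (VH) backscatter, in
    decibels. High values suggest direct or corner reflection (built
    surfaces, metal); low values suggest volume scattering (vegetation,
    soil). eps guards against division by zero."""
    vv = np.asarray(vv, dtype=float)
    vh = np.asarray(vh, dtype=float)
    return 10.0 * np.log10((vv + eps) / (vh + eps))
```
&lt;br /&gt;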
Interferometry with SAR, or inSAR, compares wave phase between observations. The phase of a wave is where it is in its cycle when received. Using sound as an example, measuring the phase of a sound at a given moment means not just its volume and pitch but that the sound wave is, say, 23% of the way into its high pressure half, or exactly at the lowest-pressure point.&lt;br /&gt;
&lt;br /&gt;
Suppose we make a SAR image of an area and record not only the amplitude but also the phase of the signal at every pixel. Now, after some time, the satellite’s orbit repeats, and at exactly the same moment in this new orbit (and therefore at exactly the same point in space relative to Earth), we take the same image again. There’s been some change over time that might represent, say, the soil drying out, a road being built, or a tree falling over. But the change in phase over relatively large, coherent regions can be interpreted as the surface getting nearer or farther away by (potentially) very small fractions of a wavelength – on the order of cm. This is an idealized version of inSAR.&lt;br /&gt;
&lt;br /&gt;
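The idealized phase-to-displacement arithmetic can be sketched like this (the default wavelength is roughly Sentinel-1’s C band; sign conventions vary between processors, so treat the sign as an assumption):&lt;br /&gt;
&lt;br /&gt;
```python
from math import pi

def los_displacement_m(phase_change_rad, wavelength_m=0.0555):
    """Line-of-sight displacement implied by an interferometric phase
    change, in the idealized case described above. The factor of 4*pi
    (rather than 2*pi) accounts for the pulse's two-way travel."""
    return wavelength_m * phase_change_rad / (4 * pi)

# A full 2*pi cycle of phase change corresponds to half a wavelength of
# line-of-sight motion -- a few cm at C band.
```
&lt;br /&gt;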
Geologists use this to [https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021GL093043 map earthquakes], but you can also use it for drought (because dry land sags), [https://site.tre-altamira.com/long-term-satellite-study-over-the-london-basin/ tunneling], [https://www.researchgate.net/figure/InSAR-measured-subsidence-rates-on-the-Mosul-dam-Iraq-Negative-values-indicate-motion_fig1_311451580 dam] and [https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-1/Satellites_confirm_sinking_of_San_Francisco_tower building] subsidence, [https://www.nature.com/articles/s41598-020-74957-2 underground explosion monitoring], and so on – in theory, anything that changes the distance between the satellite and the surface. You can even use decoherence (the breakdown of continuity between observations, which makes inSAR hard) for [https://www.academia.edu/44939771/Damage_detection_using_SAR_coherence_statistical_analysis_application_to_Beirut_Lebanon damage detection].&lt;br /&gt;
&lt;br /&gt;
When inSAR works, it’s like magic. You can pick up extremely subtle effects over large areas. It does have limits, like that you can only measure displacement towards or away from the satellite(s), which for SAR is always at least somewhat to the side, which is not necessarily in the direction you actually care about (say, up/down). And as you would expect, it tends to require a lot of very good data (because, for example, satellite orbits are never absolutely perfect repeats), expertise, and minutes to days of fine-tuning.&lt;br /&gt;
&lt;br /&gt;
=== LIDAR ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# 3D (survey-style) lidar&lt;br /&gt;
# 2D (transect-style) lidar&lt;br /&gt;
&lt;br /&gt;
== Image delivery ==&lt;br /&gt;
&lt;br /&gt;
=== Processing levels ===&lt;br /&gt;
&lt;br /&gt;
In theory, data processing levels are standard across the industry. In practice, different providers tend to make up their own definitions as necessary, and you should refer to source-specific documentation. But typically, the commonly seen processing levels are:&lt;br /&gt;
&lt;br /&gt;
'''Level 0''': Unprocessed data, more or less as downlinked to the ground station. Generally not sold or publicly released.&lt;br /&gt;
&lt;br /&gt;
'''Level 1''': Basic data in sensor units, for example TOA radiance. Often has a letter suffix with source-specific meaning, e.g., T to indicate a terrain-corrected version.&lt;br /&gt;
&lt;br /&gt;
'''Level 2''': Derived data in geophysical units, for example surface reflectance. Has been through high-level processing (e.g., atmospheric correction) that contains estimation or modeling.&lt;br /&gt;
&lt;br /&gt;
As a rule of thumb, use level 1 if the imagery itself is the focus and you want to analyze the data in a custom way; use level 2 if you just want something that works out of the box to achieve some further goal. But again, the practical meaning of the levels depends on the dataset, so check to make sure you’re getting what you want.&lt;br /&gt;
&lt;br /&gt;
=== Formats and projections ===&lt;br /&gt;
&lt;br /&gt;
Bands are just 2D arrays of numbers. A typical RGB image from a phone, for example, can be defined as a stack of 3 × 2D arrays of brightness values, one each for red, green, and blue. In virtually all cases, a larger number means more energy was detected at the point represented by that pixel.&lt;br /&gt;
&lt;br /&gt;
Satellite image data usually comes in formats optimized for large payloads and good metadata. These include HDF, NetCDF, NITF, JPEG2000, TIFF, and GeoTIFF. The GDAL library, which is included in QGIS and has command-line tools, can read virtually any reasonable format. If you have a choice, GeoTIFF is a good option – it’s an open standard&amp;lt;ref&amp;gt;http://docs.opengeospatial.org/is/19-008r4/19-008r4.html&amp;lt;/ref&amp;gt; that defines a way of using TIFF metadata to encode geolocation.&lt;br /&gt;
&lt;br /&gt;
A geographic projection is basically an invertible function – a reversible, one-to-one relationship – from a sphere to a plane. (If you know enough about geodesy to mutter “the spheroid, actually”, go read something more appropriate to your level of expertise 😉.) In other words, for a longitude and a latitude on Earth, a given projection gives you a corresponding x and y that you use to store and display the data in a 2D way.&lt;br /&gt;
&lt;br /&gt;
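A toy example of the invertible-function idea, using the simplest possible math (a pure sphere, which is exactly the simplification the geodesists would mutter about):&lt;br /&gt;
&lt;br /&gt;
```python
from math import radians, degrees

R = 6_371_000  # mean Earth radius in meters (spherical simplification)

def project(lon_deg, lat_deg):
    """Toy equirectangular projection: map longitude/latitude directly to
    x/y in meters on a plane. Real projections use fancier math, but all
    of them are invertible pairs like this one."""
    return R * radians(lon_deg), R * radians(lat_deg)

def unproject(x, y):
    """The inverse: recover longitude/latitude from plane coordinates."""
    return degrees(x / R), degrees(y / R)
```
&lt;br /&gt;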
Typical georeferencing metadata says: (1) here is the projection of the data in this file, and (2) here is where the rectangle of data in this file lies on the abstract 2D plane defined by that projection.&lt;br /&gt;
&lt;br /&gt;
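In GeoTIFF-style deliveries, part (2) often boils down to six numbers in GDAL’s “geotransform” convention. A sketch of how they map pixels to map coordinates (the sample origin and pixel size are hypothetical):&lt;br /&gt;
&lt;br /&gt;
```python
def pixel_to_map(geotransform, col, row):
    """Convert a pixel (col, row) to projected map coordinates using a
    GDAL-style affine geotransform: (origin_x, pixel_width, row_rotation,
    origin_y, col_rotation, pixel_height). pixel_height is normally
    negative, because row numbers increase downward."""
    ox, pw, rr, oy, cr, ph = geotransform
    x = ox + col * pw + row * rr
    y = oy + col * cr + row * ph
    return x, y

# Hypothetical 10 m scene whose top-left corner is at (500000, 4649000):
gt = (500_000, 10, 0, 4_649_000, 0, -10)
```
&lt;br /&gt;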
Many data deliveries specify a value (such as 0, the largest possible value, or a floating point NaN) as a “nodata value”, meaning it’s a fill for undefined values and should not be interpreted as a real reading. This can also be done with an alpha channel, for example. It allows any shape of data to be delivered within a rectangle, as virtually all image and multidimensional array formats require. The nodata value is usually obvious even without referring to the metadata; if it’s present at all, it’s usually in a sensible configuration like along the edges. In rare cases, you might see a data delivery with random, noncontiguous nodata pixels (for example, where there’s a data error, or where things that should overlap don’t), and you’ll want to recognize them as such instead of thinking “Huh, there’s a black [or white, or hot pink] patch here! How fascinating! I must investigate more deeply!”&lt;br /&gt;
&lt;br /&gt;
You may also encounter data that is not projected in any strictly defined way. This might be as simple as a photo taken with a phone out a plane window. In theory you could define a projection for it if you knew parameters like the exact 3D location and angle of the phone, its camera’s field of view, and the small distortions introduced by its lens. But in practice it’s usually easier to find known points in the image and “tie down” or georeference the image based on those points. Given at least 3 but ideally more known points, you(r software) can warp the image into some standard projection. It’s deriving an arbitrary projection from pixel space to geographical coordinates by running a regression on the pixel-to-location pairs you provide. These known points are called ground control points, or GCPs. Some data, like Sentinel-1 SAR, is provided unprojected but with GCPs. This leaves more work for the user – even if it’s as simple as telling GDAL or QGIS to project it – but also more flexibility if you want to adjust the GCPs.&lt;br /&gt;
&lt;br /&gt;
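The regression can be sketched as a least-squares affine fit in numpy (real tools also offer higher-order polynomial and spline warps; the GCP values below are made up):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def fit_affine_gcps(pixel_pts, geo_pts):
    """Fit a six-parameter affine transform mapping pixel coordinates to
    geographic coordinates from ground control points, by least squares.
    With exactly 3 GCPs the fit is exact; more GCPs average out the
    errors in individual points."""
    pixel_pts = np.asarray(pixel_pts, dtype=float)
    geo_pts = np.asarray(geo_pts, dtype=float)
    # Design matrix: one [col, row, 1] row per GCP.
    A = np.column_stack([pixel_pts, np.ones(len(pixel_pts))])
    coeffs, *_ = np.linalg.lstsq(A, geo_pts, rcond=None)
    return coeffs  # 3x2 matrix mapping [col, row, 1] -> [x, y]

def apply_affine(coeffs, col, row):
    return tuple(np.array([col, row, 1.0]) @ coeffs)
```
&lt;br /&gt;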
There are several standard ways to represent projections, notably WKT, proj, and EPSG codes. We’ll use EPSG codes here.&lt;br /&gt;
&lt;br /&gt;
Probably the most common projection you will see for raw data is Universal Transverse Mercator, or UTM. It’s actually a family of projections with the same formula but different parameters, each adapted to a different meridional slice of Earth’s surface. These UTM zones are named with numbers and north/south hemispheres: Paris is in UTM zone 31N, Geneva is in 32N, and Sydney is in 56S. (If you’ve used the MGRS grid system, this should sound familiar, but it’s not identical.) Within a zone, UTM is very close to equal-area and conformal, which are the most important properties for a projection if you want to do analytical work. Equal-area means 1 km² is the same number of pixels at any point in the projection, and conformal means that 1 km is the same number of pixels in every direction from any given point within the projection. (On a non-conformal map, circles appear as ovals, squares are rectangles, etc. This is a massive pain in the ass.) UTM is EPSG:32XYY, where X is 6 for N and 7 for S, and YY is the zone number, so for example 13S is EPSG:32713.&lt;br /&gt;
&lt;br /&gt;
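The zone and EPSG bookkeeping is simple enough to sketch directly (this ignores the special zone boundaries around Norway and Svalbard, and the wraparound at exactly 180°E):&lt;br /&gt;
&lt;br /&gt;
```python
def utm_zone(lon_deg):
    """UTM zone number from longitude: zones are 6 degrees wide,
    numbered 1 through 60 eastward from 180 W."""
    return int((lon_deg + 180) // 6) + 1

def utm_epsg(lon_deg, lat_deg):
    """EPSG code for the local UTM projection: 326YY in the northern
    hemisphere, 327YY in the southern."""
    base = 32600 if lat_deg >= 0 else 32700
    return base + utm_zone(lon_deg)
```
&lt;br /&gt;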
For display on standard web maps, people often use web Mercator, a.k.a. spherical Mercator, which is not equal-area at the global scale, but is conformal. This is why web maps make Greenland far too big, but it remains approximately the right shape. For local analysis, web Mercator is fine (essentially equivalent to UTM, actually), and can be a decent choice if you understand the issues with scale across large areas. EPSG:3857.&lt;br /&gt;
&lt;br /&gt;
The other projection you’ll see the most is equirectangular or plate carrée, which uses longitude and latitude directly as x and y coordinates on a plane. It is neither equal-area nor conformal, and basically only exists because the math is easy. It’s often used by people who should know better. Its non-conformality means that any time you’re working near the poles, everything is squeezed, and you’re either overzooming one dimension, losing data in the other, or both. If you just want to scatterplot some points as quickly as possible, equirectangular is fine, but avoid it when doing anything with imagery. EPSG:4326. (Note that this is the EPSG of WGS84, the geodetic standard used by GPS and which defines things like the prime meridian. Many, many other projections refer to WGS84 in their definitions. But using WGS84 as a projection itself, instead of as an ingredient in a projection, is the equirectangular projection.)&lt;br /&gt;
&lt;br /&gt;
The details of projections are notoriously tricky; it’s hard to work with them in a strictly correct and optimal way at all times. It’s the kind of topic that attracts pedantry and flamewars, unfortunately. Here’s some advice, none of it ironclad:&lt;br /&gt;
&lt;br /&gt;
# Most imagery data, if it’s projected at all, is already in a reasonable projection as it arrives from the data provider. If reasonably possible, leave it as-is. Every reprojection involves resampling the data, which generally loses information.&lt;br /&gt;
# You should rarely have to explicitly think about projections. The whole point of a projection is to let you think in terms of pixels and/or meters, and if that’s not happening, something is wrong. Make sure you’re taking full advantage of your tools’ ability to handle these things automatically. Fighting your projection is a cue to take a step back and think about what you’re doing.&lt;br /&gt;
# If you’re working on a multi-source project, choose a suitable projection at the start and project all data into it ''once'', when you import it.&lt;br /&gt;
# Most pain around projections comes from accidentally mixing projections. Don’t do that.&lt;br /&gt;
# The local UTM is usually a reasonable choice.&lt;br /&gt;
&lt;br /&gt;
=== Bundles ===&lt;br /&gt;
&lt;br /&gt;
Imagery is most often supplied in bundles, which are basically directories with image data files, usually separated by band or polarization (at least at level 1), and metadata files (XML, json, etc.). Some analysis tools will have plugins that will open specific types of bundles as single objects, automatically applying calibration data found in the metadata and so forth. In other situations you might open the image file and have to parse the metadata with your own code or by hand. If you’re getting to know a new imagery source, going through and understanding the purpose of everything delivered in a bundle is a great way to start.&lt;br /&gt;
&lt;br /&gt;
=== DN and PN ===&lt;br /&gt;
&lt;br /&gt;
Image formats generally store integers, since they losslessly compress better and are often easier to work with than floating point numbers. However, this presents a problem if, for example, the units being represented are reflectance, which ranges from 0 to 1. If we round every reflectance value to either 0 or 1, we’re delivering 1-bit data that’s probably close to totally useless. To address this, we might scale up to, say, 0 through 100 and say that instead of recording reflectance fraction, we’re recording reflectance percentage – fraction × 100. That still leaves us with less than 7 bits of radiometric resolution, though. Really, we’d like to be able to scale our values into an arbitrary range, maybe 0 through 65,535 to make full use of a 16-bit image, and send it with some metadata that tells how to get it back into some absolute or physically meaningful unit. You could even change the scaling factor per scene to optimize for bright v. dark, for example. And this is what providers generally do. The values actually stored in the image format are called digital numbers, or DN, and the values after scaling (typically with a multiplicative and an additive coefficient) are physical numbers, or PN.&lt;br /&gt;
&lt;br /&gt;
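A sketch of the scaling itself (coefficient names, order, and values vary by provider; check the metadata for your dataset):&lt;br /&gt;
&lt;br /&gt;
```python
def dn_to_pn(dn, scale, offset):
    """Convert stored digital numbers (DN) to physical numbers (PN)
    using the multiplicative and additive coefficients delivered in
    the metadata."""
    return dn * scale + offset

# Hypothetical encoding: 16-bit DNs 0..65535 representing reflectance 0..1.
reflectance = dn_to_pn(32768, 1 / 65535, 0.0)
```
&lt;br /&gt;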
Not all providers do this. For example, Sentinel-2 level 1C data has a globally constant scaling factor, which means different bands have a defined relationship even if you read raw, unscaled pixels out of them, which is great. However, custom coefficients are the most common approach. Basically, don’t assume that pixels actually mean anything with an absolute definition, especially compared to pixels from another band or scene, unless you know that they’re PN.&lt;br /&gt;
&lt;br /&gt;
For most OSINT-relevant analysis, working in DN is a [https://en.wikipedia.org/wiki/Venial_sin venial sin] at worst and often justifiable. But it is useful to know what it means and to recognize situations where you should convert to PN. Any tool designed to work with remote sensing data will at least have some affordance for DN to PN scaling, and, again, may be able to parse the parameters out of a bundle (or in-image-file metadata) and apply them transparently so you never have to think about it.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2216</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2216"/>
		<updated>2022-07-21T01:22:17Z</updated>

		<summary type="html">&lt;p&gt;Vruba: /* DN and PN */ clarification&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this is the heritage of Cold War IMINT workers wanting shadows to estimate structure heights. (If you image around noon, you get places with vertical shadows in the tropics. This gives you depth perception problems, like you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all commercial satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to do one full orbit. This is another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: it extends about 400 km or 250 mi to each side of the ground track, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
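&lt;br /&gt;
To make the trig concrete, here’s a minimal sketch, assuming a flat Earth (which slightly understates the true reach) and the 705 km altitude given above:&lt;br /&gt;

```python
import math

# Flat-Earth estimate of how far a satellite at Landsat's altitude can
# see to one side of its ground track at a 30 degree off-nadir angle.
altitude_km = 705
half_width_km = altitude_km * math.tan(math.radians(30))
# half_width_km comes out to roughly 407 km to each side
```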
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimensions we think of most often are spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor” we understand this by default to mean spatial resolution. This is also called ground sample distance (GSD) or ground resolved distance (GRD), and refers to the dimensions of the pixels of the data. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length dimension: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. If we think about standard optical instruments, which are basically telescopes with CCDs, they do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – a fraction of the arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth at a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) and looking at nadir (straight down). That’s a best case. When looking to the side, at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
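&lt;br /&gt;
A rough way to quantify how the footprint grows is this flat-Earth sketch (an assumption – terrain and Earth curvature are ignored, and the numbers are illustrative, not from any sensor spec):&lt;br /&gt;

```python
import math

def gsd_off_nadir(gsd_nadir_m, theta_deg):
    """Estimate pixel footprint growth when looking off-nadir.
    Perpendicular to the look direction the pixel grows with slant
    range, 1/cos(theta); along the look direction it is additionally
    foreshortened, giving 1/cos(theta)**2."""
    t = math.radians(theta_deg)
    return gsd_nadir_m / math.cos(t), gsd_nadir_m / math.cos(t) ** 2

# A nominal 50 cm sensor looking 30 degrees off-nadir:
across, along = gsd_off_nadir(0.5, 30)
# across is ~0.58 m, along is ~0.67 m: larger, and no longer square
```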
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked consideration with spatial resolution is that pixel area is the square of pixel side length, and area is what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is roughly 9× as clear as 30 m imagery, not merely 3×, all else being equal.&lt;br /&gt;
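&lt;br /&gt;
The arithmetic is trivial but worth internalizing – a quick sketch:&lt;br /&gt;

```python
def pixels_per_square_meter(gsd_m):
    """Pixels needed to cover one square meter of ground at a given GSD."""
    return (1 / gsd_m) ** 2

pixels_per_square_meter(1.0)   # 1 pixel
pixels_per_square_meter(0.5)   # 4 pixels, not 2
pixels_per_square_meter(0.25)  # 16 pixels, not 4
```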
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered less by the atmosphere than visible light, and especially less than blue light. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
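&lt;br /&gt;
As a concrete example of the vegetation point, NDVI is just a normalized ratio of the NIR and red bands. A minimal per-pixel sketch (the reflectance values are made up for illustration):&lt;br /&gt;

```python
def ndvi(nir, red):
    """Normalized difference vegetation index for one pixel.
    Healthy vegetation reflects NIR strongly and absorbs red, pushing
    NDVI toward +1; bare soil and water sit near or below zero."""
    total = nir + red
    return (nir - red) / total if total else 0.0

ndvi(0.45, 0.05)  # lush vegetation: 0.8
ndvi(0.12, 0.10)  # bare soil: about 0.09
```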
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These are called the panchromatic or pan and (collectively) multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, for example, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits (256 to 1024 levels); newer and better ones are typically 12 to 14 (4096 to 16384 levels).&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks in the Sahara, despite the tracks being made of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
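&lt;br /&gt;
To illustrate the “spectral angle” idea – a sketch only; real tools like the spectral angle mapper work on whole image cubes:&lt;br /&gt;

```python
import math

def spectral_angle(a, b):
    """Angle in radians between two pixel spectra treated as vectors.
    A small angle suggests similar material even under different
    illumination, since scaling a vector doesn't change its direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return math.acos(max(-1.0, min(1.0, dot / (norm_a * norm_b))))

# Same (hypothetical) material, one pixel twice as brightly lit: angle ~ 0
spectral_angle([0.1, 0.2, 0.4], [0.2, 0.4, 0.8])
```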
&lt;br /&gt;
In OSINT we usually can’t afford a lot of highest spatial resolution imagery. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
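&lt;br /&gt;
The 1/sqrt(n) claim is easy to check by simulation, assuming independent Gaussian noise per the idealization above:&lt;br /&gt;

```python
import random
import statistics

random.seed(0)  # deterministic for reproducibility
n = 4           # number of bands averaged together
# Simulate 10,000 pixels; each band contributes unit-standard-deviation noise.
noise_of_average = [
    statistics.mean(random.gauss(0, 1) for _ in range(n))
    for _ in range(10_000)
]
sd = statistics.stdev(noise_of_average)  # ~ 1/sqrt(4) = 0.5
```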
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding error: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of talking about effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
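&lt;br /&gt;
Quantization noise can be put in numbers: for a uniform quantizer, the error is spread evenly over one step, giving a standard deviation of step/sqrt(12) (a textbook result; the full-scale value here is arbitrary):&lt;br /&gt;

```python
import math

def quantization_noise_std(bits, full_scale=1.0):
    """Standard deviation of uniform quantization error: one
    quantization step divided by sqrt(12)."""
    step = full_scale / (2 ** bits)
    return step / math.sqrt(12)

quantization_noise_std(8)   # ~0.0011 of full scale
quantization_noise_std(12)  # 16 times smaller
```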
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is using air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh scattering from aerosols, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. Note that this range in μm is 1.567 to 1.651. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue bands of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, and therefore you should favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
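&lt;br /&gt;
In code terms, a combination is just a mapping from sensor band numbers to display channels. A sketch (the pixel values are invented for illustration):&lt;br /&gt;

```python
def compose_rgb(bands, combo):
    """Place sensor band numbers into display channels: combo (5, 4, 3)
    puts band 5 in red, band 4 in green, and band 3 in blue."""
    return {channel: bands[n] for channel, n in zip(("R", "G", "B"), combo)}

# Hypothetical per-band values for one pixel, keyed by band number:
pixel = {2: 0.08, 3: 0.10, 4: 0.12, 5: 0.35}
cir = compose_rgb(pixel, (5, 4, 3))  # Landsat 8/9 color-infrared combination
```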
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will be (typically) some multiple of 2 larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
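&lt;br /&gt;
Here is a per-pixel sketch of one very basic approach, Brovey-style ratio sharpening – not the algorithm any particular vendor actually uses:&lt;br /&gt;

```python
def brovey_pansharpen(r, g, b, pan):
    """Rescale an upsampled color pixel so its mean intensity matches
    the sharp pan value, keeping the band ratios (the 'color') fixed."""
    intensity = (r + g + b) / 3
    if intensity == 0:
        return 0.0, 0.0, 0.0
    scale = pan / intensity
    return r * scale, g * scale, b * scale

# A blurry color pixel combined with the co-located sharp pan value:
sharp = brovey_pansharpen(0.2, 0.3, 0.4, 0.45)
```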
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;br /&gt;
&lt;br /&gt;
==== Georeferencing and orthorectification ====&lt;br /&gt;
&lt;br /&gt;
''Much of this applies outside optical as well – move?''&lt;br /&gt;
&lt;br /&gt;
A raw satellite image of land is an angled view of a rough surface. (Even nominally nadir-pointing satellites acquire imagery that is off-nadir toward its edges.) If you imagine riding on a satellite and looking off to, say, the west, you will see the eastern sides of hills and buildings at flatter angles than you see the western sides – if you can see them at all. To turn a raw image into something that is projected orthographically, like a map, you have to use a terrain model – a 3D map of the planet’s surface. Then you can use information about where the satellite was and the angle its sensor was pointing, and for each pixel in the output image, you can project it out to see at what latitude and longitude it must have intersected the ground. Then you move all the pixels to their coordinates in some convenient projection, and you’ve essentially taken the image out of perspective and made it orthographic.&lt;br /&gt;
&lt;br /&gt;
Except:&lt;br /&gt;
&lt;br /&gt;
* Earth’s surface is rough at every scale, and even “porous” or multiply defined in the sense that there are features like leafless trees that make it hard to define where the optical surface actually ''is'' at any given scale.&lt;br /&gt;
* There is no perfectly [https://en.wikipedia.org/wiki/Accuracy_and_precision accurate, precise], global, completely up-to-date terrain model of the Earth, let alone at a reasonable price. SRTM is pretty good but it’s only about 30 m, stops short of the Arctic, and is 20+ years out of date: there are entire lakes, highway cuts, and reclaimed islands that don’t exist in it.&lt;br /&gt;
* Satellites typically only know where they’re pointing to within the equivalent of about 10 pixels (which, to be fair, is usually an extremely small fraction of a degree), so the pointing data can only narrow things down, not actually tell you where you are.&lt;br /&gt;
* Continental drift means that a continent can move by easily 1 px over the lifetime of a high-end commercial satellite; a major earthquake can discontinuously distort a small region by several m.&lt;br /&gt;
* To properly pin down an image (i.e., to check the reported pointing angle), you need to know the exact 3D location of 3 visible points within it, and realistically more like 10.&lt;br /&gt;
* All these errors can combine.&lt;br /&gt;
* No matter what, you can’t recover occluded features, i.e. things you can’t see in the original data. If you want a high-res satellite image of something like a canyon, you realistically need half a dozen images at very specific angles, which is extremely hard.&lt;br /&gt;
&lt;br /&gt;
We could go on! Georeferencing and orthorectification is a difficult problem. It’s easier for lower-resolution satellites, because a given angular error comes out to fewer pixels. Also, survey-mode satellites like Landsat and Sentinel-2, which are nadir-pointing anyway, put a lot of effort into doing this well. Two Landsat scenes will almost always coregister to well within a pixel. Sentinel-2 is a little less reliable, especially toward the poles. Commercial imagery is often displaced by far more than you would think. One way to see this is to step back in Google Earth Pro’s history tool, especially somewhere relatively remote and rugged.&lt;br /&gt;
&lt;br /&gt;
Here’s a farm in Nepal: 28.553, 84.2415. Just step back in time and watch it jump around underneath the pin. If you really want to be scared, watch the cliff to its north. This is why imagery analysts who understand imagery pipelines rarely use a whole lot of significant digits in their coordinates! You don’t really know where anything on Earth is, in absolute terms, to better than a few meters at best if all you have to go on is a satellite image.&lt;br /&gt;
&lt;br /&gt;
==== Atmospheric correction ====&lt;br /&gt;
&lt;br /&gt;
Over long distances, even in clear weather, the atmosphere scatters and absorbs light. This is why distant hills are low-contrast and blueish (blue light is scattered more). What a satellite actually measures is called top-of-atmosphere radiance, or TOA. This is a measurement of nothing more than the amount of energy received per second, per pixel, per band. It can be measured pretty objectively. However, it’s often not what you want. For one thing, it’s too blue. For another, the amount of blueness and related effects will vary semi-randomly with atmospheric conditions (humidity, maybe dust storms or wildfire smoke, etc.) and predictably with season (sun distance and angle).&lt;br /&gt;
&lt;br /&gt;
Therefore, a reasonable desire is to basically normalize the sun and remove the effects of the atmosphere. What we’re trying to get to here is called surface reflectance (SR). The main issue is that we don’t know the true state of the atmosphere at the moment the image was acquired. The best we can do is to model it and subtract it out. This is one of ''the'' problems in remote sensing, and you could earn a PhD by improving [https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes#Table_of_models one of the major models] by a few percent.&lt;br /&gt;
&lt;br /&gt;
The good news is there’s a brutally simple method that works pretty well most of the time. Dark object subtraction means assuming that the darkest pixel in the image should be pure black. Therefore, if you subtract out however much blue (and green, and so on) signal is present in the darkest pixel, you will have canceled out all the haze. It’s annoying how well this works considering how basic it is. It’s roughly equivalent to the automatic contrast adjustment tool in an image editor like Photoshop, or, to be a little more exact, like using the eyedropper in the Levels tool to set the black point to the darkest pixel.&lt;br /&gt;
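&lt;br /&gt;
A minimal sketch of dark object subtraction on one band (the digital numbers are invented for illustration):&lt;br /&gt;

```python
def dark_object_subtract(band):
    """Assume the darkest pixel in the band should be pure black, so its
    value is treated as additive haze and removed from every pixel."""
    haze = min(band)
    return [value - haze for value in band]

# Hypothetical digital numbers for a hazy band:
dark_object_subtract([820, 1200, 3000])  # [0, 380, 2180]
```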
&lt;br /&gt;
Correction to reflectance may or may not attempt to correct for terrain effects (i.e., relighting the scene). Different pipelines have different conventions for how far to correct or what to call different kinds of correction.&lt;br /&gt;
&lt;br /&gt;
Atmospheric correction is usually not key for OSINT purposes, but any time you find yourself taking exact measurements of pixel values, you should at least know whether you’re working in TOA or in SR, and if SR, you should have a sense of what the pipeline was.&lt;br /&gt;
&lt;br /&gt;
==== Common optical sensor types ====&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# Pushbroom&lt;br /&gt;
# Whiskbroom&lt;br /&gt;
# Full-frame&lt;br /&gt;
&lt;br /&gt;
=== Thermal ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
=== Synthetic aperture radar ===&lt;br /&gt;
&lt;br /&gt;
Synthetic aperture radar, or SAR, creates images with radio waves in wavelengths around 1 cm to 1 m.&lt;br /&gt;
&lt;br /&gt;
As a very first approximation, a SAR image is comparable to an optical image that shows objects that reflect radio waves instead of those that reflect visible light.&lt;br /&gt;
&lt;br /&gt;
==== SAR is not just a regular camera but for radar instead of light ====&lt;br /&gt;
&lt;br /&gt;
Beyond the fact that both modalities create images, SAR works on completely different principles from standard optical imaging, and understanding it requires understanding those principles.&lt;br /&gt;
&lt;br /&gt;
This page will only lightly outline how SAR works; for the math, please refer to SERVIR’s [https://servirglobal.net/Global/Articles/Article/2674/sar-handbook-comprehensive-methodologies-for-forest-monitoring-and-biomass-estimation SAR Handbook] (forest-oriented but with solid fundamentals), the NOAA/NESDIS [https://www.sarusersmanual.com/ Synthetic Aperture Radar Marine User’s Manual], or another good text. Here we will only point out some key ideas. If you want to get full value out of SAR, you should expect to invest at least a few hours in learning how it actually works. There’s a reason SAR experts tend to be a bit snobbish about it: it’s complex, subtle, and highly rewarding.&lt;br /&gt;
&lt;br /&gt;
==== SAR is active ====&lt;br /&gt;
&lt;br /&gt;
Optical sensors are almost all passive: they use energy that objects are already reflecting (usually from the sun) or producing (for example, in the thermal infrared). In contrast, SAR is active: it sends out a pulse of radar energy, roughly analogous to the flash on a camera.&lt;br /&gt;
&lt;br /&gt;
==== SAR resolves space with time, not with focus ====&lt;br /&gt;
&lt;br /&gt;
SAR’s spatial resolution is based on the timing of returning signals. It does not pass the energy it senses through a focusing lens or mirror the way an optical sensor does. This leads to properties that are highly unintuitive if you think of it as merely “optical but in a different frequency” – for example, it does not lose resolution with distance; there is no exact equivalent of perspective. SAR is more like side-scan sonar than like an everyday camera.&lt;br /&gt;
&lt;br /&gt;
==== Speckle ====&lt;br /&gt;
&lt;br /&gt;
Like a laser beam, a SAR signal interferes with itself. At a given moment and a given point, its waves may be canceling out or adding up. This means that a SAR image is intrinsically grainy or stippled-looking. This is not the same as sensor noise, because the effect is physically real and not a problem of errors in measurement. It can be mitigated by downsampling, averaging images from different “pings”, or applying despeckling filters. (A simple local median works reasonably well, but there’s a range of sophistication all the way up to sensor-specific filters based on physical models, extra inputs, fancy machine learning, etc.)&lt;br /&gt;
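&lt;br /&gt;
As a toy illustration of why a local median helps: a single speckle spike can never be the median of its 3 × 3 neighborhood, so it vanishes. (The naive loop here is just for clarity – real despeckling uses library filters or the fancier approaches above.)&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Toy backscatter patch: a uniform field with one bright speckle spike.
patch = np.ones((5, 5))
patch[2, 2] = 100.0

def median3x3(img):
    """Naive 3x3 local median; edge pixels keep their original values."""
    out = img.copy()
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            out[r, c] = np.median(img[r - 1:r + 2, c - 1:c + 2])
    return out

smoothed = median3x3(patch)  # the spike is gone; the field is untouched
```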
&lt;br /&gt;
==== Retroreflection and multiple reflection ====&lt;br /&gt;
&lt;br /&gt;
One consequence of SAR being active sensing is that it sees very bright returns from concave right angles made out of metal, which act as [https://en.wikipedia.org/wiki/Corner_reflector corner reflectors]. (Notice how road signs and markers seem to glow disproportionately in headlights – it’s because those are [https://en.wikipedia.org/wiki/Retroreflector retroreflectors] in the optical range.) Highly developed cities, for example, are very retroreflective to radar. This shows up especially where the angle of the sensor’s view aligns with a street grid; this is called the cardinal effect. (See, for example, [https://www.mdpi.com/2072-4292/12/7/1187/htm this academic paper], where they propose using retroreflection specifically to classify urban landcover. In general, there are very few radio-frequency corner reflectors in nature, and retroreflection is a good sign that you’re looking at a building, vehicle, etc.)&lt;br /&gt;
&lt;br /&gt;
Where the reflection is separated enough from the first reflecting surface that you can see both independently, we use the term multiple reflection (or mirroring or ghosting). This most often happens where tall buildings or bridges are next to or over water. A radio wave may hit the water, then a bridge, then return to the sensor; another may hit a bridge, then the water, and return to the sensor, and so on, and you’ll see images of multiple bridges.&lt;br /&gt;
&lt;br /&gt;
==== Layover and shadowing ====&lt;br /&gt;
&lt;br /&gt;
Layover (a.k.a. relief displacement) is an effect that makes objects at higher elevations appear closer to the sensor. This happens because the radio waves from the top of a vertical object arrive back at the sensor (which is above and to the side of the object) before the radio waves from its base. This is most obvious with truly vertical objects like radio towers and skyscrapers, but surfaces that have any vertical component (hills, for example) will show some degree of layover. Ultimately, layover comes from the difference between slant range, which is what the sensor actually measures – distance from the sensor – and ground range, which is what we tend to intuitively want or expect when we look at a map-like image.&lt;br /&gt;
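&lt;br /&gt;
A back-of-envelope sketch of why the top of a tall object maps closer to the sensor, using made-up geometry (none of these numbers describe a real sensor):&lt;br /&gt;
&lt;br /&gt;
```python
import math

sensor_alt = 700_000.0     # m above the scene (hypothetical)
ground_offset = 500_000.0  # m sideways from the tower (hypothetical)
tower_height = 300.0       # m

# Slant range: straight-line distance from sensor to target.
slant_base = math.hypot(sensor_alt, ground_offset)
slant_top = math.hypot(sensor_alt - tower_height, ground_offset)

# Positive: the top is nearer in slant range, so it is drawn displaced
# toward the sensor rather than away from it.
layover_shift = slant_base - slant_top  # roughly 244 m here
```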
&lt;br /&gt;
The painfully counterintuitive aspect, if you’re looking at a SAR image as if it were an ordinary optical image, is that layover goes in the opposite direction – buildings, for example, lean toward the sensor. For example, if you take a normal photo of a tall building from the south, it will cover the ground to its north. This feels normal because cameras, telescopes, etc., work on the same basic principle as the eye. But if you collect a SAR image of the same building from the south, it will cover the ground to its south. (Also, it won’t actually mask that ground, it will just add its signal in.)&lt;br /&gt;
&lt;br /&gt;
Shadowing is the lack of data returned from surfaces facing away from the sensor. The shadowed side of terrain is stretched out as part of layover.&lt;br /&gt;
&lt;br /&gt;
SAR imagery can be terrain corrected. Basically, this is a process that takes (1) the satellite’s position and the characteristics of its instrument and (2) a DEM or other model of the terrain it was looking at, and uses these to warp the SAR imagery into map coordinates and account for shadowing. Whether this is worthwhile will depend on the quality of the terrain correction algorithm and the data you can give it, and on what you need to analyze.&lt;br /&gt;
&lt;br /&gt;
In general, be cautious with terrain correction, because it can never fully correct for all effects (e.g., BRDF of different landcovers), and it can magnify small problems in input data. Sometimes it’s better to have a strange-looking image that you know how to interpret than a “normalized” one with subtle errors.&lt;br /&gt;
&lt;br /&gt;
==== Clouds and many other materials are generally transparent to SAR ====&lt;br /&gt;
&lt;br /&gt;
SAR frequencies are typically chosen to cut through weather. While this is a massive advantage of SAR over optical (the average place on Earth is cloudy roughly half the time), it’s also not absolute. Heavy rain, for example, can show up as ghostly features in some bands, so be on the lookout for it. If you see something you can’t interpret that might be weather-related, check the weather for the place at the time of image acquisition!&lt;br /&gt;
&lt;br /&gt;
More generally – beyond the specific case of water vapor in air – SAR interacts with materials differently than light does. For example, it reflects more off liquid water, so you can’t see into shallows with SAR the way you can with optical. On the other hand, it interacts less with certain very dry materials, so it can cut through loose sand, dead vegetation, and so on. (For example, SAR is used to map ancient river systems under the Sahara [https://www.mdpi.com/2073-4441/9/3/194/htm because it can image bedrock under loose, dry sand].) The details of SAR signal interaction depend on wavelength, angle, and other factors; if you’re doing more than casual interpretation of data from a given sensor, it’s a good idea to look it up and familiarize yourself.&lt;br /&gt;
&lt;br /&gt;
==== Polarimetry and interferometry ====&lt;br /&gt;
&lt;br /&gt;
Thus far we have only considered backscatter images: maps of the intensity of reflected radio energy. But a good deal of SAR’s value is beyond this kind of data. As well as recording how much energy is in reflected radio waves, SAR sensors characterize the radio waves themselves.&lt;br /&gt;
&lt;br /&gt;
Let’s use Sentinel-1 as an example for polarimetry. S1 sends radio waves in the vertical polarization, abbreviated V, and records them in both vertical and horizontal, or H, polarizations. In practice, this means that when you download an S1 frame in the usual way, you see two images, labeled VV (where the sensor transmitted V and measured V) and VH (where it transmitted V and measured H). The ratio of the two bands therefore tells you (in a general, statistical way, within the constraints of speckling) how much the surface at a given pixel tends to return a radio signal at that frequency and angle in the same polarization.&lt;br /&gt;
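&lt;br /&gt;
As a sketch with made-up linear-power values (real Sentinel-1 work starts with calibration, and speckle means you’d interpret this statistically over areas, not per pixel):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

vv = np.array([[0.20, 0.05],
               [0.50, 0.01]])
vh = np.array([[0.02, 0.025],
               [0.01, 0.005]])

# Band ratio in decibels: high values suggest direct or corner reflection,
# low values suggest volume scattering (vegetation, soil).
ratio_db = 10.0 * np.log10(vv / vh)
```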
&lt;br /&gt;
Why do we care? Because direct reflection and corner reflectors tend to return waves at the same polarization (for Sentinel-1, always VV), while volumes that scatter waves return proportionally more cross-polarized (VH) waves. The second category is mainly vegetation and soil, while the first is corner reflectors, metal, and so on – proportionally more artificial surfaces. You can literally get a PhD in the nuances of SAR polarimetry, but at the most basic level, it tells you something about surface properties that no other sensor would.&lt;br /&gt;
&lt;br /&gt;
Interferometry with SAR, or inSAR, compares wave phase between observations. The phase of a wave is where it is in its cycle when received. Using sound as an example, measuring the phase of a sound at a given moment means recording not just its volume and pitch but also where the wave is in its cycle – say, 23% of the way into its high-pressure half, or exactly at the lowest-pressure point.&lt;br /&gt;
&lt;br /&gt;
Suppose we make a SAR image of an area and record not only the amplitude but also the phase of the signal at every pixel. Now, after some time, the satellite’s orbit repeats, and at exactly the same moment in this new orbit (and therefore at exactly the same point in space relative to Earth), we take the same image again. There’s been some change over time that might represent, say, the soil drying out, a road being built, or a tree falling over. But the change in phase over relatively large, coherent regions can be interpreted as the surface getting nearer or farther away by (potentially) very small fractions of a wavelength – on the order of cm. This is an idealized version of inSAR.&lt;br /&gt;
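&lt;br /&gt;
The idealized phase-to-displacement arithmetic is short. A sketch assuming a C-band wavelength of about 5.5 cm, and ignoring the sign conventions, which vary between processors:&lt;br /&gt;
&lt;br /&gt;
```python
import math

WAVELENGTH = 0.0555  # m, approximately Sentinel-1's C-band wavelength

def los_displacement(delta_phase_rad):
    """Line-of-sight displacement implied by a phase change, for a
    two-way (out-and-back) radar path; returns the magnitude."""
    return WAVELENGTH * delta_phase_rad / (4.0 * math.pi)

# One full fringe (2*pi of phase change) corresponds to half a wavelength
# of line-of-sight motion: under 3 cm at C-band.
fringe = los_displacement(2.0 * math.pi)
```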
&lt;br /&gt;
Geologists use this to [https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021GL093043 map earthquakes], but you can also use it for drought (because dry land sags), [https://site.tre-altamira.com/long-term-satellite-study-over-the-london-basin/ tunneling], [https://www.researchgate.net/figure/InSAR-measured-subsidence-rates-on-the-Mosul-dam-Iraq-Negative-values-indicate-motion_fig1_311451580 dam] and [https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-1/Satellites_confirm_sinking_of_San_Francisco_tower building] subsidence, [https://www.nature.com/articles/s41598-020-74957-2 underground explosion monitoring], and so on – in theory, anything that changes the distance between the satellite and the surface. You can even use decoherence (the breakdown of continuity between observations, which makes inSAR hard) for [https://www.academia.edu/44939771/Damage_detection_using_SAR_coherence_statistical_analysis_application_to_Beirut_Lebanon damage detection].&lt;br /&gt;
&lt;br /&gt;
When inSAR works, it’s like magic. You can pick up extremely subtle effects over large areas. It does have limits, like that you can only measure displacement towards or away from the satellite(s), which for SAR is always at least somewhat to the side, which is not necessarily in the direction you actually care about (say, up/down). And as you would expect, it tends to require a lot of very good data (because, for example, satellite orbits are never absolutely perfect repeats), expertise, and minutes to days of fine-tuning.&lt;br /&gt;
&lt;br /&gt;
=== LIDAR ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# 3D (survey-style) lidar&lt;br /&gt;
# 2D (transect-style) lidar&lt;br /&gt;
&lt;br /&gt;
== Image delivery ==&lt;br /&gt;
&lt;br /&gt;
=== Processing levels ===&lt;br /&gt;
&lt;br /&gt;
In theory, data processing levels are standard across the industry. In practice, different providers tend to make up their own definitions as necessary, and you should refer to source-specific documentation. But typically, the commonly seen processing levels are:&lt;br /&gt;
&lt;br /&gt;
'''Level 0''': Unprocessed data, more or less as downlinked to the ground station. Generally not sold or publicly released.&lt;br /&gt;
&lt;br /&gt;
'''Level 1''': Basic data in sensor units, for example TOA radiance. Often has a letter suffix with source-specific meaning, e.g., T to indicate a terrain-corrected version.&lt;br /&gt;
&lt;br /&gt;
'''Level 2''': Derived data in geophysical units, for example surface reflectance. Has been through high-level processing (e.g., atmospheric correction) that contains estimation or modeling.&lt;br /&gt;
&lt;br /&gt;
As a rule of thumb, use level 1 if the imagery itself is the focus and you want to analyze the data in a custom way; use level 2 if you just want something that works out of the box to achieve some further goal. But again, the practical meaning of the levels depends on the dataset, so check to make sure you’re getting what you want.&lt;br /&gt;
&lt;br /&gt;
=== Formats and projections ===&lt;br /&gt;
&lt;br /&gt;
Bands are just 2D arrays of numbers. A typical RGB image from a phone, for example, can be defined as a stack of 3 × 2D arrays of brightness values, one each for red, green, and blue. In virtually all cases, a larger number means more energy was detected at the point represented by that pixel.&lt;br /&gt;
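&lt;br /&gt;
In numpy terms, for example (toy values):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Three toy 2x2 bands; larger numbers mean more energy detected.
red   = np.array([[10, 200], [30, 40]])
green = np.array([[12, 180], [35, 42]])
blue  = np.array([[ 9, 150], [60, 41]])

# A multiband image is just the bands stacked: shape (bands, rows, cols).
image = np.stack([red, green, blue])
```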
&lt;br /&gt;
Satellite image data usually comes in formats optimized for large payloads and good metadata. These include HDF, NetCDF, NITF, JPEG2000, TIFF, and GeoTIFF. The GDAL library, which is included in QGIS and has command-line tools, can read virtually any reasonable format. If you have a choice, GeoTIFF is a good option – it’s an open standard&amp;lt;ref&amp;gt;http://docs.opengeospatial.org/is/19-008r4/19-008r4.html&amp;lt;/ref&amp;gt; that defines a way of using TIFF metadata to encode geolocation.&lt;br /&gt;
&lt;br /&gt;
A geographic projection is basically an invertible function – a reversible, one-to-one relationship – from a sphere to a plane. (If you know enough about geodesy to mutter “the spheroid, actually”, go read something more appropriate to your level of expertise 😉.) In other words, for a longitude and a latitude on Earth, a given projection gives you a corresponding x and y that you use to store and display the data in a 2D way.&lt;br /&gt;
&lt;br /&gt;
Typical georeferencing metadata says: (1) here is the projection of the data in this file, and (2) here is where the rectangle of data in this file lies on the abstract 2D plane defined by that projection.&lt;br /&gt;
&lt;br /&gt;
Many data deliveries specify a value (such as 0, the largest possible value, or a floating point NaN) as a “nodata value”, meaning it’s a fill for undefined values and should not be interpreted as a real reading. This can also be done with an alpha channel, for example. It allows any shape of data to be delivered within a rectangle, as virtually all image and multidimensional array formats require. The nodata value is usually obvious even without referring to the metadata; if it’s present at all, it’s usually in a sensible configuration like along the edges. In rare cases, you might see a data delivery with random, noncontiguous nodata pixels (for example, where there’s a data error, or where things that should overlap don’t), and you’ll want to recognize them as such instead of thinking “Huh, there’s a black [or white, or hot pink] patch here! How fascinating! I must investigate more deeply!”&lt;br /&gt;
&lt;br /&gt;
You may also encounter data that is not projected in any strictly defined way. This might be as simple as a photo taken with a phone out a plane window. In theory you could define a projection for it if you knew parameters like the exact 3D location and angle of the phone, its camera’s field of view, and the small distortions introduced by its lens. But in practice it’s usually easier to find known points in the image and “tie down” or georeference the image based on those points. Given at least 3 but ideally more known points, you(r software) can warp the image into some standard projection. It’s deriving an arbitrary projection from pixel space to geographical coordinates by running a regression on the pixel-to-location pairs you provide. These known points are called ground control points, or GCPs. Some data, like Sentinel-1 SAR, is provided unprojected but with GCPs. This leaves more work for the user – even if it’s as simple as telling GDAL or QGIS to project it – but also more flexibility if you want to adjust the GCPs.&lt;br /&gt;
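&lt;br /&gt;
A sketch of the regression idea with hypothetical GCPs, fitting a plain affine transform by least squares (real georeferencing tools also offer higher-order polynomials, thin plate splines, etc.):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Hypothetical GCPs: (pixel col, pixel row) paired with (easting, northing).
pixels = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
coords = np.array([[500000.0, 4000000.0], [501000.0, 4000000.0],
                   [500000.0, 3999000.0], [501000.0, 3999000.0]])

# Solve for the affine transform A in [easting, northing] = [col, row, 1] @ A
design = np.hstack([pixels, np.ones((len(pixels), 1))])
A, *_ = np.linalg.lstsq(design, coords, rcond=None)

def pixel_to_map(col, row):
    return np.array([col, row, 1.0]) @ A
```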
&lt;br /&gt;
There are several standard ways to represent projections, notably WKT, proj, and EPSG codes. We’ll use EPSG codes here.&lt;br /&gt;
&lt;br /&gt;
Probably the most common projection you will see for raw data is Universal Transverse Mercator, or UTM. It’s actually a family of projections with the same formula but different parameters, each adapted to a different meridional slice of Earth’s surface. These UTM zones are named with numbers and north/south hemispheres: Paris is in UTM zone 31N, Geneva is in 32N, and Sydney is in 56S. (If you’ve used the MGRS grid system, this should sound familiar, but it’s not identical.) Within a zone, UTM is very close to equal-area and conformal, which are the most important properties for a projection if you want to do analytical work. Equal-area means 1 km² is the same number of pixels at any point in the projection, and conformal means that 1 km is the same number of pixels in every direction from any given point within the projection. (On a non-conformal map, circles appear as ovals, squares are rectangles, etc. This is a massive pain in the ass.) UTM is EPSG:32XYY, where X is 6 for N and 7 for S, and YY is the zone number, so for example 13S is EPSG:32713.&lt;br /&gt;
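&lt;br /&gt;
The zone-to-EPSG arithmetic is mechanical enough to sketch (this ignores the Norway and Svalbard exceptions to the 6° grid):&lt;br /&gt;
&lt;br /&gt;
```python
def utm_epsg(lon, lat):
    """EPSG code of the standard UTM zone containing (lon, lat)."""
    zone = int((lon + 180.0) // 6) + 1
    if lat >= 0:
        return 32600 + zone  # northern hemisphere: EPSG:326YY
    return 32700 + zone      # southern hemisphere: EPSG:327YY

paris = utm_epsg(2.35, 48.86)      # zone 31N
sydney = utm_epsg(151.21, -33.87)  # zone 56S
```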
&lt;br /&gt;
For display on standard web maps, people often use web Mercator, a.k.a. spherical Mercator, which is not equal-area at the global scale, but is conformal. This is why web maps make Greenland far too big, but it remains approximately the right shape. For local analysis, web Mercator is fine (essentially equivalent to UTM, actually), and can be a decent choice if you understand the issues with scale across large areas. EPSG:3857.&lt;br /&gt;
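&lt;br /&gt;
The forward formula for spherical Mercator is short enough to show – a sketch; in practice you’d use a projection library:&lt;br /&gt;
&lt;br /&gt;
```python
import math

R = 6378137.0  # sphere radius used by web Mercator (WGS84 semi-major axis)

def web_mercator(lon_deg, lat_deg):
    """Forward spherical (web) Mercator, as in EPSG:3857."""
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4.0 + math.radians(lat_deg) / 2.0))
    return x, y

# y grows faster and faster toward the poles: still conformal, but this is
# the non-equal-area behavior that inflates Greenland.
```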
&lt;br /&gt;
The other projection you’ll see the most is equirectangular or plate carrée, which uses longitude and latitude directly as x and y coordinates on a plane. It is neither equal-area nor conformal, and basically only exists because the math is easy. It’s often used by people who should know better. Its non-conformality means that any time you’re working near the poles, everything is squeezed, and you’re either overzooming one dimension, losing data in the other, or both. If you just want to scatterplot some points as quickly as possible, equirectangular is fine, but avoid it when doing anything with imagery. EPSG:4326. (Note that this is the EPSG of WGS84, the geodetic standard used by GPS and which defines things like the prime meridian. Many, many other projections refer to WGS84 in their definitions. But using WGS84 as a projection itself, instead of as an ingredient in a projection, is the equirectangular projection.)&lt;br /&gt;
&lt;br /&gt;
The details of projections are notoriously tricky; it’s hard to work with them in a strictly correct and optimal way at all times. It’s the kind of topic that attracts pedantry and flamewars, unfortunately. Here’s some advice, none of it ironclad:&lt;br /&gt;
&lt;br /&gt;
# Most imagery data, if it’s projected at all, is already in a reasonable projection as it arrives from the data provider. If reasonably possible, leave it as-is. Every reprojection involves resampling the data, which generally loses information.&lt;br /&gt;
# You should rarely have to explicitly think about projections. The whole point of a projection is to let you think in terms of pixels and/or meters, and if that’s not happening, something is wrong. Make sure you’re taking full advantage of your tools’ ability to handle these things automatically. Fighting your projection is a cue to take a step back and think about what you’re doing.&lt;br /&gt;
# If you’re working on a multi-source project, choose a suitable projection at the start and project all data into it ''once'', when you import it.&lt;br /&gt;
# Most pain around projections comes from accidentally mixing projections. Don’t do that.&lt;br /&gt;
# The local UTM is usually a reasonable choice.&lt;br /&gt;
&lt;br /&gt;
=== Bundles ===&lt;br /&gt;
&lt;br /&gt;
Imagery is most often supplied in bundles, which are basically directories with image data files, usually separated by band or polarization (at least at level 1), and metadata files (XML, json, etc.). Some analysis tools will have plugins that will open specific types of bundles as single objects, automatically applying calibration data found in the metadata and so forth. In other situations you might open the image file and have to parse the metadata with your own code or by hand. If you’re getting to know a new imagery source, going through and understanding the purpose of everything delivered in a bundle is a great way to start.&lt;br /&gt;
&lt;br /&gt;
=== DN and PN ===&lt;br /&gt;
&lt;br /&gt;
Image formats generally store integers, since they losslessly compress better and are often easier to work with than floating point numbers. However, this presents a problem if, for example, the units being represented are reflectance, which ranges from 0 to 1. If we round every reflectance value to either 0 or 1, we’re delivering 1-bit data that’s probably close to totally useless. To address this, we might scale up to, say, 0 through 100 and say that instead of recording reflectance fraction, we’re recording reflectance percentage – fraction × 100. That still leaves us with less than 7 bits of radiometric resolution, though. Really, we’d like to be able to scale our values into an arbitrary range, maybe 0 through 65,535 to make full use of a 16-bit image, and send it with some metadata that tells how to get it back into some absolute or physically meaningful unit. You could even change the scaling factor per scene to optimize for bright v. dark, for example. And this is what providers generally do. The values actually stored in the image format are called digital numbers, or DN, and the values after scaling (typically with a multiplicative and an additive coefficient) are physical numbers, or PN.&lt;br /&gt;
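&lt;br /&gt;
The scaling itself is one line. A sketch with hypothetical coefficients (real products document their own gain and offset values in the delivery metadata):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

GAIN = 2.0e-05   # hypothetical multiplicative coefficient
OFFSET = -0.1    # hypothetical additive coefficient

dn = np.array([[5000, 10000], [20000, 40000]], dtype=np.uint16)

# Physical numbers from digital numbers: PN = gain * DN + offset
pn = GAIN * dn.astype(np.float64) + OFFSET
```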
&lt;br /&gt;
Not all providers do this. For example, Sentinel-2 level 1C data has a globally constant scaling factor, which means different bands have a defined relationship even if you read raw, unscaled pixels out of them, which is great. However, custom coefficients are the most common approach. Basically, don’t assume that pixels actually mean anything with an absolute definition, especially compared to pixels from another band or scene, unless you know that they’re PN.&lt;br /&gt;
&lt;br /&gt;
For most OSINT-relevant analysis, working in DN is a [https://en.wikipedia.org/wiki/Venial_sin venial sin] at worst and often justifiable. But it is useful to know what it means and to recognize situations where you should convert to PN. Any tool designed to work with remote sensing data will at least have some affordance for DN to PN scaling, and, again, may be able to parse the parameters out of a bundle (or in-image-file metadata) and apply them transparently so you never have to think about it.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2215</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2215"/>
		<updated>2022-07-21T01:19:57Z</updated>

		<summary type="html">&lt;p&gt;Vruba: /* Formats and projections */ I’m so drunk right now&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this is the heritage of cold war IMINT workers wanting shadows to estimate structure heights. (If you image around noon, shadows in the tropics fall almost directly beneath objects. This gives you depth perception problems, like you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all commercial satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to complete one full orbit. This is another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: it extends about 400 km (250 mi) to either side of the ground track, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
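&lt;br /&gt;
That light trig, spelled out (a flat-Earth approximation, which slightly understates the true ground distance):&lt;br /&gt;
&lt;br /&gt;
```python
import math

altitude_km = 705.0  # Landsat-like orbit

# Ground distance from the nadir point for a 30-degree off-nadir view.
reach_km = altitude_km * math.tan(math.radians(30.0))  # about 407 km
```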
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimension(s) we think of most often is spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor” we understand this by default to mean spatial resolution. This is also called ground sample distance (GSD) or ground resolved distance (GRD), and is the dimensions of the pixels of the data. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length dimension: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. If we think about standard optical instruments, which are basically telescopes with CCDs, they do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – the small angle of arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth at a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) and looking on-nadir (straight down). That’s a best case. When looking to the side, at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked aspect of spatial resolution is that pixel area is the square of pixel side length, and area is what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is closer to 9× than 3× as clear as 30 m imagery, all else being equal.&lt;br /&gt;
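&lt;br /&gt;
The arithmetic is easy to check directly – pixel count over a fixed area scales with the inverse square of GSD:&lt;br /&gt;
&lt;br /&gt;
```python
def pixels_per_area(gsd_m, area_m2=1.0):
    """Number of square pixels of side gsd_m covering a given ground area."""
    return area_m2 / (gsd_m ** 2)

for gsd in (1.0, 0.5, 0.25):
    print(f"{gsd} m GSD -> {pixels_per_area(gsd):.0f} px per square meter")

# Relative clarity of 10 m vs 30 m imagery, measured by pixel count:
print(pixels_per_area(10, 900) / pixels_per_area(30, 900))  # 9.0
```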
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered less by the atmosphere than visible light, and especially less than blue light. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These will be called the panchromatic or pan and (collectively) multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, for example, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits (256 to 1024 levels); newer and better ones are typically 12 to 14 (4096 to 16384 levels).&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks on the Sahara, despite the tracks being made out of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford a lot of highest spatial resolution imagery. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
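&lt;br /&gt;
The 1/sqrt(n) behavior is easy to verify by simulation – this sketch assumes the idealized case of independent, unit-standard-deviation Gaussian noise in each band:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(42)
n = 16
# n "bands" consisting of pure noise with standard deviation 1
bands = rng.normal(0.0, 1.0, size=(n, 100_000))
averaged = bands.mean(axis=0)

print(averaged.std())  # close to 1/sqrt(16) = 0.25
```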
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding errors or aliasing: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of talking about effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
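&lt;br /&gt;
Quantization noise itself can be demonstrated with a toy model – here we round a synthetic signal in [0, 1] to various bit depths. (Real sensors quantize radiance counts, so this is only an illustration of the principle.)&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.uniform(0.0, 1.0, 200_000)   # "true" values in [0, 1]

rms_errors = []
for bits in (8, 12, 14):
    levels = 2 ** bits
    # Round each value to the nearest representable level
    quantized = np.round(signal * (levels - 1)) / (levels - 1)
    rms = float(np.sqrt(np.mean((signal - quantized) ** 2)))
    rms_errors.append(rms)
    print(f"{bits}-bit ({levels} levels): RMS quantization error {rms:.2e}")
```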
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is using air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh scattering from aerosols, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. Note that this range is equivalently 1.567 to 1.651 μm. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
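&lt;br /&gt;
As a concrete example, NDVI is nearly a one-liner once the red and NIR bands are arrays – the reflectance values below are made up for illustration:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red + 1e-12)  # epsilon avoids divide-by-zero

# Toy 2x2 scene: healthy vegetation is dark in red, bright in NIR
red = np.array([[0.05, 0.30], [0.08, 0.25]])
nir = np.array([[0.50, 0.32], [0.45, 0.27]])
print(ndvi(red, nir).round(2))
```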
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue bands of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, and therefore you should favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
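&lt;br /&gt;
Mechanically, building a composite is just stacking the chosen bands into the display channels in the given order – a sketch with toy arrays standing in for real band data:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def composite(bands, combo):
    """Stack sensor bands (a dict of 2-D arrays keyed by band number)
    into an RGB image using a combination like (5, 4, 3)."""
    r, g, b = (bands[n] for n in combo)
    return np.dstack([r, g, b])

# Toy 2x2 "bands"; combo 543 on Landsat 8/9 is a classic color-infrared look
bands = {n: np.full((2, 2), n / 10.0) for n in (3, 4, 5)}
img = composite(bands, (5, 4, 3))
print(img.shape)  # (2, 2, 3)
```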
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will be (typically) some multiple of 2 larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
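&lt;br /&gt;
As one example from the basic end of that range, the classic Brovey transform rescales each (already upsampled) multispectral band so the per-pixel intensity matches the pan band – a numpy sketch with synthetic data:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def brovey_pansharpen(rgb, pan):
    """Brovey-style pansharpening sketch: scale each multispectral band
    (already resampled to the pan grid) so mean intensity matches pan."""
    intensity = rgb.mean(axis=2, keepdims=True)
    return rgb * pan[..., np.newaxis] / (intensity + 1e-12)

# Toy data: a 4x4 pan band and RGB already resampled to the pan grid
rng = np.random.default_rng(1)
pan = rng.uniform(0.2, 1.0, (4, 4))
rgb = rng.uniform(0.2, 1.0, (4, 4, 3))
sharp = brovey_pansharpen(rgb, pan)
print(sharp.shape)  # (4, 4, 3); per-pixel mean now equals the pan value
```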
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;br /&gt;
&lt;br /&gt;
==== Georeferencing and orthorectification ====&lt;br /&gt;
&lt;br /&gt;
''Much of this applies outside optical as well – move?''&lt;br /&gt;
&lt;br /&gt;
A raw satellite image of land is an angled view of a rough surface. (Even nominally nadir-pointing satellites acquire imagery that is off-nadir toward its edges.) If you imagine riding on a satellite and looking off to, say, the west, you will see the eastern sides of hills and buildings at flatter angles than you see the western sides – if you can see them at all. To turn a raw image into something that is projected orthographically, like a map, you have to use a terrain model – a 3D map of the planet’s surface. Then you can use information about where the satellite was and the angle its sensor was pointing, and for each pixel in the output image, you can project it out to see at what latitude and longitude it must have intersected the ground. Then you move all the pixels to their coordinates in some convenient projection, and you’ve essentially taken the image out of perspective and made it orthographic.&lt;br /&gt;
&lt;br /&gt;
Except:&lt;br /&gt;
&lt;br /&gt;
* Earth’s surface is rough at every scale, and even “porous” or multiply defined in the sense that there are features like leafless trees that make it hard to define where the optical surface actually ''is'' at any given scale.&lt;br /&gt;
* There is no perfectly [https://en.wikipedia.org/wiki/Accuracy_and_precision accurate, precise], global, completely up-to-date terrain model of the Earth, let alone at a reasonable price. SRTM is pretty good but it’s only about 30 m, stops short of the Arctic, and is 20+ years out of date: there are entire lakes, highway cuts, and reclaimed islands that don’t exist in it.&lt;br /&gt;
* Satellites typically only know where they’re pointing to within the equivalent of about 10 pixels (which, to be fair, is usually an extremely small fraction of a degree), so the pointing data can only narrow things down, not actually tell you where you are.&lt;br /&gt;
* Continental drift means that a continent can move by easily 1 px over the lifetime of a high-end commercial satellite; a major earthquake can discontinuously distort a small region by several m.&lt;br /&gt;
* To properly pin down an image (i.e., to check the reported pointing angle), you need to know the exact 3D location of 3 visible points within it, and realistically more like 10.&lt;br /&gt;
* All these errors can combine.&lt;br /&gt;
* No matter what, you can’t recover occluded features, i.e. things you can’t see in the original data. If you want a high-res satellite image of something like a canyon, you realistically need half a dozen images at very specific angles, which is extremely hard.&lt;br /&gt;
&lt;br /&gt;
We could go on! Georeferencing and orthorectification is a difficult problem. It’s easier for lower-resolution satellites, because a given angular error comes out to fewer pixels. Also, survey-mode satellites like Landsat and Sentinel-2, which are nadir-pointing anyway, put a lot of effort into doing this well. Two Landsat scenes will almost always coregister to well within a pixel. Sentinel-2 is a little less reliable, especially toward the poles. Commercial imagery is often displaced by far more than you would think. One way to see this is to step back in Google Earth Pro’s history tool, especially somewhere relatively remote and rugged.&lt;br /&gt;
&lt;br /&gt;
Here’s a farm in Nepal: 28.553, 84.2415. Just step back in time and watch it jump around underneath the pin. If you really want to be scared, watch the cliff to its north. This is why imagery analysts who understand imagery pipelines rarely use a whole lot of significant digits in their coordinates! You don’t really know where anything on Earth is, in absolute terms, to better than a few meters if all you have to go on is a satellite image.&lt;br /&gt;
&lt;br /&gt;
==== Atmospheric correction ====&lt;br /&gt;
&lt;br /&gt;
Over long distances, even in clear weather, the atmosphere scatters and absorbs light. This is why distant hills are low-contrast and blueish (blue light is scattered more). What a satellite actually measures is called top-of-atmosphere radiance, or TOA. This is a measurement of nothing more than the amount of energy received per second, per pixel, per band. It can be measured pretty objectively. However, it’s often not what you want. For one thing, it’s too blue. For another, the amount of blueness and related effects will vary semi-randomly with atmospheric conditions (humidity, maybe dust storms or wildfire smoke, etc.) and predictably with season (sun distance and angle).&lt;br /&gt;
&lt;br /&gt;
Therefore, a reasonable desire is to basically normalize the sun and remove the effects of the atmosphere. What we’re trying to get to here is called surface reflectance (SR). The main issue is that we don’t know the true state of the atmosphere at the moment the image was acquired. The best we can do is to model it and subtract it out. This is one of ''the'' problems in remote sensing, and you could earn a PhD by improving [https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes#Table_of_models one of the major models] by a few percent.&lt;br /&gt;
&lt;br /&gt;
The good news is there’s a brutally simple method that works pretty well most of the time. Dark object subtraction means assuming that the darkest pixel in the image should be pure black. Therefore, if you subtract out however much blue (and green, and so on) signal is present in the darkest pixel, you will have canceled out all the haze. It’s annoying how well this works considering how basic it is. It’s roughly equivalent to the automatic contrast adjustment tool in an image editor like Photoshop, or, to be a little more exact, like using the eyedropper in the Levels tool to set the black point to the darkest pixel.&lt;br /&gt;
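&lt;br /&gt;
A minimal dark object subtraction looks like this. (The haze values here are synthetic, and practical implementations often use a low percentile rather than the literal per-band minimum, which is sensitive to outliers.)&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def dark_object_subtraction(image):
    """Subtract each band's minimum value, on the assumption that the
    darkest pixel in every band should be pure black."""
    dark = image.min(axis=(0, 1), keepdims=True)  # per-band minimum
    return np.clip(image - dark, 0.0, None)

# Toy hazy scene: a uniform haze offset added to every pixel, per band
rng = np.random.default_rng(7)
scene = rng.uniform(0.0, 0.6, (8, 8, 3))
haze = np.array([0.10, 0.07, 0.04])   # strongest in the "blue" band
corrected = dark_object_subtraction(scene + haze)
print(corrected.min(axis=(0, 1)))     # back to [0, 0, 0]
```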
&lt;br /&gt;
Correction to reflectance may or may not attempt to correct for terrain effects (i.e., relighting the scene). Different pipelines have different conventions for how far to correct or what to call different kinds of correction.&lt;br /&gt;
&lt;br /&gt;
Atmospheric correction is usually not key for OSINT purposes, but any time you find yourself taking exact measurements of pixel values, you should at least know whether you’re working in TOA or in SR, and if SR, you should have a sense of what the pipeline was.&lt;br /&gt;
&lt;br /&gt;
==== Common optical sensor types ====&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# Pushbroom&lt;br /&gt;
# Whiskbroom&lt;br /&gt;
# Full-frame&lt;br /&gt;
&lt;br /&gt;
=== Thermal ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
=== Synthetic aperture radar ===&lt;br /&gt;
&lt;br /&gt;
Synthetic aperture radar, or SAR, creates images with radio waves in wavelengths around 1 cm to 1 m.&lt;br /&gt;
&lt;br /&gt;
As a very first approximation, a SAR image is comparable to an optical image that shows objects that reflect radio waves instead of those that reflect visible light.&lt;br /&gt;
&lt;br /&gt;
==== SAR is not just a regular camera but for radar instead of light ====&lt;br /&gt;
&lt;br /&gt;
Beyond the fact that both modalities create images, SAR works on completely different principles from standard optical imaging, and understanding it requires understanding those principles.&lt;br /&gt;
&lt;br /&gt;
This page will only lightly outline how SAR works; for the math, please refer to SERVIR’s [https://servirglobal.net/Global/Articles/Article/2674/sar-handbook-comprehensive-methodologies-for-forest-monitoring-and-biomass-estimation SAR Handbook] (forest-oriented but with solid fundamentals), the NOAA/NESDIS [https://www.sarusersmanual.com/ Synthetic Aperture Radar Marine User’s Manual], or another good text. Here we will only point out some key ideas. If you want to get full value out of SAR, you should expect to invest at least a few hours in learning how it actually works. There’s a reason SAR experts tend to be a bit snobbish about it: it’s complex, subtle, and highly rewarding.&lt;br /&gt;
&lt;br /&gt;
==== SAR is active ====&lt;br /&gt;
&lt;br /&gt;
Optical sensors are almost all passive: they use energy that objects are already reflecting (usually from the sun) or producing (for example, in the thermal infrared). In contrast, SAR is active: it sends out a pulse of radar energy, roughly analogous to the flash on a camera.&lt;br /&gt;
&lt;br /&gt;
==== SAR resolves space with time, not with focus ====&lt;br /&gt;
&lt;br /&gt;
SAR’s spatial resolution is based on the timing of returning signals. It does not pass the energy it senses through a focusing lens or mirror the way an optical sensor does. This leads to properties that are highly unintuitive if you think of it as merely “optical but in a different frequency” – for example, it does not lose resolution with distance; there is no exact equivalent of perspective. SAR is more like side-scan sonar than like an everyday camera.&lt;br /&gt;
&lt;br /&gt;
==== Speckle ====&lt;br /&gt;
&lt;br /&gt;
Like a laser beam, a SAR signal interferes with itself. At a given moment and a given point, its waves may be canceling out or adding up. This means that a SAR image is intrinsically grainy or stippled-looking. This is not the same as sensor noise, because the effect is physically real and not a problem of errors in measurement. It can be mitigated by downsampling, averaging images from different “pings”, or applying despeckling filters. (A simple local median works reasonably well, but there’s a range of sophistication all the way up to sensor-specific filters based on physical models, extra inputs, fancy machine learning, etc.)&lt;br /&gt;
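&lt;br /&gt;
A naive local-median despeckle is only a few lines of numpy. The exponential distribution below is the standard idealized model for single-look intensity speckle over a uniform target:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def median_despeckle(img, k=3):
    """Naive k-by-k local-median filter (edge pixels left untouched)."""
    pad = k // 2
    out = img.copy()
    for i in range(pad, img.shape[0] - pad):
        for j in range(pad, img.shape[1] - pad):
            out[i, j] = np.median(img[i - pad:i + pad + 1,
                                      j - pad:j + pad + 1])
    return out

# Toy SAR-like scene: uniform backscatter with multiplicative speckle
rng = np.random.default_rng(3)
scene = 0.5 * rng.exponential(1.0, (32, 32))
smoothed = median_despeckle(scene)
print(scene.std(), smoothed.std())  # the filtered image is less grainy
```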
&lt;br /&gt;
==== Retroreflection and multiple reflection ====&lt;br /&gt;
&lt;br /&gt;
One consequence of SAR being active sensing is that it sees very bright returns from concave right angles made out of metal, which act as [https://en.wikipedia.org/wiki/Corner_reflector corner reflectors]. (Notice how road signs and markers seem to glow disproportionately in headlights – it’s because those are [https://en.wikipedia.org/wiki/Retroreflector retroreflectors] in the optical range.) Highly developed cities, for example, are very retroreflective to radar. This shows up especially where the angle of the sensor’s view aligns to a street grid, when it’s called the cardinal effect. (See, for example, [https://www.mdpi.com/2072-4292/12/7/1187/htm this academic paper], where they propose using retroreflection specifically to classify urban landcover. In general, there are very few radio-frequency corner reflectors in nature, and retroreflection is a good sign that you’re looking at a building, vehicle, etc.)&lt;br /&gt;
&lt;br /&gt;
Where the reflection is separated enough from the first reflecting surface that you can see both independently, we use the term multiple reflection (or mirroring or ghosting). This most often happens where tall buildings or bridges are next to or over water. A radio wave may hit the water, then a bridge, then return to the sensor; another may hit a bridge, then the water, and return to the sensor, and so on, and you’ll see images of multiple bridges.&lt;br /&gt;
&lt;br /&gt;
==== Layover and shadowing ====&lt;br /&gt;
&lt;br /&gt;
Layover (a.k.a. relief displacement) is an effect that makes objects at higher elevations appear closer to the sensor. This happens because the radio waves from the top of a vertical object arrive back at the sensor (which is above and to the side of the object) before the radio waves from its base. This is most obvious with truly vertical objects like radio towers and skyscrapers, but surfaces that have any vertical component (hills, for example) will show some degree of layover. Ultimately, layover comes from the difference between slant range, which is what the sensor actually measures – distance from the sensor – and ground range, which is what we tend to intuitively want or expect when we look at a map-like image.&lt;br /&gt;
&lt;br /&gt;
The painfully counterintuitive aspect, if you’re looking at a SAR image as if it were an ordinary optical image, is that layover goes in the opposite direction – buildings, for example, lean toward the sensor. For example, if you take a normal photo of a tall building from the south, it will cover the ground to its north. This feels normal because cameras, telescopes, etc., work on the same basic principle as the eye. But if you collect a SAR image of the same building from the south, it will cover the ground to its south. (Also, it won’t actually mask that ground, it will just add its signal in.)&lt;br /&gt;
&lt;br /&gt;
Shadowing is the lack of data returned from surfaces facing away from the sensor. The shadowed side of terrain is stretched out as part of layover.&lt;br /&gt;
&lt;br /&gt;
SAR imagery can be terrain corrected. Basically, this is a process that takes (1) the satellite’s position and the characteristics of its instrument and (2) a DEM or other model of the terrain it was looking at, and uses these to warp the SAR imagery into map coordinates and account for shadowing. Whether this is worthwhile will depend on the quality of the terrain correction algorithm and the data you can give it, and on what you need to analyze.&lt;br /&gt;
&lt;br /&gt;
In general, be cautious with terrain correction, because it can never fully correct for all effects (e.g., BRDF of different landcovers), and it can magnify small problems in input data. Sometimes it’s better to have a strange-looking image that you know how to interpret than a “normalized” one with subtle errors.&lt;br /&gt;
&lt;br /&gt;
==== Clouds and many other materials are generally transparent to SAR ====&lt;br /&gt;
&lt;br /&gt;
SAR frequencies are typically chosen to cut through weather. While this is a massive advantage of SAR over optical (the average place on Earth is cloudy roughly half the time), it’s also not absolute. Heavy rain, for example, can show up as ghostly features in some bands, so be on the lookout for it. If you see something you can’t interpret that might be weather-related, check the weather for the place at the time of image acquisition!&lt;br /&gt;
&lt;br /&gt;
More generally – beyond the specific case of water vapor in air – SAR interacts with materials differently than light does. For example, it reflects more off liquid water, so you can’t see into shallows with SAR the way you can with optical. On the other hand, it interacts less with certain very dry materials, so it can cut through loose sand, dead vegetation, and so on. (For example, SAR is used to map ancient river systems under the Sahara [https://www.mdpi.com/2073-4441/9/3/194/htm because it can image bedrock under loose, dry sand].) The details of SAR signal interaction depend on wavelength, angle, and other factors; if you’re doing more than casual interpretation of data from a given sensor, it’s a good idea to look it up and familiarize yourself.&lt;br /&gt;
&lt;br /&gt;
==== Polarimetry and interferometry ====&lt;br /&gt;
&lt;br /&gt;
Thus far we have only considered backscatter images: maps of the intensity of reflected radio energy. But a good deal of SAR’s value is beyond this kind of data. As well as recording how much energy is in reflected radio waves, SAR sensors characterize the radio waves themselves.&lt;br /&gt;
&lt;br /&gt;
Let’s use Sentinel-1 as an example for polarimetry. S1 sends radio waves in the vertical polarization, abbreviated V, and records them in both vertical and horizontal, or H, polarizations. In practice, this means that when you download an S1 frame in the usual way, you see two images, labeled VV (where the sensor transmitted V and measured V) and VH (where it transmitted V and measured H). The ratio of the two bands therefore tells you (in a general, statistical way, within the constraints of speckling) how much the surface at a given pixel tends to return a radio signal at that frequency and angle in the same polarization.&lt;br /&gt;
&lt;br /&gt;
Why do we care? Because direct reflection and corner reflectors tend to return waves at the same polarization (for Sentinel-1, always VV), while volumes that scatter waves return proportionally more cross-polarized (VH) waves. The second category is mainly vegetation and soil, while the first is corner reflectors, metal, and so on – proportionally more artificial surfaces. You can literally get a PhD in the nuances of SAR polarimetry, but at the most basic level, it tells you something about surface properties that no other sensor would.&lt;br /&gt;
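&lt;br /&gt;
A rough sketch of the basic per-pixel computation (plain Python; the 0.20 and 0.02 backscatter values are made up for illustration):&lt;br /&gt;
&lt;br /&gt;
```python
import math

def cross_pol_ratio_db(vv, vh):
    """Per-pixel VV/VH backscatter ratio in decibels.

    vv, vh: linear-power backscatter values (not already in dB).
    A high ratio hints at surface or double-bounce scattering;
    a low ratio hints at volume scattering (e.g., vegetation canopy).
    """
    return 10.0 * math.log10(vv / vh)

print(round(cross_pol_ratio_db(0.20, 0.02), 1))  # 10.0 dB
```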
&lt;br /&gt;
Interferometry with SAR, or inSAR, compares wave phase between observations. The phase of a wave is where it is in its cycle when received. Using sound as an example, measuring the phase of a sound at a given moment means recording not just its volume and pitch but also where the wave is in its cycle – say, 23% of the way into its high-pressure half, or exactly at the lowest-pressure point.&lt;br /&gt;
&lt;br /&gt;
Suppose we make a SAR image of an area and record not only the amplitude but also the phase of the signal at every pixel. Now, after some time, the satellite’s orbit repeats, and at exactly the same moment in this new orbit (and therefore at exactly the same point in space relative to Earth), we take the same image again. There’s been some change over time that might represent, say, the soil drying out, a road being built, or a tree falling over. But the change in phase over relatively large, coherent regions can be interpreted as the surface getting nearer or farther away by (potentially) very small fractions of a wavelength – on the order of cm. This is an idealized version of inSAR.&lt;br /&gt;
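&lt;br /&gt;
In idealized form, the arithmetic is simple: the signal travels the path twice, so one full 2π cycle of phase change corresponds to half a wavelength of line-of-sight motion. A sketch, assuming a typical C-band wavelength of about 5.55 cm (roughly Sentinel-1’s), with atmosphere and orbit errors ignored:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def los_displacement_mm(delta_phase_rad, wavelength_m=0.0555):
    """Idealized line-of-sight displacement from an interferometric
    phase change (repeat-pass geometry, two-way path).

    A full 2*pi phase cycle corresponds to half a wavelength of
    motion, because the signal travels the path twice.
    """
    return delta_phase_rad * wavelength_m / (4 * math.pi) * 1000.0

# One full fringe (2*pi) at C band is about 27.75 mm of LOS motion.
print(round(los_displacement_mm(2 * math.pi), 2))  # 27.75
```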
&lt;br /&gt;
Geologists use this to [https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021GL093043 map earthquakes], but you can also use it for drought (because dry land sags), [https://site.tre-altamira.com/long-term-satellite-study-over-the-london-basin/ tunneling], [https://www.researchgate.net/figure/InSAR-measured-subsidence-rates-on-the-Mosul-dam-Iraq-Negative-values-indicate-motion_fig1_311451580 dam] and [https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-1/Satellites_confirm_sinking_of_San_Francisco_tower building] subsidence, [https://www.nature.com/articles/s41598-020-74957-2 underground explosion monitoring], and so on – in theory, anything that changes the distance between the satellite and the surface. You can even use decoherence (the breakdown of continuity between observations, which makes inSAR hard) for [https://www.academia.edu/44939771/Damage_detection_using_SAR_coherence_statistical_analysis_application_to_Beirut_Lebanon damage detection].&lt;br /&gt;
&lt;br /&gt;
When inSAR works, it’s like magic. You can pick up extremely subtle effects over large areas. It does have limits, like that you can only measure displacement towards or away from the satellite(s), which for SAR is always at least somewhat to the side, which is not necessarily in the direction you actually care about (say, up/down). And as you would expect, it tends to require a lot of very good data (because, for example, satellite orbits are never absolutely perfect repeats), expertise, and minutes to days of fine-tuning.&lt;br /&gt;
&lt;br /&gt;
=== LIDAR ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# 3D (survey-style) lidar&lt;br /&gt;
# 2D (transect-style) lidar&lt;br /&gt;
&lt;br /&gt;
== Image delivery ==&lt;br /&gt;
&lt;br /&gt;
=== Processing levels ===&lt;br /&gt;
&lt;br /&gt;
In theory, data processing levels are standard across the industry. In practice, different providers tend to make up their own definitions as necessary, and you should refer to source-specific documentation. But typically, the commonly seen processing levels are:&lt;br /&gt;
&lt;br /&gt;
'''Level 0''': Unprocessed data, more or less as downlinked to the ground station. Generally not sold or publicly released.&lt;br /&gt;
&lt;br /&gt;
'''Level 1''': Basic data in sensor units, for example TOA radiance. Often has a letter suffix with source-specific meaning, e.g., T to indicate a terrain-corrected version.&lt;br /&gt;
&lt;br /&gt;
'''Level 2''': Derived data in geophysical units, for example surface reflectance. Has been through high-level processing (e.g., atmospheric correction) that contains estimation or modeling.&lt;br /&gt;
&lt;br /&gt;
As a rule of thumb, use level 1 if the imagery itself is the focus and you want to analyze the data in a custom way; use level 2 if you just want something that works out of the box to achieve some further goal. But again, the practical meaning of the levels depends on the dataset, so check to make sure you’re getting what you want.&lt;br /&gt;
&lt;br /&gt;
=== Formats and projections ===&lt;br /&gt;
&lt;br /&gt;
Bands are just 2D arrays of numbers. A typical RGB image from a phone, for example, can be defined as a stack of 3 × 2D arrays of brightness values, one each for red, green, and blue. In virtually all cases, a larger number means more energy was detected at the point represented by that pixel.&lt;br /&gt;
&lt;br /&gt;
Satellite image data usually comes in formats optimized for large payloads and good metadata. These include HDF, NetCDF, NITF, JPEG2000, TIFF, and GeoTIFF. The GDAL library, which is included in QGIS and has command-line tools, can read virtually any reasonable format. If you have a choice, GeoTIFF is a good option – it’s an open standard&amp;lt;ref&amp;gt;http://docs.opengeospatial.org/is/19-008r4/19-008r4.html&amp;lt;/ref&amp;gt; that defines a way of using TIFF metadata to encode geolocation.&lt;br /&gt;
&lt;br /&gt;
A geographic projection is basically an invertible function – a reversible, one-to-one relationship – from a sphere to a plane. (If you know enough about geodesy to mutter “the spheroid, actually”, go read something more appropriate to your level of expertise 😉.) In other words, for a longitude and a latitude on Earth, a given projection gives you a corresponding x and y that you use to store and display the data in a 2D way.&lt;br /&gt;
&lt;br /&gt;
Typical georeferencing metadata says: (1) here is the projection of the data in this file, and (2) here is where the rectangle of data in this file lies on the abstract 2D plane defined by that projection.&lt;br /&gt;
&lt;br /&gt;
Many data deliveries specify a value (such as 0, the largest possible value, or a floating point NaN) as a “nodata value”, meaning it’s a fill for undefined values and should not be interpreted as a real reading. This can also be done with an alpha channel, for example. It allows any shape of data to be delivered within a rectangle, as virtually all image and multidimensional array formats require. The nodata value is usually obvious even without referring to the metadata; if it’s present at all, it’s usually in a sensible configuration like along the edges. In rare cases, you might see a data delivery with random, noncontiguous nodata pixels (for example, where there’s a data error, or where things that should overlap don’t), and you’ll want to recognize them as such instead of thinking “Huh, there’s a black [or white, or hot pink] patch here! How fascinating! I must investigate more deeply!”&lt;br /&gt;
&lt;br /&gt;
You may also encounter data that is not projected in any strictly defined way. This might be as simple as a photo taken with a phone out a plane window. In theory you could define a projection for it if you knew parameters like the exact 3D location and angle of the phone, its camera’s field of view, and the small distortions introduced by its lens. But in practice it’s usually easier to find known points in the image and “tie down” or georeference the image based on those points. Given at least 3 but ideally more known points, you(r software) can warp the image into some standard projection. It’s deriving an arbitrary projection from pixel space to geographical coordinates by running a regression on the pixel-to-location pairs you provide. These known points are called ground control points, or GCPs. Some data, like Sentinel-1 SAR, is provided unprojected but with GCPs. This leaves more work for the user – even if it’s as simple as telling GDAL or QGIS to project it – but also more flexibility if you want to adjust the GCPs.&lt;br /&gt;
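&lt;br /&gt;
As an illustration of the regression idea, here is a minimal exact fit of an affine pixel-to-geographic transform from three GCPs (plain Python; real tools use more points, least squares, and often higher-order models, and the image and coordinates below are made up):&lt;br /&gt;
&lt;br /&gt;
```python
def affine_from_gcps(gcps):
    """Fit an affine pixel-to-geographic transform from exactly three
    ground control points, each ((col, row), (lon, lat)).

    Three points give the exact minimal solution of
    lon = a*col + b*row + c, lat = d*col + e*row + f.
    """
    def solve3(m, rhs):
        # Cramer's rule for a 3x3 linear system.
        def det(t):
            return (t[0][0] * (t[1][1] * t[2][2] - t[1][2] * t[2][1])
                  - t[0][1] * (t[1][0] * t[2][2] - t[1][2] * t[2][0])
                  + t[0][2] * (t[1][0] * t[2][1] - t[1][1] * t[2][0]))
        base = det(m)
        coeffs = []
        for i in range(3):
            replaced = [row[:i] + [rhs[j]] + row[i + 1:]
                        for j, row in enumerate(m)]
            coeffs.append(det(replaced) / base)
        return coeffs

    m = [[col, row, 1.0] for (col, row), _ in gcps]
    a, b, c = solve3(m, [lon for _, (lon, _) in gcps])
    d, e, f = solve3(m, [lat for _, (_, lat) in gcps])
    return lambda col, row: (a * col + b * row + c,
                             d * col + e * row + f)

# Hypothetical 100x100 px image covering lon 10-11, lat 45-46.
to_geo = affine_from_gcps([((0, 0), (10.0, 46.0)),
                           ((100, 0), (11.0, 46.0)),
                           ((0, 100), (10.0, 45.0))])
print(tuple(round(v, 6) for v in to_geo(50, 50)))  # (10.5, 45.5)
```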
&lt;br /&gt;
There are several standard ways to represent projections, notably WKT, proj, and EPSG codes. We’ll use EPSG codes here.&lt;br /&gt;
&lt;br /&gt;
Probably the most common projection you will see for raw data is Universal Transverse Mercator, or UTM. It’s actually a family of projections with the same formula but different parameters, each adapted to a different meridional slice of Earth’s surface. These UTM zones are named with numbers and north/south hemispheres: Paris is in UTM zone 31N, Geneva is in 32N, and Sydney is in 56S. (If you’ve used the MGRS grid system, this should sound familiar, but it’s not identical.) Within a zone, UTM is very close to equal-area and conformal, which are the most important properties for a projection if you want to do analytical work. Equal-area means 1 km² is the same number of pixels at any point in the projection, and conformal means that 1 km is the same number of pixels in every direction from any given point within the projection. (On a non-conformal map, circles appear as ovals, squares are rectangles, etc. This is a massive pain in the ass.) UTM is EPSG:32XYY, where X is 6 for N and 7 for S, and YY is the zone number, so for example 13S is EPSG:32713.&lt;br /&gt;
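&lt;br /&gt;
The zone-and-code arithmetic can be sketched in a few lines of Python (ignoring the handful of irregular zones around Norway and Svalbard):&lt;br /&gt;
&lt;br /&gt;
```python
def utm_zone(lon):
    """UTM zone number for a longitude in degrees (ignoring the
    irregular-zone exceptions to the regular 6-degree grid)."""
    return int((lon + 180) // 6) + 1

def utm_epsg(zone, hemisphere):
    """EPSG code for a WGS84 UTM zone: 326YY north, 327YY south."""
    if zone not in range(1, 61):
        raise ValueError("UTM zones run 1 through 60")
    return (32600 if hemisphere.upper() == "N" else 32700) + zone

print(utm_epsg(utm_zone(2.35), "N"))   # Paris: zone 31, EPSG 32631
print(utm_epsg(13, "S"))               # EPSG 32713
```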
&lt;br /&gt;
For display on standard web maps, people often use web Mercator, a.k.a. spherical Mercator, which is not equal-area at the global scale, but is conformal. This is why web maps make Greenland far too big, but it remains approximately the right shape. For local analysis, web Mercator is fine (essentially equivalent to UTM, actually), and can be a decent choice if you understand the issues with scale across large areas. EPSG:3857.&lt;br /&gt;
&lt;br /&gt;
The other projection you’ll see the most is equirectangular or plate carrée, which uses longitude and latitude directly as x and y coordinates on a plane. It is neither equal-area nor conformal, and basically only exists because the math is easy. It’s often used by people who should know better. Its non-conformality means that any time you’re working near the poles, everything is squeezed, and you’re either overzooming one dimension, losing data in the other, or both. If you just want to scatterplot some points as quickly as possible, equirectangular is fine, but avoid it when doing anything with imagery. EPSG:4326. (Note that this is the EPSG of WGS84, the geodetic standard used by GPS and which defines things like the prime meridian. Many, many other projections refer to WGS84 in their definitions. But using WGS84 as a projection itself, instead of as an ingredient in a projection, is the equirectangular projection.)&lt;br /&gt;
&lt;br /&gt;
The details of projections are notoriously tricky; it’s hard to work with them in a strictly correct and optimal way at all times. It’s the kind of topic that attracts pedantry and flamewars, unfortunately. Here’s some advice, none of it ironclad:&lt;br /&gt;
&lt;br /&gt;
# Most imagery data, if it’s projected at all, is already in a reasonable projection as it arrives from the data provider. If reasonably possible, leave it as-is. Every reprojection involves resampling the data, which generally loses information.&lt;br /&gt;
# You should rarely have to explicitly think about projections. The whole point of a projection is to let you think in terms of pixels and/or meters, and if that’s not happening, something is wrong. Make sure you’re taking full advantage of your tools’ ability to handle these things automatically. Fighting your projection is a cue to take a step back and think about what you’re doing.&lt;br /&gt;
# If you’re working on a multi-source project, choose a suitable projection at the start and project all data into it ''once'', when you import it.&lt;br /&gt;
# Most pain around projections comes from accidentally mixing projections. Don’t do that.&lt;br /&gt;
# The local UTM is usually a reasonable choice.&lt;br /&gt;
&lt;br /&gt;
=== Bundles ===&lt;br /&gt;
&lt;br /&gt;
Imagery is most often supplied in bundles, which are basically directories with image data files, usually separated by band or polarization (at least at level 1), and metadata files (XML, json, etc.). Some analysis tools will have plugins that will open specific types of bundles as single objects, automatically applying calibration data found in the metadata and so forth. In other situations you might open the image file and have to parse the metadata with your own code or by hand. If you’re getting to know a new imagery source, going through and understanding the purpose of everything delivered in a bundle is a great way to start.&lt;br /&gt;
&lt;br /&gt;
=== DN and PN ===&lt;br /&gt;
&lt;br /&gt;
Image formats generally store integers, since they losslessly compress better and are often easier to work with than floating point numbers. However, this presents a problem if, for example, the units being represented are reflectance, which ranges from 0 to 1. If we round every reflectance value to either 0 or 1, we’re delivering 1-bit data that’s probably close to totally useless. To address this, we might scale up to, say, 0 through 100 and say that instead of recording reflectance fraction, we’re recording reflectance percentage – fraction × 100. That still leaves us with less than 7 bits of radiometric resolution, though. Really, we’d like to be able to scale our values into an arbitrary range, maybe 0 through 65,535 to make full use of a 16-bit image, and send it with some metadata that tells how to get it back into some absolute or physically meaningful unit. You could even change the scaling factor per scene to optimize for bright v. dark, for example. And this is what providers generally do. The values actually stored in the image format are called digital numbers, or DN, and the values after scaling (typically with a multiplicative and an additive coefficient) are physical numbers, or PN.&lt;br /&gt;
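&lt;br /&gt;
The scaling itself is trivial – the important part is taking the coefficients from the scene or bundle metadata rather than guessing. A sketch, with a made-up scale factor of the kind you might see for reflectance in 16-bit data:&lt;br /&gt;
&lt;br /&gt;
```python
def dn_to_pn(dn, scale, offset=0.0):
    """Convert a stored digital number to a physical number,
    e.g. reflectance = DN * scale + offset.

    The scale and offset here are hypothetical; real values come
    from the scene or bundle metadata.
    """
    return dn * scale + offset

# A 16-bit DN of 4000 with a 1/10000 scale factor: reflectance 0.4
print(dn_to_pn(4000, 1e-4))
```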
&lt;br /&gt;
Not all providers do this. For example, Sentinel-2 level 1C data has a globally constant scaling factor, which means different bands have a defined relationship even if you read raw, unscaled pixels out of them, which is great. Still, per-scene or per-band scaling is the most common approach. Basically, don’t assume that pixels actually mean anything with an absolute definition, especially compared to pixels from another band or scene, unless you know that they’re PN.&lt;br /&gt;
&lt;br /&gt;
For most OSINT-relevant analysis, working in DN is a [https://en.wikipedia.org/wiki/Venial_sin venial sin] at worst and often justifiable. But it is useful to know what it means and to recognize situations where you should convert to PN. Any tool designed to work with remote sensing data will at least have some affordance for DN to PN scaling, and, again, may be able to parse the parameters out of a bundle (or in-image-file metadata) and apply them transparently so you never have to think about it.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2214</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2214"/>
		<updated>2022-07-21T01:17:54Z</updated>

		<summary type="html">&lt;p&gt;Vruba: /* Formats and projections */ various rephrasings and clarifications&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this is the heritage of cold war IMINT workers wanting shadows to estimate structure heights. (If you image around noon, you get places with vertical shadows in the tropics. This gives you depth perception problems, like you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all commercial satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to do one full orbit. This is another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
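&lt;br /&gt;
The desktop-globe comparison in the altitude bullet above is just a ratio, easy to check:&lt;br /&gt;
&lt;br /&gt;
```python
# Scale Landsat 9's 705 km altitude down to a 30 cm desktop globe.
EARTH_RADIUS_KM = 6371.0
globe_radius_mm = 150.0  # 30 cm diameter globe

altitude_mm = 705.0 / EARTH_RADIUS_KM * globe_radius_mm
print(round(altitude_mm, 1))  # about 16.6 mm, i.e. roughly 17 mm
```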
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: about 400 km or 250 mi to either side of the ground track, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
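&lt;br /&gt;
The “light trig”, ignoring Earth curvature (a reasonable simplification at 30°), is just a tangent:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def ground_distance_km(altitude_km, off_nadir_deg):
    """Flat-Earth distance from the ground track reachable at a
    given off-nadir look angle."""
    return altitude_km * math.tan(math.radians(off_nadir_deg))

# From 705 km up, a 30 degree roll reaches about 407 km to the side.
print(round(ground_distance_km(705, 30)))  # 407
```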
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimension(s) we think of most often is spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor”, we understand this by default to mean spatial resolution. Also called ground sample distance (GSD) or ground resolved distance (GRD), this is the size of the data’s pixels on the ground. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. If we think about standard optical instruments, which are basically telescopes with CCDs, they do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – a fraction of the arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth at a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) and looking at nadir (straight down). That’s a best case. When looking to the side, at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked point about spatial resolution is that pixel area is the square of pixel side length, and area is what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is more than 3× as clear as 30 m imagery, all else being equal.&lt;br /&gt;
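A trivial calculation makes the point, under the same square-pixel assumption:&lt;br /&gt;

```python
# Interpretability scales with pixel count (area), not pixel edge length.
def pixels_per_area(gsd_m, area_m2=1.0):
    """Number of pixels covering a given ground area at a square GSD."""
    return area_m2 / gsd_m ** 2

# 10 m vs 30 m imagery over the same square kilometer:
ratio = pixels_per_area(10, 1e6) / pixels_per_area(30, 1e6)
print(ratio)  # ~9: "3x the resolution" is really 9x the pixels
```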
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered less by the atmosphere than visible light, and especially less than blue light. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
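As a concrete example of the NDVI index mentioned above, here’s a minimal numpy sketch; the reflectance values are invented, not from any real sensor:&lt;br /&gt;

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red): healthy vegetation reflects NIR strongly
# and absorbs red, so it scores high. Values below are illustrative reflectances.
red = np.array([0.04, 0.10, 0.30])  # forest, grass, bare soil (assumed)
nir = np.array([0.45, 0.40, 0.35])

ndvi = (nir - red) / (nir + red)
print(ndvi.round(2))  # vegetation scores high, bare soil near zero
```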
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These are called the panchromatic (pan) band and, collectively, the multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, by contrast, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits (256 to 1024 levels); newer and better ones are typically 12 to 14 (4096 to 16384 levels).&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks on the Sahara, despite the tracks being made out of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford a lot of highest spatial resolution imagery. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
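A simulated sketch of both claims – the 1/sqrt(n) gain for independent noise, and the shortfall when bands share a correlated noise component (all parameters invented):&lt;br /&gt;

```python
import numpy as np

# Averaging n bands with independent unit-variance noise reduces noise std
# toward 1/sqrt(n); correlated noise falls short of that.
rng = np.random.default_rng(0)
n = 4
bands = rng.standard_normal((n, 100_000))  # independent unit-variance noise
avg_std = bands.mean(axis=0).std()
print(avg_std)  # close to 1/sqrt(4) = 0.5

# Positively correlated noise: a shared component across all bands,
# weighted so each band still has unit variance (0.8^2 + 0.6^2 = 1).
shared = rng.standard_normal(100_000)
corr = 0.8 * shared + 0.6 * rng.standard_normal((n, 100_000))
print(corr.mean(axis=0).std())  # well above 0.5: correlation spoils the gain
```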
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding error: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of talking about effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
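For an ideal uniform quantizer, quantization noise has a standard deviation of step/sqrt(12) – a standard signal-processing result. A sketch for a signal normalized to [0, 1):&lt;br /&gt;

```python
import math

# Quantization noise for an ideal uniform quantizer: std = step / sqrt(12).
# For a signal normalized to [0, 1), the step is 1 / 2**bits.
def quant_noise_std(bits):
    return (1 / 2 ** bits) / math.sqrt(12)

for bits in (8, 12, 14):
    print(bits, f"{quant_noise_std(bits):.2e}")
# Each extra bit halves the quantization noise, so 12-bit data carries
# 16x less of it than 8-bit data.
```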
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is using air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh scattering from aerosols, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. (Equivalently, 1.567 to 1.651 μm.) || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
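As one example of treating multispectral values as vectors, here’s a sketch of the spectral angle between two pixels (band values invented). It is insensitive to overall brightness, so shaded and sunlit patches of the same material score as similar:&lt;br /&gt;

```python
import numpy as np

# Spectral angle between two multispectral pixels treated as vectors:
# a small angle means a similar spectral shape regardless of brightness.
def spectral_angle(a, b):
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

sunlit = np.array([0.08, 0.12, 0.06, 0.50])  # grass in full sun (B, G, R, NIR)
shaded = np.array([0.04, 0.06, 0.03, 0.25])  # same grass, half as bright
soil   = np.array([0.20, 0.25, 0.30, 0.35])

print(spectral_angle(sunlit, shaded))  # ~0: same spectral shape
print(spectral_angle(sunlit, soil))    # tens of degrees: different material
```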
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue channels of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, and therefore you should favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will be (typically) some multiple of 2 larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
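As a sketch of the “very basic” end of that range, here is a toy Brovey-style pansharpening: scale each (already upsampled) color band by the ratio of the sharp pan band to the color bands’ mean. The 2×2 arrays are invented:&lt;br /&gt;

```python
import numpy as np

# Toy Brovey-style pansharpening on a 2x2 scene. Invented values.
pan = np.array([[0.9, 0.2],
                [0.4, 0.7]])          # sharp panchromatic band
rgb = np.full((3, 2, 2), 0.5)         # multispectral, already upsampled (flat here)
rgb[0] *= 1.2                         # make the scene slightly redder

intensity = rgb.mean(axis=0)          # coarse brightness of the color bands
sharpened = rgb * (pan / intensity)   # inject pan detail into each band

print(sharpened[:, 0, 0])  # color ratios preserved, pan brightness imposed
```

Note the tradeoff this makes explicit: the per-band ratios (the “color”) are taken on faith from the coarse data, while all the spatial detail comes from pan.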
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;br /&gt;
&lt;br /&gt;
==== Georeferencing and orthorectification ====&lt;br /&gt;
&lt;br /&gt;
''Much of this applies outside optical as well – move?''&lt;br /&gt;
&lt;br /&gt;
A raw satellite image of land is an angled view of a rough surface. (Even nominally nadir-pointing satellites acquire imagery that is off-nadir toward its edges.) If you imagine riding on a satellite and looking off to, say, the west, you will see the eastern sides of hills and buildings at flatter angles than you see the western sides – if you can see them at all. To turn a raw image into something that is projected orthographically, like a map, you have to use a terrain model – a 3D map of the planet’s surface. Then you can use information about where the satellite was and the angle its sensor was pointing, and for each pixel in the output image, you can project it out to see at what latitude and longitude it must have intersected the ground. Then you move all the pixels to their coordinates in some convenient projection, and you’ve essentially taken the image out of perspective and made it orthographic.&lt;br /&gt;
&lt;br /&gt;
Except:&lt;br /&gt;
&lt;br /&gt;
* Earth’s surface is rough at every scale, and even “porous” or multiply defined in the sense that there are features like leafless trees that make it hard to define where the optical surface actually ''is'' at any given scale.&lt;br /&gt;
* There is no perfectly [https://en.wikipedia.org/wiki/Accuracy_and_precision accurate, precise], global, completely up-to-date terrain model of the Earth, let alone at a reasonable price. SRTM is pretty good, but it’s only about 30 m, stops short of the Arctic, and is 20+ years out of date: there are entire lakes, highway cuts, and reclaimed islands that don’t exist in it.&lt;br /&gt;
* Satellites typically only know where they’re pointing to within the equivalent of about 10 pixels (which, to be fair, is usually an extremely small fraction of a degree), so the pointing data can only narrow things down, not actually tell you where you are.&lt;br /&gt;
* Continental drift means that a continent can move by easily 1 px over the lifetime of a high-end commercial satellite; a major earthquake can discontinuously distort a small region by several m.&lt;br /&gt;
* To properly pin down an image (i.e., to check the reported pointing angle), you need to know the exact 3D location of 3 visible points within it, and realistically more like 10.&lt;br /&gt;
* All these errors can combine.&lt;br /&gt;
* No matter what, you can’t recover occluded features, i.e. things you can’t see in the original data. If you want a high-res satellite image of something like a canyon, you realistically need half a dozen images at very specific angles, which is extremely hard.&lt;br /&gt;
&lt;br /&gt;
We could go on! Georeferencing and orthorectification are hard problems. They’re easier for lower-resolution satellites, because a given angular error comes out to fewer pixels. Also, survey-mode satellites like Landsat and Sentinel-2, which are nadir-pointing anyway, put a lot of effort into doing this well. Two Landsat scenes will almost always coregister to well within a pixel. Sentinel-2 is a little less reliable, especially toward the poles. Commercial imagery is often displaced by far more than you would think. One way to see this is to step back in Google Earth Pro’s history tool, especially somewhere relatively remote and rugged.&lt;br /&gt;
&lt;br /&gt;
Here’s a farm in Nepal: 28.553, 84.2415. Just step back in time and watch it jump around underneath the pin. If you really want to be scared, watch the cliff to its north. This is why imagery analysts who understand imagery pipelines rarely use a whole lot of significant digits in their coordinates! You don’t really know where anything on Earth is, in absolute terms, to better than a few meters at best if all you have to go on is a satellite image.&lt;br /&gt;
&lt;br /&gt;
==== Atmospheric correction ====&lt;br /&gt;
&lt;br /&gt;
Over long distances, even in clear weather, the atmosphere scatters and absorbs light. This is why distant hills are low-contrast and blueish (blue light is scattered more). What a satellite actually measures is called top-of-atmosphere radiance, or TOA. This is a measurement of nothing more than the amount of energy received per second, per pixel, per band. It can be measured pretty objectively. However, it’s often not what you want. For one thing, it’s too blue. For another, the amount of blueness and related effects will vary semi-randomly with atmospheric conditions (humidity, maybe dust storms or wildfire smoke, etc.) and predictably with season (sun distance and angle).&lt;br /&gt;
&lt;br /&gt;
Therefore, a reasonable desire is to basically normalize the sun and remove the effects of the atmosphere. What we’re trying to get to here is called surface reflectance (SR). The main issue is that we don’t know the true state of the atmosphere at the moment the image was acquired. The best we can do is to model it and subtract it out. This is one of ''the'' problems in remote sensing, and you could earn a PhD by improving [https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes#Table_of_models one of the major models] by a few percent.&lt;br /&gt;
&lt;br /&gt;
The good news is there’s a brutally simple method that works pretty well most of the time. Dark object subtraction means assuming that the darkest pixel in the image should be pure black. Therefore, if you subtract out however much blue (and green, and so on) signal is present in the darkest pixel, you will have canceled out all the haze. It’s annoying how well this works considering how basic it is. It’s roughly equivalent to the automatic contrast adjustment tool in an image editor like Photoshop, or, to be a little more exact, like using the eyedropper in the Levels tool to set the black point to the darkest pixel.&lt;br /&gt;
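A minimal numpy sketch of dark object subtraction on synthetic data (the haze values are invented):&lt;br /&gt;

```python
import numpy as np

# Dark object subtraction: assume the darkest pixel in each band should be
# black, and subtract its value from the whole band. Synthetic TOA values.
rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 0.6, size=(3, 100, 100))  # "true" surface signal
haze = np.array([0.12, 0.06, 0.02])                # additive path radiance, blue-heavy
toa = scene + haze[:, None, None]                  # what the satellite "measures"

corrected = toa - toa.min(axis=(1, 2), keepdims=True)
print(corrected.min(axis=(1, 2)))  # each band now bottoms out at zero
```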
&lt;br /&gt;
Correction to reflectance may or may not attempt to correct for terrain effects (i.e., relighting the scene). Different pipelines have different conventions for how far to correct or what to call different kinds of correction.&lt;br /&gt;
&lt;br /&gt;
Atmospheric correction is usually not key for OSINT purposes, but any time you find yourself taking exact measurements of pixel values, you should at least know whether you’re working in TOA or in SR, and if SR, you should have a sense of what the pipeline was.&lt;br /&gt;
&lt;br /&gt;
==== Common optical sensor types ====&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# Pushbroom&lt;br /&gt;
# Whiskbroom&lt;br /&gt;
# Full-frame&lt;br /&gt;
&lt;br /&gt;
=== Thermal ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
=== Synthetic aperture radar ===&lt;br /&gt;
&lt;br /&gt;
Synthetic aperture radar, or SAR, creates images with radio waves in wavelengths around 1 cm to 1 m.&lt;br /&gt;
&lt;br /&gt;
As a very first approximation, a SAR image is comparable to an optical image that shows objects that reflect radio waves instead of those that reflect visible light.&lt;br /&gt;
&lt;br /&gt;
==== SAR is not just a regular camera that uses radar instead of light ====&lt;br /&gt;
&lt;br /&gt;
Beyond the fact that both modalities create images, SAR works on completely different principles from standard optical imaging, and understanding it requires understanding those principles.&lt;br /&gt;
&lt;br /&gt;
This page will only lightly outline how SAR works; for the math, please refer to SERVIR’s [https://servirglobal.net/Global/Articles/Article/2674/sar-handbook-comprehensive-methodologies-for-forest-monitoring-and-biomass-estimation SAR Handbook] (forest-oriented but with solid fundamentals), the NOAA/NESDIS [https://www.sarusersmanual.com/ Synthetic Aperture Radar Marine User’s Manual], or another good text. Here we will only point out some key ideas. If you want to get full value out of SAR, you should expect to invest at least a few hours in learning how it actually works. There’s a reason SAR experts tend to be a bit snobbish about it: it’s complex, subtle, and highly rewarding.&lt;br /&gt;
&lt;br /&gt;
==== SAR is active ====&lt;br /&gt;
&lt;br /&gt;
Optical sensors are almost all passive: they use energy that objects are already reflecting (usually from the sun) or producing (for example, in the thermal infrared). In contrast, SAR is active: it sends out a pulse of radar energy, roughly analogous to the flash on a camera.&lt;br /&gt;
&lt;br /&gt;
==== SAR resolves space with time, not with focus ====&lt;br /&gt;
&lt;br /&gt;
SAR’s spatial resolution is based on the timing of returning signals. It does not pass the energy it senses through a focusing lens or mirror the way an optical sensor does. This leads to properties that are highly unintuitive if you think of it as merely “optical but in a different frequency” – for example, it does not lose resolution with distance; there is no exact equivalent of perspective. SAR is more like side-scan sonar than like an everyday camera.&lt;br /&gt;
&lt;br /&gt;
==== Speckle ====&lt;br /&gt;
&lt;br /&gt;
Like a laser beam, a SAR signal interferes with itself. At a given moment and a given point, its waves may be canceling out or adding up. This means that a SAR image is intrinsically grainy or stippled-looking. This is not the same as sensor noise, because the effect is physically real and not a problem of errors in measurement. It can be mitigated by downsampling, averaging images from different “pings”, or applying despeckling filters. (A simple local median works reasonably well, but there’s a range of sophistication all the way up to sensor-specific filters based on physical models, extra inputs, fancy machine learning, etc.)&lt;br /&gt;
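A toy simulation of the averaging approach (multilooking) – speckle modeled crudely as multiplicative exponential noise on a flat scene:&lt;br /&gt;

```python
import numpy as np

# Multilooking: averaging several independent "looks" of the same scene
# tames speckle. Speckle is modeled here as multiplicative exponential
# noise on uniform backscatter -- a toy model, not a sensor simulation.
rng = np.random.default_rng(2)
truth = np.ones((64, 64))                               # flat backscatter
looks = truth * rng.exponential(1.0, size=(8, 64, 64))  # 8 speckled looks

single = looks[0]
multi = looks.mean(axis=0)
print(single.std(), multi.std())  # the multilooked image is far less grainy
```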
&lt;br /&gt;
==== Retroreflection and multiple reflection ====&lt;br /&gt;
&lt;br /&gt;
One consequence of SAR being active sensing is that it sees very bright returns from concave right angles made out of metal, which act as [https://en.wikipedia.org/wiki/Corner_reflector corner reflectors]. (Notice how road signs and markers seem to glow disproportionately in headlights – it’s because those are [https://en.wikipedia.org/wiki/Retroreflector retroreflectors] in the optical range.) Highly developed cities, for example, are very retroreflective to radar. This shows up especially where the angle of the sensor’s view aligns to a street grid, when it’s called the cardinal effect. (See, for example, [https://www.mdpi.com/2072-4292/12/7/1187/htm this academic paper], where they propose using retroreflection specifically to classify urban landcover. In general, there are very few radio-frequency corner reflectors in nature, and retroreflection is a good sign that you’re looking at a building, vehicle, etc.)&lt;br /&gt;
&lt;br /&gt;
Where the reflection is separated enough from the first reflecting surface that you can see both independently, we use the term multiple reflection (or mirroring or ghosting). This most often happens where tall buildings or bridges are next to or over water. A radio wave may hit the water, then a bridge, then return to the sensor; another may hit a bridge, then the water, and return to the sensor, and so on, and you’ll see images of multiple bridges.&lt;br /&gt;
&lt;br /&gt;
==== Layover and shadowing ====&lt;br /&gt;
&lt;br /&gt;
Layover (a.k.a. relief displacement) is an effect that makes objects at higher elevations appear closer to the sensor. This happens because the radio waves from the top of a vertical object arrive back at the sensor (which is above and to the side of the object) before the radio waves from its base. This is most obvious with truly vertical objects like radio towers and skyscrapers, but surfaces that have any vertical component (hills, for example) will show some degree of layover. Ultimately, layover comes from the difference between slant range, which is what the sensor actually measures – distance from the sensor – and ground range, which is what we tend to intuitively want or expect when we look at a map-like image.&lt;br /&gt;
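The slant-range geometry can be sketched in a few lines; the altitude, offset, and tower height here are invented:&lt;br /&gt;

```python
import math

# Layover: the top of a tall object is closer to the sensor in slant range,
# so it maps earlier (toward the sensor) than its base. Invented geometry.
sat_alt = 700_000.0        # satellite altitude, m
ground_offset = 300_000.0  # horizontal distance from sub-satellite point, m
height = 300.0             # tower height, m

base_range = math.hypot(sat_alt, ground_offset)
top_range = math.hypot(sat_alt - height, ground_offset)

print(base_range - top_range)  # positive: the top's echo returns first
```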
&lt;br /&gt;
The painfully counterintuitive aspect, if you’re looking at a SAR image as if it were an ordinary optical image, is that layover goes in the opposite direction – buildings, for example, lean toward the sensor. For example, if you take a normal photo of a tall building from the south, it will cover the ground to its north. This feels normal because cameras, telescopes, etc., work on the same basic principle as the eye. But if you collect a SAR image of the same building from the south, it will cover the ground to its south. (Also, it won’t actually mask that ground, it will just add its signal in.)&lt;br /&gt;
&lt;br /&gt;
Shadowing is the lack of data returned from surfaces facing away from the sensor. The shadowed side of terrain is stretched out as part of layover.&lt;br /&gt;
&lt;br /&gt;
SAR imagery can be terrain corrected. Basically, this is a process that takes (1) the satellite’s position and the characteristics of its instrument and (2) a DEM or other model of the terrain it was looking at, and uses these to warp the SAR imagery into map coordinates and account for shadowing. Whether this is worthwhile will depend on the quality of the terrain correction algorithm and the data you can give it, and on what you need to analyze.&lt;br /&gt;
&lt;br /&gt;
In general, be cautious with terrain correction, because it can never fully correct for all effects (e.g., the BRDF of different landcovers), and it can magnify small problems in input data. Sometimes it’s better to have a strange-looking image that you know how to interpret than a “normalized” one with subtle errors.&lt;br /&gt;
&lt;br /&gt;
==== Clouds and many other materials are generally transparent to SAR ====&lt;br /&gt;
&lt;br /&gt;
SAR frequencies are typically chosen to cut through weather. While this is a massive advantage of SAR over optical (the average place on Earth is cloudy roughly half the time), it’s also not absolute. Heavy rain, for example, can show up as ghostly features in some bands, so be on the lookout for it. If you see something you can’t interpret that might be weather-related, check the weather for the place at the time of image acquisition!&lt;br /&gt;
&lt;br /&gt;
More generally – beyond the specific case of water vapor in air – SAR interacts with materials differently than light does. For example, it reflects more off liquid water, so you can’t see into shallows with SAR the way you can with optical. On the other hand, it interacts less with certain very dry materials, so it can cut through loose sand, dead vegetation, and so on. (For example, SAR is used to map ancient river systems under the Sahara [https://www.mdpi.com/2073-4441/9/3/194/htm because it can image bedrock under loose, dry sand].) The details of SAR signal interaction depend on wavelength, angle, and other factors; if you’re doing more than casual interpretation of data from a given sensor, it’s a good idea to look it up and familiarize yourself.&lt;br /&gt;
&lt;br /&gt;
==== Polarimetry and interferometry ====&lt;br /&gt;
&lt;br /&gt;
Thus far we have only considered backscatter images: maps of the intensity of reflected radio energy. But a good deal of SAR’s value is beyond this kind of data. As well as recording how much energy is in reflected radio waves, SAR sensors characterize the radio waves themselves.&lt;br /&gt;
&lt;br /&gt;
Let’s use Sentinel-1 as an example for polarimetry. S1 sends radio waves in the vertical polarization, abbreviated V, and records them in both vertical and horizontal, or H, polarizations. In practice, this means that when you download an S1 frame in the usual way, you see two images, labeled VV (where the sensor transmitted V and measured V) and VH (where it transmitted V and measured H). The ratio of the two bands therefore tells you (in a general, statistical way, within the constraints of speckling) how much the surface at a given pixel tends to return a radio signal at that frequency and angle in the same polarization.&lt;br /&gt;
&lt;br /&gt;
Why do we care? Because direct reflection and corner reflectors tend to return waves at the same polarization (for Sentinel-1, always VV), while volumes that scatter waves return proportionally more cross-polarized (VH) waves. The second category is mainly vegetation and soil, while the first is corner reflectors, metal, and so on – proportionally more artificial surfaces. You can literally get a PhD in the nuances of SAR polarimetry, but at the most basic level, it tells you something about surface properties that no other sensor would.&lt;br /&gt;
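As a minimal sketch of how you might look at the two bands together (the backscatter values here are made up; real Sentinel-1 data needs calibration first), the per-pixel co-pol/cross-pol ratio is often examined in decibels:

```python
import numpy as np

# Hypothetical calibrated backscatter (linear power, not dB) for a few
# pixels of a Sentinel-1-style frame; real values come from the delivery.
vv = np.array([[0.30, 0.02], [0.25, 0.01]])
vh = np.array([[0.03, 0.01], [0.02, 0.005]])

# Co-pol/cross-pol ratio in dB. Higher values suggest direct or
# corner-reflector returns; lower values suggest volume scattering.
ratio_db = 10 * np.log10(vv / vh)
print(ratio_db)
```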
&lt;br /&gt;
Interferometry with SAR, or inSAR, compares wave phase between observations. The phase of a wave is where it is in its cycle when received. Using sound as an example, measuring the phase of a sound wave at a given moment means knowing not just its volume and pitch but also where it is in its cycle – say, 23% of the way into its high-pressure half, or exactly at its lowest-pressure point.&lt;br /&gt;
&lt;br /&gt;
Suppose we make a SAR image of an area and record not only the amplitude but also the phase of the signal at every pixel. Now, after some time, the satellite’s orbit repeats, and at exactly the same moment in this new orbit (and therefore at exactly the same point in space relative to Earth), we take the same image again. There’s been some change over time that might represent, say, the soil drying out, a road being built, or a tree falling over. But the change in phase over relatively large, coherent regions can be interpreted as the surface getting nearer or farther away by (potentially) very small fractions of a wavelength – on the order of cm. This is an idealized version of inSAR.&lt;br /&gt;
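The phase-to-displacement step of that idealized picture can be sketched in a few lines. This assumes Sentinel-1’s roughly 5.55 cm wavelength, perfect coregistration, and no phase-wrapping or sign-convention subtleties – all the hard parts of real inSAR are waved away:

```python
import numpy as np

WAVELENGTH = 0.0555  # Sentinel-1 C-band, in meters

def los_displacement(slc1, slc2, wavelength=WAVELENGTH):
    """Line-of-sight displacement from two co-registered complex (SLC)
    samples; idealized, with no orbit/atmosphere/unwrapping issues."""
    # Interferometric phase: the phase difference between acquisitions.
    phase = np.angle(slc1 * np.conj(slc2))
    # Two-way travel: one full phase cycle = half a wavelength of motion,
    # so displacement = phase / (4*pi) * wavelength.
    return phase / (4 * np.pi) * wavelength

# Simulate a surface that moved 5 mm away from the sensor between passes:
d_true = 0.005
extra_phase = 4 * np.pi * d_true / WAVELENGTH
before = np.exp(1j * 0.7)                  # arbitrary starting phase
after = before * np.exp(-1j * extra_phase)
print(los_displacement(before, after))     # ≈ 0.005
```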
&lt;br /&gt;
Geologists use this to [https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021GL093043 map earthquakes], but you can also use it for drought (because dry land sags), [https://site.tre-altamira.com/long-term-satellite-study-over-the-london-basin/ tunneling], [https://www.researchgate.net/figure/InSAR-measured-subsidence-rates-on-the-Mosul-dam-Iraq-Negative-values-indicate-motion_fig1_311451580 dam] and [https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-1/Satellites_confirm_sinking_of_San_Francisco_tower building] subsidence, [https://www.nature.com/articles/s41598-020-74957-2 underground explosion monitoring], and so on – in theory, anything that changes the distance between the satellite and the surface. You can even use decoherence (the breakdown of continuity between observations, which makes inSAR hard) for [https://www.academia.edu/44939771/Damage_detection_using_SAR_coherence_statistical_analysis_application_to_Beirut_Lebanon damage detection].&lt;br /&gt;
&lt;br /&gt;
When inSAR works, it’s like magic. You can pick up extremely subtle effects over large areas. It does have limits: you can only measure displacement toward or away from the satellite, whose line of sight for SAR is always at least somewhat to the side and therefore not necessarily in the direction you actually care about (say, up/down). And as you would expect, it tends to require a lot of very good data (because, for example, satellite orbits are never absolutely perfect repeats), expertise, and minutes to days of fine-tuning.&lt;br /&gt;
&lt;br /&gt;
=== LIDAR ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# 3D (survey-style) lidar&lt;br /&gt;
# 2D (transect-style) lidar&lt;br /&gt;
&lt;br /&gt;
== Image delivery ==&lt;br /&gt;
&lt;br /&gt;
=== Processing levels ===&lt;br /&gt;
&lt;br /&gt;
In theory, data processing levels are standard across the industry. In practice, different providers tend to make up their own definitions as necessary, and you should refer to source-specific documentation. But typically, the commonly seen processing levels are:&lt;br /&gt;
&lt;br /&gt;
'''Level 0''': Unprocessed data, more or less as downlinked to the ground station. Generally not sold or publicly released.&lt;br /&gt;
&lt;br /&gt;
'''Level 1''': Basic data in sensor units, for example TOA radiance. Often has a letter suffix with source-specific meaning, e.g., T to indicate a terrain-corrected version.&lt;br /&gt;
&lt;br /&gt;
'''Level 2''': Derived data in geophysical units, for example surface reflectance. Has been through high-level processing (e.g., atmospheric correction) that contains estimation or modeling.&lt;br /&gt;
&lt;br /&gt;
As a rule of thumb, use level 1 if the imagery itself is the focus and you want to analyze the data in a custom way; use level 2 if you just want something that works out of the box to achieve some further goal. But again, the practical meaning of the levels depends on the dataset, so check to make sure you’re getting what you want.&lt;br /&gt;
&lt;br /&gt;
=== Formats and projections ===&lt;br /&gt;
&lt;br /&gt;
Bands are just 2D arrays of numbers. A typical RGB image from a phone, for example, can be defined as a stack of 3 × 2D arrays of brightness values, one each for red, green, and blue. In virtually all cases, a larger number means more energy was detected at the point represented by that pixel.&lt;br /&gt;
&lt;br /&gt;
Satellite image data usually comes in formats optimized for large payloads and good metadata. These include HDF, NetCDF, NITF, JPEG2000, TIFF, and GeoTIFF. The GDAL library, which is included in QGIS and has command-line tools, can read virtually any reasonable format. If you have a choice, GeoTIFF is a good option – it’s an open standard&amp;lt;ref&amp;gt;http://docs.opengeospatial.org/is/19-008r4/19-008r4.html&amp;lt;/ref&amp;gt; that defines a way of using TIFF metadata to encode geolocation.&lt;br /&gt;
&lt;br /&gt;
A geographic projection is basically an invertible function – a reversible, one-to-one relationship – from a sphere to a plane. (If you know enough about geodesy to mutter “the spheroid, actually”, go read something more appropriate to your level of expertise 😉.) In other words, for a longitude and a latitude on Earth, a given projection gives you a corresponding x and y that you use to store and display the data in a 2D way.&lt;br /&gt;
&lt;br /&gt;
Typical georeferencing metadata says: (1) here is the projection of the data in this file, and (2) here is where the rectangle of data in this file lies on the abstract 2D plane defined by that projection.&lt;br /&gt;
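Item (2) is commonly expressed as a six-number affine “geotransform” (shown here in GDAL’s ordering; the origin and pixel size are hypothetical):

```python
def pixel_to_map(geotransform, col, row):
    """Map a pixel (col, row) to projected coordinates using a
    GDAL-style six-number geotransform (an affine transform)."""
    ox, pw, rx, oy, ry, ph = geotransform
    x = ox + col * pw + row * rx
    y = oy + col * ry + row * ph
    return x, y

# Hypothetical north-up raster: origin (500000, 4400000), 10 m pixels.
gt = (500000.0, 10.0, 0.0, 4400000.0, 0.0, -10.0)
print(pixel_to_map(gt, 100, 200))  # (501000.0, 4398000.0)
```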
&lt;br /&gt;
Many data deliveries specify a value (such as 0, the largest possible value, or a floating point NaN) as a “nodata value”, meaning it’s a fill for undefined values and should not be interpreted as a real reading. This can also be done with an alpha channel, for example. It allows any shape of data to be delivered within a rectangle, as virtually all image and multidimensional array formats require. The nodata value is usually obvious even without referring to the metadata; if it’s present at all, it’s usually in a sensible configuration like along the edges. In rare cases, you might see a data delivery with random, noncontiguous nodata pixels (for example, where there’s a data error, or where things that should overlap don’t), and you’ll want to recognize them as such instead of thinking “Huh, there’s a black [or white, or hot pink] patch here! How fascinating! I must investigate more deeply!”&lt;br /&gt;
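A minimal sketch of why the nodata value matters, assuming a hypothetical band where 0 is declared as the fill value – any statistic that treats fill pixels as real readings is skewed:

```python
import numpy as np

NODATA = 0  # declared in the delivery's metadata (hypothetical here)

band = np.array([[0, 0, 812], [0, 790, 805], [743, 801, 0]], dtype=np.uint16)

# Mask fill pixels so statistics ignore them instead of dragging toward 0.
valid = np.ma.masked_equal(band, NODATA)
print(valid.mean())   # mean of the real readings only
print(band.mean())    # naive mean, skewed by the fill value
```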
&lt;br /&gt;
You may also encounter data that is not projected in any strictly defined way. This might be as simple as a photo taken with a phone out a plane window. In theory you could define a projection for it if you knew parameters like the exact 3D location and angle of the phone, its camera’s field of view, and the small distortions introduced by its lens. But in practice it’s usually easier to find known points in the image and “tie down” or georeference the image based on those points. Given at least 3 but ideally more known points, you(r software) can warp the image into some standard projection. It’s deriving an arbitrary projection from pixel space to geographical coordinates by running a regression on the pixel-to-location pairs you provide. These known points are called ground control points, or GCPs. Some data, like Sentinel-1 SAR, is provided unprojected but with GCPs. This leaves more work for the user – even if it’s as simple as telling GDAL or QGIS to project it – but also more flexibility if you want to adjust the GCPs.&lt;br /&gt;
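The simplest version of that regression is a least-squares affine fit. Here is a sketch with made-up GCPs that happen to fit a 10 m north-up grid exactly; real GCPs would leave residuals, and real warps often use higher-order polynomial or thin-plate-spline models instead:

```python
import numpy as np

# Hypothetical GCPs: (pixel col, row) -> (easting, northing).
gcps = [
    ((10, 20), (500100.0, 4399800.0)),
    ((400, 35), (504000.0, 4399650.0)),
    ((50, 600), (500500.0, 4394000.0)),
    ((450, 580), (504500.0, 4394200.0)),
]

# Fit x = a*col + b*row + c (and likewise y) by least squares.
A = np.array([[c, r, 1.0] for (c, r), _ in gcps])
B = np.array([xy for _, xy in gcps])
coeffs, *_ = np.linalg.lstsq(A, B, rcond=None)

def to_map(col, row):
    """Apply the fitted pixel-to-map transform."""
    x, y = np.array([col, row, 1.0]) @ coeffs
    return float(x), float(y)

print(to_map(100, 100))  # ~ (501000.0, 4399000.0)
```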
&lt;br /&gt;
There are several standard ways to represent projections, notably WKT, proj, and EPSG codes. We’ll use EPSG codes here.&lt;br /&gt;
&lt;br /&gt;
Probably the most common projection you will see for raw data is Universal Transverse Mercator, or UTM. It’s actually a family of projections with the same formula but different parameters, each adapted to a different meridional slice of Earth’s surface. These UTM zones are named with numbers and north/south hemispheres: Paris is in UTM zone 31N, Geneva is in 32N, and Sydney is in 56S. (If you’ve used the MGRS grid system, this should sound familiar, but it’s not identical.) Within a zone, UTM is very close to equal-area and conformal, which are the most important properties for a projection if you want to do analytical work. Equal-area means 1 km² is the same number of pixels at any point in the projection, and conformal means that 1 km is the same number of pixels in every direction from any given point within the projection. (On a non-conformal map, circles appear as ovals, squares are rectangles, etc. This is a massive pain in the ass.) UTM is EPSG:32XYY, where X is 6 for N and 7 for S, and YY is the zone number, so for example 13S is EPSG:32713.&lt;br /&gt;
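That naming scheme is mechanical enough to compute. A sketch (it ignores the special zone exceptions around Norway and Svalbard):

```python
import math

def utm_epsg(lon, lat):
    """EPSG code of the UTM zone containing (lon, lat).
    Ignores the Norway/Svalbard zone-boundary exceptions."""
    zone = int(math.floor((lon + 180) / 6)) + 1
    return (32600 if lat >= 0 else 32700) + zone

print(utm_epsg(2.35, 48.85))     # Paris  -> 32631 (zone 31N)
print(utm_epsg(151.21, -33.87))  # Sydney -> 32756 (zone 56S)
```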
&lt;br /&gt;
For display on standard web maps, people often use web Mercator, a.k.a. spherical Mercator, which is not equal-area at the global scale, but is conformal. This is why web maps make Greenland far too big, but it remains approximately the right shape. For local analysis, web Mercator is fine (essentially equivalent to UTM, actually), and can be a decent choice if you understand the issues with scale across large areas. EPSG:3857.&lt;br /&gt;
&lt;br /&gt;
The other projection you’ll see the most is equirectangular or plate carrée, which uses longitude and latitude directly as x and y coordinates on a plane. It is neither equal-area nor conformal, and basically only exists because the math is easy. It’s often used by people who should know better. Its non-conformality means that any time you’re working near the poles, everything is squeezed, and you’re either overzooming one dimension, losing data in the other, or both. If you just want to scatterplot some points as quickly as possible, equirectangular is fine, but avoid it when doing anything with imagery. EPSG:4326. (Note that this is the EPSG of WGS84, the geodetic standard used by GPS and which defines things like the prime meridian. Many, many other projections refer to WGS84 in their definitions. But using WGS84 as a projection itself, instead of as an ingredient in a projection, is the equirectangular projection.)&lt;br /&gt;
&lt;br /&gt;
The details of projections are notoriously tricky; it’s hard to work with them in a strictly correct and optimal way at all times. It’s the kind of topic that attracts pedantry and flamewars, unfortunately. Here’s some advice, none of it ironclad:&lt;br /&gt;
&lt;br /&gt;
# Most imagery data, if it’s projected at all, is already in a reasonable projection as it arrives from the data provider. If reasonably possible, leave it as-is. Every reprojection involves resampling the data, which generally loses information.&lt;br /&gt;
# You should rarely have to explicitly think about projections. The whole point of a projection is to let you think in terms of pixels and/or meters, and if that’s not happening, something is wrong. Make sure you’re taking full advantage of your tools’ ability to handle these things automatically. Fighting your projection is a cue to take a step back and think about what you’re doing.&lt;br /&gt;
# If you’re working on a multi-source project, choose a suitable projection at the start and project all data into it ''once'', when you import it.&lt;br /&gt;
# Most pain around projections comes from accidentally mixing projections. Don’t do that.&lt;br /&gt;
# The local UTM is usually a reasonable choice.&lt;br /&gt;
&lt;br /&gt;
=== Bundles ===&lt;br /&gt;
&lt;br /&gt;
Imagery is most often supplied in bundles, which are basically directories with image data files, usually separated by band or polarization (at least at level 1), and metadata files (XML, json, etc.). Some analysis tools will have plugins that will open specific types of bundles as single objects, automatically applying calibration data found in the metadata and so forth. In other situations you might open the image file and have to parse the metadata with your own code or by hand. If you’re getting to know a new imagery source, going through and understanding the purpose of everything delivered in a bundle is a great way to start.&lt;br /&gt;
&lt;br /&gt;
=== DN and PN ===&lt;br /&gt;
&lt;br /&gt;
Image formats generally store integers, since they losslessly compress better and are often easier to work with than floating point numbers. However, this presents a problem if, for example, the units being represented are reflectance, which ranges from 0 to 1. If we round every reflectance value to either 0 or 1, we’re delivering 1-bit data that’s probably close to totally useless. To address this, we might scale up to, say, 0 through 100 and say that instead of recording reflectance fraction, we’re recording reflectance percentage – fraction × 100. That still leaves us with less than 7 bits of radiometric resolution, though. Really, we’d like to be able to scale our values into an arbitrary range, maybe 0 through 65,535 to make full use of a 16-bit image, and send it with some metadata that tells how to get it back into some absolute or physically meaningful unit. You could even change the scaling factor per scene to optimize for bright v. dark, for example. And this is what providers generally do. The values actually stored in the image format are called digital numbers, or DN, and the values after scaling (typically with a multiplicative and an additive coefficient) are physical numbers, or PN.&lt;br /&gt;
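A minimal sketch of that scaling, with hypothetical coefficients – the real scale and offset come from the provider’s metadata:

```python
import numpy as np

def dn_to_pn(dn, scale, offset):
    """Apply a provider's radiometric scaling: PN = DN * scale + offset.
    Coefficient values here are hypothetical; read the real ones from
    the bundle's metadata."""
    return dn.astype(np.float64) * scale + offset

# 16-bit DNs representing reflectance scaled into 0..65535 (hypothetical):
dn = np.array([0, 32768, 65535], dtype=np.uint16)
print(dn_to_pn(dn, scale=1.0 / 65535, offset=0.0))  # ~[0.0, 0.5, 1.0]
```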
&lt;br /&gt;
Not all providers do this. For example, Sentinel-2 level 1C data has a globally constant scaling factor, which means different bands have a defined relationship even if you read raw, unscaled pixels out of them, which is great. However, per-scene or per-band scaling is the more common approach. Basically, don’t assume that pixels actually mean anything with an absolute definition, especially compared to pixels from another band or scene, unless you know that they’re PN.&lt;br /&gt;
&lt;br /&gt;
For most OSINT-relevant analysis, working in DN is a [https://en.wikipedia.org/wiki/Venial_sin venial sin] at worst and often justifiable. But it is useful to know what it means and to recognize situations where you should convert to PN. Any tool designed to work with remote sensing data will at least have some affordance for DN to PN scaling, and, again, may be able to parse the parameters out of a bundle (or in-image-file metadata) and apply them transparently so you never have to think about it.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2213</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2213"/>
		<updated>2022-07-21T00:49:51Z</updated>

		<summary type="html">&lt;p&gt;Vruba: /* Sar resolves space with time, not with focus */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this is the heritage of cold war IMINT workers wanting shadows to estimate structure heights. (If you image around noon, you get vertical shadows in the tropics. This gives you depth perception problems, like you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all commercial satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to complete one full orbit. This is another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: about 400 km or 250 mi from the ground track to either edge at Landsat-like altitudes, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimension(s) we think of most often is spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor”, we understand this by default to mean spatial resolution. This is also called ground sample distance (GSD) or ground resolved distance (GRD), and refers to the size of the pixels of the data. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length dimension: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. If we think about standard optical instruments, which are basically telescopes with CCDs, they do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – a fraction of the arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth at a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) and looking on nadir (straight down). That’s a best case. When looking to the side, at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked consideration on spatial resolution is that pixel area is the square of pixel side length, and it’s what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is 9× as clear as 30 m imagery, not merely 3×, all else being equal.&lt;br /&gt;
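&lt;br /&gt;
This square law is easy to check with a few lines of arithmetic (a throwaway sketch, not tied to any particular sensor):&lt;br /&gt;
&lt;br /&gt;
```python
# Relative clarity (pixel count per unit ground area) at different GSDs,
# assuming square pixels. Illustrative arithmetic only.
def pixels_per_square_meter(gsd_m):
    """Number of pixels covering one square meter of ground."""
    return (1.0 / gsd_m) ** 2

for gsd in [1.0, 0.5, 0.25]:
    print(f"{gsd * 100:.0f} cm GSD -> {pixels_per_square_meter(gsd):.0f} px per square meter")

# The area ratio, not the edge-length ratio, is what matters:
print(pixels_per_square_meter(10) / pixels_per_square_meter(30))  # 9.0
```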
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered less by the atmosphere than visible light, and especially less than blue light. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These will be called the panchromatic or pan and (collectively) multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, for example, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits (256 to 1024 levels); newer and better ones are typically 12 to 14 (4096 to 16384 levels).&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks on the Sahara, despite the tracks being made out of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford a lot of highest spatial resolution imagery. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
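&lt;br /&gt;
Under the same idealizing assumptions (independent noise with unit standard deviation in each input), the 1/sqrt(n) behavior can be verified numerically – a quick sketch using numpy:&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch: averaging n images with independent unit-variance noise
# reduces the noise standard deviation to about 1/sqrt(n).
import numpy as np

rng = np.random.default_rng(42)
n = 16
noisy = [rng.normal(0.0, 1.0, size=(256, 256)) for _ in range(n)]
avg = np.mean(noisy, axis=0)

print(avg.std())        # close to 1/sqrt(16) = 0.25
print(1 / np.sqrt(n))
```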
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding error: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of talking about effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
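&lt;br /&gt;
To make quantization noise concrete, here is a small sketch (synthetic data, no real sensor model) that quantizes a smooth signal at several bit depths and measures the rounding error:&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch: quantization noise grows as bit depth falls. We quantize a
# smooth synthetic signal in [0, 1] at several bit depths and measure
# the RMS rounding error.
import numpy as np

signal = np.linspace(0.0, 1.0, 100_000)

for bits in (14, 12, 10, 8):
    levels = 2 ** bits
    quantized = np.round(signal * (levels - 1)) / (levels - 1)
    rms = np.sqrt(np.mean((signal - quantized) ** 2))
    print(f"{bits} bits: {levels} levels, RMS quantization error about {rms:.2e}")
```
&lt;br /&gt;
The RMS error shrinks by about a factor of 4 for every 2 extra bits, matching the step/sqrt(12) rule of thumb for uniform rounding error.&lt;br /&gt;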
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is using air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh scattering from aerosols, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. Note that this range is 1.567 to 1.651 μm. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
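&lt;br /&gt;
As a minimal example of band math, here is NDVI computed from two hypothetical reflectance arrays (the values are invented; real bands would come from an actual scene):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of a normalized-difference index like NDVI: (NIR - Red) / (NIR + Red).
import numpy as np

nir = np.array([[0.45, 0.50], [0.30, 0.05]])  # hypothetical reflectances
red = np.array([[0.08, 0.10], [0.12, 0.04]])

ndvi = (nir - red) / (nir + red + 1e-9)  # small epsilon avoids divide-by-zero
print(ndvi)  # healthy vegetation approaches +1; water and bare ground are low or negative
```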
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue bands of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, and therefore you should favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will be (typically) some multiple of 2 larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
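&lt;br /&gt;
The overlay intuition can be sketched as a crude, Brovey-style intensity substitution – not a production pansharpening algorithm, and the arrays here are invented:&lt;br /&gt;
&lt;br /&gt;
```python
# Minimal sketch: upsample the RGB to the pan grid, then rescale each
# pixel so its brightness matches the pan band. Crude and illustrative.
import numpy as np

def crude_pansharpen(rgb_coarse, pan):
    """rgb_coarse: (h, w, 3) low-res; pan: (2h, 2w) high-res, same scene."""
    rgb = np.repeat(np.repeat(rgb_coarse, 2, axis=0), 2, axis=1)  # nearest-neighbor upsample
    intensity = rgb.mean(axis=2, keepdims=True)
    return rgb * (pan[..., None] / (intensity + 1e-9))

rgb = np.full((2, 2, 3), 0.4)                      # toy gray multispectral patch
pan = np.linspace(0.2, 0.8, 16).reshape(4, 4)      # toy high-res pan band
print(crude_pansharpen(rgb, pan).shape)  # (4, 4, 3)
```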
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;br /&gt;
&lt;br /&gt;
==== Georeferencing and orthorectification ====&lt;br /&gt;
&lt;br /&gt;
''Much of this applies outside optical as well – move?''&lt;br /&gt;
&lt;br /&gt;
A raw satellite image of land is an angled view of a rough surface. (Even nominally nadir-pointing satellites acquire imagery that is off-nadir toward its edges.) If you imagine riding on a satellite and looking off to, say, the west, you will see the eastern sides of hills and buildings at flatter angles than you see the western sides – if you can see them at all. To turn a raw image into something that is projected orthographically, like a map, you have to use a terrain model – a 3D map of the planet’s surface. Then you can use information about where the satellite was and the angle its sensor was pointing, and for each pixel in the output image, you can project it out to see at what latitude and longitude it must have intersected the ground. Then you move all the pixels to their coordinates in some convenient projection, and you’ve essentially taken the image out of perspective and made it orthographic.&lt;br /&gt;
&lt;br /&gt;
Except:&lt;br /&gt;
&lt;br /&gt;
* Earth’s surface is rough at every scale, and even “porous” or multiply defined in the sense that there are features like leafless trees that make it hard to define where the optical surface actually ''is'' at any given scale.&lt;br /&gt;
* There is no perfectly [https://en.wikipedia.org/wiki/Accuracy_and_precision accurate, precise], global, completely up-to-date terrain model of the Earth, let alone at a reasonable price. SRTM is pretty good but it’s only about 30 m, stops short of the Arctic, and is 20+ years out of date: there are entire lakes, highway cuts, and reclaimed islands that don’t exist in it.&lt;br /&gt;
* Satellites typically only know where they’re pointing to within the equivalent of about 10 pixels (which, to be fair, is usually an extremely small fraction of a degree), so the pointing data can only narrow things down, not actually tell you where you are.&lt;br /&gt;
* Continental drift means that a continent can move by easily 1 px over the lifetime of a high-end commercial satellite; a major earthquake can discontinuously distort a small region by several m.&lt;br /&gt;
* To properly pin down an image (i.e., to check the reported pointing angle), you need to know the exact 3D location of 3 visible points within it, and realistically more like 10.&lt;br /&gt;
* All these errors can combine.&lt;br /&gt;
* No matter what, you can’t recover occluded features, i.e. things you can’t see in the original data. If you want a high-res satellite image of something like a canyon, you realistically need half a dozen images at very specific angles, which is extremely hard.&lt;br /&gt;
&lt;br /&gt;
We could go on! Georeferencing and orthorectification are difficult problems. They’re easier for lower-resolution satellites, because a given angular error comes out to fewer pixels. Also, survey-mode satellites like Landsat and Sentinel-2, which are nadir-pointing anyway, put a lot of effort into doing this well. Two Landsat scenes will almost always coregister to well within a pixel. Sentinel-2 is a little less reliable, especially toward the poles. Commercial imagery is often displaced by far more than you would think. One way to see this is to step back in Google Earth Pro’s history tool, especially somewhere relatively remote and rugged.&lt;br /&gt;
&lt;br /&gt;
Here’s a farm in Nepal: 28.553, 84.2415. Just step back in time and watch it jump around underneath the pin. If you really want to be scared, watch the cliff to its north. This is why imagery analysts who understand imagery pipelines rarely use a whole lot of significant digits in their coordinates! You don’t really know where anything on Earth is, in absolute terms, to better than a few meters at best if all you have to go on is a satellite image.&lt;br /&gt;
&lt;br /&gt;
==== Atmospheric correction ====&lt;br /&gt;
&lt;br /&gt;
Over long distances, even in clear weather, the atmosphere scatters and absorbs light. This is why distant hills are low-contrast and blueish (blue light is scattered more). What a satellite actually measures is called top-of-atmosphere radiance, or TOA. This is a measurement of nothing more than the amount of energy received per second, per pixel, per band. It can be measured pretty objectively. However, it’s often not what you want. For one thing, it’s too blue. For another, the amount of blueness and related effects will vary semi-randomly with atmospheric conditions (humidity, maybe dust storms or wildfire smoke, etc.) and predictably with season (sun distance and angle).&lt;br /&gt;
&lt;br /&gt;
Therefore, a reasonable desire is to basically normalize the sun and remove the effects of the atmosphere. What we’re trying to get to here is called surface reflectance (SR). The main issue is that we don’t know the true state of the atmosphere at the moment the image was acquired. The best we can do is to model it and subtract it out. This is one of ''the'' problems in remote sensing, and you could earn a PhD by improving [https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes#Table_of_models one of the major models] by a few percent.&lt;br /&gt;
&lt;br /&gt;
The good news is there’s a brutally simple method that works pretty well most of the time. Dark object subtraction means assuming that the darkest pixel in the image should be pure black. Therefore, if you subtract out however much blue (and green, and so on) signal is present in the darkest pixel, you will have canceled out all the haze. It’s annoying how well this works considering how basic it is. It’s roughly equivalent to the automatic contrast adjustment tool in an image editor like Photoshop, or, to be a little more exact, like using the eyedropper in the Levels tool to set the black point to the darkest pixel.&lt;br /&gt;
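&lt;br /&gt;
A minimal sketch of dark object subtraction as just described, on an invented toy scene:&lt;br /&gt;
&lt;br /&gt;
```python
# Dark object subtraction: assume the darkest pixel in each band should
# be pure black, and subtract its value from the whole band.
import numpy as np

def dark_object_subtract(bands):
    """bands: (h, w, n_bands) array. Returns a haze-corrected copy."""
    dark = bands.reshape(-1, bands.shape[-1]).min(axis=0)  # per-band minimum
    return np.clip(bands - dark, 0.0, None)

scene = np.array([[[0.30, 0.25], [0.90, 0.80]],
                  [[0.10, 0.05], [0.50, 0.40]]])  # toy 2x2 scene, 2 bands
corrected = dark_object_subtract(scene)
print(corrected.min(axis=(0, 1)))  # each band's new minimum is 0.0
```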
&lt;br /&gt;
Correction to reflectance may or may not attempt to correct for terrain effects (i.e., relighting the scene). Different pipelines have different conventions for how far to correct or what to call different kinds of correction.&lt;br /&gt;
&lt;br /&gt;
Atmospheric correction is usually not key for OSINT purposes, but any time you find yourself taking exact measurements of pixel values, you should at least know whether you’re working in TOA or in SR, and if SR, you should have a sense of what the pipeline was.&lt;br /&gt;
&lt;br /&gt;
==== Common optical sensor types ====&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# Pushbroom&lt;br /&gt;
# Whiskbroom&lt;br /&gt;
# Full-frame&lt;br /&gt;
&lt;br /&gt;
=== Thermal ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
=== Synthetic aperture radar ===&lt;br /&gt;
&lt;br /&gt;
Synthetic aperture radar, or SAR, creates images with radio waves in wavelengths around 1 cm to 1 m.&lt;br /&gt;
&lt;br /&gt;
As a very first approximation, a SAR image is comparable to an optical image that shows objects that reflect radio waves instead of those that reflect visible light.&lt;br /&gt;
&lt;br /&gt;
==== SAR is not just a regular camera but for radar instead of light ====&lt;br /&gt;
&lt;br /&gt;
Beyond the fact that both modalities create images, SAR works on completely different principles from standard optical imaging, and understanding it requires understanding those principles.&lt;br /&gt;
&lt;br /&gt;
This page will only lightly outline how SAR works; for the math, please refer to SERVIR’s [https://servirglobal.net/Global/Articles/Article/2674/sar-handbook-comprehensive-methodologies-for-forest-monitoring-and-biomass-estimation SAR Handbook] (forest-oriented but with solid fundamentals), the NOAA/NESDIS [https://www.sarusersmanual.com/ Synthetic Aperture Radar Marine User’s Manual], or another good text. Here we will only point out some key ideas. If you want to get full value out of SAR, you should expect to invest at least a few hours in learning how it actually works. There’s a reason SAR experts tend to be a bit snobbish about it: it’s complex, subtle, and highly rewarding.&lt;br /&gt;
&lt;br /&gt;
==== SAR is active ====&lt;br /&gt;
&lt;br /&gt;
Optical sensors are almost all passive: they use energy that objects are already reflecting (usually from the sun) or producing (for example, in the thermal infrared). In contrast, SAR is active: it sends out a pulse of radar energy, roughly analogous to the flash on a camera.&lt;br /&gt;
&lt;br /&gt;
==== SAR resolves space with time, not with focus ====&lt;br /&gt;
&lt;br /&gt;
SAR’s spatial resolution is based on the timing of returning signals. It does not pass the energy it senses through a focusing lens or mirror the way an optical sensor does. This leads to properties that are highly unintuitive if you think of it as merely “optical but in a different frequency” – for example, it does not lose resolution with distance; there is no exact equivalent of perspective. SAR is more like side-scan sonar than like an everyday camera.&lt;br /&gt;
&lt;br /&gt;
==== Speckle ====&lt;br /&gt;
&lt;br /&gt;
Like a laser beam, a SAR signal interferes with itself. At a given moment and a given point, its waves may be canceling out or adding up. This means that a SAR image is intrinsically grainy or stippled-looking. This is not the same as sensor noise, because the effect is physically real and not a problem of errors in measurement. It can be mitigated by downsampling, averaging images from different “pings”, or applying despeckling filters. (A simple local median works reasonably well, but there’s a range of sophistication all the way up to sensor-specific filters based on physical models, extra inputs, fancy machine learning, etc.)&lt;br /&gt;
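&lt;br /&gt;
A simple local-median despeckle, as mentioned, can be sketched in a few lines (a naive implementation on a synthetic speckled scene; real despeckling filters are far more sophisticated):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch: naive k x k median filter applied to a synthetically speckled
# scene, using a common multiplicative (exponential) speckle model.
import numpy as np

def median_despeckle(img, k=3):
    """Naive k x k median filter; edge pixels are left unchanged."""
    out = img.copy()
    r = k // 2
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            out[i, j] = np.median(img[i - r:i + r + 1, j - r:j + r + 1])
    return out

rng = np.random.default_rng(0)
uniform_scene = np.full((48, 48), 0.3)  # featureless backscatter
speckled = uniform_scene * rng.exponential(1.0, size=uniform_scene.shape)
despeckled = median_despeckle(speckled, k=5)
print(speckled.std(), despeckled.std())  # variance drops sharply after filtering
```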
&lt;br /&gt;
==== Retroreflection and multiple reflection ====&lt;br /&gt;
&lt;br /&gt;
One consequence of SAR being active sensing is that it sees very bright returns from concave right angles made out of metal, which act as [https://en.wikipedia.org/wiki/Corner_reflector corner reflectors]. (Notice how road signs and markers seem to glow disproportionately in headlights – it’s because those are [https://en.wikipedia.org/wiki/Retroreflector retroreflectors] in the optical range.) Highly developed cities, for example, are very retroreflective to radar. This shows up especially where the angle of the sensor’s view aligns to a street grid, when it’s called the cardinal effect. (See, for example, [https://www.mdpi.com/2072-4292/12/7/1187/htm this academic paper], where they propose using retroreflection specifically to classify urban landcover. In general, there are very few radio-frequency corner reflectors in nature, and retroreflection is a good sign that you’re looking at a building, vehicle, etc.)&lt;br /&gt;
&lt;br /&gt;
Where the reflection is separated enough from the first reflecting surface that you can see both independently, we use the term multiple reflection (or mirroring or ghosting). This most often happens where tall buildings or bridges are next to or over water. A radio wave may hit the water, then a bridge, then return to the sensor; another may hit a bridge, then the water, and return to the sensor, and so on, and you’ll see images of multiple bridges.&lt;br /&gt;
&lt;br /&gt;
==== Layover and shadowing ====&lt;br /&gt;
&lt;br /&gt;
Layover (a.k.a. relief displacement) is an effect that makes objects at higher elevations appear closer to the sensor. This happens because the radio waves from the top of a vertical object arrive back at the sensor (which is above and to the side of the object) before the radio waves from its base. This is most obvious with truly vertical objects like radio towers and skyscrapers, but surfaces that have any vertical component (hills, for example) will show some degree of layover. Ultimately, layover comes from the difference between slant range, which is what the sensor actually measures – distance from the sensor – and ground range, which is what we tend to intuitively want or expect when we look at a map-like image.&lt;br /&gt;
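&lt;br /&gt;
The slant-range geometry behind layover can be checked with toy numbers (the positions here are invented, and the geometry is flattened to 2D):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch: slant range (sensor-to-point distance) is shorter to the top
# of a tall object than to its base, so the top echoes back first and
# is imaged closer to the sensor.
import math

sensor = (0.0, 500_000.0)    # (ground distance m, altitude m), toy orbit
base = (300_000.0, 0.0)      # foot of a tower 300 km down-range
top = (300_000.0, 300.0)     # top of a 300 m tower

def slant_range(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

print(slant_range(sensor, top) < slant_range(sensor, base))  # True: the top arrives first
```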
&lt;br /&gt;
The painfully counterintuitive aspect, if you’re looking at a SAR image as if it were an ordinary optical image, is that layover goes in the opposite direction – buildings, for example, lean toward the sensor. For example, if you take a normal photo of a tall building from the south, it will cover the ground to its north. This feels normal because cameras, telescopes, etc., work on the same basic principle as the eye. But if you collect a SAR image of the same building from the south, it will cover the ground to its south. (Also, it won’t actually mask that ground, it will just add its signal in.)&lt;br /&gt;
&lt;br /&gt;
Shadowing is the lack of data returned from surfaces facing away from the sensor. The shadowed side of terrain is stretched out as part of layover.&lt;br /&gt;
&lt;br /&gt;
SAR imagery can be terrain corrected. Basically, this is a process that uses (1) the satellite’s position and the characteristics of its instrument and (2) a DEM or other model of the terrain it was looking at, and uses these to warp the SAR imagery into map coordinates and account for shadowing. Whether this is worthwhile will depend on the quality of the terrain correction algorithm and the data you can give it, and on what you need to analyze.&lt;br /&gt;
&lt;br /&gt;
In general, be cautious with terrain correction, because it can never fully correct for all effects (e.g., BRDF of different landcovers), and it can magnify small problems in input data. Sometimes it’s better to have a strange-looking image that you know how to interpret than a “normalized” one with subtle errors.&lt;br /&gt;
&lt;br /&gt;
==== Clouds and many other materials are generally transparent to SAR ====&lt;br /&gt;
&lt;br /&gt;
SAR frequencies are typically chosen to cut through weather. While this is a massive advantage of SAR over optical (the average place on Earth is cloudy roughly half the time), it’s also not absolute. Heavy rain, for example, can show up as ghostly features in some bands, so be on the lookout for it. If you see something you can’t interpret that might be weather-related, check the weather for the place at the time of image acquisition!&lt;br /&gt;
&lt;br /&gt;
More generally – beyond the specific case of water vapor in air – SAR interacts with materials differently than light does. For example, it reflects more off liquid water, so you can’t see into shallows with SAR the way you can with optical. On the other hand, it interacts less with certain very dry materials, so it can cut through loose sand, dead vegetation, and so on. (For example, SAR is used to map ancient river systems under the Sahara [https://www.mdpi.com/2073-4441/9/3/194/htm because it can image bedrock under loose, dry sand].) The details of SAR signal interaction depend on wavelength, angle, and other factors; if you’re doing more than casual interpretation of data from a given sensor, it’s a good idea to look it up and familiarize yourself.&lt;br /&gt;
&lt;br /&gt;
==== Polarimetry and interferometry ====&lt;br /&gt;
&lt;br /&gt;
Thus far we have only considered backscatter images: maps of the intensity of reflected radio energy. But a good deal of SAR’s value is beyond this kind of data. As well as recording how much energy is in reflected radio waves, SAR sensors characterize the radio waves themselves.&lt;br /&gt;
&lt;br /&gt;
Let’s use Sentinel-1 as an example for polarimetry. S1 sends radio waves in the vertical polarization, abbreviated V, and records them in both vertical and horizontal, or H, polarizations. In practice, this means that when you download an S1 frame in the usual way, you see two images, labeled VV (where the sensor transmitted V and measured V) and VH (where it transmitted V and measured H). The ratio of the two bands therefore tells you (in a general, statistical way, within the constraints of speckling) how much the surface at a given pixel tends to return a radio signal at that frequency and angle in the same polarization.&lt;br /&gt;
&lt;br /&gt;
Why do we care? Because direct reflection and corner reflectors tend to return waves at the same polarization (for Sentinel-1, always VV), while volumes that scatter waves return proportionally more cross-polarized (VH) waves. The second category is mainly vegetation and soil, while the first is corner reflectors, metal, and so on – proportionally more artificial surfaces. You can literally get a PhD in the nuances of SAR polarimetry, but at the most basic level, it tells you something about surface properties that no other sensor would.&lt;br /&gt;
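&lt;br /&gt;
As a toy illustration of the co/cross-polarization comparison – a sketch with made-up backscatter values, not a Sentinel-1 processing recipe (real S1 data needs radiometric calibration first):&lt;br /&gt;

```python
import numpy as np

# Hypothetical linear backscatter (sigma0) values, NOT raw DN.
# Top row: corner-reflector-like surfaces; bottom row: vegetation-like.
vv = np.array([[0.30, 0.28],
               [0.05, 0.04]])
vh = np.array([[0.01, 0.01],
               [0.02, 0.02]])

# Co/cross-pol ratio in dB: high values suggest direct or corner
# reflection, low values suggest volume scattering.
ratio_db = 10 * np.log10(vv / vh)
print(ratio_db.round(1))
```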
&lt;br /&gt;
Interferometry with SAR, or inSAR, compares wave phase between observations. The phase of a wave is where it is in its cycle when received. Using sound as an example, measuring the phase of a sound at a given moment means capturing not just its volume and pitch but also where the wave is in its cycle – say, 23% of the way into its high-pressure half, or exactly at the lowest-pressure point.&lt;br /&gt;
&lt;br /&gt;
Suppose we make a SAR image of an area and record not only the amplitude but also the phase of the signal at every pixel. Now, after some time, the satellite’s orbit repeats, and at exactly the same moment in this new orbit (and therefore at exactly the same point in space relative to Earth), we take the same image again. There’s been some change over time that might represent, say, the soil drying out, a road being built, or a tree falling over. But the change in phase over relatively large, coherent regions can be interpreted as the surface getting nearer or farther away by (potentially) very small fractions of a wavelength – on the order of cm. This is an idealized version of inSAR.&lt;br /&gt;
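&lt;br /&gt;
The idealized phase-to-displacement arithmetic is tiny. In this sketch, the 4π factor accounts for the two-way travel of the wave (sign conventions vary by processor, and real processing has to unwrap phase first):&lt;br /&gt;

```python
import math

# A phase change of dphi radians between two acquisitions maps to a
# line-of-sight displacement of d = dphi * wavelength / (4 * pi),
# because the wave travels to the surface and back.
wavelength_m = 0.0555      # Sentinel-1 C-band, ~5.55 cm
dphi = math.pi / 2         # a quarter-cycle phase change

d_m = dphi * wavelength_m / (4 * math.pi)
print(f"{d_m * 1000:.2f} mm")  # millimetres from a fraction of a cycle
```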
&lt;br /&gt;
Geologists use this to [https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021GL093043 map earthquakes], but you can also use it for drought (because dry land sags), [https://site.tre-altamira.com/long-term-satellite-study-over-the-london-basin/ tunneling], [https://www.researchgate.net/figure/InSAR-measured-subsidence-rates-on-the-Mosul-dam-Iraq-Negative-values-indicate-motion_fig1_311451580 dam] and [https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-1/Satellites_confirm_sinking_of_San_Francisco_tower building] subsidence, [https://www.nature.com/articles/s41598-020-74957-2 underground explosion monitoring], and so on – in theory, anything that changes the distance between the satellite and the surface. You can even use decoherence (the breakdown of continuity between observations, which makes inSAR hard) for [https://www.academia.edu/44939771/Damage_detection_using_SAR_coherence_statistical_analysis_application_to_Beirut_Lebanon damage detection].&lt;br /&gt;
&lt;br /&gt;
When inSAR works, it’s like magic. You can pick up extremely subtle effects over large areas. It does have limits – for example, you can only measure displacement toward or away from the satellite(s), which for SAR is always at least somewhat to the side, and not necessarily the direction you actually care about (say, up/down). And as you would expect, it tends to require a lot of very good data (because, for example, satellite orbits are never absolutely perfect repeats), expertise, and minutes to days of fine-tuning.&lt;br /&gt;
&lt;br /&gt;
=== LIDAR ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# 3D (survey-style) lidar&lt;br /&gt;
# 2D (transect-style) lidar&lt;br /&gt;
&lt;br /&gt;
== Image delivery ==&lt;br /&gt;
&lt;br /&gt;
=== Processing levels ===&lt;br /&gt;
&lt;br /&gt;
In theory, data processing levels are standard across the industry. In practice, different providers tend to make up their own definitions as necessary, and you should refer to source-specific documentation. But typically, the commonly seen processing levels are:&lt;br /&gt;
&lt;br /&gt;
'''Level 0''': Unprocessed data, more or less as downlinked to the ground station. Generally not sold or publicly released.&lt;br /&gt;
&lt;br /&gt;
'''Level 1''': Basic data in sensor units, for example TOA radiance. Often has a letter suffix with source-specific meaning, e.g., T to indicate a terrain-corrected version.&lt;br /&gt;
&lt;br /&gt;
'''Level 2''': Derived data in geophysical units, for example surface reflectance. Has been through high-level processing (e.g., atmospheric correction) that contains estimation or modeling.&lt;br /&gt;
&lt;br /&gt;
As a rule of thumb, use level 1 if the imagery itself is the focus and you want to analyze the data in a custom way; use level 2 if you just want something that works out of the box to achieve some further goal. But again, the practical meaning of the levels depends on the dataset, so check to make sure you’re getting what you want.&lt;br /&gt;
&lt;br /&gt;
=== Formats and projections ===&lt;br /&gt;
&lt;br /&gt;
Image data generally comes in formats optimized for large payloads and good metadata. These include HDF, NetCDF, NITF, JPEG2000, and GeoTIFF. GDAL, which is included in QGIS, can read virtually any reasonable format. If you have a choice, GeoTIFF is usually the best.&lt;br /&gt;
&lt;br /&gt;
A geographic projection is basically an invertible function (a reversible, one-to-one relationship) from a sphere to a plane. (If you know enough about geodesy to be saying “the spheroid, actually” right now, go read something more appropriate to your level of expertise 😉.) In other words, for a longitude and a latitude on Earth, a given projection gives you a corresponding x and y that you use to store and display the data.&lt;br /&gt;
&lt;br /&gt;
Typical georeferencing metadata says: (1) here is the projection of the data in this file, and (2) here is where the data in this file lies on the abstract 2D plane defined by that projection.&lt;br /&gt;
&lt;br /&gt;
You may also encounter data that is not projected in any strictly defined way. This might be as simple as a photo taken with a phone out a plane window. In theory you could define a projection for it if you knew parameters like the 3D GPS location of the phone, the angle it was pointed at, its camera’s field of view, and the small distortions introduced by its lens. But in practice it’s usually easier to find known points in the image and “tie down” or georeference the image based on those points. Given at least 3 but ideally more known points, you(r software) can warp the image into some standard projection. It’s deriving an arbitrary projection from pixel space to geographical coordinates by running a regression on the pixel-to-location pairs you provide. These known points are called ground control points, or GCPs. Some data, like Sentinel-1 SAR, is provided unprojected but with GCPs. This leaves more work for the user, but also more flexibility if you want to adjust the GCPs.&lt;br /&gt;
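&lt;br /&gt;
The regression idea behind georeferencing from GCPs can be sketched as a simple affine least-squares fit. (Real tools such as GDAL also offer higher-order polynomial and thin-plate-spline warps; the GCPs below are made up.)&lt;br /&gt;

```python
import numpy as np

# Four hypothetical GCPs: (col, row) in pixel space -> (x, y) in map space.
pixels = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
coords = np.array([[500000, 4000000],
                   [501000, 4000000],
                   [500000, 3999000],
                   [501000, 3999000]], dtype=float)

# Fit x = a*col + b*row + c and y = d*col + e*row + f by least squares.
A = np.column_stack([pixels, np.ones(len(pixels))])
params, *_ = np.linalg.lstsq(A, coords, rcond=None)

def pixel_to_map(col, row):
    return tuple(np.array([col, row, 1.0]) @ params)

print(pixel_to_map(50, 50))  # centre of the patch
```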
&lt;br /&gt;
There are several standard ways to represent projections, notably WKT, proj, and EPSG codes. We’ll give EPSG codes here.&lt;br /&gt;
&lt;br /&gt;
Probably the most common projection you will see for raw data is Universal Transverse Mercator, or UTM. It’s actually a family of projections with the same formula but different parameters, each adapted to a different meridional slice of Earth’s surface. These UTM zones are named with numbers and north/south hemispheres: Paris is in UTM zone 31N, Geneva is in 32N, and Sydney is in 56S. (If you’ve used the MGRS grid system, this should sound familiar, but it’s not identical.) Within a zone, UTM is very close to equal-area and conformal, which are the most important properties for a projection if you want to do analytical work. Equal-area means 1 km² is the same number of pixels at any point in the projection, and conformal means that 1 km is the same number of pixels in every direction from any given point within the projection. (On a non-conformal map, circles appear as ovals, squares are rectangles, etc. This is a massive pain in the ass.) UTM is EPSG:32XYY, where X is 6 for N and 7 for S, and YY is the zone number, so for example 13S is EPSG:32713.&lt;br /&gt;
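&lt;br /&gt;
The zone numbering and the EPSG rule can be written as a small helper. (A sketch: it uses the standard 6°-wide zones and ignores the Norway/Svalbard zone exceptions.)&lt;br /&gt;

```python
import math

def utm_epsg(lon, lat):
    """EPSG code of the standard WGS84 UTM zone containing (lon, lat).

    Zones are 6 degrees wide, numbered 1-60 starting at 180 W.
    EPSG is 326YY for the northern hemisphere, 327YY for the southern.
    """
    zone = int(math.floor((lon + 180) / 6)) + 1
    return (32600 if lat >= 0 else 32700) + zone

print(utm_epsg(2.35, 48.86))     # Paris  -> zone 31N
print(utm_epsg(151.21, -33.87))  # Sydney -> zone 56S
```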
&lt;br /&gt;
For display on standard web maps, people often use web Mercator, a.k.a. spherical Mercator, which is not equal-area at the global scale, but is conformal. This is why web maps make Greenland far too big, but it remains approximately the right shape. For local analysis, web Mercator is fine (locally about as good as UTM, in fact), and can be a decent choice if you understand the issues with scale across large areas. EPSG:3857.&lt;br /&gt;
&lt;br /&gt;
The other projection you’ll see the most is equirectangular or plate carrée, which uses longitude and latitude directly as x and y coordinates on a plane. It is neither equal-area nor conformal, and basically only exists because the math is easy. It’s often used by people who should know better. Its non-conformality means that any time you’re working near the poles, everything is squeezed, and you’re either overzooming one dimension, losing data in the other, or both. If you just want to scatterplot some points as quickly as possible, equirectangular is fine, but avoid it when doing anything with imagery. EPSG:4326. (Note that this is the EPSG of WGS84, the geodetic standard that defines things like the prime meridian. Many, many other projections refer to WGS84 in their definitions. But using WGS84 as a projection itself, instead of as an ingredient in a projection, is the equirectangular projection.)&lt;br /&gt;
&lt;br /&gt;
The details of projections are notoriously tricky; it’s hard to work with them in a strictly correct and optimal way at all times. It’s the kind of topic that attracts pedantry and flamewars, unfortunately. Here’s some advice, none of it ironclad:&lt;br /&gt;
&lt;br /&gt;
# Most imagery data, if it’s projected at all, is already in an appropriate projection as it arrives from the data provider. If you can, leave it as-is. Every reprojection involves resampling the data, which generally loses information.&lt;br /&gt;
# You should rarely have to explicitly think about projections. The whole point of a projection is to let you think in terms of pixels and/or meters. Make sure you’re taking full advantage of your tools’ ability to handle these things automatically; if you find yourself fighting your projection, something is wrong.&lt;br /&gt;
# If you’re working on a multi-source project, choose a suitable projection at the start and project all data into it ''once'', when you import it.&lt;br /&gt;
# Most pain around projections comes from accidentally mixing projections. Don’t do that.&lt;br /&gt;
# The local UTM is usually a reasonable choice.&lt;br /&gt;
&lt;br /&gt;
=== Bundles ===&lt;br /&gt;
&lt;br /&gt;
Imagery is most often supplied in bundles, which are basically directories with image data files, usually separated by band or polarization (at least at level 1), and metadata files (XML, json, etc.). Some analysis tools will have plugins that will open specific types of bundles as single objects, automatically applying calibration data found in the metadata and so forth. In other situations you might open the image file and have to parse the metadata with your own code or by hand. If you’re getting to know a new imagery source, going through and understanding the purpose of everything delivered in a bundle is a great way to start.&lt;br /&gt;
&lt;br /&gt;
=== DN and PN ===&lt;br /&gt;
&lt;br /&gt;
Image formats generally store integers, since they losslessly compress better and are often easier to work with than floating point numbers. However, this presents a problem if, for example, the units being represented are reflectance, which ranges from 0 to 1. If we round every reflectance value to either 0 or 1, we’re delivering 1-bit data that’s probably close to totally useless. To address this, we might scale up to, say, 0 through 100 and say that instead of recording reflectance fraction, we’re recording reflectance percentage – fraction × 100. That still leaves us with less than 7 bits of radiometric resolution, though. Really, we’d like to be able to scale our values into an arbitrary range, maybe 0 through 65,535 to make full use of a 16-bit image, and send it with some metadata that tells how to get it back into some absolute or physically meaningful unit. You could even change the scaling factor per scene to optimize for bright v. dark, for example. And this is what providers generally do. The values actually stored in the image format are called digital numbers, or DN, and the values after scaling (typically with a multiplicative and an additive coefficient) are physical numbers, or PN.&lt;br /&gt;
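&lt;br /&gt;
The DN-to-PN step is just an affine rescale. This sketch uses a hypothetical, Sentinel-2-like convention where DN 10,000 corresponds to reflectance 1.0; real gain/offset values come from the bundle’s metadata:&lt;br /&gt;

```python
import numpy as np

# Hypothetical scaling metadata: PN = DN * gain + offset.
gain, offset = 1e-4, 0.0

dn = np.array([[1234, 5678],
               [0, 10000]], dtype=np.uint16)   # stored digital numbers

# Convert to physical numbers (reflectance fraction here).
pn = dn.astype(np.float64) * gain + offset
print(pn)
```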
&lt;br /&gt;
Not all providers do this. For example, Sentinel-2 level 1C data has a globally constant scaling factor, which means different bands have a defined relationship even if you read raw, unscaled pixels out of them, which is great. Per-scene or per-band scaling remains the most common approach, however. Basically, don’t assume that pixels actually mean anything with an absolute definition, especially compared to pixels from another band or scene, unless you know that they’re PN.&lt;br /&gt;
&lt;br /&gt;
For most OSINT-relevant analysis, working in DN is a [https://en.wikipedia.org/wiki/Venial_sin venial sin] at worst and often justifiable. But it is useful to know what it means and to recognize situations where you should convert to PN. Any tool designed to work with remote sensing data will at least have some affordance for DN to PN scaling, and, again, may be able to parse the parameters out of a bundle (or in-image-file metadata) and apply them transparently so you never have to think about it.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2212</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2212"/>
		<updated>2022-07-20T23:20:46Z</updated>

		<summary type="html">&lt;p&gt;Vruba: /* Formats and projections */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this is the heritage of cold war IMINT workers wanting shadows to estimate structure heights. (If you image around noon, you get places with vertical shadows in the tropics. This gives you depth perception problems, like you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to do one full orbit, another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
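&lt;br /&gt;
The desktop-globe comparison from the altitude bullet above is easy to re-derive:&lt;br /&gt;

```python
# Scale Landsat 9's altitude down to a 30 cm desktop globe.
earth_radius_km = 6371.0
globe_diameter_mm = 300.0
scale = globe_diameter_mm / (2 * earth_radius_km)  # mm of globe per km of Earth

altitude_mm = 705.0 * scale
print(f"{altitude_mm:.1f} mm above the globe's surface")
```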
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: about 400 km or 250 mi wide, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimension(s) we think of most often is spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
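&lt;br /&gt;
The napkin example comes down to mixed-pixel arithmetic. In this sketch, the reflectance values are assumed, and the napkin is taken to fall inside a single pixel:&lt;br /&gt;

```python
# A bright sub-pixel object shifts the whole pixel's value.
napkin_refl, asphalt_refl = 0.9, 0.05   # assumed reflectances
pixel_size, napkin_size = 0.25, 0.10    # metres

frac = (napkin_size / pixel_size) ** 2  # fraction of pixel area covered
pixel_value = frac * napkin_refl + (1 - frac) * asphalt_refl

# The napkin covers only 16% of the pixel, but the pixel is now far
# brighter than its all-asphalt neighbours.
print(frac, pixel_value)
```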
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor” we understand this by default to mean spatial resolution. This is also called ground sample distance (GSD) or ground resolved distance (GRD), and is the dimensions of the pixels of the data. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length dimension: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. If we think about standard optical instruments, which are basically telescopes with CCDs, they do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – a fraction of the arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth at a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) and looking on nadir (straight down). That’s a best case. When looking to the side, at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
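&lt;br /&gt;
A rough flat-Earth sketch of how one detector’s along-look ground footprint grows off-nadir (an approximation that ignores Earth curvature and terrain): slant range grows as 1/cos(θ), and the projection onto the ground stretches by another 1/cos(θ).&lt;br /&gt;

```python
import math

def gsd_factor(theta_deg):
    """Multiplier on the nadir ground footprint at look angle theta
    (flat-Earth approximation, look direction only)."""
    c = math.cos(math.radians(theta_deg))
    return 1.0 / (c * c)

for theta in (0, 15, 30, 45):
    print(theta, round(gsd_factor(theta), 2))
```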
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked consideration on spatial resolution is that pixel area is the square of pixel side length, and it’s what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is more than 3× as clear as 30 m imagery, all else being equal.&lt;br /&gt;
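&lt;br /&gt;
The square-law effect is quick to tabulate:&lt;br /&gt;

```python
# Pixels covering one square kilometre at a given GSD: clarity scales
# with pixel count, i.e. with the inverse *square* of pixel edge length.
def pixels_per_km2(gsd_m):
    return (1000.0 / gsd_m) ** 2

for gsd in (30, 10, 1, 0.5, 0.25):
    print(f"{gsd:>5} m GSD -> {pixels_per_km2(gsd):,.0f} pixels per km^2")
```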
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered less by the atmosphere than visible light, especially blue light. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
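&lt;br /&gt;
The NDVI mentioned in point 2 is just a normalized band difference. A sketch with assumed reflectance values:&lt;br /&gt;

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red). Healthy vegetation reflects NIR
# strongly and absorbs red, so its NDVI approaches 1; water tends to
# come out negative. Values here are assumed, per-pixel reflectances.
nir = np.array([0.45, 0.30, 0.04])  # dense vegetation, sparse grass, water-ish
red = np.array([0.05, 0.10, 0.06])

ndvi = (nir - red) / (nir + red)
print(ndvi.round(2))
```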
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These will be called the panchromatic or pan and (collectively) multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, for example, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits (256 to 1024 levels); newer and better ones are typically 12 to 14 (4096 to 16384 levels).&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks in the Sahara, despite the tracks being made of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford a lot of highest spatial resolution imagery. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
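The 1/sqrt(n) claim is easy to check numerically. A minimal sketch on simulated data (not any real sensor), under the idealized assumption of independent, unit-standard-deviation noise per band:&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(42)
n = 16  # number of bands to average

# The same underlying signal observed in n bands, each with
# independent, unit-standard-deviation noise added.
signal = rng.uniform(0.0, 1.0, size=10_000)
bands = signal + rng.normal(0.0, 1.0, size=(n, signal.size))

# Averaging the bands shrinks the residual noise toward 1/sqrt(n).
residual_std = (bands.mean(axis=0) - signal).std()
print(residual_std)  # close to 1/sqrt(16) = 0.25
```

With correlated noise, as the caveat above says, the residual would land somewhere above 0.25.&lt;br /&gt;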
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding error: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of talking about effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is using air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh scattering and aerosol effects, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. Note that this range is 1.567 to 1.651 μm. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
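As a concrete sketch of the band-ratio idea, here is NDVI in numpy. The reflectance values are illustrative, not from any real scene:&lt;br /&gt;

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Healthy leaves reflect NIR strongly and absorb red, so vegetation
# scores high; bare soil is middling; water is near zero or negative.
print(ndvi([0.45, 0.30, 0.02], [0.05, 0.20, 0.03]).round(2))
```

The same pattern (a normalized difference of two bands) gives NDWI, NBR, and friends – only the band choices change.&lt;br /&gt;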
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue bands of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, and therefore you should favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will be (typically) some multiple of 2 larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
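At the very basic end of that range, here is a sketch of a Brovey-style merge in numpy. The toy arrays, nearest-neighbor upsampling, and assumed exact 2× resolution ratio are all simplifications for illustration:&lt;br /&gt;

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-9):
    """Upsample the multispectral bands, then rescale each pixel so its
    mean intensity matches the pan band (a basic Brovey-style merge)."""
    factor = pan.shape[0] // ms.shape[0]
    ms_up = ms.repeat(factor, axis=0).repeat(factor, axis=1)
    intensity = ms_up.mean(axis=2)
    return ms_up * (pan / (intensity + eps))[..., None]

# A flat gray 2x2 color image and a sharp 4x4 pan band:
ms = np.full((2, 2, 3), 0.3)
pan = np.linspace(0.1, 0.9, 16).reshape(4, 4)
sharp = brovey_pansharpen(ms, pan)  # spatial detail now follows pan
```

Notice that the color at the pan scale is pure estimation – the rescaling spreads coarse color over sharp luminance.&lt;br /&gt;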
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;br /&gt;
&lt;br /&gt;
==== Georeferencing and orthorectification ====&lt;br /&gt;
&lt;br /&gt;
''Much of this applies outside optical as well – move?''&lt;br /&gt;
&lt;br /&gt;
A raw satellite image of land is an angled view of a rough surface. (Even nominally nadir-pointing satellites acquire imagery that is off-nadir toward its edges.) If you imagine riding on a satellite and looking off to, say, the west, you will see the eastern sides of hills and buildings at flatter angles than you see the western sides – if you can see them at all. To turn a raw image into something that is projected orthographically, like a map, you have to use a terrain model – a 3D map of the planet’s surface. Then you can use information about where the satellite was and the angle its sensor was pointing, and for each pixel in the output image, you can project it out to see at what latitude and longitude it must have intersected the ground. Then you move all the pixels to their coordinates in some convenient projection, and you’ve essentially taken the image out of perspective and made it orthographic.&lt;br /&gt;
&lt;br /&gt;
Except:&lt;br /&gt;
&lt;br /&gt;
* Earth’s surface is rough at every scale, and even “porous” or multiply defined in the sense that there are features like leafless trees that make it hard to define where the optical surface actually ''is'' at any given scale.&lt;br /&gt;
* There is no perfectly [https://en.wikipedia.org/wiki/Accuracy_and_precision accurate, precise], global, completely up-to-date terrain model of the Earth, let alone at a reasonable price. SRTM is pretty good but it’s only about 30 m, stops short of the Arctic, and is 20+ years out of date: there are entire lakes, highway cuts, and reclaimed islands that don’t exist in it.&lt;br /&gt;
* Satellites typically only know where they’re pointing to within the equivalent of about 10 pixels (which, to be fair, is usually an extremely small fraction of a degree), so the pointing data can only narrow things down, not actually tell you where you are.&lt;br /&gt;
* Continental drift means that a continent can move by easily 1 px over the lifetime of a high-end commercial satellite; a major earthquake can discontinuously distort a small region by several m.&lt;br /&gt;
* To properly pin down an image (i.e., to check the reported pointing angle), you need to know the exact 3D location of 3 visible points within it, and realistically more like 10.&lt;br /&gt;
* All these errors can combine.&lt;br /&gt;
* No matter what, you can’t recover occluded features, i.e. things you can’t see in the original data. If you want a high-res satellite image of something like a canyon, you realistically need half a dozen images at very specific angles, which is extremely hard.&lt;br /&gt;
&lt;br /&gt;
We could go on! Georeferencing and orthorectification is a difficult problem. It’s easier for lower-resolution satellites, because a given angular error comes out to fewer pixels. Also, survey-mode satellites like Landsat and Sentinel-2, which are nadir-pointing anyway, put a lot of effort into doing this well. Two Landsat scenes will almost always coregister to well within a pixel. Sentinel-2 is a little less reliable, especially toward the poles. Commercial imagery is often displaced by far more than you would think. One way to see this is to step back in Google Earth Pro’s history tool, especially somewhere relatively remote and rugged.&lt;br /&gt;
&lt;br /&gt;
Here’s a farm in Nepal: 28.553, 84.2415. Just step back in time and watch it jump around underneath the pin. If you really want to be scared, watch the cliff to its north. This is why imagery analysts who understand imagery pipelines rarely use a whole lot of significant digits in their coordinates! You don’t really know where anything on Earth is, in absolute terms, to within more than a few meters at best if all you have to go on is a satellite image.&lt;br /&gt;
&lt;br /&gt;
==== Atmospheric correction ====&lt;br /&gt;
&lt;br /&gt;
Over long distances, even in clear weather, the atmosphere scatters and absorbs light. This is why distant hills are low-contrast and blueish (blue light is scattered more). What a satellite actually measures is called top-of-atmosphere radiance, or TOA. This is a measurement of nothing more than the amount of energy received per second, per pixel, per band. It can be measured pretty objectively. However, it’s often not what you want. For one thing, it’s too blue. For another, the amount of blueness and related effects will vary semi-randomly with atmospheric conditions (humidity, maybe dust storms or wildfire smoke, etc.) and predictably with season (sun distance and angle).&lt;br /&gt;
&lt;br /&gt;
Therefore, a reasonable desire is to basically normalize the sun and remove the effects of the atmosphere. What we’re trying to get to here is called surface reflectance (SR). The main issue is that we don’t know the true state of the atmosphere at the moment the image was acquired. The best we can do is to model it and subtract it out. This is one of ''the'' problems in remote sensing, and you could earn a PhD by improving [https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes#Table_of_models one of the major models] by a few percent.&lt;br /&gt;
&lt;br /&gt;
The good news is there’s a brutally simple method that works pretty well most of the time. Dark object subtraction means assuming that the darkest pixel in the image should be pure black. Therefore, if you subtract out however much blue (and green, and so on) signal is present in the darkest pixel, you will have canceled out all the haze. It’s annoying how well this works considering how basic it is. It’s roughly equivalent to the automatic contrast adjustment tool in an image editor like Photoshop, or, to be a little more exact, like using the eyedropper in the Levels tool to set the black point to the darkest pixel.&lt;br /&gt;
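A sketch of per-band dark object subtraction on synthetic data. (Note that production pipelines often use a low percentile or a known dark target rather than the literal minimum, to dodge bad pixels; the exact-cancellation result below only holds because the haze here is perfectly uniform and additive by construction.)&lt;br /&gt;

```python
import numpy as np

def dark_object_subtraction(band):
    """Assume the darkest pixel should be truly black, so whatever
    signal it carries is haze; subtract that from the whole band."""
    return band - band.min()

# Synthetic band: true reflectances plus a uniform haze offset.
rng = np.random.default_rng(0)
true = rng.uniform(0.0, 0.5, size=(64, 64))
true.flat[0] = 0.0            # guarantee one genuinely dark pixel
hazy = true + 0.08            # uniform additive haze
corrected = dark_object_subtraction(hazy)
print(np.allclose(corrected, true))  # the haze cancels exactly here
```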
&lt;br /&gt;
Correction to reflectance may or may not attempt to correct for terrain effects (i.e., relighting the scene). Different pipelines have different conventions for how far to correct or what to call different kinds of correction.&lt;br /&gt;
&lt;br /&gt;
Atmospheric correction is usually not key for OSINT purposes, but any time you find yourself taking exact measurements of pixel values, you should at least know whether you’re working in TOA or in SR, and if SR, you should have a sense of what the pipeline was.&lt;br /&gt;
&lt;br /&gt;
==== Common optical sensor types ====&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# Pushbroom&lt;br /&gt;
# Whiskbroom&lt;br /&gt;
# Full-frame&lt;br /&gt;
&lt;br /&gt;
=== Thermal ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
=== Synthetic aperture radar ===&lt;br /&gt;
&lt;br /&gt;
Synthetic aperture radar, or SAR, creates images with radio waves in wavelengths around 1 cm to 1 m.&lt;br /&gt;
&lt;br /&gt;
As a very first approximation, a SAR image is comparable to an optical image that shows objects that reflect radio waves instead of those that reflect visible light.&lt;br /&gt;
&lt;br /&gt;
==== SAR is not just a regular camera but for radar instead of light ====&lt;br /&gt;
&lt;br /&gt;
Beyond the fact that both modalities create images, SAR works on completely different principles from standard optical imaging, and understanding it requires understanding those principles.&lt;br /&gt;
&lt;br /&gt;
This page will only lightly outline how SAR works; for the math, please refer to SERVIR’s [https://servirglobal.net/Global/Articles/Article/2674/sar-handbook-comprehensive-methodologies-for-forest-monitoring-and-biomass-estimation SAR Handbook] (forest-oriented but with solid fundamentals), the NOAA/NESDIS [https://www.sarusersmanual.com/ Synthetic Aperture Radar Marine User’s Manual], or another good text. Here we will only point out some key ideas. If you want to get full value out of SAR, you should expect to invest at least a few hours in learning how it actually works. There’s a reason SAR experts tend to be a bit snobbish about it: it’s complex, subtle, and highly rewarding.&lt;br /&gt;
&lt;br /&gt;
==== SAR is active ====&lt;br /&gt;
&lt;br /&gt;
Optical sensors are almost all passive: they use energy that objects are already reflecting (usually from the sun) or producing (for example, in the thermal infrared). In contrast, SAR is active: it sends out a pulse of radar energy, roughly analogous to the flash on a camera.&lt;br /&gt;
&lt;br /&gt;
==== SAR resolves space with time, not with focus ====&lt;br /&gt;
&lt;br /&gt;
SAR’s resolution is based on the timing of returning signals. It does not pass the energy it senses through a focusing lens or mirror the way an optical sensor does. This leads to properties that are highly unintuitive if you think of it as merely “optical but in a different frequency” – for example, it does not lose resolution with distance; there is no exact equivalent of perspective. SAR is more like side-scan sonar than like an everyday camera.&lt;br /&gt;
&lt;br /&gt;
==== Speckle ====&lt;br /&gt;
&lt;br /&gt;
Like a laser beam, a SAR signal interferes with itself. At a given moment and a given point, its waves may be canceling out or adding up. This means that a SAR image is intrinsically grainy or stippled-looking. This is not the same as sensor noise, because the effect is physically real and not a problem of errors in measurement. It can be mitigated by downsampling, averaging images from different “pings”, or applying despeckling filters. (A simple local median works reasonably well, but there’s a range of sophistication all the way up to sensor-specific filters based on physical models, extra inputs, fancy machine learning, etc.)&lt;br /&gt;
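A sketch of the simple local median mentioned above, in plain numpy on simulated speckle. The multiplicative exponential fading model is a common first approximation for single-look intensity, not a claim about any particular sensor:&lt;br /&gt;

```python
import numpy as np

def median3x3(img):
    """Minimal 3x3 median filter; edges handled by reflection padding."""
    padded = np.pad(img, 1, mode="reflect")
    h, w = img.shape
    windows = [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(windows), axis=0)

# Speckle is roughly multiplicative: a uniform surface times random fading.
rng = np.random.default_rng(1)
surface = np.full((32, 32), 10.0)
speckled = surface * rng.exponential(1.0, size=surface.shape)
smoothed = median3x3(speckled)
print(speckled.std() > smoothed.std())  # the grain is much reduced
```

The trade, as with any despeckling, is that real sub-window detail gets smoothed away along with the grain.&lt;br /&gt;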
&lt;br /&gt;
==== Retroreflection and multiple reflection ====&lt;br /&gt;
&lt;br /&gt;
One consequence of SAR being active sensing is that it sees very bright returns from concave right angles made out of metal, which act as [https://en.wikipedia.org/wiki/Corner_reflector corner reflectors]. (Notice how road signs and markers seem to glow disproportionately in headlights – it’s because those are [https://en.wikipedia.org/wiki/Retroreflector retroreflectors] in the optical range.) Highly developed cities, for example, are very retroreflective to radar. This shows up especially where the angle of the sensor’s view aligns to a street grid, when it’s called the cardinal effect. (See, for example, [https://www.mdpi.com/2072-4292/12/7/1187/htm this academic paper], where they propose using retroreflection specifically to classify urban landcover. In general, there are very few radio-frequency corner reflectors in nature, and retroreflection is a good sign that you’re looking at a building, vehicle, etc.)&lt;br /&gt;
&lt;br /&gt;
Where the reflection is separated enough from the first reflecting surface that you can see both independently, we use the term multiple reflection (or mirroring or ghosting). This most often happens where tall buildings or bridges are next to or over water. A radio wave may hit the water, then a bridge, then return to the sensor; another may hit a bridge, then the water, and return to the sensor, and so on, and you’ll see images of multiple bridges.&lt;br /&gt;
&lt;br /&gt;
==== Layover and shadowing ====&lt;br /&gt;
&lt;br /&gt;
Layover (a.k.a. relief displacement) is an effect that makes objects at higher elevations appear closer to the sensor. This happens because the radio waves from the top of a vertical object arrive back at the sensor (which is above and to the side of the object) before the radio waves from its base. This is most obvious with truly vertical objects like radio towers and skyscrapers, but surfaces that have any vertical component (hills, for example) will show some degree of layover. Ultimately, layover comes from the difference between slant range, which is what the sensor actually measures – distance from the sensor – and ground range, which is what we tend to intuitively want or expect when we look at a map-like image.&lt;br /&gt;
&lt;br /&gt;
The painfully counterintuitive aspect, if you’re looking at a SAR image as if it were an ordinary optical image, is that layover goes in the opposite direction – buildings, for example, lean toward the sensor. For example, if you take a normal photo of a tall building from the south, it will cover the ground to its north. This feels normal because cameras, telescopes, etc., work on the same basic principle as the eye. But if you collect a SAR image of the same building from the south, it will cover the ground to its south. (Also, it won’t actually mask that ground, it will just add its signal in.)&lt;br /&gt;
&lt;br /&gt;
Shadowing is the lack of data returned from surfaces facing away from the sensor. The shadowed side of terrain is stretched out as part of layover.&lt;br /&gt;
&lt;br /&gt;
SAR imagery can be terrain corrected. Basically, this is a process that uses (1) the satellite’s position and the characteristics of its instrument and (2) a DEM or other model of the terrain it was looking at, and uses these to warp the SAR imagery into map coordinates and account for shadowing. Whether this is worthwhile will depend on the quality of the terrain correction algorithm and the data you can give it, and on what you need to analyze.&lt;br /&gt;
&lt;br /&gt;
In general, be cautious with terrain correction, because it can never fully correct for all effects (e.g., BRDF of different landcovers), and it can magnify small problems in input data. Sometimes it’s better to have a strange-looking image that you know how to interpret than a “normalized” one with subtle errors.&lt;br /&gt;
&lt;br /&gt;
==== Clouds and many other materials are generally transparent to SAR ====&lt;br /&gt;
&lt;br /&gt;
SAR frequencies are typically chosen to cut through weather. While this is a massive advantage of SAR over optical (the average place on Earth is cloudy roughly half the time), it’s also not absolute. Heavy rain, for example, can show up as ghostly features in some bands, so be on the lookout for it. If you see something you can’t interpret that might be weather-related, check the weather for the place at the time of image acquisition!&lt;br /&gt;
&lt;br /&gt;
More generally – beyond the specific case of water vapor in air – SAR interacts with materials differently than light does. For example, it reflects more off liquid water, so you can’t see into shallows with SAR the way you can with optical. On the other hand, it interacts less with certain very dry materials, so it can cut through loose sand, dead vegetation, and so on. (For example, SAR is used to map ancient river systems under the Sahara [https://www.mdpi.com/2073-4441/9/3/194/htm because it can image bedrock under loose, dry sand].) The details of SAR signal interaction depend on wavelength, angle, and other factors; if you’re doing more than casual interpretation of data from a given sensor, it’s a good idea to look it up and familiarize yourself.&lt;br /&gt;
&lt;br /&gt;
==== Polarimetry and interferometry ====&lt;br /&gt;
&lt;br /&gt;
Thus far we have only considered backscatter images: maps of the intensity of reflected radio energy. But a good deal of SAR’s value is beyond this kind of data. As well as recording how much energy is in reflected radio waves, SAR sensors characterize the radio waves themselves.&lt;br /&gt;
&lt;br /&gt;
Let’s use Sentinel-1 as an example for polarimetry. S1 sends radio waves in the vertical polarization, abbreviated V, and records them in both vertical and horizontal, or H, polarizations. In practice, this means that when you download an S1 frame in the usual way, you see two images, labeled VV (where the sensor transmitted V and measured V) and VH (where it transmitted V and measured H). The ratio of the two bands therefore tells you (in a general, statistical way, within the constraints of speckling) how much the surface at a given pixel tends to return a radio signal at that frequency and angle in the same polarization.&lt;br /&gt;
&lt;br /&gt;
Why do we care? Because direct reflection and corner reflectors tend to return waves at the same polarization (for Sentinel-1, always VV), while volumes that scatter waves return proportionally more cross-polarized (VH) waves. The second category is mainly vegetation and soil, while the first is corner reflectors, metal, and so on – proportionally more artificial surfaces. You can literally get a PhD in the nuances of SAR polarimetry, but at the most basic level, it tells you something about surface properties that no other sensor would.&lt;br /&gt;
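At that basic level, the VV/VH comparison is one line of math. A sketch with made-up linear-power backscatter values (not real Sentinel-1 measurements):&lt;br /&gt;

```python
import numpy as np

def cross_pol_ratio_db(vv, vh, eps=1e-12):
    """VH/VV backscatter ratio in dB. Strongly negative values suggest
    direct or corner reflection (co-pol dominates); values closer to
    zero suggest volume scattering (vegetation, rough soil)."""
    vv = np.asarray(vv, dtype=float)
    vh = np.asarray(vh, dtype=float)
    return 10.0 * np.log10((vh + eps) / (vv + eps))

# Illustrative pixels: a metal structure vs. a vegetated field.
print(cross_pol_ratio_db([0.30, 0.05], [0.01, 0.02]))
```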
&lt;br /&gt;
Interferometry with SAR, or inSAR, compares wave phase between observations. The phase of a wave is where it is in its cycle when received. Using sound as an example, measuring the phase of a sound at a given moment means knowing not just its volume and pitch but also that the sound wave is, say, 23% of the way into its high-pressure half, or exactly at its lowest-pressure point.&lt;br /&gt;
&lt;br /&gt;
Suppose we make a SAR image of an area and record not only the amplitude but also the phase of the signal at every pixel. Now, after some time, the satellite’s orbit repeats, and at exactly the same moment in this new orbit (and therefore at exactly the same point in space relative to Earth), we take the same image again. There’s been some change over time that might represent, say, the soil drying out, a road being built, or a tree falling over. But the change in phase over relatively large, coherent regions can be interpreted as the surface getting nearer or farther away by (potentially) very small fractions of a wavelength – on the order of cm. This is an idealized version of inSAR.&lt;br /&gt;
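In this idealized picture, the phase-to-displacement conversion is a one-liner. A sketch, assuming a C-band wavelength of about 5.5 cm (roughly Sentinel-1’s):&lt;br /&gt;

```python
import math

WAVELENGTH = 0.055  # meters; approximate C-band, assumed for illustration

def los_displacement(phase_diff_rad):
    """Convert an interferometric phase difference (radians) into
    line-of-sight displacement (meters). The 4*pi reflects the two-way
    path: the pulse travels down to the surface and back."""
    return phase_diff_rad * WAVELENGTH / (4 * math.pi)

# A quarter-cycle (pi/2) phase shift is only about 7 mm of motion:
print(los_displacement(math.pi / 2) * 1000, "mm")
```

This is why cm-scale deformation is measurable at all – a few millimeters of motion produce an easily detectable fraction of a phase cycle.&lt;br /&gt;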
&lt;br /&gt;
Geologists use this to [https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021GL093043 map earthquakes], but you can also use it for drought (because dry land sags), [https://site.tre-altamira.com/long-term-satellite-study-over-the-london-basin/ tunneling], [https://www.researchgate.net/figure/InSAR-measured-subsidence-rates-on-the-Mosul-dam-Iraq-Negative-values-indicate-motion_fig1_311451580 dam] and [https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-1/Satellites_confirm_sinking_of_San_Francisco_tower building] subsidence, [https://www.nature.com/articles/s41598-020-74957-2 underground explosion monitoring], and so on – in theory, anything that changes the distance between the satellite and the surface. You can even use decoherence (the breakdown of continuity between observations, which makes inSAR hard) for [https://www.academia.edu/44939771/Damage_detection_using_SAR_coherence_statistical_analysis_application_to_Beirut_Lebanon damage detection].&lt;br /&gt;
&lt;br /&gt;
When inSAR works, it’s like magic. You can pick up extremely subtle effects over large areas. It does have limits, like that you can only measure displacement towards or away from the satellite(s), which for SAR is always at least somewhat to the side, which is not necessarily in the direction you actually care about (say, up/down). And as you would expect, it tends to require a lot of very good data (because, for example, satellite orbits are never absolutely perfect repeats), expertise, and minutes to days of fine-tuning.&lt;br /&gt;
&lt;br /&gt;
=== LIDAR ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# 3D (survey-style) lidar&lt;br /&gt;
# 2D (transect-style) lidar&lt;br /&gt;
&lt;br /&gt;
== Image delivery ==&lt;br /&gt;
&lt;br /&gt;
=== Processing levels ===&lt;br /&gt;
&lt;br /&gt;
In theory, data processing levels are standard across the industry. In practice, different providers tend to make up their own definitions as necessary, and you should refer to source-specific documentation. But typically, the commonly seen processing levels are:&lt;br /&gt;
&lt;br /&gt;
'''Level 0''': Unprocessed data, more or less as downlinked to the ground station. Generally not sold or publicly released.&lt;br /&gt;
&lt;br /&gt;
'''Level 1''': Basic data in sensor units, for example TOA radiance. Often has a letter suffix with source-specific meaning, e.g., T to indicate a terrain-corrected version.&lt;br /&gt;
&lt;br /&gt;
'''Level 2''': Derived data in geophysical units, for example surface reflectance. Has been through high-level processing (e.g., atmospheric correction) that contains estimation or modeling.&lt;br /&gt;
&lt;br /&gt;
As a rule of thumb, use level 1 if the imagery itself is the focus and you want to analyze the data in a custom way; use level 2 if you just want something that works out of the box to achieve some further goal. But again, the practical meaning of the levels depends on the dataset, so check to make sure you’re getting what you want.&lt;br /&gt;
&lt;br /&gt;
=== Formats and projections ===&lt;br /&gt;
&lt;br /&gt;
Image data generally comes in formats optimized for large payloads and good metadata. These include HDF, NetCDF, NITF, JPEG2000, and GeoTIFF. GDAL, which is included in QGIS, can read virtually any reasonable format. If you have a choice, GeoTIFF is usually the best.&lt;br /&gt;
&lt;br /&gt;
A geographic projection is basically an invertible function (a reversible, one-to-one relationship) from a sphere to a plane. (If you know enough about geodesy to be saying “the spheroid, actually” right now, go read something more appropriate to your level of expertise 😉.) In other words, for a longitude and a latitude on Earth, a given projection gives you a corresponding x and y that you use to store and display the data.&lt;br /&gt;
&lt;br /&gt;
Typical georeferencing metadata says: (1) here is the projection of the data in this file, and (2) here is where the data in this file lies on the abstract 2D plane defined by that projection.&lt;br /&gt;
&lt;br /&gt;
You may also encounter data that is not projected in any strictly defined way. This might be as simple as a photo taken with a phone out a plane window. In theory you could define a projection for it if you knew parameters like the 3D GPS location of the phone, the angle it was pointed at, its camera’s field of view, and the small distortions introduced by its lens. But in practice it’s usually easier to find known points in the image and “tie down” or georeference the image based on those points. Given at least 3 but ideally more known points, you(r software) can warp the image into some standard projection. It’s deriving an arbitrary projection from pixel space to geographical coordinates by running a regression on the pixel-to-location pairs you provide. These known points are called ground control points, or GCPs. Some data, like Sentinel-1 SAR, is provided unprojected but with GCPs. This leaves more work for the user, but also more flexibility if you want to adjust the GCPs.&lt;br /&gt;
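&lt;br /&gt;
The “regression on pixel-to-location pairs” can be sketched in its simplest form – an affine least-squares fit. (Real tools such as GDAL also offer higher-order polynomial and thin-plate-spline warps.) The GCPs below are made up for illustration:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def fit_affine_gcps(pixels, coords):
    # pixels: (n, 2) array of (col, row); coords: (n, 2) of (x, y) in the
    # target projection. Solves coords = [col, row, 1] @ A by least squares.
    # Needs at least 3 non-collinear GCPs; extra points average out error.
    design = np.column_stack([pixels, np.ones(len(pixels))])
    A, *_ = np.linalg.lstsq(design, coords, rcond=None)
    return A  # (3, 2) affine matrix

def pixel_to_coord(A, col, row):
    return np.array([col, row, 1.0]) @ A

# Hypothetical GCPs: 10 m pixels, upper-left corner at (500000, 4400000).
px = np.array([[0, 0], [100, 0], [0, 100], [100, 100]])
xy = np.array([[500000, 4400000], [501000, 4400000],
               [500000, 4399000], [501000, 4399000]])
A = fit_affine_gcps(px, xy)
print(pixel_to_coord(A, 50, 50))  # about [500500, 4399500]
```
&lt;br /&gt;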
&lt;br /&gt;
There are several standard ways to represent projections, notably WKT, proj, and EPSG codes. We’ll give EPSG codes here.&lt;br /&gt;
&lt;br /&gt;
Probably the most common projection you will see for raw data is Universal Transverse Mercator, or UTM. It’s actually a family of projections with the same formula but different parameters, each adapted to a different meridional slice of Earth’s surface. These UTM zones are named with numbers and north/south hemispheres: Paris is in UTM zone 31N, Geneva is in 32N, and Sydney is in 56S. (If you’ve used the MGRS grid system, this should sound familiar, but it’s not identical.) Within a zone, UTM is very close to equal-area and conformal, which are the most important properties for a projection if you want to do analytical work. Equal-area means 1 km² is the same number of pixels at any point in the projection, and conformal means that 1 km is the same number of pixels in every direction from any given point within the projection. (On a non-conformal map, circles appear as ovals, squares are rectangles, etc. This is a massive pain in the ass.) UTM is EPSG:32XYY, where X is 6 for N and 7 for S, and YY is the zone number, so for example 13S is EPSG:32713.&lt;br /&gt;
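&lt;br /&gt;
The EPSG arithmetic in the last sentence can be written out directly (ignoring the handful of irregular zone boundaries around Norway and Svalbard):&lt;br /&gt;
&lt;br /&gt;
```python
import math

def utm_epsg(lon, lat):
    # UTM zones are 6 degrees of longitude wide, numbered 1 to 60 starting
    # at 180 W. floor(lat / 90) is 0 in the northern hemisphere and -1 in
    # the southern, so the second term picks 326YY vs. 327YY.
    zone = int(math.floor((lon + 180.0) / 6.0)) + 1
    return 32600 + zone - 100 * int(math.floor(lat / 90.0))

print(utm_epsg(2.35, 48.85))     # Paris, zone 31N: 32631
print(utm_epsg(151.21, -33.87))  # Sydney, zone 56S: 32756
```
&lt;br /&gt;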
&lt;br /&gt;
For display on standard web maps, people often use web Mercator, a.k.a. spherical Mercator, which is not equal-area at the global scale, but is conformal. This is why web maps make Greenland far too big, but it remains approximately the right shape. For local analysis, web Mercator is fine (equivalent to UTM, actually), and can be a decent choice if you understand the issues with scale across large areas. EPSG:3857.&lt;br /&gt;
&lt;br /&gt;
The other projection you’ll see the most is equirectangular or plate carrée, which uses longitude and latitude directly as x and y coordinates on a plane. It is neither equal-area nor conformal, and basically only exists because the math is easy. It’s often used by people who should know better. Its non-conformality means that any time you’re working near the poles, everything is squeezed, and you’re either overzooming one dimension, losing data in the other, or both. If you just want to scatterplot some points as quickly as possible, equirectangular is fine, but avoid it when doing anything with imagery. EPSG:4326. (Note that this is the EPSG of WGS84, the geodetic standard that defines things like the prime meridian. Many, many other projections refer to WGS84 in their definitions. But using WGS84 as a projection itself, instead of as an ingredient in a projection, is the equirectangular projection.)&lt;br /&gt;
&lt;br /&gt;
The details of projections are notoriously tricky; it’s hard to work with them in a strictly correct and optimal way at all times. It’s the kind of topic that attracts pedantry and flamewars, unfortunately. Here’s some advice, none of it ironclad:&lt;br /&gt;
&lt;br /&gt;
# Most imagery data, if it’s projected at all, is already in an appropriate projection as it arrives from the data provider. If you can, leave it as-is. Every reprojection involves resampling the data, which generally loses information.&lt;br /&gt;
# You should rarely have to think explicitly about projections. The whole point of a projection is to let you think in terms of pixels and/or meters; make sure you’re taking full advantage of your tools’ ability to handle these things automatically. If you find yourself fighting your projection, something is wrong.&lt;br /&gt;
# If you’re working on a multi-source project, choose a suitable projection at the start and project all data into it ''once'', when you import it.&lt;br /&gt;
# Most pain around projections comes from accidentally mixing projections. Don’t do that.&lt;br /&gt;
# The local UTM is usually a reasonable choice.&lt;br /&gt;
&lt;br /&gt;
=== Bundles ===&lt;br /&gt;
&lt;br /&gt;
Imagery is most often supplied in bundles, which are basically directories with image data files, usually separated by band or polarization (at least at level 1), and metadata files (XML, json, etc.). Some analysis tools will have plugins that will open specific types of bundles as single objects, automatically applying calibration data found in the metadata and so forth. In other situations you might open the image file and have to parse the metadata with your own code or by hand. If you’re getting to know a new imagery source, going through and understanding the purpose of everything delivered in a bundle is a great way to start.&lt;br /&gt;
&lt;br /&gt;
=== DN and PN ===&lt;br /&gt;
&lt;br /&gt;
Image formats generally store integers, since they losslessly compress better and are often easier to work with than floating point numbers. However, this presents a problem if, for example, the units being represented are reflectance, which ranges from 0 to 1. If we round every reflectance value to either 0 or 1, we’re delivering 1-bit data that’s probably close to totally useless. To address this, we might scale up to, say, 0 through 100 and say that instead of recording reflectance fraction, we’re recording reflectance percentage – fraction × 100. That still leaves us with less than 7 bits of radiometric resolution, though. Really, we’d like to be able to scale our values into an arbitrary range, maybe 0 through 65,535 to make full use of a 16-bit image, and send it with some metadata that tells how to get it back into some absolute or physically meaningful unit. You could even change the scaling factor per scene to optimize for bright v. dark, for example. And this is what providers generally do. The values actually stored in the image format are called digital numbers, or DN, and the values after scaling (typically with a multiplicative and an additive coefficient) are physical numbers, or PN.&lt;br /&gt;
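&lt;br /&gt;
A sketch of the scaling, with hypothetical coefficients – the real values come from the provider’s metadata:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def dn_to_pn(dn, scale, offset):
    # Typical provider convention: physical number = scale * DN + offset.
    # The coefficients are read from the scene or bundle metadata.
    return scale * np.asarray(dn, dtype=float) + offset

# Hypothetical 16-bit reflectance encoding: 0..65535 maps to 0.0..1.0.
dn = np.array([0, 32768, 65535])
print(dn_to_pn(dn, 1.0 / 65535.0, 0.0))  # [0.0, ~0.5, 1.0]
```
&lt;br /&gt;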
&lt;br /&gt;
Not all providers do this. Sentinel-2 level 1C data, for example, has a globally constant scaling factor, which means different bands have a defined relationship even if you read raw, unscaled pixels out of them – which is great. Per-scene or per-band scaling is the more common approach, though. Basically, don’t assume that pixels actually mean anything with an absolute definition, especially compared to pixels from another band or scene, unless you know that they’re PN.&lt;br /&gt;
&lt;br /&gt;
For most OSINT-relevant analysis, working in DN is a [https://en.wikipedia.org/wiki/Venial_sin venial sin] at worst and often justifiable. But it is useful to know what it means and to recognize situations where you should convert to PN. Any tool designed to work with remote sensing data will at least have some affordance for DN to PN scaling, and, again, may be able to parse the parameters out of a bundle (or in-image-file metadata) and apply them transparently so you never have to think about it.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2211</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2211"/>
		<updated>2022-07-20T22:55:48Z</updated>

		<summary type="html">&lt;p&gt;Vruba: /* Radiometric */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this is the heritage of cold war IMINT workers wanting shadows to estimate structure heights. (If you image around noon, shadows in the tropics are nearly vertical. This gives you depth perception problems, like you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all commercial satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to do one full orbit. It’s another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: about 400 km or 250 mi from the ground track to either side, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
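&lt;br /&gt;
The trig in question, under a flat-Earth approximation (Earth’s curvature actually makes the true reachable area a bit larger):&lt;br /&gt;
&lt;br /&gt;
```python
import math

def nadir_to_edge_km(altitude_km, look_angle_deg):
    # Flat-Earth sketch: ground distance from the nadir point to the spot
    # seen at a given look angle, for a satellite at the given altitude.
    return altitude_km * math.tan(math.radians(look_angle_deg))

half = nadir_to_edge_km(705.0, 30.0)  # Landsat-like altitude
print(half)        # about 407 km to either side of the ground track
print(2.0 * half)  # about 814 km of total reachable width
```
&lt;br /&gt;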
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimension(s) we think of most often is spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
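&lt;br /&gt;
The napkin arithmetic can be made explicit with a simple linear mixing model – the simplifying assumption that a pixel’s value is the area-weighted average of what’s inside it. The reflectance values here are made up for illustration:&lt;br /&gt;
&lt;br /&gt;
```python
# A 10 cm napkin inside a 25 cm pixel covers (10/25)^2 = 16% of its area.
napkin_refl = 0.90   # bright white fabric (assumed)
asphalt_refl = 0.05  # fresh asphalt (assumed)
fraction = (10.0 / 25.0) ** 2

mixed = fraction * napkin_refl + (1.0 - fraction) * asphalt_refl
print(mixed)  # 0.186: nearly 4x brighter than the 0.05 background
```
&lt;br /&gt;
With good radiometric resolution, that jump from 0.05 to roughly 0.19 is easily detectable even though the napkin itself is sub-pixel.&lt;br /&gt;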
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor” we understand this by default to mean spatial resolution. Spatial resolution is also called ground sample distance (GSD) or ground resolved distance (GRD), and is the size of the data’s pixels as measured on the ground. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length dimension: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. If we think about standard optical instruments, which are basically telescopes with CCDs, they do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – a fraction of the arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth at a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) and looking on nadir (straight down). That’s a best case. When looking to the side, at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
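&lt;br /&gt;
As a rough illustration of why the nominal GSD is a best case, here is a flat-Earth sketch of how the cross-track footprint of a pixel grows off-nadir; the sensor parameters are hypothetical:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def cross_track_gsd_m(altitude_m, ifov_rad, look_angle_deg):
    # Flat-Earth sketch: slant range grows as 1/cos(theta), and the beam
    # hits the ground obliquely (another factor of 1/cos(theta)), so the
    # cross-track pixel footprint grows roughly as 1/cos^2(theta).
    theta = math.radians(look_angle_deg)
    return altitude_m * ifov_rad / math.cos(theta) ** 2

# Hypothetical sensor: 1 microradian per pixel at 500 km altitude.
print(cross_track_gsd_m(500e3, 1e-6, 0.0))   # 0.5 m at nadir
print(cross_track_gsd_m(500e3, 1e-6, 30.0))  # about 0.67 m at 30 degrees off-nadir
```
&lt;br /&gt;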
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked consideration on spatial resolution is that pixel area is the square of pixel side length, and area is what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is more than 3× as clear as 30 m imagery, all else being equal.&lt;br /&gt;
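&lt;br /&gt;
The area arithmetic, as a one-liner:&lt;br /&gt;
&lt;br /&gt;
```python
def clarity_ratio(coarse_gsd, fine_gsd):
    # Pixel count over the same ground area scales with the square of the
    # GSD ratio, not with the ratio itself.
    return (coarse_gsd / fine_gsd) ** 2

print(clarity_ratio(1.0, 0.5))    # 4 pixels where there was 1
print(clarity_ratio(30.0, 10.0))  # 9x the pixels, not 3x
```
&lt;br /&gt;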
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered less by the atmosphere than visible light, especially blue light. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
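&lt;br /&gt;
NDVI, linked above, is simple enough to state directly. A minimal version, assuming the two bands are in comparable reflectance-like units:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def ndvi(nir, red):
    # Normalized difference vegetation index: values near +1 suggest dense
    # healthy vegetation, near 0 bare soil or rock, negative water or snow.
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Hypothetical surface reflectances.
print(ndvi(0.45, 0.05))  # 0.8: healthy vegetation
print(ndvi(0.25, 0.20))  # about 0.11: bare soil
```
&lt;br /&gt;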
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These will be called the panchromatic or pan and (collectively) multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, for example, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits (256 to 1024 levels); newer and better ones are typically 12 to 14 (4096 to 16384 levels).&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks in the Sahara, despite the tracks being made out of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford a lot of highest spatial resolution imagery. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
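&lt;br /&gt;
That 1/sqrt(n) claim is easy to check numerically with synthetic, uncorrelated noise – the idealized case:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_pixels = 16, 100_000

# n bands with independent, unit-standard-deviation noise. (Idealized:
# real sensor noise is partly correlated across bands.)
noise = rng.standard_normal((n_bands, n_pixels))
averaged = noise.mean(axis=0)

print(noise[0].std())  # close to 1.0
print(averaged.std())  # close to 0.25, i.e. 1/sqrt(16)
```
&lt;br /&gt;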
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding errors or aliasing: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of talking about effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
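&lt;br /&gt;
Quantization noise has a well-known closed form: for a step size of q, its standard deviation is q/sqrt(12). A quick numerical check at 8 bits, with synthetic data:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.uniform(0.0, 1.0, 1_000_000)  # "true" reflectance values

bits = 8
levels = 2 ** bits - 1                     # 255 steps across 0..1
quantized = np.round(signal * levels) / levels

step = 1.0 / levels
print((quantized - signal).std())  # empirical quantization noise
print(step / np.sqrt(12.0))        # classic q/sqrt(12): about 0.00113
```
&lt;br /&gt;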
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is using air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh scattering and aerosol scattering, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. Note that this range, given here in nm, is 1.567 to 1.651 μm. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
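&lt;br /&gt;
For instance, NDVI is just a normalized band ratio; here’s a minimal numpy version (toy reflectance values, not from any real scene):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, pixelwise."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids 0/0

# toy 2x2 reflectances: healthy vegetation is bright in NIR, dark in red
nir = np.array([[0.5, 0.5], [0.1, 0.3]])
red = np.array([[0.1, 0.4], [0.1, 0.3]])
print(ndvi(nir, red))  # the top-left pixel stands out as vegetated
```
NDWI, NBR, and the rest follow the same (a − b)/(a + b) pattern with different band pairs.&lt;br /&gt;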
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue bands of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, and therefore you should favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will be (typically) some multiple of 2 larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
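&lt;br /&gt;
As a minimal numpy sketch of that overlay intuition, in the spirit of a Brovey transform (toy shapes, nearest-neighbor upsampling; real algorithms are considerably smarter):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def simple_pansharpen(rgb_lowres, pan):
    """Brovey-style sketch: upsample the color image, then rescale each
    pixel so its overall intensity matches the sharp pan band.
    rgb_lowres is (h, w, 3); pan is (2h, 2w)."""
    # nearest-neighbor 2x upsample (a real pipeline would interpolate)
    rgb_up = np.repeat(np.repeat(rgb_lowres, 2, axis=0), 2, axis=1)
    intensity = rgb_up.mean(axis=2) + 1e-9  # epsilon avoids divide-by-zero
    return rgb_up * (pan / intensity)[..., None]
```
The result keeps the pan band’s spatial detail while only ''estimating'' the color at full resolution – which is exactly where the invented-information caveat comes from.&lt;br /&gt;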
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;br /&gt;
&lt;br /&gt;
==== Georeferencing and orthorectification ====&lt;br /&gt;
&lt;br /&gt;
''Much of this applies outside optical as well – move?''&lt;br /&gt;
&lt;br /&gt;
A raw satellite image of land is an angled view of a rough surface. (Even nominally nadir-pointing satellites acquire imagery that is off-nadir toward its edges.) If you imagine riding on a satellite and looking off to, say, the west, you will see the eastern sides of hills and buildings at flatter angles than you see the western sides – if you can see them at all. To turn a raw image into something that is projected orthographically, like a map, you have to use a terrain model – a 3D map of the planet’s surface. Then you can use information about where the satellite was and the angle its sensor was pointing, and for each pixel in the output image, you can project it out to see at what latitude and longitude it must have intersected the ground. Then you move all the pixels to their coordinates in some convenient projection, and you’ve essentially taken the image out of perspective and made it orthographic.&lt;br /&gt;
&lt;br /&gt;
Except:&lt;br /&gt;
&lt;br /&gt;
* Earth’s surface is rough at every scale, and even “porous” or multiply defined in the sense that there are features like leafless trees that make it hard to define where the optical surface actually ''is'' at any given scale.&lt;br /&gt;
* There is no perfectly [https://en.wikipedia.org/wiki/Accuracy_and_precision accurate, precise], global, completely up-to-date terrain model of the Earth, let alone at a reasonable price. SRTM is pretty good but it’s only about 30 m, stops short of the Arctic, and is 20+ years out of date: there are entire lakes, highway cuts, and reclaimed islands that don’t exist in it.&lt;br /&gt;
* Satellites typically only know where they’re pointing to within the equivalent of about 10 pixels (which, to be fair, is usually an extremely small fraction of a degree), so the pointing data can only narrow things down, not actually tell you where you are.&lt;br /&gt;
* Continental drift means that a continent can move by easily 1 px over the lifetime of a high-end commercial satellite; a major earthquake can discontinuously distort a small region by several m.&lt;br /&gt;
* To properly pin down an image (i.e., to check the reported pointing angle), you need to know the exact 3D location of 3 visible points within it, and realistically more like 10.&lt;br /&gt;
* All these errors can combine.&lt;br /&gt;
* No matter what, you can’t recover occluded features, i.e. things you can’t see in the original data. If you want a high-res satellite image of something like a canyon, you realistically need half a dozen images at very specific angles, which is extremely hard.&lt;br /&gt;
&lt;br /&gt;
We could go on! Georeferencing and orthorectification is a difficult problem. It’s easier for lower-resolution satellites, because a given angular error comes out to fewer pixels. Also, survey-mode satellites like Landsat and Sentinel-2, which are nadir-pointing anyway, put a lot of effort into doing this well. Two Landsat scenes will almost always coregister to well within a pixel. Sentinel-2 is a little less reliable, especially toward the poles. Commercial imagery is often displaced by far more than you would think. One way to see this is to step back in Google Earth Pro’s history tool, especially somewhere relatively remote and rugged.&lt;br /&gt;
&lt;br /&gt;
Here’s a farm in Nepal: 28.553, 84.2415. Just step back in time and watch it jump around underneath the pin. If you really want to be scared, watch the cliff to its north. This is why imagery analysts who understand imagery pipelines rarely use a whole lot of significant digits in their coordinates! You don’t really know where anything on Earth is, in absolute terms, to within more than a few meters at best if all you have to go on is a satellite image.&lt;br /&gt;
&lt;br /&gt;
==== Atmospheric correction ====&lt;br /&gt;
&lt;br /&gt;
Over long distances, even in clear weather, the atmosphere scatters and absorbs light. This is why distant hills are low-contrast and blueish (blue light is scattered more). What a satellite actually measures is called top-of-atmosphere radiance, or TOA. This is a measurement of nothing more than the amount of energy received per second, per pixel, per band. It can be measured pretty objectively. However, it’s often not what you want. For one thing, it’s too blue. For another, the amount of blueness and related effects will vary semi-randomly with atmospheric conditions (humidity, maybe dust storms or wildfire smoke, etc.) and predictably with season (sun distance and angle).&lt;br /&gt;
&lt;br /&gt;
Therefore, a reasonable desire is to basically normalize the sun and remove the effects of the atmosphere. What we’re trying to get to here is called surface reflectance (SR). The main issue is that we don’t know the true state of the atmosphere at the moment the image was acquired. The best we can do is to model it and subtract it out. This is one of ''the'' problems in remote sensing, and you could earn a PhD by improving [https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes#Table_of_models one of the major models] by a few percent.&lt;br /&gt;
&lt;br /&gt;
The good news is there’s a brutally simple method that works pretty well most of the time. Dark object subtraction means assuming that the darkest pixel in the image should be pure black. Therefore, if you subtract out however much blue (and green, and so on) signal is present in the darkest pixel, you will have canceled out all the haze. It’s annoying how well this works considering how basic it is. It’s roughly equivalent to the automatic contrast adjustment tool in an image editor like Photoshop, or, to be a little more exact, like using the eyedropper in the Levels tool to set the black point to the darkest pixel.&lt;br /&gt;
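&lt;br /&gt;
Dark object subtraction really is that basic; here’s a toy numpy sketch (made-up values with a constant haze offset, which is itself an idealization):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def dark_object_subtraction(band):
    """Subtract the band's darkest pixel value, on the assumption that it
    'should' be black, so whatever signal it holds is additive haze."""
    return band - band.min()

# toy blue band: true values plus a constant haze offset of 0.12
true_blue = np.array([[0.00, 0.30], [0.05, 0.60]])
hazy_blue = true_blue + 0.12
print(dark_object_subtraction(hazy_blue))  # recovers true_blue exactly here
```
Run per band; it only works to the extent that the haze is additive and the scene really contains something near-black.&lt;br /&gt;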
&lt;br /&gt;
Correction to reflectance may or may not attempt to correct for terrain effects (i.e., relighting the scene). Different pipelines have different conventions for how far to correct or what to call different kinds of correction.&lt;br /&gt;
&lt;br /&gt;
Atmospheric correction is usually not key for OSINT purposes, but any time you find yourself taking exact measurements of pixel values, you should at least know whether you’re working in TOA or in SR, and if SR, you should have a sense of what the pipeline was.&lt;br /&gt;
&lt;br /&gt;
==== Common optical sensor types ====&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# Pushbroom&lt;br /&gt;
# Whiskbroom&lt;br /&gt;
# Full-frame&lt;br /&gt;
&lt;br /&gt;
=== Thermal ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
=== Synthetic aperture radar ===&lt;br /&gt;
&lt;br /&gt;
Synthetic aperture radar, or SAR, creates images with radio waves in wavelengths around 1 cm to 1 m.&lt;br /&gt;
&lt;br /&gt;
As a very first approximation, a SAR image is comparable to an optical image that shows objects that reflect radio waves instead of those that reflect visible light.&lt;br /&gt;
&lt;br /&gt;
==== SAR is not just a regular camera but for radar instead of light ====&lt;br /&gt;
&lt;br /&gt;
Beyond the fact that both modalities create images, SAR works on completely different principles from standard optical imaging, and understanding it requires understanding those principles.&lt;br /&gt;
&lt;br /&gt;
This page will only lightly outline how SAR works; for the math, please refer to SERVIR’s [https://servirglobal.net/Global/Articles/Article/2674/sar-handbook-comprehensive-methodologies-for-forest-monitoring-and-biomass-estimation SAR Handbook] (forest-oriented but with solid fundamentals), the NOAA/NESDIS [https://www.sarusersmanual.com/ Synthetic Aperture Radar Marine User’s Manual], or another good text. Here we will only point out some key ideas. If you want to get full value out of SAR, you should expect to invest at least a few hours in learning how it actually works. There’s a reason SAR experts tend to be a bit snobbish about it: it’s complex, subtle, and highly rewarding.&lt;br /&gt;
&lt;br /&gt;
==== SAR is active ====&lt;br /&gt;
&lt;br /&gt;
Optical sensors are almost all passive: they use energy that objects are already reflecting (usually from the sun) or producing (for example, in the thermal infrared). In contrast, SAR is active: it sends out a pulse of radar energy, roughly analogous to the flash on a camera.&lt;br /&gt;
&lt;br /&gt;
==== SAR resolves space with time, not with focus ====&lt;br /&gt;
&lt;br /&gt;
SAR’s resolution is based on the timing of returning signals. It does not pass the energy it senses through a focusing lens or mirror the way an optical sensor does. This leads to properties that are highly unintuitive if you think of it as merely “optical but in a different frequency” – for example, it does not lose resolution with distance; there is no exact equivalent of perspective. SAR is more like side-scan sonar than like an everyday camera.&lt;br /&gt;
&lt;br /&gt;
==== Speckle ====&lt;br /&gt;
&lt;br /&gt;
Like a laser beam, a SAR signal interferes with itself. At a given moment and a given point, its waves may be canceling out or adding up. This means that a SAR image is intrinsically grainy or stippled-looking. This is not the same as sensor noise, because the effect is physically real and not a problem of errors in measurement. It can be mitigated by downsampling, averaging images from different “pings”, or applying despeckling filters. (A simple local median works reasonably well, but there’s a range of sophistication all the way up to sensor-specific filters based on physical models, extra inputs, fancy machine learning, etc.)&lt;br /&gt;
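&lt;br /&gt;
A sketch of the simple local-median approach in pure numpy (in practice you’d reach for something like scipy.ndimage.median_filter or a proper speckle filter):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def median_despeckle(img, size=3):
    """Local median filter, a basic despeckler: replace each pixel with the
    median of its size-by-size neighborhood (edges handled by padding)."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    return np.median(windows, axis=(-2, -1))
```
A lone speckle spike among otherwise uniform neighbors simply disappears, at the cost of some blurring of real fine detail.&lt;br /&gt;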
&lt;br /&gt;
==== Retroreflection and multiple reflection ====&lt;br /&gt;
&lt;br /&gt;
One consequence of SAR being active sensing is that it sees very bright returns from concave right angles made out of metal, which act as [https://en.wikipedia.org/wiki/Corner_reflector corner reflectors]. (Notice how road signs and markers seem to glow disproportionately in headlights – it’s because those are [https://en.wikipedia.org/wiki/Retroreflector retroreflectors] in the optical range.) Highly developed cities, for example, are very retroreflective to radar. This shows up especially where the angle of the sensor’s view aligns to a street grid, when it’s called the cardinal effect. (See, for example, [https://www.mdpi.com/2072-4292/12/7/1187/htm this academic paper], where they propose using retroreflection specifically to classify urban landcover. In general, there are very few radio-frequency corner reflectors in nature, and retroreflection is a good sign that you’re looking at a building, vehicle, etc.)&lt;br /&gt;
&lt;br /&gt;
Where the reflection is separated enough from the first reflecting surface that you can see both independently, we use the term multiple reflection (or mirroring or ghosting). This most often happens where tall buildings or bridges are next to or over water. A radio wave may hit the water, then a bridge, then return to the sensor; another may hit a bridge, then the water, and return to the sensor, and so on, and you’ll see images of multiple bridges.&lt;br /&gt;
&lt;br /&gt;
==== Layover and shadowing ====&lt;br /&gt;
&lt;br /&gt;
Layover (a.k.a. relief displacement) is an effect that makes objects at higher elevations appear closer to the sensor. This happens because the radio waves from the top of a vertical object arrive back at the sensor (which is above and to the side of the object) before the radio waves from its base. This is most obvious with truly vertical objects like radio towers and skyscrapers, but surfaces that have any vertical component (hills, for example) will show some degree of layover. Ultimately, layover comes from the difference between slant range, which is what the sensor actually measures – distance from the sensor – and ground range, which is what we tend to intuitively want or expect when we look at a map-like image.&lt;br /&gt;
&lt;br /&gt;
The painfully counterintuitive aspect, if you’re looking at a SAR image as if it were an ordinary optical image, is that layover goes in the opposite direction – buildings, for example, lean toward the sensor. For example, if you take a normal photo of a tall building from the south, it will cover the ground to its north. This feels normal because cameras, telescopes, etc., work on the same basic principle as the eye. But if you collect a SAR image of the same building from the south, it will cover the ground to its south. (Also, it won’t actually mask that ground, it will just add its signal in.)&lt;br /&gt;
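&lt;br /&gt;
As a rough back-of-the-envelope sketch of the geometry (flat terrain, simple trigonometry; the height and incidence angle are made-up numbers):&lt;br /&gt;
&lt;br /&gt;
```python
import math

def layover_shift_m(height_m, incidence_deg):
    """Ground-range layover for a vertical structure: how far toward the
    sensor its top appears, under idealized flat-terrain geometry.
    Incidence angle is measured from vertical."""
    theta = math.radians(incidence_deg)
    # the top is closer in slant range by h*cos(theta); dividing by sin(theta)
    # converts that slant-range difference into ground-range displacement
    return height_m * math.cos(theta) / math.sin(theta)

print(layover_shift_m(100.0, 35.0))  # roughly 143 m toward the sensor
```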
&lt;br /&gt;
Shadowing is the lack of data returned from surfaces facing away from the sensor. The shadowed side of terrain is stretched out as part of layover.&lt;br /&gt;
&lt;br /&gt;
SAR imagery can be terrain corrected. Basically, this is a process that uses (1) the satellite’s position and the characteristics of its instrument and (2) a DEM or other model of the terrain it was looking at, and uses these to warp the SAR imagery into map coordinates and account for shadowing. Whether this is worthwhile will depend on the quality of the terrain correction algorithm and the data you can give it, and on what you need to analyze.&lt;br /&gt;
&lt;br /&gt;
In general, be cautious with terrain correction, because it can never fully correct for all effects (e.g., the BRDF of different landcovers), and it can magnify small problems in input data. Sometimes it’s better to have a strange-looking image that you know how to interpret than a “normalized” one with subtle errors.&lt;br /&gt;
&lt;br /&gt;
==== Clouds and many other materials are generally transparent to SAR ====&lt;br /&gt;
&lt;br /&gt;
SAR frequencies are typically chosen to cut through weather. While this is a massive advantage of SAR over optical (the average place on Earth is cloudy roughly half the time), it’s also not absolute. Heavy rain, for example, can show up as ghostly features in some bands, so be on the lookout for it. If you see something you can’t interpret that might be weather-related, check the weather for the place at the time of image acquisition!&lt;br /&gt;
&lt;br /&gt;
More generally – beyond the specific case of water vapor in air – SAR interacts with materials differently than light does. For example, it reflects more off liquid water, so you can’t see into shallows with SAR the way you can with optical. On the other hand, it interacts less with certain very dry materials, so it can cut through loose sand, dead vegetation, and so on. (For example, SAR is used to map ancient river systems under the Sahara [https://www.mdpi.com/2073-4441/9/3/194/htm because it can image bedrock under loose, dry sand].) The details of SAR signal interaction depend on wavelength, angle, and other factors; if you’re doing more than casual interpretation of data from a given sensor, it’s a good idea to look it up and familiarize yourself.&lt;br /&gt;
&lt;br /&gt;
==== Polarimetry and interferometry ====&lt;br /&gt;
&lt;br /&gt;
Thus far we have only considered backscatter images: maps of the intensity of reflected radio energy. But a good deal of SAR’s value is beyond this kind of data. As well as recording how much energy is in reflected radio waves, SAR sensors characterize the radio waves themselves.&lt;br /&gt;
&lt;br /&gt;
Let’s use Sentinel-1 as an example for polarimetry. S1 sends radio waves in the vertical polarization, abbreviated V, and records them in both vertical and horizontal, or H, polarizations. In practice, this means that when you download an S1 frame in the usual way, you see two images, labeled VV (where the sensor transmitted V and measured V) and VH (where it transmitted V and measured H). The ratio of the two bands therefore tells you (in a general, statistical way, within the constraints of speckling) how much the surface at a given pixel tends to return a radio signal at that frequency and angle in the same polarization.&lt;br /&gt;
&lt;br /&gt;
Why do we care? Because direct reflection and corner reflectors tend to return waves at the same polarization (for Sentinel-1, always VV), while volumes that scatter waves return proportionally more cross-polarized (VH) waves. The second category is mainly vegetation and soil, while the first is corner reflectors, metal, and so on – proportionally more artificial surfaces. You can literally get a PhD in the nuances of SAR polarimetry, but at the most basic level, it tells you something about surface properties that no other sensor would.&lt;br /&gt;
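&lt;br /&gt;
A sketch of the most basic polarimetric product, the cross-polarization ratio (the backscatter values below are made up for illustration):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def cross_pol_ratio(vv, vh, eps=1e-9):
    """VH/VV ratio in dB. Higher (less negative) values suggest volume
    scattering (vegetation); lower values suggest direct or double-bounce
    returns from hard targets."""
    return 10.0 * np.log10((vh + eps) / (vv + eps))

# toy backscatter (linear power): a 'forest' pixel and a 'building' pixel
vv = np.array([0.05, 0.50])
vh = np.array([0.02, 0.01])
print(cross_pol_ratio(vv, vh))  # the forest pixel is much less negative
```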
&lt;br /&gt;
Interferometry with SAR, or inSAR, compares wave phase between observations. The phase of a wave is where it is in its cycle when received. Using sound as an example, measuring the phase of a sound at a given moment means not just its volume and pitch but that the sound wave is, say, 23% of the way into its high pressure half, or exactly at the lowest-pressure point.&lt;br /&gt;
&lt;br /&gt;
Suppose we make a SAR image of an area and record not only the amplitude but also the phase of the signal at every pixel. Now, after some time, the satellite’s orbit repeats, and at exactly the same moment in this new orbit (and therefore at exactly the same point in space relative to Earth), we take the same image again. There’s been some change over time that might represent, say, the soil drying out, a road being built, or a tree falling over. But the change in phase over relatively large, coherent regions can be interpreted as the surface getting nearer or farther away by (potentially) very small fractions of a wavelength – on the order of cm. This is an idealized version of inSAR.&lt;br /&gt;
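&lt;br /&gt;
The core arithmetic of this idealized version is tiny. Assuming Sentinel-1’s roughly 5.5 cm C-band wavelength:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Sentinel-1 C-band wavelength, ~5.55 cm (assumed here for illustration)
wavelength_m = 0.0555

def phase_to_los_displacement(delta_phase_rad, wavelength_m):
    """Convert an interferometric phase change to line-of-sight displacement.
    The factor of 2 (hence 4*pi rather than 2*pi) is the two-way travel
    path: the pulse goes out and comes back."""
    return delta_phase_rad * wavelength_m / (4.0 * np.pi)

# a quarter-cycle of phase change is about 7 mm of line-of-sight motion
print(phase_to_los_displacement(np.pi / 2, wavelength_m))
```
This is why cm- and even mm-scale motion is measurable with a wavelength of several cm.&lt;br /&gt;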
&lt;br /&gt;
Geologists use this to [https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021GL093043 map earthquakes], but you can also use it for drought (because dry land sags), [https://site.tre-altamira.com/long-term-satellite-study-over-the-london-basin/ tunneling], [https://www.researchgate.net/figure/InSAR-measured-subsidence-rates-on-the-Mosul-dam-Iraq-Negative-values-indicate-motion_fig1_311451580 dam] and [https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-1/Satellites_confirm_sinking_of_San_Francisco_tower building] subsidence, [https://www.nature.com/articles/s41598-020-74957-2 underground explosion monitoring], and so on – in theory, anything that changes the distance between the satellite and the surface. You can even use decoherence (the breakdown of continuity between observations, which makes inSAR hard) for [https://www.academia.edu/44939771/Damage_detection_using_SAR_coherence_statistical_analysis_application_to_Beirut_Lebanon damage detection].&lt;br /&gt;
&lt;br /&gt;
When inSAR works, it’s like magic. You can pick up extremely subtle effects over large areas. It does have limits, like that you can only measure displacement towards or away from the satellite(s), which for SAR is always at least somewhat to the side, which is not necessarily in the direction you actually care about (say, up/down). And as you would expect, it tends to require a lot of very good data (because, for example, satellite orbits are never absolutely perfect repeats), expertise, and minutes to days of fine-tuning.&lt;br /&gt;
&lt;br /&gt;
=== LIDAR ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# 3D (survey-style) lidar&lt;br /&gt;
# 2D (transect-style) lidar&lt;br /&gt;
&lt;br /&gt;
== Image delivery ==&lt;br /&gt;
&lt;br /&gt;
=== Processing levels ===&lt;br /&gt;
&lt;br /&gt;
In theory, data processing levels are standard across the industry. In practice, different providers tend to make up their own definitions as necessary, and you should refer to source-specific documentation. But typically, the commonly seen processing levels are:&lt;br /&gt;
&lt;br /&gt;
'''Level 0''': Unprocessed data, more or less as downlinked to the ground station. Generally not sold or publicly released.&lt;br /&gt;
&lt;br /&gt;
'''Level 1''': Basic data in sensor units, for example TOA radiance. Often has a letter suffix with source-specific meaning, e.g., T to indicate a terrain-corrected version.&lt;br /&gt;
&lt;br /&gt;
'''Level 2''': Derived data in geophysical units, for example surface reflectance. Has been through high-level processing (e.g., atmospheric correction) that contains estimation or modeling.&lt;br /&gt;
&lt;br /&gt;
As a rule of thumb, use level 1 if the imagery itself is the focus and you want to analyze the data in a custom way; use level 2 if you just want something that works out of the box to achieve some further goal. But again, the practical meaning of the levels depends on the dataset, so check to make sure you’re getting what you want.&lt;br /&gt;
&lt;br /&gt;
=== Formats and projections ===&lt;br /&gt;
&lt;br /&gt;
Image data generally comes in formats optimized for large payloads and good metadata. These include NetCDF, NITF, JPEG2000, and GeoTIFF. GDAL, which is included in QGIS, can read virtually any reasonable format. If you have a choice, GeoTIFF is usually the best.&lt;br /&gt;
&lt;br /&gt;
A geographic projection is basically an invertible function (a reversible, one-to-one relationship) from a sphere to a plane. (If you know enough about geodesy to be saying “the spheroid, actually” right now, go read something more appropriate to your level of expertise 😉.) In other words, for a longitude and a latitude on Earth, a given projection gives you a corresponding x and y that you use to store and display the data.&lt;br /&gt;
&lt;br /&gt;
Typical georeferencing metadata says: (1) here is the projection of the data in this file, and (2) here is where the data in this file lies on the abstract 2D plane defined by that projection.&lt;br /&gt;
&lt;br /&gt;
You may also encounter data that is not projected in any strictly defined way. This might be as simple as a photo taken with a phone out a plane window. In theory you could define a projection for it if you knew parameters like the 3D GPS location of the phone, the angle it was pointed at, its camera’s field of view, and the small distortions introduced by its lens. But in practice it’s usually easier to find known points in the image and “tie down” or georeference the image based on those points. Given at least 3 but ideally more known points, you(r software) can warp the image into some standard projection. It’s deriving an arbitrary projection from pixel space to geographical coordinates by running a regression on the pixel-to-location pairs you provide. These known points are called ground control points, or GCPs. Some data, like Sentinel-1 SAR, is provided unprojected but with GCPs. This leaves more work for the user, but also more flexibility if you want to adjust the GCPs.&lt;br /&gt;
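&lt;br /&gt;
The simplest form of that regression is an affine fit; here’s a numpy sketch (the GCP coordinates are hypothetical, and real georeferencing tools use higher-order or rational polynomial models):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def fit_affine_from_gcps(pixel_xy, lonlat):
    """Least-squares affine transform from pixel coordinates to (lon, lat),
    fitted from ground control points. Needs at least 3 non-collinear GCPs;
    more points average out per-point error."""
    px = np.asarray(pixel_xy, float)
    geo = np.asarray(lonlat, float)
    A = np.column_stack([px, np.ones(len(px))])  # one [x, y, 1] row per GCP
    coeffs, *_ = np.linalg.lstsq(A, geo, rcond=None)
    return coeffs  # shape (3, 2): apply as [x, y, 1] @ coeffs

# three hypothetical GCPs on a north-up image
gcps_px = [(0, 0), (1000, 0), (0, 1000)]
gcps_geo = [(84.0, 28.6), (84.1, 28.6), (84.0, 28.5)]
coeffs = fit_affine_from_gcps(gcps_px, gcps_geo)
print(np.array([500, 500, 1]) @ coeffs)  # -> [84.05, 28.55]
```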
&lt;br /&gt;
There are several standard ways to represent projections, notably WKT, proj, and EPSG codes. We’ll give EPSG codes here.&lt;br /&gt;
&lt;br /&gt;
Probably the most common projection you will see for raw data is Universal Transverse Mercator, or UTM. It’s actually a family of projections with the same formula but different parameters, each adapted to a different meridional slice of Earth’s surface. These UTM zones are named with numbers and north/south hemispheres: Paris is in UTM zone 31N, Geneva is in 32N, and Sydney is in 56S. (If you’ve used the MGRS grid system, this should sound familiar, but it’s not identical.) Within a zone, UTM is very close to equal-area and conformal, which are the most important properties for a projection if you want to do analytical work. Equal-area means 1 km² is the same number of pixels at any point in the projection, and conformal means that 1 km is the same number of pixels in every direction from any given point within the projection. (On a non-conformal map, circles appear as ovals, squares are rectangles, etc. This is a massive pain in the ass.) UTM is EPSG:32XYY, where X is 6 for N and 7 for S, and YY is the zone number, so for example 13S is EPSG:32713.&lt;br /&gt;
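&lt;br /&gt;
The zone-to-EPSG rule can be written down directly; here’s a sketch (ignoring the standard grid exceptions around Norway and Svalbard):&lt;br /&gt;
&lt;br /&gt;
```python
def utm_epsg(lon, lat):
    """EPSG code for the UTM zone containing (lon, lat). Zones are 6 degrees
    wide, numbered eastward from the antimeridian."""
    zone = int((lon + 180) // 6) + 1
    if lat >= 0:
        return 32600 + zone  # northern hemisphere: EPSG:326YY
    return 32700 + zone      # southern hemisphere: EPSG:327YY

print(utm_epsg(2.35, 48.86))     # Paris  -> 32631 (zone 31N)
print(utm_epsg(151.21, -33.87))  # Sydney -> 32756 (zone 56S)
```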
&lt;br /&gt;
For display on standard web maps, people often use web Mercator, a.k.a. spherical Mercator, which is not equal-area at the global scale, but is conformal. This is why web maps make Greenland far too big, but it remains approximately the right shape. For local analysis, web Mercator is fine (equivalent to UTM, actually), and can be a decent choice if you understand the issues with scale across large areas. EPSG:3857.&lt;br /&gt;
&lt;br /&gt;
The other projection you’ll see the most is equirectangular or plate carrée, which uses longitude and latitude directly as x and y coordinates on a plane. It is neither equal-area nor conformal, and basically only exists because the math is easy. It’s often used by people who should know better. Its non-conformality means that any time you’re working near the poles, everything is squeezed, and you’re either overzooming one dimension, losing data in the other, or both. If you just want to scatterplot some points as quickly as possible, equirectangular is fine, but avoid it when doing anything with imagery. EPSG:4326. (Note that this is the EPSG of WGS84, the geodetic standard that defines things like the prime meridian. Many, many other projections refer to WGS84 in their definitions. But using WGS84 as a projection itself, instead of as an ingredient in a projection, is the equirectangular projection.)&lt;br /&gt;
&lt;br /&gt;
The details of projections are notoriously tricky; it’s hard to work with them in a strictly correct and optimal way at all times. It’s the kind of topic that attracts pedantry and flamewars, unfortunately. Here’s some advice, none of it ironclad:&lt;br /&gt;
&lt;br /&gt;
# Most imagery data, if it’s projected at all, is already in an appropriate projection as it arrives from the data provider. If you can, leave it as-is. Every reprojection involves resampling the data, which generally loses information.&lt;br /&gt;
# You should rarely have to explicitly think about projections. The whole point of a projection is to let you think in terms of pixels and/or meters. Make sure you’re taking full advantage of your tools’ ability to handle these things automatically; if you find yourself fighting your projection, something upstream is wrong.&lt;br /&gt;
# If you’re working on a multi-source project, choose a suitable projection at the start and project all data into it ''once'', when you import it.&lt;br /&gt;
# Most pain around projections comes from accidentally mixing projections. Don’t do that.&lt;br /&gt;
# The local UTM is usually a reasonable choice.&lt;br /&gt;
&lt;br /&gt;
=== Bundles ===&lt;br /&gt;
&lt;br /&gt;
Imagery is most often supplied in bundles, which are basically directories with image data files, usually separated by band or polarization (at least at level 1), and metadata files (XML, json, etc.). Some analysis tools will have plugins that will open specific types of bundles as single objects, automatically applying calibration data found in the metadata and so forth. In other situations you might open the image file and have to parse the metadata with your own code or by hand. If you’re getting to know a new imagery source, going through and understanding the purpose of everything delivered in a bundle is a great way to start.&lt;br /&gt;
&lt;br /&gt;
=== DN and PN ===&lt;br /&gt;
&lt;br /&gt;
Image formats generally store integers, since they losslessly compress better and are often easier to work with than floating point numbers. However, this presents a problem if, for example, the units being represented are reflectance, which ranges from 0 to 1. If we round every reflectance value to either 0 or 1, we’re delivering 1-bit data that’s probably close to totally useless. To address this, we might scale up to, say, 0 through 100 and say that instead of recording reflectance fraction, we’re recording reflectance percentage – fraction × 100. That still leaves us with less than 7 bits of radiometric resolution, though. Really, we’d like to be able to scale our values into an arbitrary range, maybe 0 through 65,535 to make full use of a 16-bit image, and send it with some metadata that tells how to get it back into some absolute or physically meaningful unit. You could even change the scaling factor per scene to optimize for bright v. dark, for example. And this is what providers generally do. The values actually stored in the image format are called digital numbers, or DN, and the values after scaling (typically with a multiplicative and an additive coefficient) are physical numbers, or PN.&lt;br /&gt;
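&lt;br /&gt;
A sketch of the scaling step, with made-up coefficients of the kind you would parse from bundle metadata:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Hypothetical per-band coefficients from a bundle's metadata.
gain, offset = 2.75e-05, -0.2   # multiplicative, additive

dn = np.array([[7273, 10909], [14545, 43636]], dtype=np.uint16)  # stored pixels
reflectance = dn.astype(np.float64) * gain + offset              # PN, unitless 0..1
```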
&lt;br /&gt;
Not all providers vary the scaling. For example, Sentinel-2 level 1C data has a globally constant scaling factor, which means different bands have a defined relationship even if you read raw, unscaled pixels out of them, which is great. Per-product scaling is the more common approach, though. Basically, don’t assume that pixels actually mean anything with an absolute definition, especially compared to pixels from another band or scene, unless you know that they’re PN.&lt;br /&gt;
&lt;br /&gt;
For most OSINT-relevant analysis, working in DN is a [https://en.wikipedia.org/wiki/Venial_sin venial sin] at worst and often justifiable. But it is useful to know what it means and to recognize situations where you should convert to PN. Any tool designed to work with remote sensing data will at least have some affordance for DN to PN scaling, and, again, may be able to parse the parameters out of a bundle (or in-image-file metadata) and apply them transparently so you never have to think about it.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2210</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2210"/>
		<updated>2022-07-20T22:37:16Z</updated>

		<summary type="html">&lt;p&gt;Vruba: /* Bundles */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this is the heritage of Cold War IMINT workers wanting shadows to estimate structure heights. (If you image around noon, you get places with vertical shadows in the tropics. This gives you depth perception problems, like you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all commercial satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to complete one full orbit – another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: about 400 km (250 mi) to either side of the ground track, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
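&lt;br /&gt;
The light trig, in flat-Earth form (ignoring Earth curvature, which stretches the true distance somewhat):&lt;br /&gt;
&lt;br /&gt;
```python
import math

def ground_offset_km(altitude_km, off_nadir_deg):
    """Flat-earth distance from the nadir point to the look point."""
    return altitude_km * math.tan(math.radians(off_nadir_deg))

# From 705 km up, a 30-degree roll reaches roughly 400 km to either side.
reach = ground_offset_km(705, 30)
```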
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimension(s) we think of most often is spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor” we understand this by default to mean spatial resolution. This is also called ground sample distance (GSD) or ground resolved distance (GRD), and is the dimensions of the pixels of the data. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length dimension: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. If we think about standard optical instruments, which are basically telescopes with CCDs, they do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – a fraction of the arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth at a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) and looking at nadir (straight down). That’s a best case. When looking to the side or at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked consideration on spatial resolution is that pixel area is the square of pixel side length, and it’s what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is more than 3× as clear as 30 m imagery, all else being equal.&lt;br /&gt;
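&lt;br /&gt;
The square relationship in code form:&lt;br /&gt;
&lt;br /&gt;
```python
def clarity_ratio(coarse_gsd_m, fine_gsd_m):
    """Pixel-count ratio between two GSDs over the same ground area."""
    return (coarse_gsd_m / fine_gsd_m) ** 2
```

So `clarity_ratio(30, 10)` is 9, and `clarity_ratio(1.0, 0.25)` is 16.&lt;br /&gt;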
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered less by the atmosphere than visible light, and blue light most of all. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
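&lt;br /&gt;
As a concrete example of putting NIR to work, here is the NDVI formula linked above, computed on toy reflectance values:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Toy per-pixel reflectances; healthy vegetation is dark in red, bright in NIR.
red = np.array([0.05, 0.30])   # [vegetation, bare soil]
nir = np.array([0.45, 0.35])

ndvi = (nir - red) / (nir + red)   # ranges from -1 to 1; vegetation scores high
```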
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These will be called the panchromatic or pan and (collectively) multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, for example, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits; newer and better ones are typically 12 to 14.&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks on the Sahara, despite the tracks being made out of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford a lot of the highest-resolution imagery. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
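&lt;br /&gt;
The 1/sqrt(n) behavior is easy to check numerically under the idealized assumption of independent, unit-variance noise in each band:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
# n bands of pure unit-standard-deviation noise over the same scene.
bands = rng.standard_normal((n, 100_000))

# Standard deviation of the band average; should approach 1/sqrt(n) = 0.25.
avg_noise_sd = bands.mean(axis=0).std()
```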
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding errors or aliasing: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of talking about effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is using air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh and aerosol scattering, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. Note that this range is equivalently 1.567 to 1.651 μm. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
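&lt;br /&gt;
For instance, treating multispectral pixels as vectors, the angle between a pixel and a reference spectrum can be sketched like this (illustrative, not any particular library’s implementation):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def spectral_angle(a, b):
    """Angle in radians between two spectra; a small angle means a similar
    spectral shape, regardless of overall brightness."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# A pixel twice as bright as the reference still has zero spectral angle.
ref = [0.1, 0.2, 0.4, 0.3]
```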
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue bands of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, and therefore you should favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
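&lt;br /&gt;
Operationally, a band combination is just a stacking order. A tiny numpy sketch, with hypothetical band numbers and one-pixel bands purely to show the channel mapping:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def false_color(bands, combo):
    """Stack sensor bands into the RGB channels of a display image. bands
    maps band number to a 2D array; combo like (9, 8, 7) puts band 9 in
    the red channel, band 8 in green, and band 7 in blue."""
    return np.dstack([bands[b] for b in combo])

# Hypothetical one-pixel bands, just to show the ordering.
bands = {7: np.array([[0.1]]), 8: np.array([[0.2]]), 9: np.array([[0.3]])}
rgb = false_color(bands, (9, 8, 7))
```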
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will be (typically) some multiple of 2 larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
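&lt;br /&gt;
The translucent-overlay intuition corresponds roughly to ratio-based methods such as Brovey pansharpening. A minimal numpy sketch, assuming the color bands have already been resampled onto the pan grid (this is a toy, not a production algorithm):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def brovey_pansharpen(rgb, pan, eps=1e-9):
    """Ratio-based ('Brovey-like') pansharpening: rescale each color band
    so the per-pixel mean intensity matches the pan band. rgb is (H, W, 3)
    and must already be resampled onto the pan grid; pan is (H, W)."""
    intensity = rgb.mean(axis=-1, keepdims=True)
    return rgb * pan[..., None] / (intensity + eps)

# Toy scene: flat color, brighter pan; the colors get rescaled to match.
rgb = np.full((2, 2, 3), 0.3)
pan = np.full((2, 2), 0.6)
sharp = brovey_pansharpen(rgb, pan)
```
&lt;br /&gt;
Note that the spatial detail all comes from the pan band; the color is stretched to fit, which is exactly the invented-information caveat discussed below the fold.&lt;br /&gt;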
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;br /&gt;
&lt;br /&gt;
==== Georeferencing and orthorectification ====&lt;br /&gt;
&lt;br /&gt;
''Much of this applies outside optical as well – move?''&lt;br /&gt;
&lt;br /&gt;
A raw satellite image of land is an angled view of a rough surface. (Even nominally nadir-pointing satellites acquire imagery that is off-nadir toward its edges.) If you imagine riding on a satellite and looking off to, say, the west, you will see the eastern sides of hills and buildings at flatter angles than you see the western sides – if you can see them at all. To turn a raw image into something that is projected orthographically, like a map, you have to use a terrain model – a 3D map of the planet’s surface. Then you can use information about where the satellite was and the angle its sensor was pointing, and for each pixel in the output image, you can project it out to see at what latitude and longitude it must have intersected the ground. Then you move all the pixels to their coordinates in some convenient projection, and you’ve essentially taken the image out of perspective and made it orthographic.&lt;br /&gt;
&lt;br /&gt;
Except:&lt;br /&gt;
&lt;br /&gt;
* Earth’s surface is rough at every scale, and even “porous” or multiply defined in the sense that there are features like leafless trees that make it hard to define where the optical surface actually ''is'' at any given scale.&lt;br /&gt;
* There is no perfectly [https://en.wikipedia.org/wiki/Accuracy_and_precision accurate, precise], global, completely up-to-date terrain model of the Earth, let alone at a reasonable price. SRTM is pretty good but it’s only about 30 m, stops short of the Arctic, and is 20+ years out of date: there are entire lakes, highway cuts, and reclaimed islands that don’t exist in it.&lt;br /&gt;
* Satellites typically only know where they’re pointing to within the equivalent of about 10 pixels (which, to be fair, is usually an extremely small fraction of a degree), so the pointing data can only narrow things down, not actually tell you where you are.&lt;br /&gt;
* Continental drift means that a continent can move by easily 1 px over the lifetime of a high-end commercial satellite; a major earthquake can discontinuously distort a small region by several m.&lt;br /&gt;
* To properly pin down an image (i.e., to check the reported pointing angle), you need to know the exact 3D location of 3 visible points within it, and realistically more like 10.&lt;br /&gt;
* All these errors can combine.&lt;br /&gt;
* No matter what, you can’t recover occluded features, i.e. things you can’t see in the original data. If you want a high-res satellite image of something like a canyon, you realistically need half a dozen images at very specific angles, which is extremely hard.&lt;br /&gt;
&lt;br /&gt;
We could go on! Georeferencing and orthorectification are difficult problems. They’re easier for lower-resolution satellites, because a given angular error comes out to fewer pixels. Also, survey-mode satellites like Landsat and Sentinel-2, which are nadir-pointing anyway, put a lot of effort into doing this well. Two Landsat scenes will almost always coregister to well within a pixel. Sentinel-2 is a little less reliable, especially toward the poles. Commercial imagery is often displaced by far more than you would think. One way to see this is to step back in Google Earth Pro’s history tool, especially somewhere relatively remote and rugged.&lt;br /&gt;
&lt;br /&gt;
Here’s a farm in Nepal: 28.553, 84.2415. Just step back in time and watch it jump around underneath the pin. If you really want to be scared, watch the cliff to its north. This is why imagery analysts who understand imagery pipelines rarely use a whole lot of significant digits in their coordinates! You don’t really know where anything on Earth is, in absolute terms, to better than a few meters at best if all you have to go on is a satellite image.&lt;br /&gt;
&lt;br /&gt;
==== Atmospheric correction ====&lt;br /&gt;
&lt;br /&gt;
Over long distances, even in clear weather, the atmosphere scatters and absorbs light. This is why distant hills are low-contrast and blueish (blue light is scattered more). What a satellite actually measures is called top-of-atmosphere radiance, or TOA. This is a measurement of nothing more than the amount of energy received per second, per pixel, per band. It can be measured pretty objectively. However, it’s often not what you want. For one thing, it’s too blue. For another, the amount of blueness and related effects will vary semi-randomly with atmospheric conditions (humidity, maybe dust storms or wildfire smoke, etc.) and predictably with season (sun distance and angle).&lt;br /&gt;
&lt;br /&gt;
Therefore, a reasonable desire is to basically normalize the sun and remove the effects of the atmosphere. What we’re trying to get to here is called surface reflectance (SR). The main issue is that we don’t know the true state of the atmosphere at the moment the image was acquired. The best we can do is to model it and subtract it out. This is one of ''the'' problems in remote sensing, and you could earn a PhD by improving [https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes#Table_of_models one of the major models] by a few percent.&lt;br /&gt;
&lt;br /&gt;
The good news is there’s a brutally simple method that works pretty well most of the time. Dark object subtraction means assuming that the darkest pixel in the image should be pure black. Therefore, if you subtract out however much blue (and green, and so on) signal is present in the darkest pixel, you will have canceled out all the haze. It’s annoying how well this works considering how basic it is. It’s roughly equivalent to the automatic contrast adjustment tool in an image editor like Photoshop, or, to be a little more exact, like using the eyedropper in the Levels tool to set the black point to the darkest pixel.&lt;br /&gt;
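&lt;br /&gt;
Dark object subtraction is short enough to write out in full (numpy; the two-band scene is invented, with a constant haze offset baked into each band):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def dark_object_subtraction(bands):
    """bands has shape (n_bands, H, W). Subtract each band's darkest pixel
    value, on the assumption that it sits over an object that should be
    pure black, and clip at zero."""
    dark = bands.min(axis=(1, 2), keepdims=True)
    return np.clip(bands - dark, 0.0, None)

# Invented 2-band scene; band 0 carries a haze offset of 0.08.
scene = np.array([[[0.08, 0.30], [0.50, 0.12]],
                  [[0.02, 0.20], [0.40, 0.05]]])
corrected = dark_object_subtraction(scene)
```
&lt;br /&gt;
This is exactly the Levels-eyedropper move described above, done per band in code.&lt;br /&gt;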
&lt;br /&gt;
Correction to reflectance may or may not attempt to correct for terrain effects (i.e., relighting the scene). Different pipelines have different conventions for how far to correct or what to call different kinds of correction.&lt;br /&gt;
&lt;br /&gt;
Atmospheric correction is usually not key for OSINT purposes, but any time you find yourself taking exact measurements of pixel values, you should at least know whether you’re working in TOA or in SR, and if SR, you should have a sense of what the pipeline was.&lt;br /&gt;
&lt;br /&gt;
==== Common optical sensor types ====&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# Pushbroom&lt;br /&gt;
# Whiskbroom&lt;br /&gt;
# Full-frame&lt;br /&gt;
&lt;br /&gt;
=== Thermal ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
=== Synthetic aperture radar ===&lt;br /&gt;
&lt;br /&gt;
Synthetic aperture radar, or SAR, creates images with radio waves in wavelengths around 1 cm to 1 m.&lt;br /&gt;
&lt;br /&gt;
As a very first approximation, a SAR image is comparable to an optical image that shows objects that reflect radio waves instead of those that reflect visible light.&lt;br /&gt;
&lt;br /&gt;
==== SAR is not just a regular camera but for radar instead of light ====&lt;br /&gt;
&lt;br /&gt;
Beyond the fact that both modalities create images, SAR works on completely different principles from standard optical imaging, and understanding it requires understanding those principles.&lt;br /&gt;
&lt;br /&gt;
This page will only lightly outline how SAR works; for the math, please refer to SERVIR’s [https://servirglobal.net/Global/Articles/Article/2674/sar-handbook-comprehensive-methodologies-for-forest-monitoring-and-biomass-estimation SAR Handbook] (forest-oriented but with solid fundamentals), the NOAA/NESDIS [https://www.sarusersmanual.com/ Synthetic Aperture Radar Marine User’s Manual], or another good text. Here we will only point out some key ideas. If you want to get full value out of SAR, you should expect to invest at least a few hours in learning how it actually works. There’s a reason SAR experts tend to be a bit snobbish about it: it’s complex, subtle, and highly rewarding.&lt;br /&gt;
&lt;br /&gt;
==== SAR is active ====&lt;br /&gt;
&lt;br /&gt;
Optical sensors are almost all passive: they use energy that objects are already reflecting (usually from the sun) or producing (for example, in the thermal infrared). In contrast, SAR is active: it sends out a pulse of radar energy, roughly analogous to the flash on a camera.&lt;br /&gt;
&lt;br /&gt;
==== SAR resolves space with time, not with focus ====&lt;br /&gt;
&lt;br /&gt;
SAR’s resolution is based on the timing of returning signals. It does not pass the energy it senses through a focusing lens or mirror the way an optical sensor does. This leads to properties that are highly unintuitive if you think of it as merely “optical but in a different frequency” – for example, it does not lose resolution with distance; there is no exact equivalent of perspective. SAR is more like side-scan sonar than like an everyday camera.&lt;br /&gt;
&lt;br /&gt;
==== Speckle ====&lt;br /&gt;
&lt;br /&gt;
Like a laser beam, a SAR signal interferes with itself. At a given moment and a given point, its waves may be canceling out or adding up. This means that a SAR image is intrinsically grainy or stippled-looking. This is not the same as sensor noise, because the effect is physically real and not a problem of errors in measurement. It can be mitigated by downsampling, averaging images from different “pings”, or applying despeckling filters. (A simple local median works reasonably well, but there’s a range of sophistication all the way up to sensor-specific filters based on physical models, extra inputs, fancy machine learning, etc.)&lt;br /&gt;
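&lt;br /&gt;
For intuition, here is the simple local median mentioned above, as a naive pure-numpy sketch (real despeckling filters are considerably smarter):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def median3x3(img):
    """Naive 3x3 median filter, leaving edge pixels unchanged. A minimal
    despeckling sketch, nowhere near a production filter."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i-1:i+2, j-1:j+2])
    return out

# Uniform backscatter with one isolated bright speckle spike.
img = np.full((5, 5), 10.0)
img[2, 2] = 100.0
print(median3x3(img)[2, 2])  # the spike is replaced by the local median, 10.0
```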
&lt;br /&gt;
==== Retroreflection and multiple reflection ====&lt;br /&gt;
&lt;br /&gt;
One consequence of SAR being active sensing is that it sees very bright returns from concave right angles made out of metal, which act as [https://en.wikipedia.org/wiki/Corner_reflector corner reflectors]. (Notice how road signs and markers seem to glow disproportionately in headlights – it’s because those are [https://en.wikipedia.org/wiki/Retroreflector retroreflectors] in the optical range.) Highly developed cities, for example, are very retroreflective to radar. This shows up especially where the angle of the sensor’s view aligns to a street grid, when it’s called the cardinal effect. (See, for example, [https://www.mdpi.com/2072-4292/12/7/1187/htm this academic paper], where they propose using retroreflection specifically to classify urban landcover. In general, there are very few radio-frequency corner reflectors in nature, and retroreflection is a good sign that you’re looking at a building, vehicle, etc.)&lt;br /&gt;
&lt;br /&gt;
Where the reflection is separated enough from the first reflecting surface that you can see both independently, we use the term multiple reflection (or mirroring or ghosting). This most often happens where tall buildings or bridges are next to or over water. A radio wave may hit the water, then a bridge, then return to the sensor; another may hit a bridge, then the water, and return to the sensor, and so on, and you’ll see images of multiple bridges.&lt;br /&gt;
&lt;br /&gt;
==== Layover and shadowing ====&lt;br /&gt;
&lt;br /&gt;
Layover (a.k.a. relief displacement) is an effect that makes objects at higher elevations appear closer to the sensor. This happens because the radio waves from the top of a vertical object arrive back at the sensor (which is above and to the side of the object) before the radio waves from its base. This is most obvious with truly vertical objects like radio towers and skyscrapers, but surfaces that have any vertical component (hills, for example) will show some degree of layover. Ultimately, layover comes from the difference between slant range, which is what the sensor actually measures – distance from the sensor – and ground range, which is what we tend to intuitively want or expect when we look at a map-like image.&lt;br /&gt;
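&lt;br /&gt;
You can see where layover comes from with a few lines of slant-range arithmetic (all numbers invented: a side-looking sensor at altitude H imaging a tower of height h at horizontal distance g):&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Toy side-looking geometry, all meters, numbers invented.
H, g, h = 700_000.0, 300_000.0, 300.0
slant_base = math.hypot(H, g)      # distance from sensor to the tower's base
slant_top = math.hypot(H - h, g)   # distance from sensor to the tower's top
# The top is closer in slant range, so it is imaged nearer the sensor:
print(slant_base - slant_top)      # positive, i.e. the tower lays over
```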
&lt;br /&gt;
The painfully counterintuitive aspect, if you’re looking at a SAR image as if it were an ordinary optical image, is that layover goes in the opposite direction – buildings, for example, lean toward the sensor. For example, if you take a normal photo of a tall building from the south, it will cover the ground to its north. This feels normal because cameras, telescopes, etc., work on the same basic principle as the eye. But if you collect a SAR image of the same building from the south, it will cover the ground to its south. (Also, it won’t actually mask that ground, it will just add its signal in.)&lt;br /&gt;
&lt;br /&gt;
Shadowing is the lack of data returned from surfaces facing away from the sensor. The shadowed side of terrain is stretched out as part of layover.&lt;br /&gt;
&lt;br /&gt;
SAR imagery can be terrain corrected. Basically, this is a process that uses (1) the satellite’s position and the characteristics of its instrument and (2) a DEM or other model of the terrain it was looking at, and uses these to warp the SAR imagery into map coordinates and account for shadowing. Whether this is worthwhile will depend on the quality of the terrain correction algorithm and the data you can give it, and on what you need to analyze.&lt;br /&gt;
&lt;br /&gt;
In general, be cautious with terrain correction, because it can never fully correct for all effects (e.g., the BRDF of different landcovers), and it can magnify small problems in input data. Sometimes it’s better to have a strange-looking image that you know how to interpret than a “normalized” one with subtle errors.&lt;br /&gt;
&lt;br /&gt;
==== Clouds and many other materials are generally transparent to SAR ====&lt;br /&gt;
&lt;br /&gt;
SAR frequencies are typically chosen to cut through weather. While this is a massive advantage of SAR over optical (the average place on Earth is cloudy roughly half the time), it’s also not absolute. Heavy rain, for example, can show up as ghostly features in some bands, so be on the lookout for it. If you see something you can’t interpret that might be weather-related, check the weather for the place at the time of image acquisition!&lt;br /&gt;
&lt;br /&gt;
More generally – beyond the specific case of water vapor in air – SAR interacts with materials differently than light does. For example, it reflects more off liquid water, so you can’t see into shallows with SAR the way you can with optical. On the other hand, it interacts less with certain very dry materials, so it can cut through loose sand, dead vegetation, and so on. (For example, SAR is used to map ancient river systems under the Sahara [https://www.mdpi.com/2073-4441/9/3/194/htm because it can image bedrock under loose, dry sand].) The details of SAR signal interaction depend on wavelength, angle, and other factors; if you’re doing more than casual interpretation of data from a given sensor, it’s a good idea to look it up and familiarize yourself.&lt;br /&gt;
&lt;br /&gt;
==== Polarimetry and interferometry ====&lt;br /&gt;
&lt;br /&gt;
Thus far we have only considered backscatter images: maps of the intensity of reflected radio energy. But a good deal of SAR’s value is beyond this kind of data. As well as recording how much energy is in reflected radio waves, SAR sensors characterize the radio waves themselves.&lt;br /&gt;
&lt;br /&gt;
Let’s use Sentinel-1 as an example for polarimetry. S1 sends radio waves in the vertical polarization, abbreviated V, and records them in both vertical and horizontal, or H, polarizations. In practice, this means that when you download an S1 frame in the usual way, you see two images, labeled VV (where the sensor transmitted V and measured V) and VH (where it transmitted V and measured H). The ratio of the two bands therefore tells you (in a general, statistical way, within the constraints of speckling) how much the surface at a given pixel tends to return a radio signal at that frequency and angle in the same polarization.&lt;br /&gt;
&lt;br /&gt;
Why do we care? Because direct reflection and corner reflectors tend to return waves at the same polarization (for Sentinel-1, always VV), while volumes that scatter waves return proportionally more cross-polarized (VH) waves. The second category is mainly vegetation and soil, while the first is corner reflectors, metal, and so on – proportionally more artificial surfaces. You can literally get a PhD in the nuances of SAR polarimetry, but at the most basic level, it tells you something about surface properties that no other sensor would.&lt;br /&gt;
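&lt;br /&gt;
A common way to look at this is the co-/cross-polarization ratio in decibels, which is a one-liner on linear backscatter values (toy numbers below):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def ratio_db(vv, vh, eps=1e-12):
    """Co-/cross-polarization ratio in decibels, computed on linear
    (not already-dB) backscatter values."""
    return 10.0 * np.log10((vv + eps) / (vh + eps))

# Toy pixels: a corner-reflector-like return (VV much stronger than VH)
# and a vegetation-like return (similar VV and VH).
vv = np.array([0.1, 0.01])
vh = np.array([0.01, 0.01])
print(ratio_db(vv, vh))  # roughly [10, 0] dB
```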
&lt;br /&gt;
Interferometry with SAR, or inSAR, compares wave phase between observations. The phase of a wave is where it is in its cycle when received. Using sound as an example, measuring the phase of a sound at a given moment means knowing not just its volume and pitch but also where the wave is in its cycle – say, 23% of the way into its high-pressure half, or exactly at the lowest-pressure point.&lt;br /&gt;
&lt;br /&gt;
Suppose we make a SAR image of an area and record not only the amplitude but also the phase of the signal at every pixel. Now, after some time, the satellite’s orbit repeats, and at exactly the same moment in this new orbit (and therefore at exactly the same point in space relative to Earth), we take the same image again. There’s been some change over time that might represent, say, the soil drying out, a road being built, or a tree falling over. But the change in phase over relatively large, coherent regions can be interpreted as the surface getting nearer or farther away by (potentially) very small fractions of a wavelength – on the order of cm. This is an idealized version of inSAR.&lt;br /&gt;
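&lt;br /&gt;
The idealized arithmetic is simple: because the signal travels out and back, one full phase cycle corresponds to half a wavelength of line-of-sight displacement. A sketch, using an approximate C-band wavelength and ignoring unwrapping, atmosphere, and sign conventions (which vary by processor):&lt;br /&gt;
&lt;br /&gt;
```python
import math

WAVELENGTH_M = 0.055  # roughly C-band (e.g. Sentinel-1); an assumed value

def los_displacement(phase_diff_rad, wavelength_m=WAVELENGTH_M):
    """Idealized line-of-sight displacement from an unwrapped phase
    difference: the two-way path means one full 2*pi cycle corresponds
    to half a wavelength."""
    return phase_diff_rad * wavelength_m / (4.0 * math.pi)

print(los_displacement(2 * math.pi))  # one fringe: half a wavelength, 0.0275 m
```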
&lt;br /&gt;
Geologists use this to [https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021GL093043 map earthquakes], but you can also use it for drought (because dry land sags), [https://site.tre-altamira.com/long-term-satellite-study-over-the-london-basin/ tunneling], [https://www.researchgate.net/figure/InSAR-measured-subsidence-rates-on-the-Mosul-dam-Iraq-Negative-values-indicate-motion_fig1_311451580 dam] and [https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-1/Satellites_confirm_sinking_of_San_Francisco_tower building] subsidence, [https://www.nature.com/articles/s41598-020-74957-2 underground explosion monitoring], and so on – in theory, anything that changes the distance between the satellite and the surface. You can even use decoherence (the breakdown of continuity between observations, which makes inSAR hard) for [https://www.academia.edu/44939771/Damage_detection_using_SAR_coherence_statistical_analysis_application_to_Beirut_Lebanon damage detection].&lt;br /&gt;
&lt;br /&gt;
When inSAR works, it’s like magic. You can pick up extremely subtle effects over large areas. It does have limits, like that you can only measure displacement towards or away from the satellite(s), which for SAR is always at least somewhat to the side, which is not necessarily in the direction you actually care about (say, up/down). And as you would expect, it tends to require a lot of very good data (because, for example, satellite orbits are never absolutely perfect repeats), expertise, and minutes to days of fine-tuning.&lt;br /&gt;
&lt;br /&gt;
=== LIDAR ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# 3D (survey-style) lidar&lt;br /&gt;
# 2D (transect-style) lidar&lt;br /&gt;
&lt;br /&gt;
== Image delivery ==&lt;br /&gt;
&lt;br /&gt;
=== Processing levels ===&lt;br /&gt;
&lt;br /&gt;
In theory, data processing levels are standard across the industry. In practice, different providers tend to make up their own definitions as necessary, and you should refer to source-specific documentation. But typically, the commonly seen processing levels are:&lt;br /&gt;
&lt;br /&gt;
'''Level 0''': Unprocessed data, more or less as downlinked to the ground station. Generally not sold or publicly released.&lt;br /&gt;
&lt;br /&gt;
'''Level 1''': Basic data in sensor units, for example TOA radiance. Often has a letter suffix with source-specific meaning, e.g., T to indicate a terrain-corrected version.&lt;br /&gt;
&lt;br /&gt;
'''Level 2''': Derived data in geophysical units, for example surface reflectance. Has been through high-level processing (e.g., atmospheric correction) that contains estimation or modeling.&lt;br /&gt;
&lt;br /&gt;
As a rule of thumb, use level 1 if the imagery itself is the focus and you want to analyze the data in a custom way; use level 2 if you just want something that works out of the box to achieve some further goal. But again, the practical meaning of the levels depends on the dataset, so check to make sure you’re getting what you want.&lt;br /&gt;
&lt;br /&gt;
=== Formats and projections ===&lt;br /&gt;
&lt;br /&gt;
Image data generally comes in formats optimized for large payloads and good metadata. These include NetCDF, NITF, JPEG2000, and GeoTIFF. GDAL, which is included in QGIS, can read virtually any reasonable format. If you have a choice, GeoTIFF is usually the best.&lt;br /&gt;
&lt;br /&gt;
A geographic projection is basically an invertible function (a reversible, one-to-one relationship) from a sphere to a plane. (If you know enough about geodesy to be saying “the spheroid, actually” right now, go read something more appropriate to your level of expertise 😉.) In other words, for a longitude and a latitude on Earth, a given projection gives you a corresponding x and y that you use to store and display the data.&lt;br /&gt;
&lt;br /&gt;
Typical georeferencing metadata says: (1) here is the projection of the data in this file, and (2) here is where the data in this file lies on the abstract 2D plane defined by that projection.&lt;br /&gt;
&lt;br /&gt;
You may also encounter data that is not projected in any strictly defined way. This might be as simple as a photo taken with a phone out a plane window. In theory you could define a projection for it if you knew parameters like the 3D GPS location of the phone, the angle it was pointed at, its camera’s field of view, and the small distortions introduced by its lens. But in practice it’s usually easier to find known points in the image and “tie down” or georeference the image based on those points. Given at least 3 but ideally more known points, you(r software) can warp the image into some standard projection. It’s deriving an arbitrary projection from pixel space to geographical coordinates by running a regression on the pixel-to-location pairs you provide. These known points are called ground control points, or GCPs. Some data, like Sentinel-1 SAR, is provided unprojected but with GCPs. This leaves more work for the user, but also more flexibility if you want to adjust the GCPs.&lt;br /&gt;
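&lt;br /&gt;
At its simplest, tying down an image with GCPs is a least-squares regression. Here is a minimal affine version in numpy (the GCPs are hypothetical; real pipelines often use higher-order or rational polynomial models):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def fit_affine(pixel_xy, geo_xy):
    """Least-squares affine transform from pixel to map coordinates,
    fitted to ground control points. Needs at least three non-collinear
    GCPs; more points average down the error."""
    px = np.asarray(pixel_xy, dtype=float)
    A = np.column_stack([px, np.ones(len(px))])  # rows of [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(geo_xy, dtype=float), rcond=None)
    return coeffs  # 3x2 matrix; apply as [x, y, 1] @ coeffs

# Hypothetical GCPs on a north-up grid: pixel (col, row) to (lon, lat).
pix = [(0, 0), (100, 0), (0, 100), (100, 100)]
geo = [(30.0, 10.0), (30.1, 10.0), (30.0, 9.9), (30.1, 9.9)]
M = fit_affine(pix, geo)
lonlat = np.array([50.0, 50.0, 1.0]) @ M  # the scene center, about (30.05, 9.95)
```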
&lt;br /&gt;
There are several standard ways to represent projections, notably WKT, proj strings, and EPSG codes. We’ll give EPSG codes here.&lt;br /&gt;
&lt;br /&gt;
Probably the most common projection you will see for raw data is Universal Transverse Mercator, or UTM. It’s actually a family of projections with the same formula but different parameters, each adapted to a different meridional slice of Earth’s surface. These UTM zones are named with numbers and north/south hemispheres: Paris is in UTM zone 31N, Geneva is in 32N, and Sydney is in 56S. (If you’ve used the MGRS grid system, this should sound familiar, but it’s not identical.) Within a zone, UTM is very close to equal-area and conformal, which are the most important properties for a projection if you want to do analytical work. Equal-area means 1 km² is the same number of pixels at any point in the projection, and conformal means that 1 km is the same number of pixels in every direction from any given point within the projection. (On a non-conformal map, circles appear as ovals, squares are rectangles, etc. This is a massive pain in the ass.) UTM is EPSG:32XYY, where X is 6 for N and 7 for S, and YY is the zone number, so for example 13S is EPSG:32713.&lt;br /&gt;
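&lt;br /&gt;
The zone-number arithmetic is simple enough to sketch (this ignores the Norway/Svalbard special cases defined by the related grid conventions):&lt;br /&gt;
&lt;br /&gt;
```python
def utm_epsg(lon, lat):
    """EPSG code for the WGS84 UTM zone containing (lon, lat): zones are
    6-degree slices numbered from the antimeridian, 326YY north, 327YY south."""
    zone = int((lon + 180.0) // 6) + 1
    return (32600 if lat >= 0 else 32700) + zone

print(utm_epsg(2.35, 48.85))     # Paris: 32631 (zone 31N)
print(utm_epsg(151.2, -33.87))   # Sydney: 32756 (zone 56S)
```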
&lt;br /&gt;
For display on standard web maps, people often use web Mercator, a.k.a. spherical Mercator, which is not equal-area at the global scale, but is conformal. This is why web maps make Greenland far too big, but it remains approximately the right shape. For local analysis, web Mercator is fine (locally it behaves much like UTM), and can be a decent choice if you understand the issues with scale across large areas. EPSG:3857.&lt;br /&gt;
&lt;br /&gt;
The other projection you’ll see the most is equirectangular or plate carrée, which uses longitude and latitude directly as x and y coordinates on a plane. It is neither equal-area nor conformal, and basically only exists because the math is easy. It’s often used by people who should know better. Its non-conformality means that any time you’re working near the poles, everything is squeezed, and you’re either overzooming one dimension, losing data in the other, or both. If you just want to scatterplot some points as quickly as possible, equirectangular is fine, but avoid it when doing anything with imagery. EPSG:4326. (Note that this is the EPSG of WGS84, the geodetic standard that defines things like the prime meridian. Many, many other projections refer to WGS84 in their definitions. But using WGS84 as a projection itself, instead of as an ingredient in a projection, is the equirectangular projection.)&lt;br /&gt;
&lt;br /&gt;
The details of projections are notoriously tricky; it’s hard to work with them in a strictly correct and optimal way at all times. It’s the kind of topic that attracts pedantry and flamewars, unfortunately. Here’s some advice, none of it ironclad:&lt;br /&gt;
&lt;br /&gt;
# Most imagery data, if it’s projected at all, is already in an appropriate projection as it arrives from the data provider. If you can, leave it as-is. Every reprojection involves resampling the data, which generally loses information.&lt;br /&gt;
# You should rarely have to explicitly think about projections. The whole point of a projection is to let you think in terms of pixels and/or meters, and if that’s not happening, something is wrong. Make sure you’re taking full advantage of your tools’ ability to handle these things automatically. Fighting your projection means something is wrong.&lt;br /&gt;
# If you’re working on a multi-source project, choose a suitable projection at the start and project all data into it ''once'', when you import it.&lt;br /&gt;
# Most pain around projections comes from accidentally mixing projections. Don’t do that.&lt;br /&gt;
# The local UTM is usually a reasonable choice.&lt;br /&gt;
&lt;br /&gt;
=== Bundles ===&lt;br /&gt;
&lt;br /&gt;
Imagery is most often supplied in bundles, which are basically directories with image data files, usually separated by band or polarization (at least at level 1), and metadata files (XML, json, etc.). Some analysis tools will have plugins that will open specific types of bundles as single objects, automatically applying calibration data found in the metadata and so forth. In other situations you might open the image file and have to parse the metadata with your own code or by hand. If you’re getting to know a new imagery source, going through and understanding the purpose of everything delivered in a bundle is a great way to start.&lt;br /&gt;
&lt;br /&gt;
=== DN and PN ===&lt;br /&gt;
&lt;br /&gt;
Image formats generally store integers, since they losslessly compress better and are often easier to work with than floating point numbers. However, this presents a problem if, for example, the units being represented are reflectance, which ranges from 0 to 1. If we round every reflectance value to either 0 or 1, we’re delivering 1-bit data that’s probably close to totally useless. To address this, we might scale up to, say, 0 through 100 and say that instead of recording reflectance fraction, we’re recording reflectance percentage – fraction × 100. That still leaves us with less than 7 bits of radiometric resolution, though. Really, we’d like to be able to scale our values into an arbitrary range, maybe 0 through 65,535 to make full use of a 16-bit image, and send it with some metadata that tells how to get it back into some absolute or physically meaningful unit. You could even change the scaling factor per scene to optimize for bright v. dark, for example. And this is what providers generally do. The values actually stored in the image format are called digital numbers, or DN, and the values after scaling (typically with a multiplicative and an additive coefficient) are physical numbers, or PN.&lt;br /&gt;
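&lt;br /&gt;
The scaling itself is just an affine transform per band; the gain and offset below are made-up stand-ins for whatever the scene metadata actually specifies:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def dn_to_pn(dn, gain, offset):
    """Turn stored digital numbers into physical numbers (e.g. reflectance)
    using the provider's per-band scaling coefficients."""
    return dn.astype(np.float64) * gain + offset

# Made-up coefficients standing in for whatever the metadata specifies.
dn = np.array([0, 10000, 65535], dtype=np.uint16)
print(dn_to_pn(dn, gain=2.0e-5, offset=-0.1))  # roughly [-0.1, 0.1, 1.21]
```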
&lt;br /&gt;
Not all providers scale per scene. Sentinel-2 level 1C data, for example, uses a globally constant scaling factor, which means different bands have a defined relationship even if you read raw, unscaled pixels out of them – a real convenience. Per-scene scaling is the more common approach, though. Basically, don’t assume that pixels actually mean anything with an absolute definition, especially compared to pixels from another band or scene, unless you know that they’re PN.&lt;br /&gt;
&lt;br /&gt;
For most OSINT-relevant analysis, working in DN is a [https://en.wikipedia.org/wiki/Venial_sin venial sin] at worst and often justifiable. But it is useful to know what it means and to recognize situations where you should convert to PN. Any tool designed to work with remote sensing data will at least have some affordance for DN to PN scaling, and, again, may be able to parse the parameters out of a bundle (or in-image-file metadata) and apply them transparently so you never have to think about it.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2209</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2209"/>
		<updated>2022-07-20T22:20:53Z</updated>

		<summary type="html">&lt;p&gt;Vruba: /* Sar resolves space with time, not with focus */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this timing is the heritage of Cold War IMINT workers wanting shadows to estimate structure heights. (If you image around noon, you get vertical shadows in the tropics, which give you depth perception problems – like walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to complete one full orbit – another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
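&lt;br /&gt;
The desktop-globe comparison in the altitude bullet above is easy to verify with a line of arithmetic (Earth radius and globe size are round numbers here):&lt;br /&gt;
&lt;br /&gt;
```python
# Scale-model check: Earth's ~6,371 km radius mapped to a 30 cm globe
# (150 mm radius), with Landsat 9's 705 km altitude scaled to match.
EARTH_RADIUS_KM = 6371.0
GLOBE_RADIUS_MM = 150.0

orbit_mm = 705.0 / EARTH_RADIUS_KM * GLOBE_RADIUS_MM  # about 17 mm
```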
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: about 400 km or 250 mi to either side of the ground track, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
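&lt;br /&gt;
The “light trig” is worth making explicit. Under a flat-Earth simplification (no curvature, no terrain), the ground distance reached at a given off-nadir angle is just altitude × tan(angle):&lt;br /&gt;
&lt;br /&gt;
```python
import math

ALTITUDE_KM = 705.0  # Landsat 9's altitude, as a worked example

def ground_reach_km(off_nadir_deg, altitude_km=ALTITUDE_KM):
    """Flat-Earth distance from the ground track reachable at a given
    off-nadir look angle (ignores Earth curvature and terrain)."""
    return altitude_km * math.tan(math.radians(off_nadir_deg))

reach = ground_reach_km(30.0)  # roughly 400 km to either side of the track
```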
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimension(s) we think of most often is spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
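&lt;br /&gt;
A linear mixing model makes the napkin example concrete: a pixel’s value is roughly the area-weighted average of what’s inside it, so a small bright target raises the whole pixel. The reflectance values below are assumed round numbers, not measurements:&lt;br /&gt;
&lt;br /&gt;
```python
def mixed_pixel(target_refl, background_refl, area_fraction):
    """Reflectance of a pixel partly covered by a small target, under a
    simple linear (area-weighted) mixing model."""
    return area_fraction * target_refl + (1 - area_fraction) * background_refl

# Best case: a 10 cm napkin entirely inside one 25 cm pixel.
frac = (0.10 / 0.25) ** 2              # 16% of the pixel's area
pixel = mixed_pixel(0.90, 0.05, frac)  # bright napkin on dark asphalt
contrast = pixel / 0.05                # vs. a pure-asphalt neighbor: ~3.7x
```
&lt;br /&gt;
The mixed pixel comes out several times brighter than its neighbors, which is why good radiometric resolution lets you find sub-pixel targets.&lt;br /&gt;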
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor”, we understand this by default to mean spatial resolution. This is also called ground sample distance (GSD) or ground resolved distance (GRD), and refers to the dimensions of the pixels of the data. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. Standard optical instruments, which are basically telescopes with CCDs, do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – the fraction of arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth from a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) looking at nadir (straight down). That’s a best case. When looking to the side, or at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
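&lt;br /&gt;
A rough flat-Earth sketch of how the footprint grows off-nadir: the slant range grows as 1/cos(θ), and the projection onto the sloped-away ground adds another 1/cos(θ) in the look direction. This ignores curvature and terrain, so treat it as a lower bound on the distortion:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def footprint_growth(off_nadir_deg):
    """Approximate flat-Earth growth factors for a pixel's ground
    footprint at a given off-nadir angle, relative to nadir:
    cross-look grows as 1/cos(theta) (slant range only), along-look
    as 1/cos(theta)**2 (slant range plus ground projection)."""
    c = math.cos(math.radians(off_nadir_deg))
    return 1.0 / c, 1.0 / (c * c)

cross, along = footprint_growth(30.0)  # ~1.15x and ~1.33x at 30 degrees
```
&lt;br /&gt;
So nominally “50 cm” imagery collected 30° off-nadir has pixels closer to 58 × 67 cm on the ground, even before terrain enters the picture.&lt;br /&gt;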
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked point about spatial resolution is that pixel area – the square of pixel side length – is what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is more than 3× as clear as 30 m imagery, all else being equal.&lt;br /&gt;
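&lt;br /&gt;
The arithmetic is quick to check:&lt;br /&gt;
&lt;br /&gt;
```python
def pixels_per_square_meter(gsd_m):
    """Pixel count covering one square meter of ground, assuming square
    pixels of the given ground sample distance (in meters)."""
    return 1.0 / (gsd_m ** 2)

at_1m = pixels_per_square_meter(1.0)     # 1 pixel
at_50cm = pixels_per_square_meter(0.5)   # 4 pixels, not 2
at_25cm = pixels_per_square_meter(0.25)  # 16 pixels
ratio_10_vs_30 = pixels_per_square_meter(10) / pixels_per_square_meter(30)  # 9x
```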
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered less by the atmosphere than visible light, and much less than blue light in particular. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These will be called the panchromatic or pan and (collectively) multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, for example, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits; newer and better ones are typically 12 to 14.&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks on the Sahara, despite the tracks being made out of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford a lot of highest spatial resolution imagery. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
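&lt;br /&gt;
A quick Monte Carlo check of the 1/sqrt(n) rule, under the idealizing assumption of independent, unit-standard-deviation noise:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
# n "bands" of pure unit-standard-deviation noise, many pixels each.
bands = rng.standard_normal((n, 100_000))
avg = bands.mean(axis=0)
measured = avg.std()  # close to 1/sqrt(16) = 0.25
```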
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding error: output imprecision due to the inability to represent all possible values of the real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of talking about effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
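&lt;br /&gt;
Quantization noise has a standard closed form: rounding to steps of size q produces errors roughly uniform on [−q/2, q/2], with standard deviation q/sqrt(12). Fewer bits mean a larger q and more of this noise. A quick numerical check:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.uniform(0.0, 1.0, 200_000)  # "true" values in 0..1

q = 1.0 / 255.0                    # step size of an 8-bit encoding
quantized = np.round(signal / q) * q
err_std = (quantized - signal).std()
predicted = q / np.sqrt(12.0)      # the textbook q/sqrt(12)
```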
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is using air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh scattering from aerosols, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. Note that this range is 1.567 to 1.651 μm. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
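&lt;br /&gt;
As a minimal band-math example, NDVI is one arithmetic line once you have calibrated (PN) red and NIR bands. The reflectance values below are assumed, illustrative numbers:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red).
    eps guards against division by zero on dark pixels."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)

veg = ndvi(0.45, 0.05)   # healthy vegetation: bright NIR, dark red -> ~0.8
soil = ndvi(0.30, 0.20)  # bare soil: flatter spectrum -> ~0.2
```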
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue bands of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, and therefore you should favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will be (typically) some multiple of 2 larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
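&lt;br /&gt;
The translucent-overlay model can be written down directly as a crude intensity-substitution pansharpen: upsample the color image to the pan grid, then rescale each pixel’s RGB so its mean intensity matches the pan band. This is a toy sketch, not any vendor’s algorithm:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def pansharpen(rgb_up, pan, eps=1e-12):
    """Crude intensity-substitution pansharpening.
    rgb_up: (h, w, 3) multispectral image already upsampled to the pan
    grid; pan: (h, w) panchromatic band."""
    intensity = rgb_up.mean(axis=2)
    ratio = pan / (intensity + eps)
    return rgb_up * ratio[..., None]

# Toy data: one coarse color pixel upsampled 2x, plus 2x2 pan detail.
rgb_up = np.kron(np.array([[[0.2, 0.4, 0.6]]]), np.ones((2, 2, 1)))
pan = np.array([[0.3, 0.5],
                [0.4, 0.4]])
sharp = pansharpen(rgb_up, pan)  # color from rgb_up, detail from pan
```
&lt;br /&gt;
Note that each output pixel’s hue is inherited wholesale from the coarse color data – exactly the “invented information” the text warns about.&lt;br /&gt;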
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;br /&gt;
&lt;br /&gt;
==== Georeferencing and orthorectification ====&lt;br /&gt;
&lt;br /&gt;
''Much of this applies outside optical as well – move?''&lt;br /&gt;
&lt;br /&gt;
A raw satellite image of land is an angled view of a rough surface. (Even nominally nadir-pointing satellites acquire imagery that is off-nadir toward its edges.) If you imagine riding on a satellite and looking off to, say, the west, you will see the eastern sides of hills and buildings at flatter angles than you see the western sides – if you can see them at all. To turn a raw image into something that is projected orthographically, like a map, you have to use a terrain model – a 3D map of the planet’s surface. Then you can use information about where the satellite was and the angle its sensor was pointing, and for each pixel in the output image, you can project it out to see at what latitude and longitude it must have intersected the ground. Then you move all the pixels to their coordinates in some convenient projection, and you’ve essentially taken the image out of perspective and made it orthographic.&lt;br /&gt;
&lt;br /&gt;
Except:&lt;br /&gt;
&lt;br /&gt;
* Earth’s surface is rough at every scale, and even “porous” or multiply defined in the sense that there are features like leafless trees that make it hard to define where the optical surface actually ''is'' at any given scale.&lt;br /&gt;
* There is no perfectly [https://en.wikipedia.org/wiki/Accuracy_and_precision accurate, precise], global, completely up-to-date terrain model of the Earth, let alone at a reasonable price. SRTM is pretty good but it’s only about 30 m, stops short of the Arctic, and is 20+ years out of date: there are entire lakes, highway cuts, and reclaimed islands that don’t exist in it.&lt;br /&gt;
* Satellites typically only know where they’re pointing to within the equivalent of about 10 pixels (which, to be fair, is usually an extremely small fraction of a degree), so the pointing data can only narrow things down, not actually tell you where you are.&lt;br /&gt;
* Continental drift means that a continent can easily move by 1 px over the lifetime of a high-end commercial satellite; a major earthquake can discontinuously distort a small region by several meters.&lt;br /&gt;
* To properly pin down an image (i.e., to check the reported pointing angle), you need to know the exact 3D location of 3 visible points within it, and realistically more like 10.&lt;br /&gt;
* All these errors can combine.&lt;br /&gt;
* No matter what, you can’t recover occluded features, i.e. things you can’t see in the original data. If you want a high-res satellite image of something like a canyon, you realistically need half a dozen images at very specific angles, which is extremely hard.&lt;br /&gt;
&lt;br /&gt;
We could go on! Georeferencing and orthorectification are difficult problems. They’re easier for lower-resolution satellites, because a given angular error comes out to fewer pixels. Also, survey-mode satellites like Landsat and Sentinel-2, which are nadir-pointing anyway, put a lot of effort into doing this well. Two Landsat scenes will almost always coregister to well within a pixel. Sentinel-2 is a little less reliable, especially toward the poles. Commercial imagery is often displaced by far more than you would think. One way to see this is to step back in Google Earth Pro’s history tool, especially somewhere relatively remote and rugged.&lt;br /&gt;
&lt;br /&gt;
Here’s a farm in Nepal: 28.553, 84.2415. Just step back in time and watch it jump around underneath the pin. If you really want to be scared, watch the cliff to its north. This is why imagery analysts who understand imagery pipelines rarely use a whole lot of significant digits in their coordinates! You don’t really know where anything on Earth is, in absolute terms, to better than a few meters at best if all you have to go on is a satellite image.&lt;br /&gt;
&lt;br /&gt;
==== Atmospheric correction ====&lt;br /&gt;
&lt;br /&gt;
Over long distances, even in clear weather, the atmosphere scatters and absorbs light. This is why distant hills are low-contrast and blueish (blue light is scattered more). What a satellite actually measures is called top-of-atmosphere radiance, or TOA. This is a measurement of nothing more than the amount of energy received per second, per pixel, per band. It can be measured pretty objectively. However, it’s often not what you want. For one thing, it’s too blue. For another, the amount of blueness and related effects will vary semi-randomly with atmospheric conditions (humidity, maybe dust storms or wildfire smoke, etc.) and predictably with season (sun distance and angle).&lt;br /&gt;
&lt;br /&gt;
Therefore, a reasonable desire is to basically normalize the sun and remove the effects of the atmosphere. What we’re trying to get to here is called surface reflectance (SR). The main issue is that we don’t know the true state of the atmosphere at the moment the image was acquired. The best we can do is to model it and subtract it out. This is one of ''the'' problems in remote sensing, and you could earn a PhD by improving [https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes#Table_of_models one of the major models] by a few percent.&lt;br /&gt;
&lt;br /&gt;
The good news is there’s a brutally simple method that works pretty well most of the time. Dark object subtraction means assuming that the darkest pixel in the image should be pure black. Therefore, if you subtract out however much blue (and green, and so on) signal is present in the darkest pixel, you will have canceled out all the haze. It’s annoying how well this works considering how basic it is. It’s roughly equivalent to the automatic contrast adjustment tool in an image editor like Photoshop, or, to be a little more exact, like using the eyedropper in the Levels tool to set the black point to the darkest pixel.&lt;br /&gt;
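&lt;br /&gt;
In code, dark object subtraction is about as simple as image processing gets. Here’s a sketch in numpy (dark_object_subtract is our own name; real implementations often use a low percentile rather than the literal minimum):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def dark_object_subtract(bands):
    """Per-band dark object subtraction: assume the darkest pixel in each
    band should be pure black, and subtract its value from the whole band."""
    # bands: (H, W, B) array of TOA values
    dark = bands.reshape(-1, bands.shape[-1]).min(axis=0)
    return np.clip(bands - dark, 0.0, None)
```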
&lt;br /&gt;
Correction to reflectance may or may not attempt to correct for terrain effects (i.e., relighting the scene). Different pipelines have different conventions for how far to correct or what to call different kinds of correction.&lt;br /&gt;
&lt;br /&gt;
Atmospheric correction is usually not key for OSINT purposes, but any time you find yourself taking exact measurements of pixel values, you should at least know whether you’re working in TOA or in SR, and if SR, you should have a sense of what the pipeline was.&lt;br /&gt;
&lt;br /&gt;
==== Common optical sensor types ====&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# Pushbroom&lt;br /&gt;
# Whiskbroom&lt;br /&gt;
# Full-frame&lt;br /&gt;
&lt;br /&gt;
=== Thermal ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
=== Synthetic aperture radar ===&lt;br /&gt;
&lt;br /&gt;
Synthetic aperture radar, or SAR, creates images with radio waves in wavelengths around 1 cm to 1 m.&lt;br /&gt;
&lt;br /&gt;
As a very first approximation, a SAR image is comparable to an optical image that shows objects that reflect radio waves instead of those that reflect visible light.&lt;br /&gt;
&lt;br /&gt;
==== SAR is not just a regular camera that uses radar instead of light ====&lt;br /&gt;
&lt;br /&gt;
Beyond the fact that both modalities create images, SAR works on completely different principles from standard optical imaging, and understanding it requires understanding those principles.&lt;br /&gt;
&lt;br /&gt;
This page will only lightly outline how SAR works; for the math, please refer to SERVIR’s [https://servirglobal.net/Global/Articles/Article/2674/sar-handbook-comprehensive-methodologies-for-forest-monitoring-and-biomass-estimation SAR Handbook] (forest-oriented but with solid fundamentals), the NOAA/NESDIS [https://www.sarusersmanual.com/ Synthetic Aperture Radar Marine User’s Manual], or another good text. Here we will only point out some key ideas. If you want to get full value out of SAR, you should expect to invest at least a few hours in learning how it actually works. There’s a reason SAR experts tend to be a bit snobbish about it: it’s complex, subtle, and highly rewarding.&lt;br /&gt;
&lt;br /&gt;
==== SAR is active ====&lt;br /&gt;
&lt;br /&gt;
Optical sensors are almost all passive: they use energy that objects are already reflecting (usually from the sun) or producing (for example, in the thermal infrared). In contrast, SAR is active: it sends out a pulse of radar energy, roughly analogous to the flash on a camera.&lt;br /&gt;
&lt;br /&gt;
==== SAR resolves space with time, not with focus ====&lt;br /&gt;
&lt;br /&gt;
SAR’s resolution is based on the timing of returning signals. It does not pass the energy it senses through a focusing lens or mirror the way an optical sensor does. This leads to properties that are highly unintuitive if you think of it as merely “optical but in a different frequency” – for example, it does not lose resolution with distance; there is no exact equivalent of perspective. SAR is more like side-scan sonar than like an everyday camera.&lt;br /&gt;
&lt;br /&gt;
==== Speckle ====&lt;br /&gt;
&lt;br /&gt;
Like a laser beam, a SAR signal interferes with itself. At a given moment and a given point, its waves may be canceling out or adding up. This means that a SAR image is intrinsically grainy or stippled-looking. This is not the same as sensor noise, because the effect is physically real and not a problem of errors in measurement. It can be mitigated by downsampling, averaging images from different “pings”, or applying despeckling filters. (A simple local median works reasonably well, but there’s a range of sophistication all the way up to sensor-specific filters based on physical models, extra inputs, fancy machine learning, etc.)&lt;br /&gt;
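&lt;br /&gt;
As a sketch of the simplest approach, here is a local-median despeckle written out in plain numpy (median_despeckle is our own name; real pipelines usually reach for dedicated filters such as Lee or Frost):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def median_despeckle(img, k=3):
    """Minimal k x k local-median despeckle. Edges are handled by padding
    with the nearest edge value. Illustrative only."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    # Stack every k*k shifted view, then take the median across the stack.
    views = [padded[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(k) for j in range(k)]
    return np.median(np.stack(views), axis=0)
```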
&lt;br /&gt;
==== Retroreflection and multiple reflection ====&lt;br /&gt;
&lt;br /&gt;
One consequence of SAR being active sensing is that it sees very bright returns from concave right angles made out of metal, which act as [https://en.wikipedia.org/wiki/Corner_reflector corner reflectors]. (Notice how road signs and markers seem to glow disproportionately in headlights – it’s because those are [https://en.wikipedia.org/wiki/Retroreflector retroreflectors] in the optical range.) Highly developed cities, for example, are very retroreflective to radar. This shows up especially where the angle of the sensor’s view aligns with a street grid, which is called the cardinal effect. (See, for example, [https://www.mdpi.com/2072-4292/12/7/1187/htm this academic paper], where they propose using retroreflection specifically to classify urban landcover. In general, there are very few radio-frequency corner reflectors in nature, and retroreflection is a good sign that you’re looking at a building, vehicle, etc.)&lt;br /&gt;
&lt;br /&gt;
Where the reflection is separated enough from the first reflecting surface that you can see both independently, we use the term multiple reflection (or mirroring or ghosting). This most often happens where tall buildings or bridges are next to or over water. A radio wave may hit the water, then a bridge, then return to the sensor; another may hit a bridge, then the water, and return to the sensor, and so on, and you’ll see images of multiple bridges.&lt;br /&gt;
&lt;br /&gt;
==== Layover and shadowing ====&lt;br /&gt;
&lt;br /&gt;
Layover (a.k.a. relief displacement) is an effect that makes objects at higher elevations appear closer to the sensor. This happens because the radio waves from the top of a vertical object arrive back at the sensor (which is above and to the side of the object) before the radio waves from its base. This is most obvious with truly vertical objects like radio towers and skyscrapers, but surfaces that have any vertical component (hills, for example) will show some degree of layover. Ultimately, layover comes from the difference between slant range, which is what the sensor actually measures – distance from the sensor – and ground range, which is what we tend to intuitively want or expect when we look at a map-like image.&lt;br /&gt;
&lt;br /&gt;
The painfully counterintuitive aspect, if you’re looking at a SAR image as if it were an ordinary optical image, is that layover goes in the opposite direction – buildings, for example, lean toward the sensor. For example, if you take a normal photo of a tall building from the south, it will cover the ground to its north. This feels normal because cameras, telescopes, etc., work on the same basic principle as the eye. But if you collect a SAR image of the same building from the south, it will cover the ground to its south. (Also, it won’t actually mask that ground, it will just add its signal in.)&lt;br /&gt;
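&lt;br /&gt;
A toy slant-range calculation makes the direction of layover concrete (the altitude and distances here are invented round numbers, not any real satellite’s geometry):&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Sensor at 700 km altitude, imaging a point 300 km to the side.
alt, ground = 700e3, 300e3
slant_base = math.hypot(alt, ground)          # slant range to the base
slant_top = math.hypot(alt - 1000.0, ground)  # slant range to a 1 km summit
# The summit returns earlier (shorter slant range), so in slant-range
# geometry it is plotted nearer the sensor than its own base: layover.
```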
&lt;br /&gt;
Shadowing is the lack of data returned from surfaces facing away from the sensor. The shadowed side of terrain is stretched out as part of layover.&lt;br /&gt;
&lt;br /&gt;
SAR imagery can be terrain corrected. Basically, this is a process that uses (1) the satellite’s position and the characteristics of its instrument and (2) a DEM or other model of the terrain it was looking at, and uses these to warp the SAR imagery into map coordinates and account for shadowing. Whether this is worthwhile will depend on the quality of the terrain correction algorithm and the data you can give it, and on what you need to analyze.&lt;br /&gt;
&lt;br /&gt;
In general, be cautious with terrain correction, because it can never fully correct for all effects (e.g., the BRDF of different landcovers), and it can magnify small problems in input data. Sometimes it’s better to have a strange-looking image that you know how to interpret than a “normalized” one with subtle errors.&lt;br /&gt;
&lt;br /&gt;
==== Clouds and many other materials are generally transparent to SAR ====&lt;br /&gt;
&lt;br /&gt;
SAR frequencies are typically chosen to cut through weather. While this is a massive advantage of SAR over optical (the average place on Earth is cloudy roughly half the time), it’s also not absolute. Heavy rain, for example, can show up as ghostly features in some bands, so be on the lookout for it. If you see something you can’t interpret that might be weather-related, check the weather for the place at the time of image acquisition!&lt;br /&gt;
&lt;br /&gt;
More generally – beyond the specific case of water vapor in air – SAR interacts with materials differently than light does. For example, it reflects more off liquid water, so you can’t see into shallows with SAR the way you can with optical. On the other hand, it interacts less with certain very dry materials, so it can cut through loose sand, dead vegetation, and so on. (For example, SAR is used to map ancient river systems under the Sahara [https://www.mdpi.com/2073-4441/9/3/194/htm because it can image bedrock under loose, dry sand].) The details of SAR signal interaction depend on wavelength, angle, and other factors; if you’re doing more than casual interpretation of data from a given sensor, it’s a good idea to look it up and familiarize yourself.&lt;br /&gt;
&lt;br /&gt;
==== Polarimetry and interferometry ====&lt;br /&gt;
&lt;br /&gt;
Thus far we have only considered backscatter images: maps of the intensity of reflected radio energy. But a good deal of SAR’s value is beyond this kind of data. As well as recording how much energy is in reflected radio waves, SAR sensors characterize the radio waves themselves.&lt;br /&gt;
&lt;br /&gt;
Let’s use Sentinel-1 as an example for polarimetry. S1 sends radio waves in the vertical polarization, abbreviated V, and records them in both vertical and horizontal, or H, polarizations. In practice, this means that when you download an S1 frame in the usual way, you see two images, labeled VV (where the sensor transmitted V and measured V) and VH (where it transmitted V and measured H). The ratio of the two bands therefore tells you (in a general, statistical way, within the constraints of speckling) how much the surface at a given pixel tends to return a radio signal at that frequency and angle in the same polarization.&lt;br /&gt;
&lt;br /&gt;
Why do we care? Because direct reflection and corner reflectors tend to return waves at the same polarization (for Sentinel-1, always VV), while volumes that scatter waves return proportionally more cross-polarized (VH) waves. The second category is mainly vegetation and soil, while the first is corner reflectors, metal, and so on – proportionally more artificial surfaces. You can literally get a PhD in the nuances of SAR polarimetry, but at the most basic level, it tells you something about surface properties that no other sensor would.&lt;br /&gt;
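&lt;br /&gt;
At the most basic level, people often just look at the VV/VH ratio in decibels; as a sketch (the sample backscatter values are invented):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def to_db(linear):
    """Convert linear backscatter power to decibels."""
    return 10.0 * np.log10(linear)

# vv, vh: co- and cross-polarized backscatter (linear power), same shape
vv = np.array([0.2, 0.05])
vh = np.array([0.02, 0.01])
ratio_db = to_db(vv) - to_db(vh)  # the VV/VH ratio, expressed in dB
# Higher values lean toward direct/corner reflection; lower values lean
# toward volume scattering such as vegetation.
```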
&lt;br /&gt;
Interferometry with SAR, or inSAR, compares wave phase between observations. The phase of a wave is where it is in its cycle when received. Using sound as an example, measuring the phase of a sound at a given moment means recording not just its volume and pitch but also where the wave is in its cycle – say, 23% of the way into its high-pressure half, or exactly at its lowest-pressure point.&lt;br /&gt;
&lt;br /&gt;
Suppose we make a SAR image of an area and record not only the amplitude but also the phase of the signal at every pixel. Now, after some time, the satellite’s orbit repeats, and at exactly the same moment in this new orbit (and therefore at exactly the same point in space relative to Earth), we take the same image again. There’s been some change over time that might represent, say, the soil drying out, a road being built, or a tree falling over. But the change in phase over relatively large, coherent regions can be interpreted as the surface getting nearer or farther away by (potentially) very small fractions of a wavelength – on the order of cm. This is an idealized version of inSAR.&lt;br /&gt;
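&lt;br /&gt;
The idealized conversion from phase difference to line-of-sight displacement is a one-liner. Here’s a sketch (the factor of 4π reflects two-way travel; sign conventions vary between processors, and the Sentinel-1 wavelength below is approximate):&lt;br /&gt;
&lt;br /&gt;
```python
import math

def los_displacement(delta_phase_rad, wavelength_m):
    """Idealized inSAR: phase difference to line-of-sight displacement.
    One full 2*pi fringe corresponds to half a wavelength of motion,
    because the signal travels out and back."""
    return delta_phase_rad * wavelength_m / (4.0 * math.pi)

# Sentinel-1 C-band, wavelength ~5.55 cm: one full fringe is ~2.8 cm of
# line-of-sight motion.
fringe = los_displacement(2.0 * math.pi, 0.0555)
```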
&lt;br /&gt;
Geologists use this to [https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021GL093043 map earthquakes], but you can also use it for drought (because dry land sags), [https://site.tre-altamira.com/long-term-satellite-study-over-the-london-basin/ tunneling], [https://www.researchgate.net/figure/InSAR-measured-subsidence-rates-on-the-Mosul-dam-Iraq-Negative-values-indicate-motion_fig1_311451580 dam] and [https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-1/Satellites_confirm_sinking_of_San_Francisco_tower building] subsidence, [https://www.nature.com/articles/s41598-020-74957-2 underground explosion monitoring], and so on – in theory, anything that changes the distance between the satellite and the surface. You can even use decoherence (the breakdown of continuity between observations, which makes inSAR hard) for [https://www.academia.edu/44939771/Damage_detection_using_SAR_coherence_statistical_analysis_application_to_Beirut_Lebanon damage detection].&lt;br /&gt;
&lt;br /&gt;
When inSAR works, it’s like magic. You can pick up extremely subtle effects over large areas. It does have limits, like that you can only measure displacement towards or away from the satellite(s), which for SAR is always at least somewhat to the side, which is not necessarily in the direction you actually care about (say, up/down). And as you would expect, it tends to require a lot of very good data (because, for example, satellite orbits are never absolutely perfect repeats), expertise, and minutes to days of fine-tuning.&lt;br /&gt;
&lt;br /&gt;
=== LIDAR ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# 3D (survey-style) lidar&lt;br /&gt;
# 2D (transect-style) lidar&lt;br /&gt;
&lt;br /&gt;
== Image delivery ==&lt;br /&gt;
&lt;br /&gt;
=== Processing levels ===&lt;br /&gt;
&lt;br /&gt;
In theory, data processing levels are standard across the industry. In practice, different providers tend to make up their own definitions as necessary, and you should refer to source-specific documentation. But typically, the commonly seen processing levels are:&lt;br /&gt;
&lt;br /&gt;
'''Level 0''': Unprocessed data, more or less as downlinked to the ground station. Generally not sold or publicly released.&lt;br /&gt;
&lt;br /&gt;
'''Level 1''': Basic data in sensor units, for example TOA radiance. Often has a letter suffix with source-specific meaning, e.g., T to indicate a terrain-corrected version.&lt;br /&gt;
&lt;br /&gt;
'''Level 2''': Derived data in geophysical units, for example surface reflectance. Has been through high-level processing (e.g., atmospheric correction) that contains estimation or modeling.&lt;br /&gt;
&lt;br /&gt;
As a rule of thumb, use level 1 if the imagery itself is the focus and you want to analyze the data in a custom way; use level 2 if you just want something that works out of the box to achieve some further goal. But again, the practical meaning of the levels depends on the dataset, so check to make sure you’re getting what you want.&lt;br /&gt;
&lt;br /&gt;
=== Formats and projections ===&lt;br /&gt;
&lt;br /&gt;
Image data generally comes in formats optimized for large payloads and good metadata. These include NetCDF, NITF, JPEG2000, and GeoTIFF. GDAL, which is included in QGIS, can read virtually any reasonable format. If you have a choice, GeoTIFF is usually the best.&lt;br /&gt;
&lt;br /&gt;
A geographic projection is basically an invertible function (a reversible, one-to-one relationship) from a sphere to a plane. (If you know enough about geodesy to be saying “the spheroid, actually” right now, go read something more appropriate to your level of expertise 😉.) In other words, for a longitude and a latitude on Earth, a given projection gives you a corresponding x and y that you use to store and display the data.&lt;br /&gt;
&lt;br /&gt;
Typical georeferencing metadata says: (1) here is the projection of the data in this file, and (2) here is where the data in this file lies on the abstract 2D plane defined by that projection.&lt;br /&gt;
&lt;br /&gt;
You may also encounter data that is not projected in any strictly defined way. This might be as simple as a photo taken with a phone out a plane window. In theory you could define a projection for it if you knew parameters like the 3D GPS location of the phone, the angle it was pointed at, its camera’s field of view, and the small distortions introduced by its lens. But in practice it’s usually easier to find known points in the image and “tie down” or georeference the image based on those points. Given at least 3 but ideally more known points, you (or your software) can warp the image into some standard projection. This derives a projection from pixel space to geographic coordinates by running a regression on the pixel-to-location pairs you provide. These known points are called ground control points, or GCPs. Some data, like Sentinel-1 SAR, is provided unprojected but with GCPs. This leaves more work for the user, but also more flexibility if you want to adjust the GCPs.&lt;br /&gt;
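&lt;br /&gt;
As a sketch of the regression idea, here is the simplest version: a least-squares affine fit from pixel positions to map coordinates (fit_affine_gcps is our own illustrative name; real georeferencing tools may fit higher-order polynomials or RPC models instead):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def fit_affine_gcps(pixels, coords):
    """Least-squares affine fit from (col, row) pixel positions to map
    coordinates: the simplest way to tie an image down with GCPs."""
    px = np.asarray(pixels, float)
    xy = np.asarray(coords, float)
    A = np.column_stack([px, np.ones(len(px))])  # columns: col, row, 1
    # One least-squares solve gives both coefficient columns at once.
    coeffs, *_ = np.linalg.lstsq(A, xy, rcond=None)
    return coeffs  # 3x2 matrix: coords is approximately A @ coeffs

# Three exact GCPs determine the affine transform
pix = [(0, 0), (100, 0), (0, 100)]
geo = [(500000, 4000000), (501000, 4000000), (500000, 3999000)]
M = fit_affine_gcps(pix, geo)
```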
&lt;br /&gt;
There are several standard ways to represent projections, notably WKT, proj, and EPSG codes. We’ll give EPSG codes here.&lt;br /&gt;
&lt;br /&gt;
Probably the most common projection you will see for raw data is Universal Transverse Mercator, or UTM. It’s actually a family of projections with the same formula but different parameters, each adapted to a different meridional slice of Earth’s surface. These UTM zones are named with numbers and north/south hemispheres: Paris is in UTM zone 31N, Geneva is in 32N, and Sydney is in 56S. (If you’ve used the MGRS grid system, this should sound familiar, but it’s not identical.) Within a zone, UTM is very close to equal-area and conformal, which are the most important properties for a projection if you want to do analytical work. Equal-area means 1 km² is the same number of pixels at any point in the projection, and conformal means that 1 km is the same number of pixels in every direction from any given point within the projection. (On a non-conformal map, circles appear as ovals, squares are rectangles, etc. This is a massive pain in the ass.) UTM is EPSG:32XYY, where X is 6 for N and 7 for S, and YY is the zone number, so for example 13S is EPSG:32713.&lt;br /&gt;
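&lt;br /&gt;
The EPSG:32XYY rule is trivial to encode (a toy helper of our own, assuming WGS84-based UTM):&lt;br /&gt;
&lt;br /&gt;
```python
def utm_epsg(zone, hemisphere):
    """EPSG code for a WGS84 UTM zone: 326YY for north, 327YY for south."""
    base = 32600 if hemisphere.upper() == "N" else 32700
    return base + zone

utm_epsg(31, "N")  # Paris: 32631
utm_epsg(13, "S")  # 32713, as above
```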
&lt;br /&gt;
For display on standard web maps, people often use web Mercator, a.k.a. spherical Mercator, which is not equal-area at the global scale, but is conformal. This is why web maps make Greenland far too big, but it remains approximately the right shape. For local analysis, web Mercator is fine (locally about as good as UTM, in fact), and can be a decent choice if you understand the issues with scale across large areas. EPSG:3857.&lt;br /&gt;
&lt;br /&gt;
The other projection you’ll see the most is equirectangular or plate carrée, which uses longitude and latitude directly as x and y coordinates on a plane. It is neither equal-area nor conformal, and basically only exists because the math is easy. It’s often used by people who should know better. Its non-conformality means that any time you’re working near the poles, everything is squeezed, and you’re either overzooming one dimension, losing data in the other, or both. If you just want to scatterplot some points as quickly as possible, equirectangular is fine, but avoid it when doing anything with imagery. EPSG:4326. (Note that this is the EPSG of WGS84, the geodetic standard that defines things like the prime meridian. Many, many other projections refer to WGS84 in their definitions. But using WGS84 as a projection itself, instead of as an ingredient in a projection, is the equirectangular projection.)&lt;br /&gt;
&lt;br /&gt;
The details of projections are notoriously tricky; it’s hard to work with them in a strictly correct and optimal way at all times. It’s the kind of topic that attracts pedantry and flamewars, unfortunately. Here’s some advice, none of it ironclad:&lt;br /&gt;
&lt;br /&gt;
# Most imagery data, if it’s projected at all, is already in an appropriate projection as it arrives from the data provider. If you can, leave it as-is. Every reprojection involves resampling the data, which generally loses information.&lt;br /&gt;
# You should rarely have to explicitly think about projections. The whole point of a projection is to let you think in terms of pixels and/or meters, and if that’s not happening, something is wrong. Make sure you’re taking full advantage of your tools’ ability to handle these things automatically. Fighting your projection means something is wrong.&lt;br /&gt;
# If you’re working on a multi-source project, choose a suitable projection at the start and project all data into it ''once'', when you import it.&lt;br /&gt;
# Most pain around projections comes from accidentally mixing projections. Don’t do that.&lt;br /&gt;
# The local UTM is usually a reasonable choice.&lt;br /&gt;
&lt;br /&gt;
=== Bundles ===&lt;br /&gt;
&lt;br /&gt;
Imagery is most often supplied in bundles, which are directories with image data files, usually separated by band or polarization (at least at level 1), and text (XML, json, etc.) metadata files. Some analysis tools will have plugins that will open specific types of bundles as single objects, automatically applying calibration data found in the metadata and so forth. In other situations you might open the image file and have to parse the metadata with your own code or by hand. If you’re getting to know a new imagery source, going through and understanding the purpose of everything delivered in a bundle is a great way to start.&lt;br /&gt;
&lt;br /&gt;
=== DN and PN ===&lt;br /&gt;
&lt;br /&gt;
Image formats generally store integers, since they losslessly compress better and are often easier to work with than floating point numbers. However, this presents a problem if, for example, the units being represented are reflectance, which ranges from 0 to 1. If we round every reflectance value to either 0 or 1, we’re delivering 1-bit data that’s probably close to totally useless. To address this, we might scale up to, say, 0 through 100 and say that instead of recording reflectance fraction, we’re recording reflectance percentage – fraction × 100. That still leaves us with less than 7 bits of radiometric resolution, though. Really, we’d like to be able to scale our values into an arbitrary range, maybe 0 through 65,535 to make full use of a 16-bit image, and send it with some metadata that tells how to get it back into some absolute or physically meaningful unit. You could even change the scaling factor per scene to optimize for bright v. dark, for example. And this is what providers generally do. The values actually stored in the image format are called digital numbers, or DN, and the values after scaling (typically with a multiplicative and an additive coefficient) are physical numbers, or PN.&lt;br /&gt;
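&lt;br /&gt;
The DN-to-PN step itself is just a linear transform; as a sketch (the 16-bit reflectance coefficients here are hypothetical, and each provider documents its own):&lt;br /&gt;
&lt;br /&gt;
```python
def dn_to_pn(dn, scale, offset):
    """Apply the provider's multiplicative and additive coefficients to
    turn stored digital numbers into physically meaningful values."""
    return dn * scale + offset

# Hypothetical example: reflectance 0..1 stored in 16 bits as 0..65535
reflectance = dn_to_pn(32768, 1.0 / 65535.0, 0.0)  # about 0.5
```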
&lt;br /&gt;
Not all providers do this. For example, Sentinel-2 level 1C data has a globally constant scaling factor, which means different bands have a defined relationship even if you read raw, unscaled pixels out of them, which is great. Per-scene or per-band scaling is the more common approach, though. Basically, don’t assume that pixels actually mean anything with an absolute definition, especially compared to pixels from another band or scene, unless you know that they’re PN.&lt;br /&gt;
&lt;br /&gt;
For most OSINT-relevant analysis, working in DN is a [https://en.wikipedia.org/wiki/Venial_sin venial sin] at worst and often justifiable. But it is useful to know what it means and to recognize situations where you should convert to PN. Any tool designed to work with remote sensing data will at least have some affordance for DN to PN scaling, and, again, may be able to parse the parameters out of a bundle (or in-image-file metadata) and apply them transparently so you never have to think about it.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2208</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2208"/>
		<updated>2022-07-20T22:17:22Z</updated>

		<summary type="html">&lt;p&gt;Vruba: /* Atmospheric correction */ phrasing&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this is the heritage of Cold War IMINT workers wanting shadows to estimate structure heights. (If you image around noon, you get places with vertical shadows in the tropics. This gives you depth perception problems, like you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all commercial satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to complete one full orbit. This is another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: at Landsat’s altitude, about 400 km (250 mi) from nadir to either side, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
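The “light trig” here is just the altitude times the tangent of the off-nadir angle. A minimal sketch, using a flat-Earth approximation and a Landsat-like 705 km altitude as the assumed inputs:&lt;br /&gt;

```python
import math

def off_nadir_reach_km(altitude_km: float, max_off_nadir_deg: float) -> float:
    """Ground distance from the nadir point to the edge of the usable
    viewing cone, using a flat-Earth approximation (fine for small angles)."""
    return altitude_km * math.tan(math.radians(max_off_nadir_deg))

# Landsat-like altitude, 30 degrees off-nadir: roughly 400 km to either side.
print(round(off_nadir_reach_km(705, 30)))  # 407
```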
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimensions we think of most often are spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
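The napkin example can be sketched with a simple area-weighted linear mixing model. The reflectance values below are hypothetical, chosen only to illustrate the idea:&lt;br /&gt;

```python
def mixed_pixel(bright, dark, fraction_bright):
    """Area-weighted linear mixing model for a single pixel's value."""
    return fraction_bright * bright + (1 - fraction_bright) * dark

# Hypothetical reflectances: white napkin 0.9, fresh asphalt 0.05.
# A 10 cm napkin inside a 25 cm pixel covers (10/25)**2 = 16% of its area.
background = mixed_pixel(0.9, 0.05, 0.0)    # plain asphalt: 0.05
with_napkin = mixed_pixel(0.9, 0.05, 0.16)  # ~0.186
print(with_napkin / background)  # the affected pixel is ~3.7x brighter
```

With good radiometric resolution, a nearly 4× brightness jump against a homogeneous background is easy to spot, even though the napkin itself was never resolved.&lt;br /&gt;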
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor” we understand this by default to mean spatial resolution. This is also called ground sample distance (GSD) or ground resolved distance (GRD), and is the dimensions of the pixels of the data. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length dimension: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. If we think about standard optical instruments, which are basically telescopes with CCDs, they do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – a fraction of the arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth at a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) and looking at nadir (straight down). That’s a best case. When looking to the side, at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked consideration on spatial resolution is that pixel area is the square of pixel side length, and it’s what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is more than 3× as clear as 30 m imagery, all else being equal.&lt;br /&gt;
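The square law above can be sketched in a couple of lines, using the GSD values from the text:&lt;br /&gt;

```python
def relative_clarity(coarse_gsd_m, fine_gsd_m):
    """Ratio of pixel counts covering the same ground area.
    Clarity scales with pixel count, i.e. with the square of the
    linear GSD ratio."""
    return (coarse_gsd_m / fine_gsd_m) ** 2

print(relative_clarity(1.0, 0.5))  # 4.0 -- "2x" the GSD is 4x the pixels
print(relative_clarity(30, 10))    # 9.0 -- 10 m is 9x the pixels of 30 m
```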
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered less by the atmosphere than visible light, and much less than blue light in particular. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
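The NDVI mentioned above is just a normalized ratio of the NIR and red bands. A minimal sketch – the reflectance values are made up for illustration:&lt;br /&gt;

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index, per pixel.
    Healthy vegetation reflects NIR strongly and absorbs red,
    so values near 1 suggest vigor; values near 0, bare surfaces."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Hypothetical reflectances:
print(ndvi(0.50, 0.05))  # ~0.82: vigorous vegetation
print(ndvi(0.20, 0.18))  # ~0.05: bare soil / stressed cover
```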
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These will be called the panchromatic or pan and (collectively) multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, for example, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits; newer and better ones are typically 12 to 14.&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks on the Sahara, despite the tracks being made out of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford a lot of the highest-spatial-resolution imagery. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
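A quick simulation of the 1/sqrt(n) claim, under the idealizing assumption of independent, unit-sigma noise per band:&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(42)
n = 16
# n simulated bands: the same scene plus independent unit-sigma noise.
scene = rng.uniform(0, 1, size=(256, 256))
bands = scene + rng.normal(0, 1.0, size=(n, 256, 256))

# Averaging the bands shrinks the noise toward 1/sqrt(n).
residual = bands.mean(axis=0) - scene
print(residual.std())  # close to 1/sqrt(16) = 0.25
```

With real sensors, inter-band noise correlation keeps you from fully reaching this bound, as the text notes.&lt;br /&gt;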
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding error: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of talking about effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
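A sketch of how quantization noise shrinks with bit depth, assuming a uniform signal in [0, 1) and the standard result that uniform rounding error has a standard deviation of step/√12:&lt;br /&gt;

```python
import numpy as np

def quantization_noise_std(signal, bits):
    """Std of the error introduced by rounding a [0, 1) signal
    to 2**bits evenly spaced levels."""
    levels = 2 ** bits
    quantized = np.round(signal * (levels - 1)) / (levels - 1)
    return (quantized - signal).std()

rng = np.random.default_rng(0)
signal = rng.uniform(0, 1, 100_000)
# Each extra bit halves the step size and thus the quantization noise.
for bits in (8, 12, 14):
    print(bits, quantization_noise_std(signal, bits))
```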
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is using air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh scattering from aerosols, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. Note that this range is equivalently 1.567 to 1.651 μm. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
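As one example of treating multispectral values as vectors, the spectral angle between two pixels ignores overall brightness and compares spectral shape. A minimal sketch; the spectra below are hypothetical:&lt;br /&gt;

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two multispectral pixel vectors.
    Insensitive to overall brightness, sensitive to spectral shape."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# A pixel and a 2x-brighter copy of itself: same spectrum, angle ~0.
print(spectral_angle([0.1, 0.2, 0.5], [0.2, 0.4, 1.0]))
# Vegetation-like vs soil-like spectra differ in shape, not just brightness.
print(spectral_angle([0.05, 0.08, 0.5], [0.2, 0.25, 0.3]))
```

This is why spectral-angle comparisons are useful across illumination changes (shadow edges, season, haze level) where raw band values shift together.&lt;br /&gt;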
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue bands of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, and therefore you should favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
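Mechanically, assembling a combination like 5-4-3 is just stacking the chosen bands into the red, green, and blue display channels. A minimal sketch, with toy arrays standing in for real co-registered band rasters:&lt;br /&gt;

```python
import numpy as np

def composite(bands, combo):
    """Stack chosen band numbers into (R, G, B) display channels.
    `bands` maps sensor band number -> 2D array; `combo` is like (5, 4, 3),
    read in the usual red-green-blue channel order."""
    return np.dstack([bands[n] for n in combo])

# Toy 2x2 arrays standing in for Landsat 8/9 bands 1-7.
bands = {n: np.full((2, 2), n, dtype=float) for n in range(1, 8)}
cir = composite(bands, (5, 4, 3))  # NIR/red/green "color infrared"
print(cir.shape)  # (2, 2, 3)
```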
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will typically be 2× or 4× larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
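A very basic Brovey-style pansharpen can be sketched in a few lines. This is the crude end of the algorithm spectrum, shown only to make the mechanics concrete; real pipelines are far more careful:&lt;br /&gt;

```python
import numpy as np

def simple_pansharpen(rgb_lowres, pan):
    """Minimal Brovey-style pansharpen: upsample the color image to the
    pan grid (nearest neighbor here), then rescale each pixel's color so
    its overall intensity matches the sharp pan band."""
    scale = pan.shape[0] // rgb_lowres.shape[0]
    rgb_up = rgb_lowres.repeat(scale, axis=0).repeat(scale, axis=1)
    intensity = rgb_up.mean(axis=2) + 1e-9
    return rgb_up * (pan / intensity)[..., None]

# Toy data: 2x2 color image, 4x4 pan band (2x linear resolution).
rgb = np.random.default_rng(1).uniform(0.1, 0.9, (2, 2, 3))
pan = np.random.default_rng(2).uniform(0.1, 0.9, (4, 4))
sharp = simple_pansharpen(rgb, pan)
print(sharp.shape)  # (4, 4, 3)
```

Note that the color ratios within each 2×2 block are identical – the fine spatial detail all comes from the pan band, which is exactly the invented-information caveat discussed below.&lt;br /&gt;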
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;br /&gt;
&lt;br /&gt;
==== Georeferencing and orthorectification ====&lt;br /&gt;
&lt;br /&gt;
''Much of this applies outside optical as well – move?''&lt;br /&gt;
&lt;br /&gt;
A raw satellite image of land is an angled view of a rough surface. (Even nominally nadir-pointing satellites acquire imagery that is off-nadir toward its edges.) If you imagine riding on a satellite and looking off to, say, the west, you will see the eastern sides of hills and buildings at flatter angles than you see the western sides – if you can see them at all. To turn a raw image into something that is projected orthographically, like a map, you have to use a terrain model – a 3D map of the planet’s surface. Then you can use information about where the satellite was and the angle its sensor was pointing, and for each pixel in the output image, you can project it out to see at what latitude and longitude it must have intersected the ground. Then you move all the pixels to their coordinates in some convenient projection, and you’ve essentially taken the image out of perspective and made it orthographic.&lt;br /&gt;
&lt;br /&gt;
Except:&lt;br /&gt;
&lt;br /&gt;
* Earth’s surface is rough at every scale, and even “porous” or multiply defined in the sense that there are features like leafless trees that make it hard to define where the optical surface actually ''is'' at any given scale.&lt;br /&gt;
* There is no perfectly [https://en.wikipedia.org/wiki/Accuracy_and_precision accurate, precise], global, completely up-to-date terrain model of the Earth, let alone at a reasonable price. SRTM is pretty good but it’s only about 30 m, stops short of the Arctic, and is 20+ years out of date: there are entire lakes, highway cuts, and reclaimed islands that don’t exist in it.&lt;br /&gt;
* Satellites typically only know where they’re pointing to within the equivalent of about 10 pixels (which, to be fair, is usually an extremely small fraction of a degree), so the pointing data can only narrow things down, not actually tell you where you are.&lt;br /&gt;
* Continental drift means that a continent can move by easily 1 px over the lifetime of a high-end commercial satellite; a major earthquake can discontinuously distort a small region by several m.&lt;br /&gt;
* To properly pin down an image (i.e., to check the reported pointing angle), you need to know the exact 3D location of 3 visible points within it, and realistically more like 10.&lt;br /&gt;
* All these errors can combine.&lt;br /&gt;
* No matter what, you can’t recover occluded features, i.e. things you can’t see in the original data. If you want a high-res satellite image of something like a canyon, you realistically need half a dozen images at very specific angles, which is extremely hard.&lt;br /&gt;
&lt;br /&gt;
We could go on! Georeferencing and orthorectification is a difficult problem. It’s easier for lower-resolution satellites, because a given angular error comes out to fewer pixels. Also, survey-mode satellites like Landsat and Sentinel-2, which are nadir-pointing anyway, put a lot of effort into doing this well. Two Landsat scenes will almost always coregister to well within a pixel. Sentinel-2 is a little less reliable, especially toward the poles. Commercial imagery is often displaced by far more than you would think. One way to see this is to step back in Google Earth Pro’s history tool, especially somewhere relatively remote and rugged.&lt;br /&gt;
&lt;br /&gt;
Here’s a farm in Nepal: 28.553, 84.2415. Just step back in time and watch it jump around underneath the pin. If you really want to be scared, watch the cliff to its north. This is why imagery analysts who understand imagery pipelines rarely use a whole lot of significant digits in their coordinates! You don’t really know where anything on Earth is, in absolute terms, to better than a few meters at best if all you have to go on is a satellite image.&lt;br /&gt;
&lt;br /&gt;
==== Atmospheric correction ====&lt;br /&gt;
&lt;br /&gt;
Over long distances, even in clear weather, the atmosphere scatters and absorbs light. This is why distant hills are low-contrast and blueish (blue light is scattered more). What a satellite actually measures is called top-of-atmosphere radiance, or TOA. This is a measurement of nothing more than the amount of energy received per second, per pixel, per band. It can be measured pretty objectively. However, it’s often not what you want. For one thing, it’s too blue. For another, the amount of blueness and related effects will vary semi-randomly with atmospheric conditions (humidity, maybe dust storms or wildfire smoke, etc.) and predictably with season (sun distance and angle).&lt;br /&gt;
&lt;br /&gt;
Therefore, a reasonable desire is to basically normalize the sun and remove the effects of the atmosphere. What we’re trying to get to here is called surface reflectance (SR). The main issue is that we don’t know the true state of the atmosphere at the moment the image was acquired. The best we can do is to model it and subtract it out. This is one of ''the'' problems in remote sensing, and you could earn a PhD by improving [https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes#Table_of_models one of the major models] by a few percent.&lt;br /&gt;
&lt;br /&gt;
The good news is there’s a brutally simple method that works pretty well most of the time. Dark object subtraction means assuming that the darkest pixel in the image should be pure black. Therefore, if you subtract out however much blue (and green, and so on) signal is present in the darkest pixel, you will have canceled out all the haze. It’s annoying how well this works considering how basic it is. It’s roughly equivalent to the automatic contrast adjustment tool in an image editor like Photoshop, or, to be a little more exact, like using the eyedropper in the Levels tool to set the black point to the darkest pixel.&lt;br /&gt;
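Dark object subtraction can be sketched in a few lines. This version uses a low percentile rather than the literal darkest pixel, a common robustness tweak (not the only convention):&lt;br /&gt;

```python
import numpy as np

def dark_object_subtraction(band, percentile=0.1):
    """Subtract a near-minimum value from one band, assuming the darkest
    pixels 'should' be black and any signal there is atmospheric haze.
    A low percentile is more robust to noise than the literal minimum."""
    dark = np.percentile(band, percentile)
    return np.clip(band - dark, 0, None)

# Toy band: true signal plus a constant haze offset of 0.08.
rng = np.random.default_rng(3)
band = rng.uniform(0, 1, (100, 100)) * 0.7 + 0.08
corrected = dark_object_subtraction(band)
print(band.min(), corrected.min())  # the haze offset is mostly removed
```

Run per band, this is the "set the black point to the darkest pixel" trick described above, just automated.&lt;br /&gt;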
&lt;br /&gt;
Correction to reflectance may or may not attempt to correct for terrain effects (i.e., relighting the scene). Different pipelines have different conventions for how far to correct or what to call different kinds of correction.&lt;br /&gt;
&lt;br /&gt;
Atmospheric correction is usually not key for OSINT purposes, but any time you find yourself taking exact measurements of pixel values, you should at least know whether you’re working in TOA or in SR, and if SR, you should have a sense of what the pipeline was.&lt;br /&gt;
&lt;br /&gt;
==== Common optical sensor types ====&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# Pushbroom&lt;br /&gt;
# Whiskbroom&lt;br /&gt;
# Full-frame&lt;br /&gt;
&lt;br /&gt;
=== Thermal ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
=== Synthetic aperture radar ===&lt;br /&gt;
&lt;br /&gt;
Synthetic aperture radar, or SAR, creates images with radio waves in wavelengths around 1 cm to 1 m.&lt;br /&gt;
&lt;br /&gt;
As a very first approximation, a SAR image is comparable to an optical image that shows objects that reflect radio waves instead of those that reflect visible light.&lt;br /&gt;
&lt;br /&gt;
==== SAR is not just a regular camera but for radar instead of light ====&lt;br /&gt;
&lt;br /&gt;
Beyond the fact that both modalities create images, SAR works on completely different principles from standard optical imaging, and understanding it requires understanding those principles.&lt;br /&gt;
&lt;br /&gt;
This page will only lightly outline how SAR works; for the math, please refer to SERVIR’s [https://servirglobal.net/Global/Articles/Article/2674/sar-handbook-comprehensive-methodologies-for-forest-monitoring-and-biomass-estimation SAR Handbook] (forest-oriented but with solid fundamentals), the NOAA/NESDIS [https://www.sarusersmanual.com/ Synthetic Aperture Radar Marine User’s Manual], or another good text. Here we will only point out some key ideas. If you want to get full value out of SAR, you should expect to invest at least a few hours in learning how it actually works. There’s a reason SAR experts tend to be a bit snobbish about it: it’s complex, subtle, and highly rewarding.&lt;br /&gt;
&lt;br /&gt;
==== SAR is active ====&lt;br /&gt;
&lt;br /&gt;
Optical sensors are almost all passive: they use energy that objects are already reflecting (usually from the sun) or producing (for example, in the thermal infrared). In contrast, SAR is active: it sends out a pulse of radar energy, roughly analogous to the flash on a camera.&lt;br /&gt;
&lt;br /&gt;
==== SAR resolves space with time, not with focus ====&lt;br /&gt;
&lt;br /&gt;
SAR’s resolution is based on the timing of returning signals. It does not pass the energy it senses through a focusing lens or mirror the way an optical sensor does. This leads to properties that are highly unintuitive if you think of it as merely “optical but in a different frequency” – for example, it does not lose resolution with distance; there is no exact equivalent of perspective.&lt;br /&gt;
&lt;br /&gt;
==== Speckle ====&lt;br /&gt;
&lt;br /&gt;
Like a laser beam, a SAR signal interferes with itself. At a given moment and a given point, its waves may be canceling out or adding up. This means that a SAR image is intrinsically grainy or stippled-looking. This is not the same as sensor noise, because the effect is physically real and not a problem of errors in measurement. It can be mitigated by downsampling, averaging images from different “pings”, or applying despeckling filters. (A simple local median works reasonably well, but there’s a range of sophistication all the way up to sensor-specific filters based on physical models, extra inputs, fancy machine learning, etc.)&lt;br /&gt;
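&lt;br /&gt;
A local median is easy to sketch with plain numpy; the window logic below is a naive illustration, not a production despeckler.&lt;br /&gt;

```python
import numpy as np

def median_despeckle(img, k=3):
    """Naive k x k median filter (edges cropped), numpy only."""
    r = k // 2
    h, w = img.shape
    windows = [img[dy:h - 2 * r + dy, dx:w - 2 * r + dx]
               for dy in range(k) for dx in range(k)]
    return np.median(np.stack(windows), axis=0)

# A flat surface with a single bright speckle...
img = np.ones((5, 5))
img[2, 2] = 10.0
out = median_despeckle(img)
# ...comes out flat: the 3x3 median rejects the lone outlier entirely.
print(out)
```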
&lt;br /&gt;
==== Retroreflection and multiple reflection ====&lt;br /&gt;
&lt;br /&gt;
One consequence of SAR being active sensing is that it sees very bright returns from concave right angles made out of metal, which act as [https://en.wikipedia.org/wiki/Corner_reflector corner reflectors]. (Notice how road signs and markers seem to glow disproportionately in headlights – it’s because those are [https://en.wikipedia.org/wiki/Retroreflector retroreflectors] in the optical range.) Highly developed cities, for example, are very retroreflective to radar. This shows up especially where the angle of the sensor’s view aligns to a street grid, when it’s called the cardinal effect. (See, for example, [https://www.mdpi.com/2072-4292/12/7/1187/htm this academic paper], where they propose using retroreflection specifically to classify urban landcover. In general, there are very few radio-frequency corner reflectors in nature, and retroreflection is a good sign that you’re looking at a building, vehicle, etc.)&lt;br /&gt;
&lt;br /&gt;
Where the reflection is separated enough from the first reflecting surface that you can see both independently, we use the term multiple reflection (or mirroring or ghosting). This most often happens where tall buildings or bridges are next to or over water. A radio wave may hit the water, then a bridge, then return to the sensor; another may hit a bridge, then the water, and return to the sensor, and so on, and you’ll see images of multiple bridges.&lt;br /&gt;
&lt;br /&gt;
==== Layover and shadowing ====&lt;br /&gt;
&lt;br /&gt;
Layover (a.k.a. relief displacement) is an effect that makes objects at higher elevations appear closer to the sensor. This happens because the radio waves from the top of a vertical object arrive back at the sensor (which is above and to the side of the object) before the radio waves from its base. This is most obvious with truly vertical objects like radio towers and skyscrapers, but surfaces that have any vertical component (hills, for example) will show some degree of layover. Ultimately, layover comes from the difference between slant range, which is what the sensor actually measures – distance from the sensor – and ground range, which is what we tend to intuitively want or expect when we look at a map-like image.&lt;br /&gt;
&lt;br /&gt;
The painfully counterintuitive aspect, if you’re looking at a SAR image as if it were an ordinary optical image, is that layover goes in the opposite direction – buildings, for example, lean toward the sensor. For example, if you take a normal photo of a tall building from the south, it will cover the ground to its north. This feels normal because cameras, telescopes, etc., work on the same basic principle as the eye. But if you collect a SAR image of the same building from the south, it will cover the ground to its south. (Also, it won’t actually mask that ground, it will just add its signal in.)&lt;br /&gt;
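&lt;br /&gt;
The slant-range geometry is easy to check numerically. In this sketch the altitude, ground offset, and tower height are made-up round numbers:&lt;br /&gt;

```python
import math

H = 500_000.0   # sensor altitude, m (illustrative)
G = 600_000.0   # horizontal offset from the ground track, m (illustrative)
tower = 300.0   # height of a vertical structure, m

slant_base = math.hypot(H, G)          # distance from sensor to the base
slant_top = math.hypot(H - tower, G)   # distance from sensor to the top

# The top is closer in slant range, so it is imaged "earlier" and appears
# displaced toward the sensor -- layover.
print(slant_top < slant_base)         # True
print(round(slant_base - slant_top))  # roughly 190 m of displacement
```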
&lt;br /&gt;
Shadowing is the lack of data returned from surfaces facing away from the sensor. The shadowed side of terrain is stretched out as part of layover.&lt;br /&gt;
&lt;br /&gt;
SAR imagery can be terrain corrected. Basically, this is a process that takes (1) the satellite’s position and the characteristics of its instrument and (2) a DEM or other model of the terrain it was looking at, and uses these to warp the SAR imagery into map coordinates and account for shadowing. Whether this is worthwhile will depend on the quality of the terrain correction algorithm and the data you can give it, and on what you need to analyze.&lt;br /&gt;
&lt;br /&gt;
In general, be cautious with terrain correction, because it can never fully correct for all effects (e.g., BRDF of different landcovers), and it can magnify small problems in input data. Sometimes it’s better to have a strange-looking image that you know how to interpret than a “normalized” one with subtle errors.&lt;br /&gt;
&lt;br /&gt;
==== Clouds and many other materials are generally transparent to SAR ====&lt;br /&gt;
&lt;br /&gt;
SAR frequencies are typically chosen to cut through weather. While this is a massive advantage of SAR over optical (the average place on Earth is cloudy roughly half the time), it’s also not absolute. Heavy rain, for example, can show up as ghostly features in some bands, so be on the lookout for it. If you see something you can’t interpret that might be weather-related, check the weather for the place at the time of image acquisition!&lt;br /&gt;
&lt;br /&gt;
More generally – beyond the specific case of water vapor in air – SAR interacts with materials differently than light does. For example, it reflects more off liquid water, so you can’t see into shallows with SAR the way you can with optical. On the other hand, it interacts less with certain very dry materials, so it can cut through loose sand, dead vegetation, and so on. (For example, SAR is used to map ancient river systems under the Sahara [https://www.mdpi.com/2073-4441/9/3/194/htm because it can image bedrock under loose, dry sand].) The details of SAR signal interaction depend on wavelength, angle, and other factors; if you’re doing more than casual interpretation of data from a given sensor, it’s a good idea to look it up and familiarize yourself.&lt;br /&gt;
&lt;br /&gt;
==== Polarimetry and interferometry ====&lt;br /&gt;
&lt;br /&gt;
Thus far we have only considered backscatter images: maps of the intensity of reflected radio energy. But a good deal of SAR’s value is beyond this kind of data. As well as recording how much energy is in reflected radio waves, SAR sensors characterize the radio waves themselves.&lt;br /&gt;
&lt;br /&gt;
Let’s use Sentinel-1 as an example for polarimetry. S1 sends radio waves in the vertical polarization, abbreviated V, and records them in both vertical and horizontal, or H, polarizations. In practice, this means that when you download an S1 frame in the usual way, you see two images, labeled VV (where the sensor transmitted V and measured V) and VH (where it transmitted V and measured H). The ratio of the two bands therefore tells you (in a general, statistical way, within the constraints of speckling) how much the surface at a given pixel tends to return a radio signal at that frequency and angle in the same polarization.&lt;br /&gt;
&lt;br /&gt;
Why do we care? Because direct reflection and corner reflectors tend to return waves at the same polarization (for Sentinel-1, always VV), while volumes that scatter waves return proportionally more cross-polarized (VH) waves. The second category is mainly vegetation and soil, while the first is corner reflectors, metal, and so on – proportionally more artificial surfaces. You can literally get a PhD in the nuances of SAR polarimetry, but at the most basic level, it tells you something about surface properties that no other sensor would.&lt;br /&gt;
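&lt;br /&gt;
As a minimal sketch (with invented backscatter values), the co/cross ratio is usually looked at in decibels:&lt;br /&gt;

```python
import numpy as np

# Hypothetical calibrated backscatter (linear power, not dB) for 3 pixels.
vv = np.array([0.30, 0.05, 0.50])  # co-polarized channel
vh = np.array([0.03, 0.02, 0.01])  # cross-polarized channel

# High VV/VH suggests direct or double-bounce scattering (built surfaces);
# low VV/VH suggests volume scattering (vegetation, some soils).
ratio_db = 10.0 * np.log10(vv / vh)
print(ratio_db.round(1))  # [10.  4. 17.]
```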
&lt;br /&gt;
Interferometry with SAR, or inSAR, compares wave phase between observations. The phase of a wave is where it is in its cycle when received. Using sound as an example, measuring the phase of a sound at a given moment means recording not just its volume and pitch but also where the wave is in its cycle – say, 23% of the way into its high-pressure half, or exactly at the lowest-pressure point.&lt;br /&gt;
&lt;br /&gt;
Suppose we make a SAR image of an area and record not only the amplitude but also the phase of the signal at every pixel. Now, after some time, the satellite’s orbit repeats, and at exactly the same moment in this new orbit (and therefore at exactly the same point in space relative to Earth), we take the same image again. There’s been some change over time that might represent, say, the soil drying out, a road being built, or a tree falling over. But the change in phase over relatively large, coherent regions can be interpreted as the surface getting nearer or farther away by (potentially) very small fractions of a wavelength – on the order of cm. This is an idealized version of inSAR.&lt;br /&gt;
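&lt;br /&gt;
The phase-to-displacement arithmetic is simple: for repeat-pass inSAR the signal travels out and back, so a line-of-sight displacement ''d'' changes the phase by 4π''d''/λ. A sketch using an approximate Sentinel-1 wavelength:&lt;br /&gt;

```python
import math

wavelength = 0.0555          # Sentinel-1 C-band, m (approximate)
phase_change = math.pi / 2   # a quarter-cycle phase shift, radians

# Two-way path: phase_change = 4 * pi * displacement / wavelength
displacement = phase_change * wavelength / (4 * math.pi)
print(f"{displacement * 1000:.2f} mm")  # 6.94 mm along the line of sight
```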
&lt;br /&gt;
Geologists use this to [https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021GL093043 map earthquakes], but you can also use it for drought (because dry land sags), [https://site.tre-altamira.com/long-term-satellite-study-over-the-london-basin/ tunneling], [https://www.researchgate.net/figure/InSAR-measured-subsidence-rates-on-the-Mosul-dam-Iraq-Negative-values-indicate-motion_fig1_311451580 dam] and [https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-1/Satellites_confirm_sinking_of_San_Francisco_tower building] subsidence, [https://www.nature.com/articles/s41598-020-74957-2 underground explosion monitoring], and so on – in theory, anything that changes the distance between the satellite and the surface. You can even use decoherence (the breakdown of continuity between observations, which makes inSAR hard) for [https://www.academia.edu/44939771/Damage_detection_using_SAR_coherence_statistical_analysis_application_to_Beirut_Lebanon damage detection].&lt;br /&gt;
&lt;br /&gt;
When inSAR works, it’s like magic. You can pick up extremely subtle effects over large areas. It does have limits, like that you can only measure displacement towards or away from the satellite(s), which for SAR is always at least somewhat to the side, which is not necessarily in the direction you actually care about (say, up/down). And as you would expect, it tends to require a lot of very good data (because, for example, satellite orbits are never absolutely perfect repeats), expertise, and minutes to days of fine-tuning.&lt;br /&gt;
&lt;br /&gt;
=== LIDAR ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# 3D (survey-style) lidar&lt;br /&gt;
# 2D (transect-style) lidar&lt;br /&gt;
&lt;br /&gt;
== Image delivery ==&lt;br /&gt;
&lt;br /&gt;
=== Processing levels ===&lt;br /&gt;
&lt;br /&gt;
In theory, data processing levels are standard across the industry. In practice, different providers tend to make up their own definitions as necessary, and you should refer to source-specific documentation. But typically, the commonly seen processing levels are:&lt;br /&gt;
&lt;br /&gt;
'''Level 0''': Unprocessed data, more or less as downlinked to the ground station. Generally not sold or publicly released.&lt;br /&gt;
&lt;br /&gt;
'''Level 1''': Basic data in sensor units, for example TOA radiance. Often has a letter suffix with source-specific meaning, e.g., T to indicate a terrain-corrected version.&lt;br /&gt;
&lt;br /&gt;
'''Level 2''': Derived data in geophysical units, for example surface reflectance. Has been through high-level processing (e.g., atmospheric correction) that contains estimation or modeling.&lt;br /&gt;
&lt;br /&gt;
As a rule of thumb, use level 1 if the imagery itself is the focus and you want to analyze the data in a custom way; use level 2 if you just want something that works out of the box to achieve some further goal. But again, the practical meaning of the levels depends on the dataset, so check to make sure you’re getting what you want.&lt;br /&gt;
&lt;br /&gt;
=== Formats and projections ===&lt;br /&gt;
&lt;br /&gt;
Image data generally comes in formats optimized for large payloads and good metadata. These include NetCDF, NITF, JPEG2000, and GeoTIFF. GDAL, which is included in QGIS, can read virtually any reasonable format. If you have a choice, GeoTIFF is usually the best.&lt;br /&gt;
&lt;br /&gt;
A geographic projection is basically an invertible function (a reversible, one-to-one relationship) from a sphere to a plane. (If you know enough about geodesy to be saying “the spheroid, actually” right now, go read something more appropriate to your level of expertise 😉.) In other words, for a longitude and a latitude on Earth, a given projection gives you a corresponding x and y that you use to store and display the data.&lt;br /&gt;
&lt;br /&gt;
Typical georeferencing metadata says: (1) here is the projection of the data in this file, and (2) here is where the data in this file lies on the abstract 2D plane defined by that projection.&lt;br /&gt;
&lt;br /&gt;
You may also encounter data that is not projected in any strictly defined way. This might be as simple as a photo taken with a phone out a plane window. In theory you could define a projection for it if you knew parameters like the 3D GPS location of the phone, the angle it was pointed at, its camera’s field of view, and the small distortions introduced by its lens. But in practice it’s usually easier to find known points in the image and “tie down” or georeference the image based on those points. Given at least 3 but ideally more known points, you(r software) can warp the image into some standard projection. It’s deriving an arbitrary projection from pixel space to geographical coordinates by running a regression on the pixel-to-location pairs you provide. These known points are called ground control points, or GCPs. Some data, like Sentinel-1 SAR, is provided unprojected but with GCPs. This leaves more work for the user, but also more flexibility if you want to adjust the GCPs.&lt;br /&gt;
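&lt;br /&gt;
The regression in question can be as simple as a least-squares affine fit. A sketch with made-up GCPs (real georeferencing tools usually offer higher-order polynomial or thin-plate-spline warps as well):&lt;br /&gt;

```python
import numpy as np

# Ground control points: pixel (col, row) -> (lon, lat). Invented values.
gcps_px = np.array([[0, 0], [1000, 0], [0, 1000], [1000, 1000]], float)
gcps_geo = np.array([[30.00, 10.00], [30.10, 10.00],
                     [30.00, 9.90], [30.10, 9.90]])

# Design matrix [col, row, 1]: each output is an affine fn of pixel coords.
A = np.hstack([gcps_px, np.ones((len(gcps_px), 1))])
coeffs, *_ = np.linalg.lstsq(A, gcps_geo, rcond=None)

def pixel_to_geo(col, row):
    return np.array([col, row, 1.0]) @ coeffs

print(pixel_to_geo(500, 500))  # image center -> [30.05  9.95]
```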
&lt;br /&gt;
There are several standard ways to represent projections, notably WKT, proj, and EPSG codes. We’ll give EPSG codes here.&lt;br /&gt;
&lt;br /&gt;
Probably the most common projection you will see for raw data is Universal Transverse Mercator, or UTM. It’s actually a family of projections with the same formula but different parameters, each adapted to a different meridional slice of Earth’s surface. These UTM zones are named with numbers and north/south hemispheres: Paris is in UTM zone 31N, Geneva is in 32N, and Sydney is in 56S. (If you’ve used the MGRS grid system, this should sound familiar, but it’s not identical.) Within a zone, UTM is very close to equal-area and conformal, which are the most important properties for a projection if you want to do analytical work. Equal-area means 1 km² is the same number of pixels at any point in the projection, and conformal means that 1 km is the same number of pixels in every direction from any given point within the projection. (On a non-conformal map, circles appear as ovals, squares are rectangles, etc. This is a massive pain in the ass.) UTM is EPSG:32XYY, where X is 6 for N and 7 for S, and YY is the zone number, so for example 13S is EPSG:32713.&lt;br /&gt;
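&lt;br /&gt;
That numbering rule is mechanical enough to write down:&lt;br /&gt;

```python
def utm_epsg(zone, hemisphere):
    """EPSG code for a WGS84 / UTM zone: 326xx north, 327xx south."""
    if not 1 <= zone <= 60:
        raise ValueError("UTM zones run 1-60")
    return (32600 if hemisphere.upper() == "N" else 32700) + zone

print(utm_epsg(31, "N"))  # Paris  -> 32631
print(utm_epsg(56, "S"))  # Sydney -> 32756
print(utm_epsg(13, "S"))  # -> 32713
```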
&lt;br /&gt;
For display on standard web maps, people often use web Mercator, a.k.a. spherical Mercator, which is not equal-area at the global scale, but is conformal. This is why web maps make Greenland far too big, but it remains approximately the right shape. For local analysis, web Mercator is fine (equivalent to UTM, actually), and can be a decent choice if you understand the issues with scale across large areas. EPSG:3857.&lt;br /&gt;
&lt;br /&gt;
The other projection you’ll see the most is equirectangular or plate carrée, which uses longitude and latitude directly as x and y coordinates on a plane. It is neither equal-area nor conformal, and basically only exists because the math is easy. It’s often used by people who should know better. Its non-conformality means that any time you’re working near the poles, everything is squeezed, and you’re either overzooming one dimension, losing data in the other, or both. If you just want to scatterplot some points as quickly as possible, equirectangular is fine, but avoid it when doing anything with imagery. EPSG:4326. (Note that this is the EPSG of WGS84, the geodetic standard that defines things like the prime meridian. Many, many other projections refer to WGS84 in their definitions. But using WGS84 as a projection itself, instead of as an ingredient in a projection, is the equirectangular projection.)&lt;br /&gt;
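&lt;br /&gt;
The squeeze is easy to quantify: in plate carrée a degree of longitude is always the same number of pixels, but the ground distance it covers shrinks with the cosine of latitude. A quick check (using a spherical Earth radius):&lt;br /&gt;

```python
import math

def meters_per_degree_lon(lat_deg, earth_radius=6_371_000.0):
    """Ground distance covered by one degree of longitude at a latitude."""
    return math.cos(math.radians(lat_deg)) * math.pi * earth_radius / 180.0

print(round(meters_per_degree_lon(0)))   # 111195 m at the equator
print(round(meters_per_degree_lon(60)))  # 55597 m at 60 degrees -- half
```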
&lt;br /&gt;
The details of projections are notoriously tricky; it’s hard to work with them in a strictly correct and optimal way at all times. It’s the kind of topic that attracts pedantry and flamewars, unfortunately. Here’s some advice, none of it ironclad:&lt;br /&gt;
&lt;br /&gt;
# Most imagery data, if it’s projected at all, is already in an appropriate projection as it arrives from the data provider. If you can, leave it as-is. Every reprojection involves resampling the data, which generally loses information.&lt;br /&gt;
# You should rarely have to explicitly think about projections. The whole point of a projection is to let you think in terms of pixels and/or meters, and if that’s not happening, something is wrong. Make sure you’re taking full advantage of your tools’ ability to handle these things automatically; if you find yourself fighting your projection, step back and look for the underlying problem.&lt;br /&gt;
# If you’re working on a multi-source project, choose a suitable projection at the start and project all data into it ''once'', when you import it.&lt;br /&gt;
# Most pain around projections comes from accidentally mixing projections. Don’t do that.&lt;br /&gt;
# The local UTM is usually a reasonable choice.&lt;br /&gt;
&lt;br /&gt;
=== Bundles ===&lt;br /&gt;
&lt;br /&gt;
Imagery is most often supplied in bundles, which are directories with image data files, usually separated by band or polarization (at least at level 1), and text (XML, json, etc.) metadata files. Some analysis tools will have plugins that will open specific types of bundles as single objects, automatically applying calibration data found in the metadata and so forth. In other situations you might open the image file and have to parse the metadata with your own code or by hand. If you’re getting to know a new imagery source, going through and understanding the purpose of everything delivered in a bundle is a great way to start.&lt;br /&gt;
&lt;br /&gt;
=== DN and PN ===&lt;br /&gt;
&lt;br /&gt;
Image formats generally store integers, since they losslessly compress better and are often easier to work with than floating point numbers. However, this presents a problem if, for example, the units being represented are reflectance, which ranges from 0 to 1. If we round every reflectance value to either 0 or 1, we’re delivering 1-bit data that’s probably close to totally useless. To address this, we might scale up to, say, 0 through 100 and say that instead of recording reflectance fraction, we’re recording reflectance percentage – fraction × 100. That still leaves us with less than 7 bits of radiometric resolution, though. Really, we’d like to be able to scale our values into an arbitrary range, maybe 0 through 65,535 to make full use of a 16-bit image, and send it with some metadata that tells how to get it back into some absolute or physically meaningful unit. You could even change the scaling factor per scene to optimize for bright v. dark, for example. And this is what providers generally do. The values actually stored in the image format are called digital numbers, or DN, and the values after scaling (typically with a multiplicative and an additive coefficient) are physical numbers, or PN.&lt;br /&gt;
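&lt;br /&gt;
A sketch of the scaling step (the coefficients and DN values here are invented; real ones come from the product metadata):&lt;br /&gt;

```python
import numpy as np

gain, offset = 2.0e-5, -0.1  # hypothetical metadata coefficients
dn = np.array([0, 5000, 55000], dtype=np.uint16)  # digital numbers

# PN = DN * gain + offset; cast first so uint16 can't overflow or truncate.
reflectance = dn.astype(np.float64) * gain + offset
print(reflectance.round(6))  # [-0.1  0.   1. ]
```

Note that slightly negative values at the dark end, as here, do show up in real products; they’re an artifact of the scaling and correction, not a physical reflectance.&lt;br /&gt;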
&lt;br /&gt;
Not every provider varies the scaling. Sentinel-2 level 1C data, for example, has a globally constant scaling factor, which means different bands have a defined relationship even if you read raw, unscaled pixels out of them, which is great. Per-product scaling is the more common approach, though. Basically, don’t assume that pixels actually mean anything with an absolute definition, especially compared to pixels from another band or scene, unless you know that they’re PN.&lt;br /&gt;
&lt;br /&gt;
For most OSINT-relevant analysis, working in DN is a [https://en.wikipedia.org/wiki/Venial_sin venial sin] at worst and often justifiable. But it is useful to know what it means and to recognize situations where you should convert to PN. Any tool designed to work with remote sensing data will at least have some affordance for DN to PN scaling, and, again, may be able to parse the parameters out of a bundle (or in-image-file metadata) and apply them transparently so you never have to think about it.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2207</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2207"/>
		<updated>2022-07-20T22:07:40Z</updated>

		<summary type="html">&lt;p&gt;Vruba: /* Spectral */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this is the heritage of Cold War IMINT workers wanting shadows to estimate structure heights. (If you image around noon, you get places with vertical shadows in the tropics. This gives you depth perception problems, like you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all commercial satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to do one full orbit. This is another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: about 400 km or 250 mi from the ground track to either side, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimensions we think of most often are spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
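&lt;br /&gt;
A quick mixed-pixel sketch of the napkin case, with hypothetical reflectances (fresh asphalt around 0.05, a white napkin around 0.90):&lt;br /&gt;
&lt;br /&gt;
```python
def mixed_pixel(frac_target, r_target, r_background):
    """Area-weighted reflectance of a pixel partly covered by a target."""
    return frac_target * r_target + (1 - frac_target) * r_background

frac = (10 / 25) ** 2        # a 10 cm napkin covers 16% of a 25 cm pixel
pixel = mixed_pixel(frac, 0.90, 0.05)
print(round(pixel, 3))       # 0.186: nearly 4x brighter than the background
```
&lt;br /&gt;
With good radiometric resolution, a jump like that above a dark, uniform background is easily detectable even though the napkin itself is unresolved.&lt;br /&gt;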
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor” we understand this by default to mean spatial resolution. This is also called ground sample distance (GSD) or ground resolved distance (GRD), and it is the size, on the ground, of the pixels of the data. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length dimension: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. If we think about standard optical instruments, which are basically telescopes with CCDs, they do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – a fraction of the arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth at a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) and looking on nadir (straight down). That’s a best case. When looking to the side, at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
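&lt;br /&gt;
The angular-to-ground conversion can be sketched roughly like this (a flat-Earth simplification; the IFOV value is hypothetical, chosen to give 0.5 m at nadir from 700 km):&lt;br /&gt;
&lt;br /&gt;
```python
import math

def cross_track_gsd_m(altitude_m, ifov_rad, off_nadir_deg=0.0):
    """Approximate cross-track GSD for a small instantaneous field of
    view (IFOV). Off-nadir looks stretch the footprint by roughly
    1/cos^2(theta): one cosine for the longer slant range, one for
    the oblique angle at which the ray meets the ground."""
    theta = math.radians(off_nadir_deg)
    return altitude_m * ifov_rad / math.cos(theta) ** 2

ifov = 0.5 / 700_000                                   # radians per pixel
print(round(cross_track_gsd_m(700_000, ifov), 3))      # 0.5 at nadir
print(round(cross_track_gsd_m(700_000, ifov, 30), 3))  # ~0.667 at 30 deg
```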
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked consideration on spatial resolution is that pixel area is the square of pixel side length, and area is what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is more than 3× as clear as 30 m imagery, all else being equal.&lt;br /&gt;
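&lt;br /&gt;
The arithmetic, as a one-liner:&lt;br /&gt;
&lt;br /&gt;
```python
def pixels_per_square_meter(gsd_m):
    """How many pixels cover one square meter of ground."""
    return (1.0 / gsd_m) ** 2

print(pixels_per_square_meter(1.0))    # 1.0
print(pixels_per_square_meter(0.5))    # 4.0
print(pixels_per_square_meter(0.25))   # 16.0
# 10 m v. 30 m imagery: 9x the pixel count, not 3x
print(pixels_per_square_meter(10) / pixels_per_square_meter(30))
```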
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered less by the atmosphere than visible light, and far less than blue light in particular. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These will be called the panchromatic or pan and (collectively) multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, for example, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits; newer and better ones are typically 12 to 14.&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks on the Sahara, despite the tracks being made out of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford a lot of highest spatial resolution imagery. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
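&lt;br /&gt;
A quick simulation of that idealized case (uncorrelated unit-standard-deviation noise; numpy assumed):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
# n bands of pure unit-standard-deviation noise, uncorrelated.
bands = rng.standard_normal((n, 100_000))
avg = bands.mean(axis=0)
print(round(avg.std(), 3))   # close to 1/sqrt(16) = 0.25
```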
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding error: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of talking about effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is using air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh scattering from aerosols, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. Note that this range, 1,567 to 1,651 nm, is 1.567 to 1.651 μm. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
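&lt;br /&gt;
As an example of the band-ratio approach, NDVI is just (NIR − red) / (NIR + red). A sketch with hypothetical surface reflectances:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, in [-1, 1]."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Healthy grass is bright in NIR and dark in red; bare soil is duller
# and flatter across the two bands.
print(round(float(ndvi(0.50, 0.05)), 3))   # ~0.818 (vigorous vegetation)
print(round(float(ndvi(0.30, 0.20)), 3))   # 0.2 (bare soil)
```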
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue bands of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, and therefore you should favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
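&lt;br /&gt;
Mechanically, applying a band combination is just stacking arrays in the chosen order. A minimal sketch (band numbers and pixel values are hypothetical):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def composite(bands, combo):
    """Stack sensor bands (keyed by band number) into an RGB array.
    combo=(5, 4, 3) puts band 5 in red, band 4 in green, band 3 in blue."""
    return np.dstack([bands[b] for b in combo])

bands = {3: np.full((2, 2), 0.1),    # green band of a toy 2x2 scene
         4: np.full((2, 2), 0.2),    # red
         5: np.full((2, 2), 0.6)}    # NIR
rgb = composite(bands, (5, 4, 3))    # a classic color-infrared combo
print(rgb.shape)                     # (2, 2, 3)
```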
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will be (typically) some multiple of 2 larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
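&lt;br /&gt;
One of the very basic algorithms is the Brovey transform, which scales each color band by the ratio of the pan band to the mean of the color bands. A sketch (assuming the color bands have already been resampled to the pan grid):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def brovey(pan, r, g, b, eps=1e-12):
    """Brovey-transform pansharpening: multiply each color band by
    pan / intensity, so the sharp pan detail carries into the colors."""
    intensity = (r + g + b) / 3.0
    ratio = pan / (intensity + eps)
    return r * ratio, g * ratio, b * ratio

pan = np.array([[0.4]])   # pan is brighter than the color-band mean here
r, g, b = brovey(pan, np.array([[0.3]]), np.array([[0.2]]), np.array([[0.1]]))
print(r.item(), g.item(), b.item())   # each band scaled up to match pan
```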
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;br /&gt;
&lt;br /&gt;
==== Georeferencing and orthorectification ====&lt;br /&gt;
&lt;br /&gt;
''Much of this applies outside optical as well – move?''&lt;br /&gt;
&lt;br /&gt;
A raw satellite image of land is an angled view of a rough surface. (Even nominally nadir-pointing satellites acquire imagery that is off-nadir toward its edges.) If you imagine riding on a satellite and looking off to, say, the west, you will see the eastern sides of hills and buildings at flatter angles than you see the western sides – if you can see them at all. To turn a raw image into something that is projected orthographically, like a map, you have to use a terrain model – a 3D map of the planet’s surface. Then, using information about where the satellite was and the angle its sensor was pointing, you can project each pixel of the output image out to see at what latitude and longitude it must have intersected the ground. Then you move all the pixels to their coordinates in some convenient projection, and you’ve essentially taken the image out of perspective and made it orthographic.&lt;br /&gt;
&lt;br /&gt;
Except:&lt;br /&gt;
&lt;br /&gt;
* Earth’s surface is rough at every scale, and even “porous” or multiply defined in the sense that there are features like leafless trees that make it hard to define where the optical surface actually ''is'' at any given scale.&lt;br /&gt;
* There is no perfectly [https://en.wikipedia.org/wiki/Accuracy_and_precision accurate, precise], global, completely up-to-date terrain model of the Earth, let alone at a reasonable price. SRTM is pretty good but it’s only about 30 m, stops short of the Arctic, and is 20+ years out of date: there are entire lakes, highway cuts, and reclaimed islands that don’t exist in it.&lt;br /&gt;
* Satellites typically only know where they’re pointing to within the equivalent of about 10 pixels (which, to be fair, is usually an extremely small fraction of a degree), so the pointing data can only narrow things down, not actually tell you where you are.&lt;br /&gt;
* Continental drift means that a continent can move by easily 1 px over the lifetime of a high-end commercial satellite; a major earthquake can discontinuously distort a small region by several m.&lt;br /&gt;
* To properly pin down an image (i.e., to check the reported pointing angle), you need to know the exact 3D location of 3 visible points within it, and realistically more like 10.&lt;br /&gt;
* All these errors can combine.&lt;br /&gt;
* No matter what, you can’t recover occluded features, i.e. things you can’t see in the original data. If you want a high-res satellite image of something like a canyon, you realistically need half a dozen images at very specific angles, which is extremely hard.&lt;br /&gt;
&lt;br /&gt;
We could go on! Georeferencing and orthorectification is a difficult problem. It’s easier for lower-resolution satellites, because a given angular error comes out to fewer pixels. Also, survey-mode satellites like Landsat and Sentinel-2, which are nadir-pointing anyway, put a lot of effort into doing this well. Two Landsat scenes will almost always coregister to well within a pixel. Sentinel-2 is a little less reliable, especially toward the poles. Commercial imagery is often displaced by far more than you would think. One way to see this is to step back in Google Earth Pro’s history tool, especially somewhere relatively remote and rugged.&lt;br /&gt;
&lt;br /&gt;
Here’s a farm in Nepal: 28.553, 84.2415. Just step back in time and watch it jump around underneath the pin. If you really want to be scared, watch the cliff to its north. This is why imagery analysts who understand imagery pipelines rarely use a whole lot of significant digits in their coordinates! You don’t really know where anything on Earth is, in absolute terms, to within more than a few meters at best if all you have to go on is a satellite image.&lt;br /&gt;
&lt;br /&gt;
==== Atmospheric correction ====&lt;br /&gt;
&lt;br /&gt;
Over long distances, even in clear weather, the atmosphere scatters and absorbs light. This is why distant hills are low-contrast and blueish (blue light is scattered more). What a satellite actually measures is called top-of-atmosphere radiance, or TOA. This is a measurement of nothing more than the amount of energy received per second, per pixel, per band. It can be measured pretty objectively. However, it’s often not what you want. For one thing, it’s too blue. For another, the amount of blueness and related effects will vary semi-randomly with atmospheric conditions (humidity, maybe dust storms or wildfire smoke, etc.) and predictably with season.&lt;br /&gt;
&lt;br /&gt;
Therefore, a reasonable desire is to basically normalize the sun and remove the effects of the atmosphere. What we’re trying to model here is called surface reflectance (SR). The main issue is that we don’t know the true state of the atmosphere at the moment the image was acquired. The best we can do is to model it and subtract it out. This is one of ''the'' problems in remote sensing, and you could earn a PhD by improving [https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes#Table_of_models one of the major models] by a few percent.&lt;br /&gt;
&lt;br /&gt;
The good news is there’s a brutally simple method that works pretty well most of the time. Dark object subtraction means assuming that the darkest pixel in the image should be pure black. Therefore, if you subtract out however much blue (and green, and so on) signal is present in the darkest pixel, you will have canceled out all the haze. It’s annoying how well this works considering how basic it is. It’s roughly equivalent to the auto-adjust tool in an image editor like Photoshop, or, to be a little more exact, a little like using the eyedropper in the Levels tool to set the black point to the darkest pixel.&lt;br /&gt;
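&lt;br /&gt;
A sketch of dark object subtraction over a multiband array (values hypothetical; the band-wise minimum is assumed to be the haze signal):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def dark_object_subtraction(image):
    """Subtract each band's darkest pixel value from that band,
    assuming the darkest pixel ought to be pure black.
    image shape: (bands, height, width)."""
    mins = image.min(axis=(1, 2), keepdims=True)
    return image - mins

hazy = np.array([[[0.30, 0.35],
                  [0.32, 0.50]]])       # one band with a 0.30 haze floor
corrected = dark_object_subtraction(hazy)
print(corrected[0])                     # darkest pixel is now 0.0
```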
&lt;br /&gt;
Correction to reflectance may or may not attempt to correct for terrain effects (i.e., relighting the scene). Different pipelines have different conventions for how far to correct or what to call different kinds of correction.&lt;br /&gt;
&lt;br /&gt;
Atmospheric correction is usually not key for OSINT purposes, but any time you find yourself taking exact measurements of pixel values, you should at least know whether you’re working in TOA or in SR, and if SR, you should have a sense of what the pipeline was.&lt;br /&gt;
&lt;br /&gt;
==== Common optical sensor types ====&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# Pushbroom&lt;br /&gt;
# Whiskbroom&lt;br /&gt;
# Full-frame&lt;br /&gt;
&lt;br /&gt;
=== Thermal ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
=== Synthetic aperture radar ===&lt;br /&gt;
&lt;br /&gt;
Synthetic aperture radar, or SAR, creates images with radio waves in wavelengths around 1 cm to 1 m.&lt;br /&gt;
&lt;br /&gt;
As a very first approximation, a SAR image is comparable to an optical image that shows objects that reflect radio waves instead of those that reflect visible light.&lt;br /&gt;
&lt;br /&gt;
==== SAR is not just a regular camera that uses radar instead of light ====&lt;br /&gt;
&lt;br /&gt;
Beyond the fact that both modalities create images, SAR works on completely different principles from standard optical imaging, and understanding it requires understanding those principles.&lt;br /&gt;
&lt;br /&gt;
This page will only lightly outline how SAR works; for the math, please refer to SERVIR’s [https://servirglobal.net/Global/Articles/Article/2674/sar-handbook-comprehensive-methodologies-for-forest-monitoring-and-biomass-estimation SAR Handbook] (forest-oriented but with solid fundamentals), the NOAA/NESDIS [https://www.sarusersmanual.com/ Synthetic Aperture Radar Marine User’s Manual], or another good text. Here we will only point out some key ideas. If you want to get full value out of SAR, you should expect to invest at least a few hours in learning how it actually works. There’s a reason SAR experts tend to be a bit snobbish about it: it’s complex, subtle, and highly rewarding.&lt;br /&gt;
&lt;br /&gt;
==== SAR is active ====&lt;br /&gt;
&lt;br /&gt;
Optical sensors are almost all passive: they use energy that objects are already reflecting (usually from the sun) or producing (for example, in the thermal infrared). In contrast, SAR is active: it sends out a pulse of radar energy, roughly analogous to the flash on a camera.&lt;br /&gt;
&lt;br /&gt;
==== SAR resolves space with time, not with focus ====&lt;br /&gt;
&lt;br /&gt;
SAR’s resolution is based on the timing of returning signals. It does not pass the energy it senses through a focusing lens or mirror the way an optical sensor does. This leads to properties that are highly unintuitive if you think of it as merely “optical but at a different frequency” – for example, it does not lose resolution with distance; there is no exact equivalent of perspective.&lt;br /&gt;
&lt;br /&gt;
==== Speckle ====&lt;br /&gt;
&lt;br /&gt;
Like a laser beam, a SAR signal interferes with itself. At a given moment and a given point, its waves may be canceling out or adding up. This means that a SAR image is intrinsically grainy or stippled-looking. This is not the same as sensor noise, because the effect is physically real and not a problem of errors in measurement. It can be mitigated by downsampling, averaging images from different “pings”, or applying despeckling filters. (A simple local median works reasonably well, but there’s a range of sophistication all the way up to sensor-specific filters based on physical models, extra inputs, fancy machine learning, etc.)&lt;br /&gt;
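&lt;br /&gt;
The simple local median mentioned above can be sketched in pure numpy (the k × k window is clamped at the image edges):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def median_despeckle(img, k=3):
    """Basic k x k local median filter for a single-band SAR image."""
    h, w = img.shape
    out = np.empty((h, w))
    r = k // 2
    for i in range(h):
        for j in range(w):
            window = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            out[i, j] = np.median(window)
    return out

rng = np.random.default_rng(1)
# Multiplicative speckle on a flat scene of mean brightness 1.0.
scene = rng.exponential(1.0, (32, 32))
smoothed = median_despeckle(scene)
print(scene.std(), smoothed.std())   # the filtered image is much calmer
```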
&lt;br /&gt;
==== Retroreflection and multiple reflection ====&lt;br /&gt;
&lt;br /&gt;
One consequence of SAR being active sensing is that it sees very bright returns from concave right angles made out of metal, which act as [https://en.wikipedia.org/wiki/Corner_reflector corner reflectors]. (Notice how road signs and markers seem to glow disproportionately in headlights – it’s because those are [https://en.wikipedia.org/wiki/Retroreflector retroreflectors] in the optical range.) Highly developed cities, for example, are very retroreflective to radar. This shows up especially where the angle of the sensor’s view aligns to a street grid, when it’s called the cardinal effect. (See, for example, [https://www.mdpi.com/2072-4292/12/7/1187/htm this academic paper], where they propose using retroreflection specifically to classify urban landcover. In general, there are very few radio-frequency corner reflectors in nature, and retroreflection is a good sign that you’re looking at a building, vehicle, etc.)&lt;br /&gt;
&lt;br /&gt;
Where the reflection is separated enough from the first reflecting surface that you can see both independently, we use the term multiple reflection (or mirroring or ghosting). This most often happens where tall buildings or bridges are next to or over water. A radio wave may hit the water, then a bridge, then return to the sensor; another may hit a bridge, then the water, and return to the sensor, and so on, and you’ll see images of multiple bridges.&lt;br /&gt;
&lt;br /&gt;
==== Layover and shadowing ====&lt;br /&gt;
&lt;br /&gt;
Layover (a.k.a. relief displacement) is an effect that makes objects at higher elevations appear closer to the sensor. This happens because the radio waves from the top of a vertical object arrive back at the sensor (which is above and to the side of the object) before the radio waves from its base. This is most obvious with truly vertical objects like radio towers and skyscrapers, but surfaces that have any vertical component (hills, for example) will show some degree of layover. Ultimately, layover comes from the difference between slant range, which is what the sensor actually measures – distance from the sensor – and ground range, which is what we tend to intuitively want or expect when we look at a map-like image.&lt;br /&gt;
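&lt;br /&gt;
For intuition, the slant/ground range relationship in the simplest flat-Earth toy model (made-up numbers, ignoring curvature and terrain) is just the Pythagorean theorem:&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Toy slant-range to ground-range conversion for a flat Earth.
# The altitude and slant range below are made-up illustrative values.
altitude_km = 700.0   # platform height above the (flat) ground
slant_km = 900.0      # what the sensor actually measures to a scatterer
ground_km = math.sqrt(slant_km ** 2 - altitude_km ** 2)
# ground_km is about 565.7 km. A scatterer higher up, at the same slant
# range, maps to a shorter ground range - which is layover in miniature.
```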
&lt;br /&gt;
The painfully counterintuitive aspect, if you’re looking at a SAR image as if it were an ordinary optical image, is that layover goes in the opposite direction – buildings, for example, lean toward the sensor. If you take a normal photo of a tall building from the south, it will cover the ground to its north. This feels normal because cameras, telescopes, etc., work on the same basic principle as the eye. But if you collect a SAR image of the same building from the south, it will cover the ground to its south. (Also, it won’t actually mask that ground, it will just add its signal in.)&lt;br /&gt;
&lt;br /&gt;
Shadowing is the lack of data returned from surfaces facing away from the sensor. The shadowed side of terrain is stretched out as part of layover.&lt;br /&gt;
&lt;br /&gt;
SAR imagery can be terrain corrected. Basically, this is a process that takes (1) the satellite’s position and the characteristics of its instrument and (2) a DEM or other model of the terrain it was looking at, and uses them to warp the SAR imagery into map coordinates and account for shadowing. Whether this is worthwhile will depend on the quality of the terrain correction algorithm and the data you can give it, and on what you need to analyze.&lt;br /&gt;
&lt;br /&gt;
In general, be cautious with terrain correction, because it can never fully correct for all effects (e.g., BRDF of different landcovers), and it can magnify small problems in input data. Sometimes it’s better to have a strange-looking image that you know how to interpret than a “normalized” one with subtle errors.&lt;br /&gt;
&lt;br /&gt;
==== Clouds and many other materials are generally transparent to SAR ====&lt;br /&gt;
&lt;br /&gt;
SAR frequencies are typically chosen to cut through weather. While this is a massive advantage of SAR over optical (the average place on Earth is cloudy roughly half the time), it’s also not absolute. Heavy rain, for example, can show up as ghostly features in some bands, so be on the lookout for it. If you see something you can’t interpret that might be weather-related, check the weather for the place at the time of image acquisition!&lt;br /&gt;
&lt;br /&gt;
More generally – beyond the specific case of water vapor in air – SAR interacts with materials differently than light does. For example, it reflects more off liquid water, so you can’t see into shallows with SAR the way you can with optical. On the other hand, it interacts less with certain very dry materials, so it can cut through loose sand, dead vegetation, and so on. (For example, SAR is used to map ancient river systems under the Sahara [https://www.mdpi.com/2073-4441/9/3/194/htm because it can image bedrock under loose, dry sand].) The details of SAR signal interaction depend on wavelength, angle, and other factors; if you’re doing more than casual interpretation of data from a given sensor, it’s a good idea to look it up and familiarize yourself.&lt;br /&gt;
&lt;br /&gt;
==== Polarimetry and interferometry ====&lt;br /&gt;
&lt;br /&gt;
Thus far we have only considered backscatter images: maps of the intensity of reflected radio energy. But a good deal of SAR’s value is beyond this kind of data. As well as recording how much energy is in reflected radio waves, SAR sensors characterize the radio waves themselves.&lt;br /&gt;
&lt;br /&gt;
Let’s use Sentinel-1 as an example for polarimetry. S1 sends radio waves in the vertical polarization, abbreviated V, and records them in both vertical and horizontal, or H, polarizations. In practice, this means that when you download an S1 frame in the usual way, you see two images, labeled VV (where the sensor transmitted V and measured V) and VH (where it transmitted V and measured H). The ratio of the two bands therefore tells you (in a general, statistical way, within the constraints of speckling) how much the surface at a given pixel tends to return a radio signal at that frequency and angle in the same polarization.&lt;br /&gt;
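&lt;br /&gt;
The band ratio described here takes a couple of lines in numpy. The arrays below are synthetic stand-ins for co-registered VV and VH intensity images, and the dB convention is one common choice, not the only one:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Synthetic stand-ins for co-registered VV and VH backscatter intensity.
rng = np.random.default_rng(1)
vv = rng.gamma(4.0, 0.05, size=(100, 100))  # co-polarized intensity
vh = rng.gamma(4.0, 0.01, size=(100, 100))  # cross-polarized intensity

eps = 1e-12  # guard against dividing by (or taking the log of) zero
ratio_db = 10.0 * np.log10((vv + eps) / (vh + eps))
# Higher values suggest direct or corner reflection; lower values suggest
# volume scattering (vegetation, soil, etc.).
```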
&lt;br /&gt;
Why do we care? Because direct reflection and corner reflectors tend to return waves at the same polarization (for Sentinel-1, always VV), while volumes that scatter waves return proportionally more cross-polarized (VH) waves. The second category is mainly vegetation and soil, while the first is corner reflectors, metal, and so on – proportionally more artificial surfaces. You can literally get a PhD in the nuances of SAR polarimetry, but at the most basic level, it tells you something about surface properties that no other sensor would.&lt;br /&gt;
&lt;br /&gt;
Interferometry with SAR, or inSAR, compares wave phase between observations. The phase of a wave is where it is in its cycle when received. Using sound as an example, measuring the phase of a sound at a given moment means knowing not just its volume and pitch but also where the wave is in its cycle – say, 23% of the way into its high-pressure half, or exactly at the lowest-pressure point.&lt;br /&gt;
&lt;br /&gt;
Suppose we make a SAR image of an area and record not only the amplitude but also the phase of the signal at every pixel. Now, after some time, the satellite’s orbit repeats, and at exactly the same moment in this new orbit (and therefore at exactly the same point in space relative to Earth), we take the same image again. There’s been some change over time that might represent, say, the soil drying out, a road being built, or a tree falling over. But the change in phase over relatively large, coherent regions can be interpreted as the surface getting nearer or farther away by (potentially) very small fractions of a wavelength – on the order of cm. This is an idealized version of inSAR.&lt;br /&gt;
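&lt;br /&gt;
The idealized arithmetic is short: because the signal travels out and back, a 2π phase change corresponds to half a wavelength of line-of-sight motion. A sketch, using roughly Sentinel-1’s C-band wavelength and an invented phase change:&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Line-of-sight displacement from an interferometric phase change:
# d = delta_phi * wavelength / (4 * pi). The factor of 4 (not 2) is the
# two-way path: half a wavelength of motion gives a full 2*pi cycle.
wavelength_m = 0.0555            # roughly Sentinel-1's C-band wavelength
delta_phi = math.pi / 2          # invented quarter-cycle phase change
d_los_m = delta_phi * wavelength_m / (4 * math.pi)
# d_los_m is about 6.9 mm toward (or away from) the satellite.
```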
&lt;br /&gt;
Geologists use this to [https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021GL093043 map earthquakes], but you can also use it for drought (because dry land sags), [https://site.tre-altamira.com/long-term-satellite-study-over-the-london-basin/ tunneling], [https://www.researchgate.net/figure/InSAR-measured-subsidence-rates-on-the-Mosul-dam-Iraq-Negative-values-indicate-motion_fig1_311451580 dam] and [https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-1/Satellites_confirm_sinking_of_San_Francisco_tower building] subsidence, [https://www.nature.com/articles/s41598-020-74957-2 underground explosion monitoring], and so on – in theory, anything that changes the distance between the satellite and the surface. You can even use decoherence (the breakdown of continuity between observations, which makes inSAR hard) for [https://www.academia.edu/44939771/Damage_detection_using_SAR_coherence_statistical_analysis_application_to_Beirut_Lebanon damage detection].&lt;br /&gt;
&lt;br /&gt;
When inSAR works, it’s like magic. You can pick up extremely subtle effects over large areas. It does have limits: you can only measure displacement toward or away from the satellite, whose line of sight is always at least somewhat to the side – not necessarily the direction you actually care about (say, up/down). And as you would expect, it tends to require a lot of very good data (because, for example, satellite orbits are never absolutely perfect repeats), expertise, and minutes to days of fine-tuning.&lt;br /&gt;
&lt;br /&gt;
=== LIDAR ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# 3D (survey-style) lidar&lt;br /&gt;
# 2D (transect-style) lidar&lt;br /&gt;
&lt;br /&gt;
== Image delivery ==&lt;br /&gt;
&lt;br /&gt;
=== Processing levels ===&lt;br /&gt;
&lt;br /&gt;
In theory, data processing levels are standard across the industry. In practice, different providers tend to make up their own definitions as necessary, and you should refer to source-specific documentation. But typically, the commonly seen processing levels are:&lt;br /&gt;
&lt;br /&gt;
'''Level 0''': Unprocessed data, more or less as downlinked to the ground station. Generally not sold or publicly released.&lt;br /&gt;
&lt;br /&gt;
'''Level 1''': Basic data in sensor units, for example TOA radiance. Often has a letter suffix with source-specific meaning, e.g., T to indicate a terrain-corrected version.&lt;br /&gt;
&lt;br /&gt;
'''Level 2''': Derived data in geophysical units, for example surface reflectance. Has been through high-level processing (e.g., atmospheric correction) that contains estimation or modeling.&lt;br /&gt;
&lt;br /&gt;
As a rule of thumb, use level 1 if the imagery itself is the focus and you want to analyze the data in a custom way; use level 2 if you just want something that works out of the box to achieve some further goal. But again, the practical meaning of the levels depends on the dataset, so check to make sure you’re getting what you want.&lt;br /&gt;
&lt;br /&gt;
=== Formats and projections ===&lt;br /&gt;
&lt;br /&gt;
Image data generally comes in formats optimized for large payloads and good metadata. These include NetCDF, NITF, JPEG2000, and GeoTIFF. GDAL, which is included in QGIS, can read virtually any reasonable format. If you have a choice, GeoTIFF is usually the best.&lt;br /&gt;
&lt;br /&gt;
A geographic projection is basically an invertible function (a reversible, one-to-one relationship) from a sphere to a plane. (If you know enough about geodesy to be saying “the spheroid, actually” right now, go read something more appropriate to your level of expertise 😉.) In other words, for a longitude and a latitude on Earth, a given projection gives you a corresponding x and y that you use to store and display the data.&lt;br /&gt;
&lt;br /&gt;
Typical georeferencing metadata says: (1) here is the projection of the data in this file, and (2) here is where the data in this file lies on the abstract 2D plane defined by that projection.&lt;br /&gt;
&lt;br /&gt;
You may also encounter data that is not projected in any strictly defined way. This might be as simple as a photo taken with a phone out a plane window. In theory you could define a projection for it if you knew parameters like the 3D GPS location of the phone, the angle it was pointed at, its camera’s field of view, and the small distortions introduced by its lens. But in practice it’s usually easier to find known points in the image and “tie down” or georeference the image based on those points. Given at least 3 but ideally more known points, you(r software) can warp the image into some standard projection. It’s deriving an arbitrary projection from pixel space to geographical coordinates by running a regression on the pixel-to-location pairs you provide. These known points are called ground control points, or GCPs. Some data, like Sentinel-1 SAR, is provided unprojected but with GCPs. This leaves more work for the user, but also more flexibility if you want to adjust the GCPs.&lt;br /&gt;
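&lt;br /&gt;
For intuition, the simplest version of that regression – an affine fit from pixel coordinates to map coordinates – can be sketched like this. The GCPs are invented, and real tools (GDAL, QGIS, etc.) offer higher-order models and proper resampling:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Invented GCPs: (pixel col, pixel row, map x, map y).
gcps = [
    (  0,   0, 500000.0, 4100000.0),
    (100,   0, 501000.0, 4100050.0),
    (  0, 100, 500050.0, 4099000.0),
    (100, 100, 501050.0, 4099050.0),
]
# Least-squares fit of an affine model: (col, row, 1) -> (x, y).
A = np.array([[c, r, 1.0] for c, r, _, _ in gcps])
b = np.array([[x, y] for _, _, x, y in gcps])
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

def pixel_to_map(col, row):
    return tuple(np.array([col, row, 1.0]) @ coeffs)
```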
&lt;br /&gt;
There are several standard ways to represent projections, notably WKT, proj, and EPSG codes. We’ll give EPSG codes here.&lt;br /&gt;
&lt;br /&gt;
Probably the most common projection you will see for raw data is Universal Transverse Mercator, or UTM. It’s actually a family of projections with the same formula but different parameters, each adapted to a different meridional slice of Earth’s surface. These UTM zones are named with numbers and north/south hemispheres: Paris is in UTM zone 31N, Geneva is in 32N, and Sydney is in 56S. (If you’ve used the MGRS grid system, this should sound familiar, but it’s not identical.) Within a zone, UTM is very close to equal-area and conformal, which are the most important properties for a projection if you want to do analytical work. Equal-area means 1 km² is the same number of pixels at any point in the projection, and conformal means that 1 km is the same number of pixels in every direction from any given point within the projection. (On a non-conformal map, circles appear as ovals, squares are rectangles, etc. This is a massive pain in the ass.) UTM is EPSG:32XYY, where X is 6 for N and 7 for S, and YY is the zone number, so for example 13S is EPSG:32713.&lt;br /&gt;
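&lt;br /&gt;
The 32XYY pattern makes UTM EPSG codes easy to construct. A minimal helper (no validation of the 1–60 zone range):&lt;br /&gt;
&lt;br /&gt;
```python
def utm_epsg(zone, hemisphere):
    """EPSG code for a WGS84 UTM zone: 326YY for north, 327YY for south."""
    base = 32600 if hemisphere.upper() == "N" else 32700
    return base + zone

# Examples from the text: Paris is in 31N, Sydney in 56S, and 13S is 32713.
```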
&lt;br /&gt;
For display on standard web maps, people often use web Mercator, a.k.a. spherical Mercator, which is not equal-area at the global scale, but is conformal. This is why web maps make Greenland far too big, but it remains approximately the right shape. For local analysis, web Mercator is fine (comparable to UTM in practice), and can be a decent choice if you understand the issues with scale across large areas. EPSG:3857.&lt;br /&gt;
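&lt;br /&gt;
The spherical Mercator math itself is short. Below is the standard forward formula; the 1/cos(latitude) scale factor is exactly why Greenland balloons while keeping roughly the right shape:&lt;br /&gt;
&lt;br /&gt;
```python
import math

R = 6378137.0  # radius of the sphere web Mercator uses, in meters

def web_mercator(lon_deg, lat_deg):
    """Forward spherical (web) Mercator projection, as in EPSG:3857."""
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

# Local scale factor is 1 / cos(latitude): 1.0 at the equator, about 3.2
# at 72 degrees north - inflated in size but conformal (shapes preserved).
```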
&lt;br /&gt;
The other projection you’ll see the most is equirectangular or plate carrée, which uses longitude and latitude directly as x and y coordinates on a plane. It is neither equal-area nor conformal, and basically only exists because the math is easy. It’s often used by people who should know better. Its non-conformality means that any time you’re working near the poles, everything is squeezed, and you’re either overzooming one dimension, losing data in the other, or both. If you just want to scatterplot some points as quickly as possible, equirectangular is fine, but avoid it when doing anything with imagery. EPSG:4326. (Note that this is the EPSG of WGS84, the geodetic standard that defines things like the prime meridian. Many, many other projections refer to WGS84 in their definitions. But using WGS84 as a projection itself, instead of as an ingredient in a projection, is the equirectangular projection.)&lt;br /&gt;
&lt;br /&gt;
The details of projections are notoriously tricky; it’s hard to work with them in a strictly correct and optimal way at all times. It’s the kind of topic that attracts pedantry and flamewars, unfortunately. Here’s some advice, none of it ironclad:&lt;br /&gt;
&lt;br /&gt;
# Most imagery data, if it’s projected at all, is already in an appropriate projection as it arrives from the data provider. If you can, leave it as-is. Every reprojection involves resampling the data, which generally loses information.&lt;br /&gt;
# You should rarely have to explicitly think about projections. The whole point of a projection is to let you think in terms of pixels and/or meters; if you find yourself fighting your projection, something is wrong. Make sure you’re taking full advantage of your tools’ ability to handle these things automatically.&lt;br /&gt;
# If you’re working on a multi-source project, choose a suitable projection at the start and project all data into it ''once'', when you import it.&lt;br /&gt;
# Most pain around projections comes from accidentally mixing projections. Don’t do that.&lt;br /&gt;
# The local UTM is usually a reasonable choice.&lt;br /&gt;
&lt;br /&gt;
=== Bundles ===&lt;br /&gt;
&lt;br /&gt;
Imagery is most often supplied in bundles, which are directories with image data files, usually separated by band or polarization (at least at level 1), and text (XML, json, etc.) metadata files. Some analysis tools will have plugins that will open specific types of bundles as single objects, automatically applying calibration data found in the metadata and so forth. In other situations you might open the image file and have to parse the metadata with your own code or by hand. If you’re getting to know a new imagery source, going through and understanding the purpose of everything delivered in a bundle is a great way to start.&lt;br /&gt;
&lt;br /&gt;
=== DN and PN ===&lt;br /&gt;
&lt;br /&gt;
Image formats generally store integers, since they losslessly compress better and are often easier to work with than floating point numbers. However, this presents a problem if, for example, the units being represented are reflectance, which ranges from 0 to 1. If we round every reflectance value to either 0 or 1, we’re delivering 1-bit data that’s probably close to totally useless. To address this, we might scale up to, say, 0 through 100 and say that instead of recording reflectance fraction, we’re recording reflectance percentage – fraction × 100. That still leaves us with less than 7 bits of radiometric resolution, though. Really, we’d like to be able to scale our values into an arbitrary range, maybe 0 through 65,535 to make full use of a 16-bit image, and send it with some metadata that tells how to get it back into some absolute or physically meaningful unit. You could even change the scaling factor per scene to optimize for bright v. dark, for example. And this is what providers generally do. The values actually stored in the image format are called digital numbers, or DN, and the values after scaling (typically with a multiplicative and an additive coefficient) are physical numbers, or PN.&lt;br /&gt;
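&lt;br /&gt;
In code, the scaling is a one-liner each way. The gain and offset below are invented; in practice you’d read them from the product metadata:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Invented gain/offset; real values come from the bundle's metadata.
gain, offset = 2.0e-05, -0.1

dn = np.array([0, 5000, 10000, 32767], dtype=np.uint16)  # stored values
reflectance = dn.astype(np.float64) * gain + offset       # DN to PN

# And back, e.g. before writing out a 16-bit product:
dn_roundtrip = np.round((reflectance - offset) / gain).astype(np.uint16)
```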
&lt;br /&gt;
Not all providers do this. For example, Sentinel-2 level 1C data has a globally constant scaling factor, which means different bands have a defined relationship even if you read raw, unscaled pixels out of them, which is great. Variable scaling is the more common approach, though. Basically, don’t assume that pixels actually mean anything with an absolute definition, especially compared to pixels from another band or scene, unless you know that they’re PN.&lt;br /&gt;
&lt;br /&gt;
For most OSINT-relevant analysis, working in DN is a [https://en.wikipedia.org/wiki/Venial_sin venial sin] at worst and often justifiable. But it is useful to know what it means and to recognize situations where you should convert to PN. Any tool designed to work with remote sensing data will at least have some affordance for DN to PN scaling, and, again, may be able to parse the parameters out of a bundle (or in-image-file metadata) and apply them transparently so you never have to think about it.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2206</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2206"/>
		<updated>2022-07-20T22:05:39Z</updated>

		<summary type="html">&lt;p&gt;Vruba: /* Formats and projections */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this is the heritage of Cold War IMINT workers wanting shadows to estimate structure heights. (If you image around noon, the sun is nearly overhead in the tropics and shadows all but disappear. This gives you depth perception problems, like you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to do one full orbit – another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
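&lt;br /&gt;
The desktop-globe comparison in the altitude bullet is easy to check:&lt;br /&gt;
&lt;br /&gt;
```python
# Scale Landsat 9's 705 km altitude down to a 30 cm desktop globe.
earth_radius_km = 6371.0      # mean Earth radius
globe_radius_mm = 150.0       # a 30 cm globe has a 15 cm radius
mm_per_km = globe_radius_mm / earth_radius_km
orbit_mm = 705.0 * mm_per_km
# orbit_mm is about 16.6 mm - the "17 mm (2/3 inch)" in the text.
```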
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: roughly 400 km or 250 mi to either side of the ground track, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
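&lt;br /&gt;
The “light trig” referenced above, as a flat-Earth sanity check (Earth curvature and terrain ignored, which shifts the numbers somewhat at larger angles):&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Ground distance reachable at a given off-nadir look angle, flat Earth.
h_km = 705.0                          # Landsat 9's altitude, from the text
theta = math.radians(30.0)            # maximum look angle under discussion
per_side_km = h_km * math.tan(theta)  # about 407 km from the ground track
full_width_km = 2.0 * per_side_km     # about 814 km edge to edge
```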
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimension(s) we think of most often is spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor” we understand this by default to mean spatial resolution. This is also called ground sample distance (GSD) or ground resolved distance (GRD), and is the size of the data’s pixels on the ground. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length dimension: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. If we think about standard optical instruments, which are basically telescopes with CCDs, they do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – a fraction of the arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth at a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) and looking on nadir (straight down). That’s a best case. When looking to the side, at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
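&lt;br /&gt;
As a rough flat-Earth sketch of that best-case-at-nadir effect: for a sensor with fixed angular resolution, slant range grows as 1/cos(θ) off-nadir, and the ground projection in the look direction adds another factor of 1/cos(θ). The numbers below are illustrative, not from any particular sensor:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def gsd_off_nadir(gsd_nadir_m, theta_deg):
    """Approximate pixel footprint at an off-nadir angle, flat Earth.

    Across the look direction the footprint grows as 1/cos(theta);
    along the look direction it grows as 1/cos(theta)^2.
    """
    c = math.cos(math.radians(theta_deg))
    return gsd_nadir_m / c, gsd_nadir_m / (c * c)

# e.g. a nominal 0.5 m sensor at 30 degrees off-nadir has pixels of
# roughly 0.58 m by 0.67 m - no longer square, and no longer 0.5 m.
```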
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked consideration on spatial resolution is that pixel area is the square of pixel side length, and it’s what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is more than 3× as clear as 30 m imagery, all else being equal.&lt;br /&gt;
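&lt;br /&gt;
The arithmetic above can be checked in a couple of lines (a minimal sketch, assuming square pixels):&lt;br /&gt;
&lt;br /&gt;
```python
# Pixels covering one square meter of ground at various ground sample
# distances (GSD): interpretability scales with pixel count (area),
# not with pixel edge length.
for gsd_m in [1.0, 0.5, 0.25]:
    pixels = (1.0 / gsd_m) ** 2
    print(f"{gsd_m:.2f} m GSD -> {pixels:.0f} pixels per square meter")
```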
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered less by the atmosphere than visible light, and especially less than blue light. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These are called the panchromatic (or pan) band and, collectively, the multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, by contrast, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits; newer and better ones are typically 12 to 14.&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks on the Sahara, despite the tracks being made out of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford much imagery at the highest spatial resolutions. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
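&lt;br /&gt;
The 1/sqrt(n) claim is easy to verify numerically (a sketch under the same idealizing assumption of independent, unit-variance noise in each band):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Averaging n bands whose noise is independent with unit standard deviation
# should reduce the noise standard deviation to about 1/sqrt(n).
rng = np.random.default_rng(0)
n = 4
bands = rng.normal(loc=0.0, scale=1.0, size=(n, 100_000))
avg = bands.mean(axis=0)
print(avg.std())  # close to 1/sqrt(4) = 0.5
```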
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding errors or aliasing: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of talking about effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
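&lt;br /&gt;
The bit depth side of this can be sketched directly (hypothetical values scaled to [0, 1); the step/sqrt(12) noise figure is the standard result for uniform quantization error):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def quantize(signal: np.ndarray, bits: int) -> np.ndarray:
    """Quantize values in [0, 1) to 2**bits discrete levels."""
    levels = 2 ** bits
    return np.floor(signal * levels) / levels

# Quantization noise has standard deviation step/sqrt(12), step = 2**-bits,
# so four extra bits shrink this noise floor by a factor of 16.
rng = np.random.default_rng(2)
x = rng.random(100_000)
err8 = (x - quantize(x, 8)).std()
err12 = (x - quantize(x, 12)).std()
print(err8 / err12)  # close to 16
```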
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is using air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh scattering from aerosols, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. Note that this range is equivalent to 1.567 to 1.651 μm. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
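&lt;br /&gt;
As a concrete example, NDVI is just a normalized band ratio. A minimal implementation (assuming reflectance arrays scaled to 0–1):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    # Guard against divide-by-zero on fully dark pixels.
    return (nir - red) / np.maximum(nir + red, 1e-9)

# Healthy vegetation reflects NIR strongly and absorbs red, so NDVI runs high.
print(ndvi(np.array([0.50]), np.array([0.08])))  # a high positive value
```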
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue bands of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, and therefore you should favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
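&lt;br /&gt;
Mechanically, composing a false-color image is just channel stacking. A sketch with hypothetical band arrays (the names and the SWIR2/NIR/green choice are illustrative, not tied to any particular sensor’s band numbering):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Hypothetical 100x100 band arrays already scaled to 0..1.
rng = np.random.default_rng(0)
swir2 = rng.random((100, 100))
nir = rng.random((100, 100))
green = rng.random((100, 100))

# Longest wavelength goes in the red channel, shortest in blue (spectral order).
false_color = np.dstack([swir2, nir, green])
print(false_color.shape)  # (100, 100, 3)
```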
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will typically be 2 to 4 times larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
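&lt;br /&gt;
On the basic end of that range sits the Brovey transform, which scales each multispectral band by the ratio of pan intensity to multispectral intensity. A minimal sketch (assuming the multispectral bands have already been upsampled to the pan grid and are radiometrically comparable to the pan band):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def brovey_pansharpen(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Toy Brovey-style pansharpening.

    ms: (H, W, 3) multispectral bands, upsampled to the pan grid.
    pan: (H, W) panchromatic band.
    Each band is scaled by the ratio of pan to the mean multispectral
    intensity, so spatial detail comes from pan while the band ratios
    (i.e., the color) are preserved.
    """
    intensity = ms.mean(axis=2, keepdims=True)
    return ms * (pan[..., np.newaxis] / np.maximum(intensity, 1e-9))

# Flat gray multispectral + brighter pan: output takes pan's brightness.
ms = np.full((2, 2, 3), 0.5)
pan = np.full((2, 2), 0.6)
print(brovey_pansharpen(ms, pan)[0, 0])
```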
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;br /&gt;
&lt;br /&gt;
==== Georeferencing and orthorectification ====&lt;br /&gt;
&lt;br /&gt;
''Much of this applies outside optical as well – move?''&lt;br /&gt;
&lt;br /&gt;
A raw satellite image of land is an angled view of a rough surface. (Even nominally nadir-pointing satellites acquire imagery that is off-nadir toward its edges.) If you imagine riding on a satellite and looking off to, say, the west, you will see the eastern sides of hills and buildings at flatter angles than you see the western sides – if you can see them at all. To turn a raw image into something that is projected orthographically, like a map, you have to use a terrain model – a 3D map of the planet’s surface. Then you can use information about where the satellite was and the angle its sensor was pointing, and for each pixel in the output image, you can project it out to see at what latitude and longitude it must have intersected the ground. Then you move all the pixels to their coordinates in some convenient projection, and you’ve essentially taken the image out of perspective and made it orthographic.&lt;br /&gt;
&lt;br /&gt;
Except:&lt;br /&gt;
&lt;br /&gt;
* Earth’s surface is rough at every scale, and even “porous” or multiply defined in the sense that there are features like leafless trees that make it hard to define where the optical surface actually ''is'' at any given scale.&lt;br /&gt;
* There is no perfectly [https://en.wikipedia.org/wiki/Accuracy_and_precision accurate, precise], global, completely up-to-date terrain model of the Earth, let alone at a reasonable price. SRTM is pretty good but it’s only about 30 m, stops short of the Arctic, and is 20+ years out of date: there are entire lakes, highway cuts, and reclaimed islands that don’t exist in it.&lt;br /&gt;
* Satellites typically only know where they’re pointing to within the equivalent of about 10 pixels (which, to be fair, is usually an extremely small fraction of a degree), so the pointing data can only narrow things down, not actually tell you where you are.&lt;br /&gt;
* Continental drift means that a continent can move by easily 1 px over the lifetime of a high-end commercial satellite; a major earthquake can discontinuously distort a small region by several m.&lt;br /&gt;
* To properly pin down an image (i.e., to check the reported pointing angle), you need to know the exact 3D location of 3 visible points within it, and realistically more like 10.&lt;br /&gt;
* All these errors can combine.&lt;br /&gt;
* No matter what, you can’t recover occluded features, i.e. things you can’t see in the original data. If you want a high-res satellite image of something like a canyon, you realistically need half a dozen images at very specific angles, which is extremely hard.&lt;br /&gt;
&lt;br /&gt;
We could go on! Georeferencing and orthorectification is a difficult problem. It’s easier for lower-resolution satellites, because a given angular error comes out to fewer pixels. Also, survey-mode satellites like Landsat and Sentinel-2, which are nadir-pointing anyway, put a lot of effort into doing this well. Two Landsat scenes will almost always coregister to well within a pixel. Sentinel-2 is a little less reliable, especially toward the poles. Commercial imagery is often displaced by far more than you would think. One way to see this is to step back in Google Earth Pro’s history tool, especially somewhere relatively remote and rugged.&lt;br /&gt;
&lt;br /&gt;
Here’s a farm in Nepal: 28.553, 84.2415. Just step back in time and watch it jump around underneath the pin. If you really want to be scared, watch the cliff to its north. This is why imagery analysts who understand imagery pipelines rarely use a whole lot of significant digits in their coordinates! You don’t really know where anything on Earth is, in absolute terms, to within more than a few meters at best if all you have to go on is a satellite image.&lt;br /&gt;
&lt;br /&gt;
==== Atmospheric correction ====&lt;br /&gt;
&lt;br /&gt;
Over long distances, even in clear weather, the atmosphere scatters and absorbs light. This is why distant hills are low-contrast and blueish (blue light is scattered more). What a satellite actually measures is called top-of-atmosphere radiance, or TOA. This is a measurement of nothing more than the amount of energy received per second, per pixel, per band. It can be measured pretty objectively. However, it’s often not what you want. For one thing, it’s too blue. For another, the amount of blueness and related effects will vary semi-randomly with atmospheric conditions (humidity, maybe dust storms or wildfire smoke, etc.) and predictably with season.&lt;br /&gt;
&lt;br /&gt;
Therefore, a reasonable desire is to basically normalize the sun and remove the effects of the atmosphere. What we’re trying to model here is called surface reflectance (SR). The main issue is that we don’t know the true state of the atmosphere at the moment the image was acquired. The best we can do is to model it and subtract it out. This is one of ''the'' problems in remote sensing, and you could earn a PhD by improving [https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes#Table_of_models one of the major models] by a few percent.&lt;br /&gt;
&lt;br /&gt;
The good news is there’s a brutally simple method that works pretty well most of the time. Dark object subtraction means assuming that the darkest pixel in the image should be pure black. Therefore, if you subtract out however much blue (and green, and so on) signal is present in the darkest pixel, you will have canceled out all the haze. It’s annoying how well this works considering how basic it is. It’s roughly equivalent to the auto-adjust tool in an image editor like Photoshop, or, to be a little more exact, a little like using the eyedropper in the Levels tool to set the black point to the darkest pixel.&lt;br /&gt;
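&lt;br /&gt;
Dark object subtraction really is as brutally simple as described. A per-band sketch with hypothetical pixel values:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def dark_object_subtraction(band: np.ndarray) -> np.ndarray:
    """Assume the darkest pixel 'should' be pure black, so its value is
    entirely additive haze; subtract that value from the whole band."""
    return band - band.min()

hazy = np.array([[0.21, 0.55], [0.30, 0.95]])  # hypothetical blue-band values
corrected = dark_object_subtraction(hazy)
print(corrected.min())  # 0.0: the darkest pixel is now black
```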
&lt;br /&gt;
Correction to reflectance may or may not attempt to correct for terrain effects (i.e., relighting the scene). Different pipelines have different conventions for how far to correct or what to call different kinds of correction.&lt;br /&gt;
&lt;br /&gt;
Atmospheric correction is usually not key for OSINT purposes, but any time you find yourself taking exact measurements of pixel values, you should at least know whether you’re working in TOA or in SR, and if SR, you should have a sense of what the pipeline was.&lt;br /&gt;
&lt;br /&gt;
==== Common optical sensor types ====&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# Pushbroom&lt;br /&gt;
# Whiskbroom&lt;br /&gt;
# Full-frame&lt;br /&gt;
&lt;br /&gt;
=== Thermal ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
=== Synthetic aperture radar ===&lt;br /&gt;
&lt;br /&gt;
Synthetic aperture radar, or SAR, creates images with radio waves in wavelengths around 1 cm to 1 m.&lt;br /&gt;
&lt;br /&gt;
As a very first approximation, a SAR image is comparable to an optical image that shows objects that reflect radio waves instead of those that reflect visible light.&lt;br /&gt;
&lt;br /&gt;
==== SAR is not just a regular camera but for radar instead of light ====&lt;br /&gt;
&lt;br /&gt;
Beyond the fact that both modalities create images, SAR works on completely different principles from standard optical imaging, and understanding it requires understanding those principles.&lt;br /&gt;
&lt;br /&gt;
This page will only lightly outline how SAR works; for the math, please refer to SERVIR’s [https://servirglobal.net/Global/Articles/Article/2674/sar-handbook-comprehensive-methodologies-for-forest-monitoring-and-biomass-estimation SAR Handbook] (forest-oriented but with solid fundamentals), the NOAA/NESDIS [https://www.sarusersmanual.com/ Synthetic Aperture Radar Marine User’s Manual], or another good text. Here we will only point out some key ideas. If you want to get full value out of SAR, you should expect to invest at least a few hours in learning how it actually works. There’s a reason SAR experts tend to be a bit snobbish about it: it’s complex, subtle, and highly rewarding.&lt;br /&gt;
&lt;br /&gt;
==== SAR is active ====&lt;br /&gt;
&lt;br /&gt;
Optical sensors are almost all passive: they use energy that objects are already reflecting (usually from the sun) or producing (for example, in the thermal infrared). In contrast, SAR is active: it sends out a pulse of radar energy, roughly analogous to the flash on a camera.&lt;br /&gt;
&lt;br /&gt;
==== SAR resolves space with time, not with focus ====&lt;br /&gt;
&lt;br /&gt;
SAR’s resolution is based on the timing of returning signals. It does not pass the energy it senses through a focusing lens or mirror the way an optical sensor does. This leads to properties that are highly unintuitive if you think of it as merely “optical but in a different frequency” – for example, it does not lose resolution with distance; there is no exact equivalent of perspective.&lt;br /&gt;
&lt;br /&gt;
==== Speckle ====&lt;br /&gt;
&lt;br /&gt;
Like a laser beam, a SAR signal interferes with itself. At a given moment and a given point, its waves may be canceling out or adding up. This means that a SAR image is intrinsically grainy or stippled-looking. This is not the same as sensor noise, because the effect is physically real and not a problem of errors in measurement. It can be mitigated by downsampling, averaging images from different “pings”, or applying despeckling filters. (A simple local median works reasonably well, but there’s a range of sophistication all the way up to sensor-specific filters based on physical models, extra inputs, fancy machine learning, etc.)&lt;br /&gt;
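&lt;br /&gt;
A simple local median, as mentioned, makes a reasonable first-pass despeckle. A numpy-only sketch (using multiplicative gamma-distributed noise as a common idealized speckle model):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def median_despeckle(img: np.ndarray) -> np.ndarray:
    """Naive 3x3 local median filter: replace each pixel with the median of
    its neighborhood, which suppresses speckle at some cost in detail."""
    padded = np.pad(img, 1, mode="edge")
    windows = np.stack([padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                        for dy in range(3) for dx in range(3)])
    return np.median(windows, axis=0)

# Gamma-distributed intensity is a standard single-look speckle model.
rng = np.random.default_rng(1)
speckled = rng.gamma(shape=1.0, scale=1.0, size=(64, 64))
print(median_despeckle(speckled).std() < speckled.std())  # smoother output
```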
&lt;br /&gt;
==== Retroreflection and multiple reflection ====&lt;br /&gt;
&lt;br /&gt;
One consequence of SAR being active sensing is that it sees very bright returns from concave right angles made out of metal, which act as [https://en.wikipedia.org/wiki/Corner_reflector corner reflectors]. (Notice how road signs and markers seem to glow disproportionately in headlights – it’s because those are [https://en.wikipedia.org/wiki/Retroreflector retroreflectors] in the optical range.) Highly developed cities, for example, are very retroreflective to radar. This shows up especially where the angle of the sensor’s view aligns to a street grid, when it’s called the cardinal effect. (See, for example, [https://www.mdpi.com/2072-4292/12/7/1187/htm this academic paper], where they propose using retroreflection specifically to classify urban landcover. In general, there are very few radio-frequency corner reflectors in nature, and retroreflection is a good sign that you’re looking at a building, vehicle, etc.)&lt;br /&gt;
&lt;br /&gt;
Where the reflection is separated enough from the first reflecting surface that you can see both independently, we use the term multiple reflection (or mirroring or ghosting). This most often happens where tall buildings or bridges are next to or over water. A radio wave may hit the water, then a bridge, then return to the sensor; another may hit a bridge, then the water, and return to the sensor, and so on, and you’ll see images of multiple bridges.&lt;br /&gt;
&lt;br /&gt;
==== Layover and shadowing ====&lt;br /&gt;
&lt;br /&gt;
Layover (a.k.a. relief displacement) is an effect that makes objects at higher elevations appear closer to the sensor. This happens because the radio waves from the top of a vertical object arrive back at the sensor (which is above and to the side of the object) before the radio waves from its base. This is most obvious with truly vertical objects like radio towers and skyscrapers, but surfaces that have any vertical component (hills, for example) will show some degree of layover. Ultimately, layover comes from the difference between slant range, which is what the sensor actually measures – distance from the sensor – and ground range, which is what we tend to intuitively want or expect when we look at a map-like image.&lt;br /&gt;
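&lt;br /&gt;
The slant-range geometry behind layover can be sketched with toy numbers (the altitude and ground distance below are arbitrary assumptions, not any particular satellite’s):&lt;br /&gt;
&lt;br /&gt;
```python
import math

# A target's top has a shorter slant range (distance to the sensor) than its
# base, so its echo arrives first and it "lays over" toward the sensor.
sensor_alt = 700_000.0   # m, assumed satellite altitude
ground_dist = 300_000.0  # m, assumed horizontal distance to the target's base

def slant_range(height: float) -> float:
    return math.hypot(sensor_alt - height, ground_dist)

print(slant_range(0.0) > slant_range(100.0))  # True: the top returns first
```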
&lt;br /&gt;
The painfully counterintuitive aspect, if you’re looking at a SAR image as if it were an ordinary optical image, is that layover goes in the opposite direction – buildings, for example, lean toward the sensor. For example, if you take a normal photo of a tall building from the south, it will cover the ground to its north. This feels normal because cameras, telescopes, etc., work on the same basic principle as the eye. But if you collect a SAR image of the same building from the south, it will cover the ground to its south. (Also, it won’t actually mask that ground, it will just add its signal in.)&lt;br /&gt;
&lt;br /&gt;
Shadowing is the lack of data returned from surfaces facing away from the sensor. The shadowed side of terrain is stretched out as part of layover.&lt;br /&gt;
&lt;br /&gt;
SAR imagery can be terrain corrected. Basically, this is a process that uses (1) the satellite’s position and the characteristics of its instrument and (2) a DEM or other model of the terrain it was looking at, and uses these to warp the SAR imagery into map coordinates and account for shadowing. Whether this is worthwhile will depend on the quality of the terrain correction algorithm and the data you can give it, and on what you need to analyze.&lt;br /&gt;
&lt;br /&gt;
In general, be cautious with terrain correction, because it can never fully correct for all effects (e.g., the BRDF of different landcovers), and it can magnify small problems in input data. Sometimes it’s better to have a strange-looking image that you know how to interpret than a “normalized” one with subtle errors.&lt;br /&gt;
&lt;br /&gt;
==== Clouds and many other materials are generally transparent to SAR ====&lt;br /&gt;
&lt;br /&gt;
SAR frequencies are typically chosen to cut through weather. While this is a massive advantage of SAR over optical (the average place on Earth is cloudy roughly half the time), it’s also not absolute. Heavy rain, for example, can show up as ghostly features in some bands, so be on the lookout for it. If you see something you can’t interpret that might be weather-related, check the weather for the place at the time of image acquisition!&lt;br /&gt;
&lt;br /&gt;
More generally – beyond the specific case of water vapor in air – SAR interacts with materials differently than light does. For example, it reflects more off liquid water, so you can’t see into shallows with SAR the way you can with optical. On the other hand, it interacts less with certain very dry materials, so it can cut through loose sand, dead vegetation, and so on. (For example, SAR is used to map ancient river systems under the Sahara [https://www.mdpi.com/2073-4441/9/3/194/htm because it can image bedrock under loose, dry sand].) The details of SAR signal interaction depend on wavelength, angle, and other factors; if you’re doing more than casual interpretation of data from a given sensor, it’s a good idea to look it up and familiarize yourself.&lt;br /&gt;
&lt;br /&gt;
==== Polarimetry and interferometry ====&lt;br /&gt;
&lt;br /&gt;
Thus far we have only considered backscatter images: maps of the intensity of reflected radio energy. But a good deal of SAR’s value is beyond this kind of data. As well as recording how much energy is in reflected radio waves, SAR sensors characterize the radio waves themselves.&lt;br /&gt;
&lt;br /&gt;
Let’s use Sentinel-1 as an example for polarimetry. S1 sends radio waves in the vertical polarization, abbreviated V, and records them in both vertical and horizontal, or H, polarizations. In practice, this means that when you download an S1 frame in the usual way, you see two images, labeled VV (where the sensor transmitted V and measured V) and VH (where it transmitted V and measured H). The ratio of the two bands therefore tells you (in a general, statistical way, within the constraints of speckling) how much the surface at a given pixel tends to return a radio signal at that frequency and angle in the same polarization.&lt;br /&gt;
&lt;br /&gt;
Why do we care? Because direct reflection and corner reflectors tend to return waves at the same polarization (for Sentinel-1, always VV), while volumes that scatter waves return proportionally more cross-polarized (VH) waves. The second category is mainly vegetation and soil, while the first is corner reflectors, metal, and so on – proportionally more artificial surfaces. You can literally get a PhD in the nuances of SAR polarimetry, but at the most basic level, it tells you something about surface properties that no other sensor would.&lt;br /&gt;
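The VV/VH comparison described above can be sketched in numpy (one of the tools this page assumes). The backscatter values here are invented for illustration; real Sentinel-1 data would need radiometric calibration first.

```python
import numpy as np

# Invented 2x2 chips standing in for calibrated linear-power backscatter.
vv = np.array([[0.20, 0.18], [0.05, 0.30]])
vh = np.array([[0.02, 0.02], [0.02, 0.01]])

# Co/cross-polarization ratio in decibels. Higher values suggest direct or
# corner-reflector-like returns (proportionally more artificial surfaces);
# lower values suggest volume scattering, e.g., vegetation canopies.
ratio_db = 10 * np.log10(vv / vh)
```

The dB scale is conventional for SAR because backscatter spans orders of magnitude.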
&lt;br /&gt;
Interferometry with SAR, or inSAR, compares wave phase between observations. The phase of a wave is where it is in its cycle when received. Using sound as an example, measuring the phase of a sound at a given moment means recording not just its volume and pitch but also where the wave is in its cycle – say, 23% of the way into its high-pressure half, or exactly at the lowest-pressure point.&lt;br /&gt;
&lt;br /&gt;
Suppose we make a SAR image of an area and record not only the amplitude but also the phase of the signal at every pixel. Now, after some time, the satellite’s orbit repeats, and at exactly the same moment in this new orbit (and therefore at exactly the same point in space relative to Earth), we take the same image again. There’s been some change over time that might represent, say, the soil drying out, a road being built, or a tree falling over. But the change in phase over relatively large, coherent regions can be interpreted as the surface getting nearer or farther away by (potentially) very small fractions of a wavelength – on the order of cm. This is an idealized version of inSAR.&lt;br /&gt;
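The idealized arithmetic behind this can be sketched in a few lines, assuming the standard two-way-path relationship (the radar wave travels to the surface and back, so a line-of-sight displacement d changes the path by 2d) and a wavelength roughly that of Sentinel-1’s C-band. Real processing involves much more, notably phase unwrapping, and sign conventions vary by processor.

```python
import math

# Idealized repeat-pass inSAR: a line-of-sight displacement d changes the
# round-trip path by 2*d, i.e., the phase by (4 * pi / wavelength) * d.
def los_displacement_m(phase_change_rad, wavelength_m=0.055):
    return phase_change_rad * wavelength_m / (4 * math.pi)

# One full 2*pi fringe corresponds to half a wavelength of motion along
# the line of sight - here, 2.75 cm.
one_fringe = los_displacement_m(2 * math.pi)
```

This is why cm-scale sensitivity falls out of a sensor whose pixels are tens of meters across: the measurement is of phase, not of pixel position.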
&lt;br /&gt;
Geologists use this to [https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021GL093043 map earthquakes], but you can also use it for drought (because dry land sags), [https://site.tre-altamira.com/long-term-satellite-study-over-the-london-basin/ tunneling], [https://www.researchgate.net/figure/InSAR-measured-subsidence-rates-on-the-Mosul-dam-Iraq-Negative-values-indicate-motion_fig1_311451580 dam] and [https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-1/Satellites_confirm_sinking_of_San_Francisco_tower building] subsidence, [https://www.nature.com/articles/s41598-020-74957-2 underground explosion monitoring], and so on – in theory, anything that changes the distance between the satellite and the surface. You can even use decoherence (the breakdown of continuity between observations, which makes inSAR hard) for [https://www.academia.edu/44939771/Damage_detection_using_SAR_coherence_statistical_analysis_application_to_Beirut_Lebanon damage detection].&lt;br /&gt;
&lt;br /&gt;
When inSAR works, it’s like magic. You can pick up extremely subtle effects over large areas. It does have limits: for example, you can only measure displacement toward or away from the satellite, which for SAR is always at least somewhat to the side, and therefore not necessarily in the direction you actually care about (say, up/down). And as you would expect, it tends to require a lot of very good data (because, for example, satellite orbits are never absolutely perfect repeats), expertise, and minutes to days of fine-tuning.&lt;br /&gt;
&lt;br /&gt;
=== LIDAR ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# 3D (survey-style) lidar&lt;br /&gt;
# 2D (transect-style) lidar&lt;br /&gt;
&lt;br /&gt;
== Image delivery ==&lt;br /&gt;
&lt;br /&gt;
=== Processing levels ===&lt;br /&gt;
&lt;br /&gt;
In theory, data processing levels are standard across the industry. In practice, different providers tend to make up their own definitions as necessary, and you should refer to source-specific documentation. But typically, the commonly seen processing levels are:&lt;br /&gt;
&lt;br /&gt;
'''Level 0''': Unprocessed data, more or less as downlinked to the ground station. Generally not sold or publicly released.&lt;br /&gt;
&lt;br /&gt;
'''Level 1''': Basic data in sensor units, for example TOA radiance. Often has a letter suffix with source-specific meaning, e.g., T to indicate a terrain-corrected version.&lt;br /&gt;
&lt;br /&gt;
'''Level 2''': Derived data in geophysical units, for example surface reflectance. Has been through high-level processing (e.g., atmospheric correction) that contains estimation or modeling.&lt;br /&gt;
&lt;br /&gt;
As a rule of thumb, use level 1 if the imagery itself is the focus and you want to analyze the data in a custom way; use level 2 if you just want something that works out of the box to achieve some further goal. But again, the practical meaning of the levels depends on the dataset, so check to make sure you’re getting what you want.&lt;br /&gt;
&lt;br /&gt;
=== Formats and projections ===&lt;br /&gt;
&lt;br /&gt;
Image data generally comes in formats optimized for large payloads and good metadata. These include NetCDF, NITF, JPEG2000, and GeoTIFF. GDAL, which is included in QGIS, can read virtually any reasonable format. If you have a choice, GeoTIFF is usually the best.&lt;br /&gt;
&lt;br /&gt;
A geographic projection is basically an invertible function (a reversible, one-to-one relationship) from a sphere to a plane. (If you know enough about geodesy to be saying “the spheroid, actually” right now, go read something more appropriate to your level of expertise 😉.) In other words, for a longitude and a latitude on Earth, a given projection gives you a corresponding x and y that you use to store and display the data.&lt;br /&gt;
&lt;br /&gt;
Typical georeferencing metadata says: (1) here is the projection of the data in this file, and (2) here is where the data in this file lies on the abstract 2D plane defined by that projection.&lt;br /&gt;
&lt;br /&gt;
You may also encounter data that is not projected in any strictly defined way. This might be as simple as a photo taken with a phone out a plane window. In theory you could define a projection for it if you knew parameters like the 3D GPS location of the phone, the angle it was pointed at, its camera’s field of view, and the small distortions introduced by its lens. But in practice it’s usually easier to find known points in the image and “tie down” or georeference the image based on those points. Given at least 3 but ideally more known points, you(r software) can warp the image into some standard projection. It’s deriving an arbitrary projection from pixel space to geographical coordinates by running a regression on the pixel-to-location pairs you provide. These known points are called ground control points, or GCPs. Some data, like Sentinel-1 SAR, is provided unprojected but with GCPs. This leaves more work for the user, but also more flexibility if you want to adjust the GCPs.&lt;br /&gt;
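The regression idea can be sketched with numpy: fit an affine pixel-to-geography transform from GCP pairs by least squares. The GCPs below are invented for illustration, and real tools (GDAL, QGIS) also support higher-order polynomial and spline warps.

```python
import numpy as np

# Invented GCPs: (column, row) in pixel space paired with (lon, lat).
pixels = np.array([[0, 0], [1000, 0], [0, 1000], [1000, 1000]], dtype=float)
coords = np.array([[30.0, 50.0], [30.1, 50.0], [30.0, 49.9], [30.1, 49.9]])

# Least-squares fit of an affine transform: coords ~ [col, row, 1] @ params.
A = np.hstack([pixels, np.ones((len(pixels), 1))])
params, *_ = np.linalg.lstsq(A, coords, rcond=None)

def pixel_to_geo(col, row):
    """Map a pixel location into geographic coordinates via the fitted transform."""
    return np.array([col, row, 1.0]) @ params
```

With more GCPs than parameters, the same fit averages out small errors in individual points, which is one reason more than the minimum three is better.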
&lt;br /&gt;
There are several standard ways to represent projections, notably WKT, proj, and EPSG codes. We’ll give EPSG codes here.&lt;br /&gt;
&lt;br /&gt;
Probably the most common projection you will see for raw data is Universal Transverse Mercator, or UTM. It’s actually a family of projections with the same formula but different parameters, each adapted to a different meridional slice of Earth’s surface. These UTM zones are named with numbers and north/south hemispheres: Paris is in UTM zone 31N, Geneva is in 32N, and Sydney is in 56S. (If you’ve used the MGRS grid system, this should sound familiar, but it’s not identical.) Within a zone, UTM is very close to equal-area and conformal, which are the most important properties for a projection if you want to do analytical work. Equal-area means 1 km² is the same number of pixels at any point in the projection, and conformal means that 1 km is the same number of pixels in every direction from any given point within the projection. (On a non-conformal map, circles appear as ovals, squares are rectangles, etc. This is a massive pain in the ass.) UTM is EPSG:32XYY, where X is 6 for N and 7 for S, and YY is the zone number, so for example 13S is EPSG:32713.&lt;br /&gt;
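The EPSG numbering rule for UTM zones given above is mechanical enough to write down as a tiny helper (the function name is ours, not a standard API):

```python
# UTM zone -> EPSG code: EPSG:326YY for northern-hemisphere zones and
# EPSG:327YY for southern-hemisphere zones, where YY is the zone number.
def utm_epsg(zone, hemisphere):
    if not 1 <= zone <= 60:
        raise ValueError("UTM zones run from 1 to 60")
    return (32600 if hemisphere == "N" else 32700) + zone
```

So zone 13S gives 32713, matching the example in the text.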
&lt;br /&gt;
For display on standard web maps, people often use web Mercator, a.k.a. spherical Mercator, which is not equal-area at the global scale, but is conformal. This is why web maps make Greenland far too big, but it remains approximately the right shape. For local analysis, web Mercator is fine (roughly equivalent to UTM at that scale), and can be a decent choice if you understand the issues with scale across large areas. EPSG:3857.&lt;br /&gt;
&lt;br /&gt;
The other projection you’ll see the most is equirectangular or plate carrée, which uses longitude and latitude directly as x and y coordinates on a plane. It is neither equal-area nor conformal, and basically only exists because the math is easy. It’s often used by people who should know better. Its non-conformality means that any time you’re working near the poles, everything is squeezed, and you’re either overzooming one dimension, losing data in the other, or both. If you just want to scatterplot some points as quickly as possible, equirectangular is fine, but avoid it when doing anything with imagery. EPSG:4326. (Note that this is the EPSG of WGS84, the geodetic standard that defines things like the prime meridian. Many, many other projections refer to WGS84 in their definitions. But using WGS84 as a projection itself, instead of as an ingredient in a projection, is the equirectangular projection.)&lt;br /&gt;
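The polar squeeze described above falls directly out of spherical geometry: a degree of longitude covers cos(latitude) times as much ground as a degree of latitude, so equirectangular’s “square” pixels represent narrower and narrower ground cells toward the poles. A minimal sketch:

```python
import math

# Fraction of a degree-of-latitude's ground distance that a degree of
# longitude covers at a given latitude. In equirectangular, both are
# plotted at equal width, so this is also the east-west distortion factor.
def ew_ground_fraction(lat_deg):
    return math.cos(math.radians(lat_deg))
```

At the equator the factor is 1 (no distortion); at 60° it is 0.5, meaning east-west distances are stretched to twice their true relative size on the map.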
&lt;br /&gt;
The details of projections are notoriously tricky; it’s hard to work with them in a strictly correct and optimal way at all times. It’s the kind of topic that attracts pedantry and flamewars, unfortunately. Here’s some advice, none of it ironclad:&lt;br /&gt;
&lt;br /&gt;
# Most imagery data, if it’s projected at all, is already in an appropriate projection as it arrives from the data provider. If you can, leave it as-is. Every reprojection involves resampling the data, which generally loses information.&lt;br /&gt;
# You should rarely have to explicitly think about projections. The whole point of a projection is to let you think in terms of pixels and/or meters, and if that’s not happening, something is wrong. Make sure you’re taking full advantage of your tools’ ability to handle these things automatically. Fighting your projection means something is wrong.&lt;br /&gt;
# If you’re working on a multi-source project, choose a suitable projection at the start and project all data into it ''once'', when you import it.&lt;br /&gt;
# Most pain around projections comes from accidentally mixing projections. Don’t do that.&lt;br /&gt;
# The local UTM is usually a reasonable choice.&lt;br /&gt;
&lt;br /&gt;
=== Bundles ===&lt;br /&gt;
&lt;br /&gt;
Imagery is most often supplied in bundles, which are directories with image data files, usually separated by band or polarization (at least at level 1), and text (XML, json, etc.) metadata files. Some analysis tools will have plugins that will open specific types of bundles as single objects, automatically applying calibration data found in the metadata and so forth. In other situations you might open the image file and have to parse the metadata with your own code or by hand. If you’re getting to know a new imagery source, going through and understanding the purpose of everything delivered in a bundle is a great way to start.&lt;br /&gt;
&lt;br /&gt;
=== DN and PN ===&lt;br /&gt;
&lt;br /&gt;
Image formats generally store integers, since they losslessly compress better and are often easier to work with than floating point numbers. However, this presents a problem if, for example, the units being represented are reflectance, which ranges from 0 to 1. If we round every reflectance value to either 0 or 1, we’re delivering 1-bit data that’s probably close to totally useless. To address this, we might scale up to, say, 0 through 100 and say that instead of recording reflectance fraction, we’re recording reflectance percentage – fraction × 100. That still leaves us with less than 7 bits of radiometric resolution, though. Really, we’d like to be able to scale our values into an arbitrary range, maybe 0 through 65,535 to make full use of a 16-bit image, and send it with some metadata that tells how to get it back into some absolute or physically meaningful unit. You could even change the scaling factor per scene to optimize for bright v. dark, for example. And this is what providers generally do. The values actually stored in the image format are called digital numbers, or DN, and the values after scaling (typically with a multiplicative and an additive coefficient) are physical numbers, or PN.&lt;br /&gt;
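The DN-to-PN scaling described above is just a linear transform; a sketch with invented coefficients follows. Real gain and offset values come from the bundle or in-file metadata, often per band or per scene.

```python
import numpy as np

# Invented 16-bit digital numbers, as an image format would store them.
dn = np.array([[0, 3400], [52000, 65535]], dtype=np.uint16)

# Hypothetical coefficients mapping 0..65535 onto 0..1 reflectance.
gain, offset = 1.0 / 65535, 0.0

# Physical numbers: multiplicative then additive coefficient, in float.
pn = dn.astype(np.float64) * gain + offset
```

Note the cast to float before scaling; applying the gain in integer arithmetic would throw away exactly the radiometric resolution the scheme exists to preserve.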
&lt;br /&gt;
Not all providers do this. For example, Sentinel-2 level 1C data has a globally constant scaling factor, which means different bands have a defined relationship even if you read raw, unscaled pixels out of them, which is great. Per-scene or per-band scaling is the more common approach, however. Basically, don’t assume that pixels actually mean anything with an absolute definition, especially compared to pixels from another band or scene, unless you know that they’re PN.&lt;br /&gt;
&lt;br /&gt;
For most OSINT-relevant analysis, working in DN is a [https://en.wikipedia.org/wiki/Venial_sin venial sin] at worst and often justifiable. But it is useful to know what it means and to recognize situations where you should convert to PN. Any tool designed to work with remote sensing data will at least have some affordance for DN to PN scaling, and, again, may be able to parse the parameters out of a bundle (or in-image-file metadata) and apply them transparently so you never have to think about it.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_imagery_data_concepts&amp;diff=2205</id>
		<title>Satellite imagery data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_imagery_data_concepts&amp;diff=2205"/>
		<updated>2022-07-20T22:05:09Z</updated>

		<summary type="html">&lt;p&gt;Vruba: Vruba moved page Satellite imagery data concepts to Satellite image data concepts: Arguably slightly better?&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Satellite image data concepts]]&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2204</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2204"/>
		<updated>2022-07-20T22:05:09Z</updated>

		<summary type="html">&lt;p&gt;Vruba: Vruba moved page Satellite imagery data concepts to Satellite image data concepts: Arguably slightly better?&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this is the heritage of cold war IMINT workers wanting shadows to estimate structure heights. (If you image around noon, you get places with vertical shadows in the tropics. This gives you depth perception problems, like you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to do one full orbit. This is another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
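The desktop-globe comparison in the altitude bullet above is quick arithmetic: scale the 705 km altitude by the ratio of a 30 cm globe to Earth’s roughly 12,742 km diameter.

```python
# Landsat 9's altitude at desktop-globe scale: scale 705 km down by the
# ratio of a 300 mm globe to Earth's ~12,742 km diameter.
earth_diameter_km = 12742
globe_diameter_mm = 300
orbit_height_mm = 705 * globe_diameter_mm / earth_diameter_km  # about 17 mm
```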
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: about 400 km or 250 mi to either side of the ground track, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimension(s) we think of most often is spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor” we understand this by default to mean spatial resolution. This is also called ground sample distance (GSD) or ground resolved distance (GRD), and is the dimensions of the pixels of the data. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length dimension: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. If we think about standard optical instruments, which are basically telescopes with CCDs, they do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – a fraction of the arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth at a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) and looking on nadir (straight down). That’s a best case. When looking to the side, at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
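A rough flat-earth sketch (our simplifying assumption; it ignores Earth curvature and terrain) shows how a fixed per-pixel angle turns into a growing ground footprint off-nadir: the slant range grows as 1/cos(θ), and the along-look dimension picks up another factor of 1/cos(θ) from the oblique intersection with the ground.

```python
import math

# Multipliers on the nadir GSD when looking off-nadir by theta, under a
# flat-earth approximation: along-look footprint grows as 1/cos(theta)^2,
# cross-look footprint as 1/cos(theta).
def footprint_growth(off_nadir_deg):
    c = math.cos(math.radians(off_nadir_deg))
    return 1 / c ** 2, 1 / c

along, cross = footprint_growth(30)  # roughly 1.33x and 1.15x at 30 degrees
```

This also shows why off-nadir pixels are not square: the two directions degrade at different rates.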
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked consideration on spatial resolution is that pixel area is the square of pixel side length, and it’s what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is more than 3× as clear as 30 m imagery, all else being equal.&lt;br /&gt;
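The square-law relationship above in one line of arithmetic:

```python
# Pixels covering a fixed ground area scale with the inverse square of GSD.
# E.g., 10 m imagery puts 9x as many pixels on a hectare as 30 m imagery.
def pixels_per_area(gsd_m, area_m2=1.0):
    return area_m2 / gsd_m ** 2
```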
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered less by the atmosphere than visible light, especially blue light. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
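The NDVI linked above is the canonical example of an inter-band index: (NIR − red) / (NIR + red), ranging from −1 to 1, with healthy vegetation sitting well above zero because of the strong NIR reflection described in point 2. The reflectance values below are invented for illustration.

```python
import numpy as np

# Hypothetical surface-reflectance values for three pixels:
# dense vegetation, sparse vegetation, bare soil.
nir = np.array([0.45, 0.30, 0.08])
red = np.array([0.05, 0.10, 0.07])

# Normalized difference vegetation index, bounded in [-1, 1].
ndvi = (nir - red) / (nir + red)
```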
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These will be called the panchromatic or pan and (collectively) multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, for example, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits; newer and better ones are typically 12 to 14.&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks on the Sahara, despite the tracks being made out of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford a lot of highest spatial resolution imagery. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
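&lt;br /&gt;
To make the averaging claim concrete, here’s a small numpy sketch (with simulated bands and made-up noise levels, not real sensor data) showing that averaging n bands with independent unit-standard-deviation noise shrinks the residual noise to about 1/sqrt(n):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(42)
n = 16  # number of bands being averaged

# Shared signal plus independent, unit-standard-deviation noise per band.
signal = rng.uniform(0.0, 1.0, size=(256, 256))
bands = signal + rng.normal(0.0, 1.0, size=(n, 256, 256))

# Averaging shrinks the noise standard deviation by about 1/sqrt(n).
mean_band = bands.mean(axis=0)
residual_std = (mean_band - signal).std()
print(residual_std)  # close to 1/sqrt(16) = 0.25
```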
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding error: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to describing effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
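&lt;br /&gt;
A quick, self-contained numpy illustration of quantization noise (the bit depths and signal here are hypothetical): rounding a signal to steps of size q adds noise with a standard deviation of about q/sqrt(12), which grows as bit depth falls:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.uniform(0.0, 1.0, size=100_000)  # "true" signal in [0, 1)

quant_noise = {}
for bits in (8, 12, 14):
    step = 1.0 / (2 ** bits - 1)           # quantization step size
    quantized = np.round(values / step) * step
    quant_noise[bits] = (quantized - values).std()
    # Rounding error is roughly uniform over [-step/2, step/2],
    # so its standard deviation is about step/sqrt(12).
    print(bits, quant_noise[bits], step / 12 ** 0.5)
```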
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is measuring air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh scattering from aerosols, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. Equivalently, 1.567 to 1.651 μm. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
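&lt;br /&gt;
As a minimal sketch of band math, here is NDVI computed with numpy on hypothetical reflectance values (the numbers are illustrative, not from a real scene):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)

# Hypothetical surface reflectances: healthy vegetation reflects strongly
# in NIR and absorbs red, so it scores high; bare soil sits near zero.
nir = np.array([0.45, 0.30])  # vegetation pixel, soil pixel
red = np.array([0.05, 0.25])
print(ndvi(nir, red))  # roughly [0.8, 0.09]
```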
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue bands of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, and therefore you should favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
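&lt;br /&gt;
A minimal numpy sketch of building a false color composite – hypothetical band arrays, stacked in spectral order with a simple percentile stretch:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def false_color(bands, low=2, high=98):
    """Stack three bands (longest wavelength first) into an RGB array,
    stretching each band between its low/high percentiles."""
    channels = []
    for band in bands:
        lo, hi = np.percentile(band, [low, high])
        channels.append(np.clip((band - lo) / (hi - lo), 0.0, 1.0))
    return np.dstack(channels)  # shape: (rows, cols, 3)

# Hypothetical NIR, red, and green bands in spectral order: the classic
# vegetation-as-red composite.
rng = np.random.default_rng(1)
nir, red, green = rng.uniform(0, 4000, size=(3, 64, 64))
rgb = false_color([nir, red, green])
print(rgb.shape)  # (64, 64, 3)
```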
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will be (typically) some multiple of 2 larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
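&lt;br /&gt;
The overlay intuition corresponds roughly to the simplest automated methods. Here’s a sketch of basic Brovey-style ratio pansharpening in numpy, using made-up arrays and nearest-neighbor upsampling (real pipelines do considerably more):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def brovey_pansharpen(pan, rgb_coarse, scale=2):
    """Ratio-based pansharpening sketch: upsample the coarse RGB by pixel
    repetition, then rescale each pixel's color so that its mean intensity
    matches the sharp pan band."""
    rgb = rgb_coarse.repeat(scale, axis=0).repeat(scale, axis=1)
    intensity = rgb.mean(axis=2)
    ratio = pan / (intensity + 1e-9)
    return rgb * ratio[:, :, np.newaxis]

pan = np.full((4, 4), 0.6)            # sharp panchromatic band
rgb_coarse = np.full((2, 2, 3), 0.3)  # blurry color, 2x coarser
sharp = brovey_pansharpen(pan, rgb_coarse)
print(sharp.shape)  # (4, 4, 3)
```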
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;br /&gt;
&lt;br /&gt;
==== Georeferencing and orthorectification ====&lt;br /&gt;
&lt;br /&gt;
''Much of this applies outside optical as well – move?''&lt;br /&gt;
&lt;br /&gt;
A raw satellite image of land is an angled view of a rough surface. (Even nominally nadir-pointing satellites acquire imagery that is off-nadir toward its edges.) If you imagine riding on a satellite and looking off to, say, the west, you will see the eastern sides of hills and buildings at flatter angles than you see the western sides – if you can see them at all. To turn a raw image into something that is projected orthographically, like a map, you have to use a terrain model – a 3D map of the planet’s surface. Then you can use information about where the satellite was and the angle its sensor was pointing, and for each pixel in the output image, you can project it out to see at what latitude and longitude it must have intersected the ground. Then you move all the pixels to their coordinates in some convenient projection, and you’ve essentially taken the image out of perspective and made it orthographic.&lt;br /&gt;
&lt;br /&gt;
Except:&lt;br /&gt;
&lt;br /&gt;
* Earth’s surface is rough at every scale, and even “porous” or multiply defined in the sense that there are features like leafless trees that make it hard to define where the optical surface actually ''is'' at any given scale.&lt;br /&gt;
* There is no perfectly [https://en.wikipedia.org/wiki/Accuracy_and_precision accurate, precise], global, completely up-to-date terrain model of the Earth, let alone at a reasonable price. SRTM is pretty good, but it’s only about 30 m, stops short of the Arctic, and is 20+ years out of date: there are entire lakes, highway cuts, and reclaimed islands that don’t exist in it.&lt;br /&gt;
* Satellites typically only know where they’re pointing to within the equivalent of about 10 pixels (which, to be fair, is usually an extremely small fraction of a degree), so the pointing data can only narrow things down, not actually tell you where you are.&lt;br /&gt;
* Continental drift means that a continent can move by easily 1 px over the lifetime of a high-end commercial satellite; a major earthquake can discontinuously distort a small region by several m.&lt;br /&gt;
* To properly pin down an image (i.e., to check the reported pointing angle), you need to know the exact 3D location of 3 visible points within it, and realistically more like 10.&lt;br /&gt;
* All these errors can combine.&lt;br /&gt;
* No matter what, you can’t recover occluded features, i.e. things you can’t see in the original data. If you want a high-res satellite image of something like a canyon, you realistically need half a dozen images at very specific angles, which is extremely hard.&lt;br /&gt;
&lt;br /&gt;
We could go on! Georeferencing and orthorectification is a difficult problem. It’s easier for lower-resolution satellites, because a given angular error comes out to fewer pixels. Also, survey-mode satellites like Landsat and Sentinel-2, which are nadir-pointing anyway, put a lot of effort into doing this well. Two Landsat scenes will almost always coregister to well within a pixel. Sentinel-2 is a little less reliable, especially toward the poles. Commercial imagery is often displaced by far more than you would think. One way to see this is to step back in Google Earth Pro’s history tool, especially somewhere relatively remote and rugged.&lt;br /&gt;
&lt;br /&gt;
Here’s a farm in Nepal: 28.553, 84.2415. Just step back in time and watch it jump around underneath the pin. If you really want to be scared, watch the cliff to its north. This is why imagery analysts who understand imagery pipelines rarely use a whole lot of significant digits in their coordinates! If all you have to go on is a satellite image, you don’t really know where anything on Earth is, in absolute terms, to better than a few meters at best.&lt;br /&gt;
&lt;br /&gt;
==== Atmospheric correction ====&lt;br /&gt;
&lt;br /&gt;
Over long distances, even in clear weather, the atmosphere scatters and absorbs light. This is why distant hills are low-contrast and blueish (blue light is scattered more). What a satellite actually measures is called top-of-atmosphere radiance, or TOA. This is a measurement of nothing more than the amount of energy received per second, per pixel, per band. It can be measured pretty objectively. However, it’s often not what you want. For one thing, it’s too blue. For another, the amount of blueness and related effects will vary semi-randomly with atmospheric conditions (humidity, maybe dust storms or wildfire smoke, etc.) and predictably with season.&lt;br /&gt;
&lt;br /&gt;
Therefore, a reasonable desire is to basically normalize the sun and remove the effects of the atmosphere. What we’re trying to model here is called surface reflectance (SR). The main issue is that we don’t know the true state of the atmosphere at the moment the image was acquired. The best we can do is to model it and subtract it out. This is one of ''the'' problems in remote sensing, and you could earn a PhD by improving [https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes#Table_of_models one of the major models] by a few percent.&lt;br /&gt;
&lt;br /&gt;
The good news is there’s a brutally simple method that works pretty well most of the time. Dark object subtraction means assuming that the darkest pixel in the image should be pure black. Therefore, if you subtract out however much blue (and green, and so on) signal is present in the darkest pixel, you will have canceled out all the haze. It’s annoying how well this works considering how basic it is. It’s roughly equivalent to the auto-adjust tool in an image editor like Photoshop, or, to be a little more exact, a little like using the eyedropper in the Levels tool to set the black point to the darkest pixel.&lt;br /&gt;
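&lt;br /&gt;
A minimal numpy sketch of dark object subtraction, using a simulated scene with a made-up, blue-heavy haze offset per band:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def dark_object_subtraction(bands):
    """Assume the darkest pixel in each band should be pure black, and
    subtract its value from the whole band."""
    return {name: band - band.min() for name, band in bands.items()}

# Simulated hazy scene: scattering adds a near-constant offset per band,
# largest in blue and smallest in red.
rng = np.random.default_rng(7)
scene = rng.uniform(0.0, 0.5, size=(3, 128, 128))
haze = {"blue": 0.20, "green": 0.10, "red": 0.04}
bands = {name: scene[i] + off for i, (name, off) in enumerate(haze.items())}

corrected = dark_object_subtraction(bands)
print({name: round(float(band.min()), 3) for name, band in corrected.items()})
# every corrected band's minimum is 0.0
```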
&lt;br /&gt;
Correction to reflectance may or may not attempt to correct for terrain effects (i.e., relighting the scene). Different pipelines have different conventions for how far to correct or what to call different kinds of correction.&lt;br /&gt;
&lt;br /&gt;
Atmospheric correction is usually not key for OSINT purposes, but any time you find yourself taking exact measurements of pixel values, you should at least know whether you’re working in TOA or in SR, and if SR, you should have a sense of what the pipeline was.&lt;br /&gt;
&lt;br /&gt;
==== Common optical sensor types ====&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# Pushbroom&lt;br /&gt;
# Whiskbroom&lt;br /&gt;
# Full-frame&lt;br /&gt;
&lt;br /&gt;
=== Thermal ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
=== Synthetic aperture radar ===&lt;br /&gt;
&lt;br /&gt;
Synthetic aperture radar, or SAR, creates images with radio waves in wavelengths around 1 cm to 1 m.&lt;br /&gt;
&lt;br /&gt;
As a very first approximation, a SAR image is comparable to an optical image that shows objects that reflect radio waves instead of those that reflect visible light.&lt;br /&gt;
&lt;br /&gt;
==== SAR is not just a regular camera but for radar instead of light ====&lt;br /&gt;
&lt;br /&gt;
Beyond the fact that both modalities create images, SAR works on completely different principles from standard optical imaging, and understanding it requires understanding those principles.&lt;br /&gt;
&lt;br /&gt;
This page will only lightly outline how SAR works; for the math, please refer to SERVIR’s [https://servirglobal.net/Global/Articles/Article/2674/sar-handbook-comprehensive-methodologies-for-forest-monitoring-and-biomass-estimation SAR Handbook] (forest-oriented but with solid fundamentals), the NOAA/NESDIS [https://www.sarusersmanual.com/ Synthetic Aperture Radar Marine User’s Manual], or another good text. Here we will only point out some key ideas. If you want to get full value out of SAR, you should expect to invest at least a few hours in learning how it actually works. There’s a reason SAR experts tend to be a bit snobbish about it: it’s complex, subtle, and highly rewarding.&lt;br /&gt;
&lt;br /&gt;
==== SAR is active ====&lt;br /&gt;
&lt;br /&gt;
Optical sensors are almost all passive: they use energy that objects are already reflecting (usually from the sun) or producing (for example, in the thermal infrared). In contrast, SAR is active: it sends out a pulse of radar energy, roughly analogous to the flash on a camera.&lt;br /&gt;
&lt;br /&gt;
==== SAR resolves space with time, not with focus ====&lt;br /&gt;
&lt;br /&gt;
SAR’s resolution is based on the timing of returning signals. It does not pass the energy it senses through a focusing lens or mirror the way an optical sensor does. This leads to properties that are highly unintuitive if you think of it as merely “optical but at a different frequency” – for example, it does not lose resolution with distance; there is no exact equivalent of perspective.&lt;br /&gt;
&lt;br /&gt;
==== Speckle ====&lt;br /&gt;
&lt;br /&gt;
Like a laser beam, a SAR signal interferes with itself. At a given moment and a given point, its waves may be canceling out or adding up. This means that a SAR image is intrinsically grainy or stippled-looking. This is not the same as sensor noise, because the effect is physically real and not a problem of errors in measurement. It can be mitigated by downsampling, averaging images from different “pings”, or applying despeckling filters. (A simple local median works reasonably well, but there’s a range of sophistication all the way up to sensor-specific filters based on physical models, extra inputs, fancy machine learning, etc.)&lt;br /&gt;
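&lt;br /&gt;
As a sketch of the simple end of that range, here’s a local median despeckle in plain numpy, applied to simulated single-look speckle (exponentially distributed intensity over a flat scene):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def median_despeckle(image, size=3):
    """Local-median despeckling filter (edges are trimmed for brevity)."""
    windows = sliding_window_view(image, (size, size))
    return np.median(windows, axis=(2, 3))

# Simulated single-look intensity speckle: exponentially distributed
# multiplicative noise over a flat backscatter field.
rng = np.random.default_rng(3)
truth = np.full((64, 64), 10.0)
speckled = truth * rng.exponential(1.0, size=truth.shape)

filtered = median_despeckle(speckled)
print(speckled.std(), filtered.std())  # the filtered image is far smoother
```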
&lt;br /&gt;
==== Retroreflection and multiple reflection ====&lt;br /&gt;
&lt;br /&gt;
One consequence of SAR being active sensing is that it sees very bright returns from concave right angles made out of metal, which act as [https://en.wikipedia.org/wiki/Corner_reflector corner reflectors]. (Notice how road signs and markers seem to glow disproportionately in headlights – it’s because those are [https://en.wikipedia.org/wiki/Retroreflector retroreflectors] in the optical range.) Highly developed cities, for example, are very retroreflective to radar. This shows up especially where the angle of the sensor’s view aligns with a street grid, in which case it’s called the cardinal effect. (See, for example, [https://www.mdpi.com/2072-4292/12/7/1187/htm this academic paper], where they propose using retroreflection specifically to classify urban landcover. In general, there are very few radio-frequency corner reflectors in nature, and retroreflection is a good sign that you’re looking at a building, vehicle, etc.)&lt;br /&gt;
&lt;br /&gt;
Where the reflection is separated enough from the first reflecting surface that you can see both independently, we use the term multiple reflection (or mirroring or ghosting). This most often happens where tall buildings or bridges are next to or over water. A radio wave may hit the water, then a bridge, then return to the sensor; another may hit a bridge, then the water, and return to the sensor, and so on, and you’ll see images of multiple bridges.&lt;br /&gt;
&lt;br /&gt;
==== Layover and shadowing ====&lt;br /&gt;
&lt;br /&gt;
Layover (a.k.a. relief displacement) is an effect that makes objects at higher elevations appear closer to the sensor. This happens because the radio waves from the top of a vertical object arrive back at the sensor (which is above and to the side of the object) before the radio waves from its base. This is most obvious with truly vertical objects like radio towers and skyscrapers, but surfaces that have any vertical component (hills, for example) will show some degree of layover. Ultimately, layover comes from the difference between slant range, which is what the sensor actually measures – distance from the sensor – and ground range, which is what we tend to intuitively want or expect when we look at a map-like image.&lt;br /&gt;
&lt;br /&gt;
The painfully counterintuitive aspect, if you’re looking at a SAR image as if it were an ordinary optical image, is that layover goes in the opposite direction – buildings, for example, lean toward the sensor. For example, if you take a normal photo of a tall building from the south, it will cover the ground to its north. This feels normal because cameras, telescopes, etc., work on the same basic principle as the eye. But if you collect a SAR image of the same building from the south, it will cover the ground to its south. (Also, it won’t actually mask that ground, it will just add its signal in.)&lt;br /&gt;
&lt;br /&gt;
Shadowing is the lack of data returned from surfaces facing away from the sensor. The shadowed side of terrain is stretched out as part of layover.&lt;br /&gt;
&lt;br /&gt;
SAR imagery can be terrain corrected. Basically, this is a process that uses (1) the satellite’s position and the characteristics of its instrument and (2) a DEM or other model of the terrain it was looking at, and uses these to warp the SAR imagery into map coordinates and account for shadowing. Whether this is worthwhile will depend on the quality of the terrain correction algorithm and the data you can give it, and on what you need to analyze.&lt;br /&gt;
&lt;br /&gt;
In general, be cautious with terrain correction, because it can never fully correct for all effects (e.g., the BRDF of different landcovers), and it can magnify small problems in input data. Sometimes it’s better to have a strange-looking image that you know how to interpret than a “normalized” one with subtle errors.&lt;br /&gt;
&lt;br /&gt;
==== Clouds and many other materials are generally transparent to SAR ====&lt;br /&gt;
&lt;br /&gt;
SAR frequencies are typically chosen to cut through weather. While this is a massive advantage of SAR over optical (the average place on Earth is cloudy roughly half the time), it’s also not absolute. Heavy rain, for example, can show up as ghostly features in some bands, so be on the lookout for it. If you see something you can’t interpret that might be weather-related, check the weather for the place at the time of image acquisition!&lt;br /&gt;
&lt;br /&gt;
More generally – beyond the specific case of water vapor in air – SAR interacts with materials differently than light does. For example, it reflects more off liquid water, so you can’t see into shallows with SAR the way you can with optical. On the other hand, it interacts less with certain very dry materials, so it can cut through loose sand, dead vegetation, and so on. (For example, SAR is used to map ancient river systems under the Sahara [https://www.mdpi.com/2073-4441/9/3/194/htm because it can image bedrock under loose, dry sand].) The details of SAR signal interaction depend on wavelength, angle, and other factors; if you’re doing more than casual interpretation of data from a given sensor, it’s a good idea to look it up and familiarize yourself.&lt;br /&gt;
&lt;br /&gt;
==== Polarimetry and interferometry ====&lt;br /&gt;
&lt;br /&gt;
Thus far we have only considered backscatter images: maps of the intensity of reflected radio energy. But a good deal of SAR’s value is beyond this kind of data. As well as recording how much energy is in reflected radio waves, SAR sensors characterize the radio waves themselves.&lt;br /&gt;
&lt;br /&gt;
Let’s use Sentinel-1 as an example for polarimetry. S1 sends radio waves in the vertical polarization, abbreviated V, and records them in both vertical and horizontal, or H, polarizations. In practice, this means that when you download an S1 frame in the usual way, you see two images, labeled VV (where the sensor transmitted V and measured V) and VH (where it transmitted V and measured H). The ratio of the two bands therefore tells you (in a general, statistical way, within the constraints of speckling) how much the surface at a given pixel tends to return a radio signal at that frequency and angle in the same polarization.&lt;br /&gt;
&lt;br /&gt;
Why do we care? Because direct reflection and corner reflectors tend to return waves at the same polarization (for Sentinel-1, always VV), while volumes that scatter waves return proportionally more cross-polarized (VH) waves. The second category is mainly vegetation and soil, while the first is corner reflectors, metal, and so on – proportionally more artificial surfaces. You can literally get a PhD in the nuances of SAR polarimetry, but at the most basic level, it tells you something about surface properties that no other sensor would.&lt;br /&gt;
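&lt;br /&gt;
In practice this band comparison is often expressed as a ratio in decibels. A small numpy sketch with hypothetical linear-power backscatter values:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def cross_pol_ratio_db(vv, vh, eps=1e-12):
    """Co- v. cross-polarized ratio in decibels, from linear-power backscatter."""
    return 10.0 * np.log10((vv + eps) / (vh + eps))

# Hypothetical pixels: a corner-reflector-like target returns nearly all of
# its energy co-polarized; a vegetated pixel scatters much more into VH.
vv = np.array([0.50, 0.08])   # corner reflector, vegetation
vh = np.array([0.005, 0.04])
print(cross_pol_ratio_db(vv, vh))  # roughly [20.0, 3.0]
```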
&lt;br /&gt;
Interferometry with SAR, or inSAR, compares wave phase between observations. The phase of a wave is where it is in its cycle when received. Using sound as an example, measuring the phase of a sound at a given moment means not just its volume and pitch but that the sound wave is, say, 23% of the way into its high pressure half, or exactly at the lowest-pressure point.&lt;br /&gt;
&lt;br /&gt;
Suppose we make a SAR image of an area and record not only the amplitude but also the phase of the signal at every pixel. Now, after some time, the satellite’s orbit repeats, and at exactly the same moment in this new orbit (and therefore at exactly the same point in space relative to Earth), we take the same image again. There’s been some change over time that might represent, say, the soil drying out, a road being built, or a tree falling over. But the change in phase over relatively large, coherent regions can be interpreted as the surface getting nearer or farther away by (potentially) very small fractions of a wavelength – on the order of cm. This is an idealized version of inSAR.&lt;br /&gt;
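&lt;br /&gt;
The phase-to-distance relationship is simple to state: a line-of-sight displacement of d changes the round-trip path by 2d, so the phase change is 4πd/λ. A sketch assuming a Sentinel-1-like C-band wavelength of about 5.5 cm:&lt;br /&gt;
&lt;br /&gt;
```python
import math

WAVELENGTH_M = 0.055  # C-band, Sentinel-1-like (~5.5 cm)

def phase_to_los_displacement(delta_phase_rad, wavelength=WAVELENGTH_M):
    """Convert an interferometric phase change to line-of-sight displacement.
    The factor is 4*pi rather than 2*pi because the signal travels the
    extra distance twice: out and back."""
    return wavelength * delta_phase_rad / (4.0 * math.pi)

# One full fringe (2*pi of phase) corresponds to half a wavelength of motion.
print(phase_to_los_displacement(2 * math.pi))  # 0.0275 m, i.e. 2.75 cm
```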
&lt;br /&gt;
Geologists use this to [https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021GL093043 map earthquakes], but you can also use it for drought (because dry land sags), [https://site.tre-altamira.com/long-term-satellite-study-over-the-london-basin/ tunneling], [https://www.researchgate.net/figure/InSAR-measured-subsidence-rates-on-the-Mosul-dam-Iraq-Negative-values-indicate-motion_fig1_311451580 dam] and [https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-1/Satellites_confirm_sinking_of_San_Francisco_tower building] subsidence, [https://www.nature.com/articles/s41598-020-74957-2 underground explosion monitoring], and so on – in theory, anything that changes the distance between the satellite and the surface. You can even use decoherence (the breakdown of continuity between observations, which makes inSAR hard) for [https://www.academia.edu/44939771/Damage_detection_using_SAR_coherence_statistical_analysis_application_to_Beirut_Lebanon damage detection].&lt;br /&gt;
&lt;br /&gt;
When inSAR works, it’s like magic. You can pick up extremely subtle effects over large areas. It does have limits, like that you can only measure displacement towards or away from the satellite(s), which for SAR is always at least somewhat to the side, which is not necessarily in the direction you actually care about (say, up/down). And as you would expect, it tends to require a lot of very good data (because, for example, satellite orbits are never absolutely perfect repeats), expertise, and minutes to days of fine-tuning.&lt;br /&gt;
&lt;br /&gt;
=== LIDAR ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# 3D (survey-style) lidar&lt;br /&gt;
# 2D (transect-style) lidar&lt;br /&gt;
&lt;br /&gt;
== Image delivery ==&lt;br /&gt;
&lt;br /&gt;
=== Processing levels ===&lt;br /&gt;
&lt;br /&gt;
In theory, data processing levels are standard across the industry. In practice, different providers tend to make up their own definitions as necessary, and you should refer to source-specific documentation. But typically, the commonly seen processing levels are:&lt;br /&gt;
&lt;br /&gt;
'''Level 0''': Unprocessed data, more or less as downlinked to the ground station. Generally not sold or publicly released.&lt;br /&gt;
&lt;br /&gt;
'''Level 1''': Basic data in sensor units, for example TOA radiance. Often has a letter suffix with source-specific meaning, e.g., T to indicate a terrain-corrected version.&lt;br /&gt;
&lt;br /&gt;
'''Level 2''': Derived data in geophysical units, for example surface reflectance. Has been through high-level processing (e.g., atmospheric correction) that involves estimation or modeling.&lt;br /&gt;
&lt;br /&gt;
As a rule of thumb, use level 1 if the imagery itself is the focus and you want to analyze the data in a custom way; use level 2 if you just want something that works out of the box to achieve some further goal. But again, the practical meaning of the levels depends on the dataset, so check to make sure you’re getting what you want.&lt;br /&gt;
&lt;br /&gt;
=== Formats and projections ===&lt;br /&gt;
&lt;br /&gt;
Image data generally comes in formats optimized for large payloads and good metadata. These include NetCDF, NITF, JPEG2000, and GeoTIFF. GDAL, which is included in QGIS, can read virtually any reasonable format. If you have a choice, GeoTIFF is usually the best.&lt;br /&gt;
&lt;br /&gt;
A geographic projection is basically an invertible function (a reversible, one-to-one relationship) from a sphere to a plane. (If you know enough about geodesy to be saying “the spheroid, actually” right now, go read something more appropriate to your level of expertise 😉.) In other words, for a longitude and a latitude on Earth, a given projection gives you a corresponding x and y that you use to store and display the data.&lt;br /&gt;
&lt;br /&gt;
Typical georeferencing metadata says: (1) here is the projection of the data in this file, and (2) here is where the data in this file lies on the abstract 2D plane defined by that projection.&lt;br /&gt;
&lt;br /&gt;
You may also encounter data that is not projected in any strictly defined way. This might be as simple as a photo taken with a phone out a plane window. In theory you could define a projection for it if you knew parameters like the 3D GPS location of the phone, the angle it was pointed at, its camera’s field of view, and the small distortions introduced by its lens. But in practice it’s usually easier to find known points in the image and “tie down” or georeference the image based on those points. Given at least 3 but ideally more known points, you(r software) can warp the image into some standard projection. It’s deriving an arbitrary projection from pixel space to geographical coordinates by running a regression on the pixel-to-location pairs you provide. These known points are called ground control points, or GCPs. Some data, like Sentinel-1 SAR, is provided unprojected but with GCPs. This leaves more work for the user, but also more flexibility if you want to adjust the GCPs.&lt;br /&gt;
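&lt;br /&gt;
To make the “regression on pixel-to-location pairs” idea concrete, here is a minimal, self-contained sketch of a first-order (affine) fit from GCPs. The GCP values are hypothetical, and real tools such as GDAL also offer higher-order polynomials and thin-plate splines.&lt;br /&gt;
&lt;br /&gt;
```python
def fit_affine(gcps):
    """Least-squares affine fit from ground control points.
    Each GCP is ((col, row), (x, y)): pixel coords to map coords.
    Needs at least 3 non-collinear points. Roughly what warping
    to GCPs does with a first-order polynomial."""

    def solve3(m, v):
        # Gaussian elimination with partial pivoting on a 3x3 system.
        rows = [list(m[i]) + [v[i]] for i in range(3)]
        for i in range(3):
            pivot = max(range(i, 3), key=lambda r: abs(rows[r][i]))
            rows[i], rows[pivot] = rows[pivot], rows[i]
            for r in range(i + 1, 3):
                f = rows[r][i] / rows[i][i]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[i])]
        p = [0.0, 0.0, 0.0]
        for i in (2, 1, 0):
            p[i] = (rows[i][3] - sum(rows[i][j] * p[j]
                                     for j in range(i + 1, 3))) / rows[i][i]
        return p

    # Normal equations (A^T A) p = A^T b, solved per output axis,
    # with x = a*col + b*row + c (and likewise for y).
    design = [[c, r, 1.0] for (c, r), _ in gcps]
    ata = [[sum(u[i] * u[j] for u in design) for j in range(3)]
           for i in range(3)]
    return [solve3(ata, [sum(u[i] * target[axis]
                             for u, (_, target) in zip(design, gcps))
                         for i in range(3)])
            for axis in (0, 1)]

# Hypothetical GCPs: 10 m pixels, north-up, origin at (500000, 4200000).
gcps = [((0, 0), (500000.0, 4200000.0)),
        ((100, 0), (501000.0, 4200000.0)),
        ((0, 100), (500000.0, 4199000.0))]
px, py = fit_affine(gcps)  # px ~ [10, 0, 500000]; py ~ [0, -10, 4200000]
```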
&lt;br /&gt;
There are several standard ways to represent projections, notably WKT, PROJ strings, and EPSG codes. We’ll give EPSG codes here.&lt;br /&gt;
&lt;br /&gt;
Probably the most common projection you will see for raw data is Universal Transverse Mercator, or UTM. It’s actually a family of projections with the same formula but different parameters, each adapted to a different meridional slice of Earth’s surface. These UTM zones are named with numbers and north/south hemispheres: Paris is in UTM zone 31N, Geneva is in 32N, and Sydney is in 56S. (If you’ve used the MGRS grid system, this should sound familiar, but it’s not identical.) Within a zone, UTM is very close to equal-area and conformal, which are the most important properties for a projection if you want to do analytical work. Equal-area means 1 km² is the same number of pixels at any point in the projection, and conformal means that 1 km is the same number of pixels in every direction from any given point within the projection. (On a non-conformal map, circles appear as ovals, squares are rectangles, etc. This is a massive pain in the ass.) UTM is EPSG:32XYY, where X is 6 for N and 7 for S, and YY is the zone number, so for example 13S is EPSG:32713.&lt;br /&gt;
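&lt;br /&gt;
The EPSG numbering rule can be captured in a few lines of Python. The longitude-to-zone helper is a simplification that ignores the official grid’s Norway and Svalbard exceptions.&lt;br /&gt;
&lt;br /&gt;
```python
def utm_epsg(zone, hemisphere):
    """EPSG code for a WGS84 UTM zone: 326YY north, 327YY south."""
    if zone not in range(1, 61):
        raise ValueError("UTM zones run from 1 to 60")
    return (32600 if hemisphere.upper() == "N" else 32700) + zone

def utm_zone_for(lon):
    """Rough zone from longitude alone (zones are 6 degrees wide);
    ignores the official grid's Norway/Svalbard exceptions."""
    return int((lon + 180) // 6) + 1

code = utm_epsg(13, "S")          # EPSG:32713, as in the text
paris_zone = utm_zone_for(2.35)   # 31, matching "Paris is in 31N"
```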
&lt;br /&gt;
For display on standard web maps, people often use web Mercator, a.k.a. spherical Mercator, which is not equal-area at the global scale, but is conformal. This is why web maps make Greenland far too big, but it remains approximately the right shape. For local analysis, web Mercator is fine (equivalent to UTM, actually), and can be a decent choice if you understand the issues with scale across large areas. EPSG:3857.&lt;br /&gt;
&lt;br /&gt;
The other projection you’ll see the most is equirectangular or plate carrée, which uses longitude and latitude directly as x and y coordinates on a plane. It is neither equal-area nor conformal, and basically only exists because the math is easy. It’s often used by people who should know better. Its non-conformality means that any time you’re working near the poles, everything is squeezed, and you’re either overzooming one dimension, losing data in the other, or both. If you just want to scatterplot some points as quickly as possible, equirectangular is fine, but avoid it when doing anything with imagery. EPSG:4326. (Note that this is the EPSG of WGS84, the geodetic standard that defines things like the prime meridian. Many, many other projections refer to WGS84 in their definitions. But using WGS84 as a projection itself, instead of as an ingredient in a projection, is the equirectangular projection.)&lt;br /&gt;
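&lt;br /&gt;
The squeeze is easy to quantify: in equirectangular, the east-west ground scale shrinks by the cosine of latitude relative to the north-south scale. A quick sketch:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def east_west_squeeze(lat_deg):
    """In equirectangular (plate carree), one degree of longitude
    covers cos(latitude) times the ground distance of one degree of
    latitude, so features appear horizontally squeezed by this
    factor away from the equator."""
    return math.cos(math.radians(lat_deg))

# At 60 degrees latitude the east-west scale is half the
# north-south scale: a square building renders as a 2:1 rectangle.
squeeze_60 = east_west_squeeze(60.0)  # 0.5
```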
&lt;br /&gt;
The details of projections are notoriously tricky; it’s hard to work with them in a strictly correct and optimal way at all times. It’s the kind of topic that attracts pedantry and flamewars, unfortunately. Here’s some advice, none of it ironclad:&lt;br /&gt;
&lt;br /&gt;
# Most imagery data, if it’s projected at all, is already in an appropriate projection as it arrives from the data provider. If you can, leave it as-is. Every reprojection involves resampling the data, which generally loses information.&lt;br /&gt;
&lt;br /&gt;
# You should rarely have to think about projections explicitly. The whole point of a projection is to let you think in terms of pixels and/or meters; if you find yourself fighting your projection, something is wrong. Make sure you’re taking full advantage of your tools’ ability to handle these things automatically.&lt;br /&gt;
&lt;br /&gt;
# If you’re working on a multi-source project, choose a suitable projection at the start and project all data into it ''once'', when you import it.&lt;br /&gt;
&lt;br /&gt;
# Most pain around projections comes from accidentally mixing projections. Don’t do that.&lt;br /&gt;
&lt;br /&gt;
# The local UTM is usually a reasonable choice.&lt;br /&gt;
&lt;br /&gt;
=== Bundles ===&lt;br /&gt;
&lt;br /&gt;
Imagery is most often supplied in bundles, which are directories with image data files, usually separated by band or polarization (at least at level 1), and text (XML, json, etc.) metadata files. Some analysis tools will have plugins that will open specific types of bundles as single objects, automatically applying calibration data found in the metadata and so forth. In other situations you might open the image file and have to parse the metadata with your own code or by hand. If you’re getting to know a new imagery source, going through and understanding the purpose of everything delivered in a bundle is a great way to start.&lt;br /&gt;
&lt;br /&gt;
=== DN and PN ===&lt;br /&gt;
&lt;br /&gt;
Image formats generally store integers, since they losslessly compress better and are often easier to work with than floating point numbers. However, this presents a problem if, for example, the units being represented are reflectance, which ranges from 0 to 1. If we round every reflectance value to either 0 or 1, we’re delivering 1-bit data that’s probably close to totally useless. To address this, we might scale up to, say, 0 through 100 and say that instead of recording reflectance fraction, we’re recording reflectance percentage – fraction × 100. That still leaves us with less than 7 bits of radiometric resolution, though. Really, we’d like to be able to scale our values into an arbitrary range, maybe 0 through 65,535 to make full use of a 16-bit image, and send it with some metadata that tells how to get it back into some absolute or physically meaningful unit. You could even change the scaling factor per scene to optimize for bright v. dark, for example. And this is what providers generally do. The values actually stored in the image format are called digital numbers, or DN, and the values after scaling (typically with a multiplicative and an additive coefficient) are physical numbers, or PN.&lt;br /&gt;
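&lt;br /&gt;
A minimal sketch of the scaling step, with made-up coefficients – real products publish their own per-band (sometimes per-scene) gains and offsets in their metadata:&lt;br /&gt;
&lt;br /&gt;
```python
def dn_to_pn(dn, gain, offset):
    """Apply the metadata scaling coefficients: PN = gain * DN + offset.
    The gain and offset here are hypothetical; check the product
    documentation for real values."""
    return gain * dn + offset

# Hypothetical 16-bit product: DNs 0..65535 mapped to reflectance 0..1.
gain = 1.0 / 65535.0
reflectance = dn_to_pn(32768, gain, 0.0)  # ~0.5
```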
&lt;br /&gt;
Not all providers do this. Sentinel-2 level 1C data, for example, uses a globally constant scaling factor, which means different bands have a defined relationship even if you read raw, unscaled pixels out of them – which is great. Per-product (or per-scene) scaling remains the most common approach, however. Basically, don’t assume that pixels mean anything with an absolute definition, especially compared to pixels from another band or scene, unless you know that they’re PN.&lt;br /&gt;
&lt;br /&gt;
For most OSINT-relevant analysis, working in DN is a [https://en.wikipedia.org/wiki/Venial_sin venial sin] at worst and often justifiable. But it is useful to know what it means and to recognize situations where you should convert to PN. Any tool designed to work with remote sensing data will at least have some affordance for DN to PN scaling, and, again, may be able to parse the parameters out of a bundle (or in-image-file metadata) and apply them transparently so you never have to think about it.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2203</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2203"/>
		<updated>2022-07-20T22:03:57Z</updated>

		<summary type="html">&lt;p&gt;Vruba: Adding delivery material&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this is the heritage of Cold War IMINT workers wanting shadows to estimate structure heights. (If you image around noon, you get vertical shadows in the tropics. This gives you depth perception problems, like you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all commercial satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to complete one full orbit – another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: about 400 km (250 mi) to either side of the ground track, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
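&lt;br /&gt;
The trig behind that figure, as a sketch (flat-Earth approximation; Earth’s curvature stretches the real distance a bit further):&lt;br /&gt;
&lt;br /&gt;
```python
import math

def ground_reach_km(altitude_km, off_nadir_deg):
    """Ground distance from the sub-satellite point reachable at a
    given look angle, ignoring Earth curvature."""
    return altitude_km * math.tan(math.radians(off_nadir_deg))

# From a Landsat-like 705 km altitude, 30 degrees off-nadir reaches
# roughly 400 km to either side of the ground track.
reach = ground_reach_km(705, 30)  # ~407 km
```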
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimensions we think of most often are spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
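&lt;br /&gt;
A linear mixing model makes the napkin example concrete; the reflectance values below are illustrative, not measured:&lt;br /&gt;
&lt;br /&gt;
```python
def mixed_pixel(frac_target, target_reflectance, background_reflectance):
    """Linear mixing model: a pixel's value is the area-weighted
    average of what falls inside it."""
    return (frac_target * target_reflectance
            + (1 - frac_target) * background_reflectance)

# A 10 cm napkin covers at most (10/25)^2 = 16% of a 25 cm pixel,
# but a bright napkin on dark asphalt still moves the pixel a lot.
frac = (0.10 / 0.25) ** 2
napkin_pixel = mixed_pixel(frac, 0.9, 0.05)  # ~0.19
clean_pixel = mixed_pixel(0.0, 0.9, 0.05)    # 0.05
```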
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor”, we understand this by default to mean spatial resolution. This is also called ground sample distance (GSD) or ground resolved distance (GRD), and refers to the dimensions of the pixels of the data. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. If we think about standard optical instruments, which are basically telescopes with CCDs, they do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – a fraction of the arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth at a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) and looking at nadir (straight down). That’s a best case. When looking to the side, or at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked consideration with spatial resolution is that pixel area is the square of pixel side length, and area is what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is not 3× but more like 9× as clear as 30 m imagery, all else being equal.&lt;br /&gt;
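&lt;br /&gt;
The arithmetic, spelled out:&lt;br /&gt;
&lt;br /&gt;
```python
def pixels_per_area(gsd_m, area_m2=1.0e6):
    """Number of square pixels of side gsd_m covering a ground area
    (default: one square kilometer)."""
    return area_m2 / (gsd_m ** 2)

# Halving the GSD quadruples the pixel count, and 10 m vs 30 m
# imagery differs by a factor of 9, not 3.
ratio_10_vs_30 = pixels_per_area(10) / pixels_per_area(30)  # 9.0
```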
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered less by the atmosphere than visible light, and especially less than blue light. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
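&lt;br /&gt;
As a concrete example of the kind of band math this enables, here is NDVI computed from illustrative (not measured) reflectance values:&lt;br /&gt;
&lt;br /&gt;
```python
def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red)/(NIR + Red).
    Values near +1 suggest dense healthy vegetation; bare soil and
    built surfaces sit near 0; water is typically negative."""
    return (nir - red) / (nir + red)

# Illustrative reflectances, not from any real scene:
vegetation = ndvi(nir=0.50, red=0.05)  # ~0.82
bare_soil = ndvi(nir=0.30, red=0.25)   # ~0.09
```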
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These are called the panchromatic (or pan) band and, collectively, the multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, for example, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits; newer and better ones are typically 12 to 14.&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks on the Sahara, despite the tracks being made out of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford a lot of the highest-spatial-resolution imagery. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
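&lt;br /&gt;
A quick Monte Carlo check of the 1/sqrt(n) claim, assuming independent Gaussian noise:&lt;br /&gt;
&lt;br /&gt;
```python
import random
import statistics

random.seed(0)  # deterministic run

def averaged_noise_std(n_bands, n_pixels=20000):
    """Estimate the noise standard deviation after averaging n_bands
    of independent, unit-standard-deviation Gaussian noise."""
    samples = [
        sum(random.gauss(0, 1) for _ in range(n_bands)) / n_bands
        for _ in range(n_pixels)
    ]
    return statistics.stdev(samples)

measured = averaged_noise_std(16)  # expect ~ 1/sqrt(16) = 0.25
```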
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding errors or aliasing: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of talking about effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is using air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh scattering from aerosols, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. Note that in μm, this range is 1.567 to 1.651. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
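&lt;br /&gt;
For instance, NDVI is a single line of band math. A sketch with made-up reflectance values (real work would load whole band arrays):&lt;br /&gt;
```python
import numpy as np

# Made-up surface reflectances for three pixels: forest, bare soil, sand.
red = np.array([0.05, 0.20, 0.30])
nir = np.array([0.45, 0.30, 0.35])

# NDVI = (NIR - red) / (NIR + red); it runs from -1 to 1, with healthy
# vegetation scoring high because chlorophyll absorbs red while leaf
# structure reflects NIR strongly.
ndvi = (nir - red) / (nir + red)
print(ndvi)  # the forest pixel stands out far above the bare surfaces
```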
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue bands of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, so favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
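&lt;br /&gt;
In code, assembling a combination like 543 is just channel stacking. A minimal sketch, with placeholder band arrays standing in for real data:&lt;br /&gt;
```python
import numpy as np

# Placeholder band arrays keyed by band number; real ones would be read
# from the scene's files and share the same shape.
bands = {n: np.full((2, 2), float(n)) for n in range(1, 12)}

def composite(combo, bands):
    """Stack a combination like (5, 4, 3): first band into the red
    channel, second into green, third into blue."""
    return np.dstack([bands[n] for n in combo])

false_color = composite((5, 4, 3), bands)  # Landsat 8/9 NIR-red-green combo
print(false_color.shape)  # (2, 2, 3)
```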
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will be (typically) some multiple of 2 larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
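&lt;br /&gt;
The overlay intuition can be sketched as a Brovey-style transform: scale the (already upsampled) color pixels so their brightness matches the sharp pan band. Toy random arrays here, not a production algorithm:&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(2)
pan = rng.uniform(0.2, 1.0, size=(4, 4))      # sharp panchromatic band
rgb = rng.uniform(0.2, 1.0, size=(4, 4, 3))   # color, pretend-upsampled to match

# Rescale each color pixel so its mean brightness equals the pan value:
# the spatial detail now comes from pan, the hue from the multispectral data.
intensity = rgb.mean(axis=2, keepdims=True)
sharpened = rgb * (pan[..., None] / intensity)

print(np.allclose(sharpened.mean(axis=2), pan))  # True
```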
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;br /&gt;
&lt;br /&gt;
==== Georeferencing and orthorectification ====&lt;br /&gt;
&lt;br /&gt;
''Much of this applies outside optical as well – move?''&lt;br /&gt;
&lt;br /&gt;
A raw satellite image of land is an angled view of a rough surface. (Even nominally nadir-pointing satellites acquire imagery that is off-nadir toward its edges.) If you imagine riding on a satellite and looking off to, say, the west, you will see the eastern sides of hills and buildings at flatter angles than you see the western sides – if you can see them at all. To turn a raw image into something that is projected orthographically, like a map, you have to use a terrain model – a 3D map of the planet’s surface. Then you can use information about where the satellite was and the angle its sensor was pointing, and for each pixel in the output image, you can project it out to see at what latitude and longitude it must have intersected the ground. Then you move all the pixels to their coordinates in some convenient projection, and you’ve essentially taken the image out of perspective and made it orthographic.&lt;br /&gt;
&lt;br /&gt;
Except:&lt;br /&gt;
&lt;br /&gt;
* Earth’s surface is rough at every scale, and even “porous” or multiply defined in the sense that there are features like leafless trees that make it hard to define where the optical surface actually ''is'' at any given scale.&lt;br /&gt;
* There is no perfectly [https://en.wikipedia.org/wiki/Accuracy_and_precision accurate, precise], global, completely up-to-date terrain model of the Earth, let alone at a reasonable price. SRTM is pretty good but it’s only about 30 m, stops short of the Arctic, and is 20+ years out of date: there are entire lakes, highway cuts, and reclaimed islands that don’t exist in it.&lt;br /&gt;
* Satellites typically only know where they’re pointing to within the equivalent of about 10 pixels (which, to be fair, is usually an extremely small fraction of a degree), so the pointing data can only narrow things down, not actually tell you where you are.&lt;br /&gt;
* Continental drift means that a continent can move by easily 1 px over the lifetime of a high-end commercial satellite; a major earthquake can discontinuously distort a small region by several m.&lt;br /&gt;
* To properly pin down an image (i.e., to check the reported pointing angle), you need to know the exact 3D location of 3 visible points within it, and realistically more like 10.&lt;br /&gt;
* All these errors can combine.&lt;br /&gt;
* No matter what, you can’t recover occluded features, i.e. things you can’t see in the original data. If you want a high-res satellite image of something like a canyon, you realistically need half a dozen images at very specific angles, which is extremely hard.&lt;br /&gt;
&lt;br /&gt;
We could go on! Georeferencing and orthorectification is a difficult problem. It’s easier for lower-resolution satellites, because a given angular error comes out to fewer pixels. Also, survey-mode satellites like Landsat and Sentinel-2, which are nadir-pointing anyway, put a lot of effort into doing this well. Two Landsat scenes will almost always coregister to well within a pixel. Sentinel-2 is a little less reliable, especially toward the poles. Commercial imagery is often displaced by far more than you would think. One way to see this is to step back in Google Earth Pro’s history tool, especially somewhere relatively remote and rugged.&lt;br /&gt;
&lt;br /&gt;
Here’s a farm in Nepal: 28.553, 84.2415. Just step back in time and watch it jump around underneath the pin. If you really want to be scared, watch the cliff to its north. This is why imagery analysts who understand imagery pipelines rarely use a whole lot of significant digits in their coordinates! You don’t really know where anything on Earth is, in absolute terms, to better than a few meters if all you have to go on is a satellite image.&lt;br /&gt;
&lt;br /&gt;
==== Atmospheric correction ====&lt;br /&gt;
&lt;br /&gt;
Over long distances, even in clear weather, the atmosphere scatters and absorbs light. This is why distant hills are low-contrast and blueish (blue light is scattered more). What a satellite actually measures is called top-of-atmosphere radiance, or TOA. This is a measurement of nothing more than the amount of energy received per second, per pixel, per band. It can be measured pretty objectively. However, it’s often not what you want. For one thing, it’s too blue. For another, the amount of blueness and related effects will vary semi-randomly with atmospheric conditions (humidity, maybe dust storms or wildfire smoke, etc.) and predictably with season.&lt;br /&gt;
&lt;br /&gt;
Therefore, a reasonable desire is to basically normalize the sun and remove the effects of the atmosphere. What we’re trying to model here is called surface reflectance (SR). The main issue is that we don’t know the true state of the atmosphere at the moment the image was acquired. The best we can do is to model it and subtract it out. This is one of ''the'' problems in remote sensing, and you could earn a PhD by improving [https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes#Table_of_models one of the major models] by a few percent.&lt;br /&gt;
&lt;br /&gt;
The good news is there’s a brutally simple method that works pretty well most of the time. Dark object subtraction means assuming that the darkest pixel in the image should be pure black. Therefore, if you subtract out however much blue (and green, and so on) signal is present in the darkest pixel, you will have canceled out all the haze. It’s annoying how well this works considering how basic it is. It’s roughly equivalent to the auto-adjust tool in an image editor like Photoshop, or, to be a little more exact, a little like using the eyedropper in the Levels tool to set the black point to the darkest pixel.&lt;br /&gt;
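&lt;br /&gt;
Dark object subtraction is a few lines of numpy. In this sketch the haze offsets are invented, and the toy scene is guaranteed to contain a black pixel so the method’s assumption holds exactly:&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(3)
scene = rng.uniform(0.0, 0.5, size=(50, 50, 3))   # "true" surface signal
scene[0, 0] = 0.0                                  # a genuinely black pixel
haze = np.array([0.08, 0.05, 0.02])                # invented: strongest in blue
observed = scene + haze                            # what the satellite sees

# The darkest value in each band is taken to be pure haze and subtracted.
dark_object = observed.min(axis=(0, 1))
corrected = observed - dark_object

print(dark_object)  # recovers [0.08, 0.05, 0.02]
```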
&lt;br /&gt;
Correction to reflectance may or may not attempt to correct for terrain effects (i.e., relighting the scene). Different pipelines have different conventions for how far to correct or what to call different kinds of correction.&lt;br /&gt;
&lt;br /&gt;
Atmospheric correction is usually not key for OSINT purposes, but any time you find yourself taking exact measurements of pixel values, you should at least know whether you’re working in TOA or in SR, and if SR, you should have a sense of what the pipeline was.&lt;br /&gt;
&lt;br /&gt;
==== Common optical sensor types ====&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# Pushbroom&lt;br /&gt;
# Whiskbroom&lt;br /&gt;
# Full-frame&lt;br /&gt;
&lt;br /&gt;
=== Thermal ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
=== Synthetic aperture radar ===&lt;br /&gt;
&lt;br /&gt;
Synthetic aperture radar, or SAR, creates images with radio waves in wavelengths around 1 cm to 1 m.&lt;br /&gt;
&lt;br /&gt;
As a very first approximation, a SAR image is comparable to an optical image that shows objects that reflect radio waves instead of those that reflect visible light.&lt;br /&gt;
&lt;br /&gt;
==== SAR is not just a regular camera but for radar instead of light ====&lt;br /&gt;
&lt;br /&gt;
Beyond the fact that both modalities create images, SAR works on completely different principles from standard optical imaging, and understanding it requires understanding those principles.&lt;br /&gt;
&lt;br /&gt;
This page will only lightly outline how SAR works; for the math, please refer to SERVIR’s [https://servirglobal.net/Global/Articles/Article/2674/sar-handbook-comprehensive-methodologies-for-forest-monitoring-and-biomass-estimation SAR Handbook] (forest-oriented but with solid fundamentals), the NOAA/NESDIS [https://www.sarusersmanual.com/ Synthetic Aperture Radar Marine User’s Manual], or another good text. Here we will only point out some key ideas. If you want to get full value out of SAR, you should expect to invest at least a few hours in learning how it actually works. There’s a reason SAR experts tend to be a bit snobbish about it: it’s complex, subtle, and highly rewarding.&lt;br /&gt;
&lt;br /&gt;
==== SAR is active ====&lt;br /&gt;
&lt;br /&gt;
Optical sensors are almost all passive: they use energy that objects are already reflecting (usually from the sun) or producing (for example, in the thermal infrared). In contrast, SAR is active: it sends out a pulse of radar energy, roughly analogous to the flash on a camera.&lt;br /&gt;
&lt;br /&gt;
==== SAR resolves space with time, not with focus ====&lt;br /&gt;
&lt;br /&gt;
SAR’s resolution is based on the timing of returning signals. It does not pass the energy it senses through a focusing lens or mirror the way an optical sensor does. This leads to properties that are highly unintuitive if you think of it as merely “optical but in a different frequency” – for example, it does not lose resolution with distance; there is no exact equivalent of perspective.&lt;br /&gt;
&lt;br /&gt;
==== Speckle ====&lt;br /&gt;
&lt;br /&gt;
Like a laser beam, a SAR signal interferes with itself. At a given moment and a given point, its waves may be canceling out or adding up. This means that a SAR image is intrinsically grainy or stippled-looking. This is not the same as sensor noise, because the effect is physically real and not a problem of errors in measurement. It can be mitigated by downsampling, averaging images from different “pings”, or applying despeckling filters. (A simple local median works reasonably well, but there’s a range of sophistication all the way up to sensor-specific filters based on physical models, extra inputs, fancy machine learning, etc.)&lt;br /&gt;
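&lt;br /&gt;
A minimal despeckling sketch using the simple local median mentioned above, on synthetic speckle (scipy assumed available):&lt;br /&gt;
```python
import numpy as np
from scipy.ndimage import median_filter

# Single-look intensity speckle is roughly multiplicative and exponentially
# distributed with mean 1; apply it to a flat scene.
rng = np.random.default_rng(4)
truth = np.ones((64, 64))
speckled = truth * rng.exponential(1.0, size=truth.shape)

despeckled = median_filter(speckled, size=5)  # 5x5 local median

print(np.std(speckled), np.std(despeckled))  # graininess drops sharply
```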
&lt;br /&gt;
==== Retroreflection and multiple reflection ====&lt;br /&gt;
&lt;br /&gt;
One consequence of SAR being active sensing is that it sees very bright returns from concave right angles made out of metal, which act as [https://en.wikipedia.org/wiki/Corner_reflector corner reflectors]. (Notice how road signs and markers seem to glow disproportionately in headlights – it’s because those are [https://en.wikipedia.org/wiki/Retroreflector retroreflectors] in the optical range.) Highly developed cities, for example, are very retroreflective to radar. This shows up especially where the angle of the sensor’s view aligns to a street grid, when it’s called the cardinal effect. (See, for example, [https://www.mdpi.com/2072-4292/12/7/1187/htm this academic paper], where they propose using retroreflection specifically to classify urban landcover. In general, there are very few radio-frequency corner reflectors in nature, and retroreflection is a good sign that you’re looking at a building, vehicle, etc.)&lt;br /&gt;
&lt;br /&gt;
Where the reflection is separated enough from the first reflecting surface that you can see both independently, we use the term multiple reflection (or mirroring or ghosting). This most often happens where tall buildings or bridges are next to or over water. A radio wave may hit the water, then a bridge, then return to the sensor; another may hit a bridge, then the water, and return to the sensor, and so on, and you’ll see images of multiple bridges.&lt;br /&gt;
&lt;br /&gt;
==== Layover and shadowing ====&lt;br /&gt;
&lt;br /&gt;
Layover (a.k.a. relief displacement) is an effect that makes objects at higher elevations appear closer to the sensor. This happens because the radio waves from the top of a vertical object arrive back at the sensor (which is above and to the side of the object) before the radio waves from its base. This is most obvious with truly vertical objects like radio towers and skyscrapers, but surfaces that have any vertical component (hills, for example) will show some degree of layover. Ultimately, layover comes from the difference between slant range, which is what the sensor actually measures – distance from the sensor – and ground range, which is what we tend to intuitively want or expect when we look at a map-like image.&lt;br /&gt;
&lt;br /&gt;
The painfully counterintuitive aspect, if you’re looking at a SAR image as if it were an ordinary optical image, is that layover goes in the opposite direction – buildings, for example, lean toward the sensor. For example, if you take a normal photo of a tall building from the south, it will cover the ground to its north. This feels normal because cameras, telescopes, etc., work on the same basic principle as the eye. But if you collect a SAR image of the same building from the south, it will cover the ground to its south. (Also, it won’t actually mask that ground, it will just add its signal in.)&lt;br /&gt;
&lt;br /&gt;
Shadowing is the lack of data returned from surfaces facing away from the sensor. The shadowed side of terrain is stretched out as part of layover.&lt;br /&gt;
&lt;br /&gt;
SAR imagery can be terrain corrected. Basically, this is a process that uses (1) the satellite’s position and the characteristics of its instrument and (2) a DEM or other model of the terrain it was looking at, and uses these to warp the SAR imagery into map coordinates and account for shadowing. Whether this is worthwhile will depend on the quality of the terrain correction algorithm and the data you can give it, and on what you need to analyze.&lt;br /&gt;
&lt;br /&gt;
In general, be cautious with terrain correction, because it can never fully correct for all effects (e.g., BRDF of different landcovers), and it can magnify small problems in input data. Sometimes it’s better to have a strange-looking image that you know how to interpret than a “normalized” one with subtle errors.&lt;br /&gt;
&lt;br /&gt;
==== Clouds and many other materials are generally transparent to SAR ====&lt;br /&gt;
&lt;br /&gt;
SAR frequencies are typically chosen to cut through weather. While this is a massive advantage of SAR over optical (the average place on Earth is cloudy roughly half the time), it’s also not absolute. Heavy rain, for example, can show up as ghostly features in some bands, so be on the lookout for it. If you see something you can’t interpret that might be weather-related, check the weather for the place at the time of image acquisition!&lt;br /&gt;
&lt;br /&gt;
More generally – beyond the specific case of water vapor in air – SAR interacts with materials differently than light does. For example, it reflects more off liquid water, so you can’t see into shallows with SAR the way you can with optical. On the other hand, it interacts less with certain very dry materials, so it can cut through loose sand, dead vegetation, and so on. (For example, SAR is used to map ancient river systems under the Sahara [https://www.mdpi.com/2073-4441/9/3/194/htm because it can image bedrock under loose, dry sand].) The details of SAR signal interaction depend on wavelength, angle, and other factors; if you’re doing more than casual interpretation of data from a given sensor, it’s a good idea to look it up and familiarize yourself.&lt;br /&gt;
&lt;br /&gt;
==== Polarimetry and interferometry ====&lt;br /&gt;
&lt;br /&gt;
Thus far we have only considered backscatter images: maps of the intensity of reflected radio energy. But a good deal of SAR’s value is beyond this kind of data. As well as recording how much energy is in reflected radio waves, SAR sensors characterize the radio waves themselves.&lt;br /&gt;
&lt;br /&gt;
Let’s use Sentinel-1 as an example for polarimetry. S1 sends radio waves in the vertical polarization, abbreviated V, and records them in both vertical and horizontal, or H, polarizations. In practice, this means that when you download an S1 frame in the usual way, you see two images, labeled VV (where the sensor transmitted V and measured V) and VH (where it transmitted V and measured H). The ratio of the two bands therefore tells you (in a general, statistical way, within the constraints of speckling) how much the surface at a given pixel tends to return a radio signal at that frequency and angle in the same polarization.&lt;br /&gt;
&lt;br /&gt;
Why do we care? Because direct reflection and corner reflectors tend to return waves at the same polarization (for Sentinel-1, always VV), while volumes that scatter waves return proportionally more cross-polarized (VH) waves. The second category is mainly vegetation and soil, while the first is corner reflectors, metal, and so on – proportionally more artificial surfaces. You can literally get a PhD in the nuances of SAR polarimetry, but at the most basic level, it tells you something about surface properties that no other sensor would.&lt;br /&gt;
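&lt;br /&gt;
The co/cross ratio is usually inspected in decibels. A sketch with invented linear-power values for two pixel types:&lt;br /&gt;
```python
import numpy as np

# Invented linear-power backscatter: pixel 0 is a built-up corner reflector
# (strong co-polarized return), pixel 1 is forest canopy (volume scattering
# sends back relatively more cross-polarized energy).
vv = np.array([0.50, 0.10])
vh = np.array([0.02, 0.05])

ratio_db = 10.0 * np.log10(vv / vh)  # co/cross ratio in dB
print(ratio_db)  # the urban pixel's ratio is several times larger
```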
&lt;br /&gt;
Interferometry with SAR, or inSAR, compares wave phase between observations. The phase of a wave is where it is in its cycle when received. Using sound as an example, measuring the phase of a sound at a given moment means recording not just its volume and pitch but also where the wave is in its cycle – say, 23% of the way into its high-pressure half, or exactly at the lowest-pressure point.&lt;br /&gt;
&lt;br /&gt;
Suppose we make a SAR image of an area and record not only the amplitude but also the phase of the signal at every pixel. Now, after some time, the satellite’s orbit repeats, and at exactly the same moment in this new orbit (and therefore at exactly the same point in space relative to Earth), we take the same image again. There’s been some change over time that might represent, say, the soil drying out, a road being built, or a tree falling over. But the change in phase over relatively large, coherent regions can be interpreted as the surface getting nearer or farther away by (potentially) very small fractions of a wavelength – on the order of cm. This is an idealized version of inSAR.&lt;br /&gt;
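&lt;br /&gt;
The phase-to-displacement arithmetic is simple: a line-of-sight displacement d changes the two-way path by 2d, so the phase changes by 4πd/λ. A sketch using the approximate Sentinel-1 C-band wavelength and an invented phase change:&lt;br /&gt;
```python
import numpy as np

wavelength = 0.0555      # Sentinel-1 C-band wavelength in meters (approx.)
delta_phase = np.pi / 2  # invented measured phase change, in radians

# Two-way travel: delta_phase = 4 * pi * d / wavelength, solved for d.
displacement = delta_phase * wavelength / (4.0 * np.pi)
print(displacement * 1000.0)  # about 6.9 mm along the line of sight
```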
&lt;br /&gt;
Geologists use this to [https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021GL093043 map earthquakes], but you can also use it for drought (because dry land sags), [https://site.tre-altamira.com/long-term-satellite-study-over-the-london-basin/ tunneling], [https://www.researchgate.net/figure/InSAR-measured-subsidence-rates-on-the-Mosul-dam-Iraq-Negative-values-indicate-motion_fig1_311451580 dam] and [https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-1/Satellites_confirm_sinking_of_San_Francisco_tower building] subsidence, [https://www.nature.com/articles/s41598-020-74957-2 underground explosion monitoring], and so on – in theory, anything that changes the distance between the satellite and the surface. You can even use decoherence (the breakdown of continuity between observations, which makes inSAR hard) for [https://www.academia.edu/44939771/Damage_detection_using_SAR_coherence_statistical_analysis_application_to_Beirut_Lebanon damage detection].&lt;br /&gt;
&lt;br /&gt;
When inSAR works, it’s like magic. You can pick up extremely subtle effects over large areas. It does have limits, like that you can only measure displacement towards or away from the satellite(s), which for SAR is always at least somewhat to the side, which is not necessarily in the direction you actually care about (say, up/down). And as you would expect, it tends to require a lot of very good data (because, for example, satellite orbits are never absolutely perfect repeats), expertise, and minutes to days of fine-tuning.&lt;br /&gt;
&lt;br /&gt;
=== LIDAR ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# 3D (survey-style) lidar&lt;br /&gt;
# 2D (transect-style) lidar&lt;br /&gt;
&lt;br /&gt;
== Image delivery ==&lt;br /&gt;
&lt;br /&gt;
=== Processing levels ===&lt;br /&gt;
&lt;br /&gt;
In theory, data processing levels are standard across the industry. In practice, different providers tend to make up their own definitions as necessary, and you should refer to source-specific documentation. But typically, the commonly seen processing levels are:&lt;br /&gt;
&lt;br /&gt;
'''Level 0''': Unprocessed data, more or less as downlinked to the ground station. Generally not sold or publicly released.&lt;br /&gt;
&lt;br /&gt;
'''Level 1''': Basic data in sensor units, for example TOA radiance. Often has a letter suffix with source-specific meaning, e.g., T to indicate a terrain-corrected version.&lt;br /&gt;
&lt;br /&gt;
'''Level 2''': Derived data in geophysical units, for example surface reflectance. Has been through high-level processing (e.g., atmospheric correction) that contains estimation or modeling.&lt;br /&gt;
&lt;br /&gt;
As a rule of thumb, use level 1 if the imagery itself is the focus and you want to analyze the data in a custom way; use level 2 if you just want something that works out of the box to achieve some further goal. But again, the practical meaning of the levels depends on the dataset, so check to make sure you’re getting what you want.&lt;br /&gt;
&lt;br /&gt;
=== Formats and projections ===&lt;br /&gt;
&lt;br /&gt;
Image data generally comes in formats optimized for large payloads and good metadata. These include NetCDF, NITF, JPEG2000, and GeoTIFF. GDAL, which is included in QGIS, can read virtually any reasonable format. If you have a choice, GeoTIFF is usually the best.&lt;br /&gt;
&lt;br /&gt;
A geographic projection is basically an invertible function (a reversible, one-to-one relationship) from a sphere to a plane. (If you know enough about geodesy to be saying “the spheroid, actually” right now, go read something more appropriate to your level of expertise 😉.) In other words, for a longitude and a latitude on Earth, a given projection gives you a corresponding x and y that you use to store and display the data.&lt;br /&gt;
&lt;br /&gt;
Typical georeferencing metadata says: (1) here is the projection of the data in this file, and (2) here is where the data in this file lies on the abstract 2D plane defined by that projection.&lt;br /&gt;
&lt;br /&gt;
You may also encounter data that is not projected in any strictly defined way. This might be as simple as a photo taken with a phone out a plane window. In theory you could define a projection for it if you knew parameters like the 3D GPS location of the phone, the angle it was pointed at, its camera’s field of view, and the small distortions introduced by its lens. But in practice it’s usually easier to find known points in the image and “tie down” or georeference the image based on those points. Given at least 3 but ideally more known points, you(r software) can warp the image into some standard projection. It’s deriving an arbitrary projection from pixel space to geographical coordinates by running a regression on the pixel-to-location pairs you provide. These known points are called ground control points, or GCPs. Some data, like Sentinel-1 SAR, is provided unprojected but with GCPs. This leaves more work for the user, but also more flexibility if you want to adjust the GCPs.&lt;br /&gt;
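&lt;br /&gt;
When the image is close to flat and undistorted, that regression can be as simple as a least-squares affine fit. A sketch with invented GCPs (real tools like GDAL support higher-order and more robust fits):&lt;br /&gt;
```python
import numpy as np

# Invented ground control points: (col, row) pixel coordinates paired with
# the (lon, lat) each is known to sit at. The underlying mapping is affine.
pixels = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 25]], float)
coords = np.array([[10.0, 45.0], [10.1, 45.0], [10.0, 44.9],
                   [10.1, 44.9], [10.05, 44.975]])

# Solve [lon lat] = [col row 1] @ A by least squares.
design = np.column_stack([pixels, np.ones(len(pixels))])
A, *_ = np.linalg.lstsq(design, coords, rcond=None)

# Georeference an arbitrary new pixel.
lonlat = np.array([25.0, 50.0, 1.0]) @ A
print(lonlat)  # approximately [10.025, 44.95]
```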
&lt;br /&gt;
There are several standard ways to represent projections, notably WKT, proj, and EPSG codes. We’ll give EPSG codes here.&lt;br /&gt;
&lt;br /&gt;
Probably the most common projection you will see for raw data is Universal Transverse Mercator, or UTM. It’s actually a family of projections with the same formula but different parameters, each adapted to a different meridional slice of Earth’s surface. These UTM zones are named with numbers and north/south hemispheres: Paris is in UTM zone 31N, Geneva is in 32N, and Sydney is in 56S. (If you’ve used the MGRS grid system, this should sound familiar, but it’s not identical.) Within a zone, UTM is very close to equal-area and conformal, which are the most important properties for a projection if you want to do analytical work. Equal-area means 1 km² is the same number of pixels at any point in the projection, and conformal means that 1 km is the same number of pixels in every direction from any given point within the projection. (On a non-conformal map, circles appear as ovals, squares are rectangles, etc. This is a massive pain in the ass.) UTM is EPSG:32XYY, where X is 6 for N and 7 for S, and YY is the zone number, so for example 13S is EPSG:32713.&lt;br /&gt;
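&lt;br /&gt;
The zone-to-EPSG rule is mechanical enough to write down (a sketch; the function name is ours):&lt;br /&gt;
```python
def utm_epsg(zone, hemisphere):
    """EPSG code for a UTM zone: 32600 + zone in the north,
    32700 + zone in the south."""
    base = 32600 if hemisphere.upper() == "N" else 32700
    return base + zone

print(utm_epsg(31, "N"))  # 32631 (Paris)
print(utm_epsg(56, "S"))  # 32756 (Sydney)
print(utm_epsg(13, "S"))  # 32713, matching the example above
```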
&lt;br /&gt;
For display on standard web maps, people often use web Mercator, a.k.a. spherical Mercator, which is not equal-area at the global scale, but is conformal. This is why web maps make Greenland far too big, but it remains approximately the right shape. For local analysis, web Mercator is fine (locally it behaves much like UTM), and it can be a decent choice even across large areas if you understand the issues with scale. EPSG:3857.&lt;br /&gt;
&lt;br /&gt;
The other projection you’ll see the most is equirectangular or plate carrée, which uses longitude and latitude directly as x and y coordinates on a plane. It is neither equal-area nor conformal, and basically only exists because the math is easy. It’s often used by people who should know better. Its non-conformality means that any time you’re working near the poles, everything is squeezed, and you’re either overzooming one dimension, losing data in the other, or both. If you just want to scatterplot some points as quickly as possible, equirectangular is fine, but avoid it when doing anything with imagery. EPSG:4326. (Note that this is the EPSG of WGS84, the geodetic standard that defines things like the prime meridian. Many, many other projections refer to WGS84 in their definitions. But using WGS84 as a projection itself, instead of as an ingredient in a projection, is the equirectangular projection.)&lt;br /&gt;
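To make the distortion concrete, here is a sketch of the forward web Mercator formula and its local scale factor (spherical model with the EPSG:3857 radius; pure Python):

```python
import math

R = 6378137.0  # sphere radius used by web Mercator (EPSG:3857)

def web_mercator(lon_deg, lat_deg):
    """Forward web (spherical) Mercator: degrees to meters."""
    x = math.radians(lon_deg) * R
    y = math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2)) * R
    return x, y

# The local scale factor grows as 1 / cos(latitude); areas are inflated
# by its square, which is why Greenland balloons at the global scale.
for lat in (0, 45, 70):
    k = 1 / math.cos(math.radians(lat))
    print(f"lat {lat:2d}: lengths x{k:.2f}, areas x{k * k:.2f}")
```

Equirectangular is even simpler (x = lon, y = lat, scaled), which is exactly why its east-west squeeze away from the equator goes uncorrected.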
&lt;br /&gt;
The details of projections are notoriously tricky; it’s hard to work with them in a strictly correct and optimal way at all times. It’s the kind of topic that attracts pedantry and flamewars, unfortunately. Here’s some advice, none of it ironclad:&lt;br /&gt;
&lt;br /&gt;
# Most imagery data, if it’s projected at all, is already in an appropriate projection as it arrives from the data provider. If you can, leave it as-is. Every reprojection involves resampling the data, which generally loses information.&lt;br /&gt;
&lt;br /&gt;
# You should rarely have to explicitly think about projections. The whole point of a projection is to let you think in terms of pixels and/or meters, and if that’s not happening, something is wrong. Make sure you’re taking full advantage of your tools’ ability to handle these things automatically.&lt;br /&gt;
&lt;br /&gt;
# If you’re working on a multi-source project, choose a suitable projection at the start and project all data into it ''once'', when you import it.&lt;br /&gt;
&lt;br /&gt;
# Most pain around projections comes from accidentally mixing projections. Don’t do that.&lt;br /&gt;
&lt;br /&gt;
# The local UTM is usually a reasonable choice.&lt;br /&gt;
&lt;br /&gt;
=== Bundles ===&lt;br /&gt;
&lt;br /&gt;
Imagery is most often supplied in bundles, which are directories with image data files, usually separated by band or polarization (at least at level 1), and text (XML, json, etc.) metadata files. Some analysis tools will have plugins that will open specific types of bundles as single objects, automatically applying calibration data found in the metadata and so forth. In other situations you might open the image file and have to parse the metadata with your own code or by hand. If you’re getting to know a new imagery source, going through and understanding the purpose of everything delivered in a bundle is a great way to start.&lt;br /&gt;
&lt;br /&gt;
=== DN and PN ===&lt;br /&gt;
&lt;br /&gt;
Image formats generally store integers, since they losslessly compress better and are often easier to work with than floating point numbers. However, this presents a problem if, for example, the units being represented are reflectance, which ranges from 0 to 1. If we round every reflectance value to either 0 or 1, we’re delivering 1-bit data that’s probably close to totally useless. To address this, we might scale up to, say, 0 through 100 and say that instead of recording reflectance fraction, we’re recording reflectance percentage – fraction × 100. That still leaves us with less than 7 bits of radiometric resolution, though. Really, we’d like to be able to scale our values into an arbitrary range, maybe 0 through 65,535 to make full use of a 16-bit image, and send it with some metadata that tells how to get it back into some absolute or physically meaningful unit. You could even change the scaling factor per scene to optimize for bright v. dark, for example. And this is what providers generally do. The values actually stored in the image format are called digital numbers, or DN, and the values after scaling (typically with a multiplicative and an additive coefficient) are physical numbers, or PN.&lt;br /&gt;
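The DN-to-PN step is just an affine rescale. A sketch with made-up coefficients (real gain/offset values come from each product’s metadata):

```python
def dn_to_pn(dn, gain, offset):
    """Scale stored digital numbers (DN) to physical numbers (PN).

    gain and offset come from the product's metadata; these are made up.
    """
    return [gain * v + offset for v in dn]

# Hypothetical 16-bit DNs mapped onto a 0..1 reflectance range
print(dn_to_pn([0, 32768, 65535], gain=1 / 65535, offset=0.0))
```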
&lt;br /&gt;
Not all providers do this. For example, Sentinel-2 level 1C data has a globally constant scaling factor, which means different bands have a defined relationship even if you read raw, unscaled pixels out of them, which is great. Per-scene scaling is the more common approach, however. Basically, don’t assume that pixels actually mean anything with an absolute definition, especially compared to pixels from another band or scene, unless you know that they’re PN.&lt;br /&gt;
&lt;br /&gt;
For most OSINT-relevant analysis, working in DN is a [https://en.wikipedia.org/wiki/Venial_sin venial sin] at worst and often justifiable. But it is useful to know what it means and to recognize situations where you should convert to PN. Any tool designed to work with remote sensing data will at least have some affordance for DN to PN scaling, and, again, may be able to parse the parameters out of a bundle (or in-image-file metadata) and apply them transparently so you never have to think about it.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2202</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2202"/>
		<updated>2022-07-20T21:58:56Z</updated>

		<summary type="html">&lt;p&gt;Vruba: Adding SAR&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this is the heritage of Cold War IMINT workers wanting shadows to estimate structure heights. (If you image around noon, shadows in the tropics are nearly vertical. This gives you depth perception problems, like you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to complete one full orbit. It’s another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
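The half-phasing arithmetic can be sketched with dates (the 2022-07-01 reference pass is invented; only the 16-day cycle is Landsat’s):

```python
from datetime import date, timedelta

cycle = timedelta(days=16)           # Landsat 8/9 repeat interval
sat_a_pass = date(2022, 7, 1)        # invented reference pass date
sat_b_pass = sat_a_pass + cycle / 2  # identical orbit, half-phased

# Interleaved, the pair behaves like one satellite on an 8-day cycle.
passes = sorted(
    [sat_a_pass + i * cycle for i in range(3)]
    + [sat_b_pass + i * cycle for i in range(3)]
)
gaps = [(later - earlier).days for earlier, later in zip(passes, passes[1:])]
print(gaps)  # [8, 8, 8, 8, 8]
```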
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: about 400 km or 250 mi to either side of the ground track, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
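The “light trig” is a single tangent. A flat-Earth sketch (Earth’s curvature stretches the true ground distance slightly further):

```python
import math

altitude_km = 705        # Landsat 8/9
max_off_nadir_deg = 30

# Flat-Earth approximation; curvature stretches the real distance a bit.
half_width_km = altitude_km * math.tan(math.radians(max_off_nadir_deg))
print(round(half_width_km))  # 407 km to either side of the ground track
```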
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimension(s) we think of most often is spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor” we understand this by default to mean spatial resolution. This is also called ground sample distance (GSD) or ground resolved distance (GRD), and denotes the dimensions of the pixels of the data. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length dimension: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. If we think about standard optical instruments, which are basically telescopes with CCDs, they do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – a fraction of the arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth at a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) and looking on nadir (straight down). That’s a best case. When looking to the side, at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked consideration on spatial resolution is that pixel area is the square of pixel side length, and area is what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is more than 3× as clear as 30 m imagery, all else being equal.&lt;br /&gt;
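A one-liner makes the square-law point (assuming square pixels and all else equal):

```python
def relative_clarity(gsd_coarse_m, gsd_fine_m):
    """Pixels the finer sensor puts on the ground area covered by one
    pixel of the coarser sensor (square pixels assumed)."""
    return (gsd_coarse_m / gsd_fine_m) ** 2

print(relative_clarity(30, 10))   # 9.0 -- 10 m data is 9x, not 3x, as clear as 30 m
print(relative_clarity(1, 0.25))  # 16.0
```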
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered less by the atmosphere than visible light, especially blue light. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These will be called the panchromatic or pan and (collectively) multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, for example, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits; newer and better ones are typically 12 to 14.&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks on the Sahara, despite the tracks being made out of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford a lot of the highest-spatial-resolution imagery. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
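A quick simulation of the idealized case (independent, unit-variance Gaussian noise; correlated noise between bands would land above the 1/sqrt(n) line):

```python
import math
import random
import statistics

random.seed(42)

sigma = 1.0    # per-band noise standard deviation
n = 9          # number of bands averaged
trials = 20000

# With independent noise, the mean's standard deviation approaches
# sigma / sqrt(n); correlation between bands would make it worse.
means = [
    statistics.fmean(random.gauss(0.0, sigma) for _ in range(n))
    for _ in range(trials)
]
print(round(statistics.stdev(means), 3))  # close to 1/3
print(round(sigma / math.sqrt(n), 3))     # 0.333
```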
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding errors or aliasing: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of talking about effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is using air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh scattering from aerosols, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. Note that this range is 1.567 to 1.651 μm. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
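As a concrete instance of band math, here is NDVI computed pixel by pixel (the reflectance values are invented for illustration; real inputs should be PN, not raw DN):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index, pixel by pixel.

    Inputs should be physically scaled reflectances (PN), not raw DN.
    """
    return [
        (n - r) / (n + r) if (n + r) else 0.0
        for n, r in zip(nir, red)
    ]

# Invented reflectances: healthy plant, sparse cover, open water
nir = [0.45, 0.30, 0.08]
red = [0.05, 0.15, 0.07]
print([round(v, 2) for v in ndvi(nir, red)])  # [0.8, 0.33, 0.07]
```

High values flag healthy vegetation; values near zero or below flag bare ground, water, and clouds.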
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue bands of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, and therefore you should favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
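&lt;br /&gt;
Mechanically, a band combination is nothing more than stacking three chosen bands into the red, green, and blue channels of an output image. A minimal sketch, with random arrays standing in for bands read from a real scene (e.g. with a library like rasterio):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Random stand-ins for three single-band arrays from one scene.
rng = np.random.default_rng(0)
band5, band4, band3 = rng.random((3, 4, 4))

def combo(r, g, b):
    """Stack three bands into an RGB cube: first argument in the red channel,
    then green, then blue, with a simple per-channel stretch to 0-1."""
    rgb = np.dstack([r, g, b])
    lo = rgb.min(axis=(0, 1), keepdims=True)
    hi = rgb.max(axis=(0, 1), keepdims=True)
    return (rgb - lo) / (hi - lo)

# "543" for Landsat 8/9: band 5 (NIR) in red, band 4 in green, band 3 in blue.
img = combo(band5, band4, band3)
```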
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will typically be 2 or 4 times larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
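&lt;br /&gt;
As an illustration of the basic end of that range, a crude ratio-based (Brovey-style) pansharpen can be sketched in a few lines – a simplification on synthetic data, not how any production pipeline works:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def brovey_pansharpen(rgb, pan):
    """Crude ratio pansharpening: scale each color channel so that the
    per-pixel channel mean matches the (already upsampled) pan band."""
    mean = rgb.mean(axis=2, keepdims=True)
    return rgb * pan[..., np.newaxis] / np.maximum(mean, 1e-6)

# Hypothetical data: a 4x4 color image block-upsampled from 2x2 (the "blurry
# color layer"), plus a sharp 4x4 pan band.
rgb = np.repeat(np.repeat(np.random.default_rng(1).random((2, 2, 3)), 2, axis=0), 2, axis=1)
pan = np.random.default_rng(2).random((4, 4))
sharp = brovey_pansharpen(rgb, pan)
```
After this step, the per-pixel brightness comes entirely from the pan band, while the color ratios come from the multispectral data – which is exactly the sense in which the color has been “overzoomed.”&lt;br /&gt;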
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;br /&gt;
&lt;br /&gt;
==== Georeferencing and orthorectification ====&lt;br /&gt;
&lt;br /&gt;
''Much of this applies outside optical as well – move?''&lt;br /&gt;
&lt;br /&gt;
A raw satellite image of land is an angled view of a rough surface. (Even nominally nadir-pointing satellites acquire imagery that is off-nadir toward its edges.) If you imagine riding on a satellite and looking off to, say, the west, you will see the eastern sides of hills and buildings at flatter angles than you see the western sides – if you can see them at all. To turn a raw image into something that is projected orthographically, like a map, you have to use a terrain model – a 3D map of the planet’s surface. Then you can use information about where the satellite was and the angle its sensor was pointing, and for each pixel in the output image, you can project it out to see at what latitude and longitude it must have intersected the ground. Then you move all the pixels to their coordinates in some convenient projection, and you’ve essentially taken the image out of perspective and made it orthographic.&lt;br /&gt;
&lt;br /&gt;
Except:&lt;br /&gt;
&lt;br /&gt;
* Earth’s surface is rough at every scale, and even “porous” or multiply defined in the sense that there are features like leafless trees that make it hard to define where the optical surface actually ''is'' at any given scale.&lt;br /&gt;
* There is no perfectly [https://en.wikipedia.org/wiki/Accuracy_and_precision accurate, precise], global, completely up-to-date terrain model of the Earth, let alone at a reasonable price. SRTM is pretty good but it’s only about 30 m, stops short of the Arctic, and is 20+ years out of date: there are entire lakes, highway cuts, and reclaimed islands that don’t exist in it.&lt;br /&gt;
* Satellites typically only know where they’re pointing to within the equivalent of about 10 pixels (which, to be fair, is usually an extremely small fraction of a degree), so the pointing data can only narrow things down, not actually tell you where you are.&lt;br /&gt;
* Continental drift means that a continent can move by easily 1 px over the lifetime of a high-end commercial satellite; a major earthquake can discontinuously distort a small region by several m.&lt;br /&gt;
* To properly pin down an image (i.e., to check the reported pointing angle), you need to know the exact 3D location of 3 visible points within it, and realistically more like 10.&lt;br /&gt;
* All these errors can combine.&lt;br /&gt;
* No matter what, you can’t recover occluded features, i.e. things you can’t see in the original data. If you want a high-res satellite image of something like a canyon, you realistically need half a dozen images at very specific angles, which is extremely hard.&lt;br /&gt;
&lt;br /&gt;
We could go on! Georeferencing and orthorectification are difficult. The problem is easier for lower-resolution satellites, because a given angular error comes out to fewer pixels. Also, survey-mode satellites like Landsat and Sentinel-2, which are nadir-pointing anyway, put a lot of effort into doing this well. Two Landsat scenes will almost always coregister to well within a pixel. Sentinel-2 is a little less reliable, especially toward the poles. Commercial imagery is often displaced by far more than you would think. One way to see this is to step back in Google Earth Pro’s history tool, especially somewhere relatively remote and rugged.&lt;br /&gt;
&lt;br /&gt;
Here’s a farm in Nepal: 28.553, 84.2415. Just step back in time and watch it jump around underneath the pin. If you really want to be scared, watch the cliff to its north. This is why imagery analysts who understand imagery pipelines rarely use a whole lot of significant digits in their coordinates! You don’t really know where anything on Earth is, in absolute terms, to within more than a few meters at best if all you have to go on is a satellite image.&lt;br /&gt;
&lt;br /&gt;
==== Atmospheric correction ====&lt;br /&gt;
&lt;br /&gt;
Over long distances, even in clear weather, the atmosphere scatters and absorbs light. This is why distant hills are low-contrast and blueish (blue light is scattered more). What a satellite actually measures is called top-of-atmosphere radiance, or TOA. This is a measurement of nothing more than the amount of energy received per second, per pixel, per band. It can be measured pretty objectively. However, it’s often not what you want. For one thing, it’s too blue. For another, the amount of blueness and related effects will vary semi-randomly with atmospheric conditions (humidity, maybe dust storms or wildfire smoke, etc.) and predictably with season.&lt;br /&gt;
&lt;br /&gt;
Therefore, a reasonable desire is to basically normalize the sun and remove the effects of the atmosphere. What we’re trying to model here is called surface reflectance (SR). The main issue is that we don’t know the true state of the atmosphere at the moment the image was acquired. The best we can do is to model it and subtract it out. This is one of ''the'' problems in remote sensing, and you could earn a PhD by improving [https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes#Table_of_models one of the major models] by a few percent.&lt;br /&gt;
&lt;br /&gt;
The good news is there’s a brutally simple method that works pretty well most of the time. Dark object subtraction means assuming that the darkest pixel in the image should be pure black. Therefore, if you subtract out however much blue (and green, and so on) signal is present in the darkest pixel, you will have canceled out all the haze. It’s annoying how well this works considering how basic it is. It’s roughly equivalent to the auto-adjust tool in an image editor like Photoshop, or, to be a little more exact, a little like using the eyedropper in the Levels tool to set the black point to the darkest pixel.&lt;br /&gt;
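&lt;br /&gt;
Dark object subtraction really is as simple as it sounds. A minimal sketch on synthetic data (a made-up scene plus a uniform haze offset):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def dark_object_subtract(band):
    """Subtract the darkest pixel's value from the whole band, clamped at 0."""
    return np.clip(band - band.min(), 0.0, None)

# Hypothetical hazy band: a true signal plus a uniform haze offset of 0.08.
rng = np.random.default_rng(3)
true = rng.random((4, 4)) * 0.5
true.flat[0] = 0.0            # assume at least one truly black pixel exists
hazy = true + 0.08
corrected = dark_object_subtract(hazy)
```
In practice you would run this per band, since haze affects blue far more than the longer wavelengths.&lt;br /&gt;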
&lt;br /&gt;
Correction to reflectance may or may not attempt to correct for terrain effects (i.e., relighting the scene). Different pipelines have different conventions for how far to correct or what to call different kinds of correction.&lt;br /&gt;
&lt;br /&gt;
Atmospheric correction is usually not key for OSINT purposes, but any time you find yourself taking exact measurements of pixel values, you should at least know whether you’re working in TOA or in SR, and if SR, you should have a sense of what the pipeline was.&lt;br /&gt;
&lt;br /&gt;
==== Common optical sensor types ====&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# Pushbroom&lt;br /&gt;
# Whiskbroom&lt;br /&gt;
# Full-frame&lt;br /&gt;
&lt;br /&gt;
=== Thermal ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
=== Synthetic aperture radar ===&lt;br /&gt;
&lt;br /&gt;
Synthetic aperture radar, or SAR, creates images with radio waves in wavelengths around 1 cm to 1 m.&lt;br /&gt;
&lt;br /&gt;
As a very first approximation, a SAR image is comparable to an optical image that shows objects that reflect radio waves instead of those that reflect visible light.&lt;br /&gt;
&lt;br /&gt;
==== SAR is not just a regular camera but for radar instead of light ====&lt;br /&gt;
&lt;br /&gt;
Beyond the fact that both modalities create images, SAR works on completely different principles from standard optical imaging, and understanding it requires understanding those principles.&lt;br /&gt;
&lt;br /&gt;
This page will only lightly outline how SAR works; for the math, please refer to SERVIR’s [https://servirglobal.net/Global/Articles/Article/2674/sar-handbook-comprehensive-methodologies-for-forest-monitoring-and-biomass-estimation SAR Handbook] (forest-oriented but with solid fundamentals), the NOAA/NESDIS [https://www.sarusersmanual.com/ Synthetic Aperture Radar Marine User’s Manual], or another good text. Here we will only point out some key ideas. If you want to get full value out of SAR, you should expect to invest at least a few hours in learning how it actually works. There’s a reason SAR experts tend to be a bit snobbish about it: it’s complex, subtle, and highly rewarding.&lt;br /&gt;
&lt;br /&gt;
==== SAR is active ====&lt;br /&gt;
&lt;br /&gt;
Optical sensors are almost all passive: they use energy that objects are already reflecting (usually from the sun) or producing (for example, in the thermal infrared). In contrast, SAR is active: it sends out a pulse of radar energy, roughly analogous to the flash on a camera.&lt;br /&gt;
&lt;br /&gt;
==== SAR resolves space with time, not with focus ====&lt;br /&gt;
&lt;br /&gt;
SAR’s resolution is based on the timing of returning signals. It does not pass the energy it senses through a focusing lens or mirror the way an optical sensor does. This leads to properties that are highly unintuitive if you think of it as merely “optical but at a different frequency” – for example, it does not lose resolution with distance; there is no exact equivalent of perspective.&lt;br /&gt;
&lt;br /&gt;
==== Speckle ====&lt;br /&gt;
&lt;br /&gt;
Like a laser beam, a SAR signal interferes with itself. At a given moment and a given point, its waves may be canceling out or adding up. This means that a SAR image is intrinsically grainy or stippled-looking. This is not the same as sensor noise, because the effect is physically real and not a problem of errors in measurement. It can be mitigated by downsampling, averaging images from different “pings”, or applying despeckling filters. (A simple local median works reasonably well, but there’s a range of sophistication all the way up to sensor-specific filters based on physical models, extra inputs, fancy machine learning, etc.)&lt;br /&gt;
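&lt;br /&gt;
As a sketch of the simple end of that range, here is a naive local-median despeckle applied to synthetic multiplicative speckle (exponentially distributed intensity over a uniform scene – a common first-order speckle model):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def median_despeckle(img, k=3):
    """Naive k x k local median filter; edges handled by replicate padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return np.median(windows, axis=(-2, -1))

# Hypothetical uniform scene with multiplicative, unit-mean speckle.
rng = np.random.default_rng(4)
scene = np.ones((8, 8))
speckled = scene * rng.exponential(1.0, scene.shape)
smoothed = median_despeckle(speckled)
```
The filtered image is much less grainy, at the cost of smearing out small real features – the usual despeckling tradeoff.&lt;br /&gt;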
&lt;br /&gt;
==== Retroreflection and multiple reflection ====&lt;br /&gt;
&lt;br /&gt;
One consequence of SAR being active sensing is that it sees very bright returns from concave right angles made out of metal, which act as [https://en.wikipedia.org/wiki/Corner_reflector corner reflectors]. (Notice how road signs and markers seem to glow disproportionately in headlights – it’s because those are [https://en.wikipedia.org/wiki/Retroreflector retroreflectors] in the optical range.) Highly developed cities, for example, are very retroreflective to radar. This shows up especially where the angle of the sensor’s view aligns to a street grid, when it’s called the cardinal effect. (See, for example, [https://www.mdpi.com/2072-4292/12/7/1187/htm this academic paper], where they propose using retroreflection specifically to classify urban landcover. In general, there are very few radio-frequency corner reflectors in nature, and retroreflection is a good sign that you’re looking at a building, vehicle, etc.)&lt;br /&gt;
&lt;br /&gt;
Where the reflection is separated enough from the first reflecting surface that you can see both independently, we use the term multiple reflection (or mirroring or ghosting). This most often happens where tall buildings or bridges are next to or over water. A radio wave may hit the water, then a bridge, then return to the sensor; another may hit a bridge, then the water, and return to the sensor, and so on, and you’ll see images of multiple bridges.&lt;br /&gt;
&lt;br /&gt;
==== Layover and shadowing ====&lt;br /&gt;
&lt;br /&gt;
Layover (a.k.a. relief displacement) is an effect that makes objects at higher elevations appear closer to the sensor. This happens because the radio waves from the top of a vertical object arrive back at the sensor (which is above and to the side of the object) before the radio waves from its base. This is most obvious with truly vertical objects like radio towers and skyscrapers, but surfaces that have any vertical component (hills, for example) will show some degree of layover. Ultimately, layover comes from the difference between slant range, which is what the sensor actually measures – distance from the sensor – and ground range, which is what we tend to intuitively want or expect when we look at a map-like image.&lt;br /&gt;
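&lt;br /&gt;
Under a flat-earth approximation, the size of the effect is easy to estimate: a point raised by some height shifts toward the radar by roughly height / tan(incidence angle), whereas in an oblique optical image the same point shifts away from the sensor’s ground point by roughly height × tan(view angle). A sketch with hypothetical numbers:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def radar_layover_shift(height_m, incidence_deg):
    """Apparent ground-range shift of a raised point, TOWARD a side-looking
    radar, under a flat-earth approximation: height / tan(incidence)."""
    return height_m / math.tan(math.radians(incidence_deg))

def optical_relief_shift(height_m, view_deg):
    """Apparent shift of the same point in an oblique optical image,
    AWAY from the sensor's ground point: height * tan(view angle)."""
    return height_m * math.tan(math.radians(view_deg))

# Hypothetical 100 m tower seen at a 35 degree angle.
print(radar_layover_shift(100, 35))   # shifts toward the sensor
print(optical_relief_shift(100, 35))  # shifts away from it
```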
&lt;br /&gt;
The painfully counterintuitive aspect, if you’re looking at a SAR image as if it were an ordinary optical image, is that layover goes in the opposite direction – buildings, for example, lean toward the sensor. For example, if you take a normal photo of a tall building from the south, it will cover the ground to its north. This feels normal because cameras, telescopes, etc., work on the same basic principle as the eye. But if you collect a SAR image of the same building from the south, it will cover the ground to its south. (Also, it won’t actually mask that ground, it will just add its signal in.)&lt;br /&gt;
&lt;br /&gt;
Shadowing is the lack of data returned from surfaces facing away from the sensor. The shadowed side of terrain is stretched out as part of layover.&lt;br /&gt;
&lt;br /&gt;
SAR imagery can be terrain corrected. Basically, this is a process that uses (1) the satellite’s position and the characteristics of its instrument and (2) a DEM or other model of the terrain it was looking at, and uses these to warp the SAR imagery into map coordinates and account for shadowing. Whether this is worthwhile will depend on the quality of the terrain correction algorithm and the data you can give it, and on what you need to analyze.&lt;br /&gt;
&lt;br /&gt;
In general, be cautious with terrain correction, because it can never fully correct for all effects (e.g., BRDF of different landcovers), and it can magnify small problems in input data. Sometimes it’s better to have a strange-looking image that you know how to interpret than a “normalized” one with subtle errors.&lt;br /&gt;
&lt;br /&gt;
==== Clouds and many other materials are generally transparent to SAR ====&lt;br /&gt;
&lt;br /&gt;
SAR frequencies are typically chosen to cut through weather. While this is a massive advantage of SAR over optical (the average place on Earth is cloudy roughly half the time), it’s also not absolute. Heavy rain, for example, can show up as ghostly features in some bands, so be on the lookout for it. If you see something you can’t interpret that might be weather-related, check the weather for the place at the time of image acquisition!&lt;br /&gt;
&lt;br /&gt;
More generally – beyond the specific case of water vapor in air – SAR interacts with materials differently than light does. For example, it reflects more off liquid water, so you can’t see into shallows with SAR the way you can with optical. On the other hand, it interacts less with certain very dry materials, so it can cut through loose sand, dead vegetation, and so on. (For example, SAR is used to map ancient river systems under the Sahara [https://www.mdpi.com/2073-4441/9/3/194/htm because it can image bedrock under loose, dry sand].) The details of SAR signal interaction depend on wavelength, angle, and other factors; if you’re doing more than casual interpretation of data from a given sensor, it’s a good idea to look it up and familiarize yourself.&lt;br /&gt;
&lt;br /&gt;
==== Polarimetry and interferometry ====&lt;br /&gt;
&lt;br /&gt;
Thus far we have only considered backscatter images: maps of the intensity of reflected radio energy. But a good deal of SAR’s value is beyond this kind of data. As well as recording how much energy is in reflected radio waves, SAR sensors characterize the radio waves themselves.&lt;br /&gt;
&lt;br /&gt;
Let’s use Sentinel-1 as an example for polarimetry. S1 sends radio waves in the vertical polarization, abbreviated V, and records them in both vertical and horizontal, or H, polarizations. In practice, this means that when you download an S1 frame in the usual way, you see two images, labeled VV (where the sensor transmitted V and measured V) and VH (where it transmitted V and measured H). The ratio of the two bands therefore tells you (in a general, statistical way, within the constraints of speckling) how much the surface at a given pixel tends to return a radio signal at that frequency and angle in the same polarization.&lt;br /&gt;
&lt;br /&gt;
Why do we care? Because direct reflection and corner reflectors tend to return waves at the same polarization (for Sentinel-1, always VV), while volumes that scatter waves return proportionally more cross-polarized (VH) waves. The second category is mainly vegetation and soil, while the first is corner reflectors, metal, and so on – proportionally more artificial surfaces. You can literally get a PhD in the nuances of SAR polarimetry, but at the most basic level, it tells you something about surface properties that no other sensor would.&lt;br /&gt;
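&lt;br /&gt;
The VV/VH comparison itself is trivial to compute once you have calibrated backscatter; the interpretation is where the subtlety lives. A sketch with made-up linear-power values:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Hypothetical Sentinel-1-style backscatter in linear power units.
rng = np.random.default_rng(5)
vv = rng.random((4, 4)) + 0.5
vh = rng.random((4, 4)) * 0.2 + 0.05

# Co- to cross-pol ratio in dB. Strongly VV-dominated pixels suggest direct
# or double-bounce returns (smooth or built surfaces); relatively strong VH
# suggests volume scattering (vegetation, rough soil).
ratio_db = 10.0 * np.log10(vv / vh)
```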
&lt;br /&gt;
Interferometry with SAR, or inSAR, compares wave phase between observations. The phase of a wave is where it is in its cycle when received. Using sound as an example, measuring the phase of a sound at a given moment means not just its volume and pitch but that the sound wave is, say, 23% of the way into its high pressure half, or exactly at the lowest-pressure point.&lt;br /&gt;
&lt;br /&gt;
Suppose we make a SAR image of an area and record not only the amplitude but also the phase of the signal at every pixel. Now, after some time, the satellite’s orbit repeats, and at exactly the same moment in this new orbit (and therefore at exactly the same point in space relative to Earth), we take the same image again. There’s been some change over time that might represent, say, the soil drying out, a road being built, or a tree falling over. But the change in phase over relatively large, coherent regions can be interpreted as the surface getting nearer or farther away by (potentially) very small fractions of a wavelength – on the order of cm. This is an idealized version of inSAR.&lt;br /&gt;
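&lt;br /&gt;
The core arithmetic of the idealized case is small: a phase change Δφ at wavelength λ corresponds to a line-of-sight displacement of Δφ·λ/4π, the 4π reflecting the two-way travel path. (Real processing must first unwrap phase, which is only measured modulo 2π.) A sketch with hypothetical Sentinel-1-like numbers:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def los_displacement_m(delta_phase_rad, wavelength_m):
    """Line-of-sight displacement implied by an interferometric phase change.
    The factor of 4*pi reflects the two-way (out-and-back) travel path; this
    idealized formula assumes the phase has already been unwrapped."""
    return delta_phase_rad * wavelength_m / (4.0 * math.pi)

# Hypothetical: C-band (~5.55 cm wavelength), a quarter-cycle phase change
# between acquisitions -- a displacement of well under a centimeter.
print(los_displacement_m(math.pi / 2, 0.0555))
```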
&lt;br /&gt;
Geologists use this to [https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021GL093043 map earthquakes], but you can also use it for drought (because dry land sags), [https://site.tre-altamira.com/long-term-satellite-study-over-the-london-basin/ tunneling], [https://www.researchgate.net/figure/InSAR-measured-subsidence-rates-on-the-Mosul-dam-Iraq-Negative-values-indicate-motion_fig1_311451580 dam] and [https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-1/Satellites_confirm_sinking_of_San_Francisco_tower building] subsidence, [https://www.nature.com/articles/s41598-020-74957-2 underground explosion monitoring], and so on – in theory, anything that changes the distance between the satellite and the surface. You can even use decoherence (the breakdown of continuity between observations, which makes inSAR hard) for [https://www.academia.edu/44939771/Damage_detection_using_SAR_coherence_statistical_analysis_application_to_Beirut_Lebanon damage detection].&lt;br /&gt;
&lt;br /&gt;
When inSAR works, it’s like magic. You can pick up extremely subtle effects over large areas. It does have limits – for example, you can only measure displacement toward or away from the satellite(s), which for SAR is always at least somewhat to the side, and not necessarily in the direction you actually care about (say, up/down). And as you would expect, it tends to require a lot of very good data (because, for example, satellite orbits are never absolutely perfect repeats), expertise, and minutes to days of fine-tuning.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2201</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2201"/>
		<updated>2022-07-20T21:56:01Z</updated>

		<summary type="html">&lt;p&gt;Vruba: /* Georeferencing and orthorectification */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this is the heritage of Cold War IMINT workers wanting shadows to estimate structure heights. (If you image around noon, you get places with vertical shadows in the tropics. This gives you depth perception problems, like you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all commercial satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to do one full orbit. This is another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: about 400 km or 250 mi wide, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimension(s) we think of most often is spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor” we understand this by default to mean spatial resolution. This is also called ground sample distance (GSD) or ground resolved distance (GRD), and is the dimensions of the pixels of the data. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length dimension: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. If we think about standard optical instruments, which are basically telescopes with CCDs, they do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – a fraction of the arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth at a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) and looking on nadir (straight down). That’s a best case. When looking to the side, at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked consideration on spatial resolution is that pixel area is the square of pixel side length, and it’s what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is 9× as clear as 30 m imagery – not 3× – all else being equal.&lt;br /&gt;
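&lt;br /&gt;
The arithmetic in this paragraph can be sanity-checked in a few lines (pure illustration):&lt;br /&gt;
&lt;br /&gt;
```python
def pixels_per_m2(gsd_m):
    # Pixels covering one square meter of ground, assuming square pixels.
    return (1.0 / gsd_m) ** 2

print(pixels_per_m2(1.0))   # 1.0
print(pixels_per_m2(0.5))   # 4.0 -- "twice" the resolution, 4x the pixels
print(pixels_per_m2(0.25))  # 16.0

# 10 m imagery vs 30 m imagery: 9x the pixel count, not 3x.
print(round(pixels_per_m2(10) / pixels_per_m2(30)))  # 9
```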
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near and shortwave infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered less by the atmosphere than visible light, and much less than blue light. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These will be called the panchromatic or pan and (collectively) multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, for example, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits; newer and better ones are typically 12 to 14.&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks on the Sahara, despite the tracks being made out of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford a lot of highest spatial resolution imagery. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
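&lt;br /&gt;
Under those idealizing assumptions, the 1/sqrt(n) behavior takes only a few lines of numpy to verify (synthetic noise, illustrative only):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(42)
n = 16  # number of bands being averaged

# n bands of pure, independent, unit-standard-deviation noise (the ideal case).
bands = rng.standard_normal((n, 100_000))
averaged = bands.mean(axis=0)

# Theory: the standard deviation of the average is 1/sqrt(16) = 0.25.
print(averaged.std())  # close to 0.25
```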
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding error: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of talking about effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
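&lt;br /&gt;
Quantization noise specifically can be simulated by rounding a synthetic signal to a given bit depth; under common assumptions its standard deviation is the quantization step divided by sqrt(12). A sketch:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.uniform(0.0, 1.0, 1_000_000)  # "true" values scaled to [0, 1]

def quantize(x, bits):
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

for bits in (8, 12):
    err = quantize(signal, bits) - signal
    step = 1.0 / (2 ** bits - 1)
    # Measured noise std should sit near the theoretical step / sqrt(12).
    print(bits, err.std(), step / 12 ** 0.5)
```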
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is using air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh scattering from aerosols, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. Note that this range is 1.567 to 1.651 μm. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
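&lt;br /&gt;
As a concrete instance of band math, NDVI is just a normalized difference of the NIR and red bands. A minimal numpy sketch, with made-up reflectance values:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    # Normalized difference vegetation index: (NIR - Red) / (NIR + Red).
    # eps avoids division by zero on all-dark pixels.
    return (nir - red) / (nir + red + eps)

# Toy values: healthy vegetation is bright in NIR and dark in red.
nir = np.array([0.50, 0.30, 0.10])  # vegetation, bare soil, water (illustrative)
red = np.array([0.08, 0.20, 0.09])
print(ndvi(nir, red).round(2))  # vegetation scores high, water near zero
```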
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue bands of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, and therefore you should favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
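&lt;br /&gt;
Mechanically, applying a band combination is just stacking rasters in the stated order. A sketch with hypothetical arrays keyed by Landsat 8/9 band numbers:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Hypothetical per-band rasters keyed by Landsat 8/9 band number.
rng = np.random.default_rng(1)
bands = {number: rng.uniform(0, 1, (8, 8)) for number in range(1, 10)}

def combo(bands, numbers):
    # Stack bands into an (H, W, 3) image; the first number fills the red channel.
    return np.dstack([bands[number] for number in numbers])

false_color = combo(bands, (5, 4, 3))  # color-infrared: vegetation shows bright red
print(false_color.shape)  # (8, 8, 3)
```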
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will be (typically) some multiple of 2 larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
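&lt;br /&gt;
At the very basic end of that range is the Brovey transform, which rescales each color band so per-pixel brightness matches the pan band. A toy sketch (assuming the multispectral data has already been upsampled onto the pan grid):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def brovey_pansharpen(rgb, pan, eps=1e-9):
    # rgb: (3, H, W) multispectral upsampled to the pan grid; pan: (H, W).
    # Scale each band by the ratio of pan brightness to multispectral intensity.
    intensity = rgb.mean(axis=0)
    return rgb * (pan / (intensity + eps))

# Tiny synthetic scene: flat color, with the pan band carrying spatial detail.
rgb = np.full((3, 4, 4), 0.2)
pan = np.linspace(0.1, 0.4, 16).reshape(4, 4)
sharp = brovey_pansharpen(rgb, pan)

# The result keeps the band ratios but takes its brightness from the pan band.
print(np.allclose(sharp.mean(axis=0), pan))  # True
```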
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;br /&gt;
&lt;br /&gt;
==== Georeferencing and orthorectification ====&lt;br /&gt;
&lt;br /&gt;
''Much of this applies outside optical as well – move?''&lt;br /&gt;
&lt;br /&gt;
A raw satellite image of land is an angled view of a rough surface. (Even nominally nadir-pointing satellites acquire imagery that is off-nadir toward its edges.) If you imagine riding on a satellite and looking off to, say, the west, you will see the eastern sides of hills and buildings at flatter angles than you see the western sides – if you can see them at all. To turn a raw image into something that is projected orthographically, like a map, you have to use a terrain model – a 3D map of the planet’s surface. Then you can use information about where the satellite was and the angle its sensor was pointing, and for each pixel in the output image, you can project it out to see at what latitude and longitude it must have intersected the ground. Then you move all the pixels to their coordinates in some convenient projection, and you’ve essentially taken the image out of perspective and made it orthographic.&lt;br /&gt;
&lt;br /&gt;
Except:&lt;br /&gt;
&lt;br /&gt;
* Earth’s surface is rough at every scale, and even “porous” or multiply defined in the sense that there are features like leafless trees that make it hard to define where the optical surface actually ''is'' at any given scale.&lt;br /&gt;
* There is no perfectly [https://en.wikipedia.org/wiki/Accuracy_and_precision accurate, precise], global, completely up-to-date terrain model of the Earth, let alone at a reasonable price. SRTM is pretty good but it’s only about 30 m, stops short of the Arctic, and is 20+ years out of date: there are entire lakes, highway cuts, and reclaimed islands that don’t exist in it.&lt;br /&gt;
* Satellites typically only know where they’re pointing to within the equivalent of about 10 pixels (which, to be fair, is usually an extremely small fraction of a degree), so the pointing data can only narrow things down, not actually tell you where you are.&lt;br /&gt;
* Continental drift means that a continent can move by easily 1 px over the lifetime of a high-end commercial satellite; a major earthquake can discontinuously distort a small region by several m.&lt;br /&gt;
* To properly pin down an image (i.e., to check the reported pointing angle), you need to know the exact 3D location of 3 visible points within it, and realistically more like 10.&lt;br /&gt;
* All these errors can combine.&lt;br /&gt;
* No matter what, you can’t recover occluded features, i.e. things you can’t see in the original data. If you want a high-res satellite image of something like a canyon, you realistically need half a dozen images at very specific angles, which is extremely hard.&lt;br /&gt;
&lt;br /&gt;
We could go on! Georeferencing and orthorectification is a difficult problem. It’s easier for lower-resolution satellites, because a given angular error comes out to fewer pixels. Also, survey-mode satellites like Landsat and Sentinel-2, which are nadir-pointing anyway, put a lot of effort into doing this well. Two Landsat scenes will almost always coregister to well within a pixel. Sentinel-2 is a little less reliable, especially toward the poles. Commercial imagery is often displaced by far more than you would think. One way to see this is to step back in Google Earth Pro’s history tool, especially somewhere relatively remote and rugged.&lt;br /&gt;
&lt;br /&gt;
Here’s a farm in Nepal: 28.553, 84.2415. Just step back in time and watch it jump around underneath the pin. If you really want to be scared, watch the cliff to its north. This is why imagery analysts who understand imagery pipelines rarely use a whole lot of significant digits in their coordinates! You don’t really know where anything on Earth is, in absolute terms, to better than a few meters at best if all you have to go on is a satellite image.&lt;br /&gt;
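&lt;br /&gt;
For a rough sense of the scale of terrain-induced displacement: under a flat-Earth approximation, a feature of height h imaged θ degrees off-nadir is shifted horizontally by h·tan(θ) relative to its base, which a bare-earth terrain model cannot correct. Illustrative numbers:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def relief_displacement_m(height_m, off_nadir_deg):
    # Horizontal offset of a feature's top relative to its base in the image,
    # for a given look angle (flat-Earth approximation; illustrative only).
    return height_m * math.tan(math.radians(off_nadir_deg))

# A 100 m tower imaged 30 degrees off-nadir appears to lean about 58 m.
print(round(relief_displacement_m(100, 30)))  # 58
```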
&lt;br /&gt;
==== Atmospheric correction ====&lt;br /&gt;
&lt;br /&gt;
Over long distances, even in clear weather, the atmosphere scatters and absorbs light. This is why distant hills are low-contrast and blueish (blue light is scattered more). What a satellite actually measures is called top-of-atmosphere radiance, or TOA. This is a measurement of nothing more than the amount of energy received per second, per pixel, per band. It can be measured pretty objectively. However, it’s often not what you want. For one thing, it’s too blue. For another, the amount of blueness and related effects will vary semi-randomly with atmospheric conditions (humidity, maybe dust storms or wildfire smoke, etc.) and predictably with season.&lt;br /&gt;
&lt;br /&gt;
Therefore, a reasonable desire is to basically normalize the sun and remove the effects of the atmosphere. What we’re trying to model here is called surface reflectance (SR). The main issue is that we don’t know the true state of the atmosphere at the moment the image was acquired. The best we can do is to model it and subtract it out. This is one of ''the'' problems in remote sensing, and you could earn a PhD by improving [https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes#Table_of_models one of the major models] by a few percent.&lt;br /&gt;
&lt;br /&gt;
The good news is there’s a brutally simple method that works pretty well most of the time. Dark object subtraction means assuming that the darkest pixel in the image should be pure black. Therefore, if you subtract out however much blue (and green, and so on) signal is present in the darkest pixel, you will have canceled out all the haze. It’s annoying how well this works considering how basic it is. It’s roughly equivalent to the auto-adjust tool in an image editor like Photoshop, or, to be a little more exact, a little like using the eyedropper in the Levels tool to set the black point to the darkest pixel.&lt;br /&gt;
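&lt;br /&gt;
A minimal dark object subtraction in numpy (a sketch under the stated assumption that each band’s darkest pixel ought to be pure black):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def dark_object_subtraction(bands):
    # bands: (n_bands, H, W) top-of-atmosphere values. Subtract each band's
    # darkest pixel, treating that signal as additive haze.
    dark = bands.min(axis=(1, 2), keepdims=True)
    return np.clip(bands - dark, 0.0, None)

# Synthetic hazy scene: true surface values plus a per-band haze offset.
rng = np.random.default_rng(7)
true = rng.uniform(0, 1, (3, 32, 32))
true[:, 0, 0] = 0.0  # guarantee one genuinely black pixel per band
haze = np.array([0.15, 0.08, 0.03]).reshape(3, 1, 1)  # blue haziest

corrected = dark_object_subtraction(true + haze)
print(np.allclose(corrected, true))  # True
```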
&lt;br /&gt;
Correction to reflectance may or may not attempt to correct for terrain effects (i.e., relighting the scene). Different pipelines have different conventions for how far to correct or what to call different kinds of correction.&lt;br /&gt;
&lt;br /&gt;
Atmospheric correction is usually not key for OSINT purposes, but any time you find yourself taking exact measurements of pixel values, you should at least know whether you’re working in TOA or in SR, and if SR, you should have a sense of what the pipeline was.&lt;br /&gt;
&lt;br /&gt;
==== Common optical sensor types ====&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# Pushbroom&lt;br /&gt;
# Whiskbroom&lt;br /&gt;
# Full-frame&lt;br /&gt;
&lt;br /&gt;
=== Thermal ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2200</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2200"/>
		<updated>2022-07-20T21:55:16Z</updated>

		<summary type="html">&lt;p&gt;Vruba: Rounding out discussion of optical&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this is the heritage of Cold War IMINT workers wanting shadows to estimate structure heights. (If you image around noon, you get places with vertical shadows in the tropics. This gives you depth perception problems, like you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all commercial satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to do one full orbit. This is another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: about 400 km or 250 mi wide, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
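&lt;br /&gt;
The light trig can be sketched in a couple of lines (a flat-ground approximation that ignores Earth’s curvature, which would stretch the distance somewhat at these angles):&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Ground distance from the nadir point for a sensor at a given
# altitude looking a given angle off nadir, assuming flat ground.
def ground_offset_km(altitude_km, angle_deg):
    return altitude_km * math.tan(math.radians(angle_deg))

# Landsat 9's altitude, looking 30 degrees off nadir:
offset = ground_offset_km(705, 30)  # ~407 km to one side of the track
```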
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimension(s) we think of most often is spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
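&lt;br /&gt;
The napkin case can be sketched as simple linear mixing; the reflectance values here are invented purely for illustration:&lt;br /&gt;
&lt;br /&gt;
```python
# Linear mixing sketch for the sub-pixel napkin. Reflectances are
# made-up illustrative numbers, not measurements.
gsd = 0.25           # pixel edge length, m
napkin_edge = 0.10   # napkin edge length, m
asphalt, napkin = 0.05, 0.90   # assumed reflectances

# Best case: the napkin falls entirely inside one pixel.
fraction = (napkin_edge / gsd) ** 2   # share of the pixel it covers
mixed = fraction * napkin + (1 - fraction) * asphalt

# The mixed pixel is several times brighter than its neighbors, so
# it stands out even though the napkin itself is unresolved.
contrast = mixed / asphalt
```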
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor” we understand this by default to mean spatial resolution. This is also called ground sample distance (GSD) or ground resolved distance (GRD), and is the dimensions of the pixels of the data. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length dimension: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. If we think about standard optical instruments, which are basically telescopes with CCDs, they do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – a fraction of the arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth at a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) and looking on nadir (straight down). That’s a best case. When looking to the side, at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked consideration about spatial resolution is that pixel area is the square of pixel side length, and area is what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is more than 3× as clear as 30 m imagery, all else being equal.&lt;br /&gt;
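&lt;br /&gt;
The arithmetic is trivial but worth internalizing:&lt;br /&gt;
&lt;br /&gt;
```python
# Pixels covering one square meter of ground at a given GSD.
def pixels_per_m2(gsd_m):
    return 1.0 / gsd_m ** 2

# Halving the edge length quadruples the pixel count:
counts = [pixels_per_m2(g) for g in (1.0, 0.5, 0.25)]  # 1, 4, 16

# 10 m imagery has 9x the pixels of 30 m imagery, not 3x.
gain = pixels_per_m2(10) / pixels_per_m2(30)
```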
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered less by the atmosphere than visible light, especially blue light. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
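&lt;br /&gt;
As a concrete example of putting IR to work, the NDVI mentioned above is just per-pixel band math; the toy reflectance values here are invented for illustration:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
# Toy 2x2 reflectance arrays: the top-left pixel is healthy
# vegetation (bright NIR, dark red); bottom-right is bare ground.
nir = np.array([[0.50, 0.30],
                [0.45, 0.08]])
red = np.array([[0.08, 0.20],
                [0.10, 0.07]])

ndvi = (nir - red) / (nir + red)
# Healthy vegetation pushes NDVI toward +1; bare or built surfaces
# sit near 0; open water is often slightly negative.
```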
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These will be called the panchromatic or pan and (collectively) multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, for example, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits; newer and better ones are typically 12 to 14.&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks on the Sahara, despite the tracks being made out of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford a lot of the highest spatial resolution imagery. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
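&lt;br /&gt;
The 1/sqrt(n) rule is easy to check numerically under those idealizing assumptions (independent, zero-mean, unit-variance noise):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Average 16 bands of pure unit-variance Gaussian noise and compare
# the noise level of the average against a single band.
rng = np.random.default_rng(0)
n = 16
bands = rng.standard_normal((n, 512, 512))

averaged = bands.mean(axis=0)
ratio = averaged.std() / bands[0].std()  # ~1/sqrt(16) = 0.25
```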
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding errors or aliasing: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of talking about effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
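&lt;br /&gt;
Quantization noise specifically can be simulated by rounding a signal to a given bit depth; for uniform quantization with step ''q'', the error standard deviation is about ''q''/sqrt(12):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Quantize a signal in [0, 1] to a given bit depth and measure the
# rounding error that quantization introduces.
def quantization_error_std(signal, bits):
    step = 1.0 / (2 ** bits - 1)
    quantized = np.round(signal / step) * step
    return (signal - quantized).std()

rng = np.random.default_rng(1)
signal = rng.uniform(0.0, 1.0, 100_000)  # stand-in "true" radiances

err8 = quantization_error_std(signal, 8)
err12 = quantization_error_std(signal, 12)
# Each extra bit roughly halves quantization noise, so err8/err12 ~ 16.
```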
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is using air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh scattering and aerosol scattering, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. Note that this range is 1.567 to 1.651 μm. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue bands of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, and therefore you should favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
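&lt;br /&gt;
Mechanically, building a false-color composite is just stacking arrays in channel order. This sketch uses flat toy arrays standing in for, e.g., Landsat 8/9 bands 7, 5, and 3:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Toy single-value "bands" in place of real raster data.
h, w = 4, 4
swir2 = np.full((h, w), 0.30)   # longest wavelength -> red channel
nir = np.full((h, w), 0.55)     # -> green channel
green = np.full((h, w), 0.12)   # shortest wavelength -> blue channel

# Stack into an (h, w, 3) image and scale to 8 bits for display.
composite = np.dstack([swir2, nir, green])
display = (np.clip(composite, 0, 1) * 255).astype(np.uint8)
```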
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will be (typically) some multiple of 2 larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
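&lt;br /&gt;
The translucent-overlay intuition corresponds to one of the most basic algorithms: scale each pixel’s color by the pan brightness (a Brovey-style sketch, nothing like a production pipeline):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Crude ratio-based pansharpening: replace the brightness of the
# (already upsampled) RGB image with the pan band, keeping the
# color ratios of each pixel.
def crude_pansharpen(rgb, pan):
    intensity = rgb.mean(axis=2, keepdims=True)
    return rgb * (pan[..., None] / (intensity + 1e-6))

# Toy scene: flat color everywhere, one sharp bright pan pixel.
rgb = np.full((4, 4, 3), [0.2, 0.4, 0.6])
pan = np.full((4, 4), 0.4)
pan[1, 1] = 0.8

sharp = crude_pansharpen(rgb, pan)
# The bright pan pixel brightens all three channels at [1, 1]
# while each pixel's red:green:blue ratios stay the same.
```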
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;br /&gt;
&lt;br /&gt;
==== Georeferencing and orthorectification ====&lt;br /&gt;
&lt;br /&gt;
''Much of this applies outside optical as well – move?''&lt;br /&gt;
&lt;br /&gt;
A raw satellite image of land is an angled view of a rough surface. (Even nominally nadir-pointing satellites acquire imagery that is off-nadir toward the edges of each scene.) If you imagine riding on a satellite and looking off to, say, the west, you will see the eastern sides of hills and buildings at flatter angles than you see the western sides – if you can see them at all. To turn a raw image into something that is projected orthographically, like a map, you have to use a terrain model – a 3D map of the planet’s surface. Then you can use information about where the satellite was and the angle its sensor was pointing, and for each pixel in the output image, you can project it out to see at what latitude and longitude it must have intersected the ground. Then you move all the pixels to their coordinates in some convenient projection, and you’ve essentially taken the image out of perspective and made it orthographic.&lt;br /&gt;
&lt;br /&gt;
Except:&lt;br /&gt;
&lt;br /&gt;
* Earth’s surface is rough at every scale, and even “porous” or multiply defined in the sense that there are features like leafless trees that make it hard to define where the optical surface actually ''is'' at any given scale.&lt;br /&gt;
* There is no perfectly [https://en.wikipedia.org/wiki/Accuracy_and_precision accurate, precise], global, completely up-to-date terrain model of the Earth, let alone at a reasonable price. SRTM is pretty good but it’s only about 30 m, stops short of the Arctic, and is 20+ years out of date: there are entire lakes, highway cuts, and reclaimed islands that don’t exist in it.&lt;br /&gt;
* Satellites typically only know where they’re pointing to within the equivalent of about 10 pixels (which, to be fair, is usually an extremely small fraction of a degree), so the pointing data can only narrow things down, not actually tell you where you are.&lt;br /&gt;
* Continental drift means that a continent can move by easily 1 px over the lifetime of a high-end commercial satellite; a major earthquake can discontinuously distort a small region by several m.&lt;br /&gt;
* To properly pin down an image (i.e., to check the reported pointing angle), you need to know the exact 3D location of 3 visible points within it, and realistically more like 10.&lt;br /&gt;
* All these errors can combine.&lt;br /&gt;
* No matter what, you can’t recover occluded features, i.e. things you can’t see in the original data. If you want a high-res satellite image of something like a canyon, you realistically need half a dozen images at very specific angles, which is extremely hard.&lt;br /&gt;
&lt;br /&gt;
We could go on! Georeferencing and orthorectification are difficult problems. They’re easier for lower-resolution satellites, because a given angular error comes out to fewer pixels. Also, survey-mode satellites like Landsat and Sentinel-2, which are nadir-pointing anyway, put a lot of effort into doing this well. Two Landsat scenes will almost always coregister to well within a pixel. Sentinel-2 is a little less reliable, especially toward the poles. Commercial imagery is often displaced by far more than you would think. One way to see this is to step back in Google Earth Pro’s history tool, especially somewhere relatively remote and rugged.&lt;br /&gt;
&lt;br /&gt;
Here’s a farm in Nepal: 28.553, 84.2415. Just step back in time and watch it jump around underneath the pin. If you really want to be scared, watch the cliff to its north. This is why imagery analysts who understand imagery pipelines rarely use a whole lot of significant digits in their coordinates! You don’t really know where anything on Earth is, in absolute terms, to within more than a few meters at best if all you have to go on is a satellite image.&lt;br /&gt;
&lt;br /&gt;
==== Atmospheric correction ====&lt;br /&gt;
&lt;br /&gt;
Over long distances, even in clear weather, the atmosphere scatters and absorbs light. This is why distant hills are low-contrast and blueish (blue light is scattered more). What a satellite actually measures is called top-of-atmosphere radiance, or TOA. This is a measurement of nothing more than the amount of energy received per second, per pixel, per band. It can be measured pretty objectively. However, it’s often not what you want. For one thing, it’s too blue. For another, the amount of blueness and related effects will vary semi-randomly with atmospheric conditions (humidity, maybe dust storms or wildfire smoke, etc.) and predictably with season.&lt;br /&gt;
&lt;br /&gt;
Therefore, a reasonable desire is to basically normalize the sun and remove the effects of the atmosphere. What we’re trying to model here is called surface reflectance (SR). The main issue is that we don’t know the true state of the atmosphere at the moment the image was acquired. The best we can do is to model it and subtract it out. This is one of ''the'' problems in remote sensing, and you could earn a PhD by improving [https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes#Table_of_models one of the major models] by a few percent.&lt;br /&gt;
&lt;br /&gt;
The good news is there’s a brutally simple method that works pretty well most of the time. Dark object subtraction means assuming that the darkest pixel in the image should be pure black. Therefore, if you subtract out however much blue (and green, and so on) signal is present in the darkest pixel, you will have canceled out all the haze. It’s annoying how well this works considering how basic it is. It’s roughly equivalent to the auto-adjust tool in an image editor like Photoshop, or, to be a little more exact, a little like using the eyedropper in the Levels tool to set the black point to the darkest pixel.&lt;br /&gt;
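&lt;br /&gt;
Dark object subtraction is short enough to write out in full; the haze offsets here are invented for illustration:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Assume the darkest pixel in each band "should" be zero and
# subtract its value from the whole band.
def dark_object_subtract(bands):
    dark = bands.min(axis=(1, 2), keepdims=True)  # per-band minimum
    return bands - dark

# Toy scene: random surface reflectances plus a constant haze
# offset per band, strongest in blue, as scattering would be.
rng = np.random.default_rng(2)
scene = rng.uniform(0.0, 0.5, (3, 64, 64))        # r, g, b bands
haze = np.array([0.02, 0.05, 0.12]).reshape(3, 1, 1)
observed = scene + haze

corrected = dark_object_subtract(observed)
# The haze offset is gone, up to the brightness of the scene's
# genuinely darkest pixel in each band.
```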
&lt;br /&gt;
Correction to reflectance may or may not attempt to correct for terrain effects (i.e., relighting the scene). Different pipelines have different conventions for how far to correct or what to call different kinds of correction.&lt;br /&gt;
&lt;br /&gt;
Atmospheric correction is usually not key for OSINT purposes, but any time you find yourself taking exact measurements of pixel values, you should at least know whether you’re working in TOA or in SR, and if SR, you should have a sense of what the pipeline was.&lt;br /&gt;
&lt;br /&gt;
==== Common optical sensor types ====&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;br /&gt;
&lt;br /&gt;
# Pushbroom&lt;br /&gt;
# Whiskbroom&lt;br /&gt;
# Full-frame&lt;br /&gt;
&lt;br /&gt;
=== Thermal ===&lt;br /&gt;
&lt;br /&gt;
''This section is a stub. Please start it!''&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2199</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2199"/>
		<updated>2022-07-20T21:49:32Z</updated>

		<summary type="html">&lt;p&gt;Vruba: Sensor modalities (part 1)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this is the heritage of Cold War IMINT workers wanting shadows to estimate structure heights. (If you image around noon, you get places with vertical shadows in the tropics. This gives you depth perception problems, like you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all commercial satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to complete one full orbit – another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: about 400 km or 250 mi to either side of the ground track, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
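&lt;br /&gt;
That back-of-envelope trig fits in a few lines, using a Landsat-like altitude and a flat-Earth approximation (fine at these scales):&lt;br /&gt;
&lt;br /&gt;
```python
import math

altitude_km = 705          # Landsat-like orbit altitude
max_off_nadir_deg = 30     # typical practical pointing limit

# Ground distance from the nadir point reachable at that roll angle
reach_km = altitude_km * math.tan(math.radians(max_off_nadir_deg))
print(round(reach_km))     # about 407 km of reach from the nadir point
```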
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimension(s) we think of most often is spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor” we understand this by default to mean spatial resolution. This is also called ground sample distance (GSD) or ground resolved distance (GRD), and is the dimensions of the pixels of the data. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square or close enough, so you see this given as a single length dimension: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. If we think about standard optical instruments, which are basically telescopes with CCDs, they do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – a fraction of the arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth at a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) and looking on nadir (straight down). That’s a best case. When looking to the side, at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
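&lt;br /&gt;
To make the sleight of hand concrete, here is the small-angle arithmetic for a hypothetical sensor. The numbers are illustrative, and the 1/cos² growth off-nadir is an idealized flat-terrain approximation:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def ground_sample_distance(altitude_m, ifov_rad, off_nadir_deg=0.0):
    """Approximate GSD from an instantaneous field of view (IFOV).

    At nadir, GSD is about altitude * IFOV (small-angle approximation).
    Off-nadir, the slant range grows as 1/cos(theta), and the footprint
    stretches by roughly another 1/cos(theta) along the look direction,
    so pixels grow about as 1/cos^2(theta). Ignores terrain entirely.
    """
    theta = math.radians(off_nadir_deg)
    return altitude_m * ifov_rad / math.cos(theta) ** 2

# Hypothetical sensor: 705 km altitude, IFOV chosen to give 15 m at nadir
ifov = 15.0 / 705_000.0
print(round(ground_sample_distance(705_000, ifov), 1))        # 15.0 m at nadir
print(round(ground_sample_distance(705_000, ifov, 30.0), 1))  # 20.0 m at 30 deg off-nadir
```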
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked point about spatial resolution is that pixel area – the square of pixel side length – is what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is roughly 9× as clear as 30 m imagery, not 3×, all else being equal.&lt;br /&gt;
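&lt;br /&gt;
The arithmetic is trivial but worth internalizing:&lt;br /&gt;
&lt;br /&gt;
```python
def relative_clarity(coarse_gsd, fine_gsd):
    """How many fine-GSD pixels cover one coarse-GSD pixel (square pixels)."""
    return (coarse_gsd / fine_gsd) ** 2

print(relative_clarity(1.0, 0.5))   # 4.0: "twice" the resolution, 4x the pixels
print(relative_clarity(1.0, 0.25))  # 16.0
print(relative_clarity(30, 10))     # 9.0: 10 m imagery vs. 30 m imagery
```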
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered by the atmosphere less than visible light, and especially less than blue light. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, and multiple bands that are spectrally sharp but spatially coarse. These will be called the panchromatic or pan and (collectively) multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, for example, does not have a pan band, but it collects different bands at different spatial resolutions roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits; newer and better ones are typically 12 to 14.&lt;br /&gt;
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks in the Sahara, despite the tracks being made of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford much of the highest-spatial-resolution imagery. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
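&lt;br /&gt;
You can check the 1/sqrt(n) claim empirically with a quick simulation (idealized, zero-correlation noise):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(42)
n = 16                      # number of bands averaged together

# n bands of pure unit-standard-deviation noise (idealized: uncorrelated)
bands = rng.standard_normal((n, 100_000))
averaged = bands.mean(axis=0)

# Prediction: sigma of the average is 1/sqrt(16) = 0.25
print(round(float(averaged.std()), 3))   # close to 0.25
```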
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal to noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding errors or aliasing: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of talking about effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;br /&gt;
&lt;br /&gt;
== Modalities ==&lt;br /&gt;
&lt;br /&gt;
A sensor’s modality is the form of energy it senses and the general principles it uses to construct useful data. For example, microphones are sensors whose modality is measuring air pressure to record sound, barometers are sensors whose modality is using air pressure to record weather-scale atmospheric events, and everyday cameras are sensors whose modality is measuring visible light to record focused images.&lt;br /&gt;
&lt;br /&gt;
=== Optical ===&lt;br /&gt;
&lt;br /&gt;
Here we’ll define the optical domain as anything transmitted by Earth’s atmosphere in [https://en.wikipedia.org/wiki/Atmospheric_window#/media/File:Atmospheric_Transmission.svg the windows] between about 300 nm and 3 μm. This includes near ultraviolet (here, “near” means “near visible”, not “almost”), visible, near infrared, and shortwave infrared light, but not thermal infrared. You might also see this range described as, for example, VNIR + SWIR – visible, near infrared, and shortwave infrared. We’ll use Landsat as an example again, since its OLI sensor (on Landsat 8 and 9) is well-known and fairly typical of rich multispectral sensors. Its bands are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ OLI and OLI2 bands&amp;lt;ref&amp;gt;https://landsat.gsfc.nasa.gov/satellites/landsat-8/spacecraft-instruments/operational-land-imager/spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Name !! Wavelength range in nm (FWHM) !! Primary uses !! Visible to human eyes&lt;br /&gt;
|-&lt;br /&gt;
| Coastal/aerosol || 435 to 451 || Deep blue-violet. Water is very transparent in this band, so it can see into shallows. Also picks up Rayleigh scattering and aerosol scattering, helping model atmospheric effects and distinguish clouds v. dust v. smoke. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Blue || 452 to 512 || For true color. Useful for water. Better SNR than the coastal/aerosol band. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Green || 533 to 590 || For true color. Chlorophyll (land vegetation, plankton, etc.). Around the peak illumination of the sun. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Red || 636 to 673 || For true color. Absorbed well by chlorophyll. Shows soil. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| NIR (near infrared) || 851 to 879 || Reflected extremely well by chlorophyll and healthy leaf structures. Often the brightest band. || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR1 (shortwave infrared 1) || 1,567 to 1,651 || Cuts through thin clouds well. Reflectivity correlates with dust/snow grain size – informative about surface texture. (Equivalently, 1.567 to 1.651 μm.) || No&lt;br /&gt;
|-&lt;br /&gt;
| SWIR2 (shortwave infrared 2) || 2,107 to 2,294 || Similar to SWIR1; some surfaces are easily distinguished by their differences in SWIR1 v. SWIR2. Flame/embers and lava glow strongly here. || No&lt;br /&gt;
|-&lt;br /&gt;
| Pan (panchromatic) || 503 to 676 || Twice the linear resolution of all the other bands, since its wide bandwidth can integrate more photons at a given noise level. Used for pansharpening. This and the next are given out of spectral order. || Yes&lt;br /&gt;
|-&lt;br /&gt;
| Cirrus || 1,363 to 1,384 || Deliberately ''not'' in an atmospheric window – almost entirely absorbed by water vapor in the lower atmosphere, but strongly reflected by high clouds. Allows for better atmospheric correction by spotting thin clouds. || No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Band names are semi-standard in the sense that, for example, green will always mean some version of visible green. However, exact bandpasses can vary quite a bit between sensors. Intercomparing bands from different sensors on the assumption that they must match will often lead to problems – check the actual numbers, not the names.&lt;br /&gt;
&lt;br /&gt;
Bands can be processed and combined in many, many useful ways. For example, you can run statistics like principal component analysis on a set of bands to find correlations and outliers. You can use band ratios like [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI], [https://en.wikipedia.org/wiki/Normalized_difference_water_index NDWI], or [https://www.earthdatascience.org/courses/earth-analytics/multispectral-remote-sensing-modis/normalized-burn-index-dNBR/ NBR], which index properties like vegetation health, surface moisture, and burn scars. You can treat multispectral values as vectors to be clustered, compared, or decomposed. You can derive [https://www.mdpi.com/2072-4292/12/4/637/htm a “contra-band”] by subtracting some bands out of another band that covers them.&lt;br /&gt;
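&lt;br /&gt;
For example, NDVI is just a normalized band difference. A minimal numpy version, with made-up reflectance values:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Made-up reflectances: healthy vegetation is NIR-bright and red-dark;
# bare soil is much flatter across the two bands
print(ndvi([0.45, 0.30], [0.05, 0.25]).round(2))  # roughly [0.8, 0.09]
```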
&lt;br /&gt;
You almost always learn more by comparing bands than from one band alone. Features that are unremarkable in a single grayscale image can become meaningful if you notice that they don’t fit the usual relationship between that band and some other band(s).&lt;br /&gt;
&lt;br /&gt;
==== True and false color ====&lt;br /&gt;
&lt;br /&gt;
True color imagery puts red, green, and blue sensed bands in the red, green, and blue bands of the output image. It looks more or less like it would to an astronaut with binoculars. What’s called true color is often not quite, because the sensor bands don’t correspond exactly to the primaries used in standards like sRGB, but the difference is rarely important.&lt;br /&gt;
&lt;br /&gt;
Humans have 30 million years of evolutionary hard-wiring and several decades of individual practice in interpreting true color images, and therefore you should favor true color whenever reasonably possible.&lt;br /&gt;
&lt;br /&gt;
However, often false color is the way to go. This means putting anything but red, green, and blue bands (in that order) in the channels of the image you’re looking at. You might not even use bands directly at all; you might derive indexes or other more processed pseudo-bands. You could pull in data from another modality. Most often, however, people simply choose the bands that are most useful to them and put them in the visible channels in spectral order (i.e., the longest wavelength goes in the red channel and the shortest in blue). For any widely used sensor, a web search should give you a selection of “zoos” demonstrating popular band combinations – for example, [https://www.researchgate.net/figure/Combinations-of-Landsat-8-bands-QGIS-Images_fig6_291969860 here’s one for Landsat 8/9], but you can find dozens of others.&lt;br /&gt;
&lt;br /&gt;
Band combinations are usually given by sensor-specific band numbers: 987 or 9-8-7 means band 9 is in the red channel and so on. (Annoyingly, this means that, e.g., Landsat 8/9 combination 543 and Sentinel-2 combination 843 are basically the same thing despite having different numbers.)&lt;br /&gt;
&lt;br /&gt;
==== Pansharpening ====&lt;br /&gt;
&lt;br /&gt;
Many sensors, including virtually all current-generation commercial data at about 1 m or sharper spatial resolution, have a spatially sharp but spectrally coarse panchromatic (pan) band and a set of spatially coarser but spectrally sharper multispectral bands. The nominal spatial resolution of the sensor will be for the pan band alone, and the multispectral bands’ pixels will be (typically) some multiple of 2 larger on an edge. For example, Landsat 8 and 9 have 15 m pan bands and 30 m multispectral bands (2×, linearly). The Pléiades and WorldView constellations have roughly 50 cm pan bands and 2 m multispectral bands (4×). SkySat, unusually, produces imagery (with some preprocessing) at 57 cm pan, 75 cm multispectral (~1.3×).&lt;br /&gt;
&lt;br /&gt;
For visualization purposes, we combine panchromatic and visible data into a single image. As an intuitive model of this process, imagine overlaying a translucent, sharp black-and-white image (the pan band) onto a blurry color image (the RGB bands) of the same scene. You can actually do this quite literally and get a semi-acceptable result, or [https://earthobservatory.nasa.gov/blogs/earthmatters/2017/06/13/how-to-pan-sharpen-landsat-imagery/ work harder] to get a better result. “Real” automated pansharpening algorithms range from the very basic to the extremely sophisticated.&lt;br /&gt;
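&lt;br /&gt;
The “translucent overlay” intuition corresponds roughly to ratio-based methods like the Brovey transform. Here is a deliberately minimal sketch, assuming the multispectral bands have already been resampled to the pan band’s grid:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-9):
    """Minimal Brovey-style pansharpening.

    ms:  (bands, H, W) multispectral data, upsampled to the pan grid.
    pan: (H, W) panchromatic band.
    The color ratios come from the blurry bands; the spatial detail comes
    from rescaling each pixel's brightness to match the pan band.
    """
    intensity = ms.mean(axis=0)
    return ms * (pan / (intensity + eps))

ms = np.full((3, 4, 4), 0.2)                    # flat, blurry color
pan = np.linspace(0.1, 0.4, 16).reshape(4, 4)   # sharp brightness detail
sharp = brovey_pansharpen(ms, pan)              # detail now carried in color
```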
&lt;br /&gt;
The point to remember is that most satellite imagery with good spatial resolution is pansharpened, and this creates some artifacts. In particular, when you are zoomed all the way in to 100% (pixel-for-pixel screen resolution), you have actually overzoomed all the color or multispectral information. Any pansharpening algorithm can only estimate a likely distribution of color. It’s like superresolution with neural networks – it may be statistically likely to be correct, it may be perfect in some cases, it may help you interpret what’s there, but it is necessarily a process of inventing information. And that entails risks.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2198</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2198"/>
		<updated>2022-07-20T21:43:57Z</updated>

		<summary type="html">&lt;p&gt;Vruba: Resolutions!&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this preference is a legacy of Cold War IMINT analysts wanting shadows to estimate structure heights. (If you image around noon, shadows in the tropics are nearly vertical. This creates depth perception problems, like those you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
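The globe comparison is simple linear scaling, easy to check (assuming Earth’s mean diameter of about 12742 km):&lt;br /&gt;

```python
# Scale Landsat 9's 705 km altitude down to a 30 cm desktop globe.
earth_diameter_km = 12742   # mean value; the real figure varies slightly
globe_diameter_mm = 300
mm_per_km = globe_diameter_mm / earth_diameter_km
orbit_mm = 705 * mm_per_km
print(round(orbit_mm, 1))  # 16.6, i.e. roughly 17 mm off the globe's surface
```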
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to complete one full orbit. It is another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called relief displacement, or, loosely, layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: about 400 km or 250 mi to each side of the ground track, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
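The “light trig” is just a tangent, under a flat-Earth approximation (Earth’s curvature adds a little extra ground distance at these angles):&lt;br /&gt;

```python
import math

# Ground distance from the sub-satellite point for a 30-degree
# off-nadir look from 705 km altitude (flat-Earth approximation).
altitude_km = 705
half_width_km = altitude_km * math.tan(math.radians(30))
print(round(half_width_km))  # 407, to each side of the ground track
```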
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;br /&gt;
&lt;br /&gt;
== Resolutions ==&lt;br /&gt;
&lt;br /&gt;
Satellite instruments can be thought of as identifying features (a deliberately abstract term) in any of a number of dimensions. The dimension(s) we think of most often is spatial: x and y, or equivalently longitude and latitude or east and north, on Earth’s surface. But a sensor needs a nonzero amount of resolving power in the other dimensions as well in order to be useful.&lt;br /&gt;
&lt;br /&gt;
The idea of resolving power has formal definitions [https://en.wikipedia.org/wiki/Angular_resolution#The_Rayleigh_criterion in optics], for example, but here we will be informal and common-sensical about what it means to actually resolve something. In particular, resolution is usually defined in terms of points (in some dimension), but in the real world we only rarely care about points of any kind; we’re usually more interested in objects and patterns.&lt;br /&gt;
&lt;br /&gt;
As an example, imagine we’re looking for a bright white napkin left on a freshly paved asphalt runway. Even if our data is at a resolution of, say, 25 cm, and the napkin is only 10 cm across, we will probably be able to find the napkin because the pixels it overlaps will be noticeably brighter, assuming good radiometric resolution. In this case, we’ve beaten the nominal spatial resolution of the sensor – we haven’t technically ''resolved'' the napkin, but we’ve ''found'' it, which is what we wanted.&lt;br /&gt;
&lt;br /&gt;
On the other hand, imagine that there are F-16s on the runway, and we want to know whether they’re F-16As or F-16Cs. Unless we have outside information (about markings, say), it’s entirely possible that we can’t tell. The details we need simply aren’t clearly visible from above. Therefore, we cannot determine whether there are F-16As at this airfield – despite the fact that F-16As are much larger than the resolution of the sensor. This seems painfully obvious when spelled out, but people who should know better routinely make versions of this mistake when working on real questions.&lt;br /&gt;
&lt;br /&gt;
These two examples with spatial resolution illustrate that you can’t think of resolution (of any kind) as simply the ability to see a thing of a given size. Sometimes you’ll have better data than you’d think from looking at the number alone and sometimes you’ll have worse. Be skeptical of blanket statements that you definitely can or can’t see ''x'' at resolution ''y''. Often, it’s really a situation where you can see some % of ''x''s at resolution ''y'' under conditions ''z'', and it’s just a question of whether trying is worth the time.&lt;br /&gt;
&lt;br /&gt;
Resolutions are in a multi-way tradeoff in sensor design. As one of several important factors, increasing each kind of resolution multiplies data volumes, and getting data from a satellite to the ground is expensive and sometimes physically limited. In a sense, you can’t get satellite data that does everything (is super sharp ''and'' hyperspectral ''and'' …) for the same reason you can’t get a blender that’s also a toaster and a dishwasher. The laws of physics might not preclude it, but the constraints of sensible engineering absolutely do. What you see in practice are satellites that push for some kinds of resolution at the expense of others. Knowing how to mix and match to answer a particular question is a valuable skill.&lt;br /&gt;
&lt;br /&gt;
=== Spatial ===&lt;br /&gt;
&lt;br /&gt;
If someone says “this is a high-resolution sensor,” we understand by default that they mean spatial resolution. This is also called ground sample distance (GSD) or ground resolved distance (GRD), and refers to the dimensions of the pixels of the data. (Theoretically, you could oversample your data and have pixels smaller than what’s actually resolvable, but that’s not an urgent consideration here.) We usually assume that the pixels are square, or close enough, so you see this given as a single length: 50 cm, 15 m, etc.&lt;br /&gt;
&lt;br /&gt;
There’s some sleight of hand with definitions here. Standard optical instruments, which are basically telescopes with CCDs, do not have an intrinsic ground sample distance. They have an intrinsic angular resolution – the fraction of arc that each pixel covers. This only becomes a distance on Earth’s surface if we assume the sensor is pointed at Earth from a given distance and angle. The nominal resolutions of optical satellite instruments are given for the altitude of the satellite (which can change) when looking at nadir (straight down). That’s a best case. When looking to the side, or at rough terrain, the pixels can cover larger areas, inconsistent areas from one part of the image to another, and areas that are not square. Some of these problems get better and others get worse after orthorectification (see below).&lt;br /&gt;
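As a common first-order approximation (flat terrain, simple optics), the cross-track pixel size stretches by one factor of 1/cos(θ) for the longer slant range and another for the oblique projection onto the ground:&lt;br /&gt;

```python
import math

def cross_track_gsd(nadir_gsd_m, off_nadir_deg):
    """Approximate cross-track GSD at an off-nadir angle: slant range
    grows as 1/cos(theta), and the pixel's ground footprint stretches
    by another 1/cos(theta)."""
    c = math.cos(math.radians(off_nadir_deg))
    return nadir_gsd_m / (c * c)

# A nominal 50 cm sensor coarsens noticeably off-nadir:
for angle in (0, 15, 30, 45):
    print(angle, round(cross_track_gsd(0.5, angle), 2))
```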
&lt;br /&gt;
This is why it pays to be very cautious about measuring things based purely on pixel-counting, especially in imagery that’s been through some proprietary or undocumented processing pipeline. It’s more reliable to (1) have a very clear sense of what scale distortions are likely present in the image, and (2) reference measurements to objects of safely assumed dimensions.&lt;br /&gt;
&lt;br /&gt;
An old-school IMINT way to measure what spatial resolution means in practice is [https://irp.fas.org/imint/niirs.htm the National Imagery Interpretability Rating Scale (NIIRS)].&lt;br /&gt;
&lt;br /&gt;
An often overlooked consideration in spatial resolution is that pixel area – the square of pixel side length – is what matters most. (We’ll assume square pixels for this discussion.) If you consider a square meter of ground, you can envision it covered by exactly 1 pixel at 1 m GSD. At “twice” that GSD, 50 cm, it’s covered by 4 pixels – but 4 is not twice 1. At 25 cm GSD, which sounds like 4× the resolution, it’s covered by 16 pixels, which is far more than 4× as clear. Perceived sharpness, information in a technical sense, and (most importantly) the practical ability to interpret fine details go up in proportion to pixel count, not as the inverse of pixel edge length. In other words, 10 m imagery is 9× as clear as 30 m imagery, not 3×, all else being equal.&lt;br /&gt;
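The square-law arithmetic is worth internalizing:&lt;br /&gt;

```python
def pixels_per_area(gsd_m, side_m=100):
    """Pixels covering a side_m x side_m square of ground at a given GSD."""
    return (side_m / gsd_m) ** 2

# Pixel count scales with the square of the GSD ratio, so "3x the
# resolution" (30 m down to 10 m) means 9x the pixels.
ratio = round(pixels_per_area(10) / pixels_per_area(30))
print(ratio)  # 9
```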
&lt;br /&gt;
=== Spectral ===&lt;br /&gt;
&lt;br /&gt;
Spectral resolution is the ability to distinguish different frequencies (wavelengths) of light or other energy. We often measure it as a number of bands, where bands are like the R, G, and B channels in everyday color imagery. Grayscale imagery has 1 band. RGB imagery has 3. RGB + near infrared (a common combination) has 4. Multispectral sensors on more advanced satellites often have about half a dozen to a dozen bands, typically covering the visible range and then parts of the near to moderate infrared spectrum.&lt;br /&gt;
&lt;br /&gt;
We often measure into the infrared (IR) for three main reasons:&lt;br /&gt;
&lt;br /&gt;
# Infrared light is scattered by the atmosphere less than visible light, and especially less than blue light. This allows for more clarity and contrast – basically, better radiometric resolution (see below). Another way of saying this is that IR light cuts through haze.&lt;br /&gt;
&lt;br /&gt;
# Healthy plants strongly reflect near infrared (NIR) light. If we could see only slightly deeper shades of red, we’d see trees and grass glowing hot pink. This means infrared is useful for vegetation monitoring (for example, with [https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index NDVI]), which is useful for agriculture but also for anything that affects plants. You can use infrared to spot subtle tracks and traces on vegetation that might be invisible in ordinary imagery. (For example, you might be able to detect a road under a forest canopy by noting that a line of trees is thriving ''slightly'' less than last year.)&lt;br /&gt;
&lt;br /&gt;
# Things that are camouflaged in visible light, deliberately or not, are often easily distinguishable in infrared. Specifically, green paint tends to absorb IR (unlike plants) and stand out like a sore thumb. Since everyone knows this now, sophisticated actors no longer assume that you can hide a tank (for example) by painting it green, but you can still find things in infrared that you wouldn’t have in visible. You see more stuff when you have more frequencies available.&lt;br /&gt;
&lt;br /&gt;
For these reasons, and others as well, optical satellites have always been biased toward the IR side of the spectrum.&lt;br /&gt;
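As an example of inter-band math, the NDVI mentioned above is a simple normalized ratio of the NIR and red bands (the reflectance values below are illustrative, not from any real scene):&lt;br /&gt;

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red),
    ranging from -1 to 1. Healthy vegetation reflects strongly in NIR,
    pushing NDVI toward 1."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

print(round(float(ndvi(0.45, 0.05)), 2))  # 0.8  (vigorous vegetation)
print(round(float(ndvi(0.20, 0.18)), 2))  # 0.05 (bare soil-like)
```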
&lt;br /&gt;
Many optical sensors have one spatially sharp band with low spectral resolution, typically covering the visible range and some infrared, plus multiple bands that are spectrally sharp but spatially coarse. These are called the panchromatic (or pan) band and, collectively, the multispectral bands. They are merged for visualization in a process called pansharpening (see below). Sentinel-2, for example, does not have a pan band, but it collects different bands at different spatial resolutions, roughly in proportion to their assumed importance – visible and NIR are 10 m, some other IR bands are 20 m, and then there are some “bonus” atmospheric bands at only 60 m.&lt;br /&gt;
&lt;br /&gt;
Sensors that focus specifically on spectral resolution (sometimes with hundreds of bands) are called hyperspectral.&lt;br /&gt;
&lt;br /&gt;
Here we’ve used optical and infrared wavelengths as examples, but the basic principles are similar for, e.g., radio frequency bands. In general, for any kind of observation, multiple spectral bands help resolve ambiguities in the scene and open up useful avenues for inter-band comparison.&lt;br /&gt;
&lt;br /&gt;
=== Temporal ===&lt;br /&gt;
&lt;br /&gt;
Temporal resolution is resolution in time. This is also called revisit time or cadence. As mentioned above, temporal resolution for medium-resolution open data survey-style satellites (Landsat 8 and 9, Sentinel-2A and 2B, Sentinel-1A, and others) is typically around two weeks per satellite or one week per constellation. For weather satellites (with very low spatial resolution) it can be as quick as 30 seconds in certain cases. PlanetScope and many low spatial resolution science satellites are approximately daily.&lt;br /&gt;
&lt;br /&gt;
High-res commercial satellite constellations are a special case, because, as we’ve seen, their collections are based on tasking. This means that if there’s some point that they never have a reason to collect, their actual revisit time might be infinite. If there’s a major geopolitical crisis and every possible image is taken, even from extreme angles, it might be more often than once a day. Realistically, over moderately populated areas of no special interest, it might be once or twice a year; in deserts, it might be multiple years.&lt;br /&gt;
&lt;br /&gt;
=== Radiometric ===&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution is often overlooked, but it’s especially interesting to OSINT. It’s essentially bit depth: the number of levels of light (or other energy) that the sensor can distinguish in a given band. Older or cheaper satellites might have a radiometric resolution of 8 or 10 bits; newer and better ones are typically 12 to 14.&lt;br /&gt;
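Bit depth translates directly into the number of distinguishable levels per band:&lt;br /&gt;

```python
# Levels per band at common radiometric resolutions: 2 ** bits.
for bits in (8, 10, 12, 14):
    print(bits, 2 ** bits)  # 8 bits: 256 levels ... 14 bits: 16384 levels
```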
&lt;br /&gt;
High bit depth opens up many possibilities – for example:&lt;br /&gt;
&lt;br /&gt;
* You can stretch contrast to account for obscurations like haze, thin clouds, and smoke.&lt;br /&gt;
* You can stretch contrast to find extremely faint traces on near-homogeneous backgrounds: wakes on water surfaces, paths on snowfields, offroading by light vehicles. Initial testing suggests Landsat 9 OLI (which has excellent radiometric resolution) can pick up the tracks of single trucks in the Sahara, despite the tracks being made of sand on sand and much smaller than a single pixel of spatial resolution. It can also pick up bright city lights at night.&lt;br /&gt;
* Band math, such as calculating band ratios or distances in spectral angle, gets more stable and accurate.&lt;br /&gt;
&lt;br /&gt;
In OSINT we usually can’t afford much imagery at the highest spatial resolutions. However, the excellent radiometric resolution of a lot of free data (since it was designed for science) gives us a side route into seeing things that someone hoped would not be noticed.&lt;br /&gt;
&lt;br /&gt;
Radiometric resolution can be increased at the cost of spectral resolution by averaging bands. Under idealizing assumptions, the standard deviation of the noise of an image average is 1/sqrt(n), where n is the number of input images with unit standard deviation noise. (In practice, noise will be positively correlated between the bands of most sensors, so you’ll fall at least somewhat short.)&lt;br /&gt;
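A quick simulation shows the idealized 1/sqrt(n) behavior (assuming independent, unit-variance noise, which real bands only approximate):&lt;br /&gt;

```python
import numpy as np

# Average n bands of independent unit-variance noise and measure how
# much the noise standard deviation shrinks.
rng = np.random.default_rng(0)
n = 16
bands = rng.standard_normal((n, 100_000))
avg = bands.mean(axis=0)
print(round(float(avg.std()), 3))  # close to 1 / sqrt(16) = 0.25
```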
&lt;br /&gt;
Another way to look at radiometric resolution is to think about the total signal-to-noise ratio, or SNR, of the image. Some of the noise is what we usually mean by noise – semi-random grainy or streaky false signals inserted into the image by sensor flaws, cosmic rays, and so on. But some of it will be quantization noise, a.k.a. rounding error: output imprecision due to the inability to represent all possible values of real data. This latter kind of noise is the problem that increases as bit depth goes down. (This is analogous to the idea of describing effective spatial resolution as a combination of the sampling resolution and the point spread function being sampled. But we’re getting off the main track here.)&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2197</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2197"/>
		<updated>2022-07-20T21:37:19Z</updated>

		<summary type="html">&lt;p&gt;Vruba: Adding material on orbits and pointing&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;br /&gt;
&lt;br /&gt;
== Orbits and pointing ==&lt;br /&gt;
&lt;br /&gt;
As an example of a typical optical Earth observation orbit, let’s take [https://en.wikipedia.org/wiki/Landsat_9 Landsat 9’s parameters from Wikipedia]:&lt;br /&gt;
&lt;br /&gt;
* '''Regime: Sun-synchronous orbit.''' This means the orbit is designed to always pass overhead at about the same local solar time. Put another way, any two Landsat 9 images of a given spot at a given time of year will have the same angle of sunlight on the surface, and the same angle between the surface and the sensor. Specifically, it always crosses the equator on its southbound half-orbit at 10:00 (and, therefore, on its northbound half-orbit at 22:00). This mid-morning window is the sweet spot for most optical imaging purposes. In most climates where cumulus clouds are common, they generally form around midday [https://www.researchgate.net/figure/A-schematic-overview-is-presented-of-the-atmospheric-boundary-layer-and-its-diurnal_fig1_335027457 as the mixed layer rises]. It’s also claimed that this preference is a legacy of Cold War IMINT analysts wanting shadows to estimate structure heights. (If you image around noon, shadows in the tropics are nearly vertical. This creates depth perception problems, like those you get walking through brush with a headlamp instead of a hand-held flashlight. Citation needed, though.) Virtually all satellite imagery that you see on commercial maps has shadows that point west and away from the equator – in fact, as of 2022, this is so consistent that if you see a shadow pointing a different direction, it’s a good hint that the imagery is actually aerial (taken from a plane/UAV/balloon inside the atmosphere), not satellite.&lt;br /&gt;
&lt;br /&gt;
* '''Altitude: 705 km (438 mi).''' This is basically chosen to be as close to the surface as reasonably possible without grazing the atmosphere enough to perturb the orbit. It is substantially higher than the International Space Station, for example, but ISS has to constantly boost itself back up and that’s expensive. ([https://landsat.gsfc.nasa.gov/article/flying-high-landsat-8-sees-the-international-space-station/ ISS does occasionally underfly imaging satellites.]) For comparison, if Earth were the size of a 30 cm (12 inch) desktop globe, Landsat 9’s orbit would be at 17 mm (2/3 inch) – grazing your knuckles if you held the globe like a basketball. (Developing some intuition about this relative size can help understand the practicalities of things like off-nadir imaging.)&lt;br /&gt;
&lt;br /&gt;
* '''Inclination: 98.2°.''' This is the angle at which the satellite crosses the equator. It makes the orbit slightly retrograde, which is part of the equation for staying sun-synchronous. A consequence is that although orbits like this one are sometimes called polar in a loose sense, they never exactly cross the pole – Landsat 9 always misses the south pole on its left and the north pole on its right. This leaves two relatively small polar gaps that are never imaged.&lt;br /&gt;
&lt;br /&gt;
* '''Period: 99.0 minutes.''' This is the time it takes to complete one full orbit. It is another variable constrained by the requirements of sun-synchrony and the lowest reasonable altitude.&lt;br /&gt;
&lt;br /&gt;
* '''Repeat interval: 16 days.''' Every 16 days, Landsat 9 is in exactly the same spot relative to Earth (± very small deflections due to space weather, micrometeorites, tides, maneuvers to avoid debris, etc.) and takes an image that can be exactly co-registered with the previous cycle’s. Furthermore, pairs (or mini-constellations) like Landsat 8 and 9 or Sentinel-2A and 2B are in identical orbits but half-phased such that, from a data user’s perspective, they act like a single satellite with half the repeat time. (Specifically, 8 days for Landsat 8/9 and 5 days for Sentinel-2A/B.) More or less by definition, constellations are designed to fill in each other’s gaps; for example, the wide-swath, low-resolution MODIS instruments are on a pair of satellites with near-daily coverage, but one mid-morning and the other mid-afternoon.&lt;br /&gt;
&lt;br /&gt;
We used Landsat 9 here because it’s familiar to most people in the industry and is [https://landsat.gsfc.nasa.gov/about/the-worldwide-reference-system/ well documented]. Other imaging satellites will have different sets of capabilities and constraints. For example, the Landsat series is on-nadir (looking straight down) more than 99% of the time. It only rolls to the side to look away from its ground track for exceptional events, e.g., major volcanic eruptions. But a high-res commercial satellite, e.g., in the Airbus Pléiades or Maxar WorldView constellations, is constantly looking off-nadir. One of these satellites might point its optics in easily half a dozen directions on a given orbit, and would only very rarely happen to look straight down.&lt;br /&gt;
&lt;br /&gt;
Commercial users typically want images that are on-nadir and settle for images less than about 30° off-nadir. Around that angle, atmospheric and terrain correction starts getting hard, tall things are seen from the side as well as from above and block whatever’s behind them (an effect called relief displacement, or, loosely, layover), and the practical utility of imagery falls off for most purposes. But the area within 30° of nadir is quite large: about 400 km or 250 mi to each side of the ground track, [https://en.wikipedia.org/wiki/Special_right_triangle#30%C2%B0%E2%80%9360%C2%B0%E2%80%9390%C2%B0_triangle according to some light trig].&lt;br /&gt;
&lt;br /&gt;
High-resolution commercial satellites schedule collections in a process called tasking (as in “Tokyo is tasked for tomorrow”). This is in contrast to the survey mode collection used by Landsat, Sentinel, etc., which are essentially always collecting when they’re over land.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2196</id>
		<title>Satellite image data concepts</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Satellite_image_data_concepts&amp;diff=2196"/>
		<updated>2022-07-20T21:33:18Z</updated>

		<summary type="html">&lt;p&gt;Vruba: Initial commit&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an organized list of ideas useful for understanding image data from satellites. It is intended for people with some background or practical knowledge who want to fill in the gaps. Since many concepts are intrinsically cross-cutting, they can’t be forced into a single perfectly hierarchical taxonomy; the goal is merely to keep related ideas reasonably near each other.&lt;br /&gt;
&lt;br /&gt;
We might divide up the kinds of knowledge it’s useful to have when working with satellite data like this:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Layers of abstraction in remote sensing knowledge&lt;br /&gt;
|-&lt;br /&gt;
! Practice !! This page !! Theory&lt;br /&gt;
|-&lt;br /&gt;
| Learning how to answer questions by actually using data in Photoshop, QGIS, numpy, etc. || Learning technical vocabulary and concepts that apply across sources || Learning rigorously defined principles based in physics, geostatistics, etc.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All of these kinds of knowledge are important to an OSINT practitioner. This page only covers [https://en.wikipedia.org/wiki/Middle-range_theory_(sociology) the middle range] – ideas that are more abstract than what you can learn from the pixels themselves, but less abstract than what you would get in a higher-level college course.&lt;br /&gt;
&lt;br /&gt;
Within those bounds, the organizational arc here is broadly from the more abstract (orbits) through the relatively concrete (how sensors work) to the practical (what a geotiff is).&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Free_remote_sensing_data_sources&amp;diff=2195</id>
		<title>Free remote sensing data sources</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Free_remote_sensing_data_sources&amp;diff=2195"/>
		<updated>2022-07-16T18:11:33Z</updated>

		<summary type="html">&lt;p&gt;Vruba: Typo fix&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page lists some common sources and ways to access publicly-available [[Remote Sensing|remote sensing]] data.&lt;br /&gt;
&lt;br /&gt;
== Directories ==&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/nickrsan/awesome-public-datasets/ Awesome Public Datasets] from Nick Santos. Goes well beyond remote sensing, but there’s much of OSINT interest.&lt;br /&gt;
* [https://docs.google.com/spreadsheets/d/1oFY_TX5QRFyAAu-nxeClnOFB1epSlSDWEHoMalvv0Qs/edit#gid=0 Sources of Satellite Data].&lt;br /&gt;
* NOAA NCEI’s [https://www.ncei.noaa.gov/maps-and-geospatial-products Maps and Geospatial Products].&lt;br /&gt;
* [https://opendatainception.io/ Open Data Inception].&lt;br /&gt;
&lt;br /&gt;
''Feel free to take items from these sources and expand them below.''&lt;br /&gt;
&lt;br /&gt;
== EarthExplorer ==&lt;br /&gt;
&lt;br /&gt;
[https://earthexplorer.usgs.gov/ EarthExplorer] is a multi-source query and download tool operated by the US Geological Survey. It gives access to most US federally produced optical remote sensing data from aerial sources and from satellites with less than daily revisit. As a rule of thumb, it has more archival and high to medium resolution data, while NASA’s Worldview has more realtime and low resolution data.&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public with a (free) [https://urs.earthdata.nasa.gov/ Earthdata account].&lt;br /&gt;
&lt;br /&gt;
=== Example workflow ===&lt;br /&gt;
&lt;br /&gt;
# Create an Earthdata account and log in.&lt;br /&gt;
# In the Search Criteria tab, drop a pin on your site of interest.&lt;br /&gt;
# Select something in the Data Sets tab, for example Landsat → Landsat Collection 2 Level-1 → Landsat 8-9 OLI/TIRS C2 L1.&lt;br /&gt;
# Click the Results tab. In the listing there, next to a scene that you like, click the footprint icon to see it previewed at low resolution on the map, or click the download icon next to it.&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
* '''The whole Landsat archive''': near-global medium resolution multispectral imagery back to 1972.&lt;br /&gt;
* '''Sentinel-2''' access: while it’s not the best S2 browser, it’s handy to have available for comparison to Landsat, etc.&lt;br /&gt;
* ISRO '''Resourcesat''' data – rarely the best available data in terms of resolution, but often overlooked, and sometimes able to fill a crucial gap in better-known sources.&lt;br /&gt;
* Excellent aerial (high resolution) coverage of the continental US, for example with NAIP.&lt;br /&gt;
* A scattershot, relatively small, yet occasionally useful collection of commercial imagery licensed for open redistribution. For example, coverage of the Korean Peninsula in the mid-2000s with [https://www.usgs.gov/centers/eros/science/usgs-eros-archive-commercial-satellites-orbview-3 OrbView-3 data].&lt;br /&gt;
&lt;br /&gt;
=== Limits ===&lt;br /&gt;
&lt;br /&gt;
* Most of the data is US-only – it can be tedious to wade through if you’re looking for coverage elsewhere.&lt;br /&gt;
* Download speeds are often low (a few MiB/second). If you’re fetching large datasets, it’s often best to use EE as a browse tool to find image IDs, then pull them from a faster endpoint (for example, Landsat on AWS).&lt;br /&gt;
* Frequently down for maintenance: typically several days/year.&lt;br /&gt;
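&lt;br /&gt;
For the faster-endpoint route, the cloud location of a Landsat Collection 2 scene can usually be derived from its scene ID alone. A minimal sketch in Python – the bucket name and key layout are assumptions based on the usgs-landsat Open Data bucket (which is requester-pays), so verify them before relying on this:&lt;br /&gt;

```python
# Sketch: turn an EarthExplorer scene ID into a Landsat Collection 2
# Level-1 S3 prefix. The bucket name and key layout are assumptions
# (the usgs-landsat bucket is requester-pays); double-check both.
def landsat_c2_s3_prefix(scene_id):
    # Example ID: LC08_L1TP_047027_20220716_20220726_02_T1
    parts = scene_id.split("_")
    pathrow, acq_date = parts[2], parts[3]
    path, row = pathrow[:3], pathrow[3:]
    year = acq_date[:4]
    return ("s3://usgs-landsat/collection02/level-1/standard/oli-tirs/"
            f"{year}/{path}/{row}/{scene_id}/")

print(landsat_c2_s3_prefix("LC08_L1TP_047027_20220716_20220726_02_T1"))
```

Individual band files (e.g. a _B4.TIF red band) then live under that prefix and can be fetched with any S3 client that supports requester-pays buckets.&lt;br /&gt;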
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
* For advanced search queries, use the Additional Criteria tab, which draws on whatever metadata each dataset provides. For example, you might construct a query like “Landsat scenes that are less than 50% cloudy and from either June or July of any year between 1990 and 2010”.&lt;br /&gt;
* In the nested directory in the Data Sets tab, click the “i” icon to get a dataset description. Click the map icon to display the dataset’s footprint on the map, so you can tell whether it covers your general region of interest (though this may take a minute for a dataset with a complex footprint).&lt;br /&gt;
&lt;br /&gt;
== Sentinel Hub ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sentinel-hub.com/ Sentinel Hub] is a multi-source query and download tool owned and operated by Sinergise Laboratory for geographical information systems, Ltd.&lt;br /&gt;
&lt;br /&gt;
It allows access to the most recent data from the European Space Agency's Sentinel-1, Sentinel-2, Sentinel-3, and Sentinel-5P satellites, as well as Landsat.&lt;br /&gt;
&lt;br /&gt;
[[File:Sentinel Hub EO - Comparison Tool.png|alt=Screenshot of Sentinel Hub EO Browser comparison slider tool, using Sentinel-1 SAR and Sentinel-2 optical imagery.|thumb|Screenshot of Sentinel Hub EO Browser comparison slider tool, using Sentinel-1 SAR and Sentinel-2 optical imagery.]]&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public; some features, such as the measuring tool and the time-lapse utility, require a (free) [https://services.sentinel-hub.com/oauth/subscription?param_domain_id=2&amp;amp;param_redirect_uri=https://apps.sentinel-hub.com/eo-browser/oauthCallback.html&amp;amp;param_state=&amp;amp;param_scope=SH%20EOBrowser&amp;amp;param_client_id=1febe974-ca4f-44c1-9fc8-bafbd3bb4abd&amp;amp;domainId=2 Sentinel Hub account].&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
[https://apps.sentinel-hub.com/eo-browser/ Sentinel Hub EO Browser] offers two capabilities that allow for quick image comparison – an excellent way of detecting changes:&lt;br /&gt;
&lt;br /&gt;
* A &amp;quot;Compare&amp;quot; slider tool that allows quick, easy comparison of images taken in different modes and/or by different satellites, e.g. comparing a Sentinel-1 SAR image with Sentinel-2 optical imagery.&lt;br /&gt;
* With a free account, the ability to create animated time lapses spanning days, weeks, or even years.&lt;br /&gt;
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
When viewing Sentinel-1 SAR imagery, the EO Browser defaults to '''HH - decibel gamma0''' mode. While useful in many contexts, it is subject to more interference from atmospheric conditions such as rain and water vapor.&lt;br /&gt;
&lt;br /&gt;
Selecting '''HV - linear gamma0''' instead greatly mitigates this noise, making it the go-to mode for detecting vessels at sea that would otherwise be invisible in '''HH - decibel gamma0'''.&lt;br /&gt;
&lt;br /&gt;
== Worldview ==&lt;br /&gt;
&lt;br /&gt;
[https://worldview.earthdata.nasa.gov/ Worldview] is a multi-source visualization, query, and download tool operated by NASA EOSDIS. (It should not be confused with World'''V'''iew, a series of commercial high-resolution satellites from Maxar.) It gives access to most US federally produced remote sensing data from polar satellites with daily revisits. In general these are low-resolution sensors (on the order of 1 km/px) that are focused on topics related to the weather, climate, and environment, for example creating fire maps.&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public; some features may require a (free) [https://urs.earthdata.nasa.gov/ Earthdata account].&lt;br /&gt;
&lt;br /&gt;
=== Example workflow ===&lt;br /&gt;
&lt;br /&gt;
# Click the orange + Add Layers button on the left sidebar.&lt;br /&gt;
# Navigate to Air Quality → Fires and Thermal Anomalies and click it.&lt;br /&gt;
# Click the checkbox by Aqua and Terra/MODIS → Fires and Thermal Anomalies (Day and Night).&lt;br /&gt;
# Close the dataset chooser overlay (returning to the map view).&lt;br /&gt;
# Move the date playhead (arrow) at the bottom of the window to yesterday or before (to allow time for data processing).&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
* “Earth at Night” data: Suomi NPP/VIIRS Day/Night Band and its derivatives, which are low-res but allow quick insight into events like power outages, gas field operations, etc.&lt;br /&gt;
* Some access to geostationary imagery.&lt;br /&gt;
&lt;br /&gt;
=== Limits ===&lt;br /&gt;
&lt;br /&gt;
* Although it has significant download capability, it’s generally more oriented toward in-browser visualization.&lt;br /&gt;
* Very little data available through Worldview is sharper than about 250 m/pixel.&lt;br /&gt;
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
* The main polar imaging instruments, MODIS on the Aqua and Terra satellites and VIIRS on Suomi NPP and NOAA-20, provide a richly multispectral 4-frame animation over midday for basically everywhere that is not in polar night or the gap between swaths. Although it’s low-res, it can be valuable for, e.g., using cloud configurations to pin down times of images.&lt;br /&gt;
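&lt;br /&gt;
The imagery behind Worldview is served by NASA GIBS, which also exposes tiles directly over WMTS. A sketch of constructing a tile URL – the layer name, tile matrix set, and URL template here are assumptions recalled from the GIBS documentation, so double-check them there:&lt;br /&gt;

```python
# Sketch: build a NASA GIBS WMTS tile URL for a layer shown in Worldview.
# The URL template, layer name, and tile matrix set are assumptions;
# confirm against the GIBS documentation before use.
def gibs_tile_url(layer, date, zoom, row, col, matrix_set="250m", ext="jpg"):
    return ("https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/"
            f"{layer}/default/{date}/{matrix_set}/{zoom}/{row}/{col}.{ext}")

print(gibs_tile_url("MODIS_Terra_CorrectedReflectance_TrueColor",
                    "2022-07-15", 2, 1, 2))
```

This is handy for scripted downloads of the same daily global imagery you would otherwise browse interactively.&lt;br /&gt;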
&lt;br /&gt;
== Vertex ==&lt;br /&gt;
&lt;br /&gt;
[https://search.asf.alaska.edu/#/ Vertex] is a multi-source query, download, and limited analysis tool for SAR (synthetic aperture radar) data, operated by the University of Alaska and NASA. Most but not all significant public domain SAR datasets are available through it.&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public with a (free) [https://urs.earthdata.nasa.gov/ Earthdata account].&lt;br /&gt;
&lt;br /&gt;
=== Example workflow ===&lt;br /&gt;
&lt;br /&gt;
# Log in.&lt;br /&gt;
# Draw a rectangle over your area of interest.&lt;br /&gt;
# Click the UPDATE button in the top bar (accepting the defaults). After the search runs, a list browser will pop up from the bottom.&lt;br /&gt;
# Click a SAR scene with a preview image in the left column of the list browser.&lt;br /&gt;
# Click the larger preview image that appears in the middle column (your cursor will change to a magnifying glass). Pan and zoom at will, then close the preview viewer.&lt;br /&gt;
# In the right column, click the download icon (the cloud with an arrow) next to L1 Detected High-Res Dual-Pol (GRD-HD).&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
* Sentinel-1 data is the backbone of Vertex’s datasets: dual-polarization, lowish-resolution, land-oriented, survey-mode SAR data.&lt;br /&gt;
* SeaSat data! Coverage is [https://asf.alaska.edu/data-sets/sar-data-sets/seasat/seasat-swath-coverage-maps/ very sparse], but it’s of historical significance, and if you ever want to know what your AOI looked like in L-band SAR in 1978, good luck finding a better source.&lt;br /&gt;
* [https://docs.asf.alaska.edu/api/basics/ Good API] access.&lt;br /&gt;
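&lt;br /&gt;
The API can be exercised without any client library by building a query URL by hand. A sketch – the endpoint and parameter names are assumptions based on the ASF API documentation, so confirm them there before use:&lt;br /&gt;

```python
import urllib.parse

# Sketch: build an ASF Search API query for Sentinel-1 GRD-HD scenes over
# a bounding box. The endpoint and parameter names are assumptions; check
# the ASF API documentation.
def asf_search_url(bbox, platform="Sentinel-1", product="GRD_HD"):
    params = {
        "platform": platform,
        "processingLevel": product,
        # lon_min, lat_min, lon_max, lat_max
        "bbox": ",".join(str(c) for c in bbox),
        "output": "geojson",
    }
    return ("https://api.daac.asf.alaska.edu/services/search/param?"
            + urllib.parse.urlencode(params))

print(asf_search_url((14.2, 40.6, 14.5, 40.9)))
```

Fetching that URL (with your Earthdata credentials where required) returns scene metadata you can feed into a bulk download step.&lt;br /&gt;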
&lt;br /&gt;
=== Limits ===&lt;br /&gt;
&lt;br /&gt;
* Direct download is fairly slow.&lt;br /&gt;
* Access to some datasets requires authorization (i.e., showing that it’s part of a credible research project and potentially agreeing to a no-redistribution license).&lt;br /&gt;
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
* If you’re on reasonably good broadband, the direct download option is not optimal. Instead, use the “shopping cart” feature. Add your items to the cart, then select “Metadata (metalink)” from the shopping cart’s Data Download menu. Use the .metalink file it provides with a tool like [https://aria2.github.io/ aria2]. Downloading this way is far faster; it will saturate a 100 MiB/s connection.&lt;br /&gt;
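&lt;br /&gt;
That workflow can be scripted end to end. A sketch of driving aria2 from Python – it assumes aria2c is installed and on PATH, and the file name is a placeholder:&lt;br /&gt;

```python
import subprocess

# Sketch: hand a Vertex .metalink file to aria2c. Assumes aria2c is
# installed and on PATH; scenes.metalink is a placeholder file name.
def build_aria2_cmd(metalink_path, max_conn=8, max_parallel=5):
    return [
        "aria2c",
        metalink_path,            # aria2 detects metalink input itself
        "-x", str(max_conn),      # max connections per server
        "-j", str(max_parallel),  # max concurrent downloads
        "-c",                     # resume partial downloads
    ]

cmd = build_aria2_cmd("scenes.metalink")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually download
```

Raising the per-server connection count is what recovers most of the speed lost to the slow direct-download path.&lt;br /&gt;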
[[Category:Radar]]&lt;br /&gt;
[[Category:SAR]]&lt;br /&gt;
[[Category:Guide]]&lt;br /&gt;
[[Category:List]]&lt;br /&gt;
[[Category:Remote Sensing]]&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Free_remote_sensing_data_sources&amp;diff=2154</id>
		<title>Free remote sensing data sources</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Free_remote_sensing_data_sources&amp;diff=2154"/>
		<updated>2022-07-16T04:47:22Z</updated>

		<summary type="html">&lt;p&gt;Vruba: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Directories ==&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/nickrsan/awesome-public-datasets/ Awesome Public Datasets] from Nick Santos. Goes well beyond remote sensing, but there’s much of OSINT interest.&lt;br /&gt;
* [https://docs.google.com/spreadsheets/d/1oFY_TX5QRFyAAu-nxeClnOFB1epSlSDWEHoMalvv0Qs/edit#gid=0 Sources of Satellite Data].&lt;br /&gt;
* NOAA NCEI’s [https://www.ncei.noaa.gov/maps-and-geospatial-products Maps and Geospatial Products].&lt;br /&gt;
* [https://opendatainception.io/ Open Data Inception].&lt;br /&gt;
&lt;br /&gt;
''Feel free to take items from these sources and expand them below.''&lt;br /&gt;
&lt;br /&gt;
== EarthExplorer ==&lt;br /&gt;
&lt;br /&gt;
[https://earthexplorer.usgs.gov/ EarthExplorer] is a multi-source query and download tool operated by the US Geological Survey. It gives access to most US federally produced optical remote sensing data from aerial sources and from satellites with less than daily revisit. As a rule of thumb, it has more archival and high to medium resolution data, while WorldView has more realtime and low resolution data.&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public with a (free) [https://urs.earthdata.nasa.gov/ Earthdata account].&lt;br /&gt;
&lt;br /&gt;
=== Example workflow ===&lt;br /&gt;
&lt;br /&gt;
# Create an Earthdata account and log in.&lt;br /&gt;
# In the Search Criteria tab, drop a pin on your site of interest.&lt;br /&gt;
# Select something in the Data Sets tab, for example Landsat → Landsat Collection 2 Level-1 → Landsat 8-9 OLI/TIRS C2 L1.&lt;br /&gt;
# Click the Results tab. In the listing there, next to a scene that you like, click the footprint icon to see it previewed at low resolution on the map, or click the download icon next to it.&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
* '''The whole Landsat archive''': near-global medium resolution multispectral imagery back to 1972.&lt;br /&gt;
* '''Sentinel-2''' access: while it’s not the best S2 browser, it’s handy to have available for comparison to Landsat, etc.&lt;br /&gt;
* ISRO '''Resourcesat''' data – rarely the best available data in terms of resolution, but often overlooked, and sometimes able to fill a crucial gap in better-known sources.&lt;br /&gt;
* Excellent aerial (high resolution) coverage of the continental US, for example with NAIP.&lt;br /&gt;
* A scattershot, relatively small, yet occasionally useful collection of commercial imagery licensed for open redistribution. For example, coverage of the Korean Peninsula in the mid-2000s with [https://www.usgs.gov/centers/eros/science/usgs-eros-archive-commercial-satellites-orbview-3 OrbView-3 data].&lt;br /&gt;
&lt;br /&gt;
=== Limits ===&lt;br /&gt;
&lt;br /&gt;
* Most data is US-only – can be tedious to wade through if you’re looking for something else.&lt;br /&gt;
* Download speeds are often low (a few MiB/second). If you’re fetching large datasets, it’s often best to use EE as a browse tool to find image IDs, then pull them from a faster endpoint (for example, Landsat on AWS).&lt;br /&gt;
* Frequently down for maintenance: typically several days/year.&lt;br /&gt;
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
* For advanced search queries, use the Additional Criteria, which uses metadata as available per dataset. For example, you might construct a query like “Landsat scenes that are less than 50% cloudy and from either June or July of any year between 1990 and 2010”.&lt;br /&gt;
* In the nested directory in the Data Sets tab, click the “i” icon to get a dataset description. Click the map icon to display the dataset’s footprint on the map, so you can tell whether it covers your general region of interest (though this may take a minute for a dataset with a complex footprint).&lt;br /&gt;
&lt;br /&gt;
== Sentinel Playground ==&lt;br /&gt;
&lt;br /&gt;
== Worldview ==&lt;br /&gt;
&lt;br /&gt;
[https://worldview.earthdata.nasa.gov/ Worldview] is a multi-source visualization, query, and download tool operated by NASA EOSDIS. (It should not be confused with World'''V'''iew, a series of commercial high-resolution satellites from Maxar.) It gives access to most US federally produced remote sensing data from polar satellites with daily revisits. In general these are low-resolution sensors (on the order of 1 km/px) that are focused on topics related to the weather, climate, and environment, for example creating fire maps.&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public; some features may require a (free) [https://urs.earthdata.nasa.gov/ Earthdata account].&lt;br /&gt;
&lt;br /&gt;
=== Example workflow ===&lt;br /&gt;
&lt;br /&gt;
# Click the orange + Add Layers button on the left sidebar.&lt;br /&gt;
# Navigate to Air Quality → Fires and Thermal Anomalies and click it.&lt;br /&gt;
# Click the checkbox by Aqua and Terra/MODIS → Fires and Thermal Anomalies (Day and Night).&lt;br /&gt;
# Close the dataset chooser overlay (returning to the map view).&lt;br /&gt;
# Move the date playhead (arrow) at the bottom of the window to yesterday or before (to allow time for data processing).&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
* “Earth at Night” data: Suomi NPP/VIIRS Day/Night Band and its derivatives, which are low-res but allow quick insight into events like power outages, gas field operations, etc.&lt;br /&gt;
* Some access to geostationary imagery.&lt;br /&gt;
&lt;br /&gt;
=== Limits ===&lt;br /&gt;
&lt;br /&gt;
* Although it has significant download capability, it’s generally more oriented toward in-browser visualization.&lt;br /&gt;
* Very little data available through Worldview is sharper than about 250 m/pixel.&lt;br /&gt;
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
* The main polar imaging instruments, MODIS on the Aqua and Terra satellites and VIIRS on Suomi NPP and NOAA-20, provide a richly multispectral 4-frame animation over midday for basically everywhere that is not in polar night or the gap between swaths. Although it’s low-res, it can be valuable for, e.g., using cloud configurations to pin down times of images.&lt;br /&gt;
&lt;br /&gt;
== Vertex ==&lt;br /&gt;
&lt;br /&gt;
[https://search.asf.alaska.edu/#/ Vertex] is a multi-source query, download, and limited analysis tool for SAR (synthetic aperture radar) data, operated by the University of Alaska and NASA. Most but not all significant public domain SAR datasets are available through it.&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public with a (free) [https://urs.earthdata.nasa.gov/ Earthdata account].&lt;br /&gt;
&lt;br /&gt;
=== Example workflow ===&lt;br /&gt;
&lt;br /&gt;
# Log in.&lt;br /&gt;
# Draw a rectangle over your area of interest.&lt;br /&gt;
# Click the UPDATE button in the top bar (accepting the defaults). After the search runs, a list browser will pop up from the bottom.&lt;br /&gt;
# Click a SAR scene with a preview image in the left column of the list browser.&lt;br /&gt;
# Click the larger preview image that appears in the middle column (your cursor will change to a magnifying glass). Pan and zoom at will, then close the preview viewer.&lt;br /&gt;
# In the right column, click the download icon (the cloud with an arrow) next to L1 Detected High-Res Dual-Pol (GRD-HD).&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
* Sentinel-1 data is the backbone of Vertex’s datasets: dual-polarization, lowish-resolution, land-oriented, survey-mode SAR data.&lt;br /&gt;
* SeaSat data! Coverage is [https://asf.alaska.edu/data-sets/sar-data-sets/seasat/seasat-swath-coverage-maps/ very sparse], but it’s of historical significance, and if you ever want to know what your AOI looked like in L-band SAR in 1978, good luck finding a better source.&lt;br /&gt;
* [https://docs.asf.alaska.edu/api/basics/ Good API] access.&lt;br /&gt;
&lt;br /&gt;
=== Limits ===&lt;br /&gt;
&lt;br /&gt;
* Direct download is fairly slow.&lt;br /&gt;
* Access to some datasets requires authorization (i.e., showing that it’s part of a credible research project and potentially agreeing to a no-redistribution license).&lt;br /&gt;
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
* If you’re on reasonably good broadband, the direct download option is not optimal. Instead, use the “shopping cart” feature. Add your items to the cart, then select “Metadata (metalink)” from the shopping cart’s Data Download menu. Use the .metalink file it provides with a tool like [https://aria2.github.io/ aria2]. Downloading this way is far faster; it will saturate a 100 MiB/s connection.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Free_remote_sensing_data_sources&amp;diff=2153</id>
		<title>Free remote sensing data sources</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Free_remote_sensing_data_sources&amp;diff=2153"/>
		<updated>2022-07-16T04:34:21Z</updated>

		<summary type="html">&lt;p&gt;Vruba: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== EarthExplorer ==&lt;br /&gt;
&lt;br /&gt;
[https://earthexplorer.usgs.gov/ EarthExplorer] is a multi-source query and download tool operated by the US Geological Survey. It gives access to most US federally produced optical remote sensing data from aerial sources and from satellites with less than daily revisit. As a rule of thumb, it has more archival and high to medium resolution data, while WorldView has more realtime and low resolution data.&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public with a (free) [https://urs.earthdata.nasa.gov/ Earthdata account].&lt;br /&gt;
&lt;br /&gt;
=== Example workflow ===&lt;br /&gt;
&lt;br /&gt;
# Create an Earthdata account and log in.&lt;br /&gt;
# In the Search Criteria tab, drop a pin on your site of interest.&lt;br /&gt;
# Select something in the Data Sets tab, for example Landsat → Landsat Collection 2 Level-1 → Landsat 8-9 OLI/TIRS C2 L1.&lt;br /&gt;
# Click the Results tab. In the listing there, next to a scene that you like, click the footprint icon to see it previewed at low resolution on the map, or click the download icon next to it.&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
* '''The whole Landsat archive''': near-global medium resolution multispectral imagery back to 1972.&lt;br /&gt;
* '''Sentinel-2''' access: while it’s not the best S2 browser, it’s handy to have available for comparison to Landsat, etc.&lt;br /&gt;
* ISRO '''Resourcesat''' data – rarely the best available data in terms of resolution, but often overlooked, and sometimes able to fill a crucial gap in better-known sources.&lt;br /&gt;
* Excellent aerial (high resolution) coverage of the continental US, for example with NAIP.&lt;br /&gt;
* A scattershot, relatively small, yet occasionally useful collection of commercial imagery licensed for open redistribution. For example, coverage of the Korean Peninsula in the mid-2000s with [https://www.usgs.gov/centers/eros/science/usgs-eros-archive-commercial-satellites-orbview-3 OrbView-3 data].&lt;br /&gt;
&lt;br /&gt;
=== Limits ===&lt;br /&gt;
&lt;br /&gt;
* Most data is US-only – can be tedious to wade through if you’re looking for something else.&lt;br /&gt;
* Download speeds are often low (a few MiB/second). If you’re fetching large datasets, it’s often best to use EE as a browse tool to find image IDs, then pull them from a faster endpoint (for example, Landsat on AWS).&lt;br /&gt;
* Frequently down for maintenance: typically several days/year.&lt;br /&gt;
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
* For advanced search queries, use the Additional Criteria, which uses metadata as available per dataset. For example, you might construct a query like “Landsat scenes that are less than 50% cloudy and from either June or July of any year between 1990 and 2010”.&lt;br /&gt;
* In the nested directory in the Data Sets tab, click the “i” icon to get a dataset description. Click the map icon to display the dataset’s footprint on the map, so you can tell whether it covers your general region of interest (though this may take a minute for a dataset with a complex footprint).&lt;br /&gt;
&lt;br /&gt;
== Sentinel Playground ==&lt;br /&gt;
&lt;br /&gt;
== Worldview ==&lt;br /&gt;
&lt;br /&gt;
[https://worldview.earthdata.nasa.gov/ Worldview] is a multi-source visualization, query, and download tool operated by NASA EOSDIS. (It should not be confused with World'''V'''iew, a series of commercial high-resolution satellites from Maxar.) It gives access to most US federally produced remote sensing data from polar satellites with daily revisits. In general these are low-resolution sensors (on the order of 1 km/px) that are focused on topics related to the weather, climate, and environment, for example creating fire maps.&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public; some features may require a (free) [https://urs.earthdata.nasa.gov/ Earthdata account].&lt;br /&gt;
&lt;br /&gt;
=== Example workflow ===&lt;br /&gt;
&lt;br /&gt;
# Click the orange + Add Layers button on the left sidebar.&lt;br /&gt;
# Navigate to Air Quality → Fires and Thermal Anomalies and click it.&lt;br /&gt;
# Click the checkbox by Aqua and Terra/MODIS → Fires and Thermal Anomalies (Day and Night).&lt;br /&gt;
# Close the dataset chooser overlay (returning to the map view).&lt;br /&gt;
# Move the date playhead (arrow) at the bottom of the window to yesterday or before (to allow time for data processing).&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
* “Earth at Night” data: Suomi NPP/VIIRS Day/Night Band and its derivatives, which are low-res but allow quick insight into events like power outages, gas field operations, etc.&lt;br /&gt;
* Some access to geostationary imagery.&lt;br /&gt;
&lt;br /&gt;
=== Limits ===&lt;br /&gt;
&lt;br /&gt;
* Although it has significant download capability, it’s generally more oriented toward in-browser visualization.&lt;br /&gt;
* Very little data available through Worldview is sharper than about 250 m/pixel.&lt;br /&gt;
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
* The main polar imaging instruments, MODIS on the Aqua and Terra satellites and VIIRS on Suomi NPP and NOAA-20, provide a richly multispectral 4-frame animation over midday for basically everywhere that is not in polar night or the gap between swaths. Although it’s low-res, it can be valuable for, e.g., using cloud configurations to pin down times of images.&lt;br /&gt;
&lt;br /&gt;
== Vertex ==&lt;br /&gt;
&lt;br /&gt;
[https://search.asf.alaska.edu/#/ Vertex] is a multi-source query, download, and limited analysis tool for SAR (synthetic aperture radar) data, operated by the University of Alaska and NASA. Most but not all significant public domain SAR datasets are available through it.&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public with a (free) [https://urs.earthdata.nasa.gov/ Earthdata account].&lt;br /&gt;
&lt;br /&gt;
=== Example workflow ===&lt;br /&gt;
&lt;br /&gt;
# Log in.&lt;br /&gt;
# Draw a rectangle over your area of interest.&lt;br /&gt;
# Click the UPDATE button in the top bar (accepting the defaults). After the search runs, a list browser will pop up from the bottom.&lt;br /&gt;
# Click a SAR scene with a preview image in the left column of the list browser.&lt;br /&gt;
# Click the larger preview image that appears in the middle column (your cursor will change to a magnifying glass). Pan and zoom at will, then close the preview viewer.&lt;br /&gt;
# In the right column, click the download icon (the cloud with an arrow) next to L1 Detected High-Res Dual-Pol (GRD-HD).&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
* Sentinel-1 data is the backbone of Vertex’s datasets: dual-polarization, lowish-resolution, land-oriented, survey-mode SAR data.&lt;br /&gt;
* SeaSat data! Coverage is [https://asf.alaska.edu/data-sets/sar-data-sets/seasat/seasat-swath-coverage-maps/ very sparse], but it’s of historical significance, and if you ever want to know what your AOI looked like in L-band SAR in 1978, good luck finding a better source.&lt;br /&gt;
* [https://docs.asf.alaska.edu/api/basics/ Good API] access.&lt;br /&gt;
&lt;br /&gt;
=== Limits ===&lt;br /&gt;
&lt;br /&gt;
* Direct download is fairly slow.&lt;br /&gt;
* Access to some datasets requires authorization (i.e., showing that it’s part of a credible research project and potentially agreeing to a no-redistribution license).&lt;br /&gt;
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
* If you’re on reasonably good broadband, the direct download option is not optimal. Instead, use the “shopping cart” feature. Add your items to the cart, then select “Metadata (metalink)” from the shopping cart’s Data Download menu. Use the .metalink file it provides with a tool like [https://aria2.github.io/ aria2]. Downloading this way is far faster; it will saturate a 100 MiB/s connection.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Free_remote_sensing_data_sources&amp;diff=2152</id>
		<title>Free remote sensing data sources</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Free_remote_sensing_data_sources&amp;diff=2152"/>
		<updated>2022-07-16T04:10:13Z</updated>

		<summary type="html">&lt;p&gt;Vruba: Link format for real this time&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== EarthExplorer ==&lt;br /&gt;
&lt;br /&gt;
[https://earthexplorer.usgs.gov/ EarthExplorer] is a multi-source query and download tool operated by the US Geological Survey. It gives access to most US federally produced optical remote sensing data from aerial sources and from satellites with less than daily revisit. As a rule of thumb, it has more archival and high to medium resolution data, while WorldView has more realtime and low resolution data.&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public with a (free) [https://urs.earthdata.nasa.gov/ Earthdata account].&lt;br /&gt;
&lt;br /&gt;
=== Example workflow ===&lt;br /&gt;
&lt;br /&gt;
# Create an Earthdata account and log in.&lt;br /&gt;
# In the Search Criteria tab, drop a pin on your site of interest.&lt;br /&gt;
# Select something in the Data Sets tab, for example Landsat → Landsat Collection 2 Level-1 → Landsat 8-9 OLI/TIRS C2 L1.&lt;br /&gt;
# Click the Results tab. In the listing there, next to a scene that you like, click the footprint icon to see it previewed at low resolution on the map, or click the download icon next to it.&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
* '''The whole Landsat archive''': near-global medium resolution multispectral imagery back to 1972.&lt;br /&gt;
* '''Sentinel-2''' access: while it’s not the best S2 browser, it’s handy to have available for comparison to Landsat, etc.&lt;br /&gt;
* ISRO '''Resourcesat''' data – rarely the best available data in terms of resolution, but often overlooked, and sometimes able to fill a crucial gap in better-known sources.&lt;br /&gt;
* Excellent aerial (high resolution) coverage of the continental US, for example with NAIP.&lt;br /&gt;
* A scattershot, relatively small, yet occasionally useful collection of commercial imagery licensed for open redistribution. For example, coverage of the Korean Peninsula in the mid-2000s with [https://www.usgs.gov/centers/eros/science/usgs-eros-archive-commercial-satellites-orbview-3 OrbView-3 data].&lt;br /&gt;
&lt;br /&gt;
=== Limits ===&lt;br /&gt;
&lt;br /&gt;
* Most data is US-only – can be tedious to wade through if you’re looking for something else.&lt;br /&gt;
* Download speeds are often low (a few MiB/second). If you’re fetching large datasets, it’s often best to use EE as a browse tool to find image IDs, then pull them from a faster endpoint (for example, Landsat on AWS).&lt;br /&gt;
* Frequently down for maintenance: typically several days/year.&lt;br /&gt;
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
* For advanced search queries, use the Additional Criteria, which uses metadata as available per dataset. For example, you might construct a query like “Landsat scenes that are less than 50% cloudy and from either June or July of any year between 1990 and 2010”.&lt;br /&gt;
* In the nested directory in the Data Sets tab, click the “i” icon to get a dataset description. Click the map icon to display the dataset’s footprint on the map, so you can tell whether it covers your general region of interest (though this may take a minute for a dataset with a complex footprint).&lt;br /&gt;
&lt;br /&gt;
== Sentinel Playground ==&lt;br /&gt;
&lt;br /&gt;
== Worldview ==&lt;br /&gt;
&lt;br /&gt;
[https://worldview.earthdata.nasa.gov/ Worldview] is a multi-source visualization, query, and download tool operated by NASA EOSDIS. (It should not be confused with World'''V'''iew, a series of commercial high-resolution satellites from Maxar.) It gives access to most US federally produced remote sensing data from polar satellites with daily revisits. In general these are low-resolution sensors (on the order of 1 km/px) that are focused on topics related to the weather, climate, and environment, for example creating fire maps.&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public; some features may require a (free) [https://urs.earthdata.nasa.gov/ Earthdata account].&lt;br /&gt;
&lt;br /&gt;
=== Example workflow ===&lt;br /&gt;
&lt;br /&gt;
# Click the orange + Add Layers button on the left sidebar.&lt;br /&gt;
# Navigate to Air Quality → Fires and Thermal Anomalies and click it.&lt;br /&gt;
# Click the checkbox by Aqua and Terra/MODIS → Fires and Thermal Anomalies (Day and Night).&lt;br /&gt;
# Close the dataset chooser overlay (returning to the map view).&lt;br /&gt;
# Move the date playhead (arrow) at the bottom of the window to yesterday or before (to allow time for data processing).&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
* “Earth at Night” data: Suomi NPP/VIIRS Day/Night Band and its derivatives, which are low-res but allow quick insight into events like power outages, gas field operations, etc.&lt;br /&gt;
* Some access to geostationary imagery.&lt;br /&gt;
&lt;br /&gt;
=== Limits ===&lt;br /&gt;
&lt;br /&gt;
* Although it has significant download capability, it’s generally more oriented toward in-browser visualization.&lt;br /&gt;
* Very little data available through Worldview is sharper than about 250 m/pixel.&lt;br /&gt;
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
* The main polar imaging instruments, MODIS on the Aqua and Terra satellites and VIIRS on Suomi NPP and NOAA-20, provide a richly multispectral 4-frame animation over midday for basically everywhere that is not in polar night or the gap between swaths. Although it’s low-res, it can be valuable for, e.g., using cloud configurations to pin down times of images.&lt;br /&gt;
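The layers Worldview displays are served by NASA GIBS, which also exposes them as WMTS tiles that can be fetched directly, without the browser UI. A minimal sketch, assuming the GIBS REST tile pattern and the MODIS_Terra_CorrectedReflectance_TrueColor layer name (both should be verified against current GIBS documentation):&lt;br /&gt;

```python
# Sketch: build a NASA GIBS WMTS tile URL for a layer visible in Worldview.
# The URL template, layer ID, and tile matrix set name are assumptions
# based on GIBS's documented REST pattern; confirm before scripting.

def gibs_tile_url(layer: str, date: str, matrix_set: str,
                  zoom: int, row: int, col: int, ext: str = "jpg") -> str:
    """Return the tile URL for a given layer, date, and tile address."""
    return ("https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/"
            f"{layer}/default/{date}/{matrix_set}/{zoom}/{row}/{col}.{ext}")

url = gibs_tile_url("MODIS_Terra_CorrectedReflectance_TrueColor",
                    "2022-07-15", "250m", zoom=2, row=1, col=2)
```

Because each date is a separate tile set, looping a date string over a few consecutive days reproduces the animation behavior described above.&lt;br /&gt;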
&lt;br /&gt;
== Vertex ==&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Free_remote_sensing_data_sources&amp;diff=2151</id>
		<title>Free remote sensing data sources</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Free_remote_sensing_data_sources&amp;diff=2151"/>
		<updated>2022-07-16T04:09:49Z</updated>

		<summary type="html">&lt;p&gt;Vruba: Link format&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== EarthExplorer ==&lt;br /&gt;
&lt;br /&gt;
[https://earthexplorer.usgs.gov/ EarthExplorer] is a multi-source query and download tool operated by the US Geological Survey. It gives access to most US federally produced optical remote sensing data from aerial sources and from satellites with less than daily revisit. As a rule of thumb, it has more archival and high- to medium-resolution data, while Worldview has more real-time and low-resolution data.&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public with a (free) [https://urs.earthdata.nasa.gov/ Earthdata account].&lt;br /&gt;
&lt;br /&gt;
=== Example workflow ===&lt;br /&gt;
&lt;br /&gt;
# Create an Earthdata account and log in.&lt;br /&gt;
# In the Search Criteria tab, drop a pin on your site of interest.&lt;br /&gt;
# Select something in the Data Sets tab, for example Landsat → Landsat Collection 2 Level-1 → Landsat 8-9 OLI/TIRS C2 L1.&lt;br /&gt;
# Click the Results tab. In the listing there, next to a scene that you like, click the footprint icon to see it previewed at low resolution on the map, or click the download icon next to it.&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
* '''The whole Landsat archive''': near-global medium resolution multispectral imagery back to 1972.&lt;br /&gt;
* '''Sentinel-2''' access: while it’s not the best S2 browser, it’s handy to have available for comparison to Landsat, etc.&lt;br /&gt;
* ISRO '''Resourcesat''' data – rarely the best available data in terms of resolution, but often overlooked, and sometimes able to fill a crucial gap in better-known sources.&lt;br /&gt;
* Excellent aerial (high resolution) coverage of the continental US, for example with NAIP.&lt;br /&gt;
* A scattershot, relatively small, yet occasionally useful collection of commercial imagery licensed for open redistribution. For example, coverage of the Korean Peninsula in the mid-2000s with [https://www.usgs.gov/centers/eros/science/usgs-eros-archive-commercial-satellites-orbview-3 OrbView-3 data].&lt;br /&gt;
&lt;br /&gt;
=== Limits ===&lt;br /&gt;
&lt;br /&gt;
* Most data is US-only – can be tedious to wade through if you’re looking for something else.&lt;br /&gt;
* Download speeds are often low (a few MiB/second). If you’re fetching large datasets, it’s often best to use EE as a browse tool to find image IDs, then pull them from a faster endpoint (for example, Landsat on AWS).&lt;br /&gt;
* Frequently down for maintenance: typically several days/year.&lt;br /&gt;
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
* For advanced searches, use the Additional Criteria tab, which searches dataset-specific metadata where available. For example, you might construct a query like “Landsat scenes that are less than 50% cloudy and from either June or July of any year between 1990 and 2010”.&lt;br /&gt;
* In the nested directory in the Data Sets tab, click the “i” icon to get a dataset description. Click the map icon to display the dataset’s footprint on the map, so you can tell whether it covers your general region of interest (though this may take a minute for a dataset with a complex footprint).&lt;br /&gt;
&lt;br /&gt;
== Sentinel Playground ==&lt;br /&gt;
&lt;br /&gt;
== Worldview ==&lt;br /&gt;
&lt;br /&gt;
[https://worldview.earthdata.nasa.gov/ Worldview] is a multi-source visualization, query, and download tool operated by NASA EOSDIS. (It should not be confused with World'''V'''iew, a series of commercial high-resolution satellites from Maxar.) It gives access to most US federally produced remote sensing data from polar satellites with daily revisits. In general these are low-resolution sensors (on the order of 1 km/px) that are focused on topics related to the weather, climate, and environment, for example creating fire maps.&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public; some features may require a (free) [https://urs.earthdata.nasa.gov/ Earthdata account].&lt;br /&gt;
&lt;br /&gt;
=== Example workflow ===&lt;br /&gt;
&lt;br /&gt;
# Click the orange + Add Layers button on the left sidebar.&lt;br /&gt;
# Navigate to Air Quality → Fires and Thermal Anomalies and click it.&lt;br /&gt;
# Click the checkbox by Aqua and Terra/MODIS → Fires and Thermal Anomalies (Day and Night).&lt;br /&gt;
# Close the dataset chooser overlay (returning to the map view).&lt;br /&gt;
# Move the date playhead (arrow) at the bottom of the window to yesterday or before (to allow time for data processing).&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
* “Earth at Night” data: Suomi NPP/VIIRS Day/Night Band and its derivatives, which are low-res but allow quick insight into events like power outages, gas field operations, etc.&lt;br /&gt;
* Some access to geostationary imagery.&lt;br /&gt;
&lt;br /&gt;
=== Limits ===&lt;br /&gt;
&lt;br /&gt;
* Although it has significant download capability, it’s generally more oriented toward in-browser visualization.&lt;br /&gt;
* Very little data available through Worldview is sharper than about 250 m/pixel.&lt;br /&gt;
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
* The main polar imaging instruments, MODIS on the Aqua and Terra satellites and VIIRS on Suomi NPP and NOAA-20, provide a richly multispectral 4-frame animation over midday for basically everywhere that is not in polar night or the gap between swaths. Although it’s low-res, it can be valuable for, e.g., using cloud configurations to pin down times of images.&lt;br /&gt;
&lt;br /&gt;
== Vertex ==&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Free_remote_sensing_data_sources&amp;diff=2150</id>
		<title>Free remote sensing data sources</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Free_remote_sensing_data_sources&amp;diff=2150"/>
		<updated>2022-07-16T04:07:05Z</updated>

		<summary type="html">&lt;p&gt;Vruba: Formatting and alphabetization&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== EarthExplorer ==&lt;br /&gt;
&lt;br /&gt;
[https://earthexplorer.usgs.gov/ EarthExplorer] is a multi-source query and download tool operated by the US Geological Survey. It gives access to most US federally produced optical remote sensing data from aerial sources and from satellites with less than daily revisit. As a rule of thumb, it has more archival and high- to medium-resolution data, while Worldview has more real-time and low-resolution data.&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public with a (free) [https://urs.earthdata.nasa.gov/ Earthdata account].&lt;br /&gt;
&lt;br /&gt;
=== Example workflow ===&lt;br /&gt;
&lt;br /&gt;
# Create an Earthdata account and log in.&lt;br /&gt;
# In the Search Criteria tab, drop a pin on your site of interest.&lt;br /&gt;
# Select something in the Data Sets tab, for example Landsat → Landsat Collection 2 Level-1 → Landsat 8-9 OLI/TIRS C2 L1.&lt;br /&gt;
# Click the Results tab. In the listing there, next to a scene that you like, click the footprint icon to see it previewed at low resolution on the map, or click the download icon next to it.&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
* '''The whole Landsat archive''': near-global medium resolution multispectral imagery back to 1972.&lt;br /&gt;
* '''Sentinel-2''' access: while it’s not the best S2 browser, it’s handy to have available for comparison to Landsat, etc.&lt;br /&gt;
* ISRO '''Resourcesat''' data – rarely the best available data in terms of resolution, but often overlooked, and sometimes able to fill a crucial gap in better-known sources.&lt;br /&gt;
* Excellent aerial (high resolution) coverage of the continental US, for example with NAIP.&lt;br /&gt;
* A scattershot, relatively small, yet occasionally useful collection of commercial imagery licensed for open redistribution. For example, coverage of the Korean Peninsula in the mid-2000s with [OrbView-3 data](https://www.usgs.gov/centers/eros/science/usgs-eros-archive-commercial-satellites-orbview-3).&lt;br /&gt;
&lt;br /&gt;
=== Limits ===&lt;br /&gt;
&lt;br /&gt;
* Most data is US-only – can be tedious to wade through if you’re looking for something else.&lt;br /&gt;
* Download speeds are often low (a few MiB/second). If you’re fetching large datasets, it’s often best to use EE as a browse tool to find image IDs, then pull them from a faster endpoint (for example, Landsat on AWS).&lt;br /&gt;
* Frequently down for maintenance: typically several days/year.&lt;br /&gt;
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
* For advanced searches, use the Additional Criteria tab, which searches dataset-specific metadata where available. For example, you might construct a query like “Landsat scenes that are less than 50% cloudy and from either June or July of any year between 1990 and 2010”.&lt;br /&gt;
* In the nested directory in the Data Sets tab, click the “i” icon to get a dataset description. Click the map icon to display the dataset’s footprint on the map, so you can tell whether it covers your general region of interest (though this may take a minute for a dataset with a complex footprint).&lt;br /&gt;
&lt;br /&gt;
== Sentinel Playground ==&lt;br /&gt;
&lt;br /&gt;
== Worldview ==&lt;br /&gt;
&lt;br /&gt;
[https://worldview.earthdata.nasa.gov/ Worldview] is a multi-source visualization, query, and download tool operated by NASA EOSDIS. (It should not be confused with World'''V'''iew, a series of commercial high-resolution satellites from Maxar.) It gives access to most US federally produced remote sensing data from polar satellites with daily revisits. In general these are low-resolution sensors (on the order of 1 km/px) that are focused on topics related to the weather, climate, and environment, for example creating fire maps.&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public; some features may require a (free) [https://urs.earthdata.nasa.gov/ Earthdata account].&lt;br /&gt;
&lt;br /&gt;
=== Example workflow ===&lt;br /&gt;
&lt;br /&gt;
# Click the orange + Add Layers button on the left sidebar.&lt;br /&gt;
# Navigate to Air Quality → Fires and Thermal Anomalies and click it.&lt;br /&gt;
# Click the checkbox by Aqua and Terra/MODIS → Fires and Thermal Anomalies (Day and Night).&lt;br /&gt;
# Close the dataset chooser overlay (returning to the map view).&lt;br /&gt;
# Move the date playhead (arrow) at the bottom of the window to yesterday or before (to allow time for data processing).&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
* “Earth at Night” data: Suomi NPP/VIIRS Day/Night Band and its derivatives, which are low-res but allow quick insight into events like power outages, gas field operations, etc.&lt;br /&gt;
* Some access to geostationary imagery.&lt;br /&gt;
&lt;br /&gt;
=== Limits ===&lt;br /&gt;
&lt;br /&gt;
* Although it has significant download capability, it’s generally more oriented toward in-browser visualization.&lt;br /&gt;
* Very little data available through Worldview is sharper than about 250 m/pixel.&lt;br /&gt;
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
* The main polar imaging instruments, MODIS on the Aqua and Terra satellites and VIIRS on Suomi NPP and NOAA-20, provide a richly multispectral 4-frame animation over midday for basically everywhere that is not in polar night or the gap between swaths. Although it’s low-res, it can be valuable for, e.g., using cloud configurations to pin down times of images.&lt;br /&gt;
&lt;br /&gt;
== Vertex ==&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Free_remote_sensing_data_sources&amp;diff=2149</id>
		<title>Free remote sensing data sources</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Free_remote_sensing_data_sources&amp;diff=2149"/>
		<updated>2022-07-16T04:06:22Z</updated>

		<summary type="html">&lt;p&gt;Vruba: Adding Worldview&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== EarthExplorer ==&lt;br /&gt;
&lt;br /&gt;
[https://earthexplorer.usgs.gov/ EarthExplorer] is a multi-source query and download tool operated by the US Geological Survey. It gives access to most US federally produced optical remote sensing data from aerial sources and from satellites with less than daily revisit. As a rule of thumb, it has more archival and high- to medium-resolution data, while Worldview has more real-time and low-resolution data.&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public with a (free) [https://urs.earthdata.nasa.gov/ Earthdata account].&lt;br /&gt;
&lt;br /&gt;
=== Example workflow ===&lt;br /&gt;
&lt;br /&gt;
# Create an Earthdata account and log in.&lt;br /&gt;
# In the Search Criteria tab, drop a pin on your site of interest.&lt;br /&gt;
# Select something in the Data Sets tab, for example Landsat → Landsat Collection 2 Level-1 → Landsat 8-9 OLI/TIRS C2 L1.&lt;br /&gt;
# Click the Results tab. In the listing there, next to a scene that you like, click the footprint icon to see it previewed at low resolution on the map, or click the download icon next to it.&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
* '''The whole Landsat archive''': near-global medium resolution multispectral imagery back to 1972.&lt;br /&gt;
* '''Sentinel-2''' access: while it’s not the best S2 browser, it’s handy to have available for comparison to Landsat, etc.&lt;br /&gt;
* ISRO '''Resourcesat''' data – rarely the best available data in terms of resolution, but often overlooked, and sometimes able to fill a crucial gap in better-known sources.&lt;br /&gt;
* Excellent aerial (high resolution) coverage of the continental US, for example with NAIP.&lt;br /&gt;
* A scattershot, relatively small, yet occasionally useful collection of commercial imagery licensed for open redistribution. For example, coverage of the Korean Peninsula in the mid-2000s with [OrbView-3 data](https://www.usgs.gov/centers/eros/science/usgs-eros-archive-commercial-satellites-orbview-3).&lt;br /&gt;
&lt;br /&gt;
=== Limits ===&lt;br /&gt;
&lt;br /&gt;
* Most data is US-only – can be tedious to wade through if you’re looking for something else.&lt;br /&gt;
* Download speeds are often low (a few MiB/second). If you’re fetching large datasets, it’s often best to use EE as a browse tool to find image IDs, then pull them from a faster endpoint (for example, Landsat on AWS).&lt;br /&gt;
* Frequently down for maintenance: typically several days/year.&lt;br /&gt;
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
* For advanced searches, use the Additional Criteria tab, which searches dataset-specific metadata where available. For example, you might construct a query like “Landsat scenes that are less than 50% cloudy and from either June or July of any year between 1990 and 2010”.&lt;br /&gt;
- In the nested directory in the Data Sets tab, click the “i” icon to get a dataset description. Click the map icon to display the dataset’s footprint on the map, so you can tell whether it covers your general region of interest (though this may take a minute for a dataset with a complex footprint).&lt;br /&gt;
&lt;br /&gt;
== Sentinel Playground ==&lt;br /&gt;
&lt;br /&gt;
== Vertex ==&lt;br /&gt;
&lt;br /&gt;
== Worldview ==&lt;br /&gt;
&lt;br /&gt;
[https://worldview.earthdata.nasa.gov/ Worldview] is a multi-source visualization, query, and download tool operated by NASA EOSDIS. (It should not be confused with World'''V'''iew, a series of commercial high-resolution satellites from Maxar.) It gives access to most US federally produced remote sensing data from polar satellites with daily revisits. In general these are low-resolution sensors (on the order of 1 km/px) that are focused on topics related to the weather, climate, and environment, for example creating fire maps.&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public; some features may require a (free) [https://urs.earthdata.nasa.gov/ Earthdata account].&lt;br /&gt;
&lt;br /&gt;
=== Example workflow ===&lt;br /&gt;
&lt;br /&gt;
# Click the orange + Add Layers button on the left sidebar.&lt;br /&gt;
# Navigate to Air Quality → Fires and Thermal Anomalies and click it.&lt;br /&gt;
# Click the checkbox by Aqua and Terra/MODIS → Fires and Thermal Anomalies (Day and Night).&lt;br /&gt;
# Close the dataset chooser overlay (returning to the map view).&lt;br /&gt;
# Move the date playhead (arrow) at the bottom of the window to yesterday or before (to allow time for data processing).&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
* “Earth at Night” data: Suomi NPP/VIIRS Day/Night Band and its derivatives, which are low-res but allow quick insight into events like power outages, gas field operations, etc.&lt;br /&gt;
* Some access to geostationary imagery.&lt;br /&gt;
&lt;br /&gt;
=== Limits ===&lt;br /&gt;
&lt;br /&gt;
* Although it has significant download capability, it’s generally more oriented toward in-browser visualization.&lt;br /&gt;
* Very little data available through Worldview is sharper than about 250 m/pixel.&lt;br /&gt;
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
* The main polar imaging instruments, MODIS on the Aqua and Terra satellites and VIIRS on Suomi NPP and NOAA-20, provide a richly multispectral 4-frame animation over midday for basically everywhere that is not in polar night or the gap between swaths. Although it’s low-res, it can be valuable for, e.g., using cloud configurations to pin down times of images.&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
	<entry>
		<id>https://wonkpedia.org/mediawiki/index.php?title=Free_remote_sensing_data_sources&amp;diff=2148</id>
		<title>Free remote sensing data sources</title>
		<link rel="alternate" type="text/html" href="https://wonkpedia.org/mediawiki/index.php?title=Free_remote_sensing_data_sources&amp;diff=2148"/>
		<updated>2022-07-16T03:45:07Z</updated>

		<summary type="html">&lt;p&gt;Vruba: Initial copy and some stubs&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== EarthExplorer ==&lt;br /&gt;
&lt;br /&gt;
[https://earthexplorer.usgs.gov/ EarthExplorer] is a multi-source query and download tool operated by the US Geological Survey. It gives access to most US federally produced optical remote sensing data from aerial sources and from satellites with less than daily revisit. As a rule of thumb, it has more archival and high- to medium-resolution data, while Worldview has more real-time and low-resolution data.&lt;br /&gt;
&lt;br /&gt;
=== Access requirements ===&lt;br /&gt;
&lt;br /&gt;
Public with a (free) [https://urs.earthdata.nasa.gov/ Earthdata account].&lt;br /&gt;
&lt;br /&gt;
=== Basic instructions ===&lt;br /&gt;
&lt;br /&gt;
# Create an Earthdata account and log in.&lt;br /&gt;
# In the Search Criteria tab, drop a pin on your site of interest.&lt;br /&gt;
# Select something in the Data Sets tab, for example Landsat → Landsat Collection 2 Level-1 → Landsat 8-9 OLI/TIRS C2 L1.&lt;br /&gt;
# Click the Results tab. In the listing there, next to a scene that you like, click the footprint icon to see it previewed at low resolution on the map, or click the download icon next to it.&lt;br /&gt;
&lt;br /&gt;
=== Highlights ===&lt;br /&gt;
&lt;br /&gt;
* '''The whole Landsat archive''': near-global medium resolution multispectral imagery back to 1972.&lt;br /&gt;
* '''Sentinel-2''' access: while it’s not the best S2 browser, it’s handy to have available for comparison to Landsat, etc.&lt;br /&gt;
* ISRO '''Resourcesat''' data – rarely the best available data in terms of resolution, but often overlooked, and sometimes able to fill a crucial gap in better-known sources.&lt;br /&gt;
* Excellent aerial (high resolution) coverage of the continental US, for example with NAIP.&lt;br /&gt;
* A scattershot, relatively small, yet occasionally useful collection of commercial imagery licensed for open redistribution. For example, coverage of the Korean Peninsula in the mid-2000s with [OrbView-3 data](https://www.usgs.gov/centers/eros/science/usgs-eros-archive-commercial-satellites-orbview-3).&lt;br /&gt;
&lt;br /&gt;
=== Limits ===&lt;br /&gt;
&lt;br /&gt;
* Most data is US-only – can be tedious to wade through if you’re looking for something else.&lt;br /&gt;
* Download speeds are often low (a few MiB/second). If you’re fetching large datasets, it’s often best to use EE as a browse tool to find image IDs, then pull them from a faster endpoint (for example, Landsat on AWS).&lt;br /&gt;
* Frequently down for maintenance: typically several days/year.&lt;br /&gt;
&lt;br /&gt;
=== Tips ===&lt;br /&gt;
&lt;br /&gt;
* For advanced searches, use the Additional Criteria tab, which searches dataset-specific metadata where available. For example, you might construct a query like “Landsat scenes that are less than 50% cloudy and from either June or July of any year between 1990 and 2010”.&lt;br /&gt;
- In the nested directory in the Data Sets tab, click the “i” icon to get a dataset description. Click the map icon to display the dataset’s footprint on the map, so you can tell whether it covers your general region of interest (though this may take a minute for a dataset with a complex footprint).&lt;br /&gt;
&lt;br /&gt;
== Sentinel Playground ==&lt;br /&gt;
&lt;br /&gt;
== Vertex ==&lt;/div&gt;</summary>
		<author><name>Vruba</name></author>
	</entry>
</feed>