# Plotting Satellite Data


#### Questions

1. Where are current GOES data available?
2. How can satellite data be obtained with Siphon?
3. How can MetPy simplify metadata parsing?
4. How can maps of satellite data be made?

### Finding GOES Data

The first step is to find the satellite data. Normally, we would browse over to http://thredds.ucar.edu/thredds/ and find the top-level THREDDS Data Server (TDS) catalog. From there we can drill down to find satellite data products.

For current data, you could navigate to the Satellite Data directory, then GOES East Products and CloudAndMoistureImagery. There are subfolders for the CONUS, full disk, mesoscale sector images, and other products. In each of these is a folder for each channel of the ABI, and in each channel there is a folder for every day in the approximately month-long rolling archive. As you can see, there is a massive amount of data coming down from GOES-16!

In the next section we'll be downloading the data in a pythonic way, so our first task is to build a URL that matches the URL we manually navigated to in the web browser. To make it as flexible as possible, we'll want to use variables for the sector name (CONUS, full-disk, mesoscale-1, etc.), the date, and the ABI channel number.

EXERCISE:
• Create variables named image_date, region, and channel. Assign them to today's date, the CONUS region, and ABI channel 8.
• Construct a string data_url from these variables and the URL we navigated to above.
• Verify that following your link will take you where you think it should.
• Change the extension from catalog.html to catalog.xml. What do you see?
In [1]:
from datetime import datetime

# Create variables for URL generation

# Construct the data_url string

# Print out your URL and verify it works!

SOLUTION
In [2]:
# %load solutions/data_url.py

from datetime import datetime

# Create variables for URL generation
image_date = datetime.utcnow().date()
region = 'CONUS'
channel = 8

# We want to match something like:
# https://thredds.ucar.edu/thredds/catalog/satellite/goes/east/products/CloudAndMoistureImagery/CONUS/Channel08/20200117/catalog.html

# Construct the data_url string
data_url = ('https://thredds.ucar.edu/thredds/catalog/satellite/goes/east/products/'
            f'CloudAndMoistureImagery/{region}/Channel{channel:02d}/'
            f'{image_date:%Y%m%d}/catalog.xml')

# Print out your URL and verify it works!
print(data_url)

https://thredds.ucar.edu/thredds/catalog/satellite/goes/east/products/CloudAndMoistureImagery/CONUS/Channel08/20200117/catalog.xml


### Accessing data with Siphon

We could download the files to our computers from the THREDDS web interface, but that can become tedious for downloading many files, requires us to store them on our computer, and does not lend itself to automation.

We can use Siphon to parse the catalog from the TDS. This provides a nice programmatic way of accessing the data. We start by importing the TDSCatalog class from Siphon and giving it the URL of the catalog we just surfed to manually. Note: instead of using the link to the HTML catalog, we change the extension to XML, which asks the TDS for the XML version of the catalog; this is much easier to work with in code. If you forget, Siphon will change the extension for you and issue a warning.

We want to create a TDSCatalog object called cat that we can examine and use to get handles to work with the data.

In [3]:
from siphon.catalog import TDSCatalog

In [4]:
cat = TDSCatalog(data_url)


To find the latest file, we can look at the cat.datasets attribute. Let’s look at the first five datasets:

In [5]:
cat.datasets[:5]

Out[5]:
[OR_ABI-L2-CMIPC-M6C08_G16_s20200171511160_e20200171511160_c20200171511160.nc,
OR_ABI-L2-CMIPC-M6C08_G16_s20200171506160_e20200171506160_c20200171506160.nc,
OR_ABI-L2-CMIPC-M6C08_G16_s20200171501160_e20200171501160_c20200171501160.nc,
OR_ABI-L2-CMIPC-M6C08_G16_s20200171456160_e20200171456160_c20200171456160.nc,
OR_ABI-L2-CMIPC-M6C08_G16_s20200171451160_e20200171451160_c20200171451160.nc]
In [6]:
cat.datasets[0]

Out[6]:
OR_ABI-L2-CMIPC-M6C08_G16_s20200171511160_e20200171511160_c20200171511160.nc

We'll take the second most recent dataset (sometimes the most recent will not have received all of its tiles yet) and store it as the variable dataset. Note that we haven't actually downloaded or transferred any real data yet; only bits of metadata have been received from THREDDS and parsed by Siphon.

In [7]:
dataset = cat.datasets[1]

In [8]:
print(dataset)

OR_ABI-L2-CMIPC-M6C08_G16_s20200171506160_e20200171506160_c20200171506160.nc


We're finally ready to get the actual data. We could download the file and then open it, but there is no need! We can use Siphon to fetch only what we need and hold it in memory. Notice that we're using the xarray accessor, which makes life much nicer than dealing with the raw netCDF (like we used to back in early 2018).

In [9]:
ds = dataset.remote_access(use_xarray=True)


### Digging into and parsing the data

Now that we've got some data - let's see what we actually got our hands on.

In [10]:
ds

Out[10]:
<xarray.Dataset>
Dimensions:               (x: 2500, y: 1500)
Coordinates:
    time                  datetime64[ns] ...
  * y                     (y) float32 128212.0 128156.0 ... 44324.0 44268.0
  * x                     (x) float32 -101332.0 -101276.0 ... 38556.0 38612.0
Data variables:
    Sectorized_CMI        (y, x) float32 ...
    fixedgrid_projection  int32 ...
Attributes:
title:                       Sectorized Cloud and Moisture Imagery for th...
ICD_version:                 GROUND SEGMENT (GS) TO ADVANCED WEATHER INTE...
Conventions:                 CF-1.6
channel_id:                  8
central_wavelength:          6.19
abi_mode:                    6
source_scene:                CONUS
periodicity:                 5.0
production_location:         WCDAS
product_name:                ECONUS-020-B12-M6C08
satellite_id:                GOES-16
product_center_latitude:     30.08300267372796
product_center_longitude:    -87.09695844824527
projection:                  Fixed Grid
bit_depth:                   12
source_spatial_resolution:   2.0
request_spatial_resolution:  2.0
start_date_time:             2020017150616
number_product_tiles:        15
product_tile_width:          512
product_tile_height:         512
product_rows:                1500
product_columns:             2500
pixel_x_size:                2.0
pixel_y_size:                2.0
satellite_latitude:          0.0
satellite_longitude:         -75.0
satellite_altitude:          35786023.0
created_by:                  ldm-alchemy
product_tiles_received:      15

Great, so we have an xarray Dataset object, something we've dealt with before! We also see that we have the coordinates time, y, and x, as well as the data variable Sectorized_CMI and the projection information. Using the MetPy accessor for xarray, we can pull out the data we need for plotting.

In [11]:
import metpy
dat = ds.metpy.parse_cf('Sectorized_CMI')
proj = dat.metpy.cartopy_crs
x = dat['x']
y = dat['y']


### Plotting the Data

To plot our data we'll be using CartoPy and matplotlib:

In [12]:
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
%matplotlib inline


Let's first make a simple plot using CartoPy's built-in shapefiles.

In [13]:
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1, projection=proj)

im = ax.imshow(dat, extent=(x.min(), x.max(), y.min(), y.max()), origin='upper')

EXERCISE:
• Look at the documentation for metpy.plots.colortables and figure out how to set the colormap of the image. For this image, let's go with the WVCIMSS_r colormap, as this is a mid-level water vapor image. To see all of the colortables that MetPy supports, check out the MetPy colortable reference.
• BONUS: Use the MetPy add_timestamp method from metpy.plots to add a timestamp to the plot.
• DAILY DOUBLE: Using the start_date_time attribute on the dataset ds, change the call to add_timestamp to use that date and time and the pretext to say "GOES 16 Channel X".
In [14]:
#Import for colortables
from metpy.plots import colortables

# Import for the bonus exercise

# Make the image plot

SOLUTION
In [15]:
# %load solutions/sat_map.py

# Import for colortables
from metpy.plots import colortables

# Import for the bonus exercise
from metpy.plots import add_timestamp

fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1, projection=proj)

im = ax.imshow(dat, extent=(x.min(), x.max(), y.min(), y.max()), origin='upper')

wv_cmap = colortables.get_colortable('WVCIMSS_r')
im.set_cmap(wv_cmap)

# Bonus
start_time = datetime.strptime(ds.start_date_time, '%Y%j%H%M%S')
add_timestamp(ax, time=start_time, pretext=f'GOES-16 Ch. {channel} ',
              high_contrast=True, fontsize=16, y=0.01)

plt.show()
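The DAILY DOUBLE above relies on the format of the start_date_time attribute, which packs the year, day of year, and time into a single string. A minimal sketch of just that parsing step, using the attribute value shown earlier in the dataset listing:

```python
from datetime import datetime

# start_date_time encodes year (%Y), day of year (%j), and HHMMSS
stamp = '2020017150616'  # value from the dataset attributes above
start_time = datetime.strptime(stamp, '%Y%j%H%M%S')
print(start_time)  # 2020-01-17 15:06:16
```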


Colormapping in matplotlib (which backs CartoPy) is handled through two pieces:

• The norm (normalization) controls how data values are converted to floating point values in the range [0, 1]
• The colormap controls how values are converted from floating point values in the range [0, 1] to colors (think colortable)
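These two pieces can be demonstrated in isolation. A minimal sketch using matplotlib's Normalize and a stock colormap (the 195–265 K range and the gray colormap here are illustrative choices, not the values used for the satellite image):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import Normalize

# The norm maps data values into the range [0, 1]
norm = Normalize(vmin=195, vmax=265)
scaled = norm(np.array([195.0, 230.0, 265.0]))
print(scaled)  # [0.  0.5 1. ]

# The colormap maps values in [0, 1] to RGBA colors
cmap = plt.get_cmap('gray')
colors = cmap(scaled)
print(colors.shape)  # one RGBA tuple per input value
```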

Let's try to determine a good range of values for the normalization. We'll make a histogram to see the distribution of values in the data, then clip that range down to enhance contrast in the data visualization.

We use compressed to remove any masked elements (the areas of space within the satellite's field of view) before making our histogram.

In [16]:
# Make a histogram of the unmasked data values
plt.hist(dat.to_masked_array().compressed(), bins=255)
plt.show()

In meteorology, we have many 'standard' colortables that have been used for certain types of data. We have included these in MetPy in the metpy.plots.ctables module. The WVCIMSS colormap is a direct conversion of a GEMPAK colormap. Let's remake that image with a range of 195 to 265 K, which was empirically determined to closely match other displays of water vapor data.

In [17]:
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1, projection=proj)