netCDF Writing


Writing netCDF data

Unidata Python Workshop


Important Note: when running this notebook interactively in a browser, you probably will not be able to execute individual cells out of order without getting an error. Instead, choose "Run All" from the Cell menu after you modify a cell.

In [1]:
from netCDF4 import Dataset    # Note: python is case-sensitive!
import numpy as np

Opening a file, creating a new Dataset

Let's create a new, empty netCDF file named '' in our project root data directory, opened for writing.

Be careful, opening a file with 'w' will clobber any existing data (unless clobber=False is used, in which case an exception is raised if the file already exists).

  • mode='r' is the default.
  • mode='a' opens an existing file and allows for appending (does not clobber existing data)
  • format can be one of NETCDF3_CLASSIC, NETCDF3_64BIT, NETCDF4_CLASSIC or NETCDF4 (default). NETCDF4_CLASSIC uses HDF5 for the underlying storage layer (as does NETCDF4) but enforces the classic netCDF 3 data model so data can be read with older clients.
In [2]:
try: ncfile.close()  # just to be safe, make sure dataset is not already open.
except: pass
ncfile = Dataset('../../../data/',mode='w',format='NETCDF4_CLASSIC')
print(ncfile)
<class 'netCDF4._netCDF4.Dataset'>
root group (NETCDF4_CLASSIC data model, file format HDF5):

Creating dimensions

The ncfile object we created is a container for dimensions, variables, and attributes. First, let's create some dimensions using the createDimension method.

  • Every dimension has a name and a length.
  • The name is a string that is used to specify the dimension to be used when creating a variable, and as a key to access the dimension object in the ncfile.dimensions dictionary.

Setting the dimension length to 0 or None makes it unlimited, so it can grow.

  • For NETCDF4 files, any dimension (or several at once) can be unlimited.
  • For NETCDF4_CLASSIC and NETCDF3* files, only one dimension per variable can be unlimited, and it must be the leftmost (slowest varying) dimension.
In [3]:
lat_dim = ncfile.createDimension('lat', 73)     # latitude axis
lon_dim = ncfile.createDimension('lon', 144)    # longitude axis
time_dim = ncfile.createDimension('time', None) # unlimited axis (can be appended to).
for dim in ncfile.dimensions.items():
    print(dim)
('lat', <class 'netCDF4._netCDF4.Dimension'>: name = 'lat', size = 73)
('lon', <class 'netCDF4._netCDF4.Dimension'>: name = 'lon', size = 144)
('time', <class 'netCDF4._netCDF4.Dimension'> (unlimited): name = 'time', size = 0)

Creating attributes

netCDF attributes can be created just like you would for any python object.

  • Best to adhere to established conventions (like the CF conventions)
  • We won't try to adhere to any specific convention here though.
In [4]:
ncfile.title='My model data'
print(ncfile.title)
My model data
In [5]:
ncfile.subtitle="My model data subtitle"
print(ncfile.subtitle)
print(ncfile)
My model data subtitle
<class 'netCDF4._netCDF4.Dataset'>
root group (NETCDF4_CLASSIC data model, file format HDF5):
    title: My model data
    subtitle: My model data subtitle
    dimensions(sizes): lat(73), lon(144), time(0)

Try adding some more attributes...

Creating variables

Now let's add some variables and store some data in them.

  • A variable has a name, a type, a shape, and some data values.
  • The shape of a variable is specified by a tuple of dimension names.
  • A variable should also have some named attributes, such as 'units', that describe the data.

The createVariable method takes 3 mandatory args.

  • the 1st argument is the variable name (a string). This is used as the key to access the variable object from the variables dictionary.
  • the 2nd argument is the datatype (most numpy datatypes supported).
  • the third argument is a tuple containing the dimension names (the dimensions must be created first). Unless this is a NETCDF4 file, any unlimited dimension must be the leftmost one.
  • there are lots of optional arguments (many of which are only relevant when format='NETCDF4') to control compression, chunking, fill_value, etc.
In [6]:
# Define two variables with the same names as dimensions,
# a conventional way to define "coordinate variables".
lat = ncfile.createVariable('lat', np.float32, ('lat',))
lat.units = 'degrees_north'
lat.long_name = 'latitude'
lon = ncfile.createVariable('lon', np.float32, ('lon',))
lon.units = 'degrees_east'
lon.long_name = 'longitude'
time = ncfile.createVariable('time', np.float64, ('time',))
time.units = 'hours since 1800-01-01'
time.long_name = 'time'
# Define a 3D variable to hold the data
temp = ncfile.createVariable('temp',np.float64,('time','lat','lon')) # note: unlimited dimension is leftmost
temp.units = 'K' # degrees Kelvin
temp.standard_name = 'air_temperature' # this is a CF standard name
print(temp)
<class 'netCDF4._netCDF4.Variable'>
float64 temp(time, lat, lon)
    units: K
    standard_name: air_temperature
unlimited dimensions: time
current shape = (0, 73, 144)
filling on, default _FillValue of 9.969209968386869e+36 used

Pre-defined variable attributes (read only)

The netCDF4 module provides some useful pre-defined Python attributes for netCDF variables, such as dimensions, shape, dtype, ndim.

Note: since no data has been written yet, the length of the 'time' dimension is 0.

In [7]:
print("-- Some pre-defined attributes for variable temp:")
print("temp.dimensions:", temp.dimensions)
print("temp.shape:", temp.shape)
print("temp.dtype:", temp.dtype)
print("temp.ndim:", temp.ndim)
-- Some pre-defined attributes for variable temp:
temp.dimensions: ('time', 'lat', 'lon')
temp.shape: (0, 73, 144)
temp.dtype: float64
temp.ndim: 3

Writing data

To write data to a netCDF variable object, just treat it like a numpy array and assign values to a slice.

In [8]:
nlats = len(lat_dim); nlons = len(lon_dim); ntimes = 3
# Write latitudes, longitudes.
# Note: the ":" is necessary in these "write" statements
lat[:] = -90. + (180./(nlats-1))*np.arange(nlats) # south pole to north pole
lon[:] = (360./nlons)*np.arange(nlons) # Greenwich meridian eastward
# create a 3D array of random numbers
data_arr = np.random.uniform(low=280,high=330,size=(ntimes,nlats,nlons))
# Write the data.  This writes the whole 3D netCDF variable all at once.
temp[:,:,:] = data_arr  # Appends data along unlimited dimension
print("-- Wrote data, temp.shape is now ", temp.shape)
# read data back from variable (by slicing it), print min and max
print("-- Min/Max values:", temp[:,:,:].min(), temp[:,:,:].max())
-- Wrote data, temp.shape is now  (3, 73, 144)
-- Min/Max values: 280.0008745095639 329.99854362017635
  • You can just treat a netCDF Variable object like a numpy array and assign values to it.
  • Variables automatically grow along unlimited dimensions (unlike numpy arrays)
  • The above writes the whole 3D variable all at once, but you can write it a slice at a time instead.

Let's add another time slice....

In [9]:
# create a 2D array of random numbers
data_slice = np.random.uniform(low=280,high=330,size=(nlats,nlons))
temp[3,:,:] = data_slice   # Appends the 4th time slice
print("-- Wrote more data, temp.shape is now ", temp.shape)
-- Wrote more data, temp.shape is now  (4, 73, 144)

Note that we have not yet written any data to the time variable. It automatically grew as we appended data along the time dimension to the variable temp, but the data is missing.

In [10]:
print(time)
times_arr = time[:]
print(type(times_arr),times_arr)  # dashes indicate masked values (where data has not yet been written)
<class 'netCDF4._netCDF4.Variable'>
float64 time(time)
    units: hours since 1800-01-01
    long_name: time
unlimited dimensions: time
current shape = (4,)
filling on, default _FillValue of 9.969209968386869e+36 used
<class 'numpy.ma.core.MaskedArray'> [-- -- -- --]

Let's add/write some data into the time variable.

  • Given a set of datetime instances, use date2num to convert to numeric time values and then write that data to the variable.
In [11]:
import datetime as dt
from netCDF4 import date2num,num2date
# 1st 4 days of October.
dates = [dt.datetime(2014,10,1,0),dt.datetime(2014,10,2,0),dt.datetime(2014,10,3,0),dt.datetime(2014,10,4,0)]
print(dates)
[datetime.datetime(2014, 10, 1, 0, 0), datetime.datetime(2014, 10, 2, 0, 0), datetime.datetime(2014, 10, 3, 0, 0), datetime.datetime(2014, 10, 4, 0, 0)]
In [12]:
times = date2num(dates, time.units)
print(times, time.units) # numeric values
[1882440. 1882464. 1882488. 1882512.] hours since 1800-01-01
In [13]:
time[:] = times
# read time data back, convert to datetime instances, check values.
print(time[:])
print(time.units)
print(num2date(time[:],time.units))
[1882440. 1882464. 1882488. 1882512.]
hours since 1800-01-01
[cftime.DatetimeGregorian(2014, 10, 1, 0, 0, 0, 0)
 cftime.DatetimeGregorian(2014, 10, 2, 0, 0, 0, 0)
 cftime.DatetimeGregorian(2014, 10, 3, 0, 0, 0, 0)
 cftime.DatetimeGregorian(2014, 10, 4, 0, 0, 0, 0)]

Closing a netCDF file

It's important to close a netCDF file you opened for writing:

  • flushes buffers to make sure all data gets written
  • releases memory resources used by open netCDF files
In [14]:
# first print the Dataset object to see what we've got
print(ncfile)
# close the Dataset.
ncfile.close(); print('Dataset is closed!')
<class 'netCDF4._netCDF4.Dataset'>
root group (NETCDF4_CLASSIC data model, file format HDF5):
    title: My model data
    subtitle: My model data subtitle
    dimensions(sizes): lat(73), lon(144), time(4)
    variables(dimensions): float32 lat(lat), float32 lon(lon), float64 time(time), float64 temp(time, lat, lon)
Dataset is closed!
In [15]:
!ncdump -h ../../../data/
netcdf new {
dimensions:
	lat = 73 ;
	lon = 144 ;
	time = UNLIMITED ; // (4 currently)
variables:
	float lat(lat) ;
		lat:units = "degrees_north" ;
		lat:long_name = "latitude" ;
	float lon(lon) ;
		lon:units = "degrees_east" ;
		lon:long_name = "longitude" ;
	double time(time) ;
		time:units = "hours since 1800-01-01" ;
		time:long_name = "time" ;
	double temp(time, lat, lon) ;
		temp:units = "K" ;
		temp:standard_name = "air_temperature" ;

// global attributes:
		:title = "My model data" ;
		:subtitle = "My model data subtitle" ;
}
Advanced features

So far we've only exercised features associated with the old netCDF version 3 data model. netCDF version 4 adds a lot of new functionality that comes with the more flexible HDF5 storage layer.

Let's create a new file with format='NETCDF4' so we can try out some of these features.

In [16]:
ncfile = Dataset('../../../data/','w',format='NETCDF4')
print(ncfile)
<class 'netCDF4._netCDF4.Dataset'>
root group (NETCDF4 data model, file format HDF5):

Creating Groups

netCDF version 4 added support for organizing data in hierarchical groups.

  • analogous to directories in a filesystem.
  • Groups serve as containers for variables, dimensions and attributes, as well as other groups.
  • A netCDF4.Dataset creates a special group, called the 'root group', which is similar to the root directory in a unix filesystem.

  • groups are created using the createGroup method.

  • takes a single argument (a string, which is the name of the Group instance). This string is used as a key to access the group instances in the groups dictionary.

Here we create two groups to hold data for two different model runs.

In [17]:
grp1 = ncfile.createGroup('model_run1')
grp2 = ncfile.createGroup('model_run2')
for grp in ncfile.groups.items():
    print(grp)
('model_run1', <class 'netCDF4._netCDF4.Group'>
group /model_run1:
    groups: )
('model_run2', <class 'netCDF4._netCDF4.Group'>
group /model_run2:
    groups: )

Create some dimensions in the root group.

In [18]:
lat_dim = ncfile.createDimension('lat', 73)     # latitude axis
lon_dim = ncfile.createDimension('lon', 144)    # longitude axis
time_dim = ncfile.createDimension('time', None) # unlimited axis (can be appended to).

Now create a variable in grp1 and grp2. The library will search recursively upwards in the group tree to find the dimensions (which in this case are defined one level up).

  • These variables are created with zlib compression, another nifty feature of netCDF 4.
  • The data are automatically compressed when data is written to the file, and uncompressed when the data is read.
  • This can really save disk space, especially when used in conjunction with the least_significant_digit keyword argument, which causes the data to be quantized (truncated) before compression. This makes the compression lossy, but more efficient.
In [19]:
temp1 = grp1.createVariable('temp',np.float64,('time','lat','lon'),zlib=True)
temp2 = grp2.createVariable('temp',np.float64,('time','lat','lon'),zlib=True)
for grp in ncfile.groups.items():  # shows that each group now contains 1 variable
    print(grp)
('model_run1', <class 'netCDF4._netCDF4.Group'>
group /model_run1:
    variables(dimensions): float64 temp(time, lat, lon)
    groups: )
('model_run2', <class 'netCDF4._netCDF4.Group'>
group /model_run2:
    variables(dimensions): float64 temp(time, lat, lon)
    groups: )

Creating a variable with a compound data type

  • Compound data types map directly to numpy structured (a.k.a. 'record') arrays.
  • Structured arrays are akin to C structs, or derived types in Fortran.
  • They allow for the construction of table-like structures composed of combinations of other data types, including other compound types.
  • Might be useful for representing multiple parameter values at each point on a grid, or at each time and space location for scattered (point) data.

Here we create a variable with a compound data type to represent complex data (there is no native complex data type in netCDF).

In [20]:
# create complex128 numpy structured data type
complex128 = np.dtype([('real',np.float64),('imag',np.float64)])
# using this numpy dtype, create a netCDF compound data type object
# the string name can be used as a key to access the datatype from the cmptypes dictionary.
complex128_t = ncfile.createCompoundType(complex128,'complex128')
# create a variable with this data type, write some data to it.
cmplxvar = grp1.createVariable('cmplx_var',complex128_t,('time','lat','lon'))
# write some data to this variable
# first create some complex random data
nlats = len(lat_dim); nlons = len(lon_dim)
data_arr_cmplx = np.random.uniform(size=(nlats,nlons))+1.j*np.random.uniform(size=(nlats,nlons))
# write this complex data to a numpy complex128 structured array
data_arr = np.empty((nlats,nlons),complex128)
data_arr['real'] = data_arr_cmplx.real; data_arr['imag'] = data_arr_cmplx.imag
cmplxvar[0] = data_arr  # write the data to the variable (appending to time dimension)
print(cmplxvar)
data_out = cmplxvar[0] # read one value of data back from variable
print(data_out.dtype, data_out.shape, data_out[0,0])
<class 'netCDF4._netCDF4.Variable'>
compound cmplx_var(time, lat, lon)
compound data type: {'names':['real','imag'], 'formats':['<f8','<f8'], 'offsets':[0,8], 'itemsize':16, 'aligned':True}
path = /model_run1
unlimited dimensions: time
current shape = (1, 73, 144)
{'names':['real','imag'], 'formats':['<f8','<f8'], 'offsets':[0,8], 'itemsize':16, 'aligned':True} (73, 144) (0.87083764, 0.87627448)

Creating a variable with a variable-length (vlen) data type

netCDF 4 has support for variable-length or "ragged" arrays. These are arrays of variable length sequences having the same type.

  • To create a variable-length data type, use the createVLType method.
  • The numpy datatype of the variable-length sequences and the name of the new datatype must be specified.
In [21]:
vlen_t = ncfile.createVLType(np.int64, 'phony_vlen')

A new variable can then be created using this datatype.

In [22]:
vlvar = grp2.createVariable('phony_vlen_var', vlen_t, ('time','lat','lon'))

Since there is no native vlen datatype in numpy, vlen arrays are represented in python as object arrays (arrays of dtype object).

  • These are arrays whose elements are Python object pointers, and can contain any type of python object.
  • For this application, they must contain 1-D numpy arrays all of the same type but of varying length.
  • Fill with 1-D random numpy int64 arrays of random length between 1 and 9.
In [23]:
vlen_data = np.empty((nlats,nlons),object)
for i in range(nlons):
    for j in range(nlats):
        size = np.random.randint(1,10) # random length of sequence (1 to 9)
        vlen_data[j,i] = np.random.randint(0,10,size=size).astype(vlen_t.dtype) # generate random sequence
vlvar[0] = vlen_data # append along unlimited dimension (time)
print(vlvar)
print('data =\n',vlvar[:])
<class 'netCDF4._netCDF4.Variable'>
vlen phony_vlen_var(time, lat, lon)
vlen data type: int64
path = /model_run2
unlimited dimensions: time
current shape = (1, 73, 144)
data =
 [[[array([4, 9]) array([1, 2, 3, 6, 5]) array([1, 3, 1, 9, 0]) ...
   array([8, 9, 1]) array([7]) array([2])]
  [array([8]) array([6, 3, 4]) array([9]) ...
   array([6, 0, 7, 8, 6, 3, 2, 9]) array([1]) array([0, 3, 6, 7])]
  [array([1, 2]) array([4, 0, 4, 9, 8, 3]) array([9, 9, 0, 1, 2]) ...
   array([5, 1, 4, 4, 6, 0]) array([9, 3, 6, 9, 5, 6, 3, 8, 5])
  [array([0, 3]) array([4, 0, 2, 6, 4, 0, 8])
   array([2, 7, 2, 8, 3, 1, 0, 0]) ... array([2, 9, 5, 7])
   array([4, 7, 0, 7, 1, 0, 1, 1]) array([7])]
  [array([6]) array([4, 5, 3, 3]) array([4, 9]) ...
   array([5, 5, 5, 6, 8, 6, 4]) array([4, 2, 7]) array([4, 1, 5, 9])]
  [array([3, 6, 2, 6, 6, 3, 2, 3, 8]) array([0, 2, 1])
   array([6, 2, 8, 2, 1, 5]) ... array([8, 1, 9, 5])
   array([7, 3, 4, 4, 0]) array([1, 3])]]]

Close the Dataset and examine the contents with ncdump.

In [24]:
!ncdump -h ../../../data/
netcdf new2 {
types:
  compound complex128 {
    double real ;
    double imag ;
  }; // complex128
  int64(*) phony_vlen ;
dimensions:
	lat = 73 ;
	lon = 144 ;
	time = UNLIMITED ; // (1 currently)

group: model_run1 {
  variables:
  	double temp(time, lat, lon) ;
  	complex128 cmplx_var(time, lat, lon) ;
  } // group model_run1

group: model_run2 {
  variables:
  	double temp(time, lat, lon) ;
  	phony_vlen phony_vlen_var(time, lat, lon) ;
  } // group model_run2
}

Other interesting and useful projects using netcdf4-python

  • xarray: N-dimensional variant of the core pandas data structure that can operate on netcdf variables.
  • Iris: a data model to create a data abstraction layer which isolates analysis and visualisation code from data format specifics. Uses netcdf4-python to access netcdf data (can also handle GRIB).
  • Biggus: Virtual large arrays (from netcdf variables) with lazy evaluation.
  • cf-python: Implements the CF data model for the reading, writing and processing of data and metadata.