Reading and writing files
xarray supports direct serialization and IO to several file formats, from simple Pickle files to the more flexible netCDF format (recommended).
Pickle
The simplest way to serialize an xarray object is to use Python’s built-in pickle module:
In [1]: import pickle
In [2]: ds = xr.Dataset({'foo': (('x', 'y'), np.random.rand(4, 5))},
   ...:                 coords={'x': [10, 20, 30, 40],
   ...:                         'y': pd.date_range('2000-01-01', periods=5),
   ...:                         'z': ('x', list('abcd'))})
   ...:
# use the highest protocol (-1) because it is way faster than the default
# text based pickle format
In [3]: pkl = pickle.dumps(ds, protocol=-1)
In [4]: pickle.loads(pkl)
Out[4]:
<xarray.Dataset>
Dimensions: (x: 4, y: 5)
Coordinates:
* x (x) int64 10 20 30 40
* y (y) datetime64[ns] 2000-01-01 2000-01-02 ... 2000-01-04 2000-01-05
z (x) <U1 'a' 'b' 'c' 'd'
Data variables:
foo (x, y) float64 0.127 0.9667 0.2605 0.8972 ... 0.7768 0.5948 0.1376
Pickling is important because it doesn’t require any external libraries and lets you use xarray objects with Python modules like multiprocessing or Dask. However, pickling is not recommended for long-term storage.
Restoring a pickle requires that the internal structure of the types for the pickled data remain unchanged. Because the internal design of xarray is still being refined, we make no guarantees (at this point) that objects pickled with this version of xarray will work in future versions.
Dictionary
We can convert a Dataset (or a DataArray) to a dict using to_dict():
In [5]: d = ds.to_dict()
In [6]: d
Out[6]:
{'coords': {'x': {'dims': ('x',), 'attrs': {}, 'data': [10, 20, 30, 40]},
'y': {'dims': ('y',),
'attrs': {},
'data': [datetime.datetime(2000, 1, 1, 0, 0),
datetime.datetime(2000, 1, 2, 0, 0),
datetime.datetime(2000, 1, 3, 0, 0),
datetime.datetime(2000, 1, 4, 0, 0),
datetime.datetime(2000, 1, 5, 0, 0)]},
'z': {'dims': ('x',), 'attrs': {}, 'data': ['a', 'b', 'c', 'd']}},
'attrs': {},
'dims': {'x': 4, 'y': 5},
'data_vars': {'foo': {'dims': ('x', 'y'),
'attrs': {},
'data': [[0.12696983303810094,
0.966717838482003,
0.26047600586578334,
0.8972365243645735,
0.37674971618967135],
[0.33622174433445307,
0.45137647047539964,
0.8402550832613813,
0.12310214428849964,
0.5430262020470384],
[0.37301222522143085,
0.4479968246859435,
0.12944067971751294,
0.8598787065799693,
0.8203883631195572],
[0.35205353914802473,
0.2288873043216132,
0.7767837505077176,
0.5947835894851238,
0.1375535565632705]]}}}
We can create a new xarray object from a dict using from_dict():
In [7]: ds_dict = xr.Dataset.from_dict(d)
In [8]: ds_dict
Out[8]:
<xarray.Dataset>
Dimensions: (x: 4, y: 5)
Coordinates:
* x (x) int64 10 20 30 40
* y (y) datetime64[ns] 2000-01-01 2000-01-02 ... 2000-01-04 2000-01-05
z (x) <U1 'a' 'b' 'c' 'd'
Data variables:
foo (x, y) float64 0.127 0.9667 0.2605 0.8972 ... 0.7768 0.5948 0.1376
Dictionary support allows for flexible use of xarray objects. It doesn’t require external libraries, and dicts can easily be pickled or converted to JSON or GeoJSON. All the values are converted to lists, so dicts might be quite large.
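For example, a minimal sketch of dumping the dict produced above to JSON (the file name is hypothetical, and the datetime values need a fallback serializer):

import json

# datetimes are not JSON-serializable by default, so fall back to str() for them
with open('saved_as_dict.json', 'w') as f:
    json.dump(d, f, default=str)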
To export just the dataset schema, without the data itself, use the data=False option:
In [9]: ds.to_dict(data=False)
Out[9]:
{'coords': {'x': {'dims': ('x',),
'attrs': {},
'dtype': 'int64',
'shape': (4,)},
'y': {'dims': ('y',), 'attrs': {}, 'dtype': 'datetime64[ns]', 'shape': (5,)},
'z': {'dims': ('x',), 'attrs': {}, 'dtype': '<U1', 'shape': (4,)}},
'attrs': {},
'dims': {'x': 4, 'y': 5},
'data_vars': {'foo': {'dims': ('x', 'y'),
'attrs': {},
'dtype': 'float64',
'shape': (4, 5)}}}
This can be useful for generating indices of dataset contents to expose to search indices or other automated data discovery tools.
netCDF
The recommended way to store xarray data structures is netCDF, which is a binary file format for self-described datasets that originated in the geosciences. xarray is based on the netCDF data model, so netCDF files on disk directly correspond to Dataset objects.
NetCDF is supported on almost all platforms, and parsers exist for the vast majority of scientific programming languages. Recent versions of netCDF are based on the even more widely used HDF5 file-format.
Tip
If you aren’t familiar with this data format, the netCDF FAQ is a good place to start.
Reading and writing netCDF files with xarray requires scipy or the netCDF4-Python library to be installed (the latter is required to read/write netCDF v4 files and use the compression options described below).
We can save a Dataset to disk using the Dataset.to_netcdf method:
In [10]: ds.to_netcdf('saved_on_disk.nc')
By default, the file is saved as netCDF4 (assuming netCDF4-Python is installed). You can control the format and engine used to write the file with the format and engine arguments.
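For example, a minimal sketch of writing a netCDF3 file with the scipy engine instead (this assumes scipy is installed; the file name is hypothetical):

# write a classic netCDF3 (64-bit offset) file via scipy
ds.to_netcdf('saved_on_disk_v3.nc', format='NETCDF3_64BIT', engine='scipy')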
We can load netCDF files to create a new Dataset using open_dataset():
In [11]: ds_disk = xr.open_dataset('saved_on_disk.nc')
In [12]: ds_disk
Out[12]:
<xarray.Dataset>
Dimensions: (x: 4, y: 5)
Coordinates:
* x (x) int64 10 20 30 40
* y (y) datetime64[ns] 2000-01-01 2000-01-02 ... 2000-01-04 2000-01-05
z (x) object ...
Data variables:
foo (x, y) float64 ...
Similarly, a DataArray can be saved to disk using the DataArray.to_netcdf method, and loaded from disk using the open_dataarray() function. As netCDF files correspond to Dataset objects, these functions internally convert the DataArray to a Dataset before saving, and then convert back when loading, ensuring that the DataArray that is loaded is always exactly the same as the one that was saved.
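For example, a minimal sketch of round-tripping a single DataArray (the file name is hypothetical):

# save one DataArray and read it back without building a Dataset yourself
ds['foo'].to_netcdf('foo_only.nc')
foo = xr.open_dataarray('foo_only.nc')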
A dataset can also be loaded from or written to a specific group within a netCDF file. To load from a group, pass a group keyword argument to the open_dataset function. The group can be specified as a path-like string, e.g., to access subgroup 'bar' within group 'foo' pass '/foo/bar' as the group argument. When writing multiple groups in one file, pass mode='a' to to_netcdf to ensure that each call does not delete the file.
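For example, a minimal sketch with hypothetical file and group names (this assumes the netCDF4-Python backend):

# the second call uses mode='a' so it does not delete the file written first
ds.to_netcdf('groups.nc', group='foo')
ds.to_netcdf('groups.nc', group='/foo/bar', mode='a')
ds_bar = xr.open_dataset('groups.nc', group='/foo/bar')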
Data is always loaded lazily from netCDF files. You can manipulate, slice and subset Dataset and DataArray objects, and no array values are loaded into memory until you try to perform some sort of actual computation. For an example of how these lazy arrays work, see the OPeNDAP section below.
It is important to note that when you modify values of a Dataset, even one linked to files on disk, only the in-memory copy you are manipulating in xarray is modified: the original file on disk is never touched.
Tip
xarray’s lazy loading of remote or on-disk datasets is often but not always desirable. Before performing computationally intense operations, it is often a good idea to load a Dataset (or DataArray) entirely into memory by invoking the load() method.
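For example, a minimal sketch using the file written above:

# pull all of the file's data into memory before heavy computation
ds_mem = xr.open_dataset('saved_on_disk.nc').load()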
Datasets have a close() method to close the associated netCDF file. However, it’s often cleaner to use a with statement:
# this automatically closes the dataset after use
In [13]: with xr.open_dataset('saved_on_disk.nc') as ds:
   ....:     print(ds.keys())
   ....:
KeysView(<xarray.Dataset>
Dimensions: (x: 4, y: 5)
Coordinates:
* x (x) int64 10 20 30 40
* y (y) datetime64[ns] 2000-01-01 2000-01-02 ... 2000-01-04 2000-01-05
z (x) object ...
Data variables:
foo (x, y) float64 ...)
Although xarray provides reasonable support for incremental reads of files on disk, it does not support incremental writes, which can be a useful strategy for dealing with datasets too big to fit into memory. Instead, xarray integrates with dask.array (see Parallel computing with Dask), which provides a fully featured engine for streaming computation.
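For example, a minimal sketch of opening the file backed by dask arrays (this assumes dask is installed; the chunk sizes here are arbitrary):

# each variable becomes a dask array split into chunks of two along x
ds_chunked = xr.open_dataset('saved_on_disk.nc', chunks={'x': 2})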
It is possible to append or overwrite netCDF variables using the mode='a' argument. When using this option, all variables in the dataset will be written to the original netCDF file, regardless of whether they already exist in the file.
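A minimal sketch, using a hypothetical second variable bar derived from the example dataset created at the top of this page:

# write only 'bar'; mode='a' adds it to the existing file instead of replacing the file
ds.assign(bar=ds['foo'] * 2)[['bar']].to_netcdf('saved_on_disk.nc', mode='a')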
Reading encoded data
NetCDF files follow some conventions for encoding datetime arrays (as numbers with a “units” attribute) and for packing and unpacking data (as described by the “scale_factor” and “add_offset” attributes). If the argument decode_cf=True (the default) is given to open_dataset, xarray will attempt to automatically decode the values in the netCDF objects according to CF conventions. Sometimes this will fail, for example, if a variable has an invalid “units” or “calendar” attribute. For these cases, you can turn this decoding off manually.
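For example, a minimal sketch of turning CF decoding off for the file written above:

# keep raw values and attributes instead of decoding times, scale factors, etc.
ds_raw = xr.open_dataset('saved_on_disk.nc', decode_cf=False)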
You can view this encoding information (among others) in the DataArray.encoding and Dataset.encoding attributes:
In [14]: ds_disk['y'].encoding
Out[14]:
{'zlib': False,
'shuffle': False,
'complevel': 0,
'fletcher32': False,
'contiguous': True,
'chunksizes': None,
'source': 'saved_on_disk.nc',
'original_shape': (5,),
'dtype': dtype('int64'),
'units': 'days since 2000-01-01 00:00:00',
'calendar': 'proleptic_gregorian'}
In [15]: ds_disk.encoding
Out[15]:
{'unlimited_dims': set(),
'source': 'saved_on_disk.nc'}
Note that all operations that manipulate variables other than indexing will remove encoding information.
Writing encoded data
Conversely, you can customize how xarray writes netCDF files on disk by providing explicit encodings for each dataset variable. The encoding argument takes a dictionary with variable names as keys and variable-specific encodings as values. These encodings are saved as attributes on the netCDF variables on disk, which allows xarray to faithfully read encoded data back into memory.
It is important to note that using encodings is entirely optional: if you do not supply any of these encoding options, xarray will write data to disk using a default encoding, or the options in the encoding attribute, if set. This works perfectly fine in most cases, but encoding can be useful for additional control, especially for enabling compression.
In the file on disk, these encodings are saved as attributes on each variable, which allows xarray and other CF-compliant tools for working with netCDF files to correctly read the data.
Scaling and type conversions
These encoding options work on any version of the netCDF file format:
- dtype: Any valid NumPy dtype or string convertible to a dtype, e.g., 'int16' or 'float32'. This controls the type of the data written on disk.
- _FillValue: Values of NaN in xarray variables are remapped to this value when saved on disk. This is important when converting floating point with missing values to integers on disk, because NaN is not a valid value for integer dtypes. By default, variables with float types are given a _FillValue of NaN in the output file, unless explicitly disabled with an encoding of {'_FillValue': None}.
- scale_factor and add_offset: Used to convert from encoded data on disk to the decoded data in memory, according to the formula decoded = scale_factor * encoded + add_offset.
These parameters can be fruitfully combined to compress discretized data on disk. For example, to save the variable foo with a precision of 0.1 in 16-bit integers while converting NaN to -9999, we would use encoding={'foo': {'dtype': 'int16', 'scale_factor': 0.1, '_FillValue': -9999}}.
Compression and decompression with such discretization is extremely fast.
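A minimal sketch of passing that encoding to to_netcdf (the file name here is hypothetical):

# discretize 'foo' to 16-bit integers with 0.1 precision, storing NaN as -9999
ds.to_netcdf('foo_int16.nc',
             encoding={'foo': {'dtype': 'int16', 'scale_factor': 0.1,
                               '_FillValue': -9999}})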
String encoding
xarray can write unicode strings to netCDF files in two ways:
- As variable length strings. This is only supported on netCDF4 (HDF5) files.
- By encoding strings into bytes, and writing encoded bytes as a character array. The default encoding is UTF-8.
By default, we use variable length strings for compatible files and fall back to using encoded character arrays. Character arrays can be selected even for netCDF4 files by setting the dtype field in encoding to S1 (corresponding to NumPy’s single-character bytes dtype).
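For example, a minimal sketch that forces the string coordinate z from the dataset above to be stored as a character array (the file name is hypothetical):

# write 'z' as a character array rather than variable length strings
ds.to_netcdf('char_strings.nc', encoding={'z': {'dtype': 'S1'}})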
If character arrays are used, the string encoding that was used is stored on disk in the _Encoding attribute, which matches an ad-hoc convention adopted by the netCDF4-Python library. At the time of this writing (October 2017), a standard convention for indicating string encoding for character arrays in netCDF files was still under discussion.
Technically, you can use any string encoding recognized by Python if you feel the need to deviate from UTF-8, by setting the _Encoding field in encoding. But we don’t recommend it.
Warning
Missing values in bytes or unicode string arrays (represented by NaN in xarray) are currently written to disk as empty strings ''. This means missing values will not be restored when data is loaded from disk. This behavior is likely to change in the future (GH1647).
Unfortunately, explicitly setting a _FillValue for string arrays to handle missing values doesn’t work yet either, though we also hope to fix this in the future.
Chunk based compression
zlib, complevel, fletcher32, contiguous and chunksizes can be used for enabling netCDF4/HDF5’s chunk based compression, as described in the documentation for createVariable for netCDF4-Python. This only works for netCDF4 files and thus requires using format='netCDF4' and either engine='netcdf4' or engine='h5netcdf'.
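For example, a minimal sketch of compressing the variable foo (the file name and compression level are hypothetical choices):

# enable chunk based zlib compression for 'foo'; requires a netCDF4 file
ds.to_netcdf('compressed.nc', format='NETCDF4', engine='netcdf4',
             encoding={'foo': {'zlib': True, 'complevel': 4}})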
Chunk based gzip compression can yield impressive space savings, especially for sparse data, but it comes with significant performance overhead. HDF5 libraries can only read complete chunks back into memory, and maximum decompression speed is in the range of 50-100 MB/s. Worse, HDF5’s compression and decompression currently cannot be parallelized with dask. For these reasons, we recommend trying discretization based compression (described above) first.
Time units
The units and calendar attributes control how xarray serializes datetime64 and timedelta64 arrays to datasets on disk as numeric values. The units encoding should be a string like 'days since 1900-01-01' for datetime64 data or a string like 'days' for timedelta64 data. calendar should be one of the calendar types supported by netCDF4-python: 'standard', 'gregorian', 'proleptic_gregorian', 'noleap', '365_day', '360_day', 'julian', 'all_leap', '366_day'.
By default, xarray uses the ‘proleptic_gregorian’ calendar and units of the smallest time difference between values, with a reference time of the first time value.
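For example, a minimal sketch of overriding those defaults for the y coordinate of the dataset above (the file name is hypothetical):

# serialize the 'y' times with explicit units and calendar
ds.to_netcdf('explicit_times.nc',
             encoding={'y': {'units': 'days since 2000-01-01',
                             'calendar': 'proleptic_gregorian'}})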
Iris
The Iris tool allows easy reading of common meteorological and climate model formats (including GRIB and UK MetOffice PP files) into Cube objects, which are in many ways very similar to DataArray objects, while enforcing a CF-compliant data model. If iris is installed, xarray can convert a DataArray into a Cube using to_iris():
In [16]: da = xr.DataArray(np.random.rand(4, 5), dims=['x', 'y'],
   ....:                    coords=dict(x=[10, 20, 30, 40],
   ....:                                y=pd.date_range('2000-01-01', periods=5)))
   ....:
In [17]: cube = da.to_iris()
In [18]: cube