Getting started¶
The xdatasets library gives users easy access to a large collection of Earth observation datasets that are compatible with xarray. The library takes an opinionated approach to data querying and caters to the needs of specific user groups, such as hydrologists, climate scientists, and engineers. Among its core features, xdatasets can extract data at a specific location or within a designated region, such as a watershed or municipality, while also performing spatial and temporal operations.
To use xdatasets, users must employ a query. For instance, a straightforward query to extract the variables t2m (2m temperature) and tp (total precipitation) from the era5_reanalysis_single_levels dataset at two geographical positions (Montreal and Toronto) could look like this:
query = {
    "datasets": {"era5_reanalysis_single_levels": {"variables": ["t2m", "tp"]}},
    "space": {
        "clip": "point",  # bbox, point or polygon
        "geometry": {
            "Montreal": (45.508888, -73.561668),
            "Toronto": (43.651070, -79.347015),
        },
    },
}
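Once built, the query dictionary is passed to xd.Query, and the result is retrieved through the data attribute, as demonstrated later in this notebook. A minimal sketch:

xds = xd.Query(**query)  # run the query
xds.data  # the result is an xarray.Dataset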
An example of a more complex query is shown below.
Note: Don’t worry! Below, you’ll find additional examples that will help you understand each parameter in the query, as well as the possible combinations.
This query requests the same variables as above. However, instead of specifying geographical positions, a geopandas.GeoDataFrame is used to provide features (such as shapefiles or GeoJSON) within which the data is extracted. Each polygon is identified by the unique identifier Station, and a spatial average is computed within each one (averaging: True). The dataset, initially at an hourly time step, is converted to a daily time step while applying one or more temporal aggregations to each variable, as prescribed in the query. xdatasets ultimately returns the dataset for the specified date range and time zone.
query = {
    "datasets": {"era5_reanalysis_single_levels": {"variables": ["t2m", "tp"]}},
    "space": {
        "clip": "polygon",  # bbox, point or polygon
        "averaging": True,  # spatial average of the variables within each polygon
        "geometry": gdf,
        "unique_id": "Station",  # unique column name in geodataframe
    },
    "time": {
        "timestep": "D",
        "aggregation": {"tp": np.nansum, "t2m": [np.nanmax, np.nanmin]},
        "start": "2000-01-01",
        "end": "2020-05-31",
        "timezone": "America/Montreal",
    },
}
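The gdf object referenced above is expected to be a geopandas.GeoDataFrame with one polygon per row and a Station identifier column. As a minimal sketch (the file name watersheds.geojson is a placeholder, not something provided by xdatasets), it could be loaded like this:

import geopandas as gpd

# Placeholder file; any vector format readable by geopandas (Shapefile, GeoJSON, ...) works.
gdf = gpd.read_file("watersheds.geojson")
gdf.head()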
Query climate datasets¶
To use xdatasets, you must import at least xdatasets, pandas, geopandas, and numpy. Additionally, we import pathlib to interact with files.
[1]:
import os
import warnings
from pathlib import Path

warnings.simplefilter("ignore")
os.environ["USE_PYGEOS"] = "0"

import geopandas as gpd

# Visualization
import hvplot.pandas  # noqa
import hvplot.xarray  # noqa
import numpy as np
import pandas as pd
import panel as pn  # noqa

import xdatasets as xd
Clip by points (sites)¶
To begin with, we need to create a dictionary of sites and their corresponding geographical coordinates.
[2]:
sites = {
    "Montreal": (45.508888, -73.561668),
    "New York": (40.730610, -73.935242),
    "Miami": (25.761681, -80.191788),
}
We will then extract the tp (total precipitation) and t2m (2m temperature) variables from the era5_reanalysis_single_levels dataset for the designated sites. Afterward, we will convert the time step to daily and adjust the timezone to Eastern Time. Finally, we will limit the temporal interval.
Before proceeding with this first query, let’s quickly outline the role of each parameter:
datasets: A dictionary where datasets serve as keys and desired variables as values.
space: A dictionary that defines the spatial operations to apply to the user-supplied geographic features.
time: A dictionary that defines the temporal operations to apply to the datasets.
For more information on each parameter, consult the API documentation.
This is what the requested query looks like:
[3]:
query = {
    "datasets": "era5_reanalysis_single_levels",
    "space": {"clip": "point", "geometry": sites},  # bbox, point or polygon
    "time": {
        "timestep": "D",  # http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases
        "aggregation": {"tp": np.nansum, "t2m": np.nanmean},
        "start": "1995-01-01",
        "end": "2000-12-31",
        "timezone": "America/Montreal",
    },
}
xds = xd.Query(**query)
Temporal operations: processing tp with era5_reanalysis_single_levels: 100%|██████████| 2/2 [00:00<00:00, 7.12it/s]
By accessing the data attribute, you can view the data obtained from the query. Note that the variable name tp has been updated to tp_nansum to reflect the reduction operation (np.nansum) used to convert the time step from hourly to daily. Likewise, t2m was renamed t2m_nanmean.
[4]:
xds.data
[4]:
<xarray.Dataset> Size: 70kB
Dimensions:      (spatial_agg: 1, timestep: 1, site: 3, time: 2192, source: 1)
Coordinates:
  * spatial_agg  (spatial_agg) object 8B 'point'
  * timestep     (timestep) object 8B 'D'
    latitude     (site) float64 24B 45.5 40.75 25.75
    longitude    (site) float64 24B -73.5 -74.0 -80.25
  * site         (site) <U8 96B 'Montreal' 'New York' 'Miami'
  * time         (time) datetime64[ns] 18kB 1995-01-01 1995-01-02 ... 2000-12-31
  * source       (source) <U29 116B 'era5_reanalysis_single_levels'
Data variables:
    t2m_nanmean  (spatial_agg, timestep, time, site) float32 26kB 266.9 ... 2...
    tp_nansum    (spatial_agg, timestep, time, site) float32 26kB 0.007034 .....
Attributes: (12/30)
    GRIB_NV:            0
    GRIB_Nx:            1440
    GRIB_Ny:            721
    GRIB_cfName:        unknown
    GRIB_cfVarName:     t2m
    GRIB_dataType:      an
    ...                 ...
    GRIB_totalNumber:   0
    GRIB_typeOfLevel:   surface
    GRIB_units:         K
    long_name:          2 metre temperature
    standard_name:      unknown
    units:              K
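To confirm the renaming, you can list the data variables of the returned dataset using standard xarray (this is plain xarray usage, not an xdatasets-specific call):

list(xds.data.data_vars)  # ['t2m_nanmean', 'tp_nansum']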
[5]:
title = f"Comparison of total precipitation across three cities in North America from \
{xds.data.time.dt.year.min().values} to {xds.data.time.dt.year.max().values}"
xds.data.sel(
timestep="D",
source="era5_reanalysis_single_levels",
).hvplot(
title=title,
x="time",
y="tp_nansum",
grid=True,
width=750,
height=450,
by="site",
legend="top",
widget_location="bottom",
)
[5]:
[6]:
title = f"Comparison of 2m temperature across three cities in North America from \
{xds.data.time.dt.year.min().values} to {xds.data.time.dt.year.max().values}"
xds.data.sel(
timestep="D",
source="era5_reanalysis_single_levels",
).hvplot(
title=title,
x="time",
y="t2m_nanmean",
grid=True,
width=750,
height=450,
by="site",
legend="top",
widget_location="bottom",
)
[6]:
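Note that t2m is returned in Kelvin (see the units attribute in the dataset above). If you prefer degrees Celsius for plotting, a plain xarray conversion can be applied beforehand; this is an illustrative snippet rather than part of the query API:

# Convert the daily 2 m temperature from Kelvin to degrees Celsius.
t2m_celsius = xds.data["t2m_nanmean"] - 273.15
t2m_celsius.attrs["units"] = "degC"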
Clip on polygons with no averaging in space¶
First, let’s explore specific polygon features. With xdatasets, you can access geographical datasets, such as watershed boundaries linked to streamflow stations. These datasets follow a naming convention in which the hydrological dataset name has "_polygons" appended. For example, if the hydrological dataset is named deh, its corresponding watershed boundaries dataset is labeled deh_polygons. The query below retrieves all polygons for the deh_polygons dataset.
gdf = xd.Query(
    **{
        "datasets": "deh_polygons",
    }
).data
gdf
Because the data is loaded into memory, retrieving all polygons may take some time. To speed this up, we recommend applying filters, as illustrated below. Note that the filters are consistent between hydrological datasets and their corresponding geographical datasets, so only watershed boundaries associated with existing hydrological data are returned.
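As a rough sketch of such a filtered query (the filter key id and its accepted values are an assumption here, not confirmed by this notebook; consult the API documentation for the supported filters), one could restrict the request to specific station identifiers:

# Hypothetical filter: only request the polygons for two station identifiers.
gdf = xd.Query(
    **{
        "datasets": {"deh_polygons": {"id": ["042102", "042103"]}}
    }
).data
gdf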
[7]:
[7]:
   Station  Superficie                                           geometry
0   042102  623.479187  POLYGON ((-78.5712 46.70742, -78.57112 46.7073...
1   042103  579.479614  POLYGON ((-78.49014 46.64514, -78.4901 46.6450...
Let’s examine the geographic locations of the polygon features.
[8]:
gdf.hvplot(
    geo=True,
    tiles="ESRI",
    color="Station",
    alpha=0.8,
    width=750,
    height=450,
    legend="top",
    hover_cols=["Station", "Superficie"],
)
[8]: