Getting started

The xdatasets library enables users to effortlessly access a vast collection of Earth observation datasets compatible with xarray.

The library adopts an opinionated approach to data querying and caters to the specific needs of certain user groups, such as hydrologists, climate scientists, and engineers. One of the functionalities of xdatasets is the ability to extract data at a specific location or within a designated region, such as a watershed or municipality, while also enabling spatial and temporal operations.

To use xdatasets, users must employ a query. For instance, a straightforward query to extract the variables t2m (2m temperature) and tp (Total precipitation) from the era5_reanalysis_single_levels dataset at two geographical positions (Montreal and Toronto) could be as follows:

query = {
    "datasets": {"era5_reanalysis_single_levels": {"variables": ["t2m", "tp"]}},
    "space": {
        "clip": "point",  # bbox, point or polygon
        "geometry": {
            "Montreal": (45.508888, -73.561668),
            "Toronto": (43.651070, -79.347015),
        },
    },
}
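Executing the query is then a single call, with the result exposed through the data attribute (both are demonstrated later in this guide):

import xdatasets as xd

xds = xd.Query(**query)  # run the query
xds.data  # the resulting xarray.Dataset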

An example of a more complex query would look like the one below.

Note: Don’t worry! Below, you’ll find additional examples that will help clarify each parameter in the query, as well as the possible combinations.

This query requests the same variables as above. However, instead of specifying geographical positions, a geopandas GeoDataFrame is used to provide features (such as shapefiles or GeoJSON) within which the data is extracted. Each polygon is identified by the unique identifier Station, and a spatial average is computed within each one (averaging: True). The dataset, initially at an hourly time step, is converted to a daily time step while applying one or more temporal aggregations to each variable, as prescribed in the query. xdatasets ultimately returns the dataset for the specified date range and time zone.

query = {
    "datasets": {"era5_reanalysis_single_levels": {"variables": ["t2m", "tp"]}},
    "space": {
        "clip": "polygon",  # bbox, point or polygon
        "averaging": True,  # spatial average of the variables within each polygon
        "geometry": gdf,
        "unique_id": "Station",  # unique column name in geodataframe
    },
    "time": {
        "timestep": "D",
        "aggregation": {"tp": np.nansum, "t2m": [np.nanmax, np.nanmin]},
        "start": "2000-01-01",
        "end": "2020-05-31",
        "timezone": "America/Montreal",
    },
}

Query climate datasets

To use xdatasets, you must import at least xdatasets, pandas, geopandas, and numpy. Additionally, we import pathlib to interact with files.

[1]:
import os
import warnings
from pathlib import Path

warnings.simplefilter("ignore")

os.environ["USE_PYGEOS"] = "0"
import geopandas as gpd

# Visualization
import hvplot.pandas  # noqa
import hvplot.xarray  # noqa
import numpy as np
import pandas as pd
import panel as pn  # noqa

import xdatasets as xd

Clip by points (sites)

To begin with, we need to create a dictionary of sites and their corresponding geographical coordinates.

[2]:
sites = {
    "Montreal": (45.508888, -73.561668),
    "New York": (40.730610, -73.935242),
    "Miami": (25.761681, -80.191788),
}

We will then extract the tp (total precipitation) and t2m (2m temperature) from the era5_reanalysis_single_levels dataset for the designated sites. Afterward, we will convert the time step to daily and adjust the timezone to Eastern Time. Finally, we will limit the temporal interval.

Before proceeding with this first query, let’s quickly outline the role of each parameter:

  • datasets: A dictionary where datasets serve as keys and desired variables as values.

  • space: A dictionary that defines the necessary spatial operations to apply on user-supplied geographic features.

  • time: A dictionary that defines the necessary temporal operations to apply on the datasets.

For more information on each parameter, consult the API documentation.

This is what the requested query looks like:

[3]:
query = {
    "datasets": "era5_reanalysis_single_levels",
    "space": {"clip": "point", "geometry": sites},  # bbox, point or polygon
    "time": {
        "timestep": "D",  # http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases
        "aggregation": {"tp": np.nansum, "t2m": np.nanmean},
        "start": "1995-01-01",
        "end": "2000-12-31",
        "timezone": "America/Montreal",
    },
}
xds = xd.Query(**query)
Temporal operations: processing tp with era5_reanalysis_single_levels: 100%|██████████| 2/2 [00:00<00:00,  2.86it/s]

By accessing the data attribute, you can view the data obtained from the query. It’s worth noting that the variable name tp has been updated to tp_nansum to reflect the reduction operation (np.nansum) that was utilized to convert the time step from hourly to daily. Likewise, t2m was updated to t2m_nanmean.

[4]:
xds.data
[4]:
<xarray.Dataset> Size: 70kB
Dimensions:      (spatial_agg: 1, timestep: 1, site: 3, time: 2192, source: 1)
Coordinates:
  * spatial_agg  (spatial_agg) object 8B 'point'
  * timestep     (timestep) object 8B 'D'
    latitude     (site) float64 24B 45.5 40.75 25.75
    longitude    (site) float64 24B -73.5 -74.0 -80.25
  * site         (site) <U8 96B 'Montreal' 'New York' 'Miami'
  * time         (time) datetime64[ns] 18kB 1995-01-01 1995-01-02 ... 2000-12-31
  * source       (source) <U29 116B 'era5_reanalysis_single_levels'
Data variables:
    t2m_nanmean  (spatial_agg, timestep, time, site) float32 26kB 266.9 ... 2...
    tp_nansum    (spatial_agg, timestep, time, site) float32 26kB 0.007034 .....
Attributes: (12/30)
    GRIB_NV:                                  0
    GRIB_Nx:                                  1440
    GRIB_Ny:                                  721
    GRIB_cfName:                              unknown
    GRIB_cfVarName:                           t2m
    GRIB_dataType:                            an
    ...                                       ...
    GRIB_totalNumber:                         0
    GRIB_typeOfLevel:                         surface
    GRIB_units:                               K
    long_name:                                2 metre temperature
    standard_name:                            unknown
    units:                                    K
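The attributes above indicate that t2m_nanmean is still in kelvin (units: K). If you prefer to plot in degrees Celsius, a small conversion sketch (the t2m_celsius name is ours, purely illustrative):

# t2m_nanmean is in kelvin per the `units` attribute above;
# subtracting 273.15 converts it to degrees Celsius.
t2m_celsius = xds.data["t2m_nanmean"] - 273.15
t2m_celsius.attrs["units"] = "°C"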
[5]:
title = f"Comparison of total precipitation across three cities in North America from \
{xds.data.time.dt.year.min().values} to {xds.data.time.dt.year.max().values}"

xds.data.sel(
    timestep="D",
    source="era5_reanalysis_single_levels",
).hvplot(
    title=title,
    x="time",
    y="tp_nansum",
    grid=True,
    width=750,
    height=450,
    by="site",
    legend="top",
    widget_location="bottom",
)
[5]: (interactive plot: total precipitation compared across the three cities)
[6]:
title = f"Comparison of 2m temperature across three cities in North America from \
{xds.data.time.dt.year.min().values} to {xds.data.time.dt.year.max().values}"

xds.data.sel(
    timestep="D",
    source="era5_reanalysis_single_levels",
).hvplot(
    title=title,
    x="time",
    y="t2m_nanmean",
    grid=True,
    width=750,
    height=450,
    by="site",
    legend="top",
    widget_location="bottom",
)
[6]: (interactive plot: 2m temperature compared across the three cities)

Clip on polygons with no averaging in space

First, let’s explore specific polygon features. With xdatasets, you can access geographical datasets, such as watershed boundaries linked to streamflow stations. These datasets follow a nomenclature where they are named after the hydrological dataset, with "_polygons" appended. For example, if the hydrological dataset is named deh, its corresponding watershed boundaries dataset will be labeled deh_polygons. The query below retrieves all polygons for the deh_polygons dataset.

gdf = xd.Query(**{"datasets": "deh_polygons"}).data

gdf

Because the data is loaded into memory, retrieving all polygons may take some time. To expedite this, we recommend applying filters, as illustrated below. Note that the filters are identical for a hydrological dataset and its corresponding geographical dataset; consequently, only watershed boundaries associated with existing hydrological data will be returned.

[7]:
import xdatasets as xd

gdf = xd.Query(
    **{
        "datasets": {
            "deh_polygons": {
                "id": ["0421*"],
            }
        }
    }
).data.reset_index()

gdf
[7]:
  Station  Superficie                                           geometry
0  042102  623.479187  POLYGON ((-78.57120 46.70742, -78.57112 46.707...
1  042103  579.479614  POLYGON ((-78.49014 46.64514, -78.49010 46.645...

Let’s examine the geographic locations of the polygon features.

[8]:
gdf.hvplot(
    geo=True,
    tiles="ESRI",
    color="Station",
    alpha=0.8,
    width=750,
    height=450,
    legend="top",
    hover_cols=["Station", "Superficie"],
)
[8]: (interactive map: the two watershed polygons over ESRI tiles)

The following query requests the variables t2m and tp from the era5_reanalysis_single_levels dataset, covering the period between January 1, 1959, and August 31, 1961, for the two polygons retrieved earlier. It is important to note that since averaging is set to False, no spatial averaging will be conducted, and a mask (raster) will be returned for each polygon.

[9]:
query = {
    "datasets": {"era5_reanalysis_single_levels": {"variables": ["t2m", "tp"]}},
    "space": {
        "clip": "polygon",  # bbox, point or polygon
        "averaging": False,  # spatial average of the variables within each polygon
        "geometry": gdf,
        "unique_id": "Station",  # unique column name in geodataframe
    },
    "time": {
        "start": "1959-01-01",
        "end": "1961-08-31",
    },
}

xds = xd.Query(**query)
Spatial operations: processing polygon 042103 with era5_reanalysis_single_levels: : 2it [00:00,  4.09it/s]

By accessing the data attribute, you can view the data obtained from the query. For each variable, the dimensions of time, latitude, longitude, and Station (the unique ID) are included. In addition, there is another variable called weights that is returned. This variable specifies the weight that should be assigned to each pixel if spatial averaging is conducted over a mask (polygon).

[10]:
xds.data
[10]:
<xarray.Dataset> Size: 2MB
Dimensions:    (Station: 2, time: 23376, latitude: 3, longitude: 2, source: 1)
Coordinates:
  * latitude   (latitude) float64 24B 46.25 46.5 46.75
  * longitude  (longitude) float64 16B -78.5 -78.25
  * time       (time) datetime64[ns] 187kB 1959-01-01 ... 1961-08-31T23:00:00
  * Station    (Station) object 16B '042102' '042103'
  * source     (source) <U29 116B 'era5_reanalysis_single_levels'
Data variables:
    t2m        (Station, time, latitude, longitude) float32 1MB 260.3 ... nan
    tp         (Station, time, latitude, longitude) float32 1MB nan nan ... nan
    weights    (Station, latitude, longitude) float64 96B 0.01953 0.1244 ... nan
Attributes:
    Conventions:               CF-1.6
    history:                   2022-11-10 02:03:41 GMT by grib_to_netcdf-2.25...
    pangeo-forge:inputs_hash:  c4e1de94d7bedf0a63629db8fa0633c03b7e266149e97f...
    pangeo-forge:recipe_hash:  66be9c20b44c1a0fca83c2dd2a6f147aecc5be14590f1f...
    pangeo-forge:version:      0.9.4

Weights are much easier to comprehend visually, so let’s examine the weights returned for station 042102. Notice that when selecting a single feature (Station 042102 in this case), the shape of our spatial dimensions is reduced to a 3x2 pixel area (latitude x longitude) that encompasses the entire feature.

[11]:
station = "042102"

ds_station = xds.data.sel(Station=station)
ds_clipped = xds.bbox_clip(ds_station).squeeze()
ds_clipped
[11]:
<xarray.Dataset> Size: 1MB
Dimensions:    (time: 23376, latitude: 3, longitude: 2)
Coordinates:
  * latitude   (latitude) float64 24B 46.25 46.5 46.75
  * longitude  (longitude) float64 16B -78.5 -78.25
  * time       (time) datetime64[ns] 187kB 1959-01-01 ... 1961-08-31T23:00:00
    Station    <U6 24B '042102'
    source     <U29 116B 'era5_reanalysis_single_levels'
Data variables:
    t2m        (time, latitude, longitude) float32 561kB 260.3 259.9 ... nan
    tp         (time, latitude, longitude) float32 561kB nan nan nan ... 0.0 nan
    weights    (latitude, longitude) float64 48B 0.01953 0.1244 ... 0.0481 nan
Attributes:
    Conventions:               CF-1.6
    history:                   2022-11-10 02:03:41 GMT by grib_to_netcdf-2.25...
    pangeo-forge:inputs_hash:  c4e1de94d7bedf0a63629db8fa0633c03b7e266149e97f...
    pangeo-forge:recipe_hash:  66be9c20b44c1a0fca83c2dd2a6f147aecc5be14590f1f...
    pangeo-forge:version:      0.9.4
[12]:
(
    (
        ds_clipped.t2m.isel(time=0).hvplot(
            title="The 2m temperature for pixels that intersect with the polygon on January 1, 1959",
            tiles="ESRI",
            geo=True,
            alpha=0.6,
            colormap="isolum",
            width=750,
            height=450,
        )
        * gdf[gdf.Station == station].hvplot(
            geo=True,
            width=750,
            height=450,
            legend="top",
            hover_cols=["Station", "Superficie"],
        )
    )
    + ds_clipped.weights.hvplot(
        title="The weights that should be assigned to each pixel when performing spatial averaging",
        tiles="ESRI",
        alpha=0.6,
        colormap="isolum",
        geo=True,
        width=750,
        height=450,
    )
    * gdf[gdf.Station == station].hvplot(
        geo=True,
        width=750,
        height=450,
        legend="top",
        hover_cols=["Station", "Superficie"],
    )
).cols(1)
[12]: (interactive plots: clipped 2m temperature pixels and the corresponding pixel weights)

The two plots depicted above show the 2m temperature for each pixel that intersects with the polygon from Station 042102 and the corresponding weights to be applied to each pixel. In the lower plot, it is apparent that the majority of the polygon is situated in the central pixels, which results in those pixels having a weight of approximately 80%. It is evident that the two lower and the upper pixels have much less intersection with the polygon, which results in their respective weights being smaller (hover on the plot to verify the weights).

In various libraries, either all pixels that intersect with the geometries are kept, or only pixels with centers within the polygon are retained. However, as shown in the previous example, utilizing such methods can introduce significant biases in the final calculations.
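As a minimal sketch, the weighted spatial average can also be computed manually from the returned weights using xarray’s weighted reductions (xdatasets performs this step for you when averaging is set to True); it reuses the ds_clipped object from the cells above:

# xarray's weighted() rejects NaN weights, so pixels outside the
# polygon are assigned a weight of zero before averaging.
weights = ds_clipped.weights.fillna(0)
t2m_weighted_mean = ds_clipped.t2m.weighted(weights).mean(["latitude", "longitude"])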

Clip on polygons with averaging in space

The following query requests the variables t2m and tp from the era5_reanalysis_single_levels and era5_land_reanalysis datasets, covering the period between January 1, 2014, and December 31, 2023, for the two polygons mentioned earlier. Note that when the averaging parameter is set to True, spatial averaging takes place. In addition, the weighted mask (raster) described earlier will be applied to generate a time series for each polygon.

Additional steps are carried out in the process, including converting the original hourly time step to a daily one. During this conversion, one or more temporal aggregations are applied to each variable, and the data is converted to the local time zone.

Note: If users prefer to pass multiple dictionaries instead of a single large one, the following format is also acceptable.

[13]:
datasets = {
    "era5_reanalysis_single_levels": {"variables": ["t2m", "tp"]},
    "era5_land_reanalysis": {"variables": ["t2m", "tp"]},
}
space = {
    "clip": "polygon",  # bbox, point or polygon
    "averaging": True,
    "geometry": gdf,  # 3 polygons
    "unique_id": "Station",
}
time = {
    "timestep": "D",
    "aggregation": {"tp": [np.nansum], "t2m": [np.nanmax, np.nanmin]},
    "start": "2014-01-01",
    "end": "2023-12-31",
    "timezone": "America/Montreal",
}

xds = xd.Query(datasets=datasets, space=space, time=time)
Spatial operations: processing polygon 042103 with era5_reanalysis_single_levels: : 2it [00:00,  3.92it/s]
Temporal operations: processing tp with era5_reanalysis_single_levels: 100%|██████████| 2/2 [00:01<00:00,  1.39it/s]
Spatial operations: processing polygon 042103 with era5_land_reanalysis: : 2it [00:00,  3.59it/s]
Temporal operations: processing tp with era5_land_reanalysis: 100%|██████████| 2/2 [00:01<00:00,  1.38it/s]
[14]:
xds.data
[14]:
<xarray.Dataset> Size: 380kB
Dimensions:      (spatial_agg: 1, timestep: 1, Station: 2, time: 3652, source: 2)
Coordinates:
  * spatial_agg  (spatial_agg) object 8B 'polygon'
  * timestep     (timestep) object 8B 'D'
  * Station      (Station) object 16B '042102' '042103'
  * time         (time) datetime64[ns] 29kB 2014-01-01 2014-01-02 ... 2023-12-31
  * source       (source) <U29 232B 'era5_land_reanalysis' 'era5_reanalysis_s...
Data variables:
    t2m_nanmax   (spatial_agg, timestep, Station, time, source) float64 117kB ...
    t2m_nanmin   (spatial_agg, timestep, Station, time, source) float64 117kB ...
    tp_nansum    (spatial_agg, timestep, Station, time, source) float64 117kB ...
Attributes: (12/30)
    GRIB_NV:                                  0
    GRIB_Nx:                                  1440
    GRIB_Ny:                                  721
    GRIB_cfName:                              unknown
    GRIB_cfVarName:                           t2m
    GRIB_dataType:                            an
    ...                                       ...
    GRIB_totalNumber:                         0
    GRIB_typeOfLevel:                         surface
    GRIB_units:                               K
    long_name:                                2 metre temperature
    standard_name:                            unknown
    units:                                    K
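Because both the daily maximum and minimum of t2m are returned, derived quantities are straightforward to compute; for instance, a one-line sketch of the daily temperature range (the dtr name is illustrative):

# Daily temperature range: aggregated daily max minus daily min.
dtr = xds.data["t2m_nanmax"] - xds.data["t2m_nanmin"]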
[15]:
(
    xds.data[["t2m_nanmax", "t2m_nanmin"]]
    .squeeze()
    .hvplot(
        x="time",
        groupby=["Station", "source"],
        width=750,
        height=400,
        grid=True,
        widget_location="bottom",
    )
)
[15]: (interactive plot: daily t2m_nanmax and t2m_nanmin by Station and source)

The resulting dataset can also be explored for the total precipitation (tp_nansum) variable:

[16]:
(
    xds.data[["tp_nansum"]]
    .squeeze()
    .hvplot(
        x="time",
        groupby=["Station", "source"],
        width=750,
        height=400,
        grid=True,
        widget_location="bottom",
        color="blue",
    )
)
[16]: (interactive plot: daily tp_nansum by Station and source)

Bounding box (bbox) around polygons

The following query requests the variable tp from the era5_land_reanalysis dataset, covering the period between January 1, 1969, and December 31, 1980, for the bounding box that delimits the two polygons mentioned earlier.

Additional steps are carried out in the process, including converting to the local time zone.

[17]:
query = {
    "datasets": {"era5_land_reanalysis": {"variables": ["tp"]}},
    "space": {
        "clip": "bbox",  # bbox, point or polygon
        "geometry": gdf,
    },
    "time": {
        "start": "1969-01-01",
        "end": "1980-12-31",
        "timezone": "America/Montreal",
    },
}


xds = xd.Query(**query)
[18]:
xds.data
[18]:
<xarray.Dataset> Size: 42MB
Dimensions:    (time: 105192, latitude: 9, longitude: 11, source: 1)
Coordinates:
  * latitude   (latitude) float64 72B 46.9 46.8 46.7 46.6 ... 46.3 46.2 46.1
  * longitude  (longitude) float64 88B -78.9 -78.8 -78.7 ... -78.1 -78.0 -77.9
  * time       (time) datetime64[ns] 842kB 1969-01-01 ... 1980-12-31T23:00:00
  * source     (source) <U20 80B 'era5_land_reanalysis'
Data variables:
    tp         (time, latitude, longitude) float32 42MB 5.794e-05 ... 0.0
Attributes:
    pangeo-forge:inputs_hash:  861eac2eb1671a56f9adbc727045a000f2eca94a84846e...
    pangeo-forge:recipe_hash:  f6822e2550713b90d6c2fb3f484825d600ceda6ba16548...
    pangeo-forge:version:      0.9.4
    timezone:                  America/Montreal

Let’s find out which day (24-hour period) was the rainiest in the entire region for the data retrieved in the previous cell.

[19]:
indexer = (
    xds.data.sel(source="era5_land_reanalysis")
    .tp.sum(["latitude", "longitude"])
    .rolling(time=24)
    .sum()
    .argmax("time")
    .values
)

xds.data.isel(time=indexer).time.dt.date.values.tolist()
[19]:
datetime.date(1980, 6, 20)

Let’s visualize the evolution of the hourly precipitation during that day. Note that each image (raster) delimits exactly the bounding box required to cover all polygons in the query. For full interactivity, run the code in a Jupyter Notebook.

[20]:
da = xds.data.tp.isel(time=slice(indexer - 24, indexer))
da = da.where(da > 0.0001, drop=True)

(da * 1000).squeeze().hvplot.quadmesh(
    width=750,
    height=450,
    geo=True,
    tiles="ESRI",
    groupby=["time"],
    legend="top",
    cmap="gist_ncar",
    widget_location="bottom",
    widget_type="scrubber",
    dynamic=False,
    clim=(0.01, 10),
)
[20]: (interactive scrubber animation: hourly precipitation rasters for the selected day)

Query hydrological datasets

Hydrological queries are still being tested, and the output format is likely to change. Stay tuned!

[21]:
query = {"datasets": "deh"}
xds = xd.Query(**query)
xds.data
[21]:
<xarray.Dataset> Size: 1GB
Dimensions:        (id: 745, variable: 2, spatial_agg: 2, timestep: 1,
                    time_agg: 1, source: 1, time: 60631)
Coordinates: (12/15)
    drainage_area  (id) float32 3kB dask.array<chunksize=(745,), meta=np.ndarray>
    end_date       (variable, id, spatial_agg, timestep, time_agg, source) datetime64[ns] 24kB dask.array<chunksize=(2, 745, 2, 1, 1, 1), meta=np.ndarray>
  * id             (id) object 6kB '010101' '010801' ... '104804' '120201'
    latitude       (id) float32 3kB dask.array<chunksize=(745,), meta=np.ndarray>
    longitude      (id) float32 3kB dask.array<chunksize=(745,), meta=np.ndarray>
    name           (id) object 6kB dask.array<chunksize=(745,), meta=np.ndarray>
    ...             ...
  * spatial_agg    (spatial_agg) object 16B 'point' 'watershed'
    start_date     (variable, id, spatial_agg, timestep, time_agg, source) datetime64[ns] 24kB dask.array<chunksize=(2, 745, 2, 1, 1, 1), meta=np.ndarray>
  * time           (time) datetime64[ns] 485kB 1860-01-01 ... 2025-12-31
  * time_agg       (time_agg) object 8B 'mean'
  * timestep       (timestep) object 8B 'D'
  * variable       (variable) object 16B 'level' 'streamflow'
Data variables:
    level          (id, time, variable, spatial_agg, timestep, time_agg, source) float32 723MB dask.array<chunksize=(1, 60631, 1, 1, 1, 1, 1), meta=np.ndarray>
    streamflow     (id, time, variable, spatial_agg, timestep, time_agg, source) float32 723MB dask.array<chunksize=(1, 60631, 1, 1, 1, 1, 1), meta=np.ndarray>
[22]:
ds = (
    xd.Query(
        **{
            "datasets": {
                "deh": {
                    "id": ["020*"],
                    "regulated": ["Natural"],
                    "variables": ["streamflow"],
                }
            },
            "time": {"start": "1970-01-01", "minimum_duration": (10 * 365, "d")},
        }
    )
    .data.squeeze()
    .load()
)

ds
[22]:
<xarray.Dataset> Size: 737kB
Dimensions:        (id: 7, time: 20454)
Coordinates: (12/15)
    drainage_area  (id) float32 28B 1.09e+03 1.09e+03 1.01e+03 ... 626.0 1.2e+03
    end_date       (id) datetime64[ns] 56B 1989-05-22 2006-10-13 ... 1996-08-13
  * id             (id) object 56B '020301' '020302' ... '020602' '020802'
    latitude       (id) float32 28B 48.77 48.77 48.83 48.81 48.98 48.98 49.2
    longitude      (id) float32 28B -64.52 -64.52 -64.63 ... -64.43 -64.7 -65.29
    name           (id) object 56B 'Saint' 'Saint' ... 'Dartmouth' 'Madeleine'
    ...             ...
    spatial_agg    <U9 36B 'watershed'
    start_date     (id) datetime64[ns] 56B 1979-05-15 1989-08-12 ... 1970-01-01
  * time           (time) datetime64[ns] 164kB 1970-01-01 ... 2025-12-31
    time_agg       <U4 16B 'mean'
    timestep       <U1 4B 'D'
    variable       <U10 40B 'streamflow'
Data variables:
    streamflow     (id, time) float32 573kB nan nan nan nan ... nan nan nan nan
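For a quick visual check of the filtered stations, the streamflow series can be plotted directly; a small sketch following the hvplot pattern used earlier (the by keyword overlays one curve per station):

# Overlay one streamflow curve per station id.
ds.streamflow.hvplot(x="time", by="id", width=750, height=400, grid=True)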
[23]:
query = {"datasets": "hydat"}
xds = xd.Query(**query)
xds.data
[23]:
<xarray.Dataset> Size: 841GB
Dimensions:        (data_type: 2, id: 7881, spatial_agg: 2, timestep: 1,
                    time_agg: 1, latitude: 2800, longitude: 4680, time: 59413)
Coordinates: (12/15)
  * data_type      (data_type) <U5 40B 'flow' 'level'
    drainage_area  (id) float64 63kB dask.array<chunksize=(10,), meta=np.ndarray>
    end_date       (id, data_type, spatial_agg, timestep, time_agg) object 252kB dask.array<chunksize=(7881, 2, 2, 1, 1), meta=np.ndarray>
  * id             (id) <U7 221kB '01AA002' '01AD001' ... '11AF004' '11AF005'
  * latitude       (latitude) float64 22kB 85.0 84.97 84.95 ... 15.05 15.02
  * longitude      (longitude) float64 37kB -167.0 -167.0 ... -50.05 -50.02
    ...             ...
    source         (id) object 63kB dask.array<chunksize=(7881,), meta=np.ndarray>
  * spatial_agg    (spatial_agg) object 16B 'point' 'watershed'
    start_date     (id, data_type, spatial_agg, timestep, time_agg) object 252kB dask.array<chunksize=(7881, 2, 2, 1, 1), meta=np.ndarray>
  * time           (time) datetime64[ns] 475kB 1860-01-01 ... 2022-08-31
  * time_agg       (time_agg) <U4 16B 'mean'
  * timestep       (timestep) <U3 12B 'day'
Data variables:
    mask           (id, latitude, longitude) float64 826GB dask.array<chunksize=(1, 500, 500), meta=np.ndarray>
    value          (id, time, data_type, spatial_agg, timestep, time_agg) float64 15GB dask.array<chunksize=(10, 59413, 1, 1, 1, 1), meta=np.ndarray>
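Given the size of the lazily loaded hydat dataset (note the dask arrays above), it is best to subset before materializing anything in memory. A minimal sketch, reusing a station id from the listing above:

# Select a single station's daily mean flow, then load only that slice.
flow = (
    xds.data["value"]
    .sel(id="01AD001", data_type="flow", spatial_agg="watershed")
    .squeeze()
    .load()
)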