{ "cells": [
 { "cell_type": "markdown", "metadata": {}, "source": [
  "# Getting started"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "The `xdatasets` library enables users to effortlessly access a vast collection of earth observation datasets in `xarray`-compatible formats.\n",
  "\n",
  "The library adopts an opinionated approach to data querying and caters to the specific needs of certain user groups, such as hydrologists, climate scientists, and engineers. One of the functionalities of `xdatasets` is the ability to extract data at a specific location or within a designated region, such as a watershed or municipality, while also enabling spatial and temporal operations."
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "To use `xdatasets`, users must employ a query. For instance, a straightforward query to extract the variables `t2m` (*2m temperature*) and `tp` (*total precipitation*) from the `era5_reanalysis_single_levels` dataset at two geographical positions (Montreal and Toronto) could be as follows:"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "```python\n",
  "query = {\n",
  "    \"datasets\": {\"era5_reanalysis_single_levels\": {\"variables\": [\"t2m\", \"tp\"]}},\n",
  "    \"space\": {\n",
  "        \"clip\": \"point\",  # bbox, point or polygon\n",
  "        \"geometry\": {\n",
  "            \"Montreal\": (45.508888, -73.561668),\n",
  "            \"Toronto\": (43.651070, -79.347015),\n",
  "        },\n",
  "    },\n",
  "}\n",
  "```"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "An example of a more complex query would look like the one below.\n",
  "\n",
  "> **Note**\n",
  "> Don't worry! Below, you'll find additional examples that will assist in understanding each parameter in the query, as well as the possible combinations.\n",
  "\n",
  "This query requests the same variables as above. However, instead of specifying geographical positions, a `geopandas.GeoDataFrame` is used to provide features (loaded from formats such as shapefiles or GeoJSON) within which the data is extracted. Each polygon is identified by the unique identifier `Station`, and a spatial average is computed within each one (`averaging: True`). The dataset, initially at an hourly time step, is converted into a daily time step while applying one or more temporal aggregations to each variable, as prescribed in the query. `xdatasets` ultimately returns the dataset for the specified date range and time zone."
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "```python\n",
  "query = {\n",
  "    \"datasets\": {\"era5_reanalysis_single_levels\": {\"variables\": [\"t2m\", \"tp\"]}},\n",
  "    \"space\": {\n",
  "        \"clip\": \"polygon\",  # bbox, point or polygon\n",
  "        \"averaging\": True,  # spatial average of the variables within each polygon\n",
  "        \"geometry\": gdf,\n",
  "        \"unique_id\": \"Station\",  # unique column name in the geodataframe\n",
  "    },\n",
  "    \"time\": {\n",
  "        \"timestep\": \"D\",\n",
  "        \"aggregation\": {\"tp\": np.nansum,\n",
  "                        \"t2m\": [np.nanmax, np.nanmin]},\n",
  "        \"start\": \"2000-01-01\",\n",
  "        \"end\": \"2020-05-31\",\n",
  "        \"timezone\": \"America/Montreal\",\n",
  "    },\n",
  "}\n",
  "```"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "## Query climate datasets"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "To use `xdatasets`, you must import at least `xdatasets`, `pandas`, `geopandas`, and `numpy`. Additionally, we import `pathlib` to interact with files."
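,
  "\n",
  "\n",
  "With these imports in place (see the next cell), executing a query is a two-step process: build the query dictionary, then pass it to `xd.Query`. A minimal sketch, reusing the first point query shown above (the `data` attribute exposes the result as an `xarray` object):\n",
  "\n",
  "```python\n",
  "query = {\n",
  "    \"datasets\": {\"era5_reanalysis_single_levels\": {\"variables\": [\"t2m\", \"tp\"]}},\n",
  "    \"space\": {\n",
  "        \"clip\": \"point\",  # bbox, point or polygon\n",
  "        \"geometry\": {\n",
  "            \"Montreal\": (45.508888, -73.561668),\n",
  "            \"Toronto\": (43.651070, -79.347015),\n",
  "        },\n",
  "    },\n",
  "}\n",
  "\n",
  "xds = xd.Query(**query)  # run the request\n",
  "xds.data  # the resulting dataset\n",
  "```"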
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "import os\n",
  "import warnings\n",
  "from pathlib import Path\n",
  "\n",
  "warnings.simplefilter(\"ignore\")\n",
  "\n",
  "# USE_PYGEOS must be set before geopandas is imported\n",
  "os.environ[\"USE_PYGEOS\"] = \"0\"\n",
  "import geopandas as gpd\n",
  "\n",
  "# Visualization\n",
  "import hvplot.pandas  # noqa\n",
  "import hvplot.xarray  # noqa\n",
  "import numpy as np\n",
  "import pandas as pd\n",
  "import panel as pn  # noqa\n",
  "\n",
  "import xdatasets as xd"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "### Clip by points (sites)\n",
  "\n",
  "To begin with, we need to create a dictionary of sites and their corresponding geographical coordinates."
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "sites = {\n",
  "    \"Montreal\": (45.508888, -73.561668),\n",
  "    \"New York\": (40.730610, -73.935242),\n",
  "    \"Miami\": (25.761681, -80.191788),\n",
  "}"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "We will then extract the variables `tp` (*total precipitation*) and `t2m` (*2m temperature*) from the `era5_reanalysis_single_levels` dataset for the designated sites. Afterward, we will convert the time step to daily and adjust the timezone to Eastern Time. Finally, we will limit the temporal interval."
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "Before proceeding with this first query, let's quickly outline the role of each parameter:\n",
  "\n",
  "- **datasets**: A dictionary where datasets serve as keys and desired variables as values.\n",
  "- **space**: A dictionary that defines the spatial operations to apply to user-supplied geographic features.\n",
  "- **time**: A dictionary that defines the temporal operations to apply to the datasets.\n",
  "\n",
  "For more information on each parameter, consult the API documentation.\n",
  "\n",
  "This is what the requested query looks like:"
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "query = {\n",
  "    \"datasets\": \"era5_reanalysis_single_levels\",\n",
  "    \"space\": {\"clip\": \"point\", \"geometry\": sites},  # bbox, point or polygon\n",
  "    \"time\": {\n",
  "        \"timestep\": \"D\",  # http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases\n",
  "        \"aggregation\": {\"tp\": np.nansum, \"t2m\": np.nanmean},\n",
  "        \"start\": \"1995-01-01\",\n",
  "        \"end\": \"2000-12-31\",\n",
  "        \"timezone\": \"America/Montreal\",\n",
  "    },\n",
  "}\n",
  "xds = xd.Query(**query)"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "By accessing the `data` attribute, you can view the data obtained from the query. It's worth noting that the variable name `tp` has been updated to `tp_nansum` to reflect the reduction operation (`np.nansum`) used to convert the time step from hourly to daily. Likewise, `t2m` was updated to `t2m_nanmean`."
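,
  "\n",
  "\n",
  "For intuition, this renaming mirrors what a manual resampling with `xarray` would produce. A rough sketch, where `hourly` is an assumed name for the raw hourly data that `xdatasets` fetches and aggregates internally (timezone handling omitted):\n",
  "\n",
  "```python\n",
  "import numpy as np\n",
  "\n",
  "# hypothetical: `hourly` is the raw hourly xarray.Dataset for one site\n",
  "tp_nansum = hourly.tp.resample(time=\"D\").reduce(np.nansum)  # daily precipitation total\n",
  "t2m_nanmean = hourly.t2m.resample(time=\"D\").reduce(np.nanmean)  # daily mean temperature\n",
  "```"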
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "xds.data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "title = f\"Comparison of total precipitation across three cities in North America from \\\n", "{xds.data.time.dt.year.min().values} to {xds.data.time.dt.year.max().values}\"\n", "\n", "xds.data.sel(\n", " timestep=\"D\",\n", " source=\"era5_reanalysis_single_levels\",\n", ").hvplot(\n", " title=title,\n", " x=\"time\",\n", " y=\"tp_nansum\",\n", " grid=True,\n", " width=750,\n", " height=450,\n", " by=\"site\",\n", " legend=\"top\",\n", " widget_location=\"bottom\",\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "title = f\"Comparison of 2m temperature across three cities in North America from \\\n", "{xds.data.time.dt.year.min().values} to {xds.data.time.dt.year.max().values}\"\n", "\n", "xds.data.sel(\n", " timestep=\"D\",\n", " source=\"era5_reanalysis_single_levels\",\n", ").hvplot(\n", " title=title,\n", " x=\"time\",\n", " y=\"t2m_nanmean\",\n", " grid=True,\n", " width=750,\n", " height=450,\n", " by=\"site\",\n", " legend=\"top\",\n", " widget_location=\"bottom\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Clip on polygons with no averaging in space\n", "\n", "First, let's explore specific polygon features. With `xdatasets`, you can access geographical datasets, such as watershed boundaries linked to streamflow stations. These datasets follow a nomenclature where they are named after the hydrological dataset, with `\"_polygons\"` appended. For example, if the hydrological dataset is named `deh`, its corresponding watershed boundaries dataset will be labeled `deh_polygons`. The query below retrieves all polygons for the `deh_polygons` dataset.\n", "\n", "```python\n", "gdf = xd.Query(\n", " **{\n", " \"datasets\": \"deh_polygons\"\n", "}).data\n", "\n", "gdf\n", "```\n", "\n", "As the data is loaded into memory, the process of loading all polygons may take some time. To expedite this, we recommend employing filters, as illustrated below. It's important to note that the filters are consistent for both hydrological and corresponding geographical datasets. Consequently, only watershed boundaries associated with existing hydrological data will be returned." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import xdatasets as xd\n", "\n", "gdf = xd.Query(\n", " **{\n", " \"datasets\": {\n", " \"deh_polygons\": {\n", " \"id\": [\"0421*\"],\n", " }\n", " }\n", " }\n", ").data.reset_index()\n", "\n", "gdf" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's examine the geographic locations of the polygon features." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gdf.hvplot(\n", " geo=True,\n", " tiles=\"ESRI\",\n", " color=\"Station\",\n", " alpha=0.8,\n", " width=750,\n", " height=450,\n", " legend=\"top\",\n", " hover_cols=[\"Station\", \"Superficie\"],\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following query seeks the variables `t2m` and `tp` from the `era5_reanalysis_single_levels` dataset, covering the period between January 1, 1959, and September 30, 1961, for the three polygons mentioned earlier. It is important to note that as `aggregation` is set to `False`, no spatial averaging will be conducted, and a mask (raster) will be returned for each polygon." 
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "query = {\n",
  "    \"datasets\": {\"era5_reanalysis_single_levels\": {\"variables\": [\"t2m\", \"tp\"]}},\n",
  "    \"space\": {\n",
  "        \"clip\": \"polygon\",  # bbox, point or polygon\n",
  "        \"averaging\": False,  # no spatial averaging; return a weighted mask per polygon\n",
  "        \"geometry\": gdf,\n",
  "        \"unique_id\": \"Station\",  # unique column name in the geodataframe\n",
  "    },\n",
  "    \"time\": {\n",
  "        \"start\": \"1959-01-01\",\n",
  "        \"end\": \"1961-08-31\",\n",
  "    },\n",
  "}\n",
  "\n",
  "xds = xd.Query(**query)"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "By accessing the `data` attribute, you can view the data obtained from the query. For each variable, the dimensions `time`, `latitude`, `longitude`, and `Station` (the unique ID) are included. In addition, a variable called `weights` is returned. It specifies the weight that should be assigned to each pixel if spatial averaging is conducted over a mask (polygon)."
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "xds.data"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "Weights are much easier to comprehend visually, so let's examine the weights returned for station *042102*. Notice that when selecting a single feature (station *042102* in this case), the shape of our spatial dimensions is reduced to a 3x2 pixel area (longitude x latitude) that encompasses the entire feature."
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "station = \"042102\"\n",
  "\n",
  "ds_station = xds.data.sel(Station=station)\n",
  "ds_clipped = xds.bbox_clip(ds_station).squeeze()\n",
  "ds_clipped"
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "(\n",
  "    (\n",
  "        ds_clipped.t2m.isel(time=0).hvplot(\n",
  "            title=\"The 2m temperature for pixels that intersect with the polygon on January 1, 1959\",\n",
  "            tiles=\"ESRI\",\n",
  "            geo=True,\n",
  "            alpha=0.6,\n",
  "            colormap=\"isolum\",\n",
  "            width=750,\n",
  "            height=450,\n",
  "        )\n",
  "        * gdf[gdf.Station == station].hvplot(\n",
  "            geo=True,\n",
  "            width=750,\n",
  "            height=450,\n",
  "            legend=\"top\",\n",
  "            hover_cols=[\"Station\", \"Superficie\"],\n",
  "        )\n",
  "    )\n",
  "    + ds_clipped.weights.hvplot(\n",
  "        title=\"The weights that should be assigned to each pixel when performing spatial averaging\",\n",
  "        tiles=\"ESRI\",\n",
  "        alpha=0.6,\n",
  "        colormap=\"isolum\",\n",
  "        geo=True,\n",
  "        width=750,\n",
  "        height=450,\n",
  "    )\n",
  "    * gdf[gdf.Station == station].hvplot(\n",
  "        geo=True,\n",
  "        width=750,\n",
  "        height=450,\n",
  "        legend=\"top\",\n",
  "        hover_cols=[\"Station\", \"Superficie\"],\n",
  "    )\n",
  ").cols(1)"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "The two plots depicted above show the 2m temperature for each pixel that intersects with the polygon of station `042102`, along with the corresponding weight applied to each pixel. In the lower plot, it is apparent that the majority of the polygon is situated in the central pixels, which results in those pixels having a weight of approximately 80%. ",
  "It is evident that the two lower and the upper pixels intersect the polygon much less, which results in their respective weights being smaller (hover over the plot to verify the weights).\n",
  "\n",
  "In various libraries, either all pixels that intersect the geometries are kept, or only pixels whose centers fall within the polygon are retained. However, as the previous example shows, such methods can introduce significant biases into the final calculations."
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "### Clip on polygons with averaging in space\n",
  "\n",
  "The following query seeks the variables `t2m` and `tp` from the `era5_reanalysis_single_levels` and `era5_land_reanalysis` datasets, covering the period between January 1, 2014, and December 31, 2023, for the three polygons mentioned earlier. Note that when the `averaging` parameter is set to `True`, spatial averaging takes place. In addition, the weighted mask (raster) described earlier will be applied to generate a time series for each polygon.\n",
  "\n",
  "Additional steps are carried out in the process, including converting the original hourly time step to a daily time step. During this conversion, various temporal aggregations are applied to each variable, and the data is converted to the local time zone.\n",
  "\n",
  "> **Note**\n",
  "> If users prefer to pass multiple dictionaries instead of a single large one, the following format is also considered acceptable."
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "datasets = {\n",
  "    \"era5_reanalysis_single_levels\": {\"variables\": [\"t2m\", \"tp\"]},\n",
  "    \"era5_land_reanalysis\": {\"variables\": [\"t2m\", \"tp\"]},\n",
  "}\n",
  "space = {\n",
  "    \"clip\": \"polygon\",  # bbox, point or polygon\n",
  "    \"averaging\": True,\n",
  "    \"geometry\": gdf,  # 3 polygons\n",
  "    \"unique_id\": \"Station\",\n",
  "}\n",
  "time = {\n",
  "    \"timestep\": \"D\",\n",
  "    \"aggregation\": {\"tp\": [np.nansum], \"t2m\": [np.nanmax, np.nanmin]},\n",
  "    \"start\": \"2014-01-01\",\n",
  "    \"end\": \"2023-12-31\",\n",
  "    \"timezone\": \"America/Montreal\",\n",
  "}\n",
  "\n",
  "xds = xd.Query(datasets=datasets, space=space, time=time)"
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "xds.data"
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "(\n",
  "    xds.data[[\"t2m_nanmax\", \"t2m_nanmin\"]]\n",
  "    .squeeze()\n",
  "    .hvplot(\n",
  "        x=\"time\",\n",
  "        groupby=[\"Station\", \"source\"],\n",
  "        width=750,\n",
  "        height=400,\n",
  "        grid=True,\n",
  "        widget_location=\"bottom\",\n",
  "    )\n",
  ")"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "The resulting dataset can also be explored through the total precipitation (`tp_nansum`) variable:"
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "(\n",
  "    xds.data[[\"tp_nansum\"]]\n",
  "    .squeeze()\n",
  "    .hvplot(\n",
  "        x=\"time\",\n",
  "        groupby=[\"Station\", \"source\"],\n",
  "        width=750,\n",
  "        height=400,\n",
  "        grid=True,\n",
  "        widget_location=\"bottom\",\n",
  "        color=\"blue\",\n",
  "    )\n",
  ")"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "### Bounding box (bbox) around polygons\n",
  "\n",
  "The following query seeks the variable `tp` from the `era5_land_reanalysis` dataset, covering the period between January 1, 1969, and December 31, 1980, for the bounding box that delimits the three polygons mentioned earlier.\n",
  "\n",
  "Additional steps are ",
  "carried out in the process, including converting to the local time zone."
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "query = {\n",
  "    \"datasets\": {\"era5_land_reanalysis\": {\"variables\": [\"tp\"]}},\n",
  "    \"space\": {\n",
  "        \"clip\": \"bbox\",  # bbox, point or polygon\n",
  "        \"geometry\": gdf,\n",
  "    },\n",
  "    \"time\": {\n",
  "        \"start\": \"1969-01-01\",\n",
  "        \"end\": \"1980-12-31\",\n",
  "        \"timezone\": \"America/Montreal\",\n",
  "    },\n",
  "}\n",
  "\n",
  "xds = xd.Query(**query)"
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "xds.data"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "Let's find out which day (24-hour period) was the rainiest in the entire region for the data retrieved in the previous cell."
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "# index of the hour ending the rainiest 24-hour window over the whole region\n",
  "indexer = (\n",
  "    xds.data.sel(source=\"era5_land_reanalysis\")\n",
  "    .tp.sum([\"latitude\", \"longitude\"])\n",
  "    .rolling(time=24)\n",
  "    .sum()\n",
  "    .argmax(\"time\")\n",
  "    .values\n",
  ")\n",
  "\n",
  "xds.data.isel(time=indexer).time.dt.date.values.tolist()"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "Let's visualize the evolution of the hourly precipitation during that day. Note that each image (raster) delimits exactly the bounding box required to cover all polygons in the query. Please note that for full interactivity, running the code in a Jupyter Notebook is necessary."
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "da = xds.data.tp.isel(time=slice(indexer - 24, indexer))\n",
  "da = da.where(da > 0.0001, drop=True)\n",
  "\n",
  "(da * 1000).squeeze().hvplot.quadmesh(\n",
  "    width=750,\n",
  "    height=450,\n",
  "    geo=True,\n",
  "    tiles=\"ESRI\",\n",
  "    groupby=[\"time\"],\n",
  "    legend=\"top\",\n",
  "    cmap=\"gist_ncar\",\n",
  "    widget_location=\"bottom\",\n",
  "    widget_type=\"scrubber\",\n",
  "    dynamic=False,\n",
  "    clim=(0.01, 10),\n",
  ")"
 ] },
 { "cell_type": "markdown", "metadata": {}, "source": [
  "## Query hydrological datasets\n",
  "\n",
  "Hydrological queries are still being tested, and the output format is likely to change. Stay tuned!"
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [
  "query = {\"datasets\": \"deh\"}\n",
  "xds = xd.Query(**query)\n",
  "xds.data"
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
  "ds = (\n",
  "    xd.Query(\n",
  "        **{\n",
  "            \"datasets\": {\n",
  "                \"deh\": {\n",
  "                    \"id\": [\"020*\"],\n",
  "                    \"regulated\": [\"Natural\"],\n",
  "                    \"variables\": [\"streamflow\"],\n",
  "                }\n",
  "            },\n",
  "            \"time\": {\"start\": \"1970-01-01\", \"minimum_duration\": (10 * 365, \"d\")},\n",
  "        }\n",
  "    )\n",
  "    .data.squeeze()\n",
  "    .load()\n",
  ")\n",
  "\n",
  "ds"
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [
  "query = {\"datasets\": \"hydat\"}\n",
  "xds = xd.Query(**query)\n",
  "xds.data"
 ] }
 ],
 "metadata": {
  "celltoolbar": "None",
  "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" },
  "language_info": {
   "codemirror_mode": { "name": "ipython", "version": 3 },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}