From 292cb19d911db85d66ddde745f270d3b7b1f3461 Mon Sep 17 00:00:00 2001 From: lilianschuster Date: Thu, 19 Feb 2026 11:39:53 +0100 Subject: [PATCH 1/2] oggm v163 updates, + removing typos --- notebooks/10minutes/dynamical_spinup.ipynb | 31 ++++-- notebooks/10minutes/machine_learning.ipynb | 68 ++++++------- .../10minutes/preprocessed_directories.ipynb | 19 ++-- notebooks/10minutes/run_with_gcm.ipynb | 90 ++++++++++++----- .../tutorials/building_the_prepro_gdirs.ipynb | 28 +++--- .../tutorials/centerlines_to_shape.ipynb | 4 +- notebooks/tutorials/deal_with_errors.ipynb | 6 +- notebooks/tutorials/dem_sources.ipynb | 12 +-- notebooks/tutorials/distribute_flowline.ipynb | 8 +- notebooks/tutorials/dynamical_spinup.ipynb | 98 ++++++++++--------- .../elevation_bands_vs_centerlines.ipynb | 8 +- .../tutorials/full_prepro_workflow.ipynb | 20 ++-- notebooks/tutorials/holoviz_intro.ipynb | 28 +++--- .../ingest_gridded_data_on_flowlines.ipynb | 14 +-- notebooks/tutorials/inversion.ipynb | 6 +- notebooks/tutorials/ioggm.ipynb | 4 +- .../tutorials/kcalving_parameterization.ipynb | 4 +- .../tutorials/massbalance_calibration.ipynb | 81 ++++++++------- .../tutorials/massbalance_global_params.ipynb | 26 +++-- .../tutorials/massbalance_perturbation.ipynb | 19 ++-- .../merge_gcm_runs_and_visualize.ipynb | 34 +++---- notebooks/tutorials/numeric_solvers.ipynb | 10 +- ...served_thickness_with_dynamic_spinup.ipynb | 6 +- notebooks/tutorials/oggm_shop.ipynb | 12 +-- notebooks/tutorials/plot_mass_balance.ipynb | 30 +++--- .../tutorials/preprocessing_errors.ipynb | 10 +- notebooks/tutorials/rgitopo_rgi6.ipynb | 4 +- notebooks/tutorials/rgitopo_rgi7.ipynb | 4 +- .../run_with_a_spinup_and_gcm_data.ipynb | 2 +- .../store_and_compress_glacierdirs.ipynb | 14 +-- .../tutorials/use_your_own_inventory.ipynb | 20 ++-- .../tutorials/where_are_the_flowlines.ipynb | 18 +--- notebooks/tutorials/working_with_rgi.ipynb | 4 +- notebooks/welcome.ipynb | 2 +- 34 files changed, 396 insertions(+), 348 
deletions(-) diff --git a/notebooks/10minutes/dynamical_spinup.ipynb b/notebooks/10minutes/dynamical_spinup.ipynb index 73096043..68749625 100644 --- a/notebooks/10minutes/dynamical_spinup.ipynb +++ b/notebooks/10minutes/dynamical_spinup.ipynb @@ -35,8 +35,16 @@ "\n", "# Locals\n", "import oggm.cfg as cfg\n", - "from oggm import utils, workflow, tasks, DEFAULT_BASE_URL\n", - "from oggm.shop import gcm_climate" + "from oggm import utils, workflow, DEFAULT_BASE_URL" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "DEFAULT_BASE_URL" ] }, { @@ -250,9 +258,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "This is not really visible in the plots above, but the \"old\" method of initialisation in OGGM had another issue. It assumed dynamical steady state at the begining of the simulation (the RGI date), which was required by the bed inversion process. This could lead to artifacts (mainly in the glacier length and area, as well as velocities) during the first few years of the simulation. The dynamical spinup addresses this issue by starting the simulation in 1980. \n", + "This is not really visible in the plots above, but the \"old\" method of initialisation in OGGM had another issue. It assumed a dynamical steady state at the beginning of the simulation (the RGI date), which was required by the bed inversion process. This could lead to artifacts (mainly in the glacier length and area, as well as velocities) during the first few years of the simulation. The dynamical spinup addresses this issue by starting the simulation in 1980.\n", "\n", - "One of the way to see the importance of the spinup is to have a look at glacier velocities. 
Let's plot glacier volocities along the flowline in the year 2005 (the first year we have velocities from both the dynamical spinup, and without the spinup (\"cold start\" from an equilibrium):" + "One of the ways to see the importance of the spinup is to have a look at glacier velocities. Let's plot glacier velocities along the flowline in the year 2005 (the first year we have velocities from both the dynamical spinup and the run without spinup, a \"cold start\" from an equilibrium):" ] },
preprocessed directories which contain most data available in [the OGGM shop](https://docs.oggm.org/en/stable/input-data.html) to illustrate how these could be used to inform data-based workflows. The data that is available in the shop and is show cased here, is more than is required for the regular OGGM workflow, which you will see in a bit." + "We use preprocessed directories which contain most data available in [the OGGM shop](https://docs.oggm.org/en/stable/input-data.html) to illustrate how these could be used to inform data-based workflows. The data that is available in the shop and is showcased here is more than is required for the regular OGGM workflow, as you will see in a bit." ] }, { @@ -71,7 +71,7 @@ "# Local working directory (where OGGM will write its output)\n", "cfg.PATHS['working_dir'] = utils.gettempdir('OGGM_Toy_Thickness_Model')\n", "# We use the preprocessed directories with additional data in it: \"W5E5_w_data\" \n", - "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/elev_bands/W5E5_w_data/'\n", + "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/elev_bands/W5E5_w_data/' ### TODO: update to 2025.6\n", "gdirs = workflow.init_glacier_directories(['RGI60-01.16195'], from_prepro_level=3, prepro_base_url=base_url, prepro_border=10)" ] }, @@ -147,7 +147,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Lets start with the [ITSLIVE](https://its-live.jpl.nasa.gov/#data) velocity data: " + "Let's start with the [ITSLIVE](https://its-live.jpl.nasa.gov/#data) velocity data:" ] }, { @@ -390,7 +390,7 @@ }, "outputs": [], "source": [ - "ds.slope.plot();\n", + "ds.slope.plot()\n", "plt.axis('equal');" ] }, @@ -422,7 +422,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Not convinced yet? Lets spend 10 more minutes to apply a (very simple) machine learning workflow " + "## Not convinced yet? 
Let's spend 10 more minutes to apply a (very simple) machine learning workflow" ] }, { @@ -491,7 +491,7 @@ "source": [ "geom = gdir.read_shapefile('outlines')\n", "f, ax = plt.subplots()\n", - "df.plot.scatter(x='x', y='y', c='thick', cmap='viridis', s=10, ax=ax);\n", + "df.plot.scatter(x='x', y='y', c='thick', cmap='viridis', s=10, ax=ax)\n", "geom.plot(ax=ax, facecolor='none', edgecolor='k');" ] }, @@ -524,7 +524,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Here, we will keep them all and interpolate the variables of interest at the point's location. We use [xarray](http://xarray.pydata.org/en/stable/interpolation.html#advanced-interpolation) for this:" + "Here, we will keep them all and interpolate the variables of interest at the point's location. We use [xarray](https://xarray.pydata.org/en/stable/interpolation.html#advanced-interpolation) for this:" ] }, { @@ -575,8 +575,8 @@ "outputs": [], "source": [ "f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(16, 5))\n", - "df.plot.scatter(x='dis_from_border', y='thick', ax=ax1); ax1.set_title('dis_from_border');\n", - "df.plot.scatter(x='slope', y='thick', ax=ax2); ax2.set_title('slope');\n", + "df.plot.scatter(x='dis_from_border', y='thick', ax=ax1); ax1.set_title('dis_from_border')\n", + "df.plot.scatter(x='slope', y='thick', ax=ax2); ax2.set_title('slope')\n", "df.plot.scatter(x='oggm_mb_above_z', y='thick', ax=ax3); ax3.set_title('oggm_mb_above_z');" ] }, @@ -584,7 +584,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "There is a negative correlation with slope (as expected), a positive correlation with the mass-flux (oggm_mb_above_z), and a slight positive correlation with the distance from the glacier boundaries. 
There is also some correlaction with ice velocity, but not a strong one:" + "There is a negative correlation with slope (as expected), a positive correlation with the mass-flux (oggm_mb_above_z), and a slight positive correlation with the distance from the glacier boundaries. There is also some correlation with ice velocity, but not a strong one:" ] }, { @@ -596,7 +596,7 @@ "outputs": [], "source": [ "f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))\n", - "df.plot.scatter(x='millan_v', y='thick', ax=ax1); ax1.set_title('millan_v');\n", + "df.plot.scatter(x='millan_v', y='thick', ax=ax1); ax1.set_title('millan_v')\n", "df.plot.scatter(x='itslive_v', y='thick', ax=ax2); ax2.set_title('itslive_v');" ] }, @@ -662,7 +662,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We now have 9 times less points, but the main features of the data remain unchanged:" + "We now have 9 times fewer points, but the main features of the data remain unchanged:" ] }, { @@ -674,8 +674,8 @@ "outputs": [], "source": [ "f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(16, 5))\n", - "df_agg.plot.scatter(x='dis_from_border', y='thick', ax=ax1);\n", - "df_agg.plot.scatter(x='slope', y='thick', ax=ax2);\n", + "df_agg.plot.scatter(x='dis_from_border', y='thick', ax=ax1)\n", + "df_agg.plot.scatter(x='slope', y='thick', ax=ax2)\n", "df_agg.plot.scatter(x='oggm_mb_above_z', y='thick', ax=ax3);" ] }, @@ -702,7 +702,7 @@ "outputs": [], "source": [ "import seaborn as sns\n", - "plt.figure(figsize=(10, 8));\n", + "plt.figure(figsize=(10, 8))\n", "sns.heatmap(df.corr(), cmap='RdBu');" ] }, @@ -780,9 +780,9 @@ "source": [ "odf = df.copy()\n", "odf['thick_predicted'] = lasso_cv.predict(data.values)\n", - "f, ax = plt.subplots(figsize=(6, 6));\n", - "odf.plot.scatter(x='thick', y='thick_predicted', ax=ax);\n", - "plt.xlim([-25, 220]);\n", + "f, ax = plt.subplots(figsize=(6, 6))\n", + "odf.plot.scatter(x='thick', y='thick_predicted', ax=ax)\n", + "plt.xlim([-25, 220])\n", "plt.ylim([-25, 220]);" 
] }, @@ -848,7 +848,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The fact that the hyper-parameter alpha and the score change that much between iterations is a sign that the model isn't very robust." + "The fact that the hyperparameter alpha and the score change that much between iterations is a sign that the model isn't very robust." ] }, { @@ -922,7 +922,7 @@ "- we used two methods to extract these data at point locations: with interpolation or with aggregated averages on each grid point\n", "- as an application example, we trained a linear regression model to predict the ice thickness of this glacier at unseen locations\n", "\n", - "The model we developed was quite bad and we used quite lousy statistics. If I had more time to make it better, I would:\n", + "The model we developed was quite bad, and we used quite lousy statistics. If I had more time to make it better, I would:\n", "- make a pre-selection of meaningful predictors to avoid discontinuities\n", "- use a non-linear model\n", "- use cross-validation to better asses the true skill of the model\n", @@ -956,7 +956,7 @@ }, "outputs": [], "source": [ - "# Write our thinckness estimates back to disk\n", + "# Write our thickness estimates back to disk\n", "ds.to_netcdf(gdir.get_filepath('gridded_data'))\n", "# Distribute OGGM thickness using default values only\n", "workflow.execute_entity_task(tasks.distribute_thickness_per_altitude, gdirs);" @@ -994,11 +994,11 @@ }, "outputs": [], "source": [ - "f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(12, 10));\n", - "ds['linear_model_thick'].plot(ax=ax1); ax1.set_title('Statistical model');\n", - "ds['distributed_thickness'].plot(ax=ax2); ax2.set_title('OGGM');\n", - "ds['millan_ice_thickness'].where(ds.glacier_mask).plot(ax=ax3); ax3.set_title('Millan 2022');\n", - "ds['consensus_ice_thickness'].plot(ax=ax4); ax4.set_title('Farinotti 2019');\n", + "f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(12, 10))\n", + 
"ds['linear_model_thick'].plot(ax=ax1); ax1.set_title('Statistical model')\n", + "ds['distributed_thickness'].plot(ax=ax2); ax2.set_title('OGGM')\n", + "ds['millan_ice_thickness'].where(ds.glacier_mask).plot(ax=ax3); ax3.set_title('Millan 2022')\n", + "ds['consensus_ice_thickness'].plot(ax=ax4); ax4.set_title('Farinotti 2019')\n", "plt.tight_layout();" ] }, @@ -1010,14 +1010,14 @@ }, "outputs": [], "source": [ - "f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(12, 10));\n", - "df_agg.plot.scatter(x='thick', y='linear_model_thick', ax=ax1);\n", - "ax1.set_xlim([-25, 220]); ax1.set_ylim([-25, 220]); ax1.set_title('Statistical model');\n", - "df_agg.plot.scatter(x='thick', y='oggm_thick', ax=ax2);\n", - "ax2.set_xlim([-25, 220]); ax2.set_ylim([-25, 220]); ax2.set_title('OGGM');\n", - "df_agg.plot.scatter(x='thick', y='millan_thick', ax=ax3);\n", - "ax3.set_xlim([-25, 220]); ax3.set_ylim([-25, 220]); ax3.set_title('Millan 2022');\n", - "df_agg.plot.scatter(x='thick', y='consensus_thick', ax=ax4);\n", + "f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(12, 10))\n", + "df_agg.plot.scatter(x='thick', y='linear_model_thick', ax=ax1)\n", + "ax1.set_xlim([-25, 220]); ax1.set_ylim([-25, 220]); ax1.set_title('Statistical model')\n", + "df_agg.plot.scatter(x='thick', y='oggm_thick', ax=ax2)\n", + "ax2.set_xlim([-25, 220]); ax2.set_ylim([-25, 220]); ax2.set_title('OGGM')\n", + "df_agg.plot.scatter(x='thick', y='millan_thick', ax=ax3)\n", + "ax3.set_xlim([-25, 220]); ax3.set_ylim([-25, 220]); ax3.set_title('Millan 2022')\n", + "df_agg.plot.scatter(x='thick', y='consensus_thick', ax=ax4)\n", "ax4.set_xlim([-25, 220]); ax4.set_ylim([-25, 220]); ax4.set_title('Farinotti 2019');" ] } diff --git a/notebooks/10minutes/preprocessed_directories.ipynb b/notebooks/10minutes/preprocessed_directories.ipynb index 0b4c1937..2352bcc8 100644 --- a/notebooks/10minutes/preprocessed_directories.ipynb +++ b/notebooks/10minutes/preprocessed_directories.ipynb @@ -192,7 +192,7 @@ "- 
`RGI60-11.01450`: [Aletsch Glacier](https://en.wikipedia.org/wiki/Aletsch_Glacier) in the Swiss Alps\n", "\n", "Here is a list of other glaciers you might want to try out:\n", - "- `RGI60-11.00897`: [Hintereisferner](http://acinn.uibk.ac.at/research/ice-and-climate/projects/hintereisferner) in the Austrian Alps.\n", + "- `RGI60-11.00897`: [Hintereisferner](https://www.uibk.ac.at/en/acinn/research/ice-and-climate/projects/hintereisferner/) in the Austrian Alps.\n", "- `RGI60-18.02342`: [Tasman Glacier](https://en.wikipedia.org/wiki/Tasman_Glacier) in New Zealand\n", "- `RGI60-11.00787`: [Kesselwandferner](https://de.wikipedia.org/wiki/Kesselwandferner) in the Austrian Alps\n", "- ... or any other glacier identifier! You can find other glacier identifiers by exploring the [GLIMS viewer](https://www.glims.org/maps/glims). See the [working with the RGI](../tutorials/working_with_rgi.ipynb) tutorial for an introduction on RGI IDs and the GLIMS browser.\n", @@ -215,7 +215,7 @@ "\n", "To handle this situation, OGGM uses a workflow based on data persistence on disk: instead of passing data as python variables from one task to another, each task will read the data from disk and then write the computation results back to the disk, making these new data available for the next task in the queue. These glacier specific data are located in [glacier directories](https://docs.oggm.org/en/stable/generated/oggm.GlacierDirectory.html#oggm.GlacierDirectory). \n", "\n", - "One main advantage of this workflow is that OGGM can prepare data and make it available to everyone! Here is an example of an url where such data can be found:" + "One main advantage of this workflow is that OGGM can prepare data and make it available to everyone! Here is an example of a url where such data can be found:" ] }, { @@ -250,7 +250,7 @@ " rgi_ids, # which glaciers?\n", " prepro_base_url=DEFAULT_BASE_URL, # where to fetch the data?\n", " from_prepro_level=4, # what kind of data? 
\n", - " prepro_border=80 # how big of a map?\n", + " prepro_border=160 # how big of a map? ## TODO update back to 80 if made available\n", ")" ] }, @@ -258,7 +258,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "- the keyword `from_prepro_level` indicates that we will start from [pre-processed directories](https://docs.oggm.org/en/stable/shop.html#pre-processed-directories), i.e. data that are already prepared by the OGGM team. In many cases you will want to start from pre-processed directories, in most cases from level 3 or 5. For level 3 and above the model has already been calibrated, so you no longer need to do that yourself and can start rigth away with your simulation. Here we start from level 4 and add some data to the processing in order to demonstrate the OGGM workflow.\n", + "- the keyword `from_prepro_level` indicates that we will start from [pre-processed directories](https://docs.oggm.org/en/stable/shop.html#pre-processed-directories), i.e. data that are already prepared by the OGGM team. In many cases you will want to start from pre-processed directories, in most cases from level 3 or 5. For level 3 and above the model has already been calibrated, so you no longer need to do that yourself and can start right away with your simulation. Here we start from level 4 and add some data to the processing in order to demonstrate the OGGM workflow.\n", "- the `prepro_border` keyword indicates the number of grid points which we'd like to add to each side of the glacier for the local map: the larger the glacier will grow, the larger the border parameter should be. The available pre-processed border values are: **10, 80, 160, 240** (depending on the model set-ups there might be more or less options). These are the fixed map sizes we prepared for you - any other map size will require a full processing (see the [further DEM sources example](../tutorials/dem_sources.ipynb) for a tutorial)." 
] }, @@ -384,7 +384,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Glacier directories are the central object for model users and developpers to access data for this glacier. Let's say for example that you would like to retrieve the climate data that we have prepared for you. You can ask the glacier directory to tell you where this data is:" + "Glacier directories are the central object for model users and developers to access data for this glacier. Let's say for example that you would like to retrieve the climate data that we have prepared for you. You can ask the glacier directory to tell you where this data is:" ] }, { @@ -419,8 +419,8 @@ "with xr.open_dataset(gdir.get_filepath('climate_historical')) as ds:\n", " ds = ds.load()\n", "# Plot the data\n", - "ds.temp.resample(time='YS').mean().plot(label=f'Annual temperature at {int(ds.ref_hgt)}m a.s.l.');\n", - "ds.temp.resample(time='YS').mean().rolling(time=31, center=True, min_periods=15).mean().plot(label='30yr average');\n", + "ds.temp.resample(time='YS').mean().plot(label=f'Annual temperature at {int(ds.ref_hgt)}m a.s.l.')\n", + "ds.temp.resample(time='YS').mean().rolling(time=31, center=True, min_periods=15).mean().plot(label='30yr average')\n", "plt.legend();" ] }, @@ -461,10 +461,10 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "There are two different types of \"[tasks](http://docs.oggm.org/en/stable/api.html#entity-tasks)\" in OGGM:\n", + "There are two different types of \"[tasks](https://docs.oggm.org/en/stable/api.html#entity-tasks)\" in OGGM:\n", "\n", "**Entity Tasks**\n", - ": Standalone operations to be realized on one single glacier entity, independently from the others. The majority of OGGM tasks are entity tasks. They are parallelisable: the same task can run on several glaciers in parallel.\n", + ": Standalone operations to be realized on one single glacier entity, independently of the others. The majority of OGGM tasks are entity tasks. 
They are parallelisable: the same task can run on several glaciers in parallel.\n", "\n", "**Global Tasks**\n", ": Tasks which require to work on several glacier entities at the same time. Model parameter calibration or the compilation of several glaciers' output are examples of global tasks. \n", @@ -539,6 +539,7 @@ "## What's next?\n", "\n", "- visit the next tutorial: 10 minutes to... [a glacier change projection with GCM data](run_with_gcm.ipynb)\n", + "- do you want to understand how the preprocessed directories are built? Check out [this step-by-step guide](../tutorials/full_prepro_workflow.ipynb)\n", "- back to the [table of contents](../welcome.ipynb)\n", "- return to the [OGGM documentation](https://docs.oggm.org)" ] diff --git a/notebooks/10minutes/run_with_gcm.ipynb b/notebooks/10minutes/run_with_gcm.ipynb index 3f69fc8c..fb364093 100644 --- a/notebooks/10minutes/run_with_gcm.ipynb +++ b/notebooks/10minutes/run_with_gcm.ipynb @@ -11,7 +11,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "In this example, we illustrate how to do a typical \"projection run\", i.e. using GCM data. Here we will first use already bias-corrected CMIP6 data from [ISIMIP3b](https://www.isimip.org/gettingstarted/isimip3b-bias-adjustment) and than show how alternatives like the original CMIP5 and CMIP6 data can be used. \n", + "In this example, we illustrate how to do a typical \"projection run\", i.e. using GCM data. 
Here we will first use already bias-corrected CMIP6 data from [ISIMIP3b](https://www.isimip.org/gettingstarted/isimip3b-bias-adjustment) and then show how alternatives like the original CMIP5 and CMIP6 data can be used.\n", "\n", "There are three important steps:\n", "- download the OGGM pre-processed directories containing a pre-calibrated and spun-up glacier model\n", @@ -35,7 +35,6 @@ "outputs": [], "source": [ "# Libs\n", - "import xarray as xr\n", "import matplotlib.pyplot as plt\n", "\n", "# Locals\n", @@ -108,7 +107,7 @@ "source": [ "ds = utils.compile_run_output(gdirs, input_filesuffix='_spinup_historical')\n", "vol_ref2000 = ds.volume / ds.volume.sel(time=2000) * 100\n", - "vol_ref2000.plot(hue='rgi_id');\n", + "vol_ref2000.plot(hue='rgi_id')\n", "plt.ylabel('Volume (%, reference 2000)');" ] }, @@ -116,7 +115,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Each RGI glacier has an \"inventory date\", the time at which the ouline is valid:" + "Each RGI glacier has an \"inventory date\", the time at which the outline is valid:" ] }, { @@ -148,7 +147,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "A typical use case for OGGM will be to use climate model output (here bias-corrected CMIP6 GCMs from [ISIMIP3b](https://www.isimip.org/gettingstarted/isimip3b-bias-adjustment/)). We use the files [we mirrored in Bremen](https://cluster.klima.uni-bremen.de/~oggm/cmip6/isimip3b/flat/monthly/) here, but you can use whichever you want. From ISIMIP3b, we have 5 GCMs and 3 SSPs on the cluster. You can find more information on the [ISIMIP website](https://www.isimip.org/gettingstarted/isimip3b-bias-adjustment). Let's download the data:" + "A typical use case for OGGM will be to use climate model output (here bias-corrected CMIP6 GCMs from [ISIMIP3b](https://www.isimip.org/gettingstarted/isimip3b-bias-adjustment/)). 
We use the files [we mirrored in Bremen](https://cluster.klima.uni-bremen.de/~oggm/cmip6/isimip3b/flat/monthly/) here, but you can use whichever you want. From ISIMIP3b, we have 14 GCMs with at least three SSPs per GCM on the cluster (check out https://cluster.klima.uni-bremen.de/~oggm/cmip6/isimip3b/flat/2025.11.25/monthly/ to see the names of the GCMs). You can find more information on the [ISIMIP3b Zenodo](https://data.isimip.org/10.48364/ISIMIP.581124.2). Let's download the data:" ] }, { @@ -159,7 +158,8 @@ }, "outputs": [], "source": [ - "# you can choose one of these 5 different GCMs:\n", + "# you can choose from in total 14 different climate models (GCMs).\n", + "# Here are for example the 5 primary GCMs:\n", "# 'gfdl-esm4_r1i1p1f1', 'mpi-esm1-2-hr_r1i1p1f1', 'mri-esm2-0_r1i1p1f1' (\"low sensitivity\" models, within typical ranges from AR6)\n", "# 'ipsl-cm6a-lr_r1i1p1f1', 'ukesm1-0-ll_r1i1p1f2' (\"hotter\" models, especially ukesm1-0-ll)\n", "member = 'mri-esm2-0_r1i1p1f1' \n", @@ -172,7 +172,7 @@ " member=member,\n", " # recognize the climate file for later\n", " output_filesuffix=f'_ISIMIP3b_{member}_{ssp}'\n", - " );" + " )" ] }, { @@ -224,7 +224,7 @@ " climate_input_filesuffix=rid, # use the chosen scenario\n", " init_model_filesuffix='_spinup_historical', # this is important! 
Start from 2020 glacier\n", " output_filesuffix=rid, # recognize the run for later\n", - " );" + " )" ] }, { @@ -250,8 +250,8 @@ " # Compile the output into one file\n", " ds = utils.compile_run_output(gdirs, input_filesuffix=rid)\n", " # Plot it\n", - " ds.isel(rgi_id=0).volume.plot(ax=ax1, label=ssp, c=color_dict[ssp]);\n", - " ds.isel(rgi_id=1).volume.plot(ax=ax2, label=ssp, c=color_dict[ssp]);\n", + " ds.isel(rgi_id=0).volume.plot(ax=ax1, label=ssp, c=color_dict[ssp])\n", + " ds.isel(rgi_id=1).volume.plot(ax=ax2, label=ssp, c=color_dict[ssp])\n", "plt.legend();" ] }, @@ -273,15 +273,16 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "ISIMIP data is very useful because it is bias corrected. Furthermore, it offers daily data (which we will soon make available in OGGM).\n", + "ISIMIP data is very useful because it is bias corrected. Furthermore, it offers daily data, which we will soon use in OGGM.\n", "\n", "But you may want a higher diversity of models or scenarios: for this, you may also use the CMIP5 or CMIP6 GCMs directly. These need to be bias-corrected first to the applied baseline climate (see [process_gcm_data](https://docs.oggm.org/en/stable/generated/oggm.tasks.process_gcm_data.html#oggm.shop.gcm_climate.process_gcm_data)). This relatively simple bias-correction is automatically done by `process_cmip_data` and is very important, as the model is very sensitive to temperature variability (see the following [blogpost](https://oggm.org/2021/08/05/mean-forcing/) for more details).\n", - "- CMIP5 has 4 different RCP scenarios and a variety of GCMs, online you can find them [here](https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng). The above mentioned storage contains information about the data, [how to cite them](https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng/README) and [tabular summaries](https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng/all_gcm_table.html) of the available GCMs. 
\n", - "- CMIP6 has 4 different SSP scenarios, see [this table](https://cluster.klima.uni-bremen.de/~oggm/cmip6/all_gcm_table.html) for a summary of available GCMs. There are even some CMIP6 runs that go until [2300](https://cluster.klima.uni-bremen.de/~oggm/cmip6/gcm_table_2300.html).\n", "\n", - "> Note, that the CMIP5 and CMIP6 files are much larger than the ISIMIP3b files. This is because we use a simple processing trick for the ISIMIP3b GCM files as we only save the glacier gridpoints, instead of the entire globe for CMIP5 and CMIP6.0 \n", "\n", - "**Therefore: run the following code only if it is ok to download a few gigabytes of data.** Set the variable below to true to run it. " + "- CMIP5 has 4 different RCP scenarios and a variety of GCMs, you can find them online [here](https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng). The above-mentioned storage contains information about the data, [how to cite them](https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng/README) and [tabular summaries](https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng/all_gcm_table.html) of the available GCMs.\n", + "- CMIP6 has up to 8 different SSP scenarios, see [this table](https://cluster.klima.uni-bremen.de/~oggm/cmip6/gcm_table_2100.html) for a summary of available GCMs. There are even some CMIP6 runs that go until [2300](https://cluster.klima.uni-bremen.de/~oggm/cmip6/gcm_table_2300.html).\n", "\n", + "> Note that the CMIP5 and CMIP6 files are much larger than the ISIMIP3b files. This is because we use a simple processing trick for the ISIMIP3b GCM files as we only save the glacier gridpoints, instead of the entire globe for CMIP5 and CMIP6.\n", "\n", + "**Therefore: run the following code only if it is ok to download a few gigabytes of data.** Set the variable below to true to run it.\n", + "(**Attention! 
This may take some time ...**)" ] }, { @@ -320,7 +321,7 @@ " filesuffix='_CMIP5_CCSM4_{}'.format(rcp), # recognize the climate file for later\n", " fpath_temp=ft, # temperature projections\n", " fpath_precip=fp, # precip projections\n", - " );\n", + " )\n", "\n", " # Run OGGM\n", " for rcp in ['rcp26', 'rcp45', 'rcp85']: #'rcp60',\n", @@ -330,16 +331,16 @@ " climate_input_filesuffix=rid, # use the chosen scenario\n", " init_model_filesuffix='_historical', # this is important! Start from 2020 glacier\n", " output_filesuffix=rid, # recognize the run for later\n", - " );\n", + " )\n", "\n", " # Plot\n", " f, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 4))\n", " for rcp in ['rcp26', 'rcp45', 'rcp85']: #'rcp60',\n", " rid = '_CMIP5_CCSM4_{}'.format(rcp)\n", " ds = utils.compile_run_output(gdirs, input_filesuffix=rid)\n", - " ds.isel(rgi_id=0).volume.plot(ax=ax1, label=rcp, c=color_dict_rcp[rcp]);\n", - " ds.isel(rgi_id=1).volume.plot(ax=ax2, label=rcp, c=color_dict_rcp[rcp]);\n", - " plt.legend();" + " ds.isel(rgi_id=0).volume.plot(ax=ax1, label=rcp, c=color_dict_rcp[rcp])\n", + " ds.isel(rgi_id=1).volume.plot(ax=ax2, label=rcp, c=color_dict_rcp[rcp])\n", + " plt.legend()" ] }, { @@ -348,7 +349,7 @@ "source": [ "Now, the same for CMIP6 but instead of RCPs, now SSPs and again with another GCM:\n", "\n", - "(**Attention! This may take some time ...**) Set the variable below to true to run it." + "Set the variable below to true to run it." ] }, { @@ -385,7 +386,7 @@ " filesuffix='_CMIP6_CESM2_{}'.format(ssp), # recognize the climate file for later\n", " fpath_temp=ft, # temperature projections\n", " fpath_precip=fp, # precip projections\n", - " );\n", + " )\n", "\n", " # Run OGGM\n", " for ssp in ['ssp126', 'ssp585']:\n", @@ -395,19 +396,56 @@ " climate_input_filesuffix=rid, # use the chosen scenario\n", " init_model_filesuffix='_historical', # this is important! 
Start from 2020 glacier\n", " output_filesuffix=rid, # recognize the run for later\n", - " );\n", + " )\n", "\n", " # Plot\n", " f, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 4))\n", " for ssp in ['ssp126', 'ssp585']:\n", " rid = '_CMIP6_CESM2_{}'.format(ssp)\n", " ds = utils.compile_run_output(gdirs, input_filesuffix=rid)\n", - " ds.isel(rgi_id=0).volume.plot(ax=ax1, label=ssp, c=color_dict[ssp]);\n", - " ds.isel(rgi_id=1).volume.plot(ax=ax2, label=ssp, c=color_dict[ssp]);\n", + " ds.isel(rgi_id=0).volume.plot(ax=ax1, label=ssp, c=color_dict[ssp])\n", + " ds.isel(rgi_id=1).volume.plot(ax=ax2, label=ssp, c=color_dict[ssp])\n", "\n", - " plt.legend();" + " plt.legend()" ] }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Have 5 minutes more? Do projections with another preprocessed glacier directory\n", + "\n", + "If you use the default preprocessed glacier directory (`DEFAULT_BASE_URL`), you do the same as in the [OGGM standard projections](https://docs.oggm.org/en/stable/download-projections.html). Per-glacier, regional, or global projections with this standard option are available directly at the [OGGM/oggm-standard-projections-csv-files repository](https://github.com/OGGM/oggm-standard-projections-csv-files).\n", + "\n", + "You can also do projections with another preprocessed glacier directory! We have several options of [preprocessed glacier directories available](https://docs.oggm.org/en/stable/shop.html#available-pre-processed-configurations).\n", + "If you want to e.g. 
use ERA5 instead of W5E5, you just have to update one of the lines above to\n", + "```python\n", + "new_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/elev_bands/ERA5/per_glacier_spinup/'\n", + "gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=5, prepro_base_url=new_url)\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "And then you can rerun all the cells below that line! Note that our processed ERA5 data (and thus, the historical runs) go until the end of 2025, and not just until the end of 2019." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, { "cell_type": "markdown", "metadata": {}, diff --git a/notebooks/tutorials/building_the_prepro_gdirs.ipynb b/notebooks/tutorials/building_the_prepro_gdirs.ipynb index 18644ef0..7e161d34 100644 --- a/notebooks/tutorials/building_the_prepro_gdirs.ipynb +++ b/notebooks/tutorials/building_the_prepro_gdirs.ipynb @@ -76,7 +76,7 @@ }, "outputs": [], "source": [ - "# we always need to initialzie and define a working directory\n", + "# we always need to initialise and define a working directory\n", "cfg.initialize(logging_level='WARNING')\n", "cfg.PATHS['working_dir'] = utils.gettempdir(dirname='OGGM-full_prepro_elevation_bands', reset=True)" ] @@ -104,7 +104,7 @@ }, "outputs": [], "source": [ - "# This section is only for future developments of the tutorial (e.g. updateing for new OGGM releases)\n", + "# This section is only for future developments of the tutorial (e.g. 
updating for new OGGM releases)\n", "# Test if prepro_base_url valid for both flowline_type_to_use, see level 2.\n", "# In total four complete executions of the notebook:\n", "# (load_from_prepro_base_url=False/True and flowline_type_to_use = 'elevation_band'/'centerline')\n", @@ -177,7 +177,7 @@ "# Instruction for beginning with existing OGGM's preprocessed directories\n", "if load_from_prepro_base_url:\n", " # to start from level 0 you can do\n", - " prepro_base_url_L0 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L1-L2_files/elev_bands/'\n", + " prepro_base_url_L0 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L1-L2_files/2025.6/elev_bands/'\n", " gdirs = workflow.init_glacier_directories(rgi_ids,\n", " from_prepro_level=0,\n", " prepro_base_url=prepro_base_url_L0,\n", @@ -255,7 +255,7 @@ "# Instruction for beginning with existing OGGM's preprocessed directories\n", "if load_from_prepro_base_url:\n", " # to start from level 1 you can do\n", - " prepro_base_url_L1 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L1-L2_files/elev_bands/'\n", + " prepro_base_url_L1 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L1-L2_files/2025.6/elev_bands/'\n", " gdirs = workflow.init_glacier_directories(rgi_ids,\n", " from_prepro_level=1,\n", " prepro_base_url=prepro_base_url_L1,\n", @@ -324,7 +324,7 @@ " workflow.execute_entity_task(task, gdirs);\n", "\n", "elif flowline_type_to_use == 'centerline':\n", - " # for centerline we can use parabola downstream line\n", + " # for centerlines we can use parabola downstream line\n", " cfg.PARAMS['downstream_line_shape'] = 'parabola'\n", "\n", " centerline_task_list = [\n", @@ -359,9 +359,9 @@ "if load_from_prepro_base_url:\n", " # to start from level 2 we need to distinguish between the flowline types\n", " if flowline_type_to_use == 'elevation_band':\n", - " prepro_base_url_L2 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L1-L2_files/2023.2/elev_bands_w_data/'\n", + 
" prepro_base_url_L2 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L1-L2_files/2025.6/elev_bands_w_data/'\n", " elif flowline_type_to_use == 'centerline':\n", - " prepro_base_url_L2 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L1-L2_files/centerlines/'\n", + " prepro_base_url_L2 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L1-L2_files/2025.6/centerlines/'\n", " else:\n", " raise ValueError(f\"Unknown flowline type '{flowline_type_to_use}'! Select 'elevation_band' or 'centerline'!\")\n", "\n", @@ -437,7 +437,7 @@ "workflow.calibrate_inversion_from_consensus(\n", " gdirs,\n", " apply_fs_on_mismatch=True,\n", - " error_on_mismatch=True, # if you running many glaciers some might not work\n", + " error_on_mismatch=True, # if you are running many glaciers some might not work\n", " filter_inversion_output=True, # this partly filters the overdeepening due to\n", " # the equilibrium assumption for retreating glaciers (see. Figure 5 of Maussion et al. 2019)\n", " volume_m3_reference=None, # here you could provide your own total volume estimate in m3\n", @@ -501,9 +501,9 @@ "if load_from_prepro_base_url:\n", " # to start from level 3 you can do\n", " if flowline_type_to_use == 'elevation_band':\n", - " prepro_base_url_L3 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/elev_bands/W5E5/'\n", + " prepro_base_url_L3 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/elev_bands/W5E5/per_glacier/'\n", " elif flowline_type_to_use == 'centerline':\n", - " prepro_base_url_L3 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/centerlines/W5E5/'\n", + " prepro_base_url_L3 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/centerlines/W5E5/per_glacier/' ###todo\n", " else:\n", " raise ValueError(f\"Unknown flowline type '{flowline_type_to_use}'! 
Select 'elevation_band' or 'centerline'!\")\n", "\n", @@ -566,8 +566,8 @@ "minimise_for = 'area' # other option would be 'volume'\n", "workflow.execute_entity_task(\n", "    tasks.run_dynamic_melt_f_calibration, gdirs,\n", -    "    err_dmdtda_scaling_factor=0.2,  # by default we reduce the mass balance error for accounting for\n", -    "                                    # corrleated uncertainties on a regional scale\n", +    "    err_dmdtda_scaling_factor=0.2,  # by default, we reduce the mass balance error to account for\n", +    "                                    # correlated uncertainties on a regional scale\n", "    ys=dynamic_spinup_start_year, ye=ye,\n", "    kwargs_run_function={'minimise_for': minimise_for},\n", "    ignore_errors=True,\n", @@ -591,7 +591,7 @@ "    if flowline_type_to_use == 'elevation_band':\n", "        prepro_base_url_L4 = DEFAULT_BASE_URL\n", "    elif flowline_type_to_use == 'centerline':\n", -    "        prepro_base_url_L4 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/centerlines/W5E5/'\n", +    "        prepro_base_url_L4 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/centerlines/W5E5/per_glacier/' ###todo\n", "    else:\n", "        raise ValueError(f\"Unknown flowline type '{flowline_type_to_use}'! Select 'elevation_band' or 'centerline'!\")\n", "    gdirs = workflow.init_glacier_directories(rgi_ids,\n", @@ -661,7 +661,7 @@ "    if flowline_type_to_use == 'elevation_band':\n", "        prepro_base_url_L5 = DEFAULT_BASE_URL\n", "    elif flowline_type_to_use == 'centerline':\n", -    "        prepro_base_url_L5 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/centerlines/W5E5/'\n", +    "        prepro_base_url_L5 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/centerlines/W5E5/per_glacier/' ###todo\n", "    else:\n", "        raise ValueError(f\"Unknown flowline type '{flowline_type_to_use}'! 
Select 'elevation_band' or 'centerline'!\")\n", " gdirs = workflow.init_glacier_directories(rgi_ids,\n", diff --git a/notebooks/tutorials/centerlines_to_shape.ipynb b/notebooks/tutorials/centerlines_to_shape.ipynb index c4fd8b57..4a17863e 100644 --- a/notebooks/tutorials/centerlines_to_shape.ipynb +++ b/notebooks/tutorials/centerlines_to_shape.ipynb @@ -268,7 +268,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "`LE_SEGMENT` is the length of the centerline in meters. The RGI \"IDs\" are fake (OGGM needs them) but the breID are real. Lets use them as index for the file:" + "`LE_SEGMENT` is the length of the centerline in meters. The RGI \"IDs\" are fake (OGGM needs them) but the breID are real. Let's use them as index for the file:" ] }, { @@ -376,7 +376,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "While the centerline algorithm is quite robust, the results will vary as a function of the resolution of the underlying grid, and the smoothing options. After trying a little, it seems difficult to find a setting which works \"best\" in all circumstances, and we encourage users to try several options and see what they prefer. The option likely to have the most impact (assuming smoothing with `(0.5, 5)` is the underlying grid resolution." + "While the centerline algorithm is quite robust, the results will vary as a function of the resolution of the underlying grid, and the smoothing options. After trying a little, it seems difficult to find a setting which works \"best\" in all circumstances, and we encourage users to try several options and see what they prefer. The option likely to have the most impact (assuming smoothing with `(0.5, 5)`) is the underlying grid resolution." 
] }, { diff --git a/notebooks/tutorials/deal_with_errors.ipynb b/notebooks/tutorials/deal_with_errors.ipynb index 0ce01446..0c47d224 100644 --- a/notebooks/tutorials/deal_with_errors.ipynb +++ b/notebooks/tutorials/deal_with_errors.ipynb @@ -13,7 +13,7 @@ "source": [ "In this example, we run the model on a list of three glaciers:\n", "two of them will end with errors: one because it already failed at\n", - "preprocessing (i.e. prior to this run), and one during the run. We show how to analyze theses erros and solve (some) of them, as described in the OGGM documentation under [troubleshooting](https://docs.oggm.org/en/stable/faq.html?highlight=border#troubleshooting)." + "preprocessing (i.e. prior to this run), and one during the run. We show how to analyze these errors and solve (some) of them, as described in the OGGM documentation under [troubleshooting](https://docs.oggm.org/en/stable/faq.html?highlight=border#troubleshooting)." ] }, { @@ -53,7 +53,7 @@ "cfg.PARAMS['use_multiprocessing'] = True\n", "\n", "# This is the important bit!\n", - "# We tell OGGM to continue despite of errors\n", + "# We tell OGGM to continue despite errors\n", "cfg.PARAMS['continue_on_error'] = True\n", "\n", "# Local working directory (where OGGM will write its output)\n", @@ -235,7 +235,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "This error message in the log is misleading: it does not really describe the source of the error, which happened earlier in the processing chain. Therefore we can look instead into the glacier_statistics via [compile_glacier_statistics](https://docs.oggm.org/en/stable/generated/oggm.utils.compile_glacier_statistics.html) or into the log output via [compile_task_log](https://docs.oggm.org/en/stable/generated/oggm.utils.compile_task_log.html#oggm.utils.compile_task_log):" + "This error message in the log is misleading: it does not really describe the source of the error, which happened earlier in the processing chain. 
Therefore, we can look instead into the glacier_statistics via [compile_glacier_statistics](https://docs.oggm.org/en/stable/generated/oggm.utils.compile_glacier_statistics.html) or into the log output via [compile_task_log](https://docs.oggm.org/en/stable/generated/oggm.utils.compile_task_log.html#oggm.utils.compile_task_log):" ] }, { diff --git a/notebooks/tutorials/dem_sources.ipynb b/notebooks/tutorials/dem_sources.ipynb index f2e461b7..dd3a9715 100644 --- a/notebooks/tutorials/dem_sources.ipynb +++ b/notebooks/tutorials/dem_sources.ipynb @@ -72,7 +72,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "If not specifying anything, OGGM will use it's default settings, i.e. NASADEM for mid- and low-latitudes (60°S-60°N). However, this needs registration at [NASA Earthdata](https://urs.earthdata.nasa.gov/) (see \"Register\" below). Here, we choose the **SRTM** source as example DEM (no registration necessary)." + "If not specifying anything, OGGM will use its default settings, i.e. NASADEM for mid- and low-latitudes (60°S-60°N). However, this needs registration at [NASA Earthdata](https://urs.earthdata.nasa.gov/) (see \"Register\" below). Here, we choose the **SRTM** source as example DEM (no registration necessary)." ] }, { @@ -109,7 +109,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "It is a geotiff file. [Xarray](http://xarray.pydata.org) can open them thanks to [rasterio](https://rasterio.readthedocs.io):" + "It is a geotiff file. [Xarray](https://xarray.pydata.org) can open them thanks to [rasterio](https://rasterio.readthedocs.io):" ] }, { @@ -146,7 +146,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "**OGGM is neither the owner nor the distributer of these datasets! OGGM only provides tools to access it. It is your responsibility as the data user to read the individual usage requirements and cite and acknowledge the original data sources accordingly.**" + "**OGGM is neither the owner nor the distributor of these datasets! 
OGGM only provides tools to access it. It is your responsibility as the data user to read the individual usage requirements and cite and acknowledge the original data sources accordingly.**" ] }, { @@ -194,7 +194,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The [RGI-TOPO](https://rgitools.readthedocs.io/en/latest/dems.html) dataset is an RGI-provided dataset in beta release. These data are available for everyone, and were created with OGGM. Of course you can easily use these data in OGGM as well:" + "The [RGI-TOPO](https://rgitools.readthedocs.io/en/latest/dems.html) dataset is an RGI-provided dataset in beta release. These data are available for everyone, and were created with OGGM. Of course, you can easily use these data in OGGM as well:" ] }, { @@ -390,7 +390,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## The border value, or how to chose the size of the topographic map" + "## The border value, or how to choose the size of the topographic map" ] }, { @@ -399,7 +399,7 @@ "source": [ "It is possible to specify the extent of the local topographic map. All maps are centered on the glacier and the size of the map is determined in grid points around the glacier. The number of grid points that was used in this example are 10 in order to save storage. But depending on your study you might need a larger topographic map. \n", "\n", - "OGGM's [pre-processed directories](https://docs.oggm.org/en/stable/input-data.html#pre-processed-directories) come in 4 border sizes: 10, 40, 80 and 160. But if you process the topography yourself you can chose every value." + "OGGM's [pre-processed directories](https://docs.oggm.org/en/stable/input-data.html#pre-processed-directories) come in 4 border sizes: 10, 40, 80 and 160. But if you process the topography yourself you can choose every value." 
] }, { diff --git a/notebooks/tutorials/distribute_flowline.ipynb b/notebooks/tutorials/distribute_flowline.ipynb index dd6db05d..e48622a1 100644 --- a/notebooks/tutorials/distribute_flowline.ipynb +++ b/notebooks/tutorials/distribute_flowline.ipynb @@ -195,7 +195,7 @@ }, "outputs": [], "source": [ - "# Inititial glacier thickness\n", + "# Initial glacier thickness\n", "f, ax = plt.subplots()\n", "ds.distributed_thickness.plot(ax=ax);\n", "ax.axis('equal');" @@ -492,7 +492,7 @@ " output_filesuffix='_random_s1_smooth', # do not overwrite the previous file (optional) \n", " # add_monthly=True, # more frames! (12 times more - we comment for the demo, but recommend it)\n", " rolling_mean_smoothing=7, # smooth the area time series\n", - " fl_thickness_threshold=1, # avoid snow patches to be nisclassified\n", + " fl_thickness_threshold=1, # avoid snow patches to be misclassified\n", " )" ] }, @@ -605,8 +605,8 @@ "distribute_2d.merge_simulated_thickness(\n", " gdirs, # the gdirs we want to merge\n", " simulation_filesuffix=simulation_filesuffix, # the name of the simulation\n", - " years_to_merge=np.arange(2005, 2101, 5), # for demonstration I only pick some years, if this is None all years are merged\n", - " add_topography=True, # if you do not need topogrpahy setting this to False will decrease computing time\n", + " years_to_merge=np.arange(2005, 2101, 5), # for demonstration, I only pick some years, if this is None all years are merged\n", + " add_topography=True, # if you do not need topography setting this to False will decrease computing time\n", " preserve_totals=True, # preserve individual glacier volumes during merging\n", " reset=True,\n", ")" diff --git a/notebooks/tutorials/dynamical_spinup.ipynb b/notebooks/tutorials/dynamical_spinup.ipynb index 95d91fb1..3fd9f2a2 100644 --- a/notebooks/tutorials/dynamical_spinup.ipynb +++ b/notebooks/tutorials/dynamical_spinup.ipynb @@ -19,12 +19,12 @@ "\n", "However, running simulations in the recent past can be quite 
useful for model validation. Also, more direct observations are available of glacier states of the recent past for constraining the model (e.g. area, geodetic mass balance). Further, a dynamical initialisation can relax strong assumptions in the OGGM default settings: first that glaciers are in dynamical equilibrium at the glacier outline date (an assumption required for the ice thickness inversion) and second that the mass balance melt factor parameter (*melt_f*) is calibrated towards a geodetic mass balance ignoring a dynamically changing glacier geometry.\n", "\n", -    "In recent PRs ([GH1342](https://github.com/OGGM/oggm/pull/1342), [GH1232](https://github.com/OGGM/oggm/pull/1232), [GH1361](https://github.com/OGGM/oggm/pull/1361) and [GH1425](https://github.com/OGGM/oggm/pull/1425)) we have released two new run tasks in OGGM which help with this issues:\n", +    "In recent PRs ([GH1342](https://github.com/OGGM/oggm/pull/1342), [GH1232](https://github.com/OGGM/oggm/pull/1232), [GH1361](https://github.com/OGGM/oggm/pull/1361) and [GH1425](https://github.com/OGGM/oggm/pull/1425)) we have released two new run tasks in OGGM which help with this issue:\n", "\n", "- The ```run_dynamic_spinup``` task, by default, aims to find a glacier state before the RGI-date (~10-30 years back) from which the glacier evolves to match the area given by the RGI-outline. Alternatively, it is also possible to use this task to match an observed volume.\n", "- The ```run_dynamic_melt_f_calibration``` task iteratively searches for a *melt_f* to match the observed geodetic mass balance taking a dynamically changing glacier geometry into account.\n", "\n", -    "And of course, we want to match both things in the same past model run. Therefore by default in each iteration of ```run_dynamic_melt_f_calibration``` the ```run_dynamic_spinup``` function is included. 
A more in-depth explanation of the two tasks is provided in the next two chapters, which are followed by an example and a comparison of the different spinup options. \n", + "And of course, we want to match both things in the same past model run. Therefore, by default in each iteration of ```run_dynamic_melt_f_calibration``` the ```run_dynamic_spinup``` function is included. A more in-depth explanation of the two tasks is provided in the next two chapters, which are followed by an example and a comparison of the different spinup options.\n", "\n", "## High-level explanation of ```run_dynamic_spinup```:\n", "\n", @@ -54,7 +54,7 @@ "\n", "## High-level explanation of ```run_dynamic_melt_f_calibration ```:\n", "\n", - "This task iteratively searches for a *melt_f* to match a given geodetic mass balance incorporating a dynamic model run. But changing *melt_f* means we need to rerun all model setup steps which incorporate the mass-balance, to have one consistent model initialisation chain. In particular, we need to conduct the bed inversion again (the mass-balance is used in the flux calculation, see [here](https://docs.oggm.org/en/latest/inversion.html#ice-flux)). Therefore one default iteration of the dynamic *melt_f* calibration looks like this:\n", + "This task iteratively searches for a *melt_f* to match a given geodetic mass balance incorporating a dynamic model run. But changing *melt_f* means we need to rerun all model setup steps which incorporate the mass-balance, to have one consistent model initialisation chain. In particular, we need to conduct the bed inversion again (the mass-balance is used in the flux calculation, see [here](https://docs.oggm.org/en/latest/inversion.html#ice-flux)). Therefore, one default iteration of the dynamic *melt_f* calibration looks like this:\n", "\n", "- define a new *melt_f* in the glacier directory\n", "- conduct an inversion which calibrates to the consensus volume, still assuming dynamic equilibrium. 
(a little tricky: by default, before we start with the dynamic *melt_f* calibration the first inversion with calibration on a regional scale was already carried out -> the individual glaciers do not match the consensus volume exactly, but when adding all glaciers of one region the consensus volume is matched. Therefore during this task, the individual glacier is again matched to the volume of the regional assessment and not on an individual basis.)\n", @@ -66,7 +66,7 @@ "\n", "If the iterative search is not successful and ```ignore_errors = True``` there are several possible outcomes:\n", "\n", - "- First, it is checked if there were some successful runs which improved the mismatch. If so, the best run is saved and it is indicated in the diagnostics with ```used_spinup_option = dynamic melt_f calibration (part success)```\n", + "- First, it is checked if there were some successful runs which improved the mismatch. If so, the best run is saved, and it is indicated in the diagnostics with ```used_spinup_option = dynamic melt_f calibration (part success)```\n", "- If only the first guess worked this run is saved and indicated in the diagnostics with ```used_spinup_option = dynamic spinup only```\n", "- And if everything failed a fixed geometry spinup is conducted and indicated in the diagnostics with ```used_spinup_option = fixed geometry spinup```\n", "\n", @@ -89,7 +89,6 @@ }, "outputs": [], "source": [ - "import numpy as np\n", "from scipy import interpolate\n", "\n", "def minimisation_algorithm(\n", @@ -168,7 +167,6 @@ "import matplotlib.pyplot as plt\n", "import xarray as xr\n", "import numpy as np\n", - "import pandas as pd\n", "import seaborn as sns" ] }, @@ -189,7 +187,7 @@ "metadata": {}, "outputs": [], "source": [ - "from oggm import cfg, utils, workflow, tasks, graphics" + "from oggm import cfg, utils, workflow, tasks" ] }, { @@ -239,9 +237,9 @@ "metadata": {}, "outputs": [], "source": [ - "# We use a recent gdir setting, calibated on a glacier per glacier 
basis\n", + "# We use a recent gdir setting, calibrated on a glacier per glacier basis\n", "base_url = ('https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/'\n", - " 'L3-L5_files/2023.3/elev_bands/W5E5/')" + " 'L3-L5_files/2025.6/elev_bands/W5E5/per_glacier')" ] }, { @@ -251,7 +249,8 @@ "outputs": [], "source": [ "# We use a relatively large border value to allow the glacier to grow during spinup\n", - "gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=3, prepro_border=160, prepro_base_url=base_url)" + "gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=3, prepro_border=80, #todo <-- does that work, or do we need 160?\n", + " prepro_base_url=base_url)" ] }, { @@ -308,9 +307,9 @@ "source": [ "# ---- First do the fixed geometry spinup ----\n", "tasks.run_from_climate_data(gdir,\n", - " fixed_geometry_spinup_yr=spinup_start_yr, # Start the run at the RGI date but retro-actively correct the data with fixed geometry\n", + " fixed_geometry_spinup_yr=spinup_start_yr, # Start the run at the RGI date but retroactively correct the data with fixed geometry\n", " output_filesuffix='_hist_fixed_geom', # where to write the output\n", - " );\n", + " )\n", "# Read the output\n", "with xr.open_dataset(gdir.get_filepath('model_diagnostics', filesuffix='_hist_fixed_geom')) as ds:\n", " ds_hist = ds.load()\n", @@ -318,10 +317,10 @@ "# ---- Second the dynamic spinup alone, matching area ----\n", "tasks.run_dynamic_spinup(gdir,\n", " spinup_start_yr=spinup_start_yr, # When to start the spinup\n", - " minimise_for='area', # what target to match at the RGI date\n", + " minimise_for='area', # which target to match at the RGI date\n", " output_filesuffix='_spinup_dynamic_area', # Where to write the output\n", " ye=2020, # When the simulation should stop\n", - " );\n", + " )\n", "# Read the output\n", "with xr.open_dataset(gdir.get_filepath('model_diagnostics', filesuffix='_spinup_dynamic_area')) as ds:\n", " ds_dynamic_spinup_area = ds.load()\n", 
@@ -329,10 +328,10 @@ "# ---- Third the dynamic spinup alone, matching volume ----\n", "tasks.run_dynamic_spinup(gdir,\n", " spinup_start_yr=spinup_start_yr, # When to start the spinup\n", - " minimise_for='volume', # what target to match at the RGI date\n", + " minimise_for='volume', # which target to match at the RGI date\n", " output_filesuffix='_spinup_dynamic_volume', # Where to write the output\n", " ye=2020, # When the simulation should stop\n", - " );\n", + " )\n", "# Read the output\n", "with xr.open_dataset(gdir.get_filepath('model_diagnostics', filesuffix='_spinup_dynamic_volume')) as ds:\n", " ds_dynamic_spinup_volume = ds.load()\n", @@ -342,7 +341,7 @@ " ys=spinup_start_yr, # When to start the spinup\n", " ye=2020, # When the simulation should stop\n", " output_filesuffix='_dynamic_melt_f', # Where to write the output\n", - " );\n", + " )\n", "\n", "with xr.open_dataset(gdir.get_filepath('model_diagnostics', filesuffix='_dynamic_melt_f')) as ds:\n", " ds_dynamic_melt_f = ds.load()" @@ -359,32 +358,32 @@ "\n", "f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(16, 5))\n", "\n", - "ds_hist.volume_m3.plot(ax=ax1, label='Fixed geometry spinup');\n", - "ds_dynamic_melt_f.volume_m3.plot(ax=ax1, label='Dynamical melt_f calibration');\n", - "ds_dynamic_spinup_area.volume_m3.plot(ax=ax1, label='Dynamical spinup match area');\n", - "ds_dynamic_spinup_volume.volume_m3.plot(ax=ax1, label='Dynamical spinup match volume');\n", - "ax1.set_title('Volume');\n", - "ax1.scatter(y0, volume_reference, c='C3', label='Reference values');\n", - "ax1.legend();\n", - "\n", - "ds_hist.area_m2.plot(ax=ax2);\n", - "ds_dynamic_melt_f.area_m2.plot(ax=ax2);\n", - "ds_dynamic_spinup_area.area_m2.plot(ax=ax2);\n", - "ds_dynamic_spinup_volume.area_m2.plot(ax=ax2);\n", - "ax2.set_title('Area');\n", + "ds_hist.volume_m3.plot(ax=ax1, label='Fixed geometry spinup')\n", + "ds_dynamic_melt_f.volume_m3.plot(ax=ax1, label='Dynamical melt_f calibration')\n", + 
"ds_dynamic_spinup_area.volume_m3.plot(ax=ax1, label='Dynamical spinup match area')\n", + "ds_dynamic_spinup_volume.volume_m3.plot(ax=ax1, label='Dynamical spinup match volume')\n", + "ax1.set_title('Volume')\n", + "ax1.scatter(y0, volume_reference, c='C3', label='Reference values')\n", + "ax1.legend()\n", + "\n", + "ds_hist.area_m2.plot(ax=ax2)\n", + "ds_dynamic_melt_f.area_m2.plot(ax=ax2)\n", + "ds_dynamic_spinup_area.area_m2.plot(ax=ax2)\n", + "ds_dynamic_spinup_volume.area_m2.plot(ax=ax2)\n", + "ax2.set_title('Area')\n", "ax2.scatter(y0, area_reference, c='C3')\n", "\n", - "ds_hist.length_m.plot(ax=ax3);\n", - "ds_dynamic_melt_f.length_m.plot(ax=ax3);\n", - "ds_dynamic_spinup_area.length_m.plot(ax=ax3);\n", - "ds_dynamic_spinup_volume.length_m.plot(ax=ax3);\n", - "ax3.set_title('Length');\n", + "ds_hist.length_m.plot(ax=ax3)\n", + "ds_dynamic_melt_f.length_m.plot(ax=ax3)\n", + "ds_dynamic_spinup_area.length_m.plot(ax=ax3)\n", + "ds_dynamic_spinup_volume.length_m.plot(ax=ax3)\n", + "ax3.set_title('Length')\n", "ax3.scatter(y0, ds_hist.sel(time=y0).length_m, c='C3')\n", "\n", "plt.tight_layout()\n", - "plt.show();\n", + "plt.show()\n", "\n", - "# and print out the modeled geodetic mass balances for comparision\n", + "# and print out the modeled geodetic mass balances for comparison\n", "def get_dmdtda(ds):\n", " yr0_ref_mb, yr1_ref_mb = ref_period.split('_')\n", " yr0_ref_mb = int(yr0_ref_mb.split('-')[0])\n", @@ -423,7 +422,7 @@ "outputs": [], "source": [ "# define an artificial error for dmdtda\n", - "dmdtda_reference_error_artificial = 10 # error must be given as a positive number\n", + "dmdtda_reference_error_artificial = 15 # error must be given as a positive number\n", "\n", "tasks.run_dynamic_melt_f_calibration(gdir,\n", " ys=spinup_start_yr, # When to start the spinup\n", @@ -556,7 +555,7 @@ "source": [ "# ---- First do the fixed geometry spinup ----\n", "tasks.run_from_climate_data(gdir,\n", - " fixed_geometry_spinup_yr=spinup_start_yr, # Start the run 
at the RGI date but retro-actively correct the data with fixed geometry\n", +    "                            fixed_geometry_spinup_yr=spinup_start_yr,  # Start the run at the RGI date but retroactively correct the data with fixed geometry\n", "                            output_filesuffix='_hist_fixed_geom',  # where to write the output\n", "                            );\n", "# Read the output\n", @@ -567,7 +566,7 @@ "tasks.run_dynamic_spinup(gdir,\n", "                         precision_percent=3,  # For this glacier we only try to be within 3% of RGI_area\n", "                         spinup_start_yr=spinup_start_yr,  # When to start the spinup\n", -    "                         minimise_for='area',  # what target to match at the RGI date\n", +    "                         minimise_for='area',  # which target to match at the RGI date\n", "                         output_filesuffix='_spinup_dynamic_area',  # Where to write the output\n", "                         ye=2020,  # When the simulation should stop\n", "                         );\n", @@ -578,7 +577,7 @@ "# ---- Third the dynamic spinup alone, matching volume ----\n", "tasks.run_dynamic_spinup(gdir,\n", "                         spinup_start_yr=spinup_start_yr,  # When to start the spinup\n", -    "                         minimise_for='volume',  # what target to match at the RGI date\n", +    "                         minimise_for='volume',  # which target to match at the RGI date\n", "                         output_filesuffix='_spinup_dynamic_volume',  # Where to write the output\n", "                         ye=2020,  # When the simulation should stop\n", "                         );\n", @@ -642,7 +641,7 @@ "source": [ "In this example, you see that the dynamic spinup run matching area does not start in 1980. 
The reasons for this are the two main problems and the coping strategy of reducing the spinup time, described [here](#Two-main-problems-why-the-dynamic-spinup-could-not-work:) in more detail.\n", "\n", -    "To get an glacier evolution starting at 1980 you can use ```add_fixed_geometry_spinup = True```:" +    "To get a glacier evolution starting at 1980 you can use ```add_fixed_geometry_spinup = True```:" ] }, { @@ -654,10 +653,10 @@ "tasks.run_dynamic_spinup(gdir,\n", "                         precision_percent=3,  # For this glacier we only try to be within 3% of RGI_area\n", "                         spinup_start_yr=spinup_start_yr,  # When to start the spinup\n", -    "                         minimise_for='area',  # what target to match at the RGI date\n", +    "                         minimise_for='area',  # which target to match at the RGI date\n", "                         output_filesuffix='_spinup_dynamic_area',  # Where to write the output\n", "                         ye=2020,  # When the simulation should stop\n", -    "                         add_fixed_geometry_spinup=True,  # add a fixed geometry spinup if period needs to be shortent\n", +    "                         add_fixed_geometry_spinup=True,  # add a fixed geometry spinup if period needs to be shortened\n", "                         );\n", "\n", "with xr.open_dataset(gdir.get_filepath('model_diagnostics', filesuffix='_spinup_dynamic_area')) as ds:\n", @@ -706,13 +705,13 @@ "                            store_monthly_hydro=True,  # compute monthly hydro diagnostics\n", "                            ref_area_from_y0=True,  # Even if the glacier may grow, keep the reference area as the year 0 of the simulation\n", "                            output_filesuffix='_spinup_dynamic_hydro',  # Where to write the output - this is needed to stitch the runs together afterwards\n", -    "                            );\n", +    "                            )\n", "\n", "# Read the output\n", "with xr.open_dataset(gdir.get_filepath('model_diagnostics', filesuffix='_spinup_dynamic_hydro')) as ds:\n", "    ds_dynamic_spinup_hydro = ds.load()\n", "\n", -    "ds_dynamic_spinup_hydro = ds_dynamic_spinup_hydro.isel(time=slice(0, -1)) # The last timestep is incomplete for hydro (not started)" +    "ds_dynamic_spinup_hydro = ds_dynamic_spinup_hydro.isel(time=slice(0, -1)) # The last timestep is incomplete 
for hydro (not started)" ] }, { @@ -774,7 +773,7 @@ "metadata": {}, "outputs": [], "source": [ - "f, ax = plt.subplots(figsize=(10, 6));\n", + "f, ax = plt.subplots(figsize=(10, 6))\n", "df_runoff.plot.area(ax=ax, color=sns.color_palette(\"rocket\")); plt.xlabel('Years'); plt.ylabel('Runoff (Mt)'); plt.title(rgi_ids[0]);" ] }, @@ -812,6 +811,11 @@ "monthly_runoff.clip(0).plot(cmap='Blues', cbar_kwargs={'label':'Mt'}); plt.xlabel('Months'); plt.ylabel('Years'); plt.title(rgi_ids[0]);" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [] + }, { "cell_type": "markdown", "metadata": {}, diff --git a/notebooks/tutorials/elevation_bands_vs_centerlines.ipynb b/notebooks/tutorials/elevation_bands_vs_centerlines.ipynb index 0063670c..41a784dc 100644 --- a/notebooks/tutorials/elevation_bands_vs_centerlines.ipynb +++ b/notebooks/tutorials/elevation_bands_vs_centerlines.ipynb @@ -135,7 +135,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Glacier length and cross section" + "## Glacier length and cross-section" ] }, { @@ -227,7 +227,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "For the ice dynamics simulations, the commands are exactly the same as well. The only difference is that centerlines require the more flexible \"FluxBased\" numerical model, while the elevation bands can also use the more robust \"SemiImplicit\" one. **The runs are considerabily faster with the elevation bands flowlines.**" + "For the ice dynamics simulations, the commands are exactly the same as well. The only difference is that centerlines require the more flexible \"FluxBased\" numerical model, while the elevation bands can also use the more robust \"SemiImplicit\" one. 
**The runs are considerably faster with the elevation bands flowlines.**" ] }, { @@ -332,7 +332,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Both models can be reprensented with a cross-section, like this: " + "Both models can be represented with a cross-section, like this:" ] }, { @@ -408,7 +408,7 @@ "source": [ "- in the absence of additional data to better calibrate the mass balance model, using multiple centerlines is considered not useful: indeed, the distributed representation offers little advantages if the mass balance is only a function of elevation.\n", "- elevation band flowlines are now the default of most OGGM applications. It is faster, much cheaper, and more robust to use these simplified glaciers.\n", - "- elevation band flowlines cannot be represented on a map \"out of the box\". We have however developped a tool to display the changes by redistributing them on a map: have a look at [this tutorial](../tutorials/distribute_flowline.ipynb)!\n", + "- elevation band flowlines cannot be represented on a map \"out of the box\". We have however developed a tool to display the changes by redistributing them on a map: have a look at [this tutorial](../tutorials/distribute_flowline.ipynb)!\n", "- multiple centerlines can be useful for growing glacier cases and use cases where geometry plays an important role (e.g. lakes, paleo applications)." ] }, diff --git a/notebooks/tutorials/full_prepro_workflow.ipynb b/notebooks/tutorials/full_prepro_workflow.ipynb index c616c126..b839f48e 100644 --- a/notebooks/tutorials/full_prepro_workflow.ipynb +++ b/notebooks/tutorials/full_prepro_workflow.ipynb @@ -11,7 +11,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The OGGM workflow is best explained with an example. In the following, we will show how to apply the standard [OGGM workflow](http://docs.oggm.org/en/stable/introduction.html) to a list of glaciers. This example is meant to guide you through a first-time setup step-by-step. 
If you prefer not to install OGGM on your computer, you can always run this notebook in [OGGM-Edu](https://edu.oggm.org) instead!" + "The OGGM workflow is best explained with an example. In the following, we will show how to apply the standard [OGGM workflow](https://docs.oggm.org/en/stable/introduction.html) to a list of glaciers. This example is meant to guide you through a first-time setup step-by-step. If you prefer not to install OGGM on your computer, you can always run this notebook in [OGGM-Edu](https://edu.oggm.org) instead!" ] }, { @@ -159,7 +159,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We use a temporary directory for this example, but in practice you will set this working directory yourself (for example: `/home/john/OGGM_output`. The size of this directory will depend on how many glaciers you'll simulate!\n", + "We use a temporary directory for this example, but in practice you will set this working directory yourself (for example: `/home/john/OGGM_output`). The size of this directory will depend on how many glaciers you'll simulate!\n", "\n", "**This working directory is meant to be persistent**, i.e. you can stop your processing workflow after any task, and restart from an existing working directory at a later stage.\n", "\n", @@ -192,7 +192,7 @@ "Here is a list of other glaciers you might want to try out:\n", "- `RGI60-18.02342`: Tasman Glacier in New Zealand\n", "- `RGI60-11.00787`: [Kesselwandferner](https://de.wikipedia.org/wiki/Kesselwandferner) in the Austrian Alps\n", - "- `RGI60-11.00897`: [Hintereisferner](http://acinn.uibk.ac.at/research/ice-and-climate/projects/hintereisferner) in the Austrian Alps.\n", + "- `RGI60-11.00897`: [Hintereisferner](https://acinn.uibk.ac.at/research/ice-and-climate/projects/hintereisferner) in the Austrian Alps.\n", "- ... or any other glacier identifier! You can find other glacier identifiers by exploring the [GLIMS viewer](https://www.glims.org/maps/glims). 
See the [working with the RGI](working_with_rgi.ipynb) tutorial for an introduction on RGI IDs and the GLIMS browser.\n", "\n", "For an operational run on an RGI region, you might want to download the [Randolph Glacier Inventory](https://www.glims.org/RGI/) dataset instead, and start a run from it. This case is covered in the [working with the RGI](working_with_rgi.ipynb) tutorial." @@ -345,11 +345,11 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "There are two different types of \"[tasks](http://docs.oggm.org/en/stable/api.html#entity-tasks)\":\n", + "There are two different types of \"[tasks](https://docs.oggm.org/en/stable/api.html#entity-tasks)\":\n", "\n", "**Entity Tasks**:\n", " Standalone operations to be realized on one single glacier entity,\n", - " independently from the others. The majority of OGGM\n", + " independent of the others. The majority of OGGM\n", " tasks are entity tasks. They are parallelisable: the same task can run on \n", " several glaciers in parallel.\n", "\n", @@ -385,7 +385,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The task we just applied to our list of glaciers is [glacier_masks](http://docs.oggm.org/en/stable/generated/oggm.tasks.glacier_masks.html#oggm.tasks.glacier_masks). It wrote a new file in our glacier directory, providing raster masks of the glacier (among other things): " + "The task we just applied to our list of glaciers is [glacier_masks](https://docs.oggm.org/en/stable/generated/oggm.tasks.glacier_masks.html#oggm.tasks.glacier_masks). It wrote a new file in our glacier directory, providing raster masks of the glacier (among other things):" ] }, { @@ -401,7 +401,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "It is also possible to apply several tasks sequentially (i.e. one after an other) on our glacier list:" + "It is also possible to apply several tasks sequentially (i.e. 
one after another) on our glacier list:" ] }, { @@ -541,7 +541,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "With the computed mass-balance and the flowlines, OGGM can now compute the ice thickness, based on the principles of [mass conservation and ice dynamics](http://docs.oggm.org/en/stable/inversion.html). " + "With the computed mass-balance and the flowlines, OGGM can now compute the ice thickness, based on the principles of [mass conservation and ice dynamics](https://docs.oggm.org/en/stable/inversion.html)." ] }, { @@ -646,7 +646,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Let's start a run driven by a the climate of the last 31 years, shuffled randomly for 200 years. This can be seen as a \"commitment\" simulation, i.e. how much glaciers will change even without further climate change:" + "Let's start a run driven by the climate of the last 31 years, shuffled randomly for 200 years. This can be seen as a \"commitment\" simulation, i.e. how much glaciers will change even without further climate change:" ] }, { @@ -699,7 +699,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We opened the file with [xarray](http://xarray.pydata.org), a very useful data analysis library based on [pandas](http://pandas.pydata.org/). For example, we can plot the volume and length evolution of both glaciers with time:" + "We opened the file with [xarray](https://xarray.pydata.org), a very useful data analysis library based on [pandas](https://pandas.pydata.org/). 
For example, we can plot the volume and length evolution of both glaciers with time:" ] }, { diff --git a/notebooks/tutorials/holoviz_intro.ipynb b/notebooks/tutorials/holoviz_intro.ipynb index d6bf3fd0..82601696 100644 --- a/notebooks/tutorials/holoviz_intro.ipynb +++ b/notebooks/tutorials/holoviz_intro.ipynb @@ -22,7 +22,7 @@ "source": [ "This notebook is intended to present a small overview of HoloViz and the capability for data exploration, with interactive plots (show difference between matplotlib and bokeh). Many parts are based on or copied from the official [HoloViz Tutorial](https://holoviz.org/tutorial/index.html) (highly recommended for a more extensive overview of the possibilities of HoloViz).\n", "\n", - "Note: In June 2019 the project name changed from [PyViz](https://pyviz.org/) to [HoloViz](https://holoviz.org/). The reason for this is explained in this [blog post](http://blog.pyviz.org/pyviz-holoviz.html)." + "Note: In June 2019 the project name changed from [PyViz](https://pyviz.org/) to [HoloViz](https://holoviz.org/). The reason for this is explained in this [blog post](https://blog.pyviz.org/pyviz-holoviz.html)." ] }, { @@ -82,7 +82,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Of course we can have a look at one variable only:" + "Of course, we can have a look at one variable only:" ] }, { @@ -115,7 +115,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We can see the course of the parameter but we can not tell what was the exact temperature at January and we also cannot zoom in." + "We can see the course of the parameter, but we cannot tell the exact temperature in January, and we also cannot zoom in." ] }, { @@ -173,7 +173,7 @@ "source": [ "But at least you can use your mouse to hover over each variable and explore their values. Furthermore, by clicking on the legend the colors can be switched on/off. 
Still, different magnitudes make it hard to see all parameters at once.\n", "\n", - "Here the interactive features are provided by the [Bokeh](http://bokeh.pydata.org) JavaScript-based plotting library. But what's actually returned by this call is a overlay of something called a [HoloViews](http://holoviews.org) object, here specifically a HoloViews [Curve](http://holoviews.org/reference/elements/bokeh/Curve.html). HoloViews objects *display* as a Bokeh plot, but they are actually much richer objects that make it easy to capture your understanding as you explore the data." + "Here the interactive features are provided by the [Bokeh](https://bokeh.pydata.org) JavaScript-based plotting library. But what's actually returned by this call is an overlay of something called a [HoloViews](https://holoviews.org) object, here specifically a HoloViews [Curve](https://holoviews.org/reference/elements/bokeh/Curve.html). HoloViews objects *display* as a Bokeh plot, but they are actually much richer objects that make it easy to capture your understanding as you explore the data." ] }, { @@ -306,7 +306,7 @@ "As you can see, with HoloViews you don't have to select between plotting your data and working with it numerically. Any HoloViews object will let you do *both* conveniently; you can simply choose whatever representation is the most appropriate way to approach the task you are doing. This approach is very different from a traditional plotting program, where the objects you create (e.g. a Matplotlib figure or a native Bokeh plot) are a dead end from an analysis perspective, useful only for plotting. \n", "### HoloViews Elements\n", "\n", - "Holoview objects merge the visualization with the data. For an Holoview object you have to classify what the data is showing. A Holoview object could be initialised in several ways: \n", + "Holoview objects merge the visualization with the data. For a Holoview object you have to classify what the data is showing. 
A Holoview object could be initialised in several ways:\n", "\n", "```\n", "hv.Element(data, kdims=None, vdims=None, **kwargs)\n", @@ -314,7 +314,7 @@ "\n", "This standard signature consists of the same five types of information:\n", "\n", - "- **``Element``**: any of the dozens of element types shown in the [reference gallery](http://holoviews.org/reference/index.html).\n", + "- **``Element``**: any of the dozens of element types shown in the [reference gallery](https://holoviews.org/reference/index.html).\n", "- **``data``**: your data in one of a number of formats described below, such as tabular dataframes or multidimensional gridded Xarray or Numpy arrays.\n", "- **``kdims``**: \"key dimension(s)\", also called independent variables or index dimensions in other contexts---the values for which your data was measured.\n", "- **``vdims``**: \"value dimension(s)\", also called dependent variables or measurements---what was measured or recorded for each value of the key dimensions. \n", @@ -338,7 +338,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The example also shows two ways of labeling the variables, one is directly by the initialisation with tuples ```('x','x_label')``` and ```('y','y_label')``` and a other option is to use ```.redim.label()```.\n", + "The example also shows two ways of labeling the variables, one is directly by the initialisation with tuples ```('x','x_label')``` and ```('y','y_label')``` and another option is to use ```.redim.label()```.\n", "\n", "The example above also shows the simple syntax to create a layout of different Holoview Objects by using `+`. 
With `*` you can simply overlay the objects in one plot:" ] @@ -360,7 +360,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "With ```.opts()``` you can change some characteristics of the Holoview Objects and you can use the `[tab]` key completion to see, what options are available or you can use the ```hv.help()``` function to get more information about some `Elements`." + "With ```.opts()``` you can change some characteristics of the Holoview Objects and you can use the `[tab]` key completion to see what options are available, or you can use the ```hv.help()``` function to get more information about some `Elements`." ] }, { @@ -399,9 +399,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "So here we created a ```Curve``` Element for some Parameters and put them together in subplots by using `+` and overlay some in one subplot with `*`. With ```.opts()``` I define the color of some parameters and set the ```width``` and ```height``` propertie for the used ```Curve``` Elements and with ```.cols()``` I define the number of columns. \n", + "So here we created a ```Curve``` Element for some Parameters and put them together in subplots by using `+` and overlay some in one subplot with `*`. With ```.opts()``` I define the color of some parameters and set the ```width``` and ```height``` properties for the used ```Curve``` Elements and with ```.cols()``` I define the number of columns.\n", "\n", - "Now we can zoom in and use a hover for data exploration and because all Holoview Objects using the same dataframe and the same key variable the x-axes of all plots are linked. So when you zoom in in one plot all the other plots are zoomed in as well.\n", + "Now we can zoom in and use a hover for data exploration and because all Holoview Objects use the same dataframe and the same key variable, the x-axes of all plots are linked. 
So when you zoom in one plot all the other plots are zoomed in as well.\n", "\n", "### HoloView Dataset and HoloMap Objects\n", "\n", @@ -506,7 +506,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Here now no widget is created, instead there is a interactive legend where we can turn the color *on* by clicking in the legend on it. So we can compare the months with each other (for example the same month in different years).\n", + "Here now no widget is created, instead there is an interactive legend where we can turn the color *on* by clicking in the legend on it. So we can compare the months with each other (for example the same month in different years).\n", "\n", "It is also easy to look at some mean values, for example looking at mean diurnal values for each month and year you can use ```.aggregate```, which combine the values after the given function:" ] @@ -590,7 +590,7 @@ "source": [ "### Tile sources\n", "\n", - "Tile sources are very convenient ways to provide geographic context for a plot and they will be familiar from the popular mapping services like Google Maps and Openstreetmap. The ``WMTS`` element provides an easy way to include such a tile source in your visualization simply by passing it a valid URL template. GeoViews provides a number of useful tile sources in the ``gv.tile_sources`` module:" + "Tile sources are very convenient ways to provide geographic context for a plot, and they will be familiar from the popular mapping services like Google Maps and OpenStreetMap. The ``WMTS`` element provides an easy way to include such a tile source in your visualization simply by passing it a valid URL template. GeoViews provides a number of useful tile sources in the ``gv.tile_sources`` module:" ] }, { @@ -669,7 +669,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "And so similar a visualisation is stored for each GeoView Element, which can be used like an HoloView Object. 
So as a last example you also can plot all European glaciers in one interactive plot by using an Polygons Element of GeoViews:" + "And so similar a visualisation is stored for each GeoView Element, which can be used like an HoloView Object. So as a last example you also can plot all European glaciers in one interactive plot by using a Polygons Element of GeoViews:" ] }, { @@ -686,7 +686,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "So this only was a very small look at the capability of HoloViz for data exploration and visualisation. There are much more you can do with HoloViz, but I think it is a package you should have a look at, because with only a few lines of code you can create an interactive plot which allow you to have an quick but also deep look at your data. I really recommend to visit the official [HoloViz Tutorial](https://holoviz.org/tutorial/index.html) and start using HoloViz :)" + "So this only was a very small look at the capability of HoloViz for data exploration and visualisation. There are much more you can do with HoloViz, but I think it is a package you should have a look at, because with only a few lines of code you can create an interactive plot which allow you to have a quick but also deep look at your data. 
I really recommend visiting the official [HoloViz Tutorial](https://holoviz.org/tutorial/index.html) and starting to use HoloViz :)" ] }, { diff --git a/notebooks/tutorials/ingest_gridded_data_on_flowlines.ipynb b/notebooks/tutorials/ingest_gridded_data_on_flowlines.ipynb index ae23abbe..6761ee4f 100644 --- a/notebooks/tutorials/ingest_gridded_data_on_flowlines.ipynb +++ b/notebooks/tutorials/ingest_gridded_data_on_flowlines.ipynb @@ -11,7 +11,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "After running our OGGM experiments we often want to compare the model output to other gridded observations or maybe we want to use additional data sets that are not currently in the [OGGM shop](https://docs.oggm.org/en/stable/input-data.html) to calibrate parameters in the model (e.g. Glen A creep parameter, sliding parameter or the calving constant of proportionality). If you are looking on ways or ideas on how to do this, you are in the right tutorial!\n", + "After running our OGGM experiments we often want to compare the model output to other gridded observations, or maybe we want to use additional data sets that are not currently in the [OGGM shop](https://docs.oggm.org/en/stable/input-data.html) to calibrate parameters in the model (e.g. Glen A creep parameter, sliding parameter or the calving constant of proportionality). If you are looking for ways or ideas on how to do this, you are in the right tutorial!\n", "\n", "In OGGM, a local map projection is defined for each glacier entity in the RGI inventory following the methods described in [Maussion and others (2019)](https://gmd.copernicus.org/articles/12/909/2019/). The model uses a Transverse Mercator projection centred on the glacier. 
A lot of data sets, especially those from Polar regions can have a different projections and if we are not careful, we would be making mistakes when we compare them with our model output or when we use such data sets to constrain our model experiments.\n", "\n", @@ -63,7 +63,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Lets define the glaciers for the run " + "## Let's define the glaciers for the run" ] }, { @@ -227,7 +227,7 @@ " output_folder=None, # by default the final file is saved at cfg.PATHS['working_dir']\n", " output_filename='gridded_data_merged', # the default file is saved as gridded_data_merged.nc\n", " included_variables='all', # you also can provide a list of variables here\n", - " add_topography=False, # here we can add topography for the new extend\n", + " add_topography=False, # here we can add topography for the new extent\n", " reset=False, # set to True if you want to overwrite an already existing file (for playing around)\n", ")" ] }, @@ -281,7 +281,7 @@ "source": [ "## Add data from OGGM-Shop: bed topography data\n", "\n", - "Additionally to the data produced by the model, the [OGGM-Shop](https://docs.oggm.org/en/stable/input-data.html) counts with routines that will automatically download and reproject other useful data sets into the glacier projection (For more information also check out this [notebook](https://oggm.org/tutorials/stable/notebooks/oggm_shop.html)). This data will be stored under the file described above. " + "In addition to the data produced by the model, the [OGGM-Shop](https://docs.oggm.org/en/stable/input-data.html) provides routines that will automatically download and reproject other useful data sets into the glacier projection (for more information also check out this [notebook](https://oggm.org/tutorials/stable/notebooks/oggm_shop.html)). This data will be stored under the file described above."
] }, { @@ -376,7 +376,7 @@ "\n", "If you want more velocity products, feel free to open a new topic on the OGGM issue tracker!\n", "\n", - "> this will download severals large datasets **depending on your connection, it might take some time** ..." + "> this will download several large datasets **depending on your connection, it might take some time** ..." ] }, { @@ -511,7 +511,7 @@ " bin_variables=['consensus_ice_thickness', \n", " 'millan_vx',\n", " 'millan_vy'],\n", - " preserve_totals=[True, False, False] # I\"m actually not sure if preserving totals is meaningful with velocities - likely not\n", + " preserve_totals=[True, False, False] # I am actually not sure if preserving totals is meaningful with velocities - likely not\n", " # NOTE: we could bin variables according to max() as well!\n", " )" ] @@ -795,7 +795,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Inversion velocities are for a glacier at equilibrium - this is not always meaningful. Lets do a run and store the velocities with time:" + "Inversion velocities are for a glacier at equilibrium - this is not always meaningful. Let's do a run and store the velocities with time:" ] }, { diff --git a/notebooks/tutorials/inversion.ipynb b/notebooks/tutorials/inversion.ipynb index d192bbfb..0140efbf 100644 --- a/notebooks/tutorials/inversion.ipynb +++ b/notebooks/tutorials/inversion.ipynb @@ -26,7 +26,7 @@ "\n", "There is no reason to think that the ice parameters are the same between\n", "neighboring glaciers. There is currently no \"good\" way to calibrate them,\n", - "or at least no generaly accepted one.\n", + "or at least no generally accepted one.\n", "We won't discuss the details here, but we provide a script to illustrate\n", "the sensitivity of the model to this choice.\n", "\n", @@ -141,7 +141,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The data are stored as csv files in the working directory. The easiest way to read them is to use [pandas](http://pandas.pydata.org/)!" 
+ "The data are stored as csv files in the working directory. The easiest way to read them is to use [pandas](https://pandas.pydata.org/)!" ] }, { @@ -515,7 +515,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "This is however not always very useful because OGGM can only plot on a map as large as the local glacier map of the first glacier in the list. See [this issue](https://github.com/OGGM/oggm/issues/1007) for a discussion about why. In this case, we had a large enough border, and like that all neighboring glacers are visible." + "This is however not always very useful because OGGM can only plot on a map as large as the local glacier map of the first glacier in the list. See [this issue](https://github.com/OGGM/oggm/issues/1007) for a discussion about why. In this case, we had a large enough border, and like that all neighboring glaciers are visible." ] }, { diff --git a/notebooks/tutorials/ioggm.ipynb b/notebooks/tutorials/ioggm.ipynb index 431533b0..4a8f745e 100644 --- a/notebooks/tutorials/ioggm.ipynb +++ b/notebooks/tutorials/ioggm.ipynb @@ -9,7 +9,7 @@ "\n", "This tutorial gives you the tools to run IGM within OGGM and also compare it with OGGM runs. \n", "\n", - "**This is very much work in progress.** You'll need an IGM installation for this to run. The notebook currently does not run on OGGM Hub, because of the Tensorflow depedency. We are working on it!" + "**This is very much work in progress.** You'll need an IGM installation for this to run. The notebook currently does not run on OGGM Hub, because of the Tensorflow dependency. We are working on it!" ] }, { @@ -240,7 +240,7 @@ "\n", "# set values outside the glacier to np.nan\n", "# using the glacier mask, as otherwise there is more ice from surrounding glaciers in the domain, \n", - "# which shouldn't accumulate more ice, still adds to the total volume/area of the domain.. 
either mask it out beforehand or before doing plots.\n", + "# which shouldn't accumulate more ice, still adds to the total volume/area of the domain ... either mask it out beforehand or before doing plots.\n", "# experiment with it: does the mass outside of the mask only decrease? => ?\n", "gd['cook23_thk_masked'] = xr.where(gd.glacier_mask, gd.cook23_thk, np.nan)\n", "\n" diff --git a/notebooks/tutorials/kcalving_parameterization.ipynb b/notebooks/tutorials/kcalving_parameterization.ipynb index dd46b671..9a0ed23f 100644 --- a/notebooks/tutorials/kcalving_parameterization.ipynb +++ b/notebooks/tutorials/kcalving_parameterization.ipynb @@ -249,7 +249,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Let's have some fun! We apply a periodic forcing to our glacier and study the advance and retreat of our glacier (the simulation below can take a couple minutes to run)." + "Let's have some fun! We apply a periodic forcing to our glacier and study the advance and retreat of our glacier (the simulation below can take a couple of minutes to run)." ] }, { @@ -363,7 +363,7 @@ "- sliding is increased when below water\n", "- some other logics and checks that where needed to make calving more realistic in edge cases (this is a large part of the code)\n", "\n", - "Otherwise it works pretty much like the existing one:" + "Otherwise, it works pretty much like the existing one:" ] }, { diff --git a/notebooks/tutorials/massbalance_calibration.ipynb b/notebooks/tutorials/massbalance_calibration.ipynb index 23769108..35731b60 100644 --- a/notebooks/tutorials/massbalance_calibration.ipynb +++ b/notebooks/tutorials/massbalance_calibration.ipynb @@ -13,7 +13,7 @@ "source": [ "The default mass-balance (MB) model of OGGM is a very standard [temperature index melt model](https://www.sciencedirect.com/science/article/pii/S0022169403002579). 
\n", "\n", - "In versions before 1.6, OGGM had a complex calibration procedure which originated from the times where we had only observations from a few hundred glaciers. We used them to calibrate the model and then a so-called infamous *tstar* (infamous in very niche circles of first- and second-gen OGGMers) which was interpolated to glaciers without observations (see the [original publication](https://www.the-cryosphere.net/6/1295/2012/tc-6-1295-2012.html)). This method was very powerful but, as new observational datasets emerged, we can now calibrate on a glacier-per-glacier basis. With the new era of geodetic observations, OGGM uses the average geodetic observations from Jan 2000--Jan 2020 of [Hugonnet al. 2021](https://www.nature.com/articles/s41586-021-03436-z), that are now available for almost every glacier world-wide. \n", + "In versions before 1.6, OGGM had a complex calibration procedure which originated from the times when we had only observations from a few hundred glaciers. We used them to calibrate the model and then a so-called infamous *tstar* (infamous in very niche circles of first- and second-gen OGGMers) which was interpolated to glaciers without observations (see the [original publication](https://www.the-cryosphere.net/6/1295/2012/tc-6-1295-2012.html)). This method was very powerful but, as new observational datasets emerged, we can now calibrate on a glacier-per-glacier basis. With the new era of geodetic observations, OGGM uses the average geodetic observations from Jan 2000--Jan 2020 of [Hugonnet et al. 2021](https://www.nature.com/articles/s41586-021-03436-z), which are now available for almost every glacier world-wide.\n", "\n", "Pre-processed directories from OGGM (from the Bremen server) have been calibrated for you, based on a specific climate dataset (W5E5) and our own dedicated calibration strategy. But, what if you want to use another climate dataset? 
Or another reference dataset?\n", "\n", @@ -40,9 +40,7 @@ "import matplotlib\n", "import pandas as pd\n", "import numpy as np\n", - "import os\n", "\n", - "import oggm\n", "from oggm import cfg, utils, workflow, tasks, graphics\n", "from oggm.core import massbalance\n", "from oggm.core.massbalance import mb_calibration_from_scalar_mb, mb_calibration_from_geodetic_mb, mb_calibration_from_wgms_mb" @@ -56,14 +54,14 @@ "source": [ "cfg.initialize(logging_level='WARNING')\n", "cfg.PATHS['working_dir'] = utils.gettempdir(dirname='OGGM-calib-mb', reset=True)\n", - "cfg.PARAMS['border'] = 10" + "cfg.PARAMS['border'] = 80 # 10, todo: replace back to 10, once available" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "We start from two well known glaciers in the Austrian Alps, Kesselwandferner and Hintereisferner. But you can also choose any other other glacier, e.g. from [this list](https://github.com/OGGM/oggm-sample-data/blob/master/wgms/rgi_wgms_links_20220112.csv). " + "We start from two well known glaciers in the Austrian Alps, Kesselwandferner and Hintereisferner. But you can also choose any other glacier, e.g. from [this list](https://github.com/OGGM/oggm-sample-data/blob/master/wgms/rgi_wgms_links_20220112.csv)." ] }, { @@ -73,7 +71,7 @@ "outputs": [], "source": [ "# we start from preprocessing level 3\n", - "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/elev_bands/W5E5/'\n", + "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/elev_bands/W5E5/per_glacier'\n", "gdirs = workflow.init_glacier_directories(['RGI60-11.00787', 'RGI60-11.00897'], from_prepro_level=3, prepro_base_url=base_url)" ] }, @@ -137,7 +135,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "This is called the \"baseline climate\" in OGGM and is necessary to calibrate the model against observations. 
Ideally, the baseline climate should be real observations as perfect as possible, but in reality this is not the case. Often, gridded climate datasets have biases - we need to take this into accound during our calibration. Let's have a look at the mass balance parameters for both glaciers:" + "This is called the \"baseline climate\" in OGGM and is necessary to calibrate the model against observations. Ideally, the baseline climate should be real observations as perfect as possible, but in reality this is not the case. Often, gridded climate datasets have biases - we need to take this into account during our calibration. Let's have a look at the mass balance parameters for both glaciers:" ] }, { @@ -166,11 +164,11 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We will explain later what all these mean. Lets focus on these for now: `melt_f`, `prcp_fac`, and `temp_bias` which have been calibrated with `reference_mb` over the reference period `reference_period`.\n", + "We will explain later what all these mean. Let's focus on these for now: `melt_f`, `prcp_fac`, and `temp_bias` which have been calibrated with `reference_mb` over the reference period `reference_period`.\n", "\n", "Per default the [Hugonnet et al. (2021)](https://www.nature.com/articles/s41586-021-03436-z) average geodetic observation is used over the entire time period Jan 2000 to Jan 2020 to calibrate the MB model parameter(s) for every single glacier.\n", "\n", - "For our two neighboring example glaciers, that share the same climate gridpoint, the same pre-calibrated `temp_bias` and the same `melt_f` is used. The `prcp_fac` is slighly different, hence in that example, changing the `prcp_fac` within the narrow range of $[0.8,1.2]$ was sufficient to match the MB model to the observations. \n", + "For our two neighboring example glaciers, which share the same climate gridpoint, the same pre-calibrated `temp_bias` and the same `melt_f` are used. 
The `prcp_fac` is slightly different, hence in that example, changing the `prcp_fac` within the narrow range of $[0.8,1.2]$ was sufficient to match the MB model to the observations.\n", "\n", "Note that the two glaciers are in the same climate (from the forcing data) but are very different in size, orientation and geometry. So, at least one of the MB model parameters is different (here the `prcp_fac`) and also the observed MB values vary:" ] @@ -212,7 +210,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Note alst that there are some global MB parameters (`mb_global_params`), which we assume to be the same globally for every glacier:" + "Note also that there are some global MB parameters (`mb_global_params`), which we assume to be the same globally for every glacier:" ] }, { @@ -295,7 +293,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Perfect! Our MB model reproduces the average observed MB, so the calibrated worked!" + "Perfect! Our MB model reproduces the average observed MB, so the calibration worked!" ] }, { @@ -323,7 +321,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "For example, let's calibrate the `melt_f` to match the same observations as before (i.e., average geodetic observation) and fix the other parameters. We will use the more general function [mb_calibration_from_scalar_mb](https://docs.oggm.org/en/latest/generated/oggm.tasks.mb_calibration_from_scalar_mb.html) which is most flexible. However, it requires you to give manually the observed MB on which you want to calibrate:" + "For example, let's calibrate the `melt_f` to match the same observations as before (i.e., average geodetic observation) and fix the other parameters. We will use the more general function [mb_calibration_from_scalar_mb](https://docs.oggm.org/en/latest/generated/oggm.tasks.mb_calibration_from_scalar_mb.html) which is more flexible. 
However, it requires you to manually provide the observed MB on which you want to calibrate:" ] }, { @@ -379,7 +377,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Only `melt_f` is used to calibrate the MB model. `prcp_fac` was chosen depending on the average winter precipitation (explained below) and `temp_bias` is 0. " + "Only `melt_f` is used to calibrate the MB model. `prcp_fac` was chosen depending on the average winter precipitation (explained below) and `temp_bias` is 0." ] }, { @@ -537,7 +535,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Ok, but actually you might trust the in-situ observations much more than the geodetic observation. So if you are only interested in glaciers with these in-situ observations, you can also use the in-situ observations for calibration. This might allow you also to calibrate over a longer time period and to use additional informations (such as the interannual MB variability, seasonal MB, ...). However, bare in mind that there are often gaps in the in-situ MB time series and make sure that you calibrate to the same modelled time period!\n", + "Ok, but actually you might trust the in-situ observations much more than the geodetic observation. So if you are only interested in glaciers with these in-situ observations, you can also use the in-situ observations for calibration. This might also allow you to calibrate over a longer time period and to use additional information (such as the interannual MB variability, seasonal MB, ...). However, bear in mind that there are often gaps in the in-situ MB time series and make sure that you calibrate to the same modelled time period!\n", "\n", "Attention: For the Hintereisferner glacier with an almost 70-year long time series, the assumption that we make, i.e., that the area does not change over the time period, gets more problematic. So think twice before repeating this at home!"
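The caveat above about gaps in in-situ MB series can be sketched in a few lines. All numbers below are invented and this is not OGGM code: the point is simply to drop the gap years before computing the reference MB, so that model and observations are later averaged over exactly the same years.

```python
import numpy as np
import pandas as pd

# Invented annual specific MB series (kg m-2 yr-1) with two gap years,
# standing in for an in-situ (WGMS-like) record
years = np.arange(2000, 2010)
obs = pd.Series([-500., -650., np.nan, -300., -720., np.nan, -410., -555., -600., -480.],
                index=years)

# Drop the gap years first, so that the modelled MB can later be
# evaluated over exactly the same years as the observations
obs_valid = obs.dropna()
valid_years = obs_valid.index.values
ref_mb = obs_valid.mean()

print(f'{len(valid_years)} valid years, reference MB = {ref_mb:.1f} kg m-2 yr-1')
```

When handing such a `ref_mb` to a calibration routine, the modelled specific MB would then be averaged over `valid_years` only, not over the full 2000-2009 span.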
] @@ -715,7 +713,7 @@ "source": [ "If we believe that our observations are true, maybe the climate or the processes represented by the MB model are erroneous. \n", "\n", - "What can we do to still match the `ref_mb`? We can either change the `melt_f` ranges by setting other values to the parameters above or we can allow that another MB model parameter is changed (i.e., `calibrate_param2`). This is basically very similar to the three-step-calibration \n", + "What can we do to still match the `ref_mb`? We can either change the `melt_f` ranges by setting other values to the parameters above, or we can allow another MB model parameter to change (i.e., `calibrate_param2`). This is basically very similar to the three-step-calibration\n", "first introduced in [Huss & Hock 2015](https://doi.org/10.3389/feart.2015.00054), but here you can choose your parameter ranges and parameter order yourself. \n", "\n", "To reduce these MB model calibration errors, you can first change the `melt_f`, fix it at the lower or upper limit, and then change the `temp_bias`." @@ -753,7 +751,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "In the same way, you can also use your own MB observations or fake data to calibrate the MB model. We use here as an example, an unrealistically high positive MB for the Hintereisferner over a 10-year time period. To allow the calibration to happen, we need again to set `calibrate_param2`! Here we will use the `prcp_fac` as second parameter for a change:" + "In the same way, you can also use your own MB observations or fake data to calibrate the MB model. As an example, we use an unrealistically high positive MB for the Hintereisferner over a 10-year period. To allow the calibration to happen, we again need to set `calibrate_param2`! 
Here we will use the `prcp_fac` as the second parameter for a change:" ] }, { @@ -763,7 +761,7 @@ "outputs": [], "source": [ "ref_period = '2000-01-01_2010-01-01'\n", - "ref_mb = 2000 # Let's use an unrealistically positive mass-balance\n", + "ref_mb = 2000 # Let's use an unrealistically positive mass-balance\n", "mb_calibration_from_scalar_mb(gdir_hef, ref_mb=ref_mb,\n", " ref_period=ref_period, \n", " overwrite_gdir=True, \n", @@ -871,14 +869,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Overparameteristion or the magic choice of the best calibration option:" + "## Overparameterisation or the magic choice of the best calibration option:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "We found already some combinations that equally well match the average MB over a given time period. As we only use only one observation per glacier (i.e., per default the average geodetic MB from 2000-2020), but have up to three free MB model parameters, the MB model is overparameterised. That means, there are in theory an infinite amount of calibration options possible that equally well match the one obervation. Let's look a bit more systematically into that:\n", + "We already found some combinations that match the average MB over a given time period equally well. As we only use one observation per glacier (i.e., per default the average geodetic MB from 2000-2020), but have up to three free MB model parameters, the MB model is overparameterised. That means that, in theory, there is an infinite number of possible calibration options that match the one observation equally well. Let's look a bit more systematically into that:\n", "\n", "We will use a range of different `prcp_fac` and then calibrate the `melt_f` accordingly to always match to the default average MB (`ref_mb`) over the reference period (`ref_period`)."
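The sweep announced above can be illustrated with a toy temperature-index balance (all numbers invented; this is not the OGGM implementation): for every choice of `prcp_fac` there is exactly one `melt_f` that reproduces the same average MB, which is the overparameterisation problem in a nutshell.

```python
# Toy climatology: annual solid precipitation and melt-degree sum (invented values)
solid_prcp = 1500.0    # kg m-2 yr-1
melt_degrees = 400.0   # K day yr-1 (sum of melting-temperature excess)
ref_mb = -400.0        # target average specific MB, kg m-2 yr-1

# Toy model: mb = prcp_fac * solid_prcp - melt_f * melt_degrees
# -> solving for melt_f gives exactly one matching value per prcp_fac
melt_fs = {}
for prcp_fac in (1.0, 2.0, 3.0):
    melt_f = (prcp_fac * solid_prcp - ref_mb) / melt_degrees
    melt_fs[prcp_fac] = melt_f
    # every (prcp_fac, melt_f) pair reproduces the same average MB
    assert abs(prcp_fac * solid_prcp - melt_f * melt_degrees - ref_mb) < 1e-9

print(melt_fs)
```

All three pairs match the single observation perfectly; only additional information (such as the interannual variability discussed next) can discriminate between them.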
] @@ -977,7 +975,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The larger the `prcp_fac` and the `melt_f`, the larger is the interannual MB variability. For glaciers with in-situ observations, we can find a combination of `prcp_fac` and `melt_f` that has a similar interannnual MB variability than the observations. For example, you can choose a MB model parameter combination where the standard deviation quotient of the annual the modelled and observed MB is near to 1: " + "The larger the `prcp_fac` and the `melt_f`, the larger the interannual MB variability. For glaciers with in-situ observations, we can find a combination of `prcp_fac` and `melt_f` that has a similar interannual MB variability to the observations. For example, you can choose a MB model parameter combination where the standard deviation quotient of the annual modelled and observed MB is close to 1:" ] }, { @@ -1024,7 +1022,7 @@ " mb_temp_b_sens = massbalance.MonthlyTIModel(gdir_hef)\n", " # ok, we actually matched the new ref_mb\n", " spec_mb_temp_b_sens_dict[temp_bias] = mb_temp_b_sens.get_specific_mb(h, w, year=np.arange(2000,2020,1))\n", - " except RuntimeError: #, 'RGI60-11.00897: ref mb not matched. Try to set calibrate_param2'\n", + " except RuntimeError: # 'RGI60-11.00897: ref mb not matched. Try to set calibrate_param2'\n", " pass" ] }, @@ -1096,7 +1094,7 @@ " mb_pf_temp_b_sens = massbalance.MonthlyTIModel(gdir_hef)\n", " # ok, we actually matched the new ref_mb\n", " spec_mb_pf_temp_b_sens_dict[temp_bias] = mb_pf_temp_b_sens.get_specific_mb(h, w, year=np.arange(2000,2020,1))\n", - " except RuntimeError: #, 'RGI60-11.00897: ref mb not matched. Try to set calibrate_param2'\n", + " except RuntimeError: # 'RGI60-11.00897: ref mb not matched. 
Try to set calibrate_param2'\n", " pass" ] }, @@ -1169,7 +1167,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "In the [preprocessed directories](https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.1/) MB model calibration is done by using `mb_calibration_from_geodetic_mb` to determine the MB model parameters. We will reproduce the option from the preprocessed directory first:" + "In the [preprocessed directories](https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6) MB model calibration is done by using `mb_calibration_from_geodetic_mb` to determine the MB model parameters. We will reproduce the option from the preprocessed directory first:" ] }, { @@ -1222,7 +1220,7 @@ "source": [ "`prcp_fac` is chosen from a relation to the average winter precipitation. Fig.1 (below) shows the used relationship between winter precipitation and `prcp_fac`.\n", "\n", - "It was calibrated by adapting `melt_f` and `prcp_fac` to match the average geodetic and winter MB on around 100 glaciers with both informations available. The found relationship of decreasing `prcp_fac` for increasing winter precipitation makes sense, as glaciers with already a large winter precipitation should not be corrected with a large multiplicative `prcp_fac`.\n", + "It was calibrated by adapting `melt_f` and `prcp_fac` to match the average geodetic and winter MB on around 100 glaciers where both datasets are available. [Here](https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/_notebooks/oggm_v16_winter_mb_match_to_prescribe_prcp_fac/figures/RGI60-13.05504_winter_mb_GSWP3_W5E5.png) is an example figure, which shows how the precipitation factor was chosen for such a glacier. The found relationship of decreasing `prcp_fac` for increasing winter precipitation makes sense, as glaciers with already a large winter precipitation should not be corrected with a large multiplicative `prcp_fac`.\n", "\n", "
\n", " \n", @@ -1246,14 +1244,21 @@ "prcp_fac_array = utils.clip_array(prcp_fac, r0, r1)\n", "plt.plot(w_prcp_array, prcp_fac_array)\n", "plt.xlabel(r'winter daily mean precipitation' +'\\n'+r'(kg m$^{-2}$ day$^{-1}$)')\n", - "plt.ylabel('precipitation factor (prcp_fac)'); plt.title('Fig. 1');" + "plt.ylabel('precipitation factor (prcp_fac)'); plt.title('Fig. 1 (only valid for W5E5 and the specific standard OGGM settings)');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "### Second step: data informed temperature bias " + "This relationship is only valid for W5E5 with the specific standard OGGM settings. We repeated the same approach with ERA5, and found that the precipitation factor decreases rather linearly with the winter precipitation for this dataset ([figure that shows the fits for both climate datasets and includes the underlying winter-MB-calibrated precipitation factors](https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/_notebooks/oggm_v16_winter_mb_match_to_prescribe_prcp_fac/comparison_oggm_v16_GSWP3_W5E5_vs_ERA5_calib_log_fit_pf_distribution_change_monthly_cte_melt_f_minus_1.png)). The higher resolution of ERA5 compared to W5E5 may explain the different estimated relationships. Further details are provided in the [jupyter notebook used to estimate the fits](https://nbviewer.org/urls/cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/_notebooks/oggm_v16_winter_mb_match_to_prescribe_prcp_fac/match_winter_mb_w5e5_era5.ipynb)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Second step: data informed temperature bias" ] }, { @@ -1262,7 +1267,7 @@ "source": [ "Using this data-informed precipitation factor, we then do a global calibration where the temperature bias (`temp_bias`) is calibrated, while the melt factor (`melt_f`) is fixed at 5 kg m-2 day-1 K-1 (default value based on [Schuster et al., 2023](https://doi.org/10.1017/aog.2023.57)). 
\n", "\n", - "The idea is that if many glaciers within the same grid point need a temperature bias to reach the obeserved MB, this indicates that a systematic correction is necessary (at least for this MB model in particular). In fact, we can plot the median bias required to match MB observations using this technique, which gives us the following plot:\n", + "The idea is that if many glaciers within the same grid point need a temperature bias to reach the observed MB, this indicates that a systematic correction is necessary (at least for this MB model in particular). In fact, we can plot the median bias required to match MB observations using this technique, which gives us the following plot (here for OGGM v1.6.1, but it is very similar in v1.6.3):\n", "\n", "![err](https://user-images.githubusercontent.com/10050469/224318400-ec1d8825-d7e7-4cdb-94f3-ebb95b8f7120.jpg)" ] @@ -1273,11 +1278,11 @@ "source": [ "The fact that the `temp_bias` parameter is spatially correlated (many regions are all blue or red) indicate that something in the data needs to be corrected for our model. It is this information that we use to inform the next step.\n", "\n", - "**The code we used for this step is available [here](https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/calibration/1.6.1/). As explained above, we do a global run with fixed precip factor and melt factor, then store the resulting parameters in a csv file used by OGGM. The csv file can be found [here](https://cluster.klima.uni-bremen.de/~oggm/ref_mb_params/oggm_v1.6/w5e5_temp_bias_v2023.4.csv).**\n", + "**The code we used for this step is available [here](https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/calibration/1.6.3/). As explained above, we do a global run with a fixed precipitation factor and melt factor, then store the resulting parameters in a csv file used by OGGM. This is repeated for each climate dataset and RGI version. 
The csv file using W5E5 and RGI 6.2 can be found [here](https://cluster.klima.uni-bremen.de/~oggm/ref_mb_params/oggm_v1.6/w5e5_rgi6_perglacier_temp_bias_v2025.6.2.csv).**\n", "\n", "
\n", " \n", - " Similar to the precip factor, we do not necessarily recommend our users to use this method in their own calibration effort. We don't think it's a bad method, but given the importance of calibration we think that diversity is important!\n", + " Similar to the precipitation factor, we do not necessarily recommend our users to use this method in their own calibration effort. We don't think it's a bad method, but given the importance of calibration we think that diversity is important!\n", " \n", "
" ] @@ -1295,9 +1300,11 @@ "source": [ "Finally, we now run [mb_calibration_from_scalar_mb](https://docs.oggm.org/en/latest/generated/oggm.tasks.mb_calibration_from_scalar_mb.html) again for each glacier, as follows:\n", "- use the first guess: `melt_f` = 5, `prcp_fac` = data-informed from step 1, `temp_bias` = data-informed from step 2\n", - "- if this doesn't match (this would be highly unlikely), allow `prcp_fac` to vary again between 0.8 and 1.2 times the original guess ($\\pm$20%). This is justified by the fact that the first guess for precipitation is also highly uncertain. If that worked, the calibration stops (33.6% of all glaciers worldwide are calibrated this way, for 41.1% of the total area).\n", - "- if the above did not work, allow `melt_f` to vary again. If that worked, the calibration stops (60.6% of all glaciers worldwide are calibrated this way, for 41.1% of the total area).\n", - "- finally, if the above did not work, allow `temp_bias` to vary again (5.9% of all glaciers worldwide are calibrated this way, for 2.2% of the total area).\n", + "- if this doesn't match (this would be highly unlikely), allow `prcp_fac` to vary again between 0.8 and 1.2 times the original guess ($\\pm$20%). This is justified by the fact that the first guess for precipitation is also highly uncertain. If that worked, the calibration stops (33.6% of all glaciers worldwide are calibrated this way, for 41.3% of the total area).\n", + "- if the above did not work, allow `melt_f` to vary again. If that worked, the calibration stops (60.4% of all glaciers worldwide are calibrated this way, for 56.4% of the total area).\n", + "- finally, if the above did not work, allow `temp_bias` to vary again (6.0% of all glaciers worldwide are calibrated this way, for 2.2% of the total area).\n", + "\n", + "*(numbers computed by the [massbalance_global_params.ipynb notebook](massbalance_global_params.ipynb) and are valid for the OGGM v1.6.3 (2025.6) W5E5 gdir)*\n", "\n", "
\n", " \n", @@ -1335,8 +1342,8 @@ "- We can use different observational data for calibration: \n", " - calibrating to geodetic observations using different time periods, to in-situ direct glaciological observations from the WGMS (if available) or to other custom MB data. \n", "- There exist different ways of calibrating to the average observational data:\n", - " - You can use the same option (`informed_threestep`) as done operationally for the [preprocessed directories (L>=3)](https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.1/elev_bands/W5E5/) on all glaciers world-wide \n", - " - you can calibrate the `melt_f`, and have the `prcp_fac` and `temp_bias` fixed. If the calibration does not work, the `temp_bias` can be varied aswell. \n", + " - You can use the same option (`informed_threestep`) as done operationally for the [preprocessed directories (L>=3)](https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6) on all glaciers world-wide\n", + " - you can calibrate the `melt_f`, and have the `prcp_fac` and `temp_bias` fixed. If the calibration does not work, the `temp_bias` can be varied as well.\n", " - you can also calibrate instead on `prcp_fac` or `temp_bias` and fix the other parameters.\n", "- However, we showed that the parameter combination choice has an influence on other estimates than the average MB (which then influences also future projections). \n", "- As user of OGGM, you might just use the calibrated MB model parameters from the preprocessed directories. Nevertheless, it is good to be aware of the overparameterisation problem. In addition, if you want to include uncertainties of the MB model calibration, you could include additional experiments that use another calibration option and create your own preprocessed directories with these options. With more available observational data or improved climate data, you might also be able to use better ways to calibrate the MB model parameters. 
\n" @@ -1357,9 +1364,9 @@ "metadata": { "hide_input": false, "kernelspec": { - "display_name": "Python 3 (ipykernel)", + "display_name": "oggm_v16", "language": "python", - "name": "python3" + "name": "oggm_v16" }, "language_info": { "codemirror_mode": { @@ -1371,7 +1378,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.4" + "version": "3.11.3" }, "latex_envs": { "LaTeX_envs_menu_present": true, diff --git a/notebooks/tutorials/massbalance_global_params.ipynb b/notebooks/tutorials/massbalance_global_params.ipynb index 3553f361..6ae598e0 100644 --- a/notebooks/tutorials/massbalance_global_params.ipynb +++ b/notebooks/tutorials/massbalance_global_params.ipynb @@ -31,8 +31,7 @@ "from oggm import utils\n", "import pandas as pd\n", "import numpy as np\n", - "import matplotlib.pyplot as plt\n", - "import seaborn as sns" + "import matplotlib.pyplot as plt" ] }, { @@ -51,9 +50,9 @@ "outputs": [], "source": [ "# W5E5 elevbands, no spinup \n", - "url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/elev_bands/W5E5/RGI62/b_080/L5/summary/'\n", + "url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/elev_bands/W5E5/per_glacier/RGI62/b_080/L5/summary/'\n", "\n", - "# this can take some time to downloan\n", + "# this can take some time to download\n", "df = []\n", "for rgi_reg in range(1, 19):\n", " fpath = utils.file_downloader(url + f'glacier_statistics_{rgi_reg:02d}.csv')\n", @@ -71,7 +70,7 @@ "id": "5", "metadata": {}, "source": [ - "## Global statistics of OGGM's 1.6.1 \"informed three steps\" method" + "## Global statistics of OGGM's 1.6.3 \"informed three steps\" method" ] }, { @@ -79,7 +78,7 @@ "id": "6", "metadata": {}, "source": [ - "As explained in the [mass-balance calibration procedure in v1.6](massbalance_calibration.ipynb) notebook, the \"informed three steps\" method provides first guesses for the precipitation factor and the temperature bias. 
We then calibrate each glacier in three steps - let's check the number of glaciers calibrated this way:" + "As explained in the [mass-balance calibration procedure in v1.6](massbalance_calibration.ipynb) notebook, the \"informed three steps\" method provides first guesses for the precipitation factor and the temperature bias. We then calibrate each glacier in three steps. Let's check the number of glaciers calibrated this way:" ] }, { @@ -89,7 +88,7 @@ "source": [ "Step 0: use the first guess `melt_f` = 5, `prcp_fac` = data-informed from winter precipitation, `temp_bias` = data-informed from the global calibration with fixed parameters (see [mass-balance calibration procedure in v1.6](massbalance_calibration.ipynb) for details). \n", "\n", - "Step 1: if Step 0 doesn't match (only likely to happen if there is one isolated glacier in a climate grid point), allow `prcp_fac` to vary again between 0.8 and 1.2 times the roiginal guess ($\\pm$20%). This is justified by the fact that the first guess for precipitation is also highly uncertain. If that worked, the calibration stops.\n", + "Step 1: if Step 0 doesn't match (only likely to happen if there is one isolated glacier in a climate grid point), allow `prcp_fac` to vary again between 0.8 and 1.2 times the original guess ($\\pm$20%). This is justified by the fact that the first guess for precipitation is also highly uncertain. If that worked, the calibration stops.\n", "\n", "To find out which glaciers have been calibrated after step 1, we count the number of glaciers with a melt factor of exactly 5:" ] @@ -116,7 +115,7 @@ "\n", "Step 3: finally, if the above did not work, allow `temp_bias` to vary again, fixing the other parameters to their last value.\n", "\n", - "To check wether these steps were successful from our files, we can compute the number of glaciers which have hit the \"hard limits\" of the allowed melt factor range, i.e. 
have reached step 3, and then substract them from the total:" + "To check whether these steps were successful from our files, we can compute the number of glaciers which have hit the \"hard limits\" of the allowed melt factor range, i.e. have reached step 3, and then subtract them from the total:" ] }, { @@ -260,7 +259,7 @@ "id": "20", "metadata": {}, "source": [ - "- a substantial (33%) part of all glaciers are attributed the default melt factor of 5 after the first guesses in climate data bias correction. In other words, this means that we are substantially correcting the climate forcing to \"match\" the presence of a glacier. Other calibration methods are using similar techniques (they differ in the details and the allowed range of parameter values)\n", + "- a substantial share (34%) of all glaciers is assigned the default melt factor of 5 after the first guesses in climate data bias correction. In other words, this means that we are substantially correcting the climate forcing to \"match\" the presence of a glacier. Other calibration methods use similar techniques (they differ in the details and the allowed range of parameter values)\n", "- the large amount of glaciers with melt factor of exactly 5 is problematic, but is mitigated somewhat by the dynamical spinup (see below)\n", "- the largest bulk of the glacier area is calibrated with \"pre-informed\" precip factor and temperature bias, and have a calibrated melt factor. The resulting melt factor distribution is centered around 5 and has a long tail towards higher values.\n", "- in general, weighting the distributions by area tends to reduce the extremes."
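The count-versus-area contrast in the bullets above can be checked with a few lines of pandas; the mini table below uses invented numbers standing in for the real `glacier_statistics_*.csv` summary files.

```python
import pandas as pd

# Invented mini "glacier statistics" table (the real files have many more columns)
df = pd.DataFrame({
    'melt_f': [5.0, 5.0, 7.3, 3.1, 5.0, 9.9],
    'rgi_area_km2': [0.5, 1.2, 45.0, 12.0, 0.8, 3.5],
})

# Glaciers that kept the default melt factor of exactly 5
is_default = df['melt_f'] == 5.0
count_share = is_default.mean()  # fraction of glaciers
area_share = df.loc[is_default, 'rgi_area_km2'].sum() / df['rgi_area_km2'].sum()

print(f'{count_share:.0%} of glaciers, but only {area_share:.1%} of the area')
```

On this toy table many small glaciers keep the default melt factor while representing little area, which is the same count-versus-area effect visible in the global statistics.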
@@ -271,7 +270,7 @@ "id": "21", "metadata": {}, "source": [ - "## Influence of dynamical spinup " + "## Influence of the dynamical spinup" ] }, { @@ -279,7 +278,7 @@ "id": "22", "metadata": {}, "source": [ - "The dynamical spinup procedure (explained in this [10 minutes tutorial](../10minutes/dynamical_spinup.ipynb) and in more detail in [this tutorial](dynamical_spinup.ipynb)) starts from the parameters calibrated above with a *static* geometry and calibrate the melt factor again using an iterative procedure, making sure that the parameters and the past evolution of the glacier are consistent with the past evolution of the glacier. In doing so, it achieves two things:\n", + "The dynamical spinup procedure (explained in this [10 minutes tutorial](../10minutes/dynamical_spinup.ipynb) and in more detail in [this tutorial](dynamical_spinup.ipynb)) starts from the parameters calibrated above with a *static* geometry. The procedure calibrates the melt factor again using an iterative procedure, making sure that the calibrated parameters are consistent with the past evolution of the glacier. In doing so, it achieves two things:\n", "- the *actually modelled* mass balance of glaciers during a dynamical run matches observations better than without\n", "- it reshuffles the melt factors a bit\n", "\n", @@ -294,7 +293,7 @@ "outputs": [], "source": [ "# W5E5 elevbands, with spinup \n", - "url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/elev_bands/W5E5_spinup/RGI62/b_160/L5/summary/'\n", + "url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/elev_bands/W5E5/per_glacier_spinup/RGI62/b_160/L5/summary/'\n", "\n", "# this can take some time\n", "dfs = []\n", @@ -319,7 +318,7 @@ "id": "25", "metadata": {}, "source": [ - "First of all, let's see how many glaciers have had their melt factor changed as a result of the dynamical calibration (i.e. 
dynamical calibration was succesful):" + "First of all, let's see how many glaciers have had their melt factor changed as a result of the dynamical calibration (i.e. dynamical calibration was successful):" ] }, { @@ -412,7 +411,6 @@ }, "source": [ "## What's next?\n", - "\n", "- return to the [OGGM documentation](https://docs.oggm.org)\n", "- back to the [table of contents](../welcome.ipynb)" ] diff --git a/notebooks/tutorials/massbalance_perturbation.ipynb b/notebooks/tutorials/massbalance_perturbation.ipynb index 86f0e47a..b39e0720 100644 --- a/notebooks/tutorials/massbalance_perturbation.ipynb +++ b/notebooks/tutorials/massbalance_perturbation.ipynb @@ -52,7 +52,7 @@ "outputs": [], "source": [ "cfg.initialize(logging_level='WARNING')\n", - "cfg.PATHS['working_dir'] = utils.gettempdir(dirname='OGGM-calib-pertubation', reset=True)\n", + "cfg.PATHS['working_dir'] = utils.gettempdir(dirname='OGGM-calib-perturbation', reset=True)\n", "cfg.PARAMS['border'] = 80" ] }, @@ -60,7 +60,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We start from our two well known glaciers in the Austrian Alps, Kesselwandferner and Hintereisferner. But you can also choose any other other glacier, e.g. from [this list](https://github.com/OGGM/oggm-sample-data/blob/master/wgms/rgi_wgms_links_20220112.csv). " + "We start from our two well known glaciers in the Austrian Alps, Kesselwandferner and Hintereisferner. But you can also choose any other glacier, e.g. from [this list](https://github.com/OGGM/oggm-sample-data/blob/master/wgms/rgi_wgms_links_20220112.csv)." 
] }, { @@ -70,7 +70,8 @@ "outputs": [], "source": [ "# we start from preprocessing level 5\n", - "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/elev_bands/W5E5/'\n", + "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/elev_bands/W5E5/per_glacier/'\n", + "\n", "gdirs = workflow.init_glacier_directories(['RGI60-11.00787', 'RGI60-11.00897'], from_prepro_level=5, prepro_base_url=base_url)" ] }, @@ -144,7 +145,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Therefore, if you want to mess around with these parameters, \"all you have to do\" is to overwrite this file somehow, or create a new one and ask the mass balance model to read it instead of the default one. Let's do that:" + "Therefore, if you want to mess around with these parameters, \"all you have to do\" is to overwrite this file somehow. Or you create a new one and ask the mass balance model to read it instead of the default one. Let's do that:" ] }, { @@ -221,7 +222,7 @@ "metadata": {}, "outputs": [], "source": [ - "# Let' create another \"mass balance model\" which is like the default one but with another default parameter\n", + "# Let's create another \"mass balance model\" which is like the default one but with another default parameter\n", "from functools import partial\n", "PerturbedMassBalance = partial(massbalance.MonthlyTIModel, mb_params_filesuffix='_perturbed')\n", "\n", @@ -233,7 +234,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The partial function allows to create a function that is created by fixing a certain number of arguments of another function. Here we create a new \"class\" which is the same as the default original one, but by setting one parameters to another value. 
This proves very useful here, since we are just tricking OGGM into using the new one!\n", + "The partial function allows one to create a new function by fixing a certain number of arguments of another function. Here we create a new \"class\" which is the same as the default original one, but by setting one parameter to another value. This proves very useful here, since we are just tricking OGGM into using the new one!\n", "\n", "Let's check the outcome:" ] @@ -281,7 +282,7 @@ "source": [ "OK, so let's say we want to do this \"at scale\". We actually had such an assignment recently for the PROTECT SLR project. We were asked to do a number of perturbed simulations with parameters diverging from their default values, for example +1 temp_bias everywhere. But how to do this, knowing that each glacier has a different temp_bias? We can't simply set the bias to 1 everywhere (we need +=1).\n", "\n", - "For this I wrote a \"task\", originally outside of OGGM but that is now (v1.6.4) part of the main codebase. Let's have a look at it:\n", + "For this I wrote a \"task\", originally outside of OGGM but that is now (v1.6.3) part of the main codebase. Let's have a look at it:\n", "\n", "\n", "```python\n", @@ -362,7 +363,7 @@ "sample = sampler.random(n=30)\n", "\n", "def log_scale_value(value, low, high):\n", - " \"\"\"This is to sample multiplicative factors in log space to avoid assymetry (detail, but important).\"\"\"\n", + " \"\"\"This is to sample multiplicative factors in log space to avoid asymmetry (detail, but important).\"\"\"\n", " return 2**((np.log2(high) - np.log2(low))*value + np.log2(low))\n", "\n", "sample[:,0] = 4*sample[:,0] - 2 # DDF factor (melt_f): apply change [-2, +2] mm/(°C day)\n", @@ -436,7 +437,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The section above is nice, but works only for a problem setting where we don't ask the mass balance model to match observations. If we were to match obervations, things would be quite different! 
\n", + "The section above is nice, but works only for a problem setting where we don't ask the mass balance model to match observations. If we were to match observations, things would be quite different!\n", "\n", "To do this, we could define a new task very much like the above, but this time realizing a calibration step before writing its solution down.\n", "\n", diff --git a/notebooks/tutorials/merge_gcm_runs_and_visualize.ipynb b/notebooks/tutorials/merge_gcm_runs_and_visualize.ipynb index 44675c75..48a03fa2 100644 --- a/notebooks/tutorials/merge_gcm_runs_and_visualize.ipynb +++ b/notebooks/tutorials/merge_gcm_runs_and_visualize.ipynb @@ -19,7 +19,7 @@ "- How to use matplotlib or [seaborn](https://seaborn.pydata.org/index.html) to visualize the projections and plot different statistical estimates (median, interquartile range, mean, std, ...)\n", "- How to use [HoloViews](https://holoviews.org/) and [Panel](https://panel.holoviz.org/) to visualize the outcome. This part uses advanced plotting capabilities which are not necessary to understand the rest of the notebook.\n", "\n", - "This notebook is intended to explain the postprocessing steps, rather than the OGGM workflow itself. Therefore some code (especially conducting the GCM projection runs) does not have many explanations. If you are more interested in these steps you should check out the notebook [Run OGGM with GCM data](../10minutes/run_with_gcm.ipynb)." + "This notebook is intended to explain the postprocessing steps, rather than the OGGM workflow itself. Therefore, some code (especially conducting the GCM projection runs) does not have many explanations. If you are more interested in these steps you should check out the notebook [Run OGGM with GCM data](../10minutes/run_with_gcm.ipynb)." ] }, { @@ -35,7 +35,7 @@ "id": "3", "metadata": {}, "source": [ - "The first step is to conduct the GCM projection runs. We choose two different glaciers by their rgi_ids and conduct the GCM projections. 
Again if you do not understand all of the following code you should check out the [Run OGGM with GCM data](../10minutes/run_with_gcm.ipynb) notebook." + "The first step is to conduct the GCM projection runs. We choose two different glaciers by their rgi_ids and conduct the GCM projections. Again, if you do not understand all the following code you should check out the [Run OGGM with GCM data](../10minutes/run_with_gcm.ipynb) notebook." ] }, { @@ -136,7 +136,7 @@ " # we will pretend that 'mpi-esm1-2-hr_r1i1p1f1' is missing for `ssp370`\n", " # to later show how to deal with missing values, \n", " # if you want to use this\n", - " # code you can of course remove the \"if\" and just download all GCMs and SSPS \n", + " # code you can of course remove the \"if\" and just download all GCMs and SSPs\n", " if (ssp == 'ssp370') & (GCM=='mpi-esm1-2-hr_r1i1p1f1'):\n", " pass\n", " else:\n", @@ -157,7 +157,7 @@ "id": "10", "metadata": {}, "source": [ - "Here we defined, downloaded and processed all the GCM models and scenarios (here 5 GCMs and three SSPs). We pretend that one GCM is missing for one scenario (this is sometimes the case in for example CMIP5 GCMs, see e.g. this [table](https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng/gcm_table.html)). We deal with possible missing GCMs for specific SCENARIOs by by including a ```try```/```except``` in the code below and by taking care that the missing values are filled with `NaN` values (and stay `NaN` when doing sums). \n" + "Here we defined, downloaded and processed all the GCM models and scenarios (here 5 GCMs and three SSPs). We pretend that one GCM is missing for one scenario (this is sometimes the case, for example, in CMIP5 GCMs, see e.g. this [table](https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng/gcm_table.html)). 
We deal with possible missing GCMs for specific SCENARIOs by including a ```try```/```except``` in the code below and by taking care that the missing values are filled with `NaN` values (and stay `NaN` when doing sums).\n" ] }, { @@ -200,10 +200,10 @@ " );\n", "\n", " except FileNotFoundError:\n", - " # if a certain scenario is not available for a GCM we land here\n", - " # and we inidcate this by printing a message so the user knows\n", - " # this scenario is missing\n", - " # (in this case of course, the file actually is available, but we just pretend that it is not...)\n", + " # if a certain scenario is not available for a GCM we land here,\n", + " # and we indicate this by printing a message so the user knows\n", + " # this scenario is missing.\n", + " # In this case of course, the file actually is available, but we just pretend that it is not ...\n", " print('No ' + GCM +' run with scenario ' + scen + ' available!')" ] }, @@ -309,7 +309,7 @@ "id": "23", "metadata": {}, "source": [ - "Now we see that ```GCM``` was added to the Dimensions and all Data variables now use this new coordinate. The same can be done with ```SCENARIO```. As a standalone, this is not very useful. But if we add these coordinates to all datasets, it becomes quite handy for merging. Therefore we now open all datasets and add to each one the two coordinates ```GCM``` and ```SCENARIO```:" + "Now we see that ```GCM``` was added to the Dimensions and all Data variables now use this new coordinate. The same can be done with ```SCENARIO```. As a standalone, this is not very useful. But if we add these coordinates to all datasets, it becomes quite handy for merging. 
Therefore, we now open all datasets and add to each one the two coordinates ```GCM``` and ```SCENARIO```:" ] }, { @@ -346,14 +346,14 @@ " ds_tmp.coords['SCENARIO'].attrs['description'] = 'used scenario (here SSPs)'\n", " ds_tmp = ds_tmp.expand_dims(\"SCENARIO\") # add SSO as a dimension to all Data variables\n", "\n", - " ds_tmp.attrs['creation_date'] = creation_date # also add todays date for info\n", + " ds_tmp.attrs['creation_date'] = creation_date # also add today's date for info\n", " ds_all.append(ds_tmp) # add the dataset with extra coordinates to our final ds_all array\n", "\n", - " except RuntimeError as err: # here we land if an error occured\n", + " except RuntimeError as err: # here we land if an error occurred\n", " if str(err) == 'Found no valid glaciers!': # This is the error message if the GCM, SCENARIO (here ssp) combination does not exist\n", " print(f'No data for GCM {GCM} with SCENARIO {scen} found!') # print a descriptive message\n", " else:\n", - " raise RuntimeError(err) # if an other error occured we just raise it" + " raise RuntimeError(err) # if another error occurred we just raise it" ] }, { @@ -510,7 +510,7 @@ "id": "38", "metadata": {}, "source": [ - "And you see that now we are left with two dimensions ```(SCENARIO, time)```. This means we have calculated the median total volume for all different scenarios (here SSPs) and along the projection period. The mean, standard deviation (std) or percentiles can be also calculated in the same way as the median. Again, bare in mind, that for small sample sizes of GCM ensembles (around 15), which is almost always the case, it is often much more robust to use median and the interquartile range or other percentile ranges. " + "And you see that now we are left with two dimensions ```(SCENARIO, time)```. This means we have calculated the median total volume for all different scenarios (here SSPs) and along the projection period. 
The mean, standard deviation (std) or percentiles can be also calculated in the same way as the median. Again, bear in mind, that for small sample sizes of GCM ensembles (around 15), which is almost always the case, it is often much more robust to use median and the interquartile range or other percentile ranges." ] }, { @@ -557,7 +557,7 @@ "ds_total_volume_max = ds_total_volume.max(dim='GCM', keep_attrs=True,skipna=True)\n", "\n", "# Think twice if it is appropriate to compute a mean/std over your GCM sample, is it Gaussian distributed?\n", - "# Otherwise use instead median and percentiles or total range\n", + "# Otherwise, use instead median and percentiles or total range\n", "ds_total_volume_mean = ds_total_volume.mean(dim='GCM', keep_attrs=True,\n", " skipna=True)\n", "ds_total_volume_std = ds_total_volume.std(dim='GCM', keep_attrs=True,\n", @@ -577,7 +577,7 @@ "\n", "\n", "fig, axs = plt.subplots(1,2, figsize=(12,6), \n", - " sharey=True # we want to share the y axis betweeen the subplots\n", + " sharey=True # we want to share the y-axis between the subplots\n", " )\n", "for scenario in color_dict.keys():\n", " # get amount of GCMs per Scenario to add it to the legend:\n", @@ -836,7 +836,7 @@ " color = color_dict[ssp] \n", " mean_use = mean.loc[{'SCENARIO': ssp}] # read out the mean of the SSP to plot \n", " std_use = std.loc[{'SCENARIO': ssp}] # read out the std of the SSP to plot\n", - " time = mean.coords['time'] # get the time for the x axis\n", + " time = mean.coords['time'] # get the time for the x-axis\n", " \n", " return (hv.Area((time, # plot std as an area\n", " mean_use + std_use, # upper boundary of the area\n", @@ -889,7 +889,7 @@ " hmap = hv.HoloMap(kdims='Scenarios') # create a HoloMap\n", " mean, std = calculate_total_mean_and_std(ds, variable) # calculate mean and std for all SSPs using our previously defined function\n", " for ssp in all_scenario:\n", - " hmap[ssp] = get_single_curve(mean, std, ssp) # add a curve for each SSP to the HoloMap, 
using the SSP as a key (when you compare it do a dictonary)\n", + " hmap[ssp] = get_single_curve(mean, std, ssp) # add a curve for each SSP to the HoloMap, using the SSP as a key (when you compare it to a dictionary)\n", " return hmap.overlay().opts(title=variable) # create an overlay of all curves" ] }, diff --git a/notebooks/tutorials/numeric_solvers.ipynb b/notebooks/tutorials/numeric_solvers.ipynb index b18eb73d..b934ccf3 100644 --- a/notebooks/tutorials/numeric_solvers.ipynb +++ b/notebooks/tutorials/numeric_solvers.ipynb @@ -53,7 +53,7 @@ "gdir_eb = workflow.init_glacier_directories(rgi_ids, from_prepro_level=3, prepro_base_url=base_url_eb)[0]\n", "\n", "# load centerline representation\n", - "cfg.PATHS['working_dir'] = utils.gettempdir('OGGM_dynamic_solvers_centerliens', reset=True)\n", + "cfg.PATHS['working_dir'] = utils.gettempdir('OGGM_dynamic_solvers_centerlines', reset=True)\n", "base_url_cl = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/centerlines/W5E5/'\n", "gdir_cl = workflow.init_glacier_directories(rgi_ids, from_prepro_level=3, prepro_base_url=base_url_cl)[0]" ] @@ -109,7 +109,7 @@ "outputs": [], "source": [ "# run Flux-Based with elevation bands\n", - "start_time = time.time() # time it for later comparision\n", + "start_time = time.time() # time it for later comparison\n", "tasks.run_random_climate(gdir_eb,\n", " evolution_model=FluxBasedModel,\n", " nyears=300,\n", @@ -130,7 +130,7 @@ "id": "8", "metadata": {}, "source": [ - "Whereas the Semi-Impicit model only works for single trapezoidal flowlines (elevation bands)." + "Whereas the Semi-Implicit model only works for single trapezoidal flowlines (elevation bands)." 
] }, { @@ -164,7 +164,7 @@ "outputs": [], "source": [ "# run Semi-Implicit with elevation bands\n", - "start_time = time.time() # time it for later comparision\n", + "start_time = time.time() # time it for later comparison\n", "tasks.run_random_climate(gdir_eb,\n", " evolution_model=SemiImplicitModel,\n", " nyears=300,\n", @@ -315,7 +315,7 @@ "id": "22", "metadata": {}, "source": [ - "In OGGM before v1.6, with the FluxBasedModel, the shape of this downstream line was defined by fitting a parabola to the valley walls. However, for the SemiImplicitModel we had to change the shape to a trapezoidal, eventhough a parabola approximates a mountain valley arguably better. We checked the influence of this change on advancing glaciers and found negligibly small differences in the volume on a regional scale. There might be some differences in the area.\n", + "In OGGM before v1.6, with the FluxBasedModel, the shape of this downstream line was defined by fitting a parabola to the valley walls. However, for the SemiImplicitModel we had to change the shape to a trapezoidal, even though a parabola approximates a mountain valley arguably better. We checked the influence of this change on advancing glaciers and found negligibly small differences in the volume on a regional scale. 
There might be some differences in the area.\n", "\n", "By default, we use a trapezoidal bed shape for the downstream line:" ] diff --git a/notebooks/tutorials/observed_thickness_with_dynamic_spinup.ipynb b/notebooks/tutorials/observed_thickness_with_dynamic_spinup.ipynb index fed79e03..e5c1b129 100644 --- a/notebooks/tutorials/observed_thickness_with_dynamic_spinup.ipynb +++ b/notebooks/tutorials/observed_thickness_with_dynamic_spinup.ipynb @@ -152,7 +152,7 @@ "id": "9", "metadata": {}, "source": [ - "## Dynamically calibrate and initialise todays glacier state, using flowlines from thickness observations" + "## Dynamically calibrate and initialise today's glacier state, using flowlines from thickness observations" ] }, { @@ -304,7 +304,7 @@ "id": "14", "metadata": {}, "source": [ - "In all three simulations, we observe that both the area and geodetic mass balance align within the target boundaries. Nevertheless, the volume is only coincidentally matched for the consensus estimate. This discrepancy arises because the current deformation parameter was calibrated during the inversion for OGGM default initialization, which incorporates an equilibrium assumption (see [documention](https://docs.oggm.org/en/stable/inversion.html) for more information). However, when defining the glacier bed from thickness observations, it becomes possible/necessary to calibrate the deformation parameter to match the observed volume during initialization. Although there is currently no implemented function for this, the following code should provide an idea of how it can be achieved:" + "In all three simulations, we observe that both the area and geodetic mass balance align within the target boundaries. Nevertheless, the volume is only coincidentally matched for the consensus estimate. 
This discrepancy arises because the current deformation parameter was calibrated during the inversion for OGGM default initialization, which incorporates an equilibrium assumption (see [documentation](https://docs.oggm.org/en/stable/inversion.html) for more information). However, when defining the glacier bed from thickness observations, it becomes possible/necessary to calibrate the deformation parameter to match the observed volume during initialization. Although there is currently no implemented function for this, the following code should provide an idea of how it can be achieved:" ] }, { @@ -316,7 +316,7 @@ }, "outputs": [], "source": [ - "# create a other flowline for this, for later comparision\n", + "# create another flowline for this, for later comparison\n", "tasks.init_present_time_glacier(gdir, filesuffix='_millan_adapted',\n", " use_binned_thickness_data='millan_ice_thickness')\n", "\n", diff --git a/notebooks/tutorials/oggm_shop.ipynb b/notebooks/tutorials/oggm_shop.ipynb index 696dc9ea..cf5b1b88 100644 --- a/notebooks/tutorials/oggm_shop.ipynb +++ b/notebooks/tutorials/oggm_shop.ipynb @@ -150,7 +150,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We use a temporary directory for this example, but in practice you will set this working directory yourself (for example: `/home/john/OGGM_output`. The size of this directory will depend on how many glaciers you'll simulate!\n", + "We use a temporary directory for this example, but in practice you will set this working directory yourself (for example: `/home/john/OGGM_output`). The size of this directory will depend on how many glaciers you'll simulate!\n", "\n", "You can create a persistent OGGM working directory at a specific path via `path = utils.mkdir(path)`. **Beware!** If you use `reset=True` in `utils.mkdir`, ALL DATA in this folder will be deleted!" ] @@ -219,7 +219,7 @@ "prepro_border = 10\n", "# Degree of processing level. 
This is OGGM specific and for the shop 1 is the one you want\n", "from_prepro_level = 1\n", - "# URL of the preprocessed Gdirs\n", + "# URL of the preprocessed gdirs\n", "base_url = 'https://cluster.klima.uni-bremen.de/data/gdirs/dems_v2/default'\n", "\n", "gdirs = workflow.init_glacier_directories(rgi_ids,\n", @@ -483,7 +483,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "This is an example on how to extract velocity fields from the [ITS_live](https://its-live.jpl.nasa.gov/) Regional Glacier and Ice Sheet Surface Velocities Mosaic ([Gardner, A. et al 2019](http://its-live-data.jpl.nasa.gov.s3.amazonaws.com/documentation/ITS_LIVE-Regional-Glacier-and-Ice-Sheet-Surface-Velocities.pdf)) at 120 m resolution and reproject this data to the OGGM-glacier grid. This only works where ITS-live data is available! (not in the Alps).\n", + "This is an example of how to extract velocity fields from the [ITS_live](https://its-live.jpl.nasa.gov/) Regional Glacier and Ice Sheet Surface Velocities Mosaic ([Gardner, A. et al., 2019](https://its-live-data.jpl.nasa.gov.s3.amazonaws.com/documentation/ITS_LIVE-Regional-Glacier-and-Ice-Sheet-Surface-Velocities.pdf)) at 120 m resolution and reproject this data to the OGGM-glacier grid. This only works where ITS-live data is available! 
(not in the Alps).\n", "\n", "\n", "The data source used is https://its-live.jpl.nasa.gov/#data\n", @@ -500,7 +500,7 @@ }, "outputs": [], "source": [ - "# this will download severals large dataset (2 times a few 100s of MB)\n", + "# this will download several large datasets (2 times a few 100s of MB)\n", "from oggm.shop import its_live, rgitopo\n", "workflow.execute_entity_task(rgitopo.select_dem_from_dir, gdirs, dem_source='COPDEM90', keep_dem_folders=True);\n", "workflow.execute_entity_task(tasks.glacier_masks, gdirs);\n", @@ -515,7 +515,7 @@ "\n", "The velocity components (**vx**, **vy**) are added to the `gridded_data` nc file stored on each glacier directory.\n", "\n", - "According to the [ITS_LIVE documentation](http://its-live-data.jpl.nasa.gov.s3.amazonaws.com/documentation/ITS_LIVE-Regional-Glacier-and-Ice-Sheet-Surface-Velocities.pdf) velocities are given in ground units (i.e. absolute velocities). We then use bilinear interpolation to reproject the velocities to the local glacier map by re-projecting the vector distances.\n", + "According to the [ITS_LIVE documentation](https://its-live-data.jpl.nasa.gov.s3.amazonaws.com/documentation/ITS_LIVE-Regional-Glacier-and-Ice-Sheet-Surface-Velocities.pdf) velocities are given in ground units (i.e. absolute velocities). We then use bilinear interpolation to reproject the velocities to the local glacier map by re-projecting the vector distances.\n", "\n", "By specifying `add_error=True`, we also reproject and scale the error for each component (**evx**, **evy**).\n", "\n", @@ -720,7 +720,7 @@ "source": [ "The `ds.glacier_mask == 1` command will remove the data outside of the glacier outline.\n", "\n", - "In addition, for Columbia glacier the dataset has a few spurious values at the calving front, which is now well inside the RGI outlines. 
Lets just filter them for a nicer plot:" + "In addition, for Columbia glacier the dataset has a few spurious values at the calving front, which is now well inside the RGI outlines. Let's just filter them for a nicer plot:" ] }, { diff --git a/notebooks/tutorials/plot_mass_balance.ipynb b/notebooks/tutorials/plot_mass_balance.ipynb index f0de095a..1b895869 100644 --- a/notebooks/tutorials/plot_mass_balance.ipynb +++ b/notebooks/tutorials/plot_mass_balance.ipynb @@ -425,7 +425,7 @@ "metadata": {}, "source": [ "A few comments:\n", - "- the systematic difference between WGMS and OGGM is due to the calibration of the model, which is done with fixed geometry (the bias is zero over the entire calibration period but changes sign over the 50 year period, see plot above)\n", + "- the systematic difference between WGMS and OGGM is due to the calibration of the model, which is done with fixed geometry (the bias is zero over the entire calibration period but changes sign over the 50-year period, see plot above)\n", "- the difference between OGGM dynamics and fixed geometry (so small that they are barely visible for this short time period) is due to:\n", " - the changes in geometry during the simulation time (i.e. this difference grows with time)\n", " - melt at the tongue which might be larger than the glacier thickness is not accounted for in the fixed geometry data (these are assumed to be small)\n", @@ -448,7 +448,7 @@ "source": [ "At time of writing, we do not store the mass-balance profiles during a transient run, although we could (and [should](https://github.com/OGGM/oggm/issues/1022)). To get the profiles with a dynamically changing geometry, there is no other way than either get the mass-balance during the simulation, or to fetch the model output and re-compute the mass-balance retroactively. 
\n", "\n", - "**Note: the mass-balance profiles themselves of course are not really affected by the changing geometry - what changes from one year to another is the altitude at which the model will compute the mass-balance.** In other terms, if you are interested in the mass-balance profiles you are better of to use the fixed geometries approaches explained at the beginning of the notebook." + "**Note: the mass-balance profiles themselves of course are not really affected by the changing geometry - what changes from one year to another is the altitude at which the model will compute the mass-balance.** In other terms, if you are interested in the mass-balance profiles you are better off to use the fixed geometries approaches explained at the beginning of the notebook." ] }, { @@ -506,7 +506,7 @@ "source": [ "## Equilibrium Line Altitude \n", "\n", - "The second part of this notebook shows how you can compute the Equilbrium Line Altitude (ELA, the altitude at which the mass-balance is equal to 0) with OGGM.\n", + "The second part of this notebook shows how you can compute the Equilibrium Line Altitude (ELA, the altitude at which the mass-balance is equal to 0) with OGGM.\n", "\n", "As the ELA in OGGM only depends on the [mass balance model](https://docs.oggm.org/en/stable/mass-balance.html) itself (not on glacier geometry), there is no need to do a full model run to collect these values. This is also the reason why the ELA is no longer a part of the model run output, but is instead a diagnostic variable that can be computed separately since OGGM v1.6." ] @@ -577,7 +577,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "How would the ELA look like if the climate is 1° warmer? Lets have a look at one of the glaciers:" + "How would the ELA look like if the climate is 1° warmer? 
Let's have a look at one of the glaciers:" ] }, { @@ -632,7 +632,7 @@ "source": [ "# we only look at the first glacier\n", "plt.plot(ela_df[[ela_df.columns[0]]].mean(axis=1), label='default climate')\n", - "plt.plot(ela_df_t1.mean(axis=1), label=('default climate +1℃'))\n", + "plt.plot(ela_df_t1.mean(axis=1), label='default climate +1℃')\n", "plt.xlabel('year CE'); plt.ylabel('ELA [m]'); plt.legend();" ] }, @@ -642,7 +642,7 @@ "source": [ "By using the `precipitation_factor` keyword, you can change the precipitation. Feel free to try that by yourself! At the moment, `gdirs[0].read_json('mb_calib')['prcp_fac']` is applied. \n", "\n", - "Lets look at a longer timeseries, for one glacier only: " + "Let's look at a longer timeseries, for one glacier only:" ] }, { @@ -660,7 +660,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The ELA can have a high year-to-year variability. Therefore we plot in addition to the regular ELA timeseries, the 5-year moving average. " + "The ELA can have a high year-to-year variability. Therefore, we plot, in addition to the regular ELA timeseries, the 5-year moving average." ] }, { @@ -678,7 +678,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "In case you're only intrested in specific years there is an option to just compute the ELA in those years. There is actually no need to save this data. Therefore we now just compute the ELA. " + "In case you're only interested in specific years there is an option to just compute the ELA in those years. There is actually no need to save this data. Therefore, we now just compute the ELA." ] }, { @@ -696,7 +696,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "See in this case you can use any year as long as you have climate available for that year. 
However, for plotting purposes it might be worth to sort the data, otherwise the following happens ;)" + "See in this case you can use any year as long as you have climate available for that year. However, for plotting purposes it might be worth sorting the data, otherwise the following happens ;)" ] }, { @@ -722,7 +722,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "By now we have addressed most of the keyword agruments of the compile_ela and compute_ela functions. To use different climate than the default climate, you can make use of the `climate_filename` and `climate_input_filesuffix` keywords. In case you're not familiar with those yet, please check out the [run_with_gcm](../10minutes/run_with_gcm.ipynb) notebook." + "By now we have addressed most of the keyword arguments of the compile_ela and compute_ela functions. To use a different climate than the default climate, you can make use of the `climate_filename` and `climate_input_filesuffix` keywords. In case you're not familiar with those yet, please check out the [run_with_gcm](../10minutes/run_with_gcm.ipynb) notebook." ] }, { @@ -736,7 +736,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Like the ELA, the Accumulation Area Ratio (AAR) is a diagostic variable. The AAR is currently not an output \n", + "Like the ELA, the Accumulation Area Ratio (AAR) is a diagnostic variable. The AAR is currently not an output\n", "variable in OGGM, but it can easily be computed. It is even a part of the [glacier simulator](https://bokeh.oggm.org/simulator/app) and its [documentation](https://edu.oggm.org/en/latest/simulator.html#aar-accumulation-area-ratio). Below we give an example of how it can be computed after a model run. From the ELA \n", "computation, we already know the height where the mass balance is equal to zero. Now we need to compute the area \n", "of the glacier that is located above that line. Let's start with a **static geometry** glacier with multiple flowlines." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "We first search the points along the flowlines that are above the ELA in a given year. Lets use Hintereisferner. 
\n", + "We first search the points along the flowlines that are above the ELA in a given year. Let's use Hintereisferner.\n", "\n", "We use a `True`, `False` array saying which part is above the ELA and sum the corresponding area at each timestep: " ] @@ -800,7 +800,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Now that we have looked at the static case, let's to the same for glaciers with changing surface height:" + "Now that we have looked at the static case, let's do the same for glaciers with changing surface height:" ] }, { @@ -813,7 +813,7 @@ "source": [ "tot_area = 0\n", "tot_area_above = 0\n", "\n", - "ela_ds = xr.DataArray(ela_df[rgi_id], dims=('time'))\n", + "ela_ds = xr.DataArray(ela_df[rgi_id], dims='time')\n", "\n", "for fn in np.arange(len(fls)):\n", " with xr.open_dataset(gdir.get_filepath('fl_diagnostics'), group='fl_' + str(fn)) as ds:\n", @@ -841,7 +841,7 @@ "metadata": {}, "source": [ "You can see in the plot that the difference between the AAR calculated from the dynamic glacier and the one of the \n", - "static glacier is quite similar. Over time you can expect that this difference becomes larger. Feel free to play \n", + "static glacier is quite small. Over time, you can expect that this difference becomes larger. Feel free to play\n", "around with that. " ] }, diff --git a/notebooks/tutorials/preprocessing_errors.ipynb b/notebooks/tutorials/preprocessing_errors.ipynb index d7de409c..c96687a0 100644 --- a/notebooks/tutorials/preprocessing_errors.ipynb +++ b/notebooks/tutorials/preprocessing_errors.ipynb @@ -197,9 +197,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "*much less errors occur when using elevation band flowlines than when using centerlines!*\n", + "*far fewer errors occur when using elevation band flowlines than when using centerlines!*\n", "\n", - "-> Reason: less *glacier_mask* errors! " + "-> Reason: fewer *glacier_mask* errors!" 
] }, { @@ -208,7 +208,7 @@ "metadata": {}, "outputs": [], "source": [ - "# you can check out the different error messages with that\n", + "# you can check out the different error messages with that,\n", "# but we only output the first 20 here\n", "df_elev.error_msg.dropna().unique()[:20]" ] @@ -274,7 +274,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "*more than three times less errors from the climate tasks occur when using ERA5 than when using CRU* !" + "*more than three times fewer errors from the climate tasks occur when using ERA5 than when using CRU* !" ] }, { @@ -312,7 +312,7 @@ "source": [ "## What's next?\n", "\n", - "- A more detailed analysis about the type, amount and relative failing glacier area (in total and per RGI region) can be found in [this error analysis jupyter notebook](https://nbviewer.org/urls/cluster.klima.uni-bremen.de/~lschuster/error_analysis/error_analysis_v1.ipynb?flush_cache=true). It also includes an error analysis for different [MB calibration and climate quality check methods](https://nbviewer.org/urls/cluster.klima.uni-bremen.de/~lschuster/error_analysis/error_analysis_v1.ipynb?flush_cache=true#Analysis-for-Level-5-pre-processing-directories!).\n", + "- A more detailed analysis of the type, amount and relative failing glacier area (in total and per RGI region) can be found in [this error analysis jupyter notebook](https://nbviewer.org/urls/cluster.klima.uni-bremen.de/~lschuster/error_analysis/error_analysis_v1.ipynb?flush_cache=true). 
It also includes an error analysis for different [MB calibration and climate quality check methods](https://nbviewer.org/urls/cluster.klima.uni-bremen.de/~lschuster/error_analysis/error_analysis_v1.ipynb?flush_cache=true#Analysis-for-Level-5-pre-processing-directories!).\n", "- If you are interested in how the “common” non-failing glaciers differ in terms of historical volume change, total mass change and specific mass balance between different pre-processed glacier directories, you can check out [this jupyter notebook](https://nbviewer.org/urls/cluster.klima.uni-bremen.de/~lschuster/error_analysis/working_glacier_gdirs_comparison.ipynb?flush_cache=true).\n", "- return to the [OGGM documentation](https://docs.oggm.org)\n", "- back to the [table of contents](../welcome.ipynb)" diff --git a/notebooks/tutorials/rgitopo_rgi6.ipynb b/notebooks/tutorials/rgitopo_rgi6.ipynb index 34a6df42..92984436 100644 --- a/notebooks/tutorials/rgitopo_rgi6.ipynb +++ b/notebooks/tutorials/rgitopo_rgi6.ipynb @@ -47,7 +47,7 @@ }, "outputs": [], "source": [ - "# The RGI Id of the glaciers you want to look for\n", + "# The RGI-id of the glaciers you want to look for\n", "# Use the original shapefiles or the GLIMS viewer to check for the ID: https://www.glims.org/maps/glims\n", "rgi_id = 'RGI60-11.00897'\n", "\n", @@ -601,7 +601,7 @@ "l1, l2 = (utils.nicenumber(df.min().min(), binsize=50, lower=True), \n", " utils.nicenumber(df.max().max(), binsize=50, lower=False))\n", "\n", - "def plot_unity(xdata, ydata, **kwargs):\n", + "def plot_unity():\n", " points = np.linspace(l1, l2, 100)\n", " plt.gca().plot(points, points, color='k', marker=None,\n", " linestyle=':', linewidth=3.0)\n", diff --git a/notebooks/tutorials/rgitopo_rgi7.ipynb b/notebooks/tutorials/rgitopo_rgi7.ipynb index f102c525..9204dbc4 100644 --- a/notebooks/tutorials/rgitopo_rgi7.ipynb +++ b/notebooks/tutorials/rgitopo_rgi7.ipynb @@ -47,7 +47,7 @@ }, "outputs": [], "source": [ - "# The RGI Id of the glaciers you want to look 
for\n", + "# The RGI-id of the glaciers you want to look for\n", "# Use the original shapefiles or the GLIMS viewer to check for the ID: https://www.glims.org/maps/glims\n", "rgi_id = 'RGI2000-v7.0-G-01-06486' # Denali\n", "\n", @@ -613,7 +613,7 @@ "l1, l2 = (utils.nicenumber(df.min().min(), binsize=50, lower=True), \n", " utils.nicenumber(df.max().max(), binsize=50, lower=False))\n", "\n", - "def plot_unity(xdata, ydata, **kwargs):\n", + "def plot_unity():\n", " points = np.linspace(l1, l2, 100)\n", " plt.gca().plot(points, points, color='k', marker=None,\n", " linestyle=':', linewidth=3.0)\n", diff --git a/notebooks/tutorials/run_with_a_spinup_and_gcm_data.ipynb b/notebooks/tutorials/run_with_a_spinup_and_gcm_data.ipynb index 4835542d..0b13a047 100644 --- a/notebooks/tutorials/run_with_a_spinup_and_gcm_data.ipynb +++ b/notebooks/tutorials/run_with_a_spinup_and_gcm_data.ipynb @@ -74,7 +74,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Here the paths to the CESM-LME files are set. (The demo files that are being used in this example don't contain the whole last millennium, neither do they have the global coverage that they original files have. These demo files have been made for test purposes and to reduce the time it takes to run the example. If you use the demo files for a glacier outside the domain, you won't get an error. Instead the climate of the nearest point to the glacier that is available in the demo files will be used, which could be thousands of kilometers away.)" + "Here the paths to the CESM-LME files are set. (The demo files that are being used in this example don't contain the whole last millennium, nor do they have the global coverage that the original files have. These demo files have been made for test purposes and to reduce the time it takes to run the example. If you use the demo files for a glacier outside the domain, you won't get an error. Instead, the climate of the nearest point to the glacier that is available in the demo files will be used, which could be thousands of kilometers away.)" ] }, { diff --git a/notebooks/tutorials/store_and_compress_glacierdirs.ipynb b/notebooks/tutorials/store_and_compress_glacierdirs.ipynb index 252c646a..1096df39 100644 --- a/notebooks/tutorials/store_and_compress_glacierdirs.ipynb +++ b/notebooks/tutorials/store_and_compress_glacierdirs.ipynb @@ -84,7 +84,7 @@ "metadata": {}, "source": [ "Note that in OGGM v1.6 you have to explicitly indicate the url from where you want to start from, \n", - "we will use here a preprocessed directory with elevation band flowlines and used W5E5 for calibration. In the future, [other preprocessed directories might exist](https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/) and you can use them by changing the base_url. " + "here we will use a preprocessed directory with elevation band flowlines that used W5E5 for calibration. In the future, [other preprocessed directories might exist](https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/), and you can use them by changing the base_url."
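The bundle-then-reconstruct idea that `store_and_compress_glacierdirs.ipynb` revolves around can be illustrated with the standard library alone. A minimal sketch with hypothetical paths and file names — this is not OGGM's actual tarring utility, just the underlying mechanism:

```python
import os
import shutil
import tarfile
import tempfile

# A toy "glacier directory" containing one file (hypothetical layout)
root = tempfile.mkdtemp()
gdir_path = os.path.join(root, 'RGI60-11.00897')
os.makedirs(gdir_path)
with open(os.path.join(gdir_path, 'diagnostics.json'), 'w') as f:
    f.write('{}')

# Bundle the directory into a .tar.gz next to it, then delete the original
tar_path = gdir_path + '.tar.gz'
with tarfile.open(tar_path, 'w:gz') as tar:
    tar.add(gdir_path, arcname=os.path.basename(gdir_path))
shutil.rmtree(gdir_path)

# Later, the directory can be reconstructed from the tar file on demand
with tarfile.open(tar_path, 'r:gz') as tar:
    tar.extractall(root)
restored = os.path.join(gdir_path, 'diagnostics.json')
```

Because the tar file keeps the same relative layout, a tool that knows the naming convention can restore any single directory without unpacking everything else.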
] }, { @@ -102,7 +102,7 @@ }, "outputs": [], "source": [ - "def file_tree_print(prepro_dir=False):\n", + "def file_tree_print():\n", " # Just a utility function to show the dir structure and selected files\n", " print(\"cfg.PATHS['working_dir']/\")\n", " tab = ' '\n", @@ -267,7 +267,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Most of the time, you will actually want to delete the orginal directories because they are not needed for this run anymore:" + "Most of the time, you will actually want to delete the original directories because they are not needed for this run anymore:" ] }, { @@ -288,7 +288,7 @@ "source": [ "Now the original directories are gone, and the `gdirs` objects are useless (attempting to do anything with them will lead to an error).\n", "\n", - "Since they are already available in the correct file structure, however, OGGM will know how to reconstruct them from the tar files if asked to:" + "Since they are already available in the correct file structure, however, OGGM will know how to reconstruct them from the tar files when asked to:" ] }, { @@ -356,7 +356,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Now, the glacier directories are bundled in a file at a higher level even. This is even more convenient to move around (less files), but is not a mandatory step. The nice part about this bundling is that you can still select individual glaciers, as we will see in the next section. In the meantime, you can do: " + "Now, the glacier directories are bundled in a single file at an even higher level. This is even more convenient to move around (fewer files), but is not a mandatory step. The nice part about this bundling is that you can still select individual glaciers, as we will see in the next section. In the meantime, you can do:" ] }, { @@ -405,7 +405,7 @@ "if os.path.exists(PREPRO_DIR):\n", " shutil.rmtree(PREPRO_DIR)\n", "\n", - "# Lets start from a clean state\n", + "# Let's start from a clean state\n", "# Beware! 
If you use `reset=True` in `utils.mkdir`, ALL DATA in this folder will be deleted! Use with caution!\n", "utils.mkdir(WORKING_DIR, reset=True)\n", "gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=3, prepro_base_url=base_url)\n", @@ -433,7 +433,7 @@ }, "outputs": [], "source": [ - "# Lets start from a clean state\n", + "# Let's start from a clean state\n", "utils.mkdir(WORKING_DIR, reset=True)\n", "# This needs https://github.com/OGGM/oggm/pull/1158 to work\n", "# It uses the files you prepared beforehand to start the dirs\n", diff --git a/notebooks/tutorials/use_your_own_inventory.ipynb b/notebooks/tutorials/use_your_own_inventory.ipynb index 14898f05..9fef23d0 100644 --- a/notebooks/tutorials/use_your_own_inventory.ipynb +++ b/notebooks/tutorials/use_your_own_inventory.ipynb @@ -4,7 +4,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Using your our own glacier inventory with OGGM" + "# Using your own glacier inventory with OGGM" ] }, { @@ -91,7 +91,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Shapefiles are best read an manipulated with [geopandas](http://geopandas.org/) in python (see also our [working_with_rgi](../tutorials/working_with_rgi.ipynb) tutorial):" + "Shapefiles are best read and manipulated with [geopandas](https://geopandas.org/) in python (see also our [working_with_rgi](../tutorials/working_with_rgi.ipynb) tutorial):" ] }, { @@ -372,7 +372,7 @@ "# This is important for centerlines - if you have them\n", "# cfg.set_intersects_db(hef_intersects_path)\n", "\n", - "# This is to avoid a download in the tutorial, you dont' need do this at home\n", + "# This is to avoid a download in the tutorial, you don't need to do this at home\n", "cfg.PATHS['dem_file'] = utils.get_demo_file('hef_srtm.tif')\n", "\n", "# This is important again - standard OGGM \n", @@ -489,7 +489,7 @@ "metadata": {}, "source": [ "The upper glacier map is a zoom version of the plot below. \n", - "They share the same glaciers terminus. 
Therefore, to estimate a calving flux for these glaciers we need them connected. " + "They share the same terminus. Therefore, to estimate a calving flux for these glaciers we need them connected." ] }, { @@ -606,7 +606,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "For simplicity, we do not compute the intersects in this case: **however, we recommend you do do so (see above). In all cases, do not use the intersects provided automatically with OGGM when using custom inventories, as they are likely to be wrong.**" + "For simplicity, we do not compute the intersects in this case: **however, we recommend you do so (see above). In all cases, do not use the intersects provided automatically with OGGM when using custom inventories, as they are likely to be wrong.**" ] }, { @@ -634,7 +634,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Here we are not able to use the [Pre-processed directories](https://docs.oggm.org/en/stable/input-data.html#pre-processed-directories) and the respective [Processing levels](https://docs.oggm.org/en/stable/input-data.html#preprodir) that OGGM provides for a easy run set up. We can't use this workflow simply because we have a different beginning than OGGM, we have a different RGI! We just need to type more and run all the model task one by one:" + "Here we are not able to use the [Pre-processed directories](https://docs.oggm.org/en/stable/input-data.html#pre-processed-directories) and the respective [Processing levels](https://docs.oggm.org/en/stable/input-data.html#preprodir) that OGGM provides for an easy run setup. We can't use this workflow simply because we have a different beginning than OGGM, we have a different RGI! We just need to type more and run all the model tasks one by one:" ] }, { @@ -703,7 +703,7 @@ "source": [ "OGGM (since version 1.5.3) now offers a function (`utils.cook_rgidf()`) to make it easier of using a non-RGI glacier inventory in OGGM. Now, let's use a non-RGI glacier inventory from the second Chinese glacier inventory (CGI2, https://doi.org/10.3189/2015JoG14J209) to show how it works.\n", "\n", - "**New in OGGM 1.6.1**: If you using outlines consisting of multi polygons and plan to use \"elevation band\" flowlines (see [10 minutes to... \"elevation band\" and \"centerline\" flowlines](../tutorials/elevation_bands_vs_centerlines.ipynb)) you can keep the complete multi polygon area for your simulations by setting ```cfg.PARAMS['keep_multipolygon_outlines'] = True```. That can be useful when working with local glacier inventories with multiple outlines (e.g. older outline single polygon but newer outline multi polygon for the same glacier)." + "**New in OGGM 1.6.1**: If you are using outlines consisting of multi polygons and plan to use \"elevation band\" flowlines (see [10 minutes to... \"elevation band\" and \"centerline\" flowlines](../tutorials/elevation_bands_vs_centerlines.ipynb)) you can keep the complete multi polygon area for your simulations by setting ```cfg.PARAMS['keep_multipolygon_outlines'] = True```. That can be useful when working with local glacier inventories with multiple outlines (e.g. older outline single polygon but newer outline multi polygon for the same glacier)." ] }, { @@ -740,7 +740,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "In this case, we fake all of the columns values except for `geometry`. With this `rgidf_simple`, we can handle most of the OGGM procedure after set `cfg.PARAMS['use_rgi_area'] = False`. Let's have a try:" + "In this case, we fake all the column values except for `geometry`. With this `rgidf_simple`, we can handle most of the OGGM procedure after setting `cfg.PARAMS['use_rgi_area'] = False`. Let's have a try:" ] }, { @@ -807,8 +807,8 @@ "source": [ "### Note:\n", "\n", - "Despite that `cook_rgidf()` can handle most of the cases for OGGM, there are some limitations. Here, we try to point out some of cases in which the `rgidf` sourced from `cook_rgidf()` might get you in trouble:\n", + "Although `cook_rgidf()` can handle most of the cases for OGGM, there are some limitations. Here, we try to point out some of the cases in which the `rgidf` sourced from `cook_rgidf()` might get you in trouble:\n", - "- in `cook_rgidf()`, we assign the glacier form with '0' (Glacier) for all of the glaciers in the original data. OGGM assigns different parameters for glaciers (form '0') and ice caps (form '1').\n", + "- in `cook_rgidf()`, we assign glacier form '0' (Glacier) to all the glaciers in the original data. OGGM assigns different parameters for glaciers (form '0') and ice caps (form '1').\n", " `termtype` was also assign as '0' which means 'land-terminating'. Here again, OGGM treats 'Marine-terminating' glaciers differently (see ['Frontal ablation'](https://docs.oggm.org/en/stable/frontal-ablation.html)). \n", "\n", "For these kinds of attribution, there is nothing we can do automatically. Users need to assign the right values according the actual condition of their glaciers, if the attribution is important to their use cases." ] }, diff --git a/notebooks/tutorials/where_are_the_flowlines.ipynb b/notebooks/tutorials/where_are_the_flowlines.ipynb index 539e270a..7f4cc4d9 100644 --- a/notebooks/tutorials/where_are_the_flowlines.ipynb +++ b/notebooks/tutorials/where_are_the_flowlines.ipynb @@ -477,22 +477,6 @@ "The first method to locate the terminus uses fancy pandas functions but may be more cryptic for less experienced pandas users:" ] }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "f." 
- ] - }, { "cell_type": "code", "execution_count": null, @@ -518,7 +502,7 @@ "metadata": {}, "outputs": [], "source": [ - "# Plot them on a google image - you need an API key for this\n", + "# Plot them on a Google Maps image - you need an API key for this\n", "# api_key = ''\n", "# from motionless import DecoratedMap, LatLonMarker\n", "# dmap = DecoratedMap(maptype='satellite', key=api_key)\n", diff --git a/notebooks/tutorials/working_with_rgi.ipynb b/notebooks/tutorials/working_with_rgi.ipynb index 3f21257f..e897cb25 100644 --- a/notebooks/tutorials/working_with_rgi.ipynb +++ b/notebooks/tutorials/working_with_rgi.ipynb @@ -83,7 +83,7 @@ "metadata": {}, "source": [ "![rgi-map](https://www.researchgate.net/profile/Tobias_Bolch/publication/264125572/figure/fig1/AS:295867740377088@1447551774164/First-order-regions-of-the-RGI-with-glaciers-shown-in-red-Region-numbers-are-those-of.png)\n", - "*Source: [the RGI consortium](http://www.glims.org/RGI/randolph60.html)*" + "*Source: [the RGI consortium](https://www.glims.org/RGI/randolph60.html)*" ] }, { @@ -101,7 +101,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The RGI region files are [shapefiles](https://en.wikipedia.org/wiki/Shapefile), a vector format commonly used in GIS applications. The library of choice to read shapefiles in python is [geopandas](http://geopandas.org/):" + "The RGI region files are [shapefiles](https://en.wikipedia.org/wiki/Shapefile), a vector format commonly used in GIS applications. 
The library of choice to read shapefiles in python is [geopandas](https://geopandas.org/):" ] }, { diff --git a/notebooks/welcome.ipynb b/notebooks/welcome.ipynb index 7677680e..91ed0741 100644 --- a/notebooks/welcome.ipynb +++ b/notebooks/welcome.ipynb @@ -106,7 +106,7 @@ "## OGGM shop and additional data\n", "\n", "- [OGGM-Shop and Glacier Directories in OGGM](tutorials/oggm_shop.ipynb)\n", - "- [Using your our own glacier inventory with OGGM](tutorials/use_your_own_inventory.ipynb)\n", + "- [Using your own glacier inventory with OGGM](tutorials/use_your_own_inventory.ipynb)\n", "- [Ingest gridded products such as ice velocity into OGGM](tutorials/ingest_gridded_data_on_flowlines.ipynb)\n", "- [Create local topography maps from different DEM sources with OGGM](tutorials/dem_sources.ipynb)\n", "- [Compare different DEMs for individual glaciers: RGI-TOPO for RGI v6.0](tutorials/rgitopo_rgi6.ipynb)\n", From d821ada120954a4f6eb33c6149948f73882f9798 Mon Sep 17 00:00:00 2001 From: lilianschuster Date: Thu, 26 Feb 2026 19:11:02 +0100 Subject: [PATCH 2/2] updated gdirs to 2025.6 --- _config.yml | 6 +- notebooks/10minutes/machine_learning.ipynb | 95 ++++++++++++++++--- .../10minutes/preprocessed_directories.ipynb | 2 +- .../construction/area_length_filter.ipynb | 16 ++-- .../tutorials/building_the_prepro_gdirs.ipynb | 27 ++++-- .../tutorials/centerlines_to_shape.ipynb | 19 ++-- notebooks/tutorials/deal_with_errors.ipynb | 4 +- notebooks/tutorials/dem_sources.ipynb | 21 ++-- notebooks/tutorials/distribute_flowline.ipynb | 36 ++++--- notebooks/tutorials/dynamical_spinup.ipynb | 71 +++++++------- .../elevation_bands_vs_centerlines.ipynb | 14 +-- .../tutorials/full_prepro_workflow.ipynb | 33 ++++--- .../ingest_gridded_data_on_flowlines.ipynb | 21 +++- notebooks/tutorials/inversion.ipynb | 20 ++-- .../tutorials/kcalving_parameterization.ipynb | 31 +++--- .../tutorials/massbalance_calibration.ipynb | 2 +- .../tutorials/massbalance_global_params.ipynb | 76 +++++++-------- 
.../tutorials/massbalance_perturbation.ipynb | 4 +- .../merge_gcm_runs_and_visualize.ipynb | 6 +- notebooks/tutorials/numeric_solvers.ipynb | 16 +++- ...served_thickness_with_dynamic_spinup.ipynb | 58 +++++++++-- notebooks/tutorials/oggm_shop.ipynb | 18 ++-- notebooks/tutorials/plot_mass_balance.ipynb | 33 ++++--- .../tutorials/preprocessing_errors.ipynb | 2 +- notebooks/tutorials/rgitopo_rgi6.ipynb | 14 +-- notebooks/tutorials/rgitopo_rgi7.ipynb | 8 +- .../run_with_a_spinup_and_gcm_data.ipynb | 5 +- .../store_and_compress_glacierdirs.ipynb | 9 +- .../tutorials/use_your_own_inventory.ipynb | 17 ++-- .../tutorials/where_are_the_flowlines.ipynb | 14 +-- 30 files changed, 445 insertions(+), 253 deletions(-) diff --git a/_config.yml b/_config.yml index 9008050f..ce67b2a0 100755 --- a/_config.yml +++ b/_config.yml @@ -23,11 +23,7 @@ html: use_repository_button: true use_issues_button: true use_edit_page_button: true - announcement: | -

- 🚧 Scheduled maintenance: the OGGM cluster will be offline April 27 (evening CEST) – April 30 (morning CEST) 2025. - Learn more. -

+ announcement: extra_footer: |

These notebooks are licensed under a BSD-3-Clause license. diff --git a/notebooks/10minutes/machine_learning.ipynb b/notebooks/10minutes/machine_learning.ipynb index fe6a0408..0e7461b4 100644 --- a/notebooks/10minutes/machine_learning.ipynb +++ b/notebooks/10minutes/machine_learning.ipynb @@ -70,8 +70,9 @@ "cfg.PARAMS['use_multiprocessing'] = False\n", "# Local working directory (where OGGM will write its output)\n", "cfg.PATHS['working_dir'] = utils.gettempdir('OGGM_Toy_Thickness_Model')\n", - "# We use the preprocessed directories with additional data in it: \"W5E5_w_data\" \n", - "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/elev_bands/W5E5_w_data/' ### TODO: update to 2025.6\n", + "# We use the preprocessed directories with additional data in it: \"W5E5_w_data\"\n", + "# the old 2023.3 preprocessed gdir had \"0\"-values instead of NaN-values for the Millan2022 data. This was corrected in the 2025.6 gdirs.\n", + "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/elev_bands_w_data/W5E5/per_glacier/'\n", "gdirs = workflow.init_glacier_directories(['RGI60-01.16195'], from_prepro_level=3, prepro_base_url=base_url, prepro_border=10)" ] }, @@ -116,6 +117,13 @@ "ds" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "There are a few NaN values, we will remove those later for the machine learning part." 
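The NaN bookkeeping added to the machine-learning notebook boils down to two distinct pandas idioms: flagging rows with at least one NaN (`df.isna().any(axis=1)`), and asserting that none remain after dropping them. Note that `np.any(~df.isna())` only checks that *some* value is valid, so the no-NaN assertion needs an all-reduction. A generic sketch with made-up values (not the actual Millan data):

```python
import numpy as np
import pandas as pd

# Hypothetical point data with one incomplete row
df = pd.DataFrame({'lon': [10.0, 10.1, 10.2],
                   'lat': [46.0, 46.1, 46.2],
                   'thick': [120.0, np.nan, 95.0]})

# Rows containing at least one NaN (here: the middle row)
bad_rows = df[df.isna().any(axis=1)]

# Drop them, then assert that no NaN is left anywhere
df = df.dropna()
assert df.notna().all().all()
```

The same distinction applies to the 2D case: masking grid points with `(ds.glacier_mask == 1) & (~np.isnan(ds.millan_v))` keeps only glacier cells where the velocity product has data.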
+ ] + }, { "cell_type": "markdown", "metadata": {}, "source": [ @@ -451,7 +459,9 @@ "coords = np.array([p.xy for p in df.geometry]).squeeze()\n", "df['lon'] = coords[:, 0]\n", "df['lat'] = coords[:, 1]\n", - "df = df[['lon', 'lat', 'thick']]" + "df = df[['lon', 'lat', 'thick']]\n", + "# check that there are no NaN values in the data (otherwise, we would remove them)\n", + "assert not df.isna().any().any()" ] }, @@ -561,6 +571,32 @@ " df[vn] = ds[vn].interp(x=('z', df.x), y=('z', df.y))" ] }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# there are a few rows without millan velocities\n", + "df[df.isna().any(axis=1)]" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**We remove those rows with NaN values (in the Millan velocities) to have a fair comparison**" ] }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "df = df.dropna()" ] }, { "cell_type": "markdown", "metadata": {}, @@ -611,7 +647,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "There are so many points that much of the information obtained by OGGM is interpolated and therefore not biring much new information to a statistical model. A way to deal with this is to aggregate all the measurement points per grid point and to average them. Let's do this: " + "There are so many points that much of the information obtained by OGGM is interpolated and is therefore not bringing much new information to a statistical model. A way to deal with this is to aggregate all the measurement points per grid point and to average them. 
Let's do this:" ] }, { @@ -879,7 +915,9 @@ "# Generate our dataset\n", "pred_data = pd.DataFrame()\n", "for vn in data.columns:\n", - " pred_data[vn] = ds[vn].data[ds.glacier_mask == 1]\n", + " # only take glacier gridpoints\n", + " # and only take those \"gridpoints\" where millan velocities is not NaN\n", + " pred_data[vn] = ds[vn].data[(ds.glacier_mask == 1) & (~np.isnan(ds.millan_v))]\n", "\n", "# Normalize using the same normalization constants\n", "pred_data = (pred_data - data_mean) / data_std\n", @@ -899,7 +937,7 @@ "source": [ "# Back to 2d and in xarray\n", "var = ds[vn].data * np.nan\n", - "var[ds.glacier_mask == 1] = pred_data['thick']\n", + "var[(ds.glacier_mask == 1) & (~np.isnan(ds.millan_v))] = pred_data['thick']\n", "ds['linear_model_thick'] = (('y', 'x'), var)\n", "ds['linear_model_thick'].attrs['description'] = 'Predicted thickness'\n", "ds['linear_model_thick'].attrs['units'] = 'm'\n", @@ -989,17 +1027,34 @@ { "cell_type": "code", "execution_count": null, - "metadata": { - "tags": [] - }, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, "outputs": [], "source": [ - "f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(12, 10))\n", - "ds['linear_model_thick'].plot(ax=ax1); ax1.set_title('Statistical model')\n", - "ds['distributed_thickness'].plot(ax=ax2); ax2.set_title('OGGM')\n", - "ds['millan_ice_thickness'].where(ds.glacier_mask).plot(ax=ax3); ax3.set_title('Millan 2022')\n", - "ds['consensus_ice_thickness'].plot(ax=ax4); ax4.set_title('Farinotti 2019')\n", - "plt.tight_layout();" + "f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(\n", + " 2, 2, figsize=(12, 10), constrained_layout=True\n", + ")\n", + "vmin= 0\n", + "vmax = ds[['linear_model_thick','distributed_thickness','millan_ice_thickness','consensus_ice_thickness']].to_dataframe().max().max()*1.05\n", + "\n", + "\n", + "im1 = ds['linear_model_thick'].plot(ax=ax1, vmin=vmin, vmax=vmax, add_colorbar=False)\n", + 
"ds['distributed_thickness'].plot(ax=ax2, vmin=vmin, vmax=vmax, add_colorbar=False)\n", + "ds['millan_ice_thickness'].where(ds.glacier_mask).plot(ax=ax3, vmin=vmin, vmax=vmax, add_colorbar=False)\n", + "ds['consensus_ice_thickness'].plot(ax=ax4, vmin=vmin, vmax=vmax, add_colorbar=False)\n", + "\n", + "ax1.set_title('Statistical model')\n", + "ax2.set_title('OGGM')\n", + "ax3.set_title('Millan 2022')\n", + "ax4.set_title('Farinotti 2019')\n", + "cbar = f.colorbar(im1, ax=[ax1, ax2, ax3, ax4], shrink=0.8, location='right', pad=0.02)\n", + "cbar.set_label(\"Ice thickness (m)\");\n" ] }, { @@ -1010,6 +1065,9 @@ }, "outputs": [], "source": [ + "## check that there are no NaN values\n", + "assert np.all(~np.isnan(df_agg))\n", + "###\n", "f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(12, 10))\n", "df_agg.plot.scatter(x='thick', y='linear_model_thick', ax=ax1)\n", "ax1.set_xlim([-25, 220]); ax1.set_ylim([-25, 220]); ax1.set_title('Statistical model')\n", "df_agg.plot.scatter(x='thick', y='millan_thick', ax=ax3)\n", "ax3.set_xlim([-25, 220]); ax3.set_ylim([-25, 220]); ax3.set_title('Millan 2022')\n", "df_agg.plot.scatter(x='thick', y='consensus_thick', ax=ax4)\n", "ax4.set_xlim([-25, 220]); ax4.set_ylim([-25, 220]); ax4.set_title('Farinotti 2019');" ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { diff --git a/notebooks/10minutes/preprocessed_directories.ipynb b/notebooks/10minutes/preprocessed_directories.ipynb index 2352bcc8..5b77c636 100644 --- a/notebooks/10minutes/preprocessed_directories.ipynb +++ b/notebooks/10minutes/preprocessed_directories.ipynb @@ -250,7 +250,7 @@ " rgi_ids, # which glaciers?\n", " prepro_base_url=DEFAULT_BASE_URL, # where to fetch the data?\n", " from_prepro_level=4, # what kind of data? \n", - " prepro_border=160 # how big of a map? 
## TODO update back to 80 if made available\n", + " prepro_border=80 # how big of a map?\n", ")" ] }, diff --git a/notebooks/construction/area_length_filter.ipynb b/notebooks/construction/area_length_filter.ipynb index 7e76d3ea..3770d0e5 100644 --- a/notebooks/construction/area_length_filter.ipynb +++ b/notebooks/construction/area_length_filter.ipynb @@ -64,7 +64,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We take the Kesselwandferner in the Austrian Alps:" + "We take the Hintereisferner in the Austrian Alps:" ] }, { @@ -73,7 +73,7 @@ "metadata": {}, "outputs": [], "source": [ - "rgi_ids = ['RGI60-11.00787']" + "rgi_ids = ['RGI60-11.00897'] # changed to HEF, because KWF does not show any spikes" ] }, { @@ -92,7 +92,7 @@ "# in OGGM v1.6 you have to explicitly indicate the url from where you want to start from\n", "# we will use here the elevation band flowlines which are much simpler than the centerlines\n", "base_url = ('https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/'\n", - " 'L3-L5_files/2023.3/elev_bands/W5E5/')\n", + " 'L3-L5_files/2025.6/elev_bands/W5E5/per_glacier')\n", "gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=5, prepro_border=80,\n", " prepro_base_url=base_url)" ] @@ -118,7 +118,7 @@ "outputs": [], "source": [ "workflow.execute_entity_task(tasks.run_random_climate, gdirs,\n", - " nyears=200, y0=2000, seed=5,\n", + " nyears=200, y0=1995, seed=5,\n", " output_filesuffix='_commitment');" ] }, @@ -206,8 +206,8 @@ "outputs": [], "source": [ "# Plot\n", - "ds.area.plot(label='Original');\n", - "ts.plot(label='Filtered');\n", + "ds.area.plot(label='Original')\n", + "ts.plot(label='Filtered')\n", "plt.legend();" ] }, @@ -229,8 +229,8 @@ "ts = ts.rolling(roll_yrs).min()\n", "ts.iloc[0:roll_yrs] = ts.iloc[roll_yrs]\n", "# Plot\n", - "ds.length.plot(label='Original');\n", - "ts.plot(label='Filtered');\n", + "ds.length.plot(label='Original')\n", + "ts.plot(label='Filtered')\n", "plt.legend();" ] }, diff --git 
a/notebooks/tutorials/building_the_prepro_gdirs.ipynb b/notebooks/tutorials/building_the_prepro_gdirs.ipynb index 7e161d34..738f6837 100644 --- a/notebooks/tutorials/building_the_prepro_gdirs.ipynb +++ b/notebooks/tutorials/building_the_prepro_gdirs.ipynb @@ -321,7 +321,7 @@ " ]\n", "\n", " for task in elevation_band_task_list:\n", - " workflow.execute_entity_task(task, gdirs);\n", + " workflow.execute_entity_task(task, gdirs)\n", "\n", "elif flowline_type_to_use == 'centerline':\n", " # for centerlines we can use parabola downstream line\n", @@ -424,16 +424,16 @@ "cfg.PARAMS['baseline_climate'] = cfg.PARAMS['baseline_climate']\n", "\n", "# add climate data to gdir\n", - "workflow.execute_entity_task(tasks.process_climate_data, gdirs);\n", + "workflow.execute_entity_task(tasks.process_climate_data, gdirs)\n", "\n", "# the default mb calibration\n", "workflow.execute_entity_task(tasks.mb_calibration_from_geodetic_mb,\n", " gdirs,\n", " informed_threestep=True, # only available for 'GSWP3_W5E5'\n", - " );\n", + " )\n", "\n", "# glacier bed inversion\n", - "workflow.execute_entity_task(tasks.apparent_mb_from_any_mb, gdirs);\n", + "workflow.execute_entity_task(tasks.apparent_mb_from_any_mb, gdirs)\n", "workflow.calibrate_inversion_from_consensus(\n", " gdirs,\n", " apply_fs_on_mismatch=True,\n", @@ -441,7 +441,7 @@ " filter_inversion_output=True, # this partly filters the overdeepening due to\n", " # the equilibrium assumption for retreating glaciers (see. Figure 5 of Maussion et al. 
2019)\n", " volume_m3_reference=None, # here you could provide your own total volume estimate in m3\n", - ");\n", + ")\n", "\n", "# finally create the dynamic flowlines\n", "workflow.execute_entity_task(tasks.init_present_time_glacier, gdirs);" @@ -503,7 +503,8 @@ " if flowline_type_to_use == 'elevation_band':\n", " prepro_base_url_L3 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/elev_bands/W5E5/per_glacier/'\n", " elif flowline_type_to_use == 'centerline':\n", - " prepro_base_url_L3 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/centerlines/W5E5/per_glacier/' ###todo\n", + " ### for centerlines, we only provide the spinup gdir for 2025.6; however, as we redo the steps, it does not matter\n", + " prepro_base_url_L3 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/centerlines/W5E5/per_glacier_spinup/'\n", " else:\n", " raise ValueError(f\"Unknown flowline type '{flowline_type_to_use}'! Select 'elevation_band' or 'centerline'!\")\n", "\n", @@ -591,7 +592,8 @@ " if flowline_type_to_use == 'elevation_band':\n", " prepro_base_url_L4 = DEFAULT_BASE_URL\n", " elif flowline_type_to_use == 'centerline':\n", - " prepro_base_url_L4 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/centerlines/W5E5/per_glacier/' ###todo\n", + " ### for centerlines, we only provide the spinup gdir for 2025.6. However, in most cases this is what you want to use anyway.\n", + " prepro_base_url_L4 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/centerlines/W5E5/per_glacier_spinup/'\n", " else:\n", " raise ValueError(f\"Unknown flowline type '{flowline_type_to_use}'! 
Select 'elevation_band' or 'centerline'!\")\n", " gdirs = workflow.init_glacier_directories(rgi_ids,\n", @@ -661,7 +663,8 @@ " if flowline_type_to_use == 'elevation_band':\n", " prepro_base_url_L5 = DEFAULT_BASE_URL\n", " elif flowline_type_to_use == 'centerline':\n", - " prepro_base_url_L5 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/centerlines/W5E5/per_glacier/' ###todo\n", + " ###for centerlines, we only provide the spinup gdir for 2025.6. However, in most cases this is what you want to use anyways.\n", + " prepro_base_url_L5 = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/centerlines/W5E5/per_glacier_spinup/'\n", " else:\n", " raise ValueError(f\"Unknown flowline type '{flowline_type_to_use}'! Select 'elevation_band' or 'centerline'!\")\n", " gdirs = workflow.init_glacier_directories(rgi_ids,\n", @@ -691,6 +694,14 @@ "- return to the [OGGM documentation](https://docs.oggm.org)\n", "- back to the [table of contents](../welcome.ipynb)" ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8bf9e338d3864230", + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { diff --git a/notebooks/tutorials/centerlines_to_shape.ipynb b/notebooks/tutorials/centerlines_to_shape.ipynb index 4a17863e..bae2df04 100644 --- a/notebooks/tutorials/centerlines_to_shape.ipynb +++ b/notebooks/tutorials/centerlines_to_shape.ipynb @@ -70,7 +70,7 @@ "dem = rioxr.open_rasterio(fpath_dem)\n", "\n", "f, ax = plt.subplots(figsize=(9, 9))\n", - "dem.plot(ax=ax, cmap='terrain', vmin=0);\n", + "dem.plot(ax=ax, cmap='terrain', vmin=0)\n", "inventory.plot(ax=ax, edgecolor='k', facecolor='C1');" ] }, @@ -199,9 +199,9 @@ "source": [ "gdirs = workflow.init_glacier_directories(gdf)\n", "\n", - "workflow.execute_entity_task(tasks.define_glacier_region, gdirs, source='USER'); # Use the user DEM\n", + "workflow.execute_entity_task(tasks.define_glacier_region, gdirs, source='USER') # Use the user DEM\n", "\n", - 
"workflow.execute_entity_task(tasks.glacier_masks, gdirs);\n", + "workflow.execute_entity_task(tasks.glacier_masks, gdirs)\n", "workflow.execute_entity_task(tasks.compute_centerlines, gdirs);" ] }, @@ -297,7 +297,7 @@ "sel_breID = 1189 # 5570\n", "\n", "f, ax = plt.subplots(figsize=(9, 4))\n", - "orig_inventory.loc[[sel_breID]].plot(ax=ax, facecolor='lightblue');\n", + "orig_inventory.loc[[sel_breID]].plot(ax=ax, facecolor='lightblue')\n", "cls_default.loc[[sel_breID]].plot(ax=ax);" ] }, @@ -360,8 +360,8 @@ "sel_breID = 1189\n", "\n", "f, ax = plt.subplots(figsize=(9, 4))\n", - "orig_inventory.loc[[sel_breID]].plot(ax=ax, facecolor='lightblue');\n", - "cls_default.loc[[sel_breID]].plot(ax=ax, color='C0', alpha=0.5);\n", + "orig_inventory.loc[[sel_breID]].plot(ax=ax, facecolor='lightblue')\n", + "cls_default.loc[[sel_breID]].plot(ax=ax, color='C0', alpha=0.5)\n", "cls_smooth.loc[[sel_breID]].plot(ax=ax, color='C3');" ] }, @@ -388,6 +388,13 @@ "- return to the [OGGM documentation](https://docs.oggm.org)\n", "- back to the [table of contents](../welcome.ipynb)" ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { diff --git a/notebooks/tutorials/deal_with_errors.ipynb b/notebooks/tutorials/deal_with_errors.ipynb index 0c47d224..c68ee968 100644 --- a/notebooks/tutorials/deal_with_errors.ipynb +++ b/notebooks/tutorials/deal_with_errors.ipynb @@ -91,7 +91,7 @@ "outputs": [], "source": [ "# Write the compiled output\n", - "utils.compile_glacier_statistics(gdirs); # saved as glacier_statistics.csv in the WORKING_DIR folder\n", + "utils.compile_glacier_statistics(gdirs) # saved as glacier_statistics.csv in the WORKING_DIR folder\n", "utils.compile_run_output(gdirs); # saved as run_output.nc in the WORKING_DIR folder" ] }, @@ -197,7 +197,7 @@ "gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=5, prepro_base_url=DEFAULT_BASE_URL)\n", 
"workflow.execute_entity_task(tasks.run_random_climate, gdirs, y0=2000,\n", " nyears=150, seed=0,\n", - " temperature_bias=-2);\n", + " temperature_bias=-2)\n", "\n", "# recompute the output\n", "# we can also get the run output directly from the methods\n", diff --git a/notebooks/tutorials/dem_sources.ipynb b/notebooks/tutorials/dem_sources.ipynb index dd3a9715..b19d0280 100644 --- a/notebooks/tutorials/dem_sources.ipynb +++ b/notebooks/tutorials/dem_sources.ipynb @@ -120,7 +120,7 @@ "source": [ "da = rioxr.open_rasterio(dem_path)\n", "f, ax = plt.subplots()\n", - "da.plot(cmap='terrain', ax=ax);\n", + "da.plot(cmap='terrain', ax=ax)\n", "# Add the outlines\n", "gdir.read_shapefile('outlines').plot(ax=ax, color='none', edgecolor='black');" ] @@ -264,7 +264,7 @@ "source": [ "f, ax = plt.subplots()\n", "da_dem3 = rioxr.open_rasterio(gdir.get_filepath('dem'))\n", - "da_dem3.plot(cmap='terrain', ax=ax);\n", + "da_dem3.plot(cmap='terrain', ax=ax)\n", "gdir.read_shapefile('outlines').plot(ax=ax, color='none', edgecolor='black');" ] }, @@ -282,8 +282,8 @@ "outputs": [], "source": [ "f, ax = plt.subplots()\n", - "(da_dem3 - da).plot(ax=ax);\n", - "plt.title('DEM3 - SRTM');\n", + "(da_dem3 - da).plot(ax=ax)\n", + "plt.title('DEM3 - SRTM')\n", "gdir.read_shapefile('outlines').plot(ax=ax, color='none', edgecolor='black');" ] }, @@ -382,7 +382,7 @@ "source": [ "f, ax = plt.subplots()\n", "da_user = rioxr.open_rasterio(gdir.get_filepath('dem'))\n", - "da_user.plot(cmap='terrain', ax=ax);\n", + "da_user.plot(cmap='terrain', ax=ax)\n", "gdir.read_shapefile('outlines').plot(ax=ax, color='none', edgecolor='black');" ] }, @@ -433,7 +433,7 @@ "tasks.define_glacier_region(gdir)\n", "da = rioxr.open_rasterio(gdir.get_filepath('dem'))\n", "f, ax = plt.subplots()\n", - "da.plot(cmap='terrain', ax=ax);\n", + "da.plot(cmap='terrain', ax=ax)\n", "# Add the outlines\n", "gdir.read_shapefile('outlines').plot(ax=ax, color='none', edgecolor='black');" ] @@ -459,7 +459,7 @@ 
"tasks.define_glacier_region(gdir)\n", "da = rioxr.open_rasterio(gdir.get_filepath('dem'))\n", "f, ax = plt.subplots()\n", - "da.plot(cmap='terrain', ax=ax);\n", + "da.plot(cmap='terrain', ax=ax)\n", "# Add the outlines\n", "gdir.read_shapefile('outlines').plot(ax=ax, color='none', edgecolor='black');" ] @@ -473,6 +473,13 @@ "- return to the [OGGM documentation](https://docs.oggm.org)\n", "- back to the [table of contents](../welcome.ipynb)" ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { diff --git a/notebooks/tutorials/distribute_flowline.ipynb b/notebooks/tutorials/distribute_flowline.ipynb index e48622a1..5ee1f10d 100644 --- a/notebooks/tutorials/distribute_flowline.ipynb +++ b/notebooks/tutorials/distribute_flowline.ipynb @@ -197,7 +197,7 @@ "source": [ "# Initial glacier thickness\n", "f, ax = plt.subplots()\n", - "ds.distributed_thickness.plot(ax=ax);\n", + "ds.distributed_thickness.plot(ax=ax)\n", "ax.axis('equal');" ] }, @@ -212,8 +212,8 @@ "source": [ "# Which points belongs to which band, and then within one band which are the first to melt\n", "f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))\n", - "ds.band_index.plot.contourf(ax=ax1);\n", - "ds.rank_per_band.plot(ax=ax2);\n", + "ds.band_index.plot.contourf(ax=ax1)\n", + "ds.rank_per_band.plot(ax=ax2)\n", "ax1.axis('equal'); ax2.axis('equal'); plt.tight_layout();" ] }, @@ -294,10 +294,10 @@ "source": [ "def plot_distributed_thickness(ds, title):\n", " f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(14, 4))\n", - " ds.simulated_thickness.sel(time=2005).plot(ax=ax1, vmax=400);\n", - " ds.simulated_thickness.sel(time=2050).plot(ax=ax2, vmax=400);\n", - " ds.simulated_thickness.sel(time=2100).plot(ax=ax3, vmax=400);\n", - " ax1.axis('equal'); ax2.axis('equal'); f.suptitle(title, fontsize=20);\n", + " ds.simulated_thickness.sel(time=2005).plot(ax=ax1, vmax=400)\n", + " ds.simulated_thickness.sel(time=2050).plot(ax=ax2, 
vmax=400)\n", + " ds.simulated_thickness.sel(time=2100).plot(ax=ax3, vmax=400)\n", + " ax1.axis('equal'); ax2.axis('equal'); f.suptitle(title, fontsize=20)\n", " plt.tight_layout();\n", "\n", "plot_distributed_thickness(ds[0], 'Aletsch')\n", @@ -323,8 +323,8 @@ "source": [ "def plot_area(ds, gdir, title):\n", " area = (ds.simulated_thickness > 0).sum(dim=['x', 'y']) * gdir.grid.dx**2 * 1e-6\n", - " area.plot(label='Distributed area');\n", - " plt.hlines(gdir.rgi_area_km2, gdir.rgi_date, 2100, color='C3', linestyles='--', label='RGI Area');\n", + " area.plot(label='Distributed area')\n", + " plt.hlines(gdir.rgi_area_km2, gdir.rgi_date, 2100, color='C3', linestyles='--', label='RGI Area')\n", " plt.legend(loc='lower left'); plt.ylabel('Area [km2]'); plt.title(title, fontsize=20); plt.show();\n", "\n", "\n", @@ -351,7 +351,7 @@ "source": [ "def plot_volume(ds, gdir, title):\n", " vol = ds.simulated_thickness.sum(dim=['x', 'y']) * gdir.grid.dx**2 * 1e-9\n", - " vol.plot(label='Distributed volume'); plt.ylabel('Distributed volume [km3]');\n", + " vol.plot(label='Distributed volume'); plt.ylabel('Distributed volume [km3]')\n", " plt.title(title, fontsize=20); plt.show();\n", "\n", "\n", @@ -401,7 +401,7 @@ "outputs": [], "source": [ "from matplotlib import animation\n", - "from IPython.display import HTML, display\n", + "from IPython.display import HTML\n", "\n", "# Get a handle on the figure and the axes\n", "fig, ax = plt.subplots()\n", @@ -507,10 +507,10 @@ "source": [ "def plot_area_smoothed(ds_smooth, ds, gdir, title):\n", " area = (ds.simulated_thickness > 0).sum(dim=['x', 'y']) * gdir.grid.dx**2 * 1e-6\n", - " area.plot(label='Distributed area (raw)');\n", + " area.plot(label='Distributed area (raw)')\n", " area = (ds_smooth.simulated_thickness > 0).sum(dim=['x', 'y']) * gdir.grid.dx**2 * 1e-6\n", - " area.plot(label='Distributed area (smooth)');\n", - " plt.legend(loc='lower left'); plt.ylabel('Area [km2]');\n", + " area.plot(label='Distributed area 
(smooth)')\n", +    "    plt.legend(loc='lower left'); plt.ylabel('Area [km2]')\n", "    plt.title(title, fontsize=20); plt.show();\n", "\n", "\n", @@ -819,6 +819,14 @@ "source": [ "The WIP tool is available here: https://github.com/OGGM/oggm-3dviz" ] +  }, +  { +   "cell_type": "code", +   "execution_count": null, +   "id": "9c7a9237721c1d87", +   "metadata": {}, +   "outputs": [], +   "source": [] }  ],  "metadata": { diff --git a/notebooks/tutorials/dynamical_spinup.ipynb b/notebooks/tutorials/dynamical_spinup.ipynb index 3fd9f2a2..337900a1 100644 --- a/notebooks/tutorials/dynamical_spinup.ipynb +++ b/notebooks/tutorials/dynamical_spinup.ipynb @@ -37,7 +37,7 @@ "- Get the model area or volume of the glacier at *t_end*.\n", "- Compare the model value to the reference value we want to meet.\n", "- If the difference is inside a given precision, stop the procedure and save the glacier evolution of this run.\n", -    "- If the difference is outside a given precision, change the temperature bias for mb_spinup and start over again (how the next guess is found is descript [here](#The-minimization-algorithm:)).\n", +    "- If the difference is outside a given precision, change the temperature bias for mb_spinup and start over again (how the next guess is found is described [here](#the-minimization-algorithm)).\n", "\n", "With OGGM version 1.6.1 it is now also possible to provide a custom target year ```target_yr``` with target value ```target_value``` to match. This could be useful if you have more data available than on a global scale (e.g. 
an extra outline at a later date than the RGI).\n", "\n", @@ -62,7 +62,7 @@ "- calculate the modelled geodetic mass balance\n", "- calculate the difference between modelled value and observation\n", "- if the difference is inside the observation uncertainty stop and save the model run (indicated in diagnostics with ```used_spinup_option = dynamic melt_f calibration (full success)```)\n", - "- if the difference is larger, define a new *melt_f* and start over again (how the next guess is found is described [here](#The-minimization-algorithm:))\n", + "- if the difference is larger, define a new *melt_f* and start over again (how the next guess is found is described [here](#the-minimization-algorithm))\n", "\n", "If the iterative search is not successful and ```ignore_errors = True``` there are several possible outcomes:\n", "\n", @@ -74,7 +74,7 @@ "\n", "## The minimization algorithm:\n", "\n", - "To start, a first guess of the control variable (temperature bias or *melt_f*) is used and evaluated. If by chance, the mismatch between model and observation is close enough, the algorithm stops already. Otherwise, the second guess depends on the calculated first guess mismatch. For example, if the first resulting area is smaller (larger) than the searched one, the second temperature bias will be colder (warmer). Because a colder (warmer) temperature leads to a larger (smaller) initial glacier state at *t_start*. If the second guess is still unsuccessful, for all consecutive guesses the previous value pairs (control variable, mismatch) are used to determine the next guess. For this, a stepwise linear function is fitted to these pairs and afterwards, the mismatch is set to 0 to get the following guess (this method is similar to the one described in Zekollari et al. 2019 Appendix A). Moreover, a maximum step length between two guesses is defined as too large step-sizes could easily lead to failing model runs (e.g. 
see [here](#Two-main-problems-why-the-dynamic-spinup-could-not-work:)). Further, the algorithm was adapted independently inside ```run_dynamic_spinup``` and ```run_dynamic_melt_f_calibration``` to cope with failing model runs individually. Note that this minimization algorithm only works if the underlying relationship between the control variable and the mismatch is strictly monotone.\n", +    "To start, a first guess of the control variable (temperature bias or *melt_f*) is used and evaluated. If, by chance, the mismatch between model and observation is close enough, the algorithm stops right away. Otherwise, the second guess depends on the calculated first guess mismatch. For example, if the first resulting area is smaller (larger) than the searched one, the second temperature bias will be colder (warmer), because a colder (warmer) temperature leads to a larger (smaller) initial glacier state at *t_start*. If the second guess is still unsuccessful, for all consecutive guesses the previous value pairs (control variable, mismatch) are used to determine the next guess. For this, a stepwise linear function is fitted to these pairs and afterwards, the mismatch is set to 0 to get the following guess (this method is similar to the one described in Zekollari et al. 2019 Appendix A). Moreover, a maximum step length between two guesses is defined, as too large step-sizes could easily lead to failing model runs (e.g. see [here](#two-main-problems-why-the-dynamic-spinup-could-not-work)). Further, the algorithm was adapted independently inside ```run_dynamic_spinup``` and ```run_dynamic_melt_f_calibration``` to cope with failing model runs individually. 
Note that this minimization algorithm only works if the underlying relationship between the control variable and the mismatch is strictly monotone.\n", "\n", "If someone is interested in how this algorithm works in more detail, here is a conceptual code snippet:" ] @@ -249,7 +249,7 @@ "outputs": [], "source": [ "# We use a relatively large border value to allow the glacier to grow during spinup\n", - "gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=3, prepro_border=80, #todo <-- does that work, or do we need 160?\n", + "gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=3, prepro_border=80,\n", " prepro_base_url=base_url)" ] }, @@ -353,7 +353,7 @@ "metadata": {}, "outputs": [], "source": [ - "# Now make a plot for comparision\n", + "# Now make a plot for comparison\n", "y0 = gdir.rgi_date + 1\n", "\n", "f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(16, 5))\n", @@ -422,7 +422,7 @@ "outputs": [], "source": [ "# define an artificial error for dmdtda\n", - "dmdtda_reference_error_artificial = 15 # error must be given as a positive number\n", + "dmdtda_reference_error_artificial = 30 # error must be given as a positive number\n", "\n", "tasks.run_dynamic_melt_f_calibration(gdir,\n", " ys=spinup_start_yr, # When to start the spinup\n", @@ -430,7 +430,7 @@ " output_filesuffix='_dynamic_melt_f_artificial', # Where to write the output\n", " ref_dmdtda=dmdtda_reference, # user-provided geodetic mass balance observation\n", " err_ref_dmdtda=dmdtda_reference_error_artificial, # uncertainty of user-provided geodetic mass balance observation \n", - " );\n", + " )\n", "\n", "with xr.open_dataset(gdir.get_filepath('model_diagnostics', filesuffix='_dynamic_melt_f_artificial')) as ds:\n", " ds_dynamic_melt_f_artificial = ds.load()" @@ -447,32 +447,32 @@ "\n", "f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(16, 5))\n", "\n", - "ds_hist.volume_m3.plot(ax=ax1, label='Fixed geometry spinup');\n", - "ds_dynamic_melt_f.volume_m3.plot(ax=ax1, 
label='Dynamical melt_f calibration (original error)');\n", - "ds_dynamic_melt_f_artificial.volume_m3.plot(ax=ax1, label='Dynamical melt_f calibration (aritificial error)');\n", - "ds_dynamic_spinup_area.volume_m3.plot(ax=ax1, label='Dynamical spinup match area');\n", - "ax1.set_title('Volume');\n", - "ax1.scatter(y0, volume_reference, c='C3', label='Reference values');\n", - "ax1.legend();\n", - "\n", - "ds_hist.area_m2.plot(ax=ax2);\n", - "ds_dynamic_melt_f.area_m2.plot(ax=ax2);\n", - "ds_dynamic_melt_f_artificial.area_m2.plot(ax=ax2);\n", - "ds_dynamic_spinup_area.area_m2.plot(ax=ax2);\n", - "ax2.set_title('Area');\n", + "ds_hist.volume_m3.plot(ax=ax1, label='Fixed geometry spinup')\n", + "ds_dynamic_melt_f.volume_m3.plot(ax=ax1, label='Dynamical melt_f calibration (original error)')\n", + "ds_dynamic_melt_f_artificial.volume_m3.plot(ax=ax1, label='Dynamical melt_f calibration (artificial error)')\n", + "ds_dynamic_spinup_area.volume_m3.plot(ax=ax1, label='Dynamical spinup match area')\n", + "ax1.set_title('Volume')\n", + "ax1.scatter(y0, volume_reference, c='C3', label='Reference values')\n", + "ax1.legend()\n", + "\n", + "ds_hist.area_m2.plot(ax=ax2)\n", + "ds_dynamic_melt_f.area_m2.plot(ax=ax2)\n", + "ds_dynamic_melt_f_artificial.area_m2.plot(ax=ax2)\n", + "ds_dynamic_spinup_area.area_m2.plot(ax=ax2)\n", + "ax2.set_title('Area')\n", "ax2.scatter(y0, area_reference, c='C3')\n", "\n", - "ds_hist.length_m.plot(ax=ax3);\n", - "ds_dynamic_melt_f.length_m.plot(ax=ax3);\n", - "ds_dynamic_melt_f_artificial.length_m.plot(ax=ax3);\n", - "ds_dynamic_spinup_area.length_m.plot(ax=ax3);\n", - "ax3.set_title('Length');\n", + "ds_hist.length_m.plot(ax=ax3)\n", + "ds_dynamic_melt_f.length_m.plot(ax=ax3)\n", + "ds_dynamic_melt_f_artificial.length_m.plot(ax=ax3)\n", + "ds_dynamic_spinup_area.length_m.plot(ax=ax3)\n", + "ax3.set_title('Length')\n", "ax3.scatter(y0, ds_hist.sel(time=y0).length_m, c='C3')\n", "\n", "plt.tight_layout()\n", "plt.show();\n", "\n", - "# and print out 
the modeled geodetic mass balances for comparision\n", + "# and print out the modeled geodetic mass balances for comparison\n", "def get_dmdtda(ds):\n", " yr0_ref_mb, yr1_ref_mb = ref_period.split('_')\n", " yr0_ref_mb = int(yr0_ref_mb.split('-')[0])\n", @@ -593,7 +593,7 @@ "outputs": [], "source": [ "def plot_dynamic_spinup_bad_glacier():\n", - " # Now make a plot for comparision\n", + " # Now make a plot for comparison\n", " y0 = gdir.rgi_date + 1\n", "\n", " f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(16, 5))\n", @@ -620,7 +620,7 @@ " plt.tight_layout()\n", " plt.show();\n", "\n", - " # and print out the modeled geodetic mass balances for comparision\n", + " # and print out the modeled geodetic mass balances for comparison\n", " def get_dmdtda(ds):\n", " yr0_ref_mb, yr1_ref_mb = ref_period.split('_')\n", " yr0_ref_mb = int(yr0_ref_mb.split('-')[0])\n", @@ -639,7 +639,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "In this example, you see that the dynamic spinup run matching area does not start in 1980. The reason for this are the two main problems and the coping strategy of reducing the spinup time, described [here](#Two-main-problems-why-the-dynamic-spinup-could-not-work:) in more detail.\n", + "In this example, you see that the dynamic spinup run matching area does not start in 1980. The reason for this are the two main problems and the coping strategy of reducing the spinup time, described [here](#two-main-problems-why-the-dynamic-spinup-could-not-work) in more detail.\n", "\n", "To get a glacier evolution starting at 1980 you can use ```add_fixed_geometry_spinup = True```:" ] @@ -669,7 +669,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "You see that a fixed geomtery was added to the dynamical spinup matching the area (orange curve). With this you can be sure that all glaciers start at the same year." + "You see that a fixed geometry was added to the dynamical spinup matching the area (orange curve). 
With this you can be sure that all glaciers start at the same year." ] }, { @@ -764,7 +764,7 @@ "It consists of the following components:\n", "- melt off-glacier: snow melt on areas that are now glacier free (i.e. 0 in the year of largest glacier extent, in this example at the start of the simulation)\n", "- melt on-glacier: ice + seasonal snow melt on the glacier\n", - "- liquid precipitaton on- and off-glacier (the latter being zero at the year of largest glacial extent, in this example at start of the simulation)" + "- liquid precipitation on- and off-glacier (the latter being zero at the year of largest glacial extent, in this example at start of the simulation)" ] }, { @@ -830,8 +830,15 @@ "## What's next?\n", "\n", "- return to the [OGGM documentation](https://docs.oggm.org)\n", - "- back to the [table of contents](welcome.ipynb)" + "- back to the [table of contents](../welcome.ipynb)" ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { diff --git a/notebooks/tutorials/elevation_bands_vs_centerlines.ipynb b/notebooks/tutorials/elevation_bands_vs_centerlines.ipynb index 41a784dc..4faa8968 100644 --- a/notebooks/tutorials/elevation_bands_vs_centerlines.ipynb +++ b/notebooks/tutorials/elevation_bands_vs_centerlines.ipynb @@ -90,7 +90,7 @@ "cfg.PATHS['working_dir'] = utils.gettempdir(dirname='OGGM-centerlines', reset=True)\n", "\n", "# We start from prepro level 3 with all data ready - note the url here\n", - "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/centerlines/W5E5/'\n", + "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/centerlines/W5E5/per_glacier_spinup'\n", "gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=3, prepro_border=80, prepro_base_url=base_url)\n", "gdir_cl = gdirs[0]\n", "gdir_cl" @@ -109,7 +109,7 @@ "cfg.PATHS['working_dir'] = 
utils.gettempdir(dirname='OGGM-elevbands', reset=True)\n", "\n", "# Note the new url\n", - "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/elev_bands/W5E5/'\n", + "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/elev_bands/W5E5/per_glacier_spinup'\n", "gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=3, prepro_border=80, prepro_base_url=base_url)\n", "gdir_eb = gdirs[0]\n", "gdir_eb" @@ -207,7 +207,7 @@ "source": [ "from oggm.shop import gcm_climate\n", "\n", - "# you can choose one of these 5 different GCMs:\n", + "# you can choose for example one of these 5 primary ISIMIP3b GCMs:\n", "# 'gfdl-esm4_r1i1p1f1', 'mpi-esm1-2-hr_r1i1p1f1', 'mri-esm2-0_r1i1p1f1' (\"low sensitivity\" models, within typical ranges from AR6)\n", "# 'ipsl-cm6a-lr_r1i1p1f1', 'ukesm1-0-ll_r1i1p1f2' (\"hotter\" models, especially ukesm1-0-ll)\n", "member = 'mri-esm2-0_r1i1p1f1' \n", @@ -250,7 +250,7 @@ "\n", " workflow.execute_entity_task(tasks.run_from_climate_data, [gdir],\n", " output_filesuffix='_historical', \n", - " );\n", + " )\n", "\n", " for ssp in ['ssp126', 'ssp370', 'ssp585']:\n", " rid = f'_ISIMIP3b_{member}_{ssp}'\n", @@ -277,11 +277,11 @@ "for ssp in ['ssp126','ssp370', 'ssp585']:\n", " rid = f'_ISIMIP3b_{member}_{ssp}'\n", " with xr.open_dataset(gdir_cl.get_filepath('model_diagnostics', filesuffix=rid)) as ds:\n", - " ds.volume_m3.plot(ax=ax1, label=ssp, c=color_dict[ssp]);\n", + " ds.volume_m3.plot(ax=ax1, label=ssp, c=color_dict[ssp])\n", "for ssp in ['ssp126','ssp370', 'ssp585']:\n", " rid = f'_ISIMIP3b_{member}_{ssp}'\n", " with xr.open_dataset(gdir_eb.get_filepath('model_diagnostics', filesuffix=rid)) as ds:\n", - " ds.volume_m3.plot(ax=ax1, label=ssp, c=color_dict[ssp], ls='--');\n", + " ds.volume_m3.plot(ax=ax1, label=ssp, c=color_dict[ssp], ls='--')\n", " ax1.set_title('Glacier volume')\n", " ax1.set_xlim([2020,2100])\n", " ax1.set_ylim([0, 
ds.volume_m3.max().max()*1.1])\n", @@ -289,7 +289,7 @@ "for ssp in ['ssp126','ssp370', 'ssp585']:\n", " rid = f'_ISIMIP3b_{member}_{ssp}'\n", " with xr.open_dataset(gdir_cl.get_filepath('model_diagnostics', filesuffix=rid)) as ds:\n", - " ds.length_m.plot(ax=ax2, label=ssp, c=color_dict[ssp]);\n", + " ds.length_m.plot(ax=ax2, label=ssp, c=color_dict[ssp])\n", " ax2.set_ylim([0, ds.length_m.max().max()*1.1])\n", "for ssp in ['ssp126','ssp370', 'ssp585']:\n", " rid = f'_ISIMIP3b_{member}_{ssp}'\n", diff --git a/notebooks/tutorials/full_prepro_workflow.ipynb b/notebooks/tutorials/full_prepro_workflow.ipynb index b839f48e..5f3229a1 100644 --- a/notebooks/tutorials/full_prepro_workflow.ipynb +++ b/notebooks/tutorials/full_prepro_workflow.ipynb @@ -223,7 +223,7 @@ "outputs": [], "source": [ "# Where to fetch the pre-processed directories\n", - "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/centerlines/W5E5'\n", + "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/centerlines/W5E5/per_glacier_spinup/'\n", "gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=3, prepro_base_url=base_url, prepro_border=80)" ] }, @@ -350,7 +350,7 @@ "**Entity Tasks**:\n", " Standalone operations to be realized on one single glacier entity,\n", " independent of the others. The majority of OGGM\n", - " tasks are entity tasks. They are parallelisable: the same task can run on \n", + " tasks are entity tasks. 
They are parallelisable: the same task can run on\n", " several glaciers in parallel.\n", "\n", "**Global Task**:\n", @@ -709,7 +709,7 @@ "outputs": [], "source": [ "f, (ax1, ax2) = plt.subplots(1, 2, figsize=(13, 4))\n", - "ds2000.volume.plot.line(ax=ax1, hue='rgi_id');\n", + "ds2000.volume.plot.line(ax=ax1, hue='rgi_id')\n", "ds2000.length.plot.line(ax=ax2, hue='rgi_id');" ] }, @@ -755,7 +755,7 @@ "source": [ "workflow.execute_entity_task(tasks.run_random_climate, gdirs, nyears=200,\n", " temperature_bias=0.5,\n", - " y0=2000, output_filesuffix='_p05');\n", + " y0=2000, output_filesuffix='_p05')\n", "workflow.execute_entity_task(tasks.run_random_climate, gdirs, nyears=200,\n", " temperature_bias=-0.5,\n", " y0=2000, output_filesuffix='_m05');" @@ -779,15 +779,15 @@ "source": [ "f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(16, 4))\n", "rgi_id = 'RGI60-11.01328'\n", - "ds2000.sel(rgi_id=rgi_id).volume.plot.line(ax=ax1, hue='rgi_id', label='Commitment');\n", - "ds2000.sel(rgi_id=rgi_id).area.plot.line(ax=ax2, hue='rgi_id');\n", - "ds2000.sel(rgi_id=rgi_id).length.plot.line(ax=ax3, hue='rgi_id');\n", - "dsp.sel(rgi_id=rgi_id).volume.plot.line(ax=ax1, hue='rgi_id', label='$+$ 0.5°C');\n", - "dsp.sel(rgi_id=rgi_id).area.plot.line(ax=ax2, hue='rgi_id');\n", - "dsp.sel(rgi_id=rgi_id).length.plot.line(ax=ax3, hue='rgi_id');\n", - "dsm.sel(rgi_id=rgi_id).volume.plot.line(ax=ax1, hue='rgi_id', label='$-$ 0.5°C');\n", - "dsm.sel(rgi_id=rgi_id).area.plot.line(ax=ax2, hue='rgi_id');\n", - "dsm.sel(rgi_id=rgi_id).length.plot.line(ax=ax3, hue='rgi_id');\n", + "ds2000.sel(rgi_id=rgi_id).volume.plot.line(ax=ax1, hue='rgi_id', label='Commitment')\n", + "ds2000.sel(rgi_id=rgi_id).area.plot.line(ax=ax2, hue='rgi_id')\n", + "ds2000.sel(rgi_id=rgi_id).length.plot.line(ax=ax3, hue='rgi_id')\n", + "dsp.sel(rgi_id=rgi_id).volume.plot.line(ax=ax1, hue='rgi_id', label='$+$ 0.5°C')\n", + "dsp.sel(rgi_id=rgi_id).area.plot.line(ax=ax2, hue='rgi_id')\n", + 
"dsp.sel(rgi_id=rgi_id).length.plot.line(ax=ax3, hue='rgi_id')\n", + "dsm.sel(rgi_id=rgi_id).volume.plot.line(ax=ax1, hue='rgi_id', label='$-$ 0.5°C')\n", + "dsm.sel(rgi_id=rgi_id).area.plot.line(ax=ax2, hue='rgi_id')\n", + "dsm.sel(rgi_id=rgi_id).length.plot.line(ax=ax3, hue='rgi_id')\n", "ax1.legend();" ] }, @@ -807,6 +807,13 @@ "- return to the [OGGM documentation](https://docs.oggm.org)\n", "- back to the [table of contents](../welcome.ipynb)" ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { diff --git a/notebooks/tutorials/ingest_gridded_data_on_flowlines.ipynb b/notebooks/tutorials/ingest_gridded_data_on_flowlines.ipynb index 6761ee4f..d968986d 100644 --- a/notebooks/tutorials/ingest_gridded_data_on_flowlines.ipynb +++ b/notebooks/tutorials/ingest_gridded_data_on_flowlines.ipynb @@ -92,7 +92,7 @@ "from_prepro_level = 3\n", "# URL of the preprocessed gdirs\n", "# we use elevation bands flowlines here\n", - "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/elev_bands/W5E5'\n", + "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/elev_bands/W5E5/per_glacier'\n", "gdirs = workflow.init_glacier_directories(rgi_ids,\n", " from_prepro_level=from_prepro_level,\n", " prepro_base_url=base_url,\n", @@ -118,7 +118,8 @@ }, "outputs": [], "source": [ - "graphics.plot_googlemap(gdir, figsize=(8, 7))" + "graphics.plot_googlemap(gdir, figsize=(8, 7))\n", + "# You have to manually add an API KEY. If you run it on jupyter hub or binder, we do that for you." 
] }, { @@ -389,9 +390,16 @@ "source": [ "# attention downloads data!!!\n", "from oggm.shop import millan22\n", - "workflow.execute_entity_task(millan22.velocity_to_gdir, gdirs);" + "workflow.execute_entity_task(millan22.millan_velocity_to_gdir, gdirs);" ] }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, { "cell_type": "markdown", "metadata": {}, @@ -913,6 +921,13 @@ "- return to the [OGGM documentation](https://docs.oggm.org)\n", "- back to the [table of contents](../welcome.ipynb)" ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { diff --git a/notebooks/tutorials/inversion.ipynb b/notebooks/tutorials/inversion.ipynb index 0140efbf..84ab1e44 100644 --- a/notebooks/tutorials/inversion.ipynb +++ b/notebooks/tutorials/inversion.ipynb @@ -82,7 +82,7 @@ "# (we specifically need `geometries.pkl` in the gdirs)\n", "cfg.PARAMS['border'] = 80\n", "base_url = ('https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/'\n", - " 'L3-L5_files/2023.3/centerlines/W5E5/')\n", + " 'L3-L5_files/2025.6/centerlines/W5E5/per_glacier_spinup') # todo<-- here is an issue with preprocessed gdir...todo\n", "gdirs = workflow.init_glacier_directories(rgidf, from_prepro_level=3,\n", " prepro_base_url=base_url)\n", "\n", @@ -307,7 +307,7 @@ "metadata": {}, "outputs": [], "source": [ - "dftot.plot();\n", + "dftot.plot()\n", "plt.xlabel('Factor of Glen A (default 1)'); plt.ylabel('Regional volume (km$^3$)');" ] }, @@ -435,11 +435,11 @@ "metadata": {}, "outputs": [], "source": [ - "# save the distributed ice thickness into a geotiff file\n", + "# save the distributed ice thickness into a geotiff file\n", "workflow.execute_entity_task(tasks.gridded_data_var_to_geotiff, gdirs, varname='distributed_thickness')\n", "\n", - "# The default path of the geotiff file is in the glacier directory with the name \"distributed_thickness.tif\"\n", - "# Let's check if the 
file exists\n", + "# The default path of the geotiff file is in the glacier directory with the name \"distributed_thickness.tif\"\n", + "# Let's check if the file exists\n", "for gdir in gdirs:\n", " path = os.path.join(gdir.dir, 'distributed_thickness.tif')\n", " assert os.path.exists(path)" @@ -485,7 +485,8 @@ "rgi_ids = ['RGI60-11.0{}'.format(i) for i in range(3205, 3211)]\n", "sel_gdirs = [gdir for gdir in gdirs if gdir.rgi_id in rgi_ids]\n", "graphics.plot_googlemap(sel_gdirs)\n", - "# you might need to install motionless if it is not yet in your environment" + "# you might need to install motionless if it is not yet in your environment\n", + "# You have to manually add an API KEY. If you run it on jupyter hub or binder, we do that for you." ] }, { @@ -615,6 +616,13 @@ "- return to the [OGGM documentation](https://docs.oggm.org)\n", "- back to the [table of contents](../welcome.ipynb)" ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { diff --git a/notebooks/tutorials/kcalving_parameterization.ipynb b/notebooks/tutorials/kcalving_parameterization.ipynb index 9a0ed23f..53c2d88b 100644 --- a/notebooks/tutorials/kcalving_parameterization.ipynb +++ b/notebooks/tutorials/kcalving_parameterization.ipynb @@ -13,7 +13,7 @@ "source": [ "

\n", " \n", - " The calving parameterization in OGGM has been developped for versions of OGGM before 1.6 (mostly: 1.5.3). The v1.6 series brought several changes in the mass balance calibration which made a lot of the calibration code obsolete and in need of updates.As of today (August 2024), calving dynamics are implemented in the dynamical model and they \"work\", as illustrated below. However, there is still quite some work to make calving (including calibration) fully operational for large scale runs.
See this github issue for a longer discussion.\n", +    "        The calving parameterization in OGGM has been developed for versions of OGGM before 1.6 (mostly: 1.5.3). The v1.6 series brought several changes in the mass balance calibration which made a lot of the calibration code obsolete and in need of updates. As of today (August 2024), calving dynamics are implemented in the dynamical model and they \"work\", as illustrated below. However, there is still quite some work to make calving (including calibration) fully operational for large scale runs.

See this github issue for a longer discussion.\n", "
" ] }, @@ -136,7 +136,7 @@ " keys.append(key)\n", " \n", " # Plot of volume\n", - " (ds.volume_m3 * 1e-9).plot(label=key);\n", + " (ds.volume_m3 * 1e-9).plot(label=key)\n", "plt.legend(); plt.ylabel('Volume [km$^{3}$]');\n", "to_plot.index = xc" ] @@ -148,7 +148,7 @@ "outputs": [], "source": [ "f, ax = plt.subplots(1, 1, figsize=(12, 5))\n", - "to_plot[keys].plot(ax=ax);\n", + "to_plot[keys].plot(ax=ax)\n", "to_plot.bed_h.plot(ax=ax, color='k')\n", "plt.hlines(0, *xc[[0, -1]], color='C0', linestyles=':')\n", "plt.ylim(-350, 1000); plt.ylabel('Altitude [m]'); plt.xlabel('Distance along flowline [km]');" @@ -203,7 +203,7 @@ " keys.append(key)\n", " \n", " # Plot of volume\n", - " (ds.volume_m3 * 1e-9).plot(label=key);\n", + " (ds.volume_m3 * 1e-9).plot(label=key)\n", "plt.legend(); plt.ylabel('Volume [km$^{3}$]');\n", "to_plot.index = xc" ] @@ -215,7 +215,7 @@ "outputs": [], "source": [ "f, ax = plt.subplots(1, 1, figsize=(12, 5))\n", - "to_plot[keys].plot(ax=ax);\n", + "to_plot[keys].plot(ax=ax)\n", "to_plot.bed_h.plot(ax=ax, color='k')\n", "plt.hlines(0, *xc[[0, -1]], color='C0', linestyles=':')\n", "plt.ylim(-350, 1000); plt.ylabel('Altitude [m]'); plt.xlabel('Distance along flowline [km]');" @@ -315,10 +315,10 @@ "# The plot\n", "f, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(9, 9), sharex=True)\n", "ts = df['Forcing']\n", - "ts.plot(ax=ax1, color='C0');\n", + "ts.plot(ax=ax1, color='C0')\n", "ax1.set_ylabel(ts.name)\n", "ts = df['Length [m]']\n", - "ts.plot(ax=ax2, color='C1');\n", + "ts.plot(ax=ax2, color='C1')\n", "ax2.hlines(deep_val, deep_t0, deep_t1, color='black', linestyles=':')\n", "ax2.hlines(deep_val, deep_t2, 6000, color='black', linestyles=':')\n", "ax2.hlines(bump_val, bump_t0, bump_t1, color='grey', linestyles='--')\n", @@ -433,10 +433,10 @@ "# The plot\n", "f, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(9, 9), sharex=True)\n", "ts = df['Forcing']\n", - "ts.plot(ax=ax1, color='C0');\n", + "ts.plot(ax=ax1, color='C0')\n", 
"ax1.set_ylabel(ts.name)\n", "ts = df['Length [m]']\n", - "ts.plot(ax=ax2, color='C1');\n", + "ts.plot(ax=ax2, color='C1')\n", "ax2.hlines(deep_val, deep_t0, deep_t1, color='black', linestyles=':')\n", "ax2.hlines(deep_val, deep_t2, 6000, color='black', linestyles=':')\n", "ax2.hlines(bump_val, bump_t0, bump_t1, color='grey', linestyles='--')\n", @@ -447,7 +447,7 @@ "ts = df['Calving rate [m y$^{-1}$]'].rolling(11, center=True).max()\n", "ts.plot(ax=ax3, color='C3')\n", "ax3.vlines([deep_t0, deep_t1, deep_t2], ts.min(), ts.max(), color='black', linestyles=':')\n", - "ax3.vlines([bump_t0, bump_t1], ts.min(), ts.max(), color='grey', linestyles='--');\n", + "ax3.vlines([bump_t0, bump_t1], ts.min(), ts.max(), color='grey', linestyles='--')\n", "ax3.set_ylabel(ts.name); ax3.set_xlabel('Years'); ax3.set_ylim(0, 150);" ] }, @@ -457,7 +457,7 @@ "metadata": {}, "outputs": [], "source": [ - "ds.length_m.plot();\n", + "ds.length_m.plot()\n", "ds_new.length_m.plot();" ] }, @@ -468,8 +468,15 @@ "## What's next?\n", "\n", "- return to the [OGGM documentation](https://docs.oggm.org)\n", - "- back to the [table of contents](welcome.ipynb)" + "- back to the [table of contents](../welcome.ipynb)" ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { diff --git a/notebooks/tutorials/massbalance_calibration.ipynb b/notebooks/tutorials/massbalance_calibration.ipynb index 35731b60..42bd38e3 100644 --- a/notebooks/tutorials/massbalance_calibration.ipynb +++ b/notebooks/tutorials/massbalance_calibration.ipynb @@ -54,7 +54,7 @@ "source": [ "cfg.initialize(logging_level='WARNING')\n", "cfg.PATHS['working_dir'] = utils.gettempdir(dirname='OGGM-calib-mb', reset=True)\n", - "cfg.PARAMS['border'] = 80 # 10, todo: replace back to 10, once available" + "cfg.PARAMS['border'] = 80" ] }, { diff --git a/notebooks/tutorials/massbalance_global_params.ipynb b/notebooks/tutorials/massbalance_global_params.ipynb index 
6ae598e0..242ce7aa 100644 --- a/notebooks/tutorials/massbalance_global_params.ipynb +++ b/notebooks/tutorials/massbalance_global_params.ipynb @@ -162,16 +162,16 @@ "source": [ "f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))\n", "\n", - "df_params['melt_f'].plot.hist(bins=51, density=True, ax=ax1, alpha=0.5, label='Frequency');\n", - "df_params['melt_f'].plot.hist(bins=51, density=True, ax=ax1, weights=df_params['rgi_area_km2'], alpha=0.5, label='Area weighted');\n", - "ax1.set_title('Melt factor distribution (global)');\n", - "ax1.set_ylabel('Frequency (%)');\n", - "ax1.legend();\n", + "df_params['melt_f'].plot.hist(bins=51, density=True, ax=ax1, alpha=0.5, label='Frequency')\n", + "df_params['melt_f'].plot.hist(bins=51, density=True, ax=ax1, weights=df_params['rgi_area_km2'], alpha=0.5, label='Area weighted')\n", + "ax1.set_title('Melt factor distribution (global)')\n", + "ax1.set_ylabel('Frequency (%)')\n", + "ax1.legend()\n", "\n", - "df_params['melt_f'].plot.hist(bins=51, density=True, ax=ax2, alpha=0.5, label='Frequency');\n", - "df_params['melt_f'].plot.hist(bins=51, density=True, ax=ax2, weights=df_params['rgi_area_km2'], alpha=0.5, label='Area weighted');\n", + "df_params['melt_f'].plot.hist(bins=51, density=True, ax=ax2, alpha=0.5, label='Frequency')\n", + "df_params['melt_f'].plot.hist(bins=51, density=True, ax=ax2, weights=df_params['rgi_area_km2'], alpha=0.5, label='Area weighted')\n", "ax2.set_yscale('log')\n", - "ax2.set_title('Melt factor distribution (log scale)');\n", + "ax2.set_title('Melt factor distribution (log scale)')\n", "ax2.set_ylabel('Frequency (log scale)');" ] }, @@ -192,16 +192,16 @@ "source": [ "f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))\n", "\n", - "df_params['prcp_fac'].plot.hist(bins=51, density=True, ax=ax1, alpha=0.5, label='Frequency');\n", - "df_params['prcp_fac'].plot.hist(bins=51, density=True, ax=ax1, weights=df_params['rgi_area_km2'], alpha=0.5, label='Area weighted');\n", - "ax1.set_title('Precipitation 
factor distribution (global)');\n", - "ax1.set_ylabel('Frequency (%)');\n", - "ax1.legend();\n", + "df_params['prcp_fac'].plot.hist(bins=51, density=True, ax=ax1, alpha=0.5, label='Frequency')\n", + "df_params['prcp_fac'].plot.hist(bins=51, density=True, ax=ax1, weights=df_params['rgi_area_km2'], alpha=0.5, label='Area weighted')\n", + "ax1.set_title('Precipitation factor distribution (global)')\n", + "ax1.set_ylabel('Frequency (%)')\n", + "ax1.legend()\n", "\n", - "df_params['prcp_fac'].plot.hist(bins=51, density=True, ax=ax2, alpha=0.5, label='Frequency');\n", - "df_params['prcp_fac'].plot.hist(bins=51, density=True, ax=ax2, weights=df_params['rgi_area_km2'], alpha=0.5, label='Area weighted');\n", + "df_params['prcp_fac'].plot.hist(bins=51, density=True, ax=ax2, alpha=0.5, label='Frequency')\n", + "df_params['prcp_fac'].plot.hist(bins=51, density=True, ax=ax2, weights=df_params['rgi_area_km2'], alpha=0.5, label='Area weighted')\n", "ax2.set_yscale('log')\n", - "ax2.set_title('Precipitation factor distribution (log scale)');\n", + "ax2.set_title('Precipitation factor distribution (log scale)')\n", "ax2.set_ylabel('Frequency (log scale)');" ] }, @@ -233,16 +233,16 @@ "source": [ "f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))\n", "\n", - "df_params['temp_bias'].plot.hist(bins=51, density=True, ax=ax1, alpha=0.5, label='Frequency');\n", - "df_params['temp_bias'].plot.hist(bins=51, density=True, ax=ax1, weights=df_params['rgi_area_km2'], alpha=0.5, label='Area weighted');\n", - "ax1.set_title('Temperature bias distribution (global)');\n", - "ax1.set_ylabel('Frequency (%)');\n", - "ax1.legend();\n", + "df_params['temp_bias'].plot.hist(bins=51, density=True, ax=ax1, alpha=0.5, label='Frequency')\n", + "df_params['temp_bias'].plot.hist(bins=51, density=True, ax=ax1, weights=df_params['rgi_area_km2'], alpha=0.5, label='Area weighted')\n", + "ax1.set_title('Temperature bias distribution (global)')\n", + "ax1.set_ylabel('Frequency (%)')\n", + "ax1.legend()\n", "\n", - 
"df_params['temp_bias'].plot.hist(bins=51, density=True, ax=ax2, alpha=0.5, label='Frequency');\n", - "df_params['temp_bias'].plot.hist(bins=51, density=True, ax=ax2, weights=df_params['rgi_area_km2'], alpha=0.5, label='Area weighted');\n", + "df_params['temp_bias'].plot.hist(bins=51, density=True, ax=ax2, alpha=0.5, label='Frequency')\n", + "df_params['temp_bias'].plot.hist(bins=51, density=True, ax=ax2, weights=df_params['rgi_area_km2'], alpha=0.5, label='Area weighted')\n", "ax2.set_yscale('log')\n", - "ax2.set_title('Temperature bias distribution (log scale)');\n", + "ax2.set_title('Temperature bias distribution (log scale)')\n", "ax2.set_ylabel('Frequency (log scale)');" ] }, @@ -351,16 +351,16 @@ "source": [ "f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))\n", "bins = np.linspace(0.1, 18, 51)\n", - "df_params['melt_f'].plot.hist(bins=bins, density=True, ax=ax1, alpha=0.5, label='Static');\n", - "df_params['melt_f_dyna'].plot.hist(bins=bins, density=True, ax=ax1, alpha=0.5, label='Dynamic');\n", - "ax1.set_title('Melt factor distribution (global)');\n", - "ax1.set_ylabel('Frequency (%)');\n", - "ax1.legend();\n", + "df_params['melt_f'].plot.hist(bins=bins, density=True, ax=ax1, alpha=0.5, label='Static')\n", + "df_params['melt_f_dyna'].plot.hist(bins=bins, density=True, ax=ax1, alpha=0.5, label='Dynamic')\n", + "ax1.set_title('Melt factor distribution (global)')\n", + "ax1.set_ylabel('Frequency (%)')\n", + "ax1.legend()\n", "\n", - "df_params['melt_f'].plot.hist(bins=bins, density=True, ax=ax2, alpha=0.5, label='Static');\n", - "df_params['melt_f_dyna'].plot.hist(bins=bins, density=True, ax=ax2, alpha=0.5, label='Dynamic');\n", + "df_params['melt_f'].plot.hist(bins=bins, density=True, ax=ax2, alpha=0.5, label='Static')\n", + "df_params['melt_f_dyna'].plot.hist(bins=bins, density=True, ax=ax2, alpha=0.5, label='Dynamic')\n", "ax2.set_yscale('log')\n", - "ax2.set_title('Melt factor distribution (log scale)');\n", + "ax2.set_title('Melt factor distribution 
(log scale)')\n", "ax2.set_ylabel('Frequency (log scale)');" ] }, @@ -382,14 +382,14 @@ "diff = df_params['melt_f_dyna'] - df_params['melt_f']\n", "f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))\n", "bins = np.linspace(-5, 5, 51)\n", - "diff.plot.hist(bins=bins, density=True, ax=ax1, alpha=0.5, label='Static');\n", - "ax1.set_title('Melt factor change after spinup (global)');\n", - "ax1.set_ylabel('Frequency (%)');\n", - "ax1.legend();\n", + "diff.plot.hist(bins=bins, density=True, ax=ax1, alpha=0.5, label='Static')\n", + "ax1.set_title('Melt factor change after spinup (global)')\n", + "ax1.set_ylabel('Frequency (%)')\n", + "ax1.legend()\n", "\n", - "diff.plot.hist(bins=bins, density=True, ax=ax2, alpha=0.5, label='Static');\n", + "diff.plot.hist(bins=bins, density=True, ax=ax2, alpha=0.5, label='Static')\n", "ax2.set_yscale('log')\n", - "ax2.set_title('Melt factor change after spinup (log scale)');\n", + "ax2.set_title('Melt factor change after spinup (log scale)')\n", "ax2.set_ylabel('Frequency (log scale)');" ] }, diff --git a/notebooks/tutorials/massbalance_perturbation.ipynb b/notebooks/tutorials/massbalance_perturbation.ipynb index b39e0720..f71a7154 100644 --- a/notebooks/tutorials/massbalance_perturbation.ipynb +++ b/notebooks/tutorials/massbalance_perturbation.ipynb @@ -257,8 +257,8 @@ "metadata": {}, "outputs": [], "source": [ - "ds_default.volume_m3.plot(label='Default');\n", - "ds_perturbed.volume_m3.plot(label='Perturbed');\n", + "ds_default.volume_m3.plot(label='Default')\n", + "ds_perturbed.volume_m3.plot(label='Perturbed')\n", "plt.legend();" ] }, diff --git a/notebooks/tutorials/merge_gcm_runs_and_visualize.ipynb b/notebooks/tutorials/merge_gcm_runs_and_visualize.ipynb index 48a03fa2..2d1e98c8 100644 --- a/notebooks/tutorials/merge_gcm_runs_and_visualize.ipynb +++ b/notebooks/tutorials/merge_gcm_runs_and_visualize.ipynb @@ -112,7 +112,7 @@ "id": "8", "metadata": {}, "source": [ - "In this notebook, we will use the bias-corrected ISIMIP3b GCM 
files. You can also use directly CMIP5 or CMIP6 (how to do that is explained in [run_with_gcm](../10minutes/run_with_gcm.ipynb))." + "In this notebook, we will use the primary bias-corrected ISIMIP3b GCM files. You can also use directly CMIP5 or CMIP6 (how to do that is explained in [run_with_gcm](../10minutes/run_with_gcm.ipynb))." ] }, { @@ -628,8 +628,8 @@ " else:\n", " # for the second plot, we only want to have the legend for mean and std\n", " ax.legend(handles[::3], labels[::3], loc='lower left') \n", - " ax.set_xlabel('Year');\n", - " ax.grid()" + " ax.set_xlabel('Year')\n", + " ax.grid();" ] }, { diff --git a/notebooks/tutorials/numeric_solvers.ipynb b/notebooks/tutorials/numeric_solvers.ipynb index b934ccf3..efa35e3a 100644 --- a/notebooks/tutorials/numeric_solvers.ipynb +++ b/notebooks/tutorials/numeric_solvers.ipynb @@ -32,6 +32,16 @@ "from oggm.core.flowline import FluxBasedModel, SemiImplicitModel" ] }, + { + "cell_type": "code", + "execution_count": null, + "id": "e7b417acffaed499", + "metadata": {}, + "outputs": [], + "source": [ + "workflow.init_glacier_directories" + ] + }, { "cell_type": "code", "execution_count": null, @@ -46,15 +56,17 @@ "\n", "# Define our test glacier (Baltoro)\n", "rgi_ids = ['RGI60-14.06794']\n", + "# change border around the individual glaciers\n", + "cfg.PARAMS['border'] = 80\n", "\n", "# load elevation band representation\n", "cfg.PATHS['working_dir'] = utils.gettempdir('OGGM_dynamic_solvers_elevation_bands', reset=True)\n", - "base_url_eb = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/elev_bands/W5E5/'\n", + "base_url_eb = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/elev_bands/W5E5/per_glacier_spinup'\n", "gdir_eb = workflow.init_glacier_directories(rgi_ids, from_prepro_level=3, prepro_base_url=base_url_eb)[0]\n", "\n", "# load centerline representation\n", "cfg.PATHS['working_dir'] = utils.gettempdir('OGGM_dynamic_solvers_centerlines', reset=True)\n", - 
"base_url_cl = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/centerlines/W5E5/'\n", + "base_url_cl = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/centerlines/W5E5/per_glacier_spinup'\n", "gdir_cl = workflow.init_glacier_directories(rgi_ids, from_prepro_level=3, prepro_base_url=base_url_cl)[0]" ] }, diff --git a/notebooks/tutorials/observed_thickness_with_dynamic_spinup.ipynb b/notebooks/tutorials/observed_thickness_with_dynamic_spinup.ipynb index e5c1b129..83af8a30 100644 --- a/notebooks/tutorials/observed_thickness_with_dynamic_spinup.ipynb +++ b/notebooks/tutorials/observed_thickness_with_dynamic_spinup.ipynb @@ -13,7 +13,7 @@ "id": "1", "metadata": {}, "source": [ - "This notebook demonstrates the translation of thickness observations into bed topography in 'flowline-space' and a dynamic model initialization for projections. Guidelines for adding thickness observations to your glacier directory are provided in the tutorials [Ingest gridded products such as ice velocity into OGGM](../advanced/ingest_gridded_data_on_flowlines.ipynb) and [OGGM-Shop and Glacier Directories in OGGM](../beginner/oggm_shop.ipynb)." + "This notebook demonstrates the translation of thickness observations into bed topography in 'flowline-space' and a dynamic model initialization for projections. Guidelines for adding thickness observations to your glacier directory are provided in the tutorials [Ingest gridded products such as ice velocity into OGGM](../tutorials/ingest_gridded_data_on_flowlines.ipynb) and [OGGM-Shop and Glacier Directories in OGGM](../tutorials/oggm_shop.ipynb)." ] }, { @@ -49,7 +49,7 @@ "id": "4", "metadata": {}, "source": [ - "To convert thickness observations into bed topography, the initial step involves binning the data into elevation bands, as detailed in the tutorial [Ingest gridded products such as ice velocity into OGGM](../advanced/ingest_gridded_data_on_flowlines.ipynb). 
Fortunately, preprocessed directories are available, encompassing all data supported by the OGGM-Shop, already binned to elevation bands. This enables easy initialization of a dynamic flowline using `tasks.init_present_time_glacier`. Specify the data to be used with `use_binned_thickness_data`. Here, dynamic flowlines are defined using consensus thickness ([Farinotti et al. 2019](https://www.nature.com/articles/s41561-019-0300-3)) and Millan thickness data ([Millan et al. 2022](https://www.nature.com/articles/s41561-021-00885-z))." + "To convert thickness observations into bed topography, the initial step involves binning the data into elevation bands, as detailed in the tutorial [Ingest gridded products such as ice velocity into OGGM](../tutorials/ingest_gridded_data_on_flowlines.ipynb). Fortunately, preprocessed directories are available, encompassing all data supported by the OGGM-Shop, already binned to elevation bands. This enables easy initialization of a dynamic flowline using `tasks.init_present_time_glacier`. Specify the data to be used with `use_binned_thickness_data`. Here, dynamic flowlines are defined using consensus thickness ([Farinotti et al. 2019](https://www.nature.com/articles/s41561-019-0300-3)) and Millan thickness data ([Millan et al. 2022](https://www.nature.com/articles/s41561-021-00885-z))." ] }, { @@ -67,8 +67,10 @@ "rgi_ids = ['RGI60-11.00897']\n", "\n", "# preprocessed directories including shop data and the default oggm dynamic initialisation\n", - "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/elev_bands/W5E5_spinup_w_data/'\n", - "\n", + "## this one would be necessary\n", + "# https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/elev_bands_w_data/W5E5/per_glacier_spinup/\n", + "# tried instead with that one -->TODO: double-check w. 
Patrick!!\n", + "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/elev_bands_w_data/W5E5/per_glacier/'\n", "gdirs = workflow.init_glacier_directories(\n", " rgi_ids, # which glaciers?\n", " prepro_base_url=base_url, # where to fetch the data?\n", @@ -160,9 +162,43 @@ "id": "10", "metadata": {}, "source": [ - "Now, we delve into the process of initializing the model using OGGM's dynamic calibration methodology, elaborated in the tutorial [Dynamic spinup and dynamic *melt_f* calibration for past simulations](../advanced/dynamical_spinup.ipynb). In summary, the approach involves dynamically aligning the RGI area by identifying a suitable glacier state in the past and dynamically adjusting the melt factor of the mass balance to match observed geodetic mass balance. With the glacier bed defined by observations, there is an additional capability to dynamically match for the volume (further details below).\n", + "Now, we delve into the process of initializing the model using OGGM's dynamic calibration methodology, elaborated in the tutorial [Dynamic spinup and dynamic *melt_f* calibration for past simulations](../tutorials/dynamical_spinup.ipynb). In summary, the approach involves dynamically aligning the RGI area by identifying a suitable glacier state in the past and dynamically adjusting the melt factor of the mass balance to match observed geodetic mass balance. With the glacier bed defined by observations, there is an additional capability to dynamically match for the volume (further details below).\n", "\n", - "To commence, use the dynamic calibration function `run_dynamic_melt_f_calibration` in conjunction with the newly created flowlines, which can be passed to `init_model_fls`. Refer to the designated tutorial [Dynamic spinup and dynamic *melt_f* calibration for past simulations](../advanced/dynamical_spinup.ipynb) for more details on the other parameters." 
+ "To commence, use the dynamic calibration function `run_dynamic_melt_f_calibration` in conjunction with the newly created flowlines, which can be passed to `init_model_fls`. Refer to the designated tutorial [Dynamic spinup and dynamic *melt_f* calibration for past simulations](../tutorials/dynamical_spinup.ipynb) for more details on the other parameters." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "388d418a3d5ad023", + "metadata": {}, + "outputs": [], + "source": [ + "tasks.run_dynamic_melt_f_calibration" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b50794ed54c905de", + "metadata": {}, + "outputs": [], + "source": [ + "### run the \"default\" one, as our gdir is without _spinup\n", + "# todo --> double-check w. Patrick\n", + "workflow.execute_entity_task(\n", + " tasks.run_dynamic_melt_f_calibration, gdirs,\n", + " init_model_fls=fls_oggm,\n", + " #err_dmdtda_scaling_factor=err_dmdtda_scaling_factor, <- we use here the default one? todo --> double-check w. 
Patrick\n", + " ys=1979, ye=2020,\n", + " melt_f_max=cfg.PARAMS['melt_f_max'],\n", + " kwargs_run_function={'minimise_for': 'area',\n", + " 'do_inversion': False},\n", + " ignore_errors=True,\n", + " kwargs_fallback_function={'minimise_for': 'area',\n", + " 'do_inversion': False},\n", + " output_filesuffix='_spinup_historical',\n", + ")" ] }, { @@ -429,10 +465,18 @@ "source": [ "## What's next?\n", "\n", - "- Look at the more comprehensive tutorial [Dynamic spinup and dynamic melt_f calibration for past simulations](../advanced/dynamical_spinup.ipynb)\n", + "- Look at the more comprehensive tutorial [Dynamic spinup and dynamic melt_f calibration for past simulations](dynamical_spinup.ipynb)\n", "- return to the [OGGM documentation](https://docs.oggm.org)\n", "- back to the [table of contents](../welcome.ipynb)" ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "df533387f44b135e", + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { diff --git a/notebooks/tutorials/oggm_shop.ipynb b/notebooks/tutorials/oggm_shop.ipynb index cf5b1b88..abc0e090 100644 --- a/notebooks/tutorials/oggm_shop.ipynb +++ b/notebooks/tutorials/oggm_shop.ipynb @@ -502,16 +502,16 @@ "source": [ "# this will download several large datasets (2 times a few 100s of MB)\n", "from oggm.shop import its_live, rgitopo\n", - "workflow.execute_entity_task(rgitopo.select_dem_from_dir, gdirs, dem_source='COPDEM90', keep_dem_folders=True);\n", - "workflow.execute_entity_task(tasks.glacier_masks, gdirs);\n", - "workflow.execute_entity_task(its_live.velocity_to_gdir, gdirs);" + "workflow.execute_entity_task(rgitopo.select_dem_from_dir, gdirs, dem_source='COPDEM90', keep_dem_folders=True)\n", + "workflow.execute_entity_task(tasks.glacier_masks, gdirs)\n", + "workflow.execute_entity_task(its_live.itslive_velocity_to_gdir, gdirs);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "By applying the entity task 
[its_live.velocity_to_gdir()](https://github.com/OGGM/oggm/blob/master/oggm/shop/its_live.py#L185) the model downloads and reprojects the ITS_live files to a given glacier map. \n", + "By applying the entity task [its_live.itslive_velocity_to_gdir()](https://github.com/OGGM/oggm/blob/89dc7ecc43c3ff4e80166ef9dc2fd3183a4afa5f/oggm/shop/its_live.py#L205) the model downloads and reprojects the ITS_live files to a given glacier map.\n", "\n", "The velocity components (**vx**, **vy**) are added to the `gridded_data` nc file stored on each glacier directory.\n", "\n", @@ -653,7 +653,7 @@ "smap.set_data(ds.consensus_ice_thickness)\n", "smap.set_cmap('Blues')\n", "smap.plot(ax=ax)\n", - "smap.append_colorbar(ax=ax, label='ice thickness (m)');\n", + "smap.append_colorbar(ax=ax, label='ice thickness (m)')\n", "ax.set_title('Farinotti 19 thickness');" ] }, @@ -679,14 +679,14 @@ "source": [ "# this will download several large datasets (3 times a few 100s of MB)\n", "from oggm.shop import millan22\n", - "workflow.execute_entity_task(millan22.velocity_to_gdir, gdirs);" + "workflow.execute_entity_task(millan22.millan_velocity_to_gdir, gdirs);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "By applying the entity task `millan22.add_millan_velocity` the model downloads and reprojects the files to a given glacier map. \n", + "By applying the entity task `millan22.millan_velocity_to_gdir` the model downloads and reprojects the files to a given glacier map.\n", "\n", "The velocity components (**vx**, **vy**) are added to the `gridded_data` nc file stored on each glacier directory. Similar to ITS_LIVE, we make sure to reproject the vectors properly. However, this dataset also provides a velocity map which is gap filled and therefore not strictly equivalent to the `vx` and `vy` vectors. However, we still try to match the original velocity where possible. 
" ] @@ -774,7 +774,7 @@ "metadata": {}, "outputs": [], "source": [ - "workflow.execute_entity_task(millan22.thickness_to_gdir, gdirs);" + "workflow.execute_entity_task(millan22.millan_thickness_to_gdir, gdirs);" ] }, { @@ -801,7 +801,7 @@ "smap.set_data(millan_ice_thickness)\n", "smap.set_cmap('Blues')\n", "smap.plot(ax=ax)\n", - "smap.append_colorbar(ax=ax, label='ice thickness (m)');\n", + "smap.append_colorbar(ax=ax, label='ice thickness (m)')\n", "ax.set_title('Millan 22 thickness');" ] }, diff --git a/notebooks/tutorials/plot_mass_balance.ipynb b/notebooks/tutorials/plot_mass_balance.ipynb index 1b895869..904c3a1c 100644 --- a/notebooks/tutorials/plot_mass_balance.ipynb +++ b/notebooks/tutorials/plot_mass_balance.ipynb @@ -66,10 +66,9 @@ "# (The second glacier here we will only use in the second part of this notebook.)\n", "# in OGGM v1.6 you have to explicitly indicate the url from where you want to start from\n", "# we will use here the centerlines to actually look at different flowlines\n", - "base_url = ('https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/'\n", - " 'L3-L5_files/2023.1/centerlines/W5E5/')\n", + "base_url = ('https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/centerlines/W5E5/per_glacier_spinup')\n", "gdirs = workflow.init_glacier_directories(['RGI60-11.00897', 'RGI60-11.00001'], from_prepro_level=4,\n", - " prepro_base_url=DEFAULT_BASE_URL)\n", + " prepro_base_url=base_url)\n", "# when using centerlines, the new default `SemiImplicit` scheme does not work at the moment,\n", "# we have to use the `FluxBased` scheme instead:\n", "cfg.PARAMS['evolution_model'] = 'FluxBased'\n", @@ -119,8 +118,8 @@ "years = np.arange(1950, 2019)\n", "mb_ts = mbmod.get_specific_mb(fls=fls, year=years)\n", "\n", - "plt.plot(years, mb_ts);\n", - "plt.ylabel('Specific MB (mm w.e.)');\n", + "plt.plot(years, mb_ts)\n", + "plt.ylabel('Specific MB (mm w.e.)')\n", "plt.xlabel('Year');" ] }, @@ -142,9 +141,9 @@ "# Get OGGM MB at the same 
dates\n", "ref_df['OGGM'] = mbmod.get_specific_mb(fls=fls, year=ref_df.index.values)\n", "# Plot\n", - "plt.plot(ref_df.index, ref_df.OGGM, label='OGGM');\n", - "plt.plot(ref_df.index, ref_df.ANNUAL_BALANCE, label='WGMS');\n", - "plt.ylabel('Specific MB (mm w.e.)'); plt.legend();\n", + "plt.plot(ref_df.index, ref_df.OGGM, label='OGGM')\n", + "plt.plot(ref_df.index, ref_df.ANNUAL_BALANCE, label='WGMS')\n", + "plt.ylabel('Specific MB (mm w.e.)'); plt.legend()\n", "plt.xlabel('Year');" ] }, @@ -230,7 +229,7 @@ "mb2 = mbmod.get_annual_mb(heights, year=2001, fl_id=0) * cfg.SEC_IN_YEAR * cfg.PARAMS['ice_density'] \n", "np.testing.assert_allclose(mb, mb2)\n", "# Plot\n", - "plt.plot(mb, heights, label='2001');\n", + "plt.plot(mb, heights, label='2001')\n", "plt.ylabel('Elevation (m a.s.l.)'); plt.xlabel('MB (mm w.e. yr$^{-1}$)'); plt.legend();" ] }, @@ -413,10 +412,10 @@ "# Specific MB over time (i.e. with changing geometry feedbacks)\n", "smb = (ds_diag.volume_m3.values[1:] - ds_diag.volume_m3.values[:-1]) / ds_diag.area_m2.values[1:]\n", "smb = smb * cfg.PARAMS['ice_density'] # in mm\n", - "plt.plot(ds_diag.time[:-1], smb, label='OGGM (dynamics)'); \n", + "plt.plot(ds_diag.time[:-1], smb, label='OGGM (dynamics)')\n", "# The SMB from WGMS and fixed geometry we already have\n", - "plt.plot(ref_df.loc[2004:].index, ref_df.loc[2004:].OGGM, label='OGGM (fixed geom)');\n", - "plt.plot(ref_df.loc[2004:].index, ref_df.loc[2004:].ANNUAL_BALANCE, label='WGMS');\n", + "plt.plot(ref_df.loc[2004:].index, ref_df.loc[2004:].OGGM, label='OGGM (fixed geom)')\n", + "plt.plot(ref_df.loc[2004:].index, ref_df.loc[2004:].ANNUAL_BALANCE, label='WGMS')\n", "plt.legend();" ] }, @@ -480,7 +479,7 @@ " smb = np.append(smb, mb * cfg.SEC_IN_YEAR * cfg.PARAMS['ice_density'] )\n", " plt.plot(smb, h, '.', label=year)\n", "\n", - "plt.legend(title='year');\n", + "plt.legend(title='year')\n", "plt.ylabel('Elevation (m a.s.l.)'); plt.xlabel('MB (mm w.e. 
yr$^{-1}$)');" ] }, @@ -546,7 +545,7 @@ "ela_df = pd.read_hdf(os.path.join(cfg.PATHS['working_dir'], 'ELA.hdf'))\n", "\n", "# Plot it\n", - "ela_df.plot();\n", + "ela_df.plot()\n", "plt.xlabel('year'); plt.ylabel('ELA [m]');" ] }, @@ -569,7 +568,7 @@ "avg['weighted average'] = np.average(ela_df, axis=1, weights=areas)\n", "avg['median'] = np.median(ela_df, axis=1)\n", "\n", - "avg.plot();\n", + "avg.plot()\n", "plt.xlabel('year'); plt.ylabel('ELA [m]');" ] }, @@ -669,7 +668,7 @@ "metadata": {}, "outputs": [], "source": [ - "ax = ela_df_long.plot();\n", + "ax = ela_df_long.plot()\n", "ela_df_long.rolling(5).mean().plot(ax=ax, lw=2, color='k', label='')\n", "plt.xlabel('year CE'); plt.ylabel('ELA [m]'); ax.legend([\"Annual\", \"5-yr average\"]);" ] @@ -714,7 +713,7 @@ "metadata": {}, "outputs": [], "source": [ - "ela_yrs.sort_index(ascending=True).plot();\n", + "ela_yrs.sort_index(ascending=True).plot()\n", "plt.xlabel('year CE'); plt.ylabel('ELA [m]');" ] }, diff --git a/notebooks/tutorials/preprocessing_errors.ipynb b/notebooks/tutorials/preprocessing_errors.ipynb index c96687a0..5375b8ab 100644 --- a/notebooks/tutorials/preprocessing_errors.ipynb +++ b/notebooks/tutorials/preprocessing_errors.ipynb @@ -53,7 +53,7 @@ "outputs": [], "source": [ "# W5E5 centerlines\n", - "url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/centerlines/W5E5/RGI62/b_080/L5/summary/'" + "url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/centerlines/W5E5/per_glacier_spinup/RGI62/b_080/L5/summary/' #todo-wait-until-gdir-ready" ] }, { diff --git a/notebooks/tutorials/rgitopo_rgi6.ipynb b/notebooks/tutorials/rgitopo_rgi6.ipynb index 92984436..121e3242 100644 --- a/notebooks/tutorials/rgitopo_rgi6.ipynb +++ b/notebooks/tutorials/rgitopo_rgi6.ipynb @@ -568,16 +568,16 @@ " ax.set_title(s1 + '-' + s2, fontsize=8)\n", " \n", "cax = grid.cbar_axes[0]\n", - "smap.colorbarbase(cax);\n", + "smap.colorbarbase(cax)\n", "\n", - 
"plt.savefig(os.path.join(plot_dir, 'dem_diffs.png'), dpi=150, bbox_inches='tight')" + "plt.savefig(os.path.join(plot_dir, 'dem_diffs.png'), dpi=150, bbox_inches='tight');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "## Comparison scatter plot " + "## Comparison scatter plot" ] }, { @@ -603,17 +603,17 @@ "\n", "def plot_unity():\n", " points = np.linspace(l1, l2, 100)\n", - " plt.gca().plot(points, points, color='k', marker=None,\n", - " linestyle=':', linewidth=3.0)\n", + " plt.gca().plot(points, points, marker=None,\n", + " linestyle=':', linewidth=3.0, color='k')\n", "\n", - "g = sns.pairplot(df.dropna(how='all', axis=1).dropna(), plot_kws=dict(s=50, edgecolor=\"C0\", linewidth=1));\n", + "g = sns.pairplot(df.dropna(how='all', axis=1).dropna(), plot_kws=dict(s=50, edgecolor=\"C0\", linewidth=1))\n", "g.map_offdiag(plot_unity)\n", "for asx in g.axes:\n", " for ax in asx:\n", " ax.set_xlim((l1, l2))\n", " ax.set_ylim((l1, l2))\n", "\n", - "plt.savefig(os.path.join(plot_dir, 'dem_scatter.png'), dpi=150, bbox_inches='tight')" + "plt.savefig(os.path.join(plot_dir, 'dem_scatter.png'), dpi=150, bbox_inches='tight');" ] }, { diff --git a/notebooks/tutorials/rgitopo_rgi7.ipynb b/notebooks/tutorials/rgitopo_rgi7.ipynb index 9204dbc4..7682f82c 100644 --- a/notebooks/tutorials/rgitopo_rgi7.ipynb +++ b/notebooks/tutorials/rgitopo_rgi7.ipynb @@ -188,9 +188,9 @@ "outputs": [], "source": [ "# URL of the preprocessed GDirs\n", - "gdir_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/rgitopo/2023.1/'\n", + "gdir_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/rgitopo/2025.4'\n", "# We use OGGM to download the data\n", - "gdir = init_glacier_directories([rgi_id], from_prepro_level=1, prepro_border=10, prepro_rgi_version='70', prepro_base_url=gdir_url)[0]" + "gdir = init_glacier_directories([rgi_id], from_prepro_level=1, prepro_border=10, prepro_rgi_version='70G', prepro_base_url=gdir_url)[0]" ] }, { @@ -434,7 +434,7 @@ " 
cbar_pad=0.1\n", "    )\n", "\n", - "smap.set_topography();\n", + "smap.set_topography()\n", "smap.set_plot_params(vmin=0, vmax=0.7, cmap='Blues')\n", "\n", "for i, s in enumerate(sources):\n", @@ -453,7 +453,7 @@ " grid[-2].remove()\n", " grid[-2].cax.remove()\n", "\n", - "plt.savefig(os.path.join(plot_dir, 'dem_slope.png'), dpi=150, bbox_inches='tight')" + "plt.savefig(os.path.join(plot_dir, 'dem_slope.png'), dpi=150, bbox_inches='tight');" ] }, { diff --git a/notebooks/tutorials/run_with_a_spinup_and_gcm_data.ipynb b/notebooks/tutorials/run_with_a_spinup_and_gcm_data.ipynb index 0b13a047..620b20df 100644 --- a/notebooks/tutorials/run_with_a_spinup_and_gcm_data.ipynb +++ b/notebooks/tutorials/run_with_a_spinup_and_gcm_data.ipynb @@ -64,8 +64,7 @@ "# Go - initialize glacier directories\n", "# in OGGM v1.6 you have to explicitly indicate the url from where you want to start from\n", "# we will use here the elevation band flowlines which are much simpler than the centerlines\n", - "base_url = ('https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/'\n", " 'L3-L5_files/2023.3/elev_bands/W5E5/')\n", + "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/elev_bands/W5E5/per_glacier/'\n", "gdirs = workflow.init_glacier_directories(['RGI60-11.00897'], from_prepro_level=5,\n", " prepro_base_url=base_url)" ] }, @@ -132,7 +131,7 @@ "# Run the spinup simulation: a rather \"cold\" climate with a cold temperature bias\n", "execute_entity_task(tasks.run_constant_climate, gdirs, y0 = 1965,\n", " nyears=100, bias=0, \n", - " output_filesuffix='_spinup');\n", + " output_filesuffix='_spinup')\n", "# Run a past climate run based on this spinup\n", "execute_entity_task(tasks.run_from_climate_data, gdirs,\n", " climate_filename='gcm_data',\n", diff --git a/notebooks/tutorials/store_and_compress_glacierdirs.ipynb b/notebooks/tutorials/store_and_compress_glacierdirs.ipynb index 1096df39..20de0f8b 100644 ---
a/notebooks/tutorials/store_and_compress_glacierdirs.ipynb +++ b/notebooks/tutorials/store_and_compress_glacierdirs.ipynb @@ -74,8 +74,7 @@ "rgi_ids = utils.get_rgi_glacier_entities(['RGI60-11.00897', 'RGI60-11.00787'])\n", "\n", "# Go - get the pre-processed glacier directories\n", - "base_url = ('https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/'\n", - " 'L3-L5_files/2023.3/elev_bands/W5E5/')\n", + "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/elev_bands/W5E5/per_glacier/'\n", "gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=3, prepro_base_url=base_url)" ] }, @@ -259,7 +258,7 @@ }, "outputs": [], "source": [ - "workflow.execute_entity_task(utils.gdir_to_tar, gdirs, delete=False);\n", + "workflow.execute_entity_task(utils.gdir_to_tar, gdirs, delete=False)\n", "file_tree_print()" ] }, @@ -278,7 +277,7 @@ }, "outputs": [], "source": [ - "workflow.execute_entity_task(utils.gdir_to_tar, gdirs, delete=True);\n", + "workflow.execute_entity_task(utils.gdir_to_tar, gdirs, delete=True)\n", "file_tree_print()" ] }, @@ -346,7 +345,7 @@ "outputs": [], "source": [ "# Tar the individual ones first\n", - "workflow.execute_entity_task(utils.gdir_to_tar, gdirs, delete=True);\n", + "workflow.execute_entity_task(utils.gdir_to_tar, gdirs, delete=True)\n", "# Then tar the bundles\n", "utils.base_dir_to_tar(WORKING_DIR, delete=True)\n", "file_tree_print()" diff --git a/notebooks/tutorials/use_your_own_inventory.ipynb b/notebooks/tutorials/use_your_own_inventory.ipynb index 9fef23d0..c27f7aba 100644 --- a/notebooks/tutorials/use_your_own_inventory.ipynb +++ b/notebooks/tutorials/use_your_own_inventory.ipynb @@ -386,12 +386,12 @@ "metadata": {}, "outputs": [], "source": [ - "workflow.execute_entity_task(tasks.define_glacier_region, gdirs);\n", - "workflow.execute_entity_task(tasks.glacier_masks, gdirs);\n", - "workflow.execute_entity_task(tasks.compute_centerlines, gdirs);\n", - 
"workflow.execute_entity_task(tasks.initialize_flowlines, gdirs);\n", - "workflow.execute_entity_task(tasks.catchment_area, gdirs);\n", - "workflow.execute_entity_task(tasks.catchment_width_geom, gdirs);\n", + "workflow.execute_entity_task(tasks.define_glacier_region, gdirs)\n", + "workflow.execute_entity_task(tasks.glacier_masks, gdirs)\n", + "workflow.execute_entity_task(tasks.compute_centerlines, gdirs)\n", + "workflow.execute_entity_task(tasks.initialize_flowlines, gdirs)\n", + "workflow.execute_entity_task(tasks.catchment_area, gdirs)\n", + "workflow.execute_entity_task(tasks.catchment_width_geom, gdirs)\n", "workflow.execute_entity_task(tasks.catchment_width_correction, gdirs);" ] }, @@ -478,8 +478,8 @@ "source": [ "cfg.initialize(logging_level='WARNING')\n", "cfg.PATHS['working_dir'] = utils.gettempdir(dirname='rgi-case-2-example', reset=True)\n", - "cfg.PARAMS['border'] = 10\n", - "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/elev_bands/W5E5/'\n", + "cfg.PARAMS['border'] = 80 # previously 10, but this preprocessed gdir is not anymore available\n", + "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/elev_bands/W5E5/per_glacier/'\n", "gdirs = workflow.init_glacier_directories(['RGI60-01.03890', 'RGI60-01.23664'], from_prepro_level=3, prepro_base_url=base_url, reset=True, force=True)\n", "graphics.plot_googlemap(gdirs, figsize=(6, 6));" ] @@ -759,6 +759,7 @@ "gdirs = workflow.init_glacier_directories(rgidf_simple)\n", "\n", "# The tasks below require downloading new data - we comment them for the tutorial, but it should work for you!\n", + "# todo--> these tasks below are old, they need to be updated to oggm v1.6\n", "# workflow.gis_prepro_tasks(gdirs)\n", "# workflow.download_ref_tstars('https://cluster.klima.uni-bremen.de/~oggm/ref_mb_params/oggm_v1.4/RGIV62/CRU/centerlines/qc3/pcp2.5')\n", "# workflow.climate_tasks(gdirs)\n", diff --git 
a/notebooks/tutorials/where_are_the_flowlines.ipynb b/notebooks/tutorials/where_are_the_flowlines.ipynb index 7f4cc4d9..d0162038 100644 --- a/notebooks/tutorials/where_are_the_flowlines.ipynb +++ b/notebooks/tutorials/where_are_the_flowlines.ipynb @@ -65,7 +65,7 @@ "# Which glaciers?\n", "rgi_ids = ['RGI60-11.00897']\n", "# We start from prepro level 3 with all data ready\n", - "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/centerlines/W5E5/'\n", + "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2025.6/centerlines/W5E5/per_glacier_spinup/'\n", "gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=3, prepro_base_url=base_url, prepro_border=80)\n", "gdir = gdirs[0]\n", "gdir" @@ -411,7 +411,7 @@ }, "outputs": [], "source": [ - "ds.climatic_mb_myr.sel(time=[1, 10, 20, 100]).plot(hue='time');" + "ds.climatic_mb.sel(time=[1, 10, 20, 100]).plot(hue='time');" ] }, { @@ -422,7 +422,7 @@ }, "outputs": [], "source": [ - "ds.dhdt_myr.sel(time=[1, 10, 20, 100]).plot(hue='time');" + "ds.dhdt.sel(time=[1, 10, 20, 100]).plot(hue='time');" ] }, { @@ -433,7 +433,7 @@ }, "outputs": [], "source": [ - "ds.flux_divergence_myr.sel(time=[1, 10, 20, 100]).plot(hue='time');" + "ds.flux_divergence.sel(time=[1, 10, 20, 100]).plot(hue='time');" ] }, { @@ -686,7 +686,7 @@ "metadata": {}, "outputs": [], "source": [ - "df_thick[[0, 50, 100]].plot();\n", + "df_thick[[0, 50, 100]].plot()\n", "plt.title('Ice thickness at three points in time')\n", "plt.ylabel('Glacier thickness (m)')\n", "plt.legend(title='Year');" @@ -699,8 +699,8 @@ "outputs": [], "source": [ "f, ax = plt.subplots()\n", - "df_surf_h[[0, 50, 100]].plot(ax=ax);\n", - "df_coords['bed_elevation'].plot(ax=ax, color='k');\n", + "df_surf_h[[0, 50, 100]].plot(ax=ax)\n", + "df_coords['bed_elevation'].plot(ax=ax, color='k')\n", "plt.title('Glacier elevation at three points in time')\n", "plt.legend(title='Year');" ]