2 changes: 2 additions & 0 deletions doc/user_guides/ensembles.rst
@@ -15,6 +15,8 @@ Ensemble simulations are useful when you need to:

The ensemble functionality automatically generates configuration files for each simulation instance, distributes the workload across available compute nodes, and manages the execution of all ensemble members.

If interaction with the `IPS Portal <portal_guides.html>`_ is enabled, the ensemble system will also handle Portal communication automatically.

Method Signature
----------------

11 changes: 11 additions & 0 deletions doc/user_guides/jupyter.rst
@@ -120,6 +120,17 @@ The IPS Portal will generate a cell prior to your own notebook which initializes
- ``ips_analysis_api.get_child_data_not_ensembles()`` - gets the child runid mapping as described above, but only uses child runids NOT associated with ensembles.
- ``ips_analysis_api.get_child_data_by_ensemble_names()`` - gets the child runid mapping as described above, but only retrieves child runids associated with ensembles. You can filter further by providing an optional list of component names and an optional list of ensemble names; for example, ``ips_analysis_api.get_child_data_by_ensemble_names(ensemble_names=['ensemble_name_1', 'ensemble_name_2'])`` will ONLY fetch the child runids associated with 'ensemble_name_1' and 'ensemble_name_2', but will search all components for them.

An example of the "generic IPS mapping" generated by these functions looks like this:

.. code-block:: python

   {
       0.0: ['/path/to/data/file1.json'],
       1.0: ['/path/to/data/file2.json'],
       2.0: ['/path/to/data/file3.json', '/path/to/data/file4.json'],  # note that there can be multiple files per timestamp
       # ...
   }
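
The keys of this mapping are timestamps and the values are lists of data file paths. A minimal sketch of consuming such a mapping, parsing every JSON file per timestamp (the ``load_mapping`` helper below is an illustration, not part of the API):

.. code-block:: python

   import json
   from pathlib import Path

   def load_mapping(mapping):
       """Parse every data file in a {timestamp: [paths]} mapping,
       returning {timestamp: [parsed objects]} in timestamp order."""
       loaded = {}
       for timestamp in sorted(mapping):
           loaded[timestamp] = [
               json.loads(Path(path).read_text()) for path in mapping[timestamp]
           ]
       return loaded
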

**IPS Notebook Analysis API Reference**

.. autoclass:: doc.reference.portal_jupyter_api.ips_analysis_api_v1.IPSAnalysisApi
57 changes: 53 additions & 4 deletions doc/user_guides/portal_guides.rst
@@ -17,15 +17,22 @@ time, and a descriptive comment. From there you can click on a Run ID
to see the details of that run, including calls on components, data
movement events, task launches and finishes, and checkpoints.

To use the portal on your local cluster, include the following variables in your configuration file or as environment variables:

.. code-block:: text

   USE_PORTAL = True
   # stable version
   PORTAL_URL = http://lb.ipsportal.production.svc.spin.nersc.org
   # or, for the latest version
   # PORTAL_URL = http://lb.ipsportal.development.svc.spin.nersc.org
   # The API key is required for certain interactions with the portal, and will
   # eventually become mandatory. Set it as an environment variable rather than
   # saving it in version control.
   PORTAL_API_KEY = "YOUR_PORTAL_API_KEY"  # change this
   # To disable the portal even if PORTAL_URL is set, uncomment the next line:
   # USE_PORTAL = False

NOTE: On shared clusters, e.g. Perlmutter, there are generally specific files you can source in your Slurm scripts that automatically configure the Portal credentials, so you can skip setting these variables yourself. Please see the appropriate project documentation for information on how to configure this.
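
On machines without such a file, the API key can be supplied through the environment rather than the configuration file; for example (the key value below is a placeholder):

.. code-block:: shell

   # Keep the real key out of version control: set it in your shell
   # profile or Slurm script instead of the IPS configuration file.
   export PORTAL_API_KEY="YOUR_PORTAL_API_KEY"  # placeholder value
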

The source code for the portal can be found on `GitHub
<https://github.com/HPC-SimTools/IPS-portal>`_ and issues can be
reported using `GitHub issues
<https://github.com/HPC-SimTools/IPS-portal/issues>`_.
@@ -91,7 +98,7 @@ like:
child_conf['PARENT_PORTAL_RUNID'] = self.services.get_config_param("PORTAL_RUNID")

This is automatically configured when running
``ips_dakota_dynamic.py`` or when using the ``run_ensemble`` API.

The child runs will not appear on the main runs list but will appear
on a tab next to the events.
@@ -102,3 +109,45 @@ The trace of the primary simulation will contain the traces from all
the simulations:

.. image:: child_runs_trace.png

IPS-Framework APIs
------------------

Provided that the Portal has been enabled, the following APIs allow interaction with the web portal:

Events API
==========

.. automethod:: ipsframework.services.ServicesProxy.send_portal_event
:noindex:

This function sends custom events to the Portal when the Portal is enabled; if the Portal is not enabled, the event is still logged locally.

The events API does not currently require an API key, but this is expected to change in the future.
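
A sketch of sending a custom event from inside a component. The component class below is hypothetical; in a real component the framework injects ``self.services`` (a ``ServicesProxy``) for you:

.. code-block:: python

   class MyDriver:
       """Hypothetical IPS component, used only for illustration."""

       def __init__(self, services):
           self.services = services

       def step(self, timestamp=0.0):
           # Send a custom event to the Portal; if the Portal is not
           # enabled, the event is still logged locally.
           self.services.send_portal_event(
               event_type='COMPONENT_EVENT',
               event_comment=f'MyDriver finished step at t={timestamp}',
           )
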

Jupyter API
===========

.. automethod:: ipsframework.services.ServicesProxy.initialize_jupyter_notebook
:noindex:

.. automethod:: ipsframework.services.ServicesProxy.add_analysis_data_files
:noindex:

All Jupyter APIs require the Portal API key to be set.

If either of these APIs is called while the Portal is disabled, a warning is logged and the call is skipped.

Please see the `Jupyter <jupyter.html>`_ page for specifics on using the Jupyter APIs.
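
A hedged sketch of combining these calls inside a component; the notebook name and argument shapes below are assumptions, so consult the rendered signatures above for the real parameters:

.. code-block:: python

   class AnalysisComponent:
       """Hypothetical component, used only for illustration."""

       def __init__(self, services):
           self.services = services

       def publish(self, data_files, timestamp):
           # Initialize the notebook once, then register the data files
           # produced at this timestamp. Both calls require the Portal
           # API key; with the Portal disabled they log a warning and
           # are skipped.
           self.services.initialize_jupyter_notebook('analysis.ipynb')
           self.services.add_analysis_data_files(data_files, timestamp=timestamp)
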

Ensembles API
=============

.. automethod:: ipsframework.services.ServicesProxy.run_ensemble
:noindex:

If the Portal is enabled and the API key has been set, ``run_ensemble`` will automatically interact with the Portal to send appropriate files.

If the Portal is disabled, ``run_ensemble`` will skip the Portal interaction, but will otherwise behave normally.

Please see the `Ensembles <ensembles.html>`_ page for specifics on using the Ensembles API.
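
A hedged sketch of launching an ensemble from a driver component; the shape of the argument passed to ``run_ensemble`` is an assumption here, so consult the rendered signature above and the `Ensembles <ensembles.html>`_ page for the real parameters:

.. code-block:: python

   class EnsembleDriver:
       """Hypothetical driver component, used only for illustration."""

       def __init__(self, services):
           self.services = services

       def step(self, member_configs):
           # Launch all ensemble members; when the Portal is enabled and
           # the API key is set, file uploads and child-run registration
           # happen automatically.
           return self.services.run_ensemble(member_configs)
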