Environment Setup
To run the exercises described in this tutorial, you can use a pre-built JEDI installation as described in the section below.
If you plan to modify the JEDI code, you will need to clone and build JEDI yourself. DISCOVER users can build JEDI on DISCOVER with the latest Spack-Stack modules using the jedi_bundle tool, following the instructions here.
Using pre-built JEDI
Load Spack Stack Intel 1.9 on DISCOVER by executing:
source /discover/nobackup/projects/gmao/advda/swell/jedi_modules/spackstack_1.9_intel
You may see warnings about the crtm and gsibec modules (shown below). These are safe to ignore for the tutorials:
-------------------------------------------------------------------------------------------------------------
The following dependent module(s) are not currently loaded: crtm/v2.4-jedi.2 (required by: gmao-swell-env/1.0.0), gsibec/1.2.1 (required by: jedi-base-env/1.0.0)
-------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------
The following dependent module(s) are not currently loaded: crtm/v2.4-jedi.2 (required by: gmao-swell-env/1.0.0), fms/2024.02 (required by: jedi-fv3-env/1.0.0, gmao-swell-env/1.0.0), gsibec/1.2.1 (required by: jedi-base-env/1.0.0)
-------------------------------------------------------------------------------------------------------------

A pre-built version of the code is available here:
/discover/nobackup/projects/jcsda/s2127/maryamao/geos-esm/jedi-work/build-intel-release/bin
Next, set two environment variables: MPIEXEC, pointing to the MPI launcher, and JEDI_BUILD, pointing to the pre-built JEDI bin directory:
export MPIEXEC=/usr/local/intel/oneapi/2021/mpi/2021.10.0/bin/mpiexec
export JEDI_BUILD=/discover/nobackup/projects/jcsda/s2127/maryamao/geos-esm/jedi-work/build-intel-release/bin

Run JEDI executables
To run the fv3-jedi executables, you need to use MPI with a number of processors that is a multiple of six (one or more per cubed-sphere tile). On DISCOVER, you can either submit the job through Slurm or request an interactive node and run the practical examples there. The examples in this tutorial use low-resolution input files, so they run easily on 6 processors. For higher-resolution cases, you may increase the number of processors as needed; just make sure that the processor layout is specified consistently in your configuration files.
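As a point of reference, the processor layout in an fv3-jedi application YAML typically lives in the geometry section. The fragment below is an illustrative sketch only; the field names and values (layout, npx, npy, npz, the fms namelist path) are assumptions here, so check your own application YAML for the exact schema and resolution:

```yaml
geometry:
  fms initialization:
    namelist filename: Data/fv3files/fmsmpp.nml   # path is illustrative
  # layout: [x, y] MPI ranks per cubed-sphere tile;
  # total MPI tasks = 6 tiles * x * y (here 6 * 1 * 1 = 6)
  layout: [1, 1]
  npx: 13    # illustrative C12 resolution: npx = npy = 12 + 1
  npy: 13
  npz: 72    # number of vertical levels
```

If you raise the task count (e.g. to 24), scale the layout to match (e.g. layout: [2, 2], since 6 * 2 * 2 = 24).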
Option 1: Submit a job
Create the following shell script, <myjedijob>.sh, and edit the placeholders:
#!/bin/bash
#SBATCH --job-name=<your job name>
#SBATCH --partition=compute
#SBATCH --qos=allnccs
#SBATCH --account=<your_allocation_group>
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=6
#SBATCH --time=00:30:00
#SBATCH --constraint=mil
#SBATCH --output=jedi_%j.out
#SBATCH --error=jedi_%j.err
# Load the environment
source /discover/nobackup/projects/gmao/advda/swell/jedi_modules/spackstack_1.9_intel
export MPIEXEC=/usr/local/intel/oneapi/2021/mpi/2021.10.0/bin/mpiexec
export JEDI_BUILD=/discover/nobackup/projects/jcsda/s2127/maryamao/geos-esm/jedi-work/build-intel-release/bin
# Run JEDI (edit below)
$MPIEXEC -n 6 $JEDI_BUILD/<jedi executable>.x <your application yaml>.yaml

Then submit the shell script with:
sbatch <myjedijob>.sh

Option 2: Run an interactive job
salloc --partition=compute --qos=allnccs --account=<your_allocation_group> --job-name=interactive --nodes=1 --ntasks-per-node=6 --time=0:30:00 --constraint=mil

Note: Make sure you do not load any additional modules that may conflict with the Spack Stack modules. For example, if you have any module load commands in your .bashrc or .cshrc, remove or disable them, and purge all currently loaded modules before loading the Spack Stack modules.
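The module-cleanup step described in the note above can be sketched as follows (a minimal example assuming the Lmod-style module command available on DISCOVER):

```shell
# Start from a clean module environment so nothing conflicts with Spack Stack
module purge
# Load the Spack Stack Intel 1.9 environment
source /discover/nobackup/projects/gmao/advda/swell/jedi_modules/spackstack_1.9_intel
# Verify that only the Spack Stack modules are loaded
module list
```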
After logging on to the interactive node, load the modules and set the environment:
source /discover/nobackup/projects/gmao/advda/swell/jedi_modules/spackstack_1.9_intel
export MPIEXEC=/usr/local/intel/oneapi/2021/mpi/2021.10.0/bin/mpiexec
export JEDI_BUILD=/discover/nobackup/projects/jcsda/s2127/maryamao/geos-esm/jedi-work/build-intel-release/bin

To run the JEDI executable for your application:
$MPIEXEC -n 6 $JEDI_BUILD/<jedi executable>.x <your application yaml>.yaml 2>&1 | tee <a log file>.txt