Environment Setup¶
To run the exercises described in this tutorial, you can use a pre-built JEDI installation as described in the section below.
If you plan to modify the JEDI code, you will need to clone and build JEDI yourself. DISCOVER users can use the jedi_bundle tool to build JEDI with the latest Spack-Stack modules by following the instructions here.
Using pre-built JEDI¶
Load Spack-Stack 1.9 Intel on DISCOVER by executing:
source /discover/nobackup/projects/gmao/advda/swell/jedi_modules/spackstack_1.9_intel
A pre-built version of the code is available here:
/discover/nobackup/projects/jcsda/s2127/maryamao/geos-esm/jedi-work/build-intel-release/bin
Next, set these two environment variables:
export MPIEXEC=/usr/local/intel/oneapi/2021/mpi/2021.10.0/bin/mpiexec
export JEDI_BUILD=/discover/nobackup/projects/jcsda/s2127/maryamao/geos-esm/jedi-work/build-intel-release/bin
Run JEDI executables¶
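Before launching anything, it can help to confirm that both variables point to existing paths. The following is a minimal sketch using standard shell file tests; it relies on the two exports above:

```shell
# Check that MPIEXEC and JEDI_BUILD (exported above) point to existing paths
for v in "$MPIEXEC" "$JEDI_BUILD"; do
  if [ -e "$v" ]; then
    echo "ok: $v"
  else
    echo "missing: $v"
  fi
done
```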
To run the fv3-jedi executables, you need to use MPI with a number of processors that is a multiple of six, because the FV3 cubed-sphere grid has six tiles. On DISCOVER, you can either submit the job through Slurm or request an interactive node and run the practical examples there. The examples in this tutorial use low-resolution input files, so they can easily be run with 6 processors. For higher-resolution cases, you may increase the number of processors as needed; just make sure that the processor layout is correctly specified in your configuration files.
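As a quick sanity check on the processor count, the total number of MPI tasks must equal 6 tiles times the per-tile layout. The layout values below are illustrative; match them to the layout entry in your own YAML configuration:

```shell
# FV3 cubed-sphere: total MPI tasks = 6 tiles x layout_x x layout_y
# layout_x and layout_y are illustrative; use the values from your YAML
layout_x=1
layout_y=1
ntasks=$((6 * layout_x * layout_y))
echo "ntasks=$ntasks"   # with a 1x1 layout per tile: ntasks=6
```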
Option 1: Submit a job¶
Create the following shell script, <myjedijob>.sh, and edit the placeholders to match your setup.
#!/bin/bash
#SBATCH --job-name=<your job name>
#SBATCH --partition=compute
#SBATCH --qos=allnccs
#SBATCH --account=<your_allocation_group>
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=6
#SBATCH --time=00:05:00
#SBATCH --constraint=mil
#SBATCH --output=jedi_%j.out
#SBATCH --error=jedi_%j.err
# Load the environment
source /discover/nobackup/projects/gmao/advda/swell/jedi_modules/spackstack_1.9_intel
export MPIEXEC=/usr/local/intel/oneapi/2021/mpi/2021.10.0/bin/mpiexec
export JEDI_BUILD=/discover/nobackup/projects/jcsda/s2127/maryamao/geos-esm/jedi-work/build-intel-release/bin
# Run JEDI (edit below)
$MPIEXEC -n 6 $JEDI_BUILD/<jedi executable>.x <your application yaml>.yaml
Then submit the shell script with:
sbatch <myjedijob>.sh
Option 2: Run an interactive job¶
Request an interactive node by executing:
salloc --partition=compute --qos=allnccs --account=<your_allocation_group> --job-name=interactive --nodes=1 --ntasks-per-node=6 --time=0:05:00 --constraint=mil
Then load the environment:
source /discover/nobackup/projects/gmao/advda/swell/jedi_modules/spackstack_1.9_intel
export MPIEXEC=/usr/local/intel/oneapi/2021/mpi/2021.10.0/bin/mpiexec
export JEDI_BUILD=/discover/nobackup/projects/jcsda/s2127/maryamao/geos-esm/jedi-work/build-intel-release/bin
To run the JEDI executable for your application:
$MPIEXEC -n 6 $JEDI_BUILD/<jedi executable>.x <your application yaml>.yaml 2>&1 | tee <alogfile>.txt
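Once the run finishes, you can scan the captured log for a clean exit. OOPS-based applications typically print a final status line; the log name and the exact "status = 0" wording below are assumptions, so inspect your own log once to confirm the success message before relying on this check:

```shell
# Scan the captured run log for the final status line.
# "jedi_run.log" and "status = 0" are assumptions: replace them with the
# file you passed to tee and the success line your log actually prints.
logfile=jedi_run.log
if grep -q "status = 0" "$logfile" 2>/dev/null; then
  echo "run completed"
else
  echo "check $logfile for errors"
fi
```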