This page gives an overview of the Cray programming environment available on LUMI. It starts with a presentation of the compiler suites and the compiler wrappers that you can use to compile your C, C++, or Fortran code. Finally, it gives some basic information on how to compile an MPI or OpenMP program.
On LUMI, the different compiler suites are accessible through module collections. These collections load the appropriate modules to use one of the supported programming environments for LUMI.
Switching compiler suites¶
The compiler collections are accessible through modules and are loaded with the `module load PrgEnv-<name>` command, where `<name>` is the name of the compiler suite. There are four collections available on LUMI. The default collection is Cray.
| Suite | Description |
|-------|-------------|
| CCE   | Cray Compiling Environment |
| AMD   | AMD ROCm compilers |
| GCC   | GNU Compiler Collection |
| AOCC  | AMD Optimizing C/C++ Compiler |
For example, if you want to use the GNU compiler collection:
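A minimal sketch of the corresponding command, following the `PrgEnv-<name>` naming pattern used elsewhere on this page:

```shell
module load PrgEnv-gnu
```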
After you have loaded a programming environment, the compiler wrappers (`cc`, `CC`, and `ftn`) are available.
PrgEnv-aocc broken in 21.08 and 21.12
The `PrgEnv-aocc` module does not work correctly in the 21.08 and 21.12 releases of the Cray programming environment. This is due to different problems in each release: the `aocc/3.0.0` module (used as the default version of AOCC in the 21.08 release) is broken since the compilers themselves are not installed, and the `aocc/3.1.0` module has a bug in the code of the module.
This has been fixed in later releases of the Cray programming environment
so that the problem will be solved when those releases are installed. Due to
the way the installation of the Cray programming environment works, it is
currently not possible for us to correct the module by hand.
Changing compiler versions¶
If the default compiler version does not suit you, you can change the version after having loaded a programming environment. This is done with the `module swap` command, where `<compiler>` is the name of the compiler module for the loaded programming environment and `<version>` is the version you want to use. For example:
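A sketch, assuming the Cray compiler module `cce` is loaded (the version shown is only illustrative and depends on the installed release):

```shell
module swap cce cce/12.0.3
```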
Each module collection provides wrappers to the C, C++, and Fortran compilers. The commands used to invoke these wrappers are listed below.
- `cc`: C compiler
- `CC`: C++ compiler
- `ftn`: Fortran compiler
No matter which vendor's compiler module is loaded, always use one of the above commands to invoke the compiler. The wrappers invoke the underlying compiler according to the compiler suite loaded in the environment. For some libraries, the appropriate linking options are also added automatically.
About MPI Wrappers
The Cray compiler wrappers replace other wrappers commonly found on HPC systems, such as the `mpicc` and `mpif90` wrappers. You don't need to use those wrappers to compile an MPI code on LUMI.
Below are examples of how to use the wrappers for the different programming languages.
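For instance (source and output file names are only illustrative):

```shell
cc  -o hello_c   hello.c    # C
CC  -o hello_cpp hello.cpp  # C++
ftn -o hello_f   hello.f90  # Fortran
```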
In the examples above, no additional options are provided. In most cases, however, you will pass options, and the options accepted vary according to which compiler module is loaded. For example, the options supported by the GNU Fortran compiler differ from those supported by the Cray Fortran compiler.
Wrapper and compiler options¶
The following flags are a good starting point to achieve good performance:
| Compilers | Good performance | Aggressive optimizations |
|-----------|------------------|--------------------------|
Detailed information about the available compiler options is available here:
The man pages of the wrappers and of the underlying compilers are also a good place to explore the available options.
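For example (a sketch; the exact man page names depend on the loaded programming environment and release):

```shell
man cc        # the C compiler wrapper
man craycc    # the underlying Cray C compiler (PrgEnv-cray)
man gfortran  # the underlying GNU Fortran compiler (PrgEnv-gnu)
```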
Choosing the target architecture¶
When using the Cray programming environment, there is no need to specify compiler flags to target a specific CPU architecture, like `-march` in GCC, or `--offload-arch` for GPU compilation. Instead, you load an appropriate combination of modules to choose the target architecture when compiling.
These modules influence the optimizations performed by the compiler, as well as the libraries used (e.g., which BLAS routines are used in Cray LibSci). Here is a list of the relevant CPU target modules available on LUMI:
- `craype-x86-trento`: CPUs of the GPU partition (LUMI-G)
- `craype-x86-milan`: CPUs of the CPU partition (LUMI-C)
- `craype-x86-rome`: CPUs of the login nodes and the data analytics partition (LUMI-D)
We recommend that you compile with `craype-x86-trento` for LUMI-G and `craype-x86-milan` for LUMI-C, even if the compiler optimizations for these processors are immature at the moment. You have to load these modules yourself when compiling your code from a login node, as the default module is `craype-x86-rome`.
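A sketch of compiling for the LUMI-C compute nodes from a login node (file names are illustrative):

```shell
module load craype-x86-milan
cc -O2 -o app app.c   # now tuned for the Milan CPUs of LUMI-C
```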
In addition to the `craype-x86-*` modules for the CPUs, `craype-accel-*` modules can be used to specify the target GPU architecture. Here is a list of the relevant GPU target modules:

- `craype-accel-amd-gfx90a`: GPU partition GPUs (AMD MI250x, LUMI-G)
- `craype-accel-nvidia80`: data analytics and visualization GPUs (NVIDIA A40, LUMI-D)
Loading one of these modules instructs the compiler wrappers to add the necessary flags to optimize for the target GPU architecture. In addition, loading a `craype-accel-*` module enables linking to the GPU transfer library (GTL) used for GPU-aware MPI, as well as OpenMP target offload.
The wrappers pass the appropriate linking information to the compiler and linker for libraries accessible via modules prefixed with `cray-`. These libraries don't require user-provided options in order to be linked. For other libraries, you should provide the appropriate include (`-I`) and library (`-L`) search paths, as well as the linking options (`-l`).
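A sketch with a hypothetical external library (the paths and the library name `foo` are placeholders):

```shell
cc -I/path/to/foo/include -L/path/to/foo/lib -o app app.c -lfoo
```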
If you have used a Cray system in the past, you may be familiar with the legacy linking behavior of the Cray compiler wrappers: historically, the wrappers built statically linked executables. In recent versions of the Cray programming environment this is no longer the case; libraries are now dynamically linked. The following options are available to control the behavior of your application:
- Follow the default Linux policy and, at runtime, use the system default version of the shared libraries (which may therefore change when the system is upgraded).
- Hard-code the path of each library into the binary at compile time so that a specific version is loaded when the application starts (as long as the library is still installed). Set `CRAY_ADD_RPATH=yes` at compile time to use this mode.
- Allow the currently loaded programming environment modules to select the library version at runtime. Applications must not be linked with `CRAY_ADD_RPATH=yes` and must add the following line to the Slurm script:
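A sketch of such a line, assuming the standard `CRAY_LD_LIBRARY_PATH` mechanism of the Cray programming environment:

```shell
export LD_LIBRARY_PATH=$CRAY_LD_LIBRARY_PATH:$LD_LIBRARY_PATH
```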
Static linking is unsupported by Cray at the moment.
Using the wrappers with build systems¶
In order to compile an application that uses a series of `configure`, `make`, and `make install` commands, you can pass the compiler wrappers via the appropriate environment variables. This should be sufficient for a configure step to succeed.
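For example, using the usual autotools variable names (the install prefix is only illustrative):

```shell
CC=cc CXX=CC FC=ftn ./configure --prefix=$HOME/apps/myapp
make
make install
```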
CMake should automatically detect the Cray environment. If you want to be on the safe side, you can explicitly provide the compiler wrappers at configure time using the following flags:
```shell
cmake \
  -DCMAKE_C_COMPILER=cc \
  -DCMAKE_CXX_COMPILER=CC \
  -DCMAKE_Fortran_COMPILER=ftn \
  <other options>
```
For other tools, you can try to export environment variables so that the tool you are using is aware of the wrappers.
Compile HIP Code¶
Using the compiler wrapper¶
Programming environments such as `PrgEnv-amd` can compile HIP code using the compiler wrapper. The advantage of using the wrapper is that the flags needed to use the Cray libraries (the `cray-*` modules) are added automatically.
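A sketch of compiling a HIP source through the wrapper (the `-xhip` flag and the file names are assumptions to be checked against your release):

```shell
module load PrgEnv-amd
module load craype-accel-amd-gfx90a
module load rocm
CC -xhip -o yourapp hip_source.cpp
```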
Unlike the compiler wrappers, `hipcc` does not automatically add the flags needed to use the Cray libraries (the `cray-*` modules). Loading a `craype-accel-*` module has no effect either, i.e., you need to specify the target GPU architecture yourself (e.g., `--offload-arch=gfx90a`).
Still, you can set the `HIPCC_COMPILE_FLAGS_APPEND` and `HIPCC_LINK_FLAGS_APPEND` environment variables to make `hipcc` behave like the Cray compiler wrappers.
```shell
module load PrgEnv-amd
export HIPCC_COMPILE_FLAGS_APPEND="--offload-arch=gfx90a $(CC --cray-print-opts=cflags)"
export HIPCC_LINK_FLAGS_APPEND=$(CC --cray-print-opts=libs)
hipcc -o <yourapp> <hip_source.cpp>
```
Compile an MPI Program¶
When you load a programming environment, the appropriate MPI module, `cray-mpich`, is loaded in your environment. In addition, the appropriate target module should be loaded. These two modules are loaded by default when you log in to LUMI.
Compiling an MPI application is done using the set of compiler wrappers (`cc`, `CC`, and `ftn`). The wrappers automatically link codes with the MPI libraries. You can think of the compiler wrappers as playing the role of the more familiar wrappers such as `mpicc` and `mpif90`.
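For instance, to compile a C MPI program (the file name is illustrative):

```shell
cc -o mpi_app mpi_source.c   # MPI include paths and libraries are added by the wrapper
```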
If you are using a build system with a `configure` script, you may need to provide the appropriate variables so that the correct wrapper is used.
For CMake, if you already provided the compiler as described in the previous section, CMake should correctly select the wrappers as the MPI compilers.
If your application requires a GPU-aware MPI implementation, i.e., passing GPU memory pointers directly to MPI without copying to the host first, then you need to link your code against the GPU transfer library (GTL). The compiler wrappers link this library automatically if a GPU target module (e.g., `craype-accel-amd-gfx90a`) is loaded.
Then, for example, we can compile a simple MPI + HIP code with the following command,
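A sketch of such a command (the HIP flag and the file names are assumptions):

```shell
CC -xhip -o yourapp mpi_hip_source.cpp
```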
and inspect the linking of the resulting executable. This shows that both the MPI and GPU transfer libraries are linked:
```shell
$ ldd ./yourapp | grep libmpi
    libmpi_cray.so.12 => /opt/cray/pe/lib64/libmpi_cray.so.12
    libmpi_gtl_hsa.so.0 => /opt/cray/pe/lib64/libmpi_gtl_hsa.so.0
```
GPU support needs to be enabled at run time
When you run your application, you need to enable the GPU support. This is done by setting the environment variable `MPICH_GPU_SUPPORT_ENABLED` to `1`.
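For example, in a Slurm batch script:

```shell
export MPICH_GPU_SUPPORT_ENABLED=1
srun ./yourapp
```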
Compile an OpenMP Application¶
For all programming environments, OpenMP host code can be compiled by enabling OpenMP when invoking the compiler wrappers (`cc`, `CC`, and `ftn`). The flag to enable OpenMP is `-fopenmp` for all programming environments and compilers.
When using the OpenMP compiler flag, the wrapper will link to the multithreaded version of the Cray libraries.
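For example (the file names are illustrative):

```shell
cc -fopenmp -o omp_app omp_source.c
```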
Compile an application with OpenMP offloading¶
With programming environments such as `PrgEnv-amd`, you can compile applications using OpenMP target offloading. As for OpenMP on the host (CPU), this is done using the `-fopenmp` flag, but first you need to load a `craype-accel-*` target module. Loading `craype-accel-amd-gfx90a` instructs the compiler wrappers to automatically add the appropriate flags for OpenMP offloading.
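A sketch (loading `rocm` alongside the target module is assumed here, following the OpenACC example below; file names are illustrative):

```shell
module load craype-accel-amd-gfx90a
module load rocm
cc -fopenmp -o offload_app offload_source.c
```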
Compile an OpenACC application¶
At the moment, the only compiler that supports OpenACC compilation on LUMI is the Cray Fortran compiler. OpenACC can be enabled with the `-hacc` flag:
```shell
module load PrgEnv-cray
module load craype-accel-amd-gfx90a
module load rocm
ftn -hacc -o <yourapp> <openacc_source.f90>
```
Accessing the programming environment on LUMI¶
The Cray programming environment can be accessed in three different ways on LUMI:
- The bare environment: right after login, `PrgEnv-cray` is loaded, as most users familiar with Cray systems would expect. The set of target modules is not adapted to the node that you are on, but is a set that is safe for the whole system. You are responsible for managing those modules and swapping in an appropriate set. Executing `module purge` will also unload the target modules and cause error messages when you subsequently try to load a programming environment, as some modules (including `cray-fftw`) can only be loaded when a suitable target module is loaded.
- Working in the `CrayEnv` software stack: (re)loading the `CrayEnv` module will (re)set the target modules to an optimal set for the node type that you are on. Executing `module purge` will also trigger a reload of `CrayEnv`, unless the `--force` option is used to unload the module. The `CrayEnv` stack also provides an updated set of build tools and some other tools useful to programmers, in a way that cannot conflict with the tools in the `LUMI` software stacks (which is why they are not offered in the bare environment). We advise users who want to use the Cray programming environment, but do not need any of the libraries installed in the `LUMI` software stacks, to use the `CrayEnv` stack rather than the bare environment.
- Working in the `LUMI` software stack: the `LUMI` software stack offers a range of libraries and packages, mostly installed via EasyBuild. It is possible to install additional software on top of those stacks using EasyBuild, and to use those libraries and tools to compile or develop other software outside the EasyBuild environment. Each `LUMI` stack corresponds to a particular release of the Cray programming environment. It is possible to use the `PrgEnv` modules in this environment. However, EasyBuild requires its own set of modules to integrate with the Cray programming environment, and we advise users to use those instead when working in the `LUMI` stacks; this includes a replacement for `PrgEnv-amd` when that environment becomes available on the LUMI-G partition. These modules also take care of the target architecture modules based on the `partition` module that is loaded (which offers a way to cross-compile for another section of LUMI than the one you are working on).
PrgEnv-aocc bug in 21.12
`LUMI/21.12` contains a workaround for the problems with the `aocc/3.1.0` module. Hence it is possible to use the AOCC compilers by working in the `LUMI/21.12` stack and using `cpeAOCC/21.12` rather than loading the `PrgEnv-aocc` module.