Singularity/Apptainer
We support Singularity/Apptainer containers as an alternative way to bring your scientific application to LUMI instead of installing it using EasyBuild or Spack.
If you are familiar with Docker containers, Singularity/Apptainer containers are essentially the same thing, but are better suited for multi-user HPC systems such as LUMI. The main benefit of using a container is that it provides an isolated software environment for each application, which makes it easier to install and manage complex applications.
This page provides guidance on preparing your Singularity/Apptainer containers for use with LUMI. Please consult the container jobs page for guidance on running your container on LUMI.
Note
There are two major providers of the singularity runtime, namely Singularity CE and Apptainer, the latter being a fork of the former. In most cases, the two should be fully compatible. LUMI provides a Singularity CE runtime.
Pulling container images from a registry
Singularity allows pulling existing container images (Singularity or Docker)
from container registries such as DockerHub or AMD Infinity
Hub. Pulling container images from registries can be done on
LUMI. For instance, the Ubuntu image ubuntu:22.04
can be pulled from
DockerHub with the following command:
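$ singularity pull docker://ubuntu:22.04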
This will create the Singularity image file ubuntu_22.04.sif
in the directory
where the command was run. Once the image has been pulled, the container can be
run. Instructions for running the container may be found on the container jobs
page.
Take care when pulling container images
Please take care to only use images from reputable sources, as images can easily be a source of security vulnerabilities or even contain malicious code.
Set cache directories when using Docker containers
When pulling or building from Docker images using singularity, the conversion to a Singularity image can be quite heavy. Speed up the conversion and avoid leaving behind temporary files by using the in-memory filesystem on /tmp as the Singularity cache directory, i.e. by setting:
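$ export SINGULARITY_CACHEDIR=/tmp/$USER
$ export SINGULARITY_TMPDIR=/tmp/$USER

The per-user paths under /tmp are a suggestion, not a requirement; setting SINGULARITY_TMPDIR additionally redirects the temporary build directory to the same in-memory filesystem.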
Building Apptainer/Singularity SIF containers
Building your own container on LUMI is, unfortunately, not possible in general. The singularity build command typically requires some level of root privileges, e.g. sudo or fakeroot, which are disabled on LUMI for security reasons. Thus, to build your own Singularity/Apptainer container for LUMI, you have two options:
- Use the cotainr tool to build containers on LUMI (only for certain use cases).
- Build your own container on your local hardware, e.g. your laptop.
Building containers using the cotainr tool
Cotainr is a tool that makes it easy to build Singularity/Apptainer containers on LUMI for certain use cases. It is not a general purpose container building tool.
On LUMI, cotainr is available in the Cray Programming Environment and may be loaded using
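$ module load CrayEnv
$ module load cotainr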
When building containers using cotainr build, you may either specify a base image for the container yourself (using the --base-image option) or use the --system option to get the recommended base images for LUMI. To list the available systems, run
$ cotainr info
...
System info
-------------------------------------------------------------------------------
Available system configurations:
- lumi-g
- lumi-c
As an example, you may then use cotainr build to create a container for LUMI-G containing a Conda/pip environment by running
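$ cotainr build my_container.sif --system=lumi-g --conda-env=my_conda_env.yml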
where my_conda_env.yml is a file containing an exported Conda environment. The resulting my_container.sif container may be run like any other container job on LUMI. For example:
$ srun --partition=<partition> --account=<account> singularity exec my_container.sif python3 my_script.py
where my_script.py is your Python script.
The installed Conda environment is automatically activated when you run the container. See the cotainr Conda environment docs and the cotainr LUMI examples for more details.
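If you do not already have an exported environment file, a minimal my_conda_env.yml could look like the following sketch; the package selection is purely illustrative:

name: my_conda_env
channels:
  - conda-forge
dependencies:
  - python=3.12
  - numpy
  - pip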
Make sure your Conda environment supports the hardware in LUMI
To take advantage of e.g. the GPUs in LUMI-G, the
packages you specify in your Conda environment must be compatible with
LUMI-G, i.e. built against ROCm. Similarly, to take full advantage
of the Slingshot 11 interconnect when running MPI jobs, you
must make sure your packages are built against Cray MPICH. Cotainr does
not do any magic conversion of the packages specified in the Conda
environment to make sure they fit the hardware of LUMI. It simply installs
the packages exactly as listed in the my_conda_env.yml file.
Note
Using cotainr to build a container from a Conda/pip environment is different from wrapping a Conda/pip environment using the LUMI container wrapper. Each serves its own purpose. See the Python installation guide for an overview of the differences and this GitHub issue for a more detailed discussion.
See the cotainr documentation for more details.
Building containers on local hardware
You may also build a Singularity/Apptainer container for LUMI on your local hardware and transfer it to LUMI.
As an example, consider building a container that is compatible with the MPI stack on LUMI.
Warning
For MPI-enabled containers, the application inside the container must be dynamically linked to an MPI version that is ABI-compatible with the host MPI.
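Once a container is built, one way to verify this is to inspect the application with ldd from inside the container; a minimal sketch, where my_container.sif and the application path are hypothetical:

$ singularity exec my_container.sif ldd /usr/local/bin/my_app | grep -i mpi

This lists the MPI shared libraries the application is dynamically linked against.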
The following Singularity definition file, mpi_osu.def, installs MPICH 3.4.3, which is ABI-compatible with the Cray MPICH found on LUMI. That MPICH is then used to compile the OSU micro-benchmarks. Finally, the OSU point-to-point bandwidth test is set as the "runscript" of the image.
Bootstrap: docker
From: ubuntu:24.04
%post
# Version values (Cray MPICH on LUMI reports MPICH version 3.4a2,
# so an MPICH from the ABI-compatible 3.4 series is installed)
VERSION=3.4.3
BENCHMARK=7.4
# Install software
apt-get update
apt-get install -y file g++ gcc gfortran make gdb strace wget ca-certificates --no-install-recommends
# Install mpich
wget -q http://www.mpich.org/static/downloads/$VERSION/mpich-$VERSION.tar.gz
tar xf mpich-$VERSION.tar.gz
cd mpich-$VERSION
./configure --disable-fortran --enable-fast=all,O3 --prefix=/usr --with-device=ch3
make -j$(nproc)
make install
ldconfig
# Build osu benchmarks
wget -q http://mvapich.cse.ohio-state.edu/download/mvapich/osu-micro-benchmarks-$BENCHMARK.tar.gz
tar xf osu-micro-benchmarks-$BENCHMARK.tar.gz
cd osu-micro-benchmarks-$BENCHMARK
# Note: LUMI compute nodes are x86_64, so the container must be built for x86_64
./configure --prefix=/usr/local CC=$(which mpicc) CFLAGS=-O3
make
make install
cd ..
rm -rf osu-micro-benchmarks-$BENCHMARK
rm osu-micro-benchmarks-$BENCHMARK.tar.gz
%runscript
/usr/local/libexec/osu-micro-benchmarks/mpi/pt2pt/osu_bw
The image can be built on your local hardware (not on LUMI) with
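$ sudo singularity build mpi_osu.sif mpi_osu.def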
The mpi_osu.sif file must then be transferred to LUMI. See the container jobs MPI documentation page for instructions on running this MPI container on LUMI.