Before connecting to LUMI, you need to generate an SSH key pair and upload your public key to MyAccessID.

You can do additional setup, like adding your key to an agent or setting up a shortcut for LUMI in your SSH configuration.
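The steps above can be sketched as follows. This is a minimal example; the hostname, username, and key filename are placeholders, so substitute the values given for your LUMI account, and prefer a real passphrase over the empty one used here to keep the example non-interactive:

```shell
# Generate an ed25519 key pair; the private key stays on your machine,
# the public key (~/.ssh/lumi-key.pub) is the one you upload to MyAccessID.
# (-N "" sets an empty passphrase for illustration only.)
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -N "" -f ~/.ssh/lumi-key

# Optionally add the key to an ssh-agent so the passphrase is cached:
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/lumi-key

# Optional shortcut in ~/.ssh/config; "lumi.csc.fi" and <your-username>
# are assumptions -- use the address and username from your account details.
cat >> ~/.ssh/config <<'EOF'
Host lumi
    HostName lumi.csc.fi
    User <your-username>
    IdentityFile ~/.ssh/lumi-key
EOF
```

With the shortcut in place, `ssh lumi` is enough to connect.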

Learn more about the hardware

In the first phase of the LUMI installation, the CPU partition, LUMI-C, and the data analytics partition, LUMI-D, are installed. LUMI-C consists of 1536 compute nodes, each fitted with two last-generation 64-core AMD EPYC "Milan" CPUs and 256 GiB of memory.

LUMI-D consists of 12 nodes, equipped either with large memory or with NVIDIA RTX GPUs, as well as on-node storage.

Set up your Environment

Software on LUMI can be accessed through modules. With the help of the module command, you will be able to load and unload the desired compilers, tools, and libraries.
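A typical module session might look like the following sketch. These commands only work on a system with a module environment such as LUMI itself, and the module names shown are illustrative assumptions; run `module avail` on LUMI to see what is actually installed:

```shell
# List the modules available on the system
module avail

# Load a compiler environment and a library (names are placeholders)
module load PrgEnv-gnu
module load cray-libsci

# Show what is currently loaded
module list

# Unload a single module, or reset the environment entirely
module unload cray-libsci
module purge
```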

Running your Jobs

To get started with running your application on LUMI, you need to write a batch job script and submit it to the scheduler and resource manager. LUMI uses Slurm as its batch job system.
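A batch job script is a shell script with `#SBATCH` directives describing the resources the job needs. The sketch below is a minimal example; the partition, account, and application names are assumptions, so check the LUMI documentation and `sinfo` for the real partition names and use your own project ID:

```shell
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=standard      # placeholder; list real partitions with sinfo
#SBATCH --account=<project>       # placeholder for your LUMI project ID
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128     # LUMI-C nodes have 2 x 64 cores
#SBATCH --time=00:15:00

srun ./my_application             # placeholder for your own executable
```

The script is submitted with `sbatch job.sh`, and `squeue --me` shows the state of your queued and running jobs.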


Please note that only the data analytics nodes have local storage. When running on the LUMI-C compute nodes, the input and output data of your application must be stored in the scratch spaces of the parallel file systems.
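As an illustration, a batch job on LUMI-C would therefore work inside the scratch space rather than the home directory. The mount point and project placeholder below are assumptions; consult the LUMI storage documentation for the actual paths assigned to your project:

```shell
# Inside a batch job script: change to the project's scratch space,
# where input and output files must live on the compute nodes.
# <project> is a placeholder for your project ID.
cd /scratch/<project>
srun ./my_application input.dat output.dat
```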