New nodes available & more!

Good news: more nodes will become available for HPC calculations starting in early 2025.

  • Node pez047 (previously in the ANAV partition, now in the R630-v4 partition) has been opened.
  • Nodes pez048 and pez049 (HM and HM-dev partitions) are now fully open.
  • Nodes pez051 (R182 and R182-open partitions) and pez052 (R640 partitions) have been opened.
  • Node pez053 (R6525 partitions) has been added.

A stricter policy will be applied to job extensions in order to democratize access to Acuario and allow jobs to be submitted more often.
From now on, job extensions are only allowed for up to 5 extra days. In most partitions, the time limit is set to 10 days by default. If more computation time is needed, a request must be escalated to the administrators.
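For reference, the wall time a job requests is set in the submission script. A minimal sketch, where the job name and executable are placeholders and 10 days is the default limit mentioned above:

#!/bin/bash
#SBATCH --job-name=my-job      # placeholder name
#SBATCH --time=10-00:00:00     # days-hours:minutes:seconds; here, the 10-day default limit

srun ./my_program              # placeholder executable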

The new node pez053 has 256 CPUs, so to democratize its use a set of restrictions is applied to its partitions. Unlike the rest of the cluster, submitting jobs to the new R6525 partition now requires the following flag:

#!/bin/bash
# script.sh file
#SBATCH --qos=cpu-limit32

Alternatively, you can pass the flag on the command line:

$ sbatch --qos=cpu-limit32 script.sh

This limits each job to a maximum of 32 CPUs. Besides adding this flag, make sure your job does not request more than this limit, or it will not run.
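Putting it together, a complete submission script for the new partition might look like the sketch below. Only the partition and QoS names come from this announcement; the task count and executable are placeholders:

#!/bin/bash
# Hypothetical job script for the new R6525 partition
#SBATCH --partition=R6525      # partition served by pez053
#SBATCH --qos=cpu-limit32      # required QoS on this partition
#SBATCH --ntasks=32            # must not exceed the 32-CPU cap

srun ./my_solver               # placeholder executable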

As you may have noticed, the HPC website has been redesigned with a more modern look, and more information has been added about the computing resources available in Acuario. Check the Computing resources page to see the new nodes' specifications.

If you need any help, feel free to ask via https://tickets.cimne.upc.edu/

Important module updates

Hi all,

We are updating all of the cluster's modules and plan to remove some of them soon. These are the newly installed modules:

  • GCC versions 6.5.0, 7.3.0 and 8.2.0.
  • Python versions 2.7.15, 3.6.7 and 3.7.1. NumPy and SciPy are included in the 3.6.7 build.
  • Boost libraries version 1.68.0.
  • CMake version 3.13.0.
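As a quick sketch, the newly installed modules should be loadable in the usual way (the exact module names below are assumptions; check module avail first):

$ module avail               # list all installed modules
$ module load gcc/8.2.0      # one of the newly installed GCC versions
$ module load python/3.6.7   # the Python build that includes NumPy and SciPy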

The following modules will be removed on 12/12/18:

  • GCC versions 5.3.0, 5.4.0, 6.1.0, 6.3.0, 6.4.0, and 7.1.0. We recommend using version 6.5.0 instead.
  • Python versions 3.5.1 and 3.6.1.
  • Boost libraries version 1.61.0-b1.
  • CMake versions 3.5.2 and 3.8.2.
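If your scripts still load one of the versions scheduled for removal, migrating is a simple swap; a sketch, again assuming the usual name/version module naming:

$ module unload gcc/5.4.0    # scheduled for removal on 12/12/18
$ module load gcc/6.5.0      # recommended replacement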

If you need help making these changes, or need a specific version of a module, feel free to ask via https://tickets.cimne.upc.edu.

New node pez048 & B510 update

A new node called pez048 has been added to the cluster. Its characteristics were selected to boost shared-memory jobs. The node has two AMD EPYC 7451 CPUs, each with 24 cores (48 threads). Unlike the rest of the nodes, threads are enabled because every thread owns an FPU, and tests with threads enabled have shown no bottlenecks. The node also has 512 GB of DDR4 RAM at 2.6 GHz.

In order to use it, a new partition called HM has been created, along with its corresponding development partition (HM-dev).
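As an illustrative sketch, a shared-memory (e.g. OpenMP) job on pez048 could be submitted as follows. The HM partition name comes from this announcement, the thread count follows from the 2 x 24 cores x 2 threads described above, and the executable is a placeholder:

#!/bin/bash
# Hypothetical shared-memory job for the HM partition
#SBATCH --partition=HM         # new SMT-enabled high-memory partition
#SBATCH --ntasks=1             # a single process...
#SBATCH --cpus-per-task=96     # ...using all 96 hardware threads

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./my_openmp_app           # placeholder executable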

If you need additional specs for this node, feel free to ask.

Additionally, the B510 partition has been updated with the addition of 4 nodes (pez0[17-18] & pez0[33-34]).

Due to hardware failures, we have removed computing nodes pez[017-032] from the resource manager. We are currently checking these machines to try to add them back to our resource group.

We have also removed the restrictions on computing nodes pez[036-043], adding them to the HighParallelization partition. Jobs should no longer be killed due to resource or access restrictions.

You can check the current partition configuration at https://hpc.cimne.upc.edu/getting-started/#h2-understanding-the-resource-manager-slurm
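You can also query the live partition layout directly from a login node, for example:

$ sinfo                                 # partitions, their nodes, and states
$ sinfo -p HighParallelization -N -l    # per-node detail for one partition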
