All posts by Alberto Burgos Plaza

Important modules updates

Hi all,

We are updating all of the cluster's modules and planning to remove some of them soon. These are the newly installed modules (an example of loading them follows the list):

  • GCC versions 6.5.0, 7.3.0 and 8.2.0.
  • Python versions 2.7.15, 3.6.7 and 3.7.1. NumPy and SciPy are available in 3.6.7.
  • Boost libraries version 1.68.0.
  • CMake version 3.13.0.
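
For example, you can load the new toolchain in an interactive session or job script as follows. This is a minimal sketch: the module names below assume the usual name/version scheme, so verify the exact names on the cluster first with module avail.

    # List installed versions (the names below are assumptions; verify with module avail)
    module avail gcc python boost cmake

    # Load the new toolchain
    module load gcc/8.2.0
    module load python/3.6.7    # this build ships NumPy and SciPy
    module load boost/1.68.0
    module load cmake/3.13.0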

The following modules will be removed on 12/12/18 (a migration sketch follows the list):

  • GCC versions 5.3.0, 5.4.0, 6.1.0, 6.3.0, 6.4.0, and 7.1.0. We recommend using version 6.5.0 instead.
  • Python versions 3.5.1 and 3.6.1.
  • Boost libraries version 1.61.0-b1.
  • CMake versions 3.5.2 and 3.8.2.
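
If your job scripts load any of these versions, update them before 12/12/18. A minimal before/after sketch, again assuming the name/version naming scheme:

    # Before: loads modules scheduled for removal
    module load gcc/6.3.0
    module load cmake/3.8.2

    # After: switch to the supported replacements
    module load gcc/6.5.0
    module load cmake/3.13.0

    # In a live session, module switch swaps versions in one step
    module switch gcc/6.3.0 gcc/6.5.0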

If you need help making the changes, or need a specific version of a module, feel free to ask via https://tickets.cimne.upc.edu.

New node pez048 & B510 update

A new node called pez048 has been added to the cluster. Its hardware was selected to boost shared-memory jobs: the node has two AMD EPYC 7451 CPUs, each with 24 cores (48 threads). Unlike on the rest of the nodes, hardware threads are enabled, since every thread owns an FPU, and our tests with threads enabled showed no bottlenecks. The node also has 512 GB of DDR4 RAM at 2.6 GHz.

To use it, a new partition called HM has been created, along with its corresponding development partition (HM-dev). An example batch script is shown below.
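
As an illustration, a shared-memory (OpenMP) job could target pez048 through the HM partition like this. This is a sketch only: the job name, time limit and binary are placeholders, and the thread count assumes the node's 48 physical cores (up to 96 with hardware threads enabled).

    #!/bin/bash
    #SBATCH --job-name=omp-test     # placeholder job name
    #SBATCH --partition=HM          # the new pez048 partition (use HM-dev for short test runs)
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=48      # one thread per physical core; up to 96 with SMT
    #SBATCH --time=01:00:00         # placeholder time limit

    # Match the OpenMP thread count to the allocated CPUs
    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
    ./my_shared_memory_app          # placeholder binary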

If you need additional specs for this node, feel free to ask.

In addition, the B510 partition has been updated with four new nodes (pez0[17-18] & pez0[33-34]).

Due to hardware failures, we have removed computing nodes pez[017-032] from the resource manager. We are currently checking these machines and will try to add them back to our resource group.

On the other hand, we have lifted the restrictions on computing nodes pez[036-043] and added them to the HighParallelization partition, so jobs should no longer die due to resource or access restrictions.
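
You can check which nodes each partition currently contains, and their state, from a login node with the standard Slurm commands (partition names taken from this post):

    # One line per node, with state, for the partitions mentioned above
    sinfo -N -l -p B510,HM,HighParallelization

    # Full definition of a partition, including limits and its node list
    scontrol show partition HM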

You can check the current partition configuration at https://hpc.cimne.upc.edu/getting-started/#h2-understanding-the-resource-manager-slurm