HLRN

Target groups: Students: no, Employees: yes, Faculties: yes, Stud. Unions: no


For highly compute- and memory-intensive computations, members of Kiel University can use the HLRN supercomputing system within the scope of the North-German Supercomputing Alliance. The HLRN alliance was formed in 2001 and is a joint project of the seven North German states Berlin, Brandenburg, Bremen, Hamburg, Mecklenburg-Vorpommern, Niedersachsen and Schleswig-Holstein.

The HLRN alliance jointly operates a distributed supercomputer system hosted at the sites Georg-August-Universität Göttingen and Zuse-Institut Berlin (ZIB). In September 2018, phase 1 of the HLRN-IV system, built by Atos/Bull, was put into operation. After the successful installation of phase 2, the complete HLRN-IV system comprises more than 200,000 cores with a total peak performance of about 16 PFlop/s.
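
As a rough plausibility check of these figures, the Python sketch below recomputes the theoretical peak from the node counts detailed in the sections that follow. The 32 double-precision flops per cycle per core (two AVX-512 FMA units) and the nominal base clocks are assumptions; official peak figures are typically computed with different (AVX-512) clocks, so only the order of magnitude is expected to match.

    # Theoretical peak estimate: cores x flops/cycle x clock.
    # Assumed: 2 AVX-512 FMA units x 8 doubles x (mul + add) per cycle,
    # and nominal base clocks; node counts are from the sections below.
    FLOPS_PER_CYCLE = 32

    systems = [
        # (name, cores, assumed nominal base clock in Hz)
        ("Lise, Cascade Lake 9242",         1270 * 2 * 48, 2.3e9),
        ("Emmy phase 2, Cascade Lake 9242",  974 * 2 * 48, 2.3e9),
        ("Emmy phase 1, Skylake 6148",       448 * 2 * 20, 2.4e9),
    ]

    total = sum(cores * FLOPS_PER_CYCLE * clock for _, cores, clock in systems)
    print(f"estimated peak: {total / 1e15:.1f} PFlop/s")  # ~17, quoted: ~16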


 

REMARK: For the currently available resources at the HLRN, please see the HLRN web pages.
For events and courses, see the HLRN News Center.
For events, courses and workshops of the NHR Alliance (national high-performance computing), see the NHR web pages.

 

Lise at ZIB

The HLRN complex at ZIB in Berlin is named after Lise Meitner and contains 1270 compute nodes with 121,920 compute cores (a short cross-check of these totals follows the hardware list below).


HLRN-IV system Lise in Berlin, photo: ITMZ | University Rostock.

  • 1270 compute nodes
    • 1236 nodes with 384 GB memory (standard node)
    • 32 nodes with 768 GB memory (large node)
    • 2 nodes with 1.5 TB memory (huge node)
    • per node 2 CPUs + 1 Intel Omni-Path host fabric adapter
    • per CPU 1 Intel Cascade Lake Platinum 9242 (CLX-AP), 48 cores
  • Omni-Path interconnect configuration
    • 2 x 1152 port OPA100 director switches
    • 56 x 48 port edge switches
    • fat tree topology
    • 14 TB/s bisection bandwidth
    • 1.65 μs maximum latency
  • 8 login nodes
    • per node 2 CPUs with 384 GB memory
    • per CPU 1 Intel Cascade Lake Silver 4210 (CLX), 10 cores
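
The node and core counts above are internally consistent; a quick cross-check (a Python sketch using only the figures from the list):

    # Lise: node breakdown and total core count from the list above.
    nodes = 1236 + 32 + 2          # standard + large + huge
    cores_per_node = 2 * 48        # 2 CPUs x 48 cores (Platinum 9242)

    print(nodes)                   # 1270, as stated
    print(nodes * cores_per_node)  # 121920, matching the quoted total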

 

Emmy in Göttingen

At the Göttingen University site, the HLRN-IV phase 1 system named Emmy (for Emmy Noether) has been in operation since October 2018. In October 2020 the second phase was added; both phases are itemized below, and the totals are cross-checked in the sketch after the lists.


The compute nodes of HLRN-IV phase 1 in Göttingen.

  • 974 nodes added in phase 2
    • 956 nodes with 384 GB memory (standard node)
    • 16 nodes with 768 GB memory (large node)
    • 2 nodes with 1.5 TB memory (huge node)
    • per node 2 CPUs and 1 Intel Omni-Path host fabric adapter
    • per CPU 1 Intel Cascade Lake Platinum 9242 (CLX-AP), 48 cores
  • 448 compute nodes of phase 1
    • 432 nodes with 192 GB memory (medium node)
    • 16 nodes with 768 GB memory (large node)
    • per node 2 CPUs and 1 Intel Omni-Path host fabric adapter
    • per CPU 1 Intel Skylake Gold 6148, 20 cores
    • 1 x 480 GB local SSD
  • 1 GPU node
    • 2 x Intel Skylake Gold 6148 CPUs (40 cores per node)
    • 192 GB Memory
    • 1 x 480 GB local SSD
    • 1 x Intel Omni-Path host fabric adapter
    • 4 x NVIDIA Tesla V100 32GB
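
The same kind of cross-check for Emmy and for the combined system, using only the figures above, confirms the "more than 200,000 cores" statement from the introduction (again a Python sketch):

    # Emmy: per-phase node and core counts from the lists above.
    phase2_nodes = 956 + 16 + 2            # 974, as stated
    phase2_cores = phase2_nodes * 2 * 48   # Cascade Lake 9242: 48 cores/CPU

    phase1_nodes = 432 + 16                # 448, as stated
    phase1_cores = phase1_nodes * 2 * 20   # Skylake 6148: 20 cores/CPU

    gpu_cores = 2 * 20                     # the single GPU node

    emmy_cores = phase2_cores + phase1_cores + gpu_cores
    print(emmy_cores)             # 111464
    print(emmy_cores + 121_920)   # 233384 -> more than 200,000 in total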

 


Contact persons at Kiel University

If you are interested in using the HLRN, the following employees of the Computing Centre will provide you with further information: