HLRN

Students: no | Employees: yes | Faculties: yes | Student unions: no


For particularly compute- and memory-intensive calculations, members of Kiel University can use the HLRN supercomputing system within the framework of the North German Supercomputing Alliance (HLRN). The alliance was formed in 2001 and is a joint project of the seven North German states Berlin, Brandenburg, Bremen, Hamburg, Mecklenburg-Vorpommern, Niedersachsen and Schleswig-Holstein.

The HLRN alliance jointly operates a distributed supercomputer system hosted at the sites Georg-August-Universität Göttingen and Zuse-Institut Berlin (ZIB). In September 2018, phase 1 of the HLRN-IV system, supplied by Atos/Bull, was put into operation. After the successful installation of phase 2, the complete HLRN-IV system will comprise more than 200,000 cores with a total peak performance of about 16 PFlop/s.
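
For orientation only (this estimate does not appear on the HLRN pages): the processors used in HLRN-IV support AVX-512 with two FMA units per core, i.e. up to 32 double-precision floating-point operations per core and cycle. Assuming a clock of roughly 2.3 GHz, the quoted peak performance can be checked as follows:

```latex
% Back-of-the-envelope peak estimate
% (assumptions: ~2.3 GHz clock, 32 DP flop per cycle per core)
\[
  P_{\mathrm{core}} \approx 2.3\ \mathrm{GHz} \times 32\ \tfrac{\mathrm{flop}}{\mathrm{cycle}}
                    \approx 73.6\ \mathrm{GFlop/s}
\]
\[
  P_{\mathrm{system}} \approx 200{,}000 \times 73.6\ \mathrm{GFlop/s}
                      \approx 14.7\ \mathrm{PFlop/s}
\]
```

With more than 200,000 cores this gives at least about 15 PFlop/s, the same order as the quoted 16 PFlop/s; the exact figure depends on the clock rates and on the final phase 2 configuration.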


 

REMARK: For the resources currently available at the HLRN, please see the web pages of HLRN.
For events and courses, see the News Center.
See also the HLRN documentation.
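
Compute time on the HLRN-IV systems is used through the Slurm batch system, as described in the HLRN documentation. The following is only a minimal sketch of a batch job, not an official template: sbatch accepts any script with a shebang line, so a small Python body is used here; the partition name standard96 and the program path ./my_mpi_program are assumptions and have to be replaced by the values from the HLRN documentation and by your own executable.

```python
#!/usr/bin/env python3
#SBATCH --job-name=hlrn-example      # job name shown in the queue
#SBATCH --partition=standard96       # assumed name of the standard Lise CPU partition
#SBATCH --nodes=2                    # number of compute nodes
#SBATCH --ntasks-per-node=96         # 96 cores per Lise node (2 x 48-core CPUs)
#SBATCH --time=01:00:00              # wall-clock limit

# Minimal job body: launch a (hypothetical) MPI program on the allocation via srun.
import subprocess

subprocess.run(["srun", "./my_mpi_program"], check=True)
```

Such a script would be submitted with sbatch from an HLRN login node; resource limits and accounting details are project specific.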

 

Lise at ZIB

The HLRN complex at ZIB in Berlin is named after Lise Meitner and contains 1146 compute nodes with a total of 110,016 compute cores (see the consistency check below the hardware list).


HLRN-IV system Lise in Berlin, photo: ITMZ | University of Rostock.

  • 1146 compute nodes
    • 1112 nodes with 384 GB memory (standard node)
    • 32 nodes with 768 GB memory (large node)
    • 2 nodes with 1.5 TB memory (huge node)
    • per node 2 CPUs + 1 Intel Omni-Path host fabric adapter
    • per CPU 1 Intel Cascade Lake Platinum 9242 (CLX-AP), 48 cores
  • Omni-Path interconnect configuration
    • 2 x 1162 port OPA100 director switches
    • 54 x 48 port edge switches
    • fat tree topology
    • 14 TB/s bisection bandwidth
    • 1.65 μs maximum latency
  • 9 login nodes
    • 1 node = 2 CPUs + 384 GB memory
    • 1 CPU = Intel Cascade Lake Silver 4210 (CLX), 10 cores (20 cores per node)
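
As a consistency check (not part of the HLRN specification, just arithmetic on the figures listed above), the node and core totals quoted for Lise follow directly from the node breakdown; the bisection-bandwidth figure is also reproduced under one plausible reading of the 100 Gbit/s Omni-Path link speed:

```python
# Recompute the Lise totals from the node breakdown listed above.

standard_nodes = 1112            # 384 GB nodes
large_nodes = 32                 # 768 GB nodes
huge_nodes = 2                   # 1.5 TB nodes

nodes = standard_nodes + large_nodes + huge_nodes
cores = nodes * 2 * 48           # 2 CPUs per node, 48 cores per Platinum 9242

print(nodes)                     # 1146 compute nodes
print(cores)                     # 110016 compute cores

# Rough reading of the interconnect figure (assumption: the quoted bisection
# bandwidth corresponds to 1146 adapters at 100 Gbit/s = 12.5 GB/s each).
bisection_tb_s = nodes * 12.5 / 1000
print(round(bisection_tb_s, 1))  # ~14.3 TB/s, close to the quoted 14 TB/s
```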

 

Emmy in Göttingen

At the Göttingen University site, the HLRN-IV phase 1 system named Emmy (after Emmy Noether) has been in operation since October 2018; its configuration is listed below, and a short sketch after the list derives the resulting core count.


The compute nodes of HLRN-IV phase 1 in Göttingen.

  • 448 compute nodes
    • 432 nodes with 192 GB memory
    • 16 nodes with 768 GB memory
    • per node 2 CPUs and 1 Intel Omni-Path host fabric adapter
    • per CPU 1 Intel Skylake Gold 6148, 20 cores
    • 1 x 480 GB SSD
  • 1 GPU node
    • 2 x Intel Skylake Gold 6148 CPUs (40 cores per node)
    • 192 GB Memory
    • 1 x 480 GB SSD
    • 1 x Intel Omni-Path host fabric adapter
    • 4 x NVIDIA Tesla V100 32GB
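
The totals for Emmy phase 1 are not quoted above, but they follow directly from the listed configuration; the short sketch below uses only the numbers from the list:

```python
# Derive Emmy phase 1 totals from the configuration listed above.

compute_nodes = 432 + 16         # 192 GB nodes + 768 GB nodes
cores_per_node = 2 * 20          # 2 Skylake Gold 6148 CPUs, 20 cores each

print(compute_nodes)                   # 448 compute nodes
print(compute_nodes * cores_per_node)  # 17920 CPU cores (GPU node not counted)
```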

 


Contact persons at Kiel University

If you are interested in using the HLRN, the following employees of the Computing Centre will provide you with further information: