Horseshoe9

The Horseshoe9 cluster entered production on 01/07/2014 and will reach its end of life on 01/07/2017.

Conceptual overview

  • 24 computing nodes (480 CPU cores, 64512 GPU cores, 3 TB memory)
  • Gigabit Ethernet and 40 Gbps QDR Infiniband interconnect
  • 1x front-end machine
  • 1x virtual batch and scheduling server
  • Theoretical peak performance: 44.125 TFlops DP

Hardware

Infrastructure

Front-end Server
Dell c8220 server, 2x 2.8 GHz Intel Ivy Bridge 10-core CPUs (E5-2680v2), 64 GB RAM, 200 GB SSD, Gigabit Ethernet, QDR Infiniband.
Switches
1x Dell PowerConnect 5548, 48 port gigabit switch
1x Mellanox IS5030, 36 port QDR Infiniband switch

Cluster nodes

Computing nodes
7x Dell c8220 servers, 2x 2.8 GHz Intel Ivy Bridge 10-core CPUs (E5-2680v2), 128 GB RAM, 200 GB SSD, Gigabit Ethernet and QDR Infiniband.

5x Dell c8220 servers, 2x 2.8 GHz Intel Ivy Bridge 10-core CPUs (E5-2680v2), 256 GB RAM, 200 GB SSD, Gigabit Ethernet and QDR Infiniband.

12x Dell c8220x servers, 2x 2.8 GHz Intel Ivy Bridge 10-core CPUs (E5-2680v2), 2x Nvidia K20x GPUs, 64 GB RAM, 200 GB SSD, Gigabit Ethernet and QDR Infiniband.

Express node
1x Dell c8220x server, 2x 2.8 GHz Intel Ivy Bridge 10-core CPUs (E5-2680v2), 1x Nvidia K20x GPU, 64 GB RAM, 200 GB SSD, Gigabit Ethernet and QDR Infiniband.

Software

Operating system on nodes and frontend
CentOS 6.5 on a 2.6.32 kernel
OFED 3.5
Batch- and scheduling software
CentOS 6.5 on a 2.6.32 kernel
SLURM 14.03
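Jobs are submitted through SLURM. Below is a minimal sketch of a batch script; the node count, walltime and file names are illustrative placeholders rather than values taken from the actual queue configuration. A script is submitted with sbatch and the queue can be inspected with squeue.

    #!/bin/bash
    #SBATCH --job-name=mpi_test        # job name shown by squeue
    #SBATCH --nodes=2                  # number of nodes (placeholder)
    #SBATCH --ntasks-per-node=20       # one MPI rank per core on a 20-core node
    #SBATCH --time=01:00:00            # walltime limit (placeholder)
    #SBATCH --output=mpi_test_%j.out   # output file, %j expands to the job id

    # Start the MPI program on the allocated nodes with the installed OpenMPI
    mpirun ./hello_mpi
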
Compilers
GCC 4.4.7 (default); GCC 4.9 available via the module system
Intel C++ and Fortran v. 12.0.2
Applications and Libraries
OpenMPI 1.6.4
Intel MKL 10.3.2
AMD ACML 5.2.0
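
As a starting point for the installed OpenMPI and GCC toolchain, a minimal MPI program is sketched below. It only uses the standard MPI C API; compiling it with OpenMPI's mpicc wrapper (mpicc hello_mpi.c -o hello_mpi) produces the hello_mpi binary referenced in the batch script sketch above. The file and binary names are arbitrary examples.

    #include <stdio.h>
    #include <mpi.h>

    /* Minimal MPI check: each rank reports its rank, the total number of
       ranks and the node it runs on, which makes it easy to verify that a
       job really spans the requested nodes. */
    int main(int argc, char **argv)
    {
        int rank, size, namelen;
        char nodename[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(nodename, &namelen);

        printf("Rank %d of %d running on %s\n", rank, size, nodename);

        MPI_Finalize();
        return 0;
    }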