Horseshoe6

The Horseshoe6 cluster went into production in April 2010 and reached its end of life in December 2014.

Conceptual overview

  • 264 computing nodes (2104 CPU cores, 168 TB user disk space)
  • 40 Gbps QDR InfiniBand interconnect, 2:1 oversubscribed
  • Gigabit Ethernet
  • 1x front-end machine
  • 1x virtual batch and scheduling server
  • Theoretical peak performance: 44.7 TFlops
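
The peak figure appears to follow the usual cores × clock × flops-per-cycle product. The factor of 8 flops per cycle per core below is inferred from the published number and is not stated on this page:

  2104 cores × 2.66 GHz × 8 flops/cycle ≈ 44.8 TFlops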

Hardware

Infrastructure

Front-end Server
IBM iDataPlex dx360, 2x 2.66 GHz Intel Nehalem-EP 4-core CPUs (X5550), 24 GB RAM, 1x 500 GB HDD (7200 rpm, 8 MB buffer, SATA 300), 2x Gigabit Ethernet, 1x QDR InfiniBand
Switches
8x Blade RackSwitch G8000 Gigabit Ethernet switches
11x QLogic 12200 InfiniBand edge switches with 36x 40 Gbps ports each
4x QLogic 12200 InfiniBand core switches with 36x 40 Gbps ports each
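
The 2:1 oversubscription quoted in the overview presumably describes how the ports on each 36-port edge switch are split between node links and core uplinks. The split below is an assumption consistent with the counts above, not a detail taken from this page:

  per edge switch: 24 node-facing ports + 12 core uplinks (24:12 = 2:1)
  11 edge switches × 24 = 264 node ports; 11 × 12 = 132 uplinks ≤ 4 × 36 = 144 core ports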

Cluster nodes

Computing nodes
240x IBM iDataPlex dx360 servers with 2x 2.66 GHz Intel Nehalem-EP CPUs (X5550), 24 GB RAM, 500 GB HDD (7200 rpm, 8 MB buffer, SATA 300), 2x Gigabit Ethernet, QLogic QLE7340 QDR InfiniBand (4x) adapter

24x IBM iDataPlex dx360 servers with 2x 2.66 GHz Intel Nehalem-EP CPUs (X5550), 48 GB RAM, 2x 1 TB HDD (7200 rpm, 8 MB buffer, SATA 300), 2x Gigabit Ethernet, QLogic QLE7340 QDR InfiniBand (4x) adapter

Software

Operating system on nodes and frontend
CentOS 5.4 on a 2.6.18 kernel
QLogic OFED 1.4.2
Operating system on infrastructure server
CentOS 5.4 on a 2.6.18 kernel
Batch and scheduling software
Torque 2.4.4
MAUI 3.2.6
Compilers
GCC 4.1.2 and 4.4.0
Intel C++ and Fortran v. 11.1
Applications and Libraries
OpenMPI 1.3.2
Intel MKL v. 10.2
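
As a concrete illustration of how the toolchain above fits together, a minimal MPI program for this stack could look like the sketch below. It is illustrative only, not site documentation; the build and launch commands in the comments assume the standard OpenMPI wrapper names (mpicc, mpirun).

/* hello_mpi.c -- minimal MPI example (illustrative sketch only).
 * Typical build and launch with the OpenMPI wrappers listed above:
 *   mpicc hello_mpi.c -o hello_mpi
 *   mpirun -np 16 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime           */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank (ID) of this process       */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of ranks launched  */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down the MPI runtime       */
    return 0;
}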