COLLEGE OF SCIENCE, ENGINEERING AND TECHNOLOGY

High Performance Computing Lab

DEPARTMENT OF PHYSICS

TECH 138

The High Performance Computing Center (HPCC) and Laboratory serve two important student communities. Computer Science majors learn how to assemble, set up, run, and administer state-of-the-art computer equipment. Students in the core STEM disciplines, such as physics, engineering, mathematics, and chemistry, learn how to write parallel codes that tackle challenging problems requiring extensive computational analysis, simulation, and visualization (including 3D), which are critical to basic physics research (e.g., many-body physics, materials research, biophysics, quantum field theory, fluid dynamics, cosmology) and applied research (e.g., geophysics, seismology, meteorology, engineering, tomography). The HPCC is hosted in a 672-square-foot data center room and powered by a 205-volt circuit with a total deliverable power of 88 kW. It is cooled by three 20-ton air-conditioning units and protected by a Sapphire fire suppression system.

The HPCC cluster has 54 compute nodes with dual quad-core and 12-core hyper-threaded Intel Xeon processors, amassing a total of 944 cores and capable of running 944 independent computing threads. The cluster has 892 GB of RAM in total and 30 TB of storage capacity, comprising local hard drive space and three NAS storage units. The cluster is accessed through a primary front-end node and a secondary front end. All nodes communicate over a 1 Gb management network and a high-speed, low-latency 10 Gb copper production network.

The management network is served by three Cisco switches, while the production network is served by two high-performance Arista switches. The cluster is administered centrally with the Rocks cluster management software and is provisioned with a variety of software stacks: compilers (C, C++, Fortran), parallel programming libraries (MPI, OpenMP), the Condor workload manager, and domain-specific software such as R, Mathematica, Matlab, Comsol, and MrBayes. The peak performance of the cluster, as measured by the Linpack benchmark, is 1.75 TFlops. The cluster will be extended in the spring of 2014 with 8 new “super-node” units based on an architecture that combines 4 Intel Xeon processors and 4 Intel Xeon Phi coprocessors per node, with the goal of maximizing the computing density of each node while reducing the power consumption per flop. The theoretical peak performance of each new node is estimated at 4 TFlops.

Initial funding for the HPCC was made possible through awards from the National Science Foundation and the Department of Defense. Both student communities benefit in their respective areas through improved employment prospects in positions requiring high-performance computing experience. Computer Science majors learn the UNIX operating system and basic system administration skills, including managing SQL databases; these skills are transferable and in demand in the job market. Within the greater Houston metropolitan area, energy companies place a high premium on students with HPC skills and experience.