nscc.colby.edu is our production HPC cluster.
The Natural Science Computer Cluster is a twenty-seven node, 714-CPU shared resource with a combined estimated computational throughput of 2.93 teraflops. Each of the nodes has between 96 and 160 gigabytes of memory, with 66 terabytes of onboard high-performance (NVMe and SSD) data storage and 64 terabytes of deep storage. Additionally, the machine has direct network access to another 140 terabytes of NAS for long-term storage and retrieval. The cluster is based on the Red Hat 7/CentOS 7 Linux operating system and uses Docker containers. The network backplane is 40 Gigabit Ethernet on an isolated Juniper QFX5200. Node management is handled by Ganglia, Atom, and Ansible, all open source platforms.
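As a rough illustration of scale, the aggregate figures above work out to an average of about 26 CPUs and roughly 108 GFLOPS per node, or about 4 GFLOPS per CPU. The short Python sketch below simply reproduces that arithmetic from the numbers quoted above; it is illustrative only, since individual nodes vary (see below).

```python
# Back-of-the-envelope averages from the cluster figures quoted above.
# Illustrative only; individual nodes differ in CPUs, disk, and RAM.

nodes = 27      # node count
cpus = 714      # total CPUs
tflops = 2.93   # estimated aggregate throughput, teraflops

avg_cpus_per_node = cpus / nodes          # ~26.4 CPUs per node
gflops_per_cpu = tflops * 1000 / cpus     # ~4.1 GFLOPS per CPU
gflops_per_node = tflops * 1000 / nodes   # ~108.5 GFLOPS per node

print(f"average CPUs per node:   {avg_cpus_per_node:.1f}")
print(f"average GFLOPS per CPU:  {gflops_per_cpu:.1f}")
print(f"average GFLOPS per node: {gflops_per_node:.1f}")
```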
It is in continuous use, with availability offered to Natural Science Division faculty and their research students upon request. Each node is configured with a specific need in mind, so some have a much greater number of CPUs, while others have much larger disk arrays and more directly available RAM.
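Because the nodes are intentionally heterogeneous, it can be useful to check what a particular node offers once you are logged into it. The following is a minimal, hypothetical Python sketch (not a supplied cluster tool); it assumes only the standard Linux /proc filesystem that CentOS 7 provides.

```python
import os

def node_summary():
    """Print the CPU count and total memory of the node you are logged into.

    Assumes a Linux /proc filesystem (present on the CentOS 7 nodes);
    this is an illustrative helper, not part of the cluster software.
    """
    cpus = os.cpu_count()

    mem_gb = None
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                mem_kb = int(line.split()[1])   # value is reported in kB
                mem_gb = mem_kb / (1024 ** 2)
                break

    print(f"CPUs available: {cpus}")
    print(f"Total memory:   {mem_gb:.1f} GB")

if __name__ == "__main__":
    node_summary()
```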
For a list of software on nscc, look here.
Many applications are available for computational work on this system, including Gaussian 09, GaussView 5, GROMACS, VMD, OpenMPI, MATLAB, BLAST, R, Trinity (2.0.4-2.0.8), MrBayes, Geneious, and various Python libraries.
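As one example of combining these tools, the sketch below is a minimal MPI "hello world" in Python launched through OpenMPI. It assumes the mpi4py package is among the installed Python libraries, which is an assumption here rather than something confirmed by the list above.

```python
# Minimal MPI example, assuming the mpi4py package is installed
# (an assumption; check the software list for what is actually available).
# Launch with OpenMPI, e.g.:  mpirun -np 4 python mpi_hello.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()               # this process's ID within the communicator
size = comm.Get_size()               # total number of MPI processes
name = MPI.Get_processor_name()      # hostname of the node running this rank

print(f"Hello from rank {rank} of {size} on {name}")
```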
NSCC received the following upgrades over the summer of 2016:
- Upgrade head node and node 2 to Red Hat 7.4
- Update all software compiled for Red Hat 6 to Red Hat 7
- Install new secondary disk array and migrate storage to it
- Upgrade disks in the primary disk array from 1.7 TB of 1500 RPM drives to 11.3 TB of SSDs
- Move nscc to new cabinet with sufficient power supplies.
- Install new blade enclosure, configure network within unit.
- Install new blades, fully populating original enclosure (16) and starting on new unit (3)
- Move the backplane network from Colby infrastructure to our own network switch, and configure existing nodes and VLANs for full functionality.
- Evaluate the new configuration and adjust for PIs (Taylor, Noh, Angelini, Thamator, Tilden, Krumm, Peck, Maxwell, and O’Brien)
- Do all of the above without major outages, allowing summer research to continue with minimal interruption.