nscc.colby.edu is our production HPC cluster.

The Natural Science Computer Cluster is a thirty-two-node, 1,208-processor shared resource. Additionally, one node houses a pair of NVIDIA P100 GPUs. Its combined estimated computational throughput is 4.64 teraflops. Each node has between 96 gigabytes and 1.3 terabytes of memory, with 66 terabytes of onboard high-performance (NVMe and SSD) data storage and 64 terabytes of deep storage. Additionally, the machine has direct network access to another 140 terabytes of NAS for long-term storage and retrieval. The cluster runs the Red Hat 7/CentOS 7 Linux operating system with Docker and Singularity container support. The network backplane is 40-gigabit Ethernet on an isolated, redundant pair of Juniper 4650 switches. Node management is handled by Ganglia, Atom, and Ansible, all open-source platforms.
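
For anyone scheduling work onto the GPU node, a quick check that the P100s are actually visible can save a failed run. The snippet below is a minimal sketch, assuming a reasonably recent Python 3 and that NVIDIA's nvidia-smi utility is on that node's PATH; the file name gpu_check.py is just an illustration.

    # gpu_check.py: a minimal sketch for confirming the GPUs are visible on the GPU node.
    # Assumes a reasonably recent Python 3 and that NVIDIA's nvidia-smi utility is on the PATH.
    import subprocess

    try:
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
            capture_output=True, text=True,
        )
    except FileNotFoundError:
        print("nvidia-smi not found; this is probably not the GPU node")
    else:
        if result.returncode == 0:
            print("GPUs visible:\n" + result.stdout.strip())
        else:
            print("nvidia-smi ran but reported no usable GPU:\n" + result.stderr.strip())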

It is in continuous use, with availability made to Natural Science Division faculty and their research students upon request. Each node is configured with a specific need in mind, so some have much greater numbers of CPUs, while others have much larger disk arrays and more directly available RAM.
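
Because the nodes differ, it can help to check what the node you land on actually offers before sizing a job. The snippet below is a minimal sketch that uses only the Python standard library, so it should run under any recent Python 3 on the cluster; the file name node_info.py is just an illustration.

    # node_info.py: a minimal sketch for checking the resources of the current node.
    # Uses only the Python standard library; reads MemTotal from /proc/meminfo (Linux).
    import os

    cpus = os.cpu_count()                      # logical CPUs visible to this process
    with open("/proc/meminfo") as f:
        mem_kb = int(f.readline().split()[1])  # first line is "MemTotal: <kB> kB"

    print(f"This node reports {cpus} CPUs and {mem_kb / 1024 ** 2:.1f} GB of RAM")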

For a list of software on nscc, look here.
Many applications are available for computational work on this system, including Gaussian 09, GaussView 5, GROMACS, VMD, OpenMPI, and MATLAB, as well as BLAST, R, Trinity (2.0.4-2.0.8), MrBayes, Geneious, and various Python versions and libraries.
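
As a small illustration of how the MPI stack and Python fit together, the script below prints one line per MPI rank. It assumes the mpi4py package is installed against one of the OpenMPI builds; mpi4py itself is not in the list above, so treat this as a sketch rather than a supported recipe.

    # mpi_hello.py: a minimal MPI example; assumes mpi4py is installed against
    # one of the cluster's OpenMPI builds (an assumption, not confirmed above).
    from mpi4py import MPI

    comm = MPI.COMM_WORLD       # communicator covering every launched process
    rank = comm.Get_rank()      # this process's rank, 0 through size-1
    size = comm.Get_size()      # total number of MPI processes

    print(f"Hello from rank {rank} of {size}")

Launched with OpenMPI's mpirun, for example mpirun -np 8 python mpi_hello.py, each rank prints its own line.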

NSCC received the following upgrades over the summer of 2019:

  1. Upgrade the head node and node 2 to Red Hat 7.7.
  2. Update all software compiled for Red Hat 6 to Red Hat 7.
  3. Install a new secondary disk array and migrate storage to it.
  4. Upgrade the disks in the primary disk array from 1.7 TB of 1500 rpm drives to 11.3 TB of SSDs.
  5. Move nscc to a new cabinet with sufficient, redundant power supplies.
  6. Install a new blade enclosure and configure the network within the unit.
  7. Install new blades, fully populating the original enclosure (16) and starting on the new unit (3).
  8. Move the backplane network from Colby infrastructure to our own network switch; configure existing nodes and VLANs for full functionality.
  9. Evaluate the new configuration and adjust for PIs (Taylor, Noh, Angelini, Thamattoor, Tilden, McGrath, Peck, Maxwell, and O'Brien).
  10. Do all of the above without major outages, allowing summer research to continue with minimal interruptions.