Linux Parallel Computing Clusters

A computational cluster is a collection of computers networked together to form a single high-performance computing (HPC) system. A number of HPC resources are available to PSU faculty, staff, students, and their collaborators who are interested in running existing applications or developing new parallel code. Research Computing (RC) is working with PSU research faculty to increase the number of nodes available for tackling large problems and to maximize our parallel computing capability.

RC has a computational cluster for research computing projects that require parallel processing with the Message Passing Interface (MPI). Parallel computing clusters can deliver a substantial performance increase at a much lower hardware cost than a traditional symmetric multiprocessing system. Parallel computing is already used for purposes such as weather forecasting, phylogenetic tree building, fluid systems modelling, and molecular interaction modelling.
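
To give a concrete sense of the MPI programming model these clusters support, here is a minimal sketch in C in which every process reports its rank and the node it is running on. It assumes an MPI implementation (such as Open MPI or MPICH) and its mpicc compiler wrapper are available; it is an illustration, not a Coeus- or Hydra-specific recipe.

    /* hello_mpi.c - minimal MPI sketch: every rank reports itself.
       Assumes an MPI library (e.g., Open MPI or MPICH) is installed;
       build with the mpicc wrapper and launch with mpirun or the
       cluster's scheduler. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, namelen;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                 /* start the MPI runtime     */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank       */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
        MPI_Get_processor_name(host, &namelen); /* node this rank runs on    */

        printf("Rank %d of %d running on %s\n", rank, size, host);

        MPI_Finalize();                         /* shut the runtime down     */
        return 0;
    }

A program like this is typically compiled with mpicc and launched across nodes through the cluster's scheduler; the exact modules and launch commands vary by system, so check with RC for the specifics on a given cluster.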

These systems can also be used for non-parallelized applications: a single isolated process run on a compute node, or many copies of the same process (for instance, for statistical validation).
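
As a hedged sketch of that replicated-run pattern, the C fragment below has each MPI rank run the same trial with a different seed and then pools the results on rank 0. The simulate() function and the seeding scheme are hypothetical placeholders standing in for a real model.

    /* replicates.c - sketch of running many copies of the same computation,
       one per MPI rank, and pooling the results for statistical validation.
       simulate() is a hypothetical placeholder for a real model. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical trial: returns one outcome for a given random seed. */
    static double simulate(unsigned int seed)
    {
        srand(seed);
        return rand() / (double)RAND_MAX;
    }

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Every rank runs the same trial, seeded differently. */
        double result = simulate(1234u + (unsigned int)rank);

        /* Combine the replicates: sum on rank 0, then report the mean. */
        double sum = 0.0;
        MPI_Reduce(&result, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("Mean of %d replicates: %f\n", size, sum / size);

        MPI_Finalize();
        return 0;
    }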

OIT Systems

Coeus HPC Cluster

The Coeus HPC cluster is the cornerstone of a new computational infrastructure supporting the Portland Institute for Computational Science (PICS) and Research Computing at Portland State University. Named for the Titan god of knowledge and intellect in Greek mythology, Coeus represents the inquisitive mind and personifies our research and educational goals.

This computational cluster is designed to address a broad range of computational requirements. Coeus is adept at handling parallel high-performance computing (HPC) workloads, and it also supports “traditional” single-threaded and multithreaded applications such as MATLAB, R, and other standard scientific software. Provided and built by Advanced Cluster Technologies, the cluster has an estimated peak performance of over 100 TFLOPs, Intel Omni-Path high-performance networking (100 Gbps), and approximately 190 TB of scratch storage.

  • Two login nodes and two management nodes for hosting cluster management software and the system scheduler
    • Dual Intel Xeon E5-2630 v4, 10 cores @ 2.2 GHz
    • 64 GB 2133 MHz RAM
  • 128 compute nodes each with 20 cores and 128 GB RAM
  • 12 Intel Xeon Phi processor nodes each with 64 cores and 96 GB RAM
    • Intel Xeon Phi 7210, 64 cores @ 2.2 GHz
    • 96 GB 2133 MHz RAM
    • 200 GB SSD drive
  • 2 large-memory compute nodes each with 20 cores and 128 GB RAM
  • Data Transfer Node to support high-bandwidth data transfers
    • Dual Intel Xeon E5-2650 v4, 12 cores @ 2.2 GHz
    • 256 GB 2133 MHz RAM
    • ~40 TB local disk storage in a RAID 6 array
  • Intel Omni-Path high-performance (100 Gbps) network fabric
    • 1 Gb Ethernet cluster management and IPMI networks
  • Two storage servers hosting local home directories and the scratch storage volume
    • Approx. 190 TB NFS scratch storage
    • Dual Intel Xeon E5-2650 v4, 12 cores @ 2.2 GHz
    • 768 GB of 1866 MHz Registered ECC DDR4 memory
    • 32 x 8 TB SATA drives in a RAID 6 configuration
    • 2 TB NVME drive
    • Warm spare configuration
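
As a rough sanity check on the 100+ TFLOPs figure quoted above, the sketch below multiplies nodes × cores × clock × floating-point operations per cycle for the 128 standard compute nodes. The 16 double-precision FLOPs per cycle per core is our assumption (AVX2 with fused multiply-add), not a published spec, and the Xeon Phi and large-memory nodes add further capacity beyond this estimate.

    /* peak_estimate.c - back-of-the-envelope theoretical peak for the 128
       standard compute nodes only.  The 16 double-precision FLOPs per cycle
       per core is an assumption (AVX2 with FMA); real application
       performance will be lower. */
    #include <stdio.h>

    int main(void)
    {
        const double nodes          = 128.0;
        const double cores_per_node = 20.0;   /* dual 10-core CPUs per node */
        const double clock_ghz      = 2.2;
        const double flops_per_cyc  = 16.0;   /* assumed AVX2 + FMA figure  */

        /* GFLOPs = nodes * cores * GHz * FLOPs/cycle; /1000 gives TFLOPs */
        double tflops = nodes * cores_per_node * clock_ghz * flops_per_cyc / 1000.0;
        printf("Estimated peak: %.1f TFLOPs (standard compute nodes only)\n", tflops);
        return 0;
    }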

PICS and the Coeus cluster were made possible through support from the National Science Foundation (NSF), the Army Research Office (ARO), and Portland State University.

Hydra HPC Cluster

The Hydra HPC cluster is RC's former primary computational cluster. A broad range of software is available on this system.

  • 14 compute nodes, each with 2 x Intel Xeon E5-2650 CPUs (16 cores total @ 2.00 GHz) and 64 GB RAM
  • 1 head node with 2 x Intel Xeon E5-2650 CPUs (16 cores total @ 2.00 GHz) and 64 GB RAM
  • 28 TB of direct-attached scratch (temporary) storage
    • /scratch with 6.6 TB scratch storage
    • /scratch2 with 22 TB scratch storage
  • Portland Group Compiler
  • 10 GbE private network with low latency Cisco 3548 switch
  • Accessible on campus only. To view the current load, go to http://hydra.rc.pdx.edu/ganglia/

Other Systems

Gaia HPC Cluster

The PSU Center for Climate and Aerosol Research (CCAR) maintains the Gaia HPC cluster, which is available for research use.

  • 92 compute nodes each with 2 x Intel Xeon E5-2620 v2 CPUs (6 cores each @ 2.10 GHz) and 64 GB RAM
  • DDR InfiniBand internal networking
  • Estimated total performance of 18.6 TFLOPs

For more information about using the Gaia cluster, contact CCAR.

XSEDE Network

XSEDE (the Extreme Science and Engineering Discovery Environment) is a national network of high-performance computing resources. If you require access to a larger system to scale your computational jobs, time on these systems is available by application. Please contact the PSU XSEDE Campus Champion, William Garrick, if you have any questions.


Further Resources

Contact Research Computing to request access or assistance.