Research Computing has built two high-performance computing clusters for research projects that require parallel processing. Many complex computational problems can be solved more efficiently when broken into smaller problems that run in parallel; examples include weather forecasting, fluid-systems modelling, phylogenetic tree building, and modelling molecular interactions. With properly constructed software and the right problem, this approach can yield a massive performance increase over the same software running on a symmetric multiprocessing (SMP) system, at a much lower hardware cost.
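How much a job can gain from running across many cores is commonly estimated with Amdahl's law. The sketch below is a hypothetical illustration (not PSU software): it shows how the serial fraction of a program limits the speedup, even with all of a cluster's cores applied.

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Amdahl's law: speedup = 1 / ((1 - p) + p / n).

    parallel_fraction: share of the runtime that can run in parallel (0..1).
    n_workers: number of cores (or nodes) applied to the parallel part.
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# A job that is 95% parallelizable, spread across 184 cores:
print(round(amdahl_speedup(0.95, 184), 1))  # ≈ 18.1
```

Even with 184 cores, the 5% serial portion caps the speedup near 18×, which is why "properly constructed software and the right problem" matter as much as core count.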
Our parallel cluster is constructed from a number of powerful servers that run in parallel and communicate over a Gigabit Ethernet network. RC is working with PSU faculty to increase the number of nodes available for tackling large problems and to maximize our parallel computing capability.
The PSU clusters are open to students, faculty, and developers who are interested in developing, testing, or using MPI-enabled applications. To request access, email firstname.lastname@example.org.
Gravel Cluster - 23 Nodes / 184 Cores
- 23 eight-core compute nodes (2× quad-core Intel Xeon), 184 cores in total
- Each compute node has 8 GB of ECC SDRAM
- The head node is a quad-core machine with 4 GB of SDRAM
- Storage space on the local network
- PGI compilers
For more information about available software, please visit our Software page. If you need additional MPI-enabled software, email us at email@example.com and we will do our best to get it installed as well.
For help using research computing systems, contact Research Computing.