General Cluster Information
The Portland State University Rocks and Gravel clusters are configured to allow users to run MPI-enabled software and applications. Researchers who need to run MPI-enabled jobs requiring substantial hardware resources (number of CPUs, memory, storage, etc.) can use the resources provided by a PSU cluster. However, we expect users to run only MPI jobs on the clusters, so that the provided resources are fully utilized.
Message Passing Interface
The high-performance compute clusters offered by PSU utilize Message Passing Interface (MPI) libraries to allow communication between processes on different computers. To utilize these capabilities, software must be written to use the MPI libraries.
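As an illustration of what MPI-enabled code looks like, below is a minimal C program in which each process reports its rank. This is a generic sketch using the standard MPI C API, not a program installed on the PSU clusters; the file name and launch commands in the usage note are illustrative.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                /* start the MPI runtime */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut the runtime down cleanly */
    return 0;
}
```

Built with an MPI compiler wrapper (e.g. `mpicc hello.c -o hello`) and launched with a process manager (e.g. `mpirun -np 4 ./hello`), each of the four processes, possibly on different compute nodes, prints its own rank.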
From a more general perspective, the clusters can be used to improve software performance through parallelism. On one hand, this limits the type of problems that can be accelerated, because a problem must be separable into parallel computing tasks. On the other hand, parallelism offers advantages such as scalability and substantial performance gains, depending on how resources are allocated for the computation.
The performance difference between a parallel MPI job and a standard serial job depends on:
- how well the problem scales
- how parallelizable the problem is
- how well the resources are assigned and available when required
When deciding whether you need MPI-based computing, several questions should be considered. Once these are answered, it becomes easy to evaluate whether you need the power provided by high-performance computing. The most important question is whether the problem you are trying to solve would benefit from being parallelized. In general, if problems similar to yours have already been implemented with MPI, that is a good indication that your problem can be successfully parallelized. Note that an MPI implementation of a program requires code written specifically for it; only such dedicated source code can be compiled against the MPI libraries.
The utility of running MPI-enabled applications also depends on the problem data. Some data sets gain little or nothing from being processed in parallel. Some problems require a large number of small computational threads, while others need only a few high-performance threads. Both are valid HPC workloads; however, explicit MPI parallelism generally yields a larger performance increase than non-specific thread migration.
Contact Academic & Research Computing for additional assistance.