Computational Servers and Systems

Research Computing (RC) and Computing Infrastructure Services maintain a number of computational systems for academic research use. These systems are typically used for long-running, computationally intensive, and data-intensive processes, as well as workloads with specialized requirements, such as parallel computing environments supporting standard MPI libraries, GPGPU computing, or very large memory footprints. These servers provide access to a wide range of research software for both serial and parallel scientific computing.

Computational Server Specifications

Current RC computational servers:

  • Circe - Dell PowerEdge R720
    • 2 x Intel(R) Xeon(R) CPU E5-2665 with 16 cores @ 2.40GHz
    • 192 GB RAM
    • 10 TB local scratch (/disk/scratch/)
  • Hecate - Dell PowerEdge R720
    • 2 x Intel(R) Xeon(R) CPU E5-2690 with 16 cores @ 2.90GHz
    • 768 GB RAM
    • 10 TB local scratch (/disk/scratch/)
  • Agamede - Dell PowerEdge R730
    • 2 x Intel(R) Xeon(R) CPU E5-2695 v3 with 28 cores @ 2.30GHz
    • 256 GB RAM
  • Atlas - HP ProLiant DL380 G6 with Windows Server 2012
    • 2 x Intel(R) Xeon(R) CPU E5-2650 v2 with 16 cores @ 2.60GHz
    • 48 GB RAM
    • 2 TB scratch disk (D drive)
  • Hydra HPC cluster
    • 1 head node with 2 x Intel(R) Xeon(R) CPU E5-2650 with 16 cores @ 2.00GHz, 64 GB RAM
    • 14 computational nodes each with 2 x Intel(R) Xeon(R) CPU E5-2650 with 16 cores @ 2.00GHz and 64 GB RAM
    • 26.6 TB direct-attached scratch storage
    • 10GbE private network with low latency Cisco 3548 switch
    • To view the current load, visit http://hydra.rc.pdx.edu/ganglia/ (on-campus access only)
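
The local scratch disks listed above (e.g. /disk/scratch/ on Circe and Hecate) are intended for I/O-intensive intermediate data: staging input onto the local disk, computing there, and copying results back is typically much faster than working directly against network-mounted home directories. A minimal sketch of that pattern, assuming a generic POSIX shell (the job name and the /tmp fallback are illustrative, not RC conventions):

```shell
#!/bin/sh
# Sketch of a node-local scratch workflow: stage input onto the local
# scratch disk, compute there, copy results back, then clean up.
# On the RC servers the scratch root would be /disk/scratch (see the
# specs above); /tmp is used as a fallback only so the sketch runs anywhere.
SCRATCH_ROOT="${SCRATCH_ROOT:-/tmp}"
SCRATCH="$SCRATCH_ROOT/myjob-$$"      # "myjob" is an illustrative job name
mkdir -p "$SCRATCH"

# 1. Stage input data onto the fast local disk (placeholder input).
echo "sample input" > "$SCRATCH/input.dat"

# 2. Run the computation against local scratch (placeholder step:
#    upper-case the input).
tr '[:lower:]' '[:upper:]' < "$SCRATCH/input.dat" > "$SCRATCH/output.dat"

# 3. Copy results back to persistent storage and remove scratch files.
cp "$SCRATCH/output.dat" ./output.dat
rm -rf "$SCRATCH"
```

Scratch areas are typically not backed up and may be purged between jobs, so results should always be copied back to permanent storage before a job finishes.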

Other servers:

  • Deino test cluster (currently offline)
  • 36-node cluster with 576 cores, 128 GB RAM/node, and an Omni-Path interconnect (planned)

Further Resources

Contact Research Computing to request access to RC research servers.