OS HPC Cluster
Supported By Fluid Numerics and You
Getting Started
Get Onboarded
Once you have an account on the OS HPC Cluster, it's time to get onboarded! Use your fluidnumerics.cloud account to manage your SSH keys and other account information.
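As a rough sketch of what logging in looks like once your key is registered (the hostname and username below are placeholders; use the values provided during onboarding):

    # Generate an SSH key pair if you do not already have one, then add the
    # public key (~/.ssh/id_ed25519.pub) to your fluidnumerics.cloud account.
    ssh-keygen -t ed25519 -C "your.name@example.com"
    cat ~/.ssh/id_ed25519.pub

    # Log in to the cluster (placeholders shown; use your onboarding details).
    ssh <your-username>@<cluster-login-hostname>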
Load Software
The OS HPC Cluster comes with a number of compilers, MPI flavors, and HPC packages. Learn how to get started with Lmod environment modules or Docker & Singularity on our cluster!
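For the container side of things, a Singularity workflow might look like the sketch below; the image used here is just a public Docker Hub image for illustration, not a cluster-specific one. The Lmod workflow is sketched after the software list further down this page.

    # Pull a container image from Docker Hub into a local SIF file
    # (ubuntu:22.04 is only an example image).
    singularity pull ubuntu.sif docker://ubuntu:22.04

    # Run a command inside the container.
    singularity exec ubuntu.sif cat /etc/os-release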
Submit Jobs
The OS HPC Cluster uses the Slurm Workload Manager to schedule workloads. Learn how to run serial, parallel, and GPU-accelerated applications using interactive and batch workflows!
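As a sketch, a batch job is described in a small Slurm script and handed to sbatch. The partition and GRES names below follow the partition list later on this page, but the exact names and limits on the cluster may differ:

    #!/bin/bash
    # gpu-job.sh : a minimal batch-script sketch.
    # Partition and GRES names are assumptions based on this page.
    #SBATCH --job-name=hello-gpu
    #SBATCH --partition=v100-gpu
    #SBATCH --gres=gpu:1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=8
    #SBATCH --time=00:10:00

    # Report the GPU assigned to this job.
    nvidia-smi

Submit the script and check on it with:

    sbatch gpu-job.sh
    squeue -u $USER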
Cluster Overview
The OS HPC Cluster is a managed HPC cluster provided by Fluid Numerics and operated on Google Cloud Platform. Fluid Numerics manages this cluster for the OS Hackathon community to support collaborative coding events. This resource gives you the opportunity to experiment with modern HPC hardware and to gauge the viability of new hardware and software modalities.
If you come to an OS Hackathon event, you will be given no-cost, time-limited access to the cluster along with system administration support. If you need additional time on the cluster or want to experiment independently of a hackathon, you can become an OS HPC Cluster Corporate Member. Learn more.
Compute Partitions
The compute partitions listed below reflect the current system configuration. Increased community usage helps us justify growing the partition sizes and the variety of partitions made available. All system users have access to the following partitions (see the example commands after this list):
(50 nodes) v100-gpu - standard-8 ( 8 vCPU + 30 GB RAM; Intel® Broadwell/Haswell ) + 1 NVIDIA® Tesla® V100 GPU
(60 nodes) p100-gpu - standard-8 ( 8 vCPU + 30 GB RAM; Intel® Broadwell/Haswell ) + 1 NVIDIA® Tesla® P100 GPU
(2 nodes) n2d-standard-224 - ( 224 vCPU + 896 GB RAM; AMD® Epyc Rome )
(5 nodes) c2-standard-60 - ( 60 vCPU + 240 GB RAM; Intel® Cascade Lake )
(10 nodes) n1-standard-4 - ( 4 vCPU + 15 GB RAM; Intel® Broadwell/Haswell )
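To inspect these partitions from a login node, or to grab an interactive shell on one of them, something like the following sketch applies (the gpu:1 GRES name is an assumption about the cluster's Slurm configuration):

    # Show the partitions and the state of their nodes.
    sinfo

    # Start an interactive shell with 4 cores on a CPU partition.
    srun --partition=c2-standard-60 --ntasks=1 --cpus-per-task=4 --pty bash

    # Start an interactive shell with one GPU on the v100-gpu partition.
    srun --partition=v100-gpu --gres=gpu:1 --pty bash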
Software
The software listed below reflects the current system configuration. Please request any additional software dependencies you need on the cluster. All system users have access to the following software packages (see the Lmod example after this list):
Operating System : CentOS 7
Slurm 20.02 (scheduler & workload manager)
GCC 10.2.0 + OpenMPI 4.0.5
CUDA Toolkit 11.1
cuDNN 8
ROCm™ (4.1.1)
HIP/HIPFort
AOMP compiler (OpenMP 5.0 GPU offloading)
OpenCL
hipify-clang
Focal (OpenCL for Fortran)
ParaView
OpenFOAM
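As a sketch of picking up this software with Lmod and building an MPI code against it (the exact module names and versions are assumptions; run module avail to see what the cluster actually provides):

    # Discover the modules available on the cluster.
    module avail

    # Load a compiler, MPI, and the CUDA toolkit (module names are assumptions).
    module load gcc/10.2.0 openmpi/4.0.5 cuda/11.1

    # Confirm what is loaded and build an MPI program
    # (hello_mpi.c is a hypothetical source file).
    module list
    mpicc -O2 -o hello_mpi hello_mpi.c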