OS HPC Cluster
Supported By Fluid Numerics and You
Once you have an account on the OS HPC Cluster, it's time to get onboarded! Use your fluidnumerics.cloud account to manage your SSH keys and other account information.
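For example, you can generate an SSH key pair locally and register the public key through your fluidnumerics.cloud account. This is only a sketch: the key path is the common OpenSSH convention, and the login hostname below is a placeholder, not the cluster's actual address — use the hostname provided at onboarding.

```shell
# Generate an Ed25519 key pair (supported by OpenSSH 6.5+).
ssh-keygen -t ed25519 -C "your-email@example.com" -f ~/.ssh/oshpc_ed25519

# Print the public key; paste this into your fluidnumerics.cloud profile.
cat ~/.ssh/oshpc_ed25519.pub

# Once the key is registered, connect to the cluster login node.
# (login.example.com is a placeholder hostname.)
ssh -i ~/.ssh/oshpc_ed25519 your-username@login.example.com
```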
The OS HPC Cluster comes with a number of compilers, MPI flavors, and HPC packages. Learn how to get started with lmod environment modules or Docker & Singularity on our cluster!
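A typical lmod session looks like the following. The module names and versions here are illustrative assumptions — run `module avail` on the cluster to see what is actually installed.

```shell
# List every module the cluster provides.
module avail

# Search for a package by name across the full module hierarchy.
module spider openmpi

# Load a compiler and an MPI flavor (versions are illustrative).
module load gcc openmpi

# Show what is currently loaded.
module list

# For containerized workflows, pull a Docker image and convert it
# to a Singularity image file.
singularity pull docker://ubuntu:20.04
```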
The OS HPC Cluster uses the Slurm Workload Manager to schedule workloads. Learn how to run serial, parallel, and GPU accelerated applications using interactive and batch workflows!
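As a sketch of the two workflows, here is an interactive session request and a minimal batch script. The partition name, resource counts, and `./my_application` are placeholders — adjust them to your job.

```shell
# Interactive workflow: request one task and a pseudo-terminal shell.
srun --ntasks=1 --pty /bin/bash

# Batch workflow: write a submission script...
cat << 'EOF' > job.sh
#!/bin/bash
#SBATCH --partition=v100-gpu   # a GPU partition on this cluster
#SBATCH --ntasks=8
#SBATCH --gres=gpu:1           # request one GPU
#SBATCH --time=01:00:00

./my_application               # placeholder for your executable
EOF

# ...then submit it and watch the queue.
sbatch job.sh
squeue -u $USER
```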
The OS HPC Cluster is a managed HPC cluster provided by Fluid Numerics and operated on Google Cloud Platform. Fluid Numerics manages this cluster for the OS Hackathon community to support collaborative coding events. This resource gives you the opportunity to experiment with modern HPC hardware and to gauge the viability of new hardware and software modalities.
If you come to an OS Hackathon event, you will be given no-cost, time-limited access to the cluster along with system administration support. If you need additional time on the cluster or want to experiment independently of a hackathon, you can become an OS HPC Cluster Corporate Member. Learn more.
The compute partitions listed below reflect the current system configuration. Increased community usage helps us justify larger partitions and a wider variety of partition types. All system users have access to the following partitions:
(50 node) v100-gpu - standard-8 ( 8 vCPU + 30 GB RAM; Intel® Broadwell/Haswell ) + 1 Nvidia® Tesla® V100 GPU
(60 node) p100-gpu - standard-8 ( 8 vCPU + 30 GB RAM; Intel® Broadwell/Haswell ) + 1 Nvidia® Tesla® P100 GPU
(2 node) n2d-standard-224 ( 224 vCPU + 896 GB RAM; AMD® Epyc Rome )
(5 node) c2-standard-60 ( 60 vCPU + 240 GB RAM; Intel® Cascade Lake )
(10 node) n1-standard-4 ( 4 vCPU + 15 GB RAM; Intel® Broadwell/Haswell )
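To target a specific partition, pass its name to Slurm's `--partition` flag. The partition names below are assumed to match the names in the list above — confirm what is actually available with `sinfo`.

```shell
# Check which partitions exist and how many nodes are idle.
sinfo

# Run an MPI job on the Cascade Lake partition (2 nodes x 60 tasks;
# ./my_mpi_app is a placeholder for your executable).
srun --partition=c2-standard-60 --nodes=2 --ntasks-per-node=60 ./my_mpi_app

# Request a single P100 GPU for an interactive session.
srun --partition=p100-gpu --gres=gpu:1 --pty /bin/bash
```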
The software listed below reflects the current system configuration. Please request any additional software dependencies you need on the cluster. All system users have access to the following software packages: