
RC/HPC available resources

RC Compute Resources

Main-campus HPC

VIMS-campus cluster

Main-campus HPC (SciClone)

The main-campus HPC resources (collectively known as "SciClone") consist of a series of sub-clusters, each with its own set of hardware resources. In general there are two types of HPC resources: those used for multi-node parallel calculations and those best suited for GPU/CPU serial or shared-memory applications within one or, at most, a few nodes. All clusters have a maximum job walltime of 72 hrs, except kuro, which has a maximum of 48 hrs.
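For illustration only, the short Python sketch below encodes the walltime limits just described and checks a requested runtime against them. The cluster names and limits come from this page; the helper function itself is hypothetical, not a site-provided tool.

    # Hypothetical helper, not an official SciClone tool: encode the
    # main-campus walltime limits stated above and check a request.
    MAX_WALLTIME_HOURS = {
        "kuro": 48,                      # kuro caps jobs at 48 hrs
        "femto": 72, "bora": 72, "hima": 72,
        "astral": 72, "gust": 72, "gulf": 72,
    }

    def walltime_ok(cluster: str, requested_hours: float) -> bool:
        """Return True if the requested walltime fits within the cluster's limit."""
        limit = MAX_WALLTIME_HOURS.get(cluster)
        if limit is None:
            raise ValueError(f"unknown cluster: {cluster}")
        return requested_hours <= limit

    print(walltime_ok("bora", 72))   # True
    print(walltime_ok("kuro", 72))   # False: kuro allows at most 48 hrs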

In the tables below, the front-end/login server for each cluster is listed first, followed by the compute nodes available in that cluster.

Multi-node parallel / MPI clusters

name | processor | cores/node | total # cores | mem (GB) | network | deployed | notes

kuro
kuro.sciclone.wm.edu | 2x AMD EPYC 9124 | 32 | 32 | 384 | 10GbE/HDR IB | 2024 | Currently reserved from 3 pm Friday to 8 pm Sunday
ku01-ku47 | 2x AMD EPYC 9334 | 64 | 3008 | 384 | 1GbE/HDR IB | 2024

femto
femto.sciclone.wm.edu | 2x Intel Xeon Gold 6130 | 32 | 32 | 96 | 1GbE/EDR IB | 2019
fm01-fm30 | 2x Intel Xeon Gold 6130 | 32 | 960 | 96 | 1GbE/EDR IB | 2019

bora
bora.sciclone.wm.edu | 2x Intel Xeon E5-2640 | 20 | 20 | 64 | 10GbE/FDR IB | 2017
bo01-bo55 | 2x Intel Xeon E5-2640 | 20 | 1100 | 128 | 10GbE/FDR IB | 2017
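The sub-clusters above are sized for multi-node MPI jobs. As a minimal sketch of what such a program looks like (assuming an MPI runtime and the mpi4py bindings are available; this page does not document the local software stack), each process simply reports its rank within the job:

    # Minimal MPI "hello" sketch using mpi4py (assumed available); launch with
    # something like: mpiexec -n 4 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()              # this process's ID within the job
    size = comm.Get_size()              # total number of MPI processes
    node = MPI.Get_processor_name()     # host the process landed on

    print(f"rank {rank} of {size} running on {node}")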

Serial / Shared memory clusters / GPU resources

name | GPUs | processor | cores/node | total # cores | mem (GB) | network | deployed | notes

hima
bora.sciclone.wm.edu | --- | 2x Intel Xeon E5-2640 | 20 | 20 | 64 | 10GbE/FDR IB | 2017 | The hima cluster uses bora for compiling and job control
hi01-hi07 | 2x NVIDIA P100 (16GB), 1x NVIDIA V100 (16GB) | 2x Intel Xeon E5-2683 | 32 | 224 | 256 | 1GbE/FDR IB | 2017

astral
astral.sciclone.wm.edu | --- | 2x Intel Xeon Gold 6336Y | 48 | 48 | 256 | 1GbE/EDR IB | 2022
as01 | 8x NVIDIA A30 (24GB) | 2x Intel Xeon Platinum 8362 | 64 | 64 | 512 | 1GbE/EDR IB | 2022

gust
gust.sciclone.wm.edu | --- | 2x AMD EPYC 7302 | 32 | 32 | 32 | 10GbE/EDR IB | 2020
gt01-gt02 | --- | 2x AMD EPYC 7702 | 128 | 256 | 512 | 1GbE/EDR IB | 2020

gulf
gulf.sciclone.wm.edu | --- | AMD EPYC 7313P | 32 | 32 | 128 | 10GbE/HDR IB | 2024
gu01-gu02 | --- | AMD EPYC 7313P | 16 | 32 | 512 | 1GbE/HDR IB | 2024
gu03-gu06 | 2x NVIDIA A40 (48GB) per node | 2x AMD EPYC 7313P | 32 | 128 | 128 | 10GbE/HDR IB | 2024


Main-campus Kubernetes cluster

name | GPUs | processor | # cores | memory (GB) | network | deployed | notes
cm.geo.sciclone.wm.edu | --- | Intel Xeon Silver 4110 | 16 | 96 | 1GbE | 2019 | Front end for k8s cluster
ts4 | 8x RTX 6000 (24GB) | AMD EPYC 7502 | 64 | 512 | 1GbE/HDR IB | 2021
dss | 12x T4 (16GB) | Intel Xeon Gold 5218 | 32 | 384 | 1GbE/HDR IB | 2022
gu07-gu19 | 2x A40 (48GB) | 2x AMD EPYC 7313 | 32 | 128 | 1GbE/HDR IB | 2024 | There are currently 13 guXX nodes (26 A40 GPUs total)
cdsw00, m1a, m1b | --- | Intel Xeon Silver 4110 | 16 | 192 | 1GbE | 2019 | 3 CPU-only nodes
m2 | --- | Intel Xeon Silver 4110 | 16 | 96 | 1GbE | 2019 | 1 GPU-only node
w01-w07 | --- | Intel Xeon Gold 6130 | 32 | 192 | 1GbE | 2019 | 7 CPU-only nodes
The following nodes are often dedicated to projects by individual research groups; please contact the chair of Data Science to inquire about access.
name | GPUs | processor | # cores | memory (GB) | network | deployed | notes
grace | --- | Intel Xeon Gold 6336Y | 48 | 1024 | 1GbE | 2024
d3i01, d3i02 | 4x L4 (24GB) | AMD EPYC 7313 | 32 | 512 | 1GbE | 2024 | 2 nodes with 4x L4 GPUs each
jdserver1 | 8x L40s (48GB) | Intel Xeon Gold 5320 | 52 | 512 | 1GbE | 2024
jdserver2 | 4x H100 (96GB) | Intel Xeon Gold 5418Y | 48 | 1024 | 1GbE | 2024
brewster | GH200 (96GB) | Arm Neoverse V2 | 72 | 480 | 1GbE | 2025
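As a hedged illustration of how the Kubernetes nodes above might be inspected (the Python client library, a working kubeconfig, and the nvidia.com/gpu resource name are assumptions, not site-specific documentation), the sketch below lists each node and the number of GPUs it advertises:

    # Hypothetical sketch using the standard Kubernetes Python client
    # (pip install kubernetes); assumes you already have kubeconfig access.
    from kubernetes import client, config

    def list_gpu_nodes():
        config.load_kube_config()          # read credentials from ~/.kube/config
        v1 = client.CoreV1Api()
        for node in v1.list_node().items:
            # GPU nodes typically advertise an "nvidia.com/gpu" resource
            gpus = node.status.capacity.get("nvidia.com/gpu", "0")
            print(f"{node.metadata.name}: {gpus} GPU(s)")

    if __name__ == "__main__":
        list_gpu_nodes()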


VIMS-campus cluster (Chesapeake)

The James cluster has a maximum walltime of 72 hrs for jobs.  Potomac and Pamunkey allow 120 hrs.

Multi-node parallel / MPI cluster

name | processor | cores/node | total # cores | mem (GB) | network | deployed | notes

james
james.hpc.vims.edu | Intel Xeon Silver 4112 | 8 | 8 | 32 | 10GbE/EDR IB | 2018 | Restrictions
jm01-jm27 | Intel Xeon Silver 4114 | 20 | 540 | 64 | 1GbE/EDR IB | 2018

Serial / Shared memory clusters

name | processor | cores/node | total # cores | mem (GB) | network | deployed | notes

potomac
chesapeake.hpc.vims.edu | AMD Opteron 4238 | 8 | 8 | 32 | 10GbE/EDR IB | 2014
pt01-pt30 | AMD Opteron 4334 | 12 | 360 | 64 | 1GbE/EDR IB | 2014

pamunkey
james.hpc.vims.edu | Intel Xeon Silver 4112 | 8 | 8 | 32 | 10GbE/EDR IB | 2018 | The pamunkey cluster uses james for compiling and job control; the pamunkey nodes do not have InfiniBand
pm01-pm02 | AMD Opteron 6380 | 64 | 128 | 256 | 10GbE | 2016

Please send an email to hpc-help@wm.edu for advice if you are unsure of the most appropriate type of node for your particular application.