
Node Types and Subclusters

SciClone and Chesapeake are clusters of many individual computers ("nodes"). Before you can run a job on either cluster, you must decide which particular nodes it will run on, usually one or more groups of similar nodes called subclusters. The tables below are intended to help you make that decision; once you have chosen, you can learn how to submit jobs using the Torque or Slurm batch system (minimal example job scripts for both are sketched below).

Please contact the HPC group for advice if you are unsure of the most appropriate type of node for your particular application.
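
For the Slurm-managed subclusters (kuro, astral, gust, and femto), jobs are submitted to the batch system with sbatch. The script below is only a minimal sketch: the 64-core request matches a kuro compute node from the table below, but the job name, module names, and executable (my_program) are placeholders, and any partition or account settings required on a given subcluster should be taken from the job-submission documentation.

    #!/bin/bash
    #SBATCH --job-name=demo           # job name shown by squeue
    #SBATCH --nodes=1                 # number of nodes
    #SBATCH --ntasks-per-node=64      # one task per core on a 64-core kuro node
    #SBATCH --time=01:00:00           # walltime limit (HH:MM:SS)

    # Load whatever compiler/MPI modules your code needs; module names are site-specific.
    # module load <compiler> <mpi>

    # Launch the (placeholder) MPI executable under Slurm.
    srun ./my_program

Submit the script from the appropriate front-end (e.g., kuro.sciclone.wm.edu) with sbatch demo.sh and monitor it with squeue -u $USER. A Torque counterpart is sketched after the subcluster list below.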

Main campus / SciClone cluster
  • kuro - MPI parallel jobs requiring at least 64 cores (Slurm)
  • astral - shared-memory GPU cluster w/ 8x Nvidia A30 GPUs, 24 GB each (Slurm)
  • vortex - Beginner cluster - try here first if you are new to HPC (Torque)
  • bora - Main MPI/parallel cluster (Torque)
  • hima - Main shared-memory cluster with some GPUs (Torque)
  • gust - Main cluster for large-memory / shared-memory jobs (Slurm)
  • femto - Currently reserved for Physics and VIMS use (Slurm)
  • hurricane - Partially retired - no new accounts given (Torque)
  • meltemi - Currently exclusive to Physics / Sociology / Data Science (Torque)
  • vortex-alpha - Currently exclusive to AidData (Torque)
VIMS campus / Chesapeake cluster
  • James - MPI/parallel cluster (Torque)
  • Potomac - Serial / shared-memory / small parallel jobs (Torque)
  • Pamunkey - Shared memory only / exclusive to bio/bioinformatics calculations (Torque)
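
For the Torque-managed subclusters (e.g., vortex, bora, hima, and the Chesapeake subclusters), jobs are submitted with qsub. Again, this is a hedged sketch rather than a definitive recipe: the node property (vortex) and ppn value follow the common Torque convention of selecting a subcluster by node feature and match the 12-core vortex nodes listed below, but the exact property names, walltime limits, and preferred MPI launcher should be confirmed in the job-submission documentation.

    #!/bin/bash
    #PBS -N demo                       # job name
    #PBS -l nodes=2:vortex:ppn=12      # 2 vortex nodes, 12 cores each (property name assumed)
    #PBS -l walltime=01:00:00          # walltime limit (HH:MM:SS)
    #PBS -j oe                         # merge stdout and stderr into one output file

    cd $PBS_O_WORKDIR                  # Torque starts jobs in $HOME; return to the submit directory

    # The MPI launcher (mpirun, mpiexec, or a site-specific wrapper) depends on the MPI stack in use.
    mpirun -np 24 ./my_program

Submit with qsub demo.sh from the matching front-end (e.g., vortex.sciclone.wm.edu) and monitor with qstat -u $USER.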
Main-Campus ("SciClone") Cluster

SciClone Kuro cluster
(3008 AMD Zen 4 cores)
Batch system: Slurm

Name | Cores per node | Memory (GB) | /local/scr | Network | OS | Deployed
kuro.sciclone.wm.edu | 32 | 384 | 1.8 TB | 10 GbE, HDR IB | Rocky 9.2 | 2024
ku01-ku47 | 64 | 384 | 980 GB | 1 GbE, HDR IB | Rocky 9.2 | 2024

SciClone Astral cluster
(32 CPU cores + 64 CPU cores / 8 Nvidia A30 GPUs)
Batch system: Slurm

Name | Cores per node | Memory (GB) | /local/scr | Network | OS | Deployed
astral.sciclone.wm.edu | 32 | 256 | 63 GB | 10 GbE, HDR IB | Rocky 9.2 | 2024
as01 (8x Nvidia A30 GPUs, 24 GB each) | 64 | 512 | 14 TB | 10 GbE, HDR IB | Rocky 9.2 | 2024

SciClone Vortex and Vortex-alpha subclusters
(592 Opteron "Seoul" compute cores)
Batch system: Torque

Name | Cores per node | Memory (GB) | /local/scr | Network | OS | Deployed
vortex.sciclone.wm.edu (front-end/login) | 12 | 32 | 318 GB | 10 GbE, FDR IB | RHEL 6.8 | 2014
vx01-vx28 | | | 517 GB | 1 GbE, FDR IB | CentOS 6.8 |
vx29-vx36 | | 128 | | | |
va01-va10 | 16 | 64 | 1.8 TB | 1 GbE, FDR IB [1] | | 2016

NOTE: The Hurricane subcluster has been partially retired; no new accounts will be given access by default.

SciClone Hurricane subcluster
(96 Xeon "Westmere" compute cores)
Batch system: Torque

Name | Cores per node | Memory (GB) | /local/scr | Network | OS | Deployed
hurricane.sciclone.wm.edu (front-end/login) | 4 | 16 | 413 GB | 10 GbE, QDR IB | RHEL 6.8 | 2011
hu01-hu08 | 8 | 48 | 103 GB | 1 GbE, QDR IB | CentOS 6.8 |
hu09-hu12 | | | 197 GB | | | 2012

SciClone Bora and Hima subclusters
(1324 Xeon "Broadwell" cores / 2648 threads)
Batch system: Torque

Name | Cores per node | Memory (GB) | /local/scr | Network | OS | Deployed
bora.sciclone.wm.edu (front-end/login) | 20 | 64 | 10 GB | 10 GbE, FDR IB | CentOS 7.3 | 2017
bo01-bo55 | | 128 | 524 GB | 1 GbE, FDR IB | |
hi01-hi07 | 32 | 256 | 3.7 TB [2] | 1 GbE, QDR IB | |

SciClone Gust subcluster
(256 EPYC "Zen 2" cores)
Batch system: Slurm

Name | Cores per node | Memory (GB) | /local/scr | Network | OS | Deployed
gust.sciclone.wm.edu (front-end/login) | 32 | 32 | 64 GB | GbE, EDR IB | CentOS 7.9 | 2020
gt01-gt02 | 128 | 512 | 670 GB | 10 GbE, EDR IB | |

SciClone Femto subcluster
(960 Xeon "Skylake" cores)
Batch system: Slurm

Name | Cores per node | Memory (GB) | /local/scr | Network | OS | Deployed
femto.sciclone.wm.edu (front-end/login) | 32 | 96 | 10 GB | 1 GbE, EDR IB | CentOS 7.6 | 2019
fm01-fm30 | 32 | 96 | 2 TB [3] | 1 GbE, EDR IB | CentOS 7.6 | 2019

SciClone Meltemi subcluster
(6400 Xeon Phi "Knights Landing" cores / 25600 threads)
Batch system: Torque

Name | Cores per node | Memory (GB) | /local/scr | Network | OS | Deployed
meltemi.sciclone.wm.edu (front-end/login) | 20c/40t | 128 | 192 GB | 1 GbE, 100 Gb OP | CentOS 7.3 | 2017
mlt001-mlt100 | 64c/256t | 192 | 1.4 TB | | |
VIMS-Campus ("Chesapeake") Cluster

Chesapeake: Potomac subcluster
(360 "Seoul" compute cores)
Batch system: Torque

Name | Cores per node | Memory (GB) | /local/scr | Network | OS | Deployed
chesapeake.hpc.vims.edu (front-end/login) | 12 | 32 | 242 GB | 10 GbE, QDR IB | CentOS 7.9 | 2014
pt01-pt30 | 12 | 32 | 242 GB | 1 GbE, QDR IB | |

Chesapeake: James and Pamunkey subclusters
(420 "Skylake" Xeon and 128 "Abu Dhabi" Opteron compute cores)
Batch system: Torque

Name | Cores per node | Memory (GB) | /local/scr | Network | OS | Deployed
james.hpc.vims.edu (front-end/login) | 8c/16t | 32 | 10 GB | 10 GbE, EDR IB | CentOS 7.5 | 2018
jm01-jm21 | 20c/40t | 64 | 1.8 TB | 1 GbE, EDR IB | |
pm01-pm02 | 64 | 256 | 1.3 TB [2] | 10 GbE | CentOS 7.3 | 2016
choptank (/ches/data10) | 12c/24t | 64 | 800 GB | 1 GbE, QDR IB | RHEL 7 | 2016
rappahannock (/ches/scr10) | | 32 | 176 GB | 10 GbE, QDR IB | | 2014

  1. va01-va10 are on a separate InfiniBand switch and have full bandwidth with each other, but are 5:1 oversubscribed to the main FDR switch.
  2. Usually a node's local scratch filesystem is a partition on a single disk, but the local scratch filesystems on Pamunkey and Hima nodes are faster (~300 and ~400 MB/s) six- and eight-drive arrays, respectively.
  3. The femto nodes are equipped with a 2 TB SSD (solid-state drive) for fast local reads and writes (a job-script sketch for using local scratch follows below).
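
Footnotes [2] and [3] describe the node-local scratch filesystems listed in the /local/scr column. A common pattern is to stage input onto local scratch, run there, and copy results back at the end of the job. The sketch below assumes a Torque job on a hima node and a /local/scr/$USER directory layout; both the node property and the scratch path are assumptions to adapt to your subcluster.

    #!/bin/bash
    #PBS -N scratch_demo
    #PBS -l nodes=1:hima:ppn=8         # node property and core count assumed
    #PBS -l walltime=00:30:00
    #PBS -j oe

    # Assumed local-scratch location; confirm the actual mount point on your subcluster.
    SCR=/local/scr/$USER/$PBS_JOBID
    mkdir -p "$SCR"

    cp "$PBS_O_WORKDIR/input.dat" "$SCR"/     # stage input onto the node-local disk
    cd "$SCR"
    ./my_program input.dat > output.dat       # run against fast local scratch (placeholder program)
    cp output.dat "$PBS_O_WORKDIR"/           # copy results back to shared storage
    rm -rf "$SCR"                             # clean up local scratch before the job exits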