Node Types and Subclusters

SciClone and Chesapeake are clusters of many individual computers ("nodes"). Before you can run a job on either cluster, you must decide which nodes it will run on, usually one or more groups of similar nodes called subclusters. The tables below are intended to help you make that decision; once you have chosen, the linked page for each subcluster explains how to submit a job to it.

Please contact the HPC group for advice if you are unsure of the most appropriate type of node for your particular application.
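
Once you have chosen a subcluster, jobs are typically directed to it by naming it in the batch system's resource request. The sketch below is illustrative only, assuming a Torque/PBS scheduler in which subcluster names are node properties; the property name (`vortex`), the processors-per-node value, and the program being run are assumptions, so check the relevant subcluster page for the exact options it supports.

```bash
#!/bin/tcsh
# Illustrative Torque/PBS batch script: request 2 nodes carrying the
# (assumed) "vortex" node property, 12 cores per node, for one hour.
#PBS -N example_job
#PBS -l nodes=2:vortex:ppn=12
#PBS -l walltime=01:00:00
#PBS -j oe

# Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR
./my_program   # replace with your application or MPI launch command
```

A script like this would then be submitted with `qsub` from the appropriate front-end.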

In the tables below, a blank cell indicates the same value as in the row above.

SciClone, Rain subcluster (744 Opteron "Santa Rosa" compute cores)

| Name | Cores per node | Memory (GB) | /local/scr | Cluster interface | OS | Deployed |
|------|----------------|-------------|------------|-------------------|----|----------|
| rain (front-end) | 4 | 8 | 10 GB | 1 GbE, DDR IB | CentOS 7.3 | 2008 |
| ra001-ra102 | | 8 | 128 GB | | | |
| ra103-ra106 | | 32 | 440 GB | | | |
| ra107 | | 8 | | | | |
| ra108-ra114 | | 16 | | | | |
| ra115-ra174 | | 8 | 48 GB | 1 GbE, DDR IB | | 2007 |
| ra175-ra186 | | 24 | | | | |

SciClone, Ice, Wind, and Hail subclusters (304 "Shanghai" and 496 "Magny-Cours" Opteron compute cores)

| Name | Cores per node | Memory (GB) | /local/scr | Cluster interface | OS | Deployed |
|------|----------------|-------------|------------|-------------------|----|----------|
| storm (front-end) | 12 | 16 | 45 GB | 10 GbE, QDR IB | RHEL 6.7 | 2013 |
| ha01-ha28 | 8 | 16 | 440 GB | 1 GbE, QDR IB | | 2010 |
| ha29-ha36 | | 32 | 899 GB | | | |
| ha37-ha38 | | 64 | | | | |
| ice01 | 48 | 96 | 440 GB | | | |
| ice02 | 32 | 64 | 898 GB | | | |
| wi01-wi26 | 16 | 32 | 440 GB | | | |

SciClone, Hurricane and Whirlwind subclusters (512 Xeon "Westmere" compute cores + GPUs)

| Name | Cores per node | Memory (GB) | /local/scr | Cluster interface | OS | Deployed |
|------|----------------|-------------|------------|-------------------|----|----------|
| hurricane (front-end) | 4 | 16 | 413 GB | 10 GbE, QDR IB | RHEL 6.8 | 2011 |
| hu01-hu08 | 8 CPU, 896 GPU | 48 main, (2×) 5.25 GPU | 103 GB | 1 GbE, QDR IB | CentOS 6.8 | |
| hu09-hu12 | | | 197 GB | | | 2012 |
| wh01-wh44 | 8 | 64 | 479 GB | | | |
| wh45-wh52 | | 192 | | | | |

SciClone, Vortex and Vortex-alpha subclusters (592 Opteron "Seoul" compute cores)

| Name | Cores per node | Memory (GB) | /local/scr | Cluster interface | OS | Deployed |
|------|----------------|-------------|------------|-------------------|----|----------|
| vortex (front-end) | 12 | 32 | 318 GB | 10 GbE, FDR IB | RHEL 6.2 | 2014 |
| vx01-vx28 | | | 517 GB | 1 GbE, FDR IB | | |
| vx29-vx36 | | 128 | | | | |
| va01-va10 | 16 | 64 | 1.8 TB | 1 GbE, FDR IB [1] | | 2016 |

SciClone, Bora and Hima subclusters (1092 Xeon "Broadwell" cores / 2184 threads)

| Name | Cores per node | Memory (GB) | /local/scr | Cluster interface | OS | Deployed |
|------|----------------|-------------|------------|-------------------|----|----------|
| bora (front-end) | 20c/40t | 64 | 10 GB | 10 GbE, FDR IB | CentOS 7.3 | 2017 |
| bo01-bo45 | | 128 | 524 GB | 1 GbE, FDR IB | | |
| hi01-hi06 | 32c/64t | 256 | 3.7 TB [2] | 1 GbE, QDR IB | | |

SciClone, Meltemi subcluster (6400 Xeon Phi "Knights Landing" cores / 25600 threads)

| Name | Cores per node | Memory (GB) | /local/scr | Cluster interface | OS | Deployed |
|------|----------------|-------------|------------|-------------------|----|----------|
| meltemi (front-end) | 20c/40t | 128 | 192 GB | 1 GbE, 100 Gb OP | CentOS 7.3 | 2017 |
| mlt001-mlt100 | 64c/256t | 192 | 1.4 TB | | | |

SciClone, back-end servers

| Name | Cores per node | Memory (GB) | /local/scr | Cluster interface | OS | Deployed |
|------|----------------|-------------|------------|-------------------|----|----------|
| tornado (scr30) | 12 | 24 | 89 GB | 10 GbE | RHEL 6 | 2011 |
| twister (data20) | 8 | 48 | 17 GB | 10 GbE, QDR IB | | |
| gale (bkup10) | 8 | 32 | 197 GB | 10 GbE, FDR IB | | 2015 |
| breeze (scr10) | 8 | 64 | 393 GB | 1 GbE, FDR IB | RHEL 7 | 2016 |
| mistral (scr-mlt) | 20c/48t | 64 | 8 GB | 1 GbE, 100 Gb OP | CentOS 7.3 | 2017 |
| tempest (data10) | 24c/48t | 128 | 10 GB | 10 GbE, QDR IB [3] | RHEL 7.3 | 2017 |

Chesapeake, Potomac and Pamunkey subclusters (360 "Seoul" and 128 "Abu Dhabi" Opteron compute cores)

| Name | Cores per node | Memory (GB) | /local/scr | Cluster interface | OS | Deployed |
|------|----------------|-------------|------------|-------------------|----|----------|
| chesapeake (front-end) | 12 | 32 | 242 GB | 10 GbE, QDR IB | RHEL 6.2 | 2014 |
| york (interactive/MATLAB server) | 48 | 128 | 89 GB | | | |
| pt01-pt30 | 12 | 32 | 242 GB | 1 GbE, QDR IB | | |
| pm01-pm02 | 64 | 256 | 1.3 TB [2] | 10 GbE | CentOS 7.3 | 2016 |
| choptank (/ches/data10) | 12c/24t | 64 | 800 GB | 1 GbE, QDR IB | RHEL 7 | |
| rappahannock (/ches/scr10) | | 32 | 176 GB | 10 GbE, QDR IB | | 2014 |

  1. va01-va10 are on a separate InfiniBand switch and have full bandwidth with each other, but are 5:1 oversubscribed to the main FDR switch.
  2. Usually a node's local scratch filesystem is a partition on a single disk, but on Pamunkey and Hima nodes local scratch is a faster six-drive (~300 MB/s) or eight-drive (~400 MB/s) array, respectively.
  3. tempest.sciclone.wm.edu has an FDR card, but is attached to a QDR switch.