
James

Hardware

                     Front-end (james / jm00)            Parallel nodes (jm01-jm21)
Model                Dell PowerEdge R440                 Dell PowerEdge R440
Processor(s)         2×4-core Intel Xeon Silver 4112     2×10-core Intel Xeon Silver 4114
Clock speed          2.6 GHz                             2.2 GHz
Memory               32 GB                               64 GB
Network interfaces
  Application        EDR IB (jm00-ib)                    EDR IB (jm??-ib)
  System             10 GbE (jm00)                       1 GbE (jm??)
OS                   CentOS 7.5                          CentOS 7.5

Torque Node Specifiers:

All james compute nodes have the same torque node specifiers:

el7,compute,xeon,skylake,james,nocmt,ibedr

Although this list has many specifiers, most users will be fine just using james, e.g.:

#!/bin/tcsh
#PBS -N test
#PBS -l walltime=1:00:00
#PBS -l nodes=4:james:ppn=20
#PBS -j oe

User Environment

Compilation and job submission for the james subcluster are handled from the james server node. To log in, use SSH from any host on the William & Mary or VIMS networks (including the VIMS VPN) and connect to james.hpc.vims.edu with your HPC username and password (common across SciClone and Chesapeake).
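For example, assuming a (placeholder) HPC username of jdoe, the login from a campus host would be:

ssh jdoe@james.hpc.vims.edu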

Your home directory is shared by the Chesapeake and James server nodes, all potomac and james compute nodes, and the service nodes choptank, rappahanock, and york. Just as on chesapeake and potomac, two scratch filesystems are available throughout the system: /ches/scr00 and /ches/scr10. /ches/scr00 is a 40 GB, medium-performance scratch disk, while /ches/scr10 is a 60 TB, high-performance Dell HPC NSS fileserver.
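A common pattern is to run jobs out of a per-user directory on one of the scratch filesystems rather than out of your home directory; the directory name below is only an illustration:

mkdir -p /ches/scr10/$USER/my_run
cd /ches/scr10/$USER/my_run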

James uses Environment Modules (a.k.a. Modules) to automatically configure the user's shell environment across multiple computing platforms and to organize the dozens of software packages available on the system. We support tcsh as the primary shell environment for user accounts and applications.

New accounts are provisioned with the following set of environment configuration files:

.login            - recommended settings for login shells
.cshrc            - personal environment settings; customize to meet your needs
.cshrc.el7-x86_64 - personal settings for james, choptank, and rappahanock

The most recent versions of these files can be found in /usr/local/etc/templates on the james server node.

System-wide environment settings are initialized in:

/usr/local/etc/chesapeake.cshrc
/usr/local/etc/chesapeake.login

These files are automatically invoked at the beginning of your personal .cshrc and .login files, respectively.
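That invocation is simply a tcsh source command near the top of each file, roughly as follows (shown schematically; the templates in /usr/local/etc/templates are authoritative):

# near the top of ~/.cshrc
source /usr/local/etc/chesapeake.cshrc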

A default set of environment modules is loaded at the end of the platform-specific .cshrc.* files, and these should be enough to get you started. In the case of the james subcluster, .cshrc.el7-x86_64 is the relevant file. If you want to dig a little deeper, you can run "module avail" or "module whatis" to see a complete list of available modules.
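For example (these commands only inspect your environment; they change nothing permanently):

module avail     # list all modules installed on the system
module whatis    # print a one-line description of each available module
module list      # show the modules currently loaded in your shell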

Unlike on Chesapeake, the isa/skylake module (meant for the james cluster) is loaded by default, so it does not need to be added to your .cshrc.el7-x86_64 personal startup file.
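If you do want an extra package loaded automatically at login, append a module load line to that file; the module name below is purely a placeholder:

# at the end of ~/.cshrc.el7-x86_64, after the default module loads
module load some/package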

Compiler flags

James has the Intel Parallel Studio XE 2017 compiler suite and version 4.9.4 of the GNU compiler suite, as well as the PGI compilers (see the pgi/18.7 module). Recommended flags for each compiler are:

PGI    C        pgcc -tp skylake -O2 -fast -m64 -Mprefetch
       C++      pgc++ -tp skylake -O2 -fast -m64 -Mprefetch
       Fortran  pgfortran -tp skylake -O2 -fast -m64 -Mprefetch
Intel  C        icc -O3 -xSKYLAKE-AVX512 -mtune=skylake -fma -align -finline-functions
       C++      icpc -std=c++11 -O3 -xSKYLAKE-AVX512 -mtune=skylake -fma -align -finline-functions
       Fortran  ifort -O3 -xSKYLAKE-AVX512 -mtune=skylake -fma -align array64byte -finline-functions
GNU    C        gcc -march=skylake -O3 -mfma -malign-data=cacheline -finline-functions
       C++      g++ -std=c++11 -march=skylake -O3 -mfma -malign-data=cacheline -finline-functions
       Fortran  gfortran -march=skylake -O3 -mfma -malign-data=cacheline -finline-functions
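As a quick illustration, a serial C code could be built with the Intel flags from the table above (hello.c is a placeholder file name):

icc -O3 -xSKYLAKE-AVX512 -mtune=skylake -fma -align -finline-functions -o hello hello.c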


MPI

Mvapich2 version 2.3 is available on James, compiled with all of the available compiler chains: intel/2017, intel/2018, gcc/6.3.0, and pgi/18.7. OpenMPI 3.1.2 is also available, built with gcc/6.3.0, intel/2017, and intel/2018. Parallel jobs should be run using the mvp2run wrapper script, which has been updated to include James sub-cluster nodes.
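A typical build therefore loads one of the compiler chains listed above and compiles with the MPI wrapper compilers; the module name and source file below are illustrative, so confirm the exact names with "module avail":

module load intel/2017          # one of the compiler chains listed above
mpicc -O3 -o a.out mpi_code.c   # MPI wrapper around the underlying C compiler

The resulting executable can then be launched through mvp2run from a batch script such as the one below.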


#!/bin/tcsh 
#PBS -N MPI 
#PBS -l nodes=5:james:ppn=20 
#PBS -l walltime=12:00:00 
#PBS -j oe 

cd $PBS_O_WORKDIR 

mvp2run ./a.out >& LOG
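The script above would then be submitted from the james front-end with qsub; the script file name is arbitrary:

qsub mpi_job.pbs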