
Vortex Subcluster

This document contains basic information about SciClone's RHEL 6/Opteron computing platform which consists of a single subcluster, vortex.

Hardware
Front-end server node
Model:    Dell PowerEdge R515
Processor:   2 x Opteron 4334 six-core, 3.1 GHz
Memory:   32 GB 1600 MT/s
External interface:   vortex.sciclone.wm.edu, 10 Gb/s
Cluster interface:   vx00.sciclone.wm.edu, 10 Gb/s
InfiniBand interface:   vx00-i8.sciclone.wm.edu, 56 Gb/s (FDR)
OS:   Red Hat Enterprise Linux 6.2 (RHEL 6.2)
Default Compiler:   PGI 14.3
Job scheduler:   TORQUE 2.3.7
MPI Library:   MVAPICH2 1.9 (for InfiniBand)
Shell Environment:   tcsh with Modules 3.2.6
c18a compute nodes (28)
Model:    Dell PowerEdge R415
Processor:   2 x Opteron 4334, 3.1 GHz
Memory:   32 GB 1600 MT/s
Ethernet interface:   1 Gb/s
InfiniBand interface:   56 Gb/s (FDR)
Local scratch filesystem:   516 GB
OS:   Red Hat Enterprise Linux 6.2 (RHEL 6.2)
c18b compute nodes (8)
Model:    Dell PowerEdge R415
Processor:   2 x Opteron 4334, 3.1 GHz
Memory:   128 GB 1600 MT/s
Ethernet interface:   1 Gb/s
InfiniBand interface:   56 Gb/s (FDR)
Local scratch filesystem:   516 GB
OS:   Red Hat Enterprise Linux 6.2 (RHEL 6.2)
c18c compute nodes (10)
Model:    Dell PowerEdge R415
Processor:   2 x Opteron 4386, 3.1 GHz
Memory:   64 GB 1600 MT/s
Ethernet interface:   1 Gb/s
InfiniBand interface:   56 Gb/s (FDR)
Local scratch filesystem:   1.8 TB
OS:   Red Hat Enterprise Linux 6.2 (RHEL 6.2)

In the vortex subcluster, vx01-vx28 are designated as type c18a compute nodes, with the following TORQUE node properties:

c18a, c18x, c18d, el6, rhel6, compute, vortex

The large memory nodes, vx29-vx36, are designated as type c18b, with a similar set of node properties:

c18b, c18x, c18d, el6, rhel6, compute, vortex

The vortex-α nodes, va01-va10, are designated as type c18c, with some variations in their node properties:

c18c, c18x, el6, rhel6, compute, vortexa

Note that the memory size per processor core on the c18a nodes is 2.67 GB (32 GB / 12 cores). c18b nodes have 10.67 GB/core. These two node types are otherwise identical. The c18c nodes have twice as much memory as the c18a nodes, and more than three times as much local scratch. They also have four more cores, but those cores are reserved at all times, and so for general use va01-va10 can simply be treated as an additional set of 12-core nodes. You can use "c18x" to specify any of the c18a, c18b, and c18c nodes.
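As an illustration of how these node properties can be used with TORQUE, the following hypothetical job script requests four c18a nodes with 12 cores each (the job name, walltime, and program name are assumptions, not taken from this document):

```shell
#!/bin/tcsh
# Hypothetical TORQUE job script for the vortex subcluster.
# Request 4 nodes carrying the c18a property, 12 cores per node.
#PBS -N example_job
#PBS -l nodes=4:c18a:ppn=12
#PBS -l walltime=1:00:00
#PBS -j oe

cd $PBS_O_WORKDIR

# Launch an MPI program over InfiniBand with MVAPICH2's stock launcher
# (48 ranks = 4 nodes x 12 cores).
mpirun_rsh -np 48 -hostfile $PBS_NODEFILE ./a.out
```

To allow the scheduler to place the job on any of the three node types, substitute c18x for c18a in the -l nodes specification.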

Nodes in the vortex subcluster are interconnected by an FDR (Fourteen Data Rate, 56 Gb/s) InfiniBand communication network, as well as Gigabit Ethernet.

User Environment

See the general notes on our shell environment. In the case of the vortex subcluster, .cshrc.rhel6-opteron is the relevant file. If you want to dig a little deeper, you can run "module avail" or "module whatis" to see a complete list of available modules. Most of these should be self-explanatory, but the isa module deserves a little more discussion.
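For example, the following commands (illustrative; the actual module names and output depend on what is installed) inspect the module environment:

```shell
# List every module available on this platform
module avail

# Print a one-line description of each available module
module whatis

# Show the modules currently loaded in your shell; the isa module
# should appear here after login
module list
```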

Because SciClone (like many clusters) contains a mix of different hardware with varying capabilities, we need to build several different versions of most software packages, and then provide some way for the user to specify which version he or she wants to use. The primary distinction is based on the "Instruction Set Architecture" (ISA) of the particular platform, which is simply the set of instructions that the CPU is capable of executing, along with the desired addressing mode (32-bit or 64-bit).

The choice of ISA nomenclature is problematic, in part because the code names and marketing designations used by chip vendors are very complex, and also because there is little commonality in terminology across different compiler suites. Consequently, we have established our own local conventions. For the RHEL 6/Opteron platform on SciClone we presently support one ISA:

seoul - Opteron Bulldozer version 2 (Piledriver), 64-bit, matches the vortex compute nodes. This is the default.

When a user's shell initializes, an isa module is loaded which establishes a default environment based on the ISA (seoul in this case).

Compilers

There are three compiler suites available in SciClone's RHEL 6/Opteron environment: the Intel compiler suite, the Portland Group (PGI) compiler suite, and the GNU Compiler Collection (GCC).

You can switch between alternative compilers by modifying the appropriate "module load" command in your .cshrc.rhel6-opteron file. The default configuration loads pgi/14.3. Because of conflicts with command names, environment variables, libraries, etc., attempts to load multiple compiler modules into your environment simultaneously may result in an error.
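As a sketch (the exact module names below should be checked against "module avail" on your system), switching from the default PGI compiler to GCC might look like this in .cshrc.rhel6-opteron:

```shell
# Excerpt of a hypothetical .cshrc.rhel6-opteron: swap the compiler
# by changing which "module load" line is active. Load only one
# compiler module at a time to avoid name and library conflicts.
#module load pgi/14.3
module load gcc/4.7.3
```

In an interactive shell, the equivalent is to "module unload" the current compiler before loading the new one.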

For details about compiler installation paths, environment variables, etc., use the "module show" command for the compiler of interest, e.g.,

module show pgi/14.3
module show gcc/4.7.3

etc.

For proper operation and best performance, it is important to choose compiler options that match the target architecture and enable the most profitable code optimizations. The options listed below are suggested as starting points. Note that for some codes, these optimizations may be too aggressive and may need to be scaled back. Consult the appropriate compiler manuals for full details.

PGI 14.3:   -fast -tp piledriver -m64 -Mprefetch
GCC 4.7.3:  -O3 -march=bdver2 -m64
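Putting the suggested flags into practice, a compile command might look like the following (hello.c is a hypothetical source file, not part of this document):

```shell
# PGI 14.3: aggressive optimization targeting Piledriver, 64-bit,
# with automatic prefetch generation
pgcc -fast -tp piledriver -m64 -Mprefetch -o hello hello.c

# GCC 4.7.3: -O3 with code generation tuned for the bdver2
# (Piledriver) microarchitecture
gcc -O3 -march=bdver2 -m64 -o hello hello.c
```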