Every node is connected to at least one Ethernet network, which is used primarily for system management, and almost every node has either an InfiniBand or Omni-Path interface, which provides high-bandwidth, low-latency communication for large parallel computations and I/O. To distinguish between interfaces on the same node, each interface is given its own hostname. For example, the login node for SciClone's Vortex subcluster has several hostnames, including
- vortex.sciclone.wm.edu, for Ethernet traffic from outside the cluster,
- vx00, for Ethernet traffic within the cluster, and
- vx00-i8, for InfiniBand traffic within the cluster.
Its compute nodes, which do not have an external/internal interface distinction, have names like vx05, referring to their connection to the cluster Ethernet network, and vx05-i8, referring to their connection to the InfiniBand network. See the file /etc/hosts on any of the server nodes for more information.
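The naming convention above can be captured in a short Python sketch. The function name and classification labels here are ours, for illustration only, not part of any official tool:

```python
def interface_kind(hostname):
    """Classify a SciClone hostname by the naming convention described
    above: a -i8 suffix marks the InfiniBand interface, a fully
    qualified name is the external Ethernet interface, and a bare
    short name is the internal Ethernet interface."""
    if hostname.endswith("-i8"):
        return "infiniband"
    if "." in hostname:
        return "external ethernet"
    return "internal ethernet"
```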
On Chesapeake, a distinction is made between the "internal" cluster Ethernet network and the "external" server node Ethernet network. Whereas on SciClone, vx00 is just an alias for vortex, on Chesapeake a reference to choptank instead of ct00 from a compute node will result in traffic being routed along a longer and slower path from one subnet to another.
Therefore, even on SciClone, references (including logins and file transfer operations) initiated from outside the cluster should use the "external" hostnames, while references from within the cluster should use the "internal" hostnames.
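That rule can be sketched as a small helper. The lookup table of hostname pairs is hypothetical and covers only the examples mentioned above:

```python
# Hypothetical lookup table: (external hostname, internal hostname)
# pairs for the login nodes mentioned in the text.
LOGIN_NODES = {
    "vortex": ("vortex.sciclone.wm.edu", "vx00"),
    "chesapeake": ("choptank", "ct00"),
}

def login_hostname(subcluster, inside_cluster):
    """Pick the internal hostname for traffic originating within the
    cluster, and the external hostname otherwise."""
    external, internal = LOGIN_NODES[subcluster]
    return internal if inside_cluster else external
```

For example, an scp initiated from an off-campus workstation would target `login_hostname("vortex", inside_cluster=False)`, i.e. the external name.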
That said, both Chesapeake's and SciClone's "internal" Ethernet networks do use Internet-routable/public (139.70.208.x) IP addresses, in order to accommodate special use cases like bulk data transfers and bandwidth-intensive applications such as visualization, with prior authorization. Contact the HPC group if you will need direct access to compute nodes.
Generally, network connections are full-speed within a subcluster, but may be oversubscribed to other subclusters and servers. The principal switches in SciClone's Ethernet network are:
| Location | Switch | Subclusters served | Uplink |
|---|---|---|---|
| jsc02 | Foundry BigIron RX-16 | Typhoon, Whirlwind, Hurricane, Hima | 10 Gb/s |
| jsg05 | Foundry FWSX448 | Vortex | 10 Gb/s |
|  | (3) Dell S3048-ON | Meltemi | 1 Gb/s |
| jsg06 | (5) Dell PowerConnect 6248 | Rain, Hail, Ice, Wind | 20 Gb/s |
The RX-16 is SciClone's core switch. Its uplink is to the campus backbone, and the other switches' uplinks are to it, with the exception of jsg08, which currently uplinks to
In addition, all of the compute nodes except Meltemi (which is connected with Omni-Path) and most of the servers are connected at speeds ranging from 20 to 56 Gb/s by an InfiniBand network comprising the following switches:
| Subclusters served | Switch |
|---|---|
| Hail, Ice, Wind | data10 |
SciClone shares the main campus's 10 Gb/s route to the VIMS campus, where Chesapeake is interconnected with a 100 Mb/s to 10 Gb/s Ethernet network and a 40 Gb/s QDR InfiniBand network.
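To put these link speeds in perspective, a back-of-the-envelope calculation of bulk transfer times (ignoring protocol overhead and contention, so real throughput will be lower):

```python
# Rough transfer times over the link speeds mentioned above.
# Rates are raw link speeds in bits per second; actual throughput
# will be lower due to protocol overhead and link sharing.
def transfer_seconds(num_bytes, bits_per_second):
    return num_bytes * 8 / bits_per_second

TB = 10**12  # 1 terabyte in bytes

print(f"1 TB over 10 Gb/s Ethernet:   {transfer_seconds(TB, 10e9):.0f} s")
print(f"1 TB over 40 Gb/s InfiniBand: {transfer_seconds(TB, 40e9):.0f} s")
```

At the nominal rates, 1 TB takes roughly 800 seconds (about 13 minutes) over the 10 Gb/s route, versus about 200 seconds over 40 Gb/s QDR InfiniBand.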