Gaussian 16

Gaussian 16 is a state-of-the-art software suite that performs ab initio electronic structure calculations within a Gaussian basis.  A summary of Gaussian 16 features and the Gaussian 16 release notes are available on the Gaussian website.  W&M has purchased both the serial and fully parallel versions of Gaussian, as well as GaussView, for use on the W&M HPC cluster.

Currently, Gaussian 16 and GaussView are installed only on the bora, hima, and vortex sub-clusters. Please email hpc-help@wm.edu if you need them installed on other sub-clusters.
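
If you are unsure whether Gaussian 16 is visible on the sub-cluster you are logged into, one quick check (a minimal sketch; module avail writes its listing to stderr, so tcsh's |& pipe is used to capture it) is:

module avail |& grep -i gaussian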

The Gaussian 16 site license specifically states that users must have their primary affiliation with the institution named in the license (W&M).  Therefore, external collaborators will not have access to Gaussian or GaussView. 

Preparing to use Gaussian/GaussView on the HPC cluster

Users need to load the Gaussian/g16 module to use Gaussian 16 and/or GaussView.  This can be done by putting the module load command in your Torque batch script or in your shell start-up script (.cshrc.rhel6-opteron for the vortex sub-cluster, .cshrc.el7-xeon for bora/hima).
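
For example, to have the module loaded automatically at login, you could add the following line to the appropriate start-up file:

# in ~/.cshrc.rhel6-opteron (vortex) or ~/.cshrc.el7-xeon (bora/hima)
module load Gaussian/g16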

Running Gaussian 16

There are a few ways to run Gaussian 16 on the cluster: serial (a single computing core), shared-memory parallel (multiple cores in parallel on one node), distributed-memory parallel (cores on multiple nodes), or hybrid shared/distributed memory (multiple cores on multiple nodes, where Gaussian's parallel execution environment, Linda, handles communication between nodes and shared memory is used within each node).

Serial 

Here is a Torque batch script for serial Gaussian 16 jobs:

#!/bin/tcsh 
#PBS -N GaussianSerial
#PBS -l nodes=1:vortex:ppn=1
#PBS -l walltime=0:60:00
#PBS -j oe

cd $PBS_O_WORKDIR
module load Gaussian/g16 # <---- add if not loaded automatically
g16 < input.com > test.out
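
In these examples, input.com is a standard Gaussian input file; a minimal example (a water single-point calculation, shown purely for illustration) looks like:

#P HF/STO-3G

Water single point (example input)

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200

To submit the batch script, save it to a file (gaussian_serial.pbs is just an example name) and use qsub; qstat shows the status of your jobs:

qsub gaussian_serial.pbs
qstat -u $USER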

Shared-memory parallel

This is an example of a shared-memory parallel job.  It is the same as the serial script except that 1) multiple cores are specified (ppn=12) and 2) the extra '-p=<N>' option is passed to g16, where <N> is the number of cores to use:

#!/bin/tcsh 
#PBS -N GaussianSMParallel
#PBS -l nodes=1:vortex:ppn=12 
#PBS -l walltime=0:60:00
#PBS -j oe

cd $PBS_O_WORKDIR
module load Gaussian/g16 # <---- add if not loaded automatically
g16 -p=12 < input.com > test.out
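
Note that the '-p=12' command-line option plays the same role as the Link 0 directive %NProcShared at the top of the Gaussian input file.  If you prefer, you can omit '-p' and add the directive above the route section of input.com instead (the rest of the input is unchanged):

%NProcShared=12
#P HF/STO-3G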

Distributed-memory parallel

Here is an example script for running a distributed-memory parallel Gaussian 16 job.  The main differences from the serial and shared-memory scripts are that 1) two nodes are requested, each using 12 cores, 2) the GAUSS_WDEF environment variable is set, and 3) the getlinda script is run with an argument of '1' to indicate that distributed-memory parallelism is to be used for communication between all cores:

#!/bin/tcsh 
#PBS -N GaussianDMParallel
#PBS -l nodes=2:vortex:ppn=12 
#PBS -l walltime=0:60:00
#PBS -j oe

cd $PBS_O_WORKDIR
module load Gaussian/g16 # <---- add if not loaded automatically
setenv GAUSS_WDEF `getlinda 1`
g16 < input.com > test.out
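
The getlinda helper script builds the Linda worker list from the nodes Torque assigned to the job, and GAUSS_WDEF hands that list to g16 so it knows which nodes to start Linda workers on.  The exact format of the list is site-specific, but if a multi-node job fails to start it can be useful to record the list in the job's output file before launching g16:

# optional: record the Linda worker list for debugging
echo "GAUSS_WDEF = $GAUSS_WDEF"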

Hybrid shared/distributed memory parallel

This final approach combines the last two examples.  Here, one distributed-memory process runs on each node and handles communication between nodes, while shared memory is used for communication within each node.  This script sets the GAUSS_WDEF environment variable and calls the getlinda script with an argument of '0' to indicate that only one distributed-memory process should be launched per node:

#!/bin/tcsh 
#PBS -N GaussianSMDMParallel
#PBS -l nodes=2:vortex:ppn=12 
#PBS -l walltime=0:60:00
#PBS -j oe

cd $PBS_O_WORKDIR
module load Gaussian/g16 # <---- add if not loaded automatically
setenv GAUSS_WDEF `getlinda 0`
g16 -p=12 < input.com > test.out

Gaussian suggests that the hybrid method may be faster than the fully distributed-memory mode; however, users should test this on a sample calculation.
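
One straightforward way to do such a test is to run the same input in each mode within a two-node job and compare the elapsed times (Gaussian prints the elapsed time at the end of each output file).  A minimal sketch, with hypothetical output file names:

# hybrid: one Linda worker per node, shared memory within each node
setenv GAUSS_WDEF `getlinda 0`
time g16 -p=12 < input.com > hybrid.out

# fully distributed: Linda workers on all cores
setenv GAUSS_WDEF `getlinda 1`
time g16 < input.com > distributed.out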

Running GaussView

GaussView is included in the W&M site license and can be run from an interactive job on the vortex sub-cluster.  Users should not run GaussView on the vortex front-end without prior permission from HPC staff.  GaussView can also be installed on any W&M-owned computer; contact hpc-help@wm.edu to request a copy of the software if you wish to install it on your W&M-owned machine.

Since GaussView is a graphical program, you will need to log in to the cluster with X11 forwarding enabled:

ssh -X vortex.sciclone.wm.edu (or bora, hima)

On vortex, launch an interactive job.  Before doing this, you may want to check whether a vortex node is available by using:

20 [vortex] showbf -f vortex 
backfill window (user: 'ewalter' group: 'hpcf' partition: ALL) Fri Mar  1 13:17:08

267 procs available for       2:12:52
195 procs available for      17:50:47
189 procs available for      18:14:43
179 procs available for      20:22:38
169 procs available for    4:22:07:17
165 procs available for    5:20:31:44
154 procs available for      INFINITY

This shows that there are more than enough cores to run GaussView (which requires only one).  So, to get an interactive job with X11 forwarded, do:

qsub -I -l walltime=30:00 -l nodes=1:vortex:ppn=1 -X

This will put you on a vortex node with 1 core available for work.  Next, launch the GaussView program:

gview.sh

Then you should see the GaussView display appear.   

Note that GaussView should not be used to launch Gaussian jobs on the cluster; it should only be used to read, write, and analyze input/output files.  To run Gaussian 16 calculations, please submit them in batch mode as described above.

Please send email to hpc-help@wm.edu if there are any questions about running Gaussian 16 or GaussView.