Gaussian 16

Gaussian 16 is a state-of-the-art software suite that performs ab initio electronic structure calculations within a Gaussian basis.  A summary of Gaussian 16 features and the Gaussian 16 release notes are available on the Gaussian website.  W&M has purchased both the serial and fully parallel versions of Gaussian, as well as GaussView, for use on the W&M HPC cluster.

Currently, Gaussian 16 and GaussView are installed only on the main-campus HPC clusters. Please email hpc-help@wm.edu if you need them installed elsewhere.

The Gaussian 16 site license specifically states that users must have their primary affiliation with the institution named in the license (W&M).  Therefore, external collaborators will not have access to Gaussian or GaussView. 

Preparing to use Gaussian/GaussView on the HPC cluster

Users need to load the gaussian/g16 module to use Gaussian 16 and/or GaussView.  This can be done by putting the module load line in your SLURM batch script or in your start-up script (.cshrc on the bora sub-cluster), as in the sketch below.
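
For example, here is a minimal sketch of loading the module interactively and checking that the Gaussian 16 executable is available; the module name is taken from this page, and the rest is a generic sanity check:

# load the Gaussian 16 module (same line used in the batch scripts below)
module load gaussian/g16

# confirm that the g16 executable is now on your PATH
which g16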

Running Gaussian 16

There are a few ways to run Gaussian 16 on the cluster:  serial (only one computing core), shared-memory parallel (using the cores in parallel on one node), distributed-memory parallel (using cores on multiple nodes), or a shared-memory/distributed-memory hybrid (multiple cores on multiple nodes, where Gaussian's parallel execution environment, Linda, handles communication between nodes and shared memory is used within each node).

Serial 

Here is a SLURM batch script for serial Gaussian 16 jobs:

#!/bin/tcsh 
#SBATCH --job-name=GaussianSerial
#SBATCH -N 1 --ntasks-per-node 1
#SBATCH -t 6:00:00

module load gaussian/g16 # <---- add if not loaded automatically
g16 < input.com > test.out
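
The scripts on this page assume an input file named input.com.  As a point of reference, here is a minimal sketch of creating such a file from the shell; the route line, geometry, and checkpoint file name are illustrative assumptions, not anything specific to the W&M setup:

# write a small example input file: a Hartree-Fock single-point energy for water
cat > input.com << EOF
%Chk=water.chk
#P HF/6-31G(d) SP

Water single-point energy (example)

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200

EOF

Note that Gaussian input files must end with a blank line, which the example above preserves before the closing EOF.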

Shared-memory parallel

This is an example of shared-memory parallel execution.  It is the same as the serial script except that 1) multiple cores are specified (--ntasks-per-node 20) and 2) the extra '-p=<N>' option is passed to g16, where <N> is the number of cores to use:

#!/bin/tcsh 
#SBATCH --job-name=GaussianSMParallel
#SBATCH -N 1 --ntasks-per-node 20
#SBATCH -t 6:00:00
module load gaussian/g16 # <---- add if not loaded automatically
g16 -p=20 < input.com > test.out
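
If you prefer not to hardcode the core count in both the #SBATCH line and the g16 line, the value can be taken from SLURM's environment instead; a sketch, assuming the batch script above (SLURM sets SLURM_NTASKS_PER_NODE when --ntasks-per-node is given):

# use the core count requested from SLURM rather than repeating "20" by hand
g16 -p=$SLURM_NTASKS_PER_NODE < input.com > test.out

Equivalently, the shared-memory core count can be set inside the input file with a %NProcShared Link 0 line instead of the -p command-line option; see the Gaussian 16 documentation for details.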

Distributed-memory parallel

Here is an example script for running a distributed-memory parallel Gaussian 16 job.  The main differences from the serial and shared-memory scripts are: 1) two nodes are requested, each using 20 cores; 2) the GAUSS_WDEF environment variable is set; and 3) the getlinda script is run with an argument of '1' to indicate that distributed-memory parallelism (Linda) is to be used for communication between all cores:

#!/bin/tcsh 
#SBATCH --job-name=GaussianDMParallel
#SBATCH -N 2 --ntasks-per-node 20
#SBATCH -t 6:00:00

module load gaussian/g16 # <---- add if not loaded automatically
setenv GAUSS_WDEF `getlinda 1`
g16 < input.com > test.out
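
Because getlinda is a W&M-provided helper script, it can be useful to record what it returned in the job log before g16 starts; here is a small sketch that could be added to the batch script above (the exact format of the worker list depends on the local getlinda script):

# optional sanity check: record the Linda worker list and the nodes SLURM assigned
echo "GAUSS_WDEF = $GAUSS_WDEF"
echo "SLURM nodes: $SLURM_JOB_NODELIST"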

Hybrid shared/distributed memory parallel

This final approach combines both of the previous two examples.  Here, one distributed-memory process runs on each node (--ntasks-per-node 1) and is responsible for communication between nodes, while shared memory is used for communication within each node (--cpus-per-task 20).  This script sets the GAUSS_WDEF environment variable and calls the getlinda script with an argument of '0' to indicate that only one distributed-memory process should be launched per node:

#!/bin/tcsh 
#SBATCH --job-name=GaussianHybrid
#SBATCH -N 2 --ntasks-per-node 1 --cpus-per-task 20
#SBATCH -t 12:00:00
module load gaussian/g16 # <---- add if not loaded automatically
setenv GAUSS_WDEF `getlinda 0`
g16 -p=20 < input.com > test.out
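
Whichever variant you choose, the batch script is submitted with sbatch and can be monitored with squeue; a quick sketch, assuming the script above was saved as g16-hybrid.sh (the file name is arbitrary):

# submit the batch script; SLURM prints the job ID
sbatch g16-hybrid.sh

# check the state of your jobs (pending, running, etc.)
squeue -u $USER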

Gaussian suggests that the hybrid method may be faster than the fully distributed-memory mode; however, users should test this on a sample calculation.
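
One simple way to make that comparison is to run the same input in both modes and look at the timing summary Gaussian 16 prints at the end of each output file; a sketch, assuming the two runs wrote test-hybrid.out and test-dm.out (names chosen here only for illustration):

# compare wall-clock and CPU time reported at the end of each Gaussian 16 run
grep "Elapsed time" test-hybrid.out test-dm.out
grep "Job cpu time" test-hybrid.out test-dm.out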

Running GaussView

GaussView is included in the W&M site license.  GaussView can be run from an interactive job on any main-campus HPC cluster; however, it is best run on one of the serial/shared-memory clusters (hima, gust, astral or gulf) since it only runs on one core.  Users should not run GaussView on a front-end/login server without prior permission from HPC staff.  GaussView can also be installed on any W&M-owned computer; contact hpc-help@wm.edu to request a copy of the software if you wish to install it on your W&M-owned machine.

Since GaussView is a graphical program, you will need to log in to the cluster with X11 forwarding enabled between your local computer and the cluster.
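
For example, from a Linux or macOS machine with an X server running, X11 forwarding is enabled with ssh's -Y (or -X) option; the hostname below is a placeholder for whichever front-end you normally use:

# log in with X11 forwarding enabled (replace the placeholder with your usual front-end)
ssh -Y your_username@<front-end-hostname>

# once logged in, DISPLAY should be set if X11 forwarding is working
echo $DISPLAY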

Once logged into the cluster, request an interactive job (with X11 forwarded):

salloc -N1 -n1 -t 30 --x11

This will put you on a compute node with one core available for work.  Next, launch the GaussView program:

gview.sh

Then you should see the GaussView display appear.   

Please send email to hpc-help@wm.edu if there are any questions about running Gaussian 16 or GaussView.