GAMESS - General Atomic and Molecular Electronic Structure System

Introduction

The General Atomic and Molecular Electronic Structure System (GAMESS) is a general ab initio quantum chemistry package. See the GAMESS home page for documentation. The version installed on WestGrid systems is the one maintained by the Gordon group at Iowa State University, not GAMESS-UK.

GAMESS has been used by a number of WestGrid researchers, some of whom have installed it in their own directories. WestGrid administrators have installed GAMESS for general use on Checkers, Jasper, Lattice and Grex; see the system-specific notes below. See the GAMESS entry in the main WestGrid software table for information about the version installed on each system.

Licensing

We are allowed to expose the executables, documentation and test files to our users, but access to the source code is restricted.

Please read the license conditions on the GAMESS website before using the software on WestGrid.

Running GAMESS on Checkers

On Checkers (checkers.westgrid.ca), the GAMESS environment is configured with the module command (as illustrated in the sample scripts below). The module load gamess command sets up the environment for the default version of GAMESS. If the default version is not the one you want, type module avail to see the list of available versions, then change your script to load the specific version you want (module load gamess/version_date). See the WestGrid modules page for more information about modules.
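
For example, to list the installed GAMESS modules and load a specific one (the version name below is illustrative; use one reported by module avail):

module avail
module load gamess/01Oct2010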

As of this writing (2009-11-24), the installed version uses sockets rather than MPI for internode communication.  Although an MPI version was also tested and appeared to be faster than the sockets version, there are complications in how to request the computing resources when using the MPI version.  The scripts below are appropriate for the sockets version.  Please contact WestGrid technical support if you plan to use GAMESS, to make sure that there have not been significant changes in the way that it should be run.

An example of a minimal batch job script for running a serial version of the 01Oct2010 GAMESS code is:

#!/bin/bash
#PBS -S /bin/bash
#PBS -l walltime=24:00:00
# Set up the GAMESS environment, change to the submission directory and run.
module load gamess/01Oct2010
cd $PBS_O_WORKDIR
rungms inputfile 01
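
Assuming the script above is saved as gamess_serial.pbs (the file name here is just an example), it can be submitted with:

qsub gamess_serial.pbs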

A minimal script for a parallel job using the 01Oct2010 GAMESS code is:

#!/bin/bash
#PBS -S /bin/bash
#PBS -l procs=xxxx
#PBS -l pmem=2gb
#PBS -l walltime=24:00:00
# Replace xxxx above with the number of cores to request.
module load gamess/01Oct2010
cd $PBS_O_WORKDIR
# Count the cores assigned by TORQUE and pass the count to rungms.
ncpus=$(wc -l $PBS_NODEFILE | awk '{print $1}')
rungms inputfile 01 $ncpus

The above scripts show a "01" argument after the input file name on the rungms command line. This instructs rungms to run an executable named gamess.01.x. However, for the gamess.v25Mar2010 version (which, as of this writing, is the version loaded by a plain module load gamess), the executable is named gamess.00.x, so one should use rungms inputfile 00 rather than rungms inputfile 01.
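
For example, with the default v25Mar2010 module loaded, the last lines of the parallel script above would become:

module load gamess
ncpus=$(wc -l $PBS_NODEFILE | awk '{print $1}')
rungms inputfile 00 $ncpus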

 

Running GAMESS on Jasper

On Jasper (jasper.westgrid.ca), the GAMESS environment is also configured with the module command.

The Jasper installation uses MPI rather than sockets for internode communication. Either the 'procs' or 'nodes:ppn' form can be used to request cores; the 'procs' form is recommended, as such jobs tend to start sooner.

A minimal script for a parallel job using the 01May2012 GAMESS code is:

#!/bin/bash
#PBS -S /bin/bash
#PBS -l procs=xxxx
#PBS -l pmem=2gb
#PBS -l walltime=24:00:00
module load application/gamess/20120501
cd $PBS_O_WORKDIR
ncpus=$(wc -l $PBS_NODEFILE | awk '{print $1}')
rungms inputfile 01 $ncpus

 

Running GAMESS on Lattice

GAMESS has been configured on Lattice for MPI parallel execution. Good parallel performance can be expected for appropriate types of calculations and problem sizes; for example, a common Taxol benchmark scales well to at least 32 processors.

GAMESS on Lattice makes use of the MVAPICH 1.2 MPI-1 environment. To allocate processes properly, TORQUE jobs must be run using the rungms script provided. This script takes into account the GAMESS behavior of running one data communication process and one computational process on each processor; the data communication process is lightweight and contributes little to the processor load. As noted in the Lattice QuickStart Guide, it is recommended that jobs on Lattice request whole nodes using a TORQUE resource request of the form -l nodes=nn:ppn=8, with the processors-per-node parameter (ppn) equal to 8 and nn equal to the number of nodes. Do not use the procs resource request, or the rungms script used to start GAMESS may not function correctly.

GAMESS scratch files are created in a sub-directory of the user's /global/scratch/$USER/gamess-scr directory; each job uses a unique scratch directory named with the TORQUE job number. The scratch files are removed after each job, whether it succeeds or fails, but it is good practice to check for leftover scratch files following a run that terminates abnormally.
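
For example, to look for and remove a leftover scratch directory after an abnormal termination (assuming, as described above, that the directory is named with the TORQUE job number, shown here as the placeholder 1234567):

ls /global/scratch/$USER/gamess-scr
rm -rf /global/scratch/$USER/gamess-scr/1234567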

The rungms script writes important output files to the directory from which the job was submitted. The script will exit immediately, with a warning, if files exist from a previous run.
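
Before resubmitting a job with the same input name, remove or rename the leftover files named in the warning. For example, if a previous Taxol run left a supplementary .dat file in the submission directory (the exact file names depend on the run, so check the warning message):

mv taxol-6-31g.dat taxol-6-31g.dat.old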

As an example, copy the Taxol GAMESS input files to a gtest directory:

mkdir ~/gtest
cd ~/gtest
cp /global/software/gamess/gamess2010oct01/sample/* .

The file taxol.pbs contains:

#!/bin/bash
GMS_PATH="/global/software/gamess/gamess2010oct01"
VER="03"
# Strip the ".hn001" suffix from the TORQUE job ID to get a short job number.
JOBNUM=$(basename $PBS_JOBID .hn001)
cd ${PBS_O_WORKDIR}
# Run GAMESS, writing the log to a file tagged with the job number.
${GMS_PATH}/rungms taxol-6-31g ${VER} &> taxol-6-31g.${JOBNUM}.log

Note that the version number 03 corresponds to the R3 release of GAMESS 2010-Oct-01, which is also the default. To submit the job using 24 processors (3 nodes with 8 processors per node):

qsub -l walltime=1:00:00 -l nodes=3:ppn=8 -l mem=30gb taxol.pbs

Adjust the mem parameter according to the number of whole nodes requested, requesting 10gb per node.  A copy of the output from this sample run is in taxol-6-31g.sample.log.
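
For example, the same job run on 4 whole nodes (32 processors) would request 40gb:

qsub -l walltime=1:00:00 -l nodes=4:ppn=8 -l mem=40gb taxol.pbs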

Running GAMESS on Grex

On Grex (grex.westgrid.ca), both sockets and MPI builds of GAMESS are available. The current (May 2012 R1) and previous (Aug. 2011 R2) versions of the code are installed. Note that the Aug. 2011 versions include the dispersion-correction patches from the Clemence Corminboeuf group.

The procedure for running sockets-based GAMESS jobs is similar to that on the Checkers cluster (see above). The GAMESS environment is set up with module load gamess/VERSION (check module avail for the available version names). The sockets version of GAMESS is invoked with the rungms script, with the $VERNO value (the second parameter to rungms) set to sockets. For the MPI build of GAMESS, the script is rungms-impi and $VERNO should be set to mpi. Both the -l procs=xxx and -l nodes=yyy:ppn=zzz styles can be used to specify CPU resources in the TORQUE scripts; for the sockets version there are no assumptions about how the CPU cores are distributed over the requested nodes. Input files for GAMESS usually have the .inp extension (as in exam01.inp).

Behavior of the script can be influenced by environment variables. By default, the script attempts to use local scratch directories on the nodes, which is the best choice in most cases. If there is not enough space there, setting the GLOBAL_SCRATCH environment variable will place temporary GAMESS files under /global/scratch/$USER/gamess-scratch. To use custom basis sets, set the EXTBAS environment variable to the basis set file.
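
For example, these lines could be added to a job script before the rungms call (the basis set path below is only a placeholder):

# use shared scratch under /global/scratch instead of node-local scratch
export GLOBAL_SCRATCH=yes
# point GAMESS to a custom basis set file
export EXTBAS=/global/scratch/$USER/my.basis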

An example job script for a parallel, sockets-based GAMESS run on Grex is below. Note that it uses 8 CPU cores and node-local scratch (GLOBAL_SCRATCH is commented out, which is the recommended setting). A custom basis set can be supplied by setting the EXTBAS variable, also shown commented out below; leave it commented out if using only built-in basis sets.

#!/bin/bash
#PBS -l procs=8,mem=1000mb,walltime=2:00:00
#PBS -r n
#PBS -o /global/scratch/USERNAME/exam31-L22-global.out
#PBS -e /global/scratch/USERNAME/exam31-L22-global.err

cd $PBS_O_WORKDIR
module load gamess
ncpus=$(wc -l $PBS_NODEFILE | awk '{print $1}')
#export EXTBAS=/global/scratch/gshamov/devel/gamess-test/tests/laikov.basis
#export GLOBAL_SCRATCH=yes
rungms exam31-L22.inp sockets $ncpus
echo "all done!"

An example for the MPI version follows. This one uses the nodes:ppn syntax and saves the log file as the job runs, rather than waiting for TORQUE to copy it back at the end, by redirecting the standard output:

#!/bin/bash
#PBS -l nodes=4:ppn=6,pmem=1000mb,walltime=2:00:00
#PBS -r n
#PBS -o /global/scratch/USERNAME/exam31-L22-mpi.out
#PBS -e /global/scratch/USERNAME/exam31-L22-mpi.err

cd $PBS_O_WORKDIR
module load gamess ; module load intel/impi
ncpus=$(wc -l $PBS_NODEFILE | awk '{print $1}')
rungms-impi exam31.inp mpi $ncpus &> /global/scratch/USERNAME/exam31-L22-mpi.log
echo "all done!


Updated 2012-05-18.