Using Amber on WestGrid Systems



Amber, together with AmberTools, is a suite of programs for molecular calculations, including molecular dynamics. The main programs for use on WestGrid systems are pmemd and sander. For the types of analyses it supports, pmemd is generally preferred, as it has better parallel scaling and other optimizations that make it faster than sander. GPU acceleration is also supported in pmemd on the WestGrid Parallel cluster. See the Amber web site for descriptions of these and the other programs in the Amber and AmberTools packages.

WestGrid has purchased licenses for Amber 10, 11 and 12. Access to the Amber executables is available only to users who have agreed to the license conditions, as described in the section below.

A few points about using Amber on WestGrid systems are given below. Like other jobs on WestGrid systems, Amber jobs are run by submitting an appropriate script for batch scheduling using the qsub command. See the documentation on running batch jobs for more information. If you have questions about running this software that are not answered here, please contact WestGrid support.

License conditions and requesting access

Please review the relevant sections of the license terms. Then, if you agree, send an email with the subject line "Amber access requested for your_user_name", indicating that you have read and can abide by the conditions of use. Your username will then be added to the wg-amber UNIX group that is used to control access to the software on some WestGrid systems. As indicated below, access on Bugaboo is handled differently.

(Note that WestGrid has purchased a site license, so the parts of the license pages referring to fees and software orders are not relevant to using the software on WestGrid.)

Running Amber on Bugaboo

Amber 10 and 11 have been installed on Bugaboo.  To use Amber 10, please run the following command at least once interactively:

module load amber/10

You will be asked to accept the Amber license. After that you can use the "module load amber/10" command to access the software in job submission scripts.

Similarly, to use Amber 11, please run the following command at least once interactively:

module load amber/11

You will be asked to accept the Amber license. After that you can use the "module load amber/11" command to access the software in job submission scripts.

Running Amber on Checkers

Amber 10 has been installed on Checkers under /global/software/amber10.

Amber 11 has been installed in /global/scratch/software/amber11. It was built using Intel 11.1 compilers, MKL 10.2, and Intel MPI 3.2.1.

As of this writing (2011-02-15) there is no module available to set up the Amber 11 environment, but you can do so by:

   1. Setting the environment variable AMBERHOME to /global/scratch/software/amber11
   2. Prepending $AMBERHOME/bin to your PATH
For example, in a bash shell environment:

export AMBERHOME=/global/scratch/software/amber11
export PATH=$AMBERHOME/bin:$PATH

Running Amber on Grex

Amber 12 has been installed on Grex under /global/software/amber-12. Initialize the Amber environment using:

module load amber/12

Running Amber on Jasper

Amber 12 has been installed on Jasper under /global/software/amber/amber12. Initialize the Amber environment using:

module load application/amber/12

Running Amber on Lattice

Amber has been installed on Lattice and Parallel in version-specific subdirectories under /global/software/amber.

Please note that WestGrid accounts are not automatically set up on Lattice.  Instructions for obtaining an account are in the Lattice QuickStart Guide.

Amber 11 (serial and OpenMPI parallel) has been installed on Lattice under /global/software/amber/amber11+at15. This build is based on AmberTools 1.5. An older build with AmberTools 1.4 is in /global/software/amber/amber11, but the newer version should presumably be preferred.

Two builds of Amber 12 are available, in /global/software/amber/amber12 and /global/software/amber/amber12_mkl. The latter, which uses the Intel Math Kernel Library (MKL), was about twenty percent faster in initial testing than the version built without MKL. Note that the binaries for Amber 11 are in a subdirectory called exe, but in the Amber 12 release this was changed to bin. Also, the Intel 12 compiler was used (rather than the Intel 11 compiler used for previous versions), so an appropriate module must be loaded before running the code, as shown in the example script below.

There are complete manuals available on Lattice as PDFs in the doc subdirectory for each version.

To set up your environment, the AMBERHOME variable needs to be set, and some Amber programs require $AMBERHOME/bin to be added to your PATH. For example, in bash, use:

export AMBERHOME=/global/software/amber/amber12_mkl
export PATH=$AMBERHOME/bin:$PATH

Only very short serial tests (a few minutes) should be run on the login node. Production calculations and parallel runs should be executed using the TORQUE/Moab batch system. Here is an example batch job script:

#PBS -S /bin/bash

# Script for running Amber 12 pmemd (OpenMPI) on Lattice

# Run from the directory from which the job was submitted:
cd $PBS_O_WORKDIR
echo "Current working directory is `pwd`"

NUM_PROCS=`/bin/awk 'END {print NR}' $PBS_NODEFILE`
echo "Running on $NUM_PROCS processors."

# Set up the Amber 12 (MKL build) environment:
export AMBERHOME=/global/software/amber/amber12_mkl
export PATH=$AMBERHOME/bin:$PATH

# Amber 12 was compiled with the Intel 12 compiler, so set up that environment:
module unload intel
module load intel/12

echo "PBS node file location: $PBS_NODEFILE"
echo "------------------"

# INPUT, OUTPUT, PARM and INPCRD are expected to be set in the
# environment (for example, passed to qsub with the -v option).
echo "Starting run at: `date`"
mpiexec -n $NUM_PROCS pmemd.MPI -O -i $INPUT -o $OUTPUT -p $PARM -c $INPCRD
echo "Finished run at: `date`"

To run the above script, if it is called amber.pbs, use the qsub command:

qsub -l nodes=1:ppn=8 -l walltime=0:10:0 amber.pbs

Note that the parallel (OpenMPI) version of the executable, pmemd.MPI, must be used in the mpiexec command.

It is critical that the number of processors requested for the job (nodes × ppn) matches the number of processes specified in the mpiexec command. Choosing an appropriate walltime will ensure the job is scheduled as soon as suitable resources are available. Run short test simulations of parallel Amber jobs to determine an appropriate number of nodes and to estimate the walltime required. In general, whole nodes (ppn=8) are appropriate for most Amber calculations. More guidance on parallel usage is provided in the Amber and AmberTools User's Manuals; some programs have specific restrictions, such as "-n must be a multiple of 4" or "maximum -n is 12".
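As an aside, the NUM_PROCS line in the example script works because TORQUE writes the node file with one line per core allocated to the job, so counting its lines gives the total processor count. A quick sketch with a mock node file (the host name and temporary file here are made up for illustration):

```shell
# Mock node file: TORQUE lists one line per allocated core, so a
# nodes=1:ppn=4 request produces the same host name four times.
NODEFILE=$(mktemp)
printf 'cn001\ncn001\ncn001\ncn001\n' > "$NODEFILE"

# The same counting command used in the script above:
NUM_PROCS=$(awk 'END {print NR}' "$NODEFILE")
echo "$NUM_PROCS"    # prints 4, matching nodes x ppn

rm -f "$NODEFILE"
```

This count is the value that must match the -n argument given to mpiexec.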

Running Amber on Parallel

On Parallel, some of the compute nodes have general purpose graphics processing units (GPUs) that can be used to speed up Amber calculations. There is a discussion of the GPU-enhanced capabilities of Amber on the Amber web site.

Please note that WestGrid accounts are not automatically set up on Parallel.  Instructions for obtaining an account are in the Parallel QuickStart Guide.

Lattice and Parallel share /global/software, so most of the description for Lattice above also applies to Parallel. However, there are a couple of important differences with respect to job submission:

  • Parallel has 12 cores per node, so instead of ppn=8 one should use ppn=12. See the Parallel QuickStart Guide for more information.
  • GPU-enabled nodes are not assigned by default; they have to be requested using TORQUE queue and resource directives, as explained on the WestGrid GPU Computation page.
  • The GPU-enabled binaries are the ones with cuda in the name, for example, pmemd.cuda.MPI.
  • In addition to the Intel compiler-related modules, use module load cuda before running pmemd.cuda.MPI:

module unload intel
module load intel/12
module load cuda

mpiexec -n $NUM_PROCS pmemd.cuda.MPI -O -i $INPUT -o $OUTPUT -p $PARM -c $INPCRD


Updated 2012-10-31.