ORCA on WestGrid Systems

Introduction

As described on the ORCA home page at http://www.thch.uni-bonn.de/tc/orca/, ORCA is an electronic structure package "with specific emphasis on spectroscopic properties of open-shell molecules" which "features a wide variety of standard quantum chemical methods ranging from semiempirical methods to DFT to single- and multireference correlated ab initio methods."

ORCA has been installed on the WestGrid Lattice and Grex clusters. Access is restricted to approved users only.

Requesting Access

WestGrid has an agreement with the ORCA distributors under which we may provide access to the software on our servers to a researcher if

  1) The prospective user has read and can agree to the conditions of the license posted at http://www.thch.uni-bonn.de/tc/orca/.

and
 
  2) At least one member of his or her research group has registered with the ORCA group at that site.

If you have fulfilled the above conditions and would like access to ORCA on WestGrid systems, please write to support@westgrid.ca with a subject line:

  ORCA access request (your_WestGrid_username)

In your note, please confirm that you have read and agree to the license conditions and let us know who in your group has registered on your behalf, if you have not done so yourself.

Upon receipt of your note, your WestGrid username will be added to the wg-orca UNIX group that is used to control access to the software.
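
Once the change has been made, you can confirm from a login node that you are in the group. This is a minimal check using standard UNIX commands; the wg-orca group name is as given above:

groups | grep wg-orca

If wg-orca appears in the output, you should be able to read the ORCA installation directories described below.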

Site-specific Notes

Lattice and Grex

ORCA has been installed in a version-specific directory under /global/software/orca. Check there for the version you would like to use.
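
For example, to list the installed versions (the path is as given above; the version-specific directory names may differ from those shown elsewhere on this page):

ls /global/software/orca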

Note that on Grex, the most recent version of ORCA, 2.9.0, is available through the standard module command (module load orca).
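
A typical sequence on a Grex login node looks like the following. This is a sketch of the usual Environment Modules workflow; only the orca module name is taken from this page:

module avail orca
module load orca
which orca

After loading the module, which orca should report the full path to the main executable. Note that the sample script below calls ORCA through an explicit path rather than relying on the module.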

Here is a sample batch job script, orca.pbs, which is also available as /global/software/orca/examples/orca.pbs.

#!/bin/bash
#PBS -S /bin/bash

# Sample ORCA script.
# 2011-03-04 DSP
# In this version, the program will be run in the same directory
# as this script. (No attempt is made to copy files to and from
# storage local to the compute nodes.)

# Specify the ORCA input (.inp) file.
# Note any PAL directives in the file are ignored.
# The number of parallel processes to use
# will be taken from the TORQUE environment.

ORCA_RAW_IN=orca.inp

# Specify an output file

ORCA_OUT=orca_${PBS_JOBID}.out

cd $PBS_O_WORKDIR

echo "Current working directory is `pwd`"

echo "Node file: $PBS_NODEFILE :"
echo "---------------------"
cat $PBS_NODEFILE
echo "---------------------"
NUM_PROCS=`/bin/awk 'END {print NR}' $PBS_NODEFILE`
echo "Running on $NUM_PROCS processors."

# Create a temporary input by copying the
# raw input file specified above and then
# appending a line to specify the number of
# parallel processes to use.

echo "Creating temporary input file ${ORCA_IN}"

ORCA_IN=${ORCA_RAW_IN}_${PBS_JOBID}
cp ${ORCA_RAW_IN} ${ORCA_IN}

cat >> ${ORCA_IN} <<EOF
%PAL nprocs $NUM_PROCS end
EOF

# The orca command should be called with its full path,
# and the other ORCA executables should be on the command PATH.

ORCA_HOME=/global/software/orca/orca_2_9_0_linux_x86-64
ORCA=${ORCA_HOME}/orca
export PATH=${ORCA_HOME}:$PATH

# Define the variable RSH_COMMAND for communication
# between nodes for starting independent calculations
# as described in the user manual, section 3.

export RSH_COMMAND="/usr/bin/ssh"

echo "Starting run at: `date`"
$ORCA ${ORCA_IN} > ${ORCA_OUT}
echo "Job finished at: `date`"

Change the ORCA input file name, orca.inp, to match your own input file and submit the job with qsub.  On Lattice, we prefer that you use whole nodes by specifying ppn=8 and choosing the number of nodes according to how well ORCA parallelizes for your type of calculation.  For example:

qsub -l nodes=2:ppn=8,walltime=72:00:00 orca.pbs

On Grex, there are 12 cores per node, so you may use up to ppn=12.  If using less than a whole node, you should specify a pmem (memory per process) parameter so that the scheduler knows how much of the node's resources remains available for other users.  There is usually no need to restrict a job to a single node, and the more flexible procs specification may give you shorter queuing times.  Some ORCA jobs, such as coupled cluster, CI, or MC-SCF calculations, use a lot of disk space, so it is recommended to specify the file resource as well.  For example:

qsub -l procs=6,pmem=4gb,walltime=72:00:00,file=30gb orca.pbs

Updated 2012-03-28.