Hermes/Nestor QuickStart Guide

About this QuickStart Guide

This QuickStart guide provides a brief overview of the Hermes and Nestor clusters, indicating their role within WestGrid and highlighting some of the features that distinguish them from other WestGrid resources. It is intended to be read by new WestGrid account holders and by current users considering these systems.

For more detailed information about the hardware and performance characteristics, available software, usage policies and how to log in and run jobs, follow the links given below.

This system is also documented as part of the University of Victoria's Research Computing Facility.


Hermes is a capacity cluster geared towards serial jobs. It consists of 84 nodes with 8 cores each and 120 nodes with 12 cores each, for a total of 2112 cores. Nestor is a capability cluster of 288 nodes (2304 cores) geared towards large parallel jobs. The two clusters share infrastructure such as resource management, job scheduling, networked storage, and service nodes.

Both hostnames are aliases for a common pool of head nodes named after the Litai, who in Greek mythology were the personification of prayers from mortals to the gods. The choice of destination hostname when connecting to this facility has no bearing on what kind of jobs may be run: one may log in to either host and submit a parallel job to the Nestor cluster.



Each Nestor node and each of the original 84 Hermes nodes is an IBM iDataPlex server with eight 2.67-GHz Xeon X5550 cores and 24 GB of RAM. The newer 120 Hermes nodes are Dell C6100 servers with twelve 2.66-GHz Xeon X5650 cores and 24 GB of RAM.


The original 84 Hermes nodes use two bonded Gigabit Ethernet links (2 Gbit/s aggregate bandwidth) to access the NFS and GPFS filesystems. The Hermes expansion nodes use 4X QDR InfiniBand instead, with a 10:1 blocking factor.

Nestor nodes share data with each other and the GPFS filesystem over a high-speed InfiniBand interconnect (4X QDR non-blocking connections giving a 40 Gbit/s signal rate with a 32 Gbit/s data rate).


1.2 PB of storage is deployed to the clusters through the General Parallel File System (GPFS), a high-performance clustered file system that provides both fast data access and fault tolerance across cluster participants.  This storage holds user home directories, scratch space for running jobs, and installed software.  Disk storage is backed up, where appropriate, to a dedicated backup system.

Disk usage is monitored and users are asked to stay within their quotas or request a storage allocation.

Key file spaces, their intended uses, backup policies and quotas are as follows:


  • /home/username is your home directory (assigned to the HOME environment variable).
  • Only essential data should be stored here, such as source code and processed results.
  • Backed up nightly; after a file is deleted, its most recent backup is retained for 180 days.
  • Quota: 300GB per user.


  • /global/scratch/username is your scratch directory. 
  • This is your work area for jobs.  Please use this for data sets and job processing.
  • This file area is not backed up.
  • Quota: 1TB per user.
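Since both the home and scratch areas have quotas, it is worth checking your usage periodically. One portable option is the standard du tool, applied to the paths described above (note that du can be slow on large directory trees):

```shell
# Summarize total disk usage of your home directory
du -sh "$HOME"

# Summarize your global scratch usage (this path exists only on the clusters)
[ -d "/global/scratch/$USER" ] && du -sh "/global/scratch/$USER" || true
```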


  • /global/software is where most software of user interest is installed, such as applications, analysis frameworks and support libraries.
  • A list of such software is available below, but for a current, up-to-date list please use ls /global/software. Most of this software can be conveniently accessed using modules.
  • This file area is backed up nightly.

/scratch: Local scratch space on the nodes

The first 84 Hermes nodes and all Nestor nodes have a 250GB drive with about 225GB available as local, non-persistent scratch space for the lifetime of a job. This works out to roughly 28GB of scratch space per core.

The newer 120 Hermes nodes have a 500GB drive, of which about 433GB is available as scratch space.

The scratch space can be accessed via the environment variable TMPDIR.


See the main WestGrid software page for comparative tables listing the installed software on Hermes, Nestor and other WestGrid systems, including information about the operating system and compilers. As of August 1st, 2012, Nestor and Hermes use an environment management system called modules to provide access to most of the software. For more information about using modules, please check our modules environment page.
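A typical interactive session with modules might look like the following sketch (the intel module name is only an illustration; the actual names come from module avail):

```shell
# List all software packages available through modules
module avail

# Load a package into the current environment (module name is illustrative)
module load intel

# Show which modules are currently loaded
module list

# Remove a module when it is no longer needed
module unload intel
```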

Some of the software installed includes (for a list of software available through modules, please issue module avail on the command line):

  • Intel Cluster Suite, including C, C++ and Fortran compilers as well as MKL
  • ABySS, Trans-ABySS, mothur, BLAST, FastQC, Trimmomatic

Using Hermes and Nestor

To log in to Hermes and Nestor, connect with an ssh (secure shell) client. For more information about connecting and setting up your environment, see Setting up Your Computer.

As on other WestGrid systems, batch jobs are handled by a combination of TORQUE and Moab software. For general information about submitting jobs, see Running Jobs.

Jobs are routed according to the resources requested, so that specifying a queue should for the most part be unnecessary. Jobs that request one node (or make no specific request) will be queued for Hermes nodes; jobs that request more than one node will be queued for execution on Nestor.

Queues may be explicitly requested using the -q <queue> notation on the qsub command line. The general-use queues are:

  • hermes - general Hermes-appropriate jobs
  • nestor - general Nestor-appropriate jobs
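For instance, to force a job onto the Hermes queue (myjob.pbs is a placeholder for your own submission script):

```shell
# Submit a job script explicitly to the hermes queue
qsub -q hermes myjob.pbs
```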

Wall time specification

Wall time is the amount of real time for which a job runs, regardless of how much CPU it consumes or other factors; in other words, it is the time that would be recorded on a wall clock.

By default, jobs have a wall time of one minute. This encourages users to specify a more realistic wall time. A common practice is to estimate the expected run time and multiply by three. To specify a wall time for a job, include the following directive at the top of the submission script (this example is for a 24-hour job):

#PBS -l walltime=24:00:00

Wall times enable prioritization and queuing based on the length of time resources will be consumed, and to some extent may be used by users to predict when their queued jobs may run.

The maximum walltime on Nestor and Hermes is 72 hours (3 days). For more information about the scheduling policies on Nestor and Hermes, please check the Nestor/Hermes Job Scheduling page.

Processor specification

One may request a specific number of processors; processors chosen by the scheduler may be on any node. In this example, two processors are requested:

#PBS -l procs=2

One may also request multiple processors on a single node:

#PBS -l nodes=1:ppn=4

Finally, one may also request multiple processors on multiple nodes:

#PBS -l nodes=4:ppn=8

Jobs requesting 8 cores or fewer should run on Hermes; larger jobs should run on Nestor.

Memory specification

Each node has 24GB of memory, of which 1-2GB is used by the operating system, depending on the image used. This leaves roughly 22GB of memory for jobs. The default memory allocation per job is 1024MB. To request more, a resource directive like the following may be used (this example is of course for 2GB):

#PBS -l mem=2048mb

The mem parameter is the total memory limit for a job. For a parallel job, the pmem parameter can be used to specify a per-process memory requirement. For example:

#PBS -l procs=10,mem=20gb,pmem=2gb

This example requests 10 processors, with 2GB of memory per process, and 20GB total memory.
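Putting the walltime, processor and memory directives together, a complete submission script might look like the following sketch (the program name my_parallel_program and the resource values are illustrative):

```shell
#!/bin/bash
#PBS -l walltime=24:00:00
#PBS -l procs=10,mem=20gb,pmem=2gb

# Start in the directory from which the job was submitted
cd $PBS_O_WORKDIR

# Launch the (illustrative) MPI program on the allocated processors
mpiexec ./my_parallel_program
```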

Using scratch space in your job

The usual usage of the node's local scratch space is to first copy the necessary files to $TMPDIR, perform processing, and copy the results back from $TMPDIR to your home or global scratch space, as appropriate.
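This copy-in, compute, copy-out pattern can be sketched as a submission script like the following (input.dat, output.dat and my_program are placeholders for your own files and program):

```shell
#!/bin/bash
#PBS -l walltime=01:00:00

# Stage input data into the node-local scratch directory
cp $PBS_O_WORKDIR/input.dat $TMPDIR/

# Do the processing in local scratch (program name is illustrative)
cd $TMPDIR
./my_program input.dat > output.dat

# Copy results back before the job ends; local scratch is not persistent
cp output.dat $PBS_O_WORKDIR/
```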

Updated 2013-03-28.