Jasper QuickStart Guide

About this QuickStart Guide

This QuickStart guide gives a brief overview of the WestGrid Jasper facility, highlighting some of the features that distinguish it from other WestGrid resources. It is intended to be read by new WestGrid account holders and by current users considering whether to move to the Jasper system. For more detailed information about the Jasper hardware and performance characteristics, available software, usage policies and how to log in and run jobs, follow the links given below.

Introduction

The Jasper cluster is intended for general-purpose serial and MPI-based parallel computing.

Hardware

Processors

Jasper is an SGI Altix XE cluster consisting of 240 nodes. Each node has 12 cores and 24 GB of RAM, for a total of 2880 cores.

Interconnect

Jasper uses an InfiniBand 4X QDR (Quad Data Rate) 40 Gbit/s switched fabric with a 1:1 blocking factor (non-blocking), making it currently the fastest interconnect in WestGrid.

Storage

A Lustre file system is attached to both Hungabee and Jasper. It is housed in an SGI IS16000 disk array with 5 x 50-bay drive enclosures containing 250 x 2 TB SATA drives spinning at 7200 rpm; after RAID and volume configuration, this provides a single 355 TB file system. The parallel file system is available to users on Jasper and Hungabee through the InfiniBand interconnect.

Software

See the main WestGrid software page for tables showing the installed application software on Jasper and other WestGrid systems, as well as information about the operating system, compilers, and mathematical and graphical libraries.

Please write to WestGrid support if there is additional software that you would like installed.

Using Jasper

Getting started

Log in to Jasper by connecting to the host name jasper.westgrid.ca using an ssh (secure shell) client. For more information about connecting and setting up your environment, see the QuickStart Guide for New Users. In particular, the environment on Jasper is controlled using modules. Please see Setting up your environment with modules.
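
For example, assuming your WestGrid username is jdoe (a placeholder; substitute your own), you could connect and list the available environment modules with:

ssh jdoe@jasper.westgrid.ca
module avail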

Disk space

Disk space and file quotas are enforced on Jasper home directories (there is no /global/scratch).  Default limits are as follows:

Disk space:  1.0 TB
File count:  500,000

Quotas can be exceeded by up to 25% for up to 72 hours.

To view your quota information, type:

lfs quota -u <your username> /lustre

If you require more than the default disk space, you should apply for a RAC allocation.  If you require more than the default file count, you should contact support@westgrid.ca.


Batch job policies

As on other WestGrid systems, batch jobs on Jasper are handled by a combination of TORQUE and Moab software. For more information about submitting jobs, see Running Jobs.


Resource                                            Policy or limit
Maximum walltime                                    72 hours
Maximum number of running jobs for a single user    2880
Maximum number of jobs submitted                    5000
Maximum jobs in Idle queue                          5
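
For example, a job script requesting the maximum permitted walltime would include a line such as the following (illustrative):

#PBS -l walltime=72:00:00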


Interactive jobs

Except for compiling programs and small tests, interactive use of Jasper should be through the '-I' option to qsub.
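
For example, to request an interactive session with one processor for one hour (the resource values here are illustrative; adjust them to your needs):

qsub -I -l walltime=01:00:00,nodes=1:ppn=1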


Compiling and running programs

The latest Intel compilers are available on Jasper. The compiler commands are icc (C compiler), icpc (C++ compiler), and ifort (Fortran compiler). Please note that modules need to be loaded to use the compilers and MPI. The sections below show basic use of the compilers for OpenMP and for MPI-based parallel programs. Additional compiler options for optimization or debugging are often worth adding.
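
For example, an optimized build with the Intel C compiler might look like the following; the -O3 and -xHost options are illustrative, so consult the compiler documentation for what suits your code:

icc -O3 -xHost -o prog prog.c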

OpenMP programs

The Intel compilers include support for shared-memory parallel programs that include parallel directives from the OpenMP standard. Use the -openmp compiler option to enable this support, for example:

module load compiler/intel/12.1
icc -o prog -openmp prog.c
ifort -o prog -openmp prog.f90

Before running an OpenMP program, set the OMP_NUM_THREADS environment variable to the desired number of threads using bash-shell syntax:

export OMP_NUM_THREADS=12

or C-shell (tcsh) syntax:

setenv OMP_NUM_THREADS 12

depending on the shell you are using. Then, to test your program interactively, launch it as you would any other program:

./prog

Here is a sample TORQUE job script for running an OpenMP-based program. 

#!/bin/bash
#PBS -S /bin/bash
#PBS -l pmem=2000mb
#PBS -l nodes=1:ppn=12
#PBS -l walltime=12:00:00
#PBS -m bea
#PBS -M yourEmail@address

# Change to the directory from which the job was submitted
cd $PBS_O_WORKDIR

# Use one thread for each processor requested with ppn
export OMP_NUM_THREADS=$PBS_NUM_PPN

./prog
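
Assuming the script above is saved as openmp_job.pbs (an illustrative file name), submit it with:

qsub openmp_job.pbs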

MPI programs

MPI programs can be compiled using the compiler wrapper scripts mpicc, mpicxx, and mpif90, which invoke the GNU compilers. For the Intel compilers, use the mpiicc, mpiicpc, and mpiifort wrappers instead. Use the mpiexec command to launch an MPI program, for example:

module load compiler/intel/12.1
module load library/intelmpi/4.0.3.008
mpiicc -o prog prog.c
mpiexec -np 8 ./prog

After your program is compiled and tested, you can submit large-scale production runs to the batch job system. Here is a sample TORQUE batch job script for an MPI-based program.

#!/bin/bash
#PBS -S /bin/bash
#PBS -l pmem=2000mb
#PBS -l procs=30
#PBS -l walltime=12:00:00
#PBS -m bea
#PBS -M yourEmail@address

# Change to the directory from which the job was submitted
cd $PBS_O_WORKDIR

# mpiexec obtains the process count from the batch system
mpiexec ./prog > out
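
Assuming the script is saved as mpi_job.pbs (an illustrative file name), submit it and check its status with:

qsub mpi_job.pbs
qstat -u $USER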

Updated 2012-05-25.