MPI Parallel Computing

MPI parallel computing is available on the Linux HPC clusters Atlas4, Atlas5, Atlas6, and Atlas7.

MPI C/C++ and Fortran compiler wrappers are available on the clusters. The following sample instructions show how to compile and run an MPI program on the clusters.

Compile and build the program using the appropriate MPI compiler wrapper:

MPI C: mpicc -o cprog.exe cprog.c
MPI C++: mpiCC -o cppprog.exe cppprog.cpp
MPI F77: mpif77 -o f77prog.exe f77prog.f
MPI F90: mpif90 -o f90prog.exe f90prog.f90
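As an illustration, the commands above could be used to build a minimal MPI program such as the following. The filename and contents are a hypothetical sketch, not part of the cluster setup; each rank prints its rank and the total number of ranks.

```c
/* cprog.c - a minimal MPI example (illustrative only).
 * Build with: mpicc -o cprog.exe cprog.c */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime         */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank           */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of MPI processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down the MPI runtime     */
    return 0;
}
```

When run on 4 cores, each of the 4 ranks prints one line.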

Submit and run the MPI program.
On the HPC clusters, an MPI program must be submitted to and run in an LSF batch parallel computing queue. The batch parallel queue on each cluster is as follows:

Atlas4 atlas4_parallel
Atlas5 atlas5_parallel
Atlas6 atlas6_parallel, short_parallel
Atlas7 atlas7_parallel, short_parallel

You can submit the job using the following command:

bmpijob ./cprog.exe

This is equivalent to submitting the job as:

bmpijob -n 4 -o stdout.o ./cprog.exe

You can also use LSF’s bsub command directly and submit an MPI job as:

Atlas4> bsub -a mvapich -q atlas4_parallel -n 4 -o std-output.txt mpirun.lsf ./cprog.exe
Atlas5> bsub -a mvapich -q atlas5_parallel -n 8 -o std-output.txt mpirun.lsf ./cprog.exe
Atlas6> bsub -a mvapich -q atlas6_parallel -n 6 -o std-output.txt mpirun.lsf ./cprog.exe
Atlas7> bsub -a mvapich -q atlas7_parallel -n 12 -o std-output.txt mpirun.lsf ./cprog.exe
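Equivalently, the bsub options can be placed in a job script using LSF’s #BSUB directives and submitted with "bsub < myjob.lsf". The script below is a sketch for Atlas5; the script name is hypothetical, and the queue name and core count are taken from the examples above.

```shell
#!/bin/sh
# myjob.lsf - hypothetical job script equivalent to the Atlas5 command
# line above. Submit with: bsub < myjob.lsf
#BSUB -a mvapich             # use the MVAPICH MPI integration
#BSUB -q atlas5_parallel     # batch parallel queue on Atlas5
#BSUB -n 8                   # run on 8 processor cores
#BSUB -o std-output.txt      # file for job information and standard output

mpirun.lsf ./cprog.exe
```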

where:

bmpijob is the MPI job submission command,
-n 4 runs the job on 4 processor cores,
-o stdout.o names the file that stores job information and standard output messages, and
./cprog.exe runs the MPI program “cprog.exe”.

To submit/run the program on more than 4 CPU cores, use the “-n” flag, for example:

bmpijob -n 32 ./cprog.exe

The maximum number of CPU cores allowed per MPI job is currently 32. Once the job has completed, you can check the results in the same working directory.
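While the job is pending or running, you can monitor it with LSF’s standard commands. The job ID 12345 below is a placeholder; bjobs prints the actual ID after submission.

```shell
bjobs            # list your pending and running jobs
bjobs -l 12345   # detailed status of one job (placeholder job ID)
bpeek 12345      # peek at the job's standard output while it runs
bkill 12345      # cancel the job if necessary
```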