LAMMPS

LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a classical molecular dynamics program from Sandia National Laboratories.

It provides potentials for soft materials (biomolecules, polymers), solid-state materials (metals, semiconductors), and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.

The 16 March 2018 version of LAMMPS is compiled and available on the HPC systems. Please use the following template script (lammps_run) to submit your LAMMPS jobs to either the parallel12 or the parallel24 queue:

#!/bin/bash
#PBS -P project_lmp
#PBS -j oe
#PBS -N lmp_job1

###Submit jobs to queue parallel24 with 2*24 MPI processes.
#PBS -q parallel24
#PBS -l select=2:ncpus=24:mpiprocs=24:mem=20GB

###For submitting jobs to parallel12 queue to run on two nodes
###PBS -q parallel12
###PBS -l select=2:ncpus=12:mpiprocs=12:mem=10GB

cd $PBS_O_WORKDIR;   ## this line is needed, do not delete.
np=$(cat ${PBS_NODEFILE} | wc -l);  ### number of MPI ranks (one nodefile line per rank)
source /etc/profile.d/rec_modules.sh
module load lammps_gmp_2018;

export LAM=$(which lmp_mpi)  ### full path to the LAMMPS executable

mpirun -np $np -f ${PBS_NODEFILE} $LAM < lmp_job1.in > lmp_job1.log
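
The script assumes an input file lmp_job1.in in the submission directory. To verify the setup before running production work, a minimal test input can be generated with the snippet below; it is a small Lennard-Jones melt adapted from the standard LAMMPS in.melt example, and the file name simply matches the redirection in the script above:

cat > lmp_job1.in <<'EOF'
## minimal 3d Lennard-Jones melt, for testing only
units        lj
atom_style   atomic
lattice      fcc 0.8442
region       box block 0 10 0 10 0 10
create_box   1 box
create_atoms 1 box
mass         1 1.0
velocity     all create 1.44 87287
pair_style   lj/cut 2.5
pair_coeff   1 1 1.0 1.0 2.5
neighbor     0.3 bin
fix          1 all nve
thermo       100
run          1000
EOF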

The GPU version of LAMMPS 16 March 2018 is also compiled and available. Please use the following sample script (lmpgpu_run) to create and submit your LAMMPS GPU jobs. Note that mpiprocs is set to 2 per node so that each MPI rank drives one of the two GPUs requested per node with -pk gpu 2.

#!/bin/bash
#PBS -P project_lmp
#PBS -j oe
#PBS -N lmp_gpu

###Submit jobs to queue gpu using four GPUs on two nodes
#PBS -q gpu
#PBS -l select=2:ncpus=12:mpiprocs=2:mem=10GB

cd $PBS_O_WORKDIR;   ## this line is needed, do not delete.
np=$(cat ${PBS_NODEFILE} | wc -l);  ### number of MPI ranks (one nodefile line per rank)
####---- LAMMPS Job Execution ---
source /etc/profile.d/rec_modules.sh
module load lammps_gpu
module load cuda7.5

mpirun -np $np -f ${PBS_NODEFILE} lmp_mpi -sf gpu -pk gpu 2 < in.gpujob > out.gpujob
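
With select=2:ncpus=12:mpiprocs=2, the nodefile contains four entries (two per node), so mpirun starts four MPI ranks, and -pk gpu 2 assigns two GPUs per node, i.e. one GPU per rank. If a run misbehaves, the following optional lines can be added just before the mpirun command as a sanity check (nvidia-smi is installed with the NVIDIA driver; the output depends on the node's hardware):

echo "MPI ranks requested: $np"                  ## should print 4 for this request
nvidia-smi --query-gpu=index,name --format=csv   ## GPUs visible on this node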

Submit your batch job using the following command (use lmpgpu_run in place of lammps_run for the GPU script):

$ qsub lammps_run
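
After submission, the job can be monitored and managed with the standard PBS commands, for example:

$ qstat -u $USER      ## list your jobs (Q = queued, R = running)
$ qstat -f <jobid>    ## show full details of one job
$ qdel <jobid>        ## delete a queued or running job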

All batch jobs should be submitted from the high-performance workspace /hpctmp or /hpctmp2.
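
For example, assuming a per-user directory under /hpctmp (this layout is an assumption; adjust the path to local policy):

$ mkdir -p /hpctmp/$USER/lmp_case
$ cp lammps_run lmp_job1.in /hpctmp/$USER/lmp_case/
$ cd /hpctmp/$USER/lmp_case && qsub lammps_run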

Please contact us at nusit-hpc@nus.edu.sg if you have any queries.