5. Equilibrium molecular dynamics

Prepare the TPR input file based on the last frame of the Position restraints MD with grompp:

grompp -f md.mdp -p ../top/4ake.top -c ../posres/posres.pdb -o md.tpr

The md.mdp file uses temperature and pressure coupling algorithms that differ from those used in the Position restraints MD; these algorithms are known to reproduce the exact NPT ensemble distribution.
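For orientation, the coupling section of md.mdp typically looks like the sketch below; Nose-Hoover and Parrinello-Rahman are the standard choices for sampling the exact NPT ensemble, but the group names and numerical values here are only illustrative, so always check the md.mdp file provided with the tutorial:

; temperature coupling (illustrative values -- check the supplied md.mdp)
tcoupl           = nose-hoover
tc-grps          = Protein  non-Protein
tau_t            = 0.4      0.4
ref_t            = 300      300
; pressure coupling (illustrative values -- check the supplied md.mdp)
pcoupl           = Parrinello-Rahman
tau_p            = 1.0
compressibility  = 4.5e-5
ref_p            = 1.0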

You can run this simulation on saguaro and/or on your local workstation [1].

5.1. Running on saguaro

Log into saguaro (where USERNAME is your saguaro login, see also How to login to saguaro) [1]:

ssh -l USERNAME saguaro.fulton.asu.edu

and create an AdK directory in your scratch space (if possible, you should always run from a scratch directory under /scratch/$USER [4]):

cd /scratch/$USER
mkdir AdK

From your workstation, transfer the MD directory into the AdK directory in your saguaro scratch space (replace USERNAME with your login name):

scp -r MD USERNAME@saguaro.fulton.asu.edu:/scratch/USERNAME/AdK

On saguaro,

cd /scratch/$USER/AdK/MD

Run the simulation on saguaro on 32 cores for 100 ps, using the -npme 4 option [3]. Create a submission script saguaro.pbs similar to the one below [2]:

#!/bin/bash
#PBS -N AdK
#PBS -l nodes=32
#PBS -l walltime=00:20:00
#PBS -A phy598s113
#PBS -j oe
#PBS -o md.$PBS_JOBID.out
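# PBS directives above: -N sets the job name, -l nodes requests the cores
# (32, as counted on saguaro), -l walltime caps the run at 20 min, -A selects
# the account to charge, -j oe merges stdout and stderr, and -o names the
# job output file.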

# host: saguaro
# queuing system: PBS

# max run time in hours, 1 min = 0.0167
WALL_HOURS=0.333

DEFFNM=md
TPR=$DEFFNM.tpr

LIBDIR=/home/obeckste/Library

cd $PBS_O_WORKDIR

. $LIBDIR/Gromacs/versions/4.5.5/bin/GMXRC
module load openmpi/1.4.5-intel-12.1

MDRUN=$LIBDIR/Gromacs/versions/4.5.5/bin/mdrun_mpi

# -noappend because apparently no file locking possible on Lustre (/scratch)
mpiexec $MDRUN -npme 4 -s $TPR -deffnm $DEFFNM -maxh $WALL_HOURS -cpi -noappend

Submit the job:

qsub saguaro.pbs
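While the job is queued or running, you can keep an eye on it with the standard PBS status command (assuming the usual PBS/Torque tools are available on saguaro):

qstat -u $USER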

This should not take longer than 20 minutes. When the job is done (check the log file to confirm that you completed 100 ps; a quick check is sketched below), rename the output files:

mv md.part0001.gro md.gro
mv md.part0001.edr md.edr
mv md.part0001.xtc md.xtc
mv md.part0001.log md.log

(Don’t bother renaming the files if you need to perform a continuation run as described below.)
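One quick way to confirm that the full 100 ps completed is to look at the end of the log file (md.log after renaming, md.part0001.log otherwise); a finished run ends with the timing and performance statistics:

tail -n 25 md.log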

Copy the files back to the workstation (on the workstation):

scp -r USERNAME@saguaro.fulton.asu.edu:/scratch/USERNAME/AdK/MD/* MD/

and analyse locally on the workstation.
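If the transfer gets interrupted, rsync (an alternative to the scp command above, not part of the original recipe) can resume where it left off:

rsync -avP USERNAME@saguaro.fulton.asu.edu:/scratch/USERNAME/AdK/MD/ MD/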

Continuation runs

If your job ran into the time limit and was killed by the queuing system before it completed all steps, you can simply launch the simulation again from the same directory to continue the run. The checkpoint file md.cpt must be present, and you will later need all output files md.partNUMBER.EXT such as md.part0001.xtc, md.part0002.xtc, md.part0001.edr, ... Simply run the queuing script again:

qsub saguaro.pbs

The continuation works with the -cpi flag of mdrun. Unfortunately, on saguaro we also have to use the -noappend flag, which writes separate files for each continuation run (-append would append trajectories on the fly, i.e. you would only have the files md.xtc, md.edr, md.log, ... in your run directory). When a run with -noappend is complete, you have to use trjcat and eneconv to produce the final trajectory:

trjcat -f md.part*.xtc -o md.xtc
eneconv -f md.part*.edr -o md.edr
cat md.part*.log > md.log
mv md.part*.gro md.gro

Check with ls -la that the new trajectory is roughly the size of the combined parts, and only then delete the parts.
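For example, to compare the sizes side by side:

ls -la md.part*.xtc md.xtc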

Warning

Make sure that you have correctly assembled your complete trajectory. It is costly to rerun your simulation!

You can also use gmxcheck to verify that your assembled trajectories are in order.
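For example, the following prints the number of frames and the time range of each file, which should run from 0 ps to the full length of your simulation:

gmxcheck -f md.xtc
gmxcheck -e md.edr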

If you are positive that you don’t need the parts anymore, delete them:

rm md.part*.*

Finally, copy back your files to your workstation for further Analysis.

5.2. Running on your local workstation

If your workstation has a decent number of cores, or if you simply don’t mind waiting a bit longer, you can also run the simulation as usual:

mdrun -v -stepout 10 -s md.tpr -deffnm md -cpi

This will automatically utilize all available cores. The -cpi flag indicates that you want Gromacs to continue from a previous run. You can kill the job with CONTROL-C, look at the output, and then continue with exactly the same command line:

mdrun -v -stepout 10 -s md.tpr -deffnm md -cpi

(Try it out!). The -cpi flag can be used on the first run without harm. For a continuation to occur, Gromacs needs to find the checkpoint file md.cpt and all output files (md.xtc, md.edr, md.log) in the current directory.
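If you would rather not let mdrun claim every core of your workstation, you can limit the number of threads explicitly (Gromacs 4.5 thread-MPI; -nt 4 is just an example value):

mdrun -v -stepout 10 -s md.tpr -deffnm md -cpi -nt 4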

Footnotes

[1] If you are not at ASU then you are unlikely to have access to saguaro. You will have to adapt the recipe for your own supercomputer (ask a local expert for help) or just run the simulation on your workstation.
[2] If you have a login on saguaro but you are not a student of the PHY494/PHY598/CHM598 — Simulation approaches to Bio- and Nanophysics class, then you will need to change (at a minimum) the account to which your CPU-hours will be billed: change the line #PBS -A account so that you are using your research group’s account.
[3] The -npme NODES flag to mdrun is a performance optimization that only becomes important when running on larger numbers of cores (>11). In most cases you do not need to worry about it and can either set -npme 0 or simply not supply the option. For larger simulations (e.g. if you are using Gromacs for your own projects) you will want to optimize this setting with the help of the g_tune_pme utility, as sketched below.
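A minimal sketch of such a tuning run (the exact options depend on your installation, so check g_tune_pme -h first):

g_tune_pme -np 32 -s md.tpr

This benchmarks short stretches of the simulation with several -npme settings and reports the fastest one.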

For reference, with -npme 4 on 32 cores on saguaro, the performance for the AdK system was 19 ns/d or 1.2 h/ns (about 6 min for 100 ps).

[4] Scratch directories on saguaro have (nearly) unlimited space but are rigorously purged of all files older than 30 days. They are not backed up. You must copy all data off to your local workstation or lose your data.
