LAMMPS on SeaWulf

Overview

LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a molecular dynamics program from Sandia National Laboratories. LAMMPS can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.

Key features of LAMMPS include:

  • Runs on single processors or in parallel using message-passing techniques
  • Distributed as an open-source code under the terms of the GPL
  • Capable of simulating millions to billions of particles
  • Support for GPU acceleration on compatible hardware
  • Extensible via user-contributed packages for specialized applications

Enabling the Software

SeaWulf provides several LAMMPS builds, each optimized for a particular node architecture and built with a particular compiler and MPI combination. Load LAMMPS using the module load command with the appropriate version:

module load lammps/<compiler>/<version>

For example, to load the Intel 2024.0 build from August 29, 2024:

module load lammps/intel/2024.0/29Aug2024

For GPU-accelerated builds:

module load lammps/a100-gpu/2Aug2023
Note: Different LAMMPS modules are available on different node architectures (Haswell, Skylake, Milan). Be sure to select a version compatible with your allocated compute nodes.
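
If you are unsure which architecture your job landed on, you can check the CPU model from inside the job. This is a generic Linux check, not a SeaWulf-specific tool:

lscpu | grep "Model name"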

Available Versions

LAMMPS is available in multiple versions compiled with different compilers and optimized for specific architectures:

Haswell Nodes

Multiple versions available, including:

  • GPU builds
  • GCC builds with different MPI implementations
  • AOCC builds for AMD processors
  • Intel builds from various release dates
  • Specialized builds (e.g., VCSGC implementation)

Skylake Nodes

  • A100 GPU Builds:
    • lammps/a100-gpu/2Aug2023
    • lammps/a100-gpu/23Jun2022
    • lammps/a100-gpu/29Sep2021
  • GCC Builds:
    • lammps/gcc13.2/mvapich2.3.7/2Aug2023
  • Intel Builds:
    • lammps/intel/2022.2/29Oct2020
    • lammps/intel/2022.2/03Aug2022
    • lammps/intel/2023.1/2Aug2023
    • lammps/intel/2024.0/29Aug2024

Milan Nodes

  • AOCC Builds:
    • lammps/aocc/4.0/2Aug2023
    • lammps/aocc/4.0/29Aug2024
  • Intel Builds:
    • lammps/intel/2023.1/2Aug2023

To see all available versions on your current architecture:

module avail lammps
Note: Version numbers typically follow a date format (e.g., 2Aug2023, 29Aug2024) indicating the release date of that LAMMPS build. Newer versions generally contain bug fixes and performance improvements.

Enabled Packages

Package Description
BUILD_TOOLS Build tools for managing and compiling LAMMPS
PKG_SMTBQ Second-moment tight-binding QEq (SMTBQ) interatomic potential
PKG_MC Monte Carlo methods for simulations
PKG_MEAM Modified Embedded Atom Method potential
PKG_VORONOI Computes Voronoi tessellations
PKG_MANYBODY Support for many-body potentials
PKG_MOLECULE Support for molecular simulations
PKG_PYTHON Enables Python scripting interface
PKG_COMPRESS Enables support for compressed file I/O
PKG_OPT Performance optimization package
PKG_KSPACE Long-range electrostatics and van der Waals interactions
PKG_QEQ Charge equilibration (QEq) method for atomic systems
PKG_REACTION Models chemical reactions by updating molecular topologies (fix bond/react)
PKG_RIGID Tools for rigid body dynamics simulations
PKG_BODY Support for complex body simulations
PKG_SPH Smoothed Particle Hydrodynamics (SPH) package
PKG_GRANULAR Granular material simulations
PKG_DIPOLE Polarization and dipole interactions
PKG_SHOCK Shockwave and impact simulations
PKG_SRD Stochastic rotation dynamics for mesoscale flows
PKG_CORESHELL Core-shell particle interactions
PKG_VORO Voronoi-based analysis tools
PKG_REPLICA Methods for replica exchange simulations
PKG_PERI Peridynamics modeling package
PKG_MISC Miscellaneous features and utilities
PKG_OPENMP Enables OpenMP parallelization
WITH_PNG Enables PNG image support
WITH_JPEG Enables JPEG image support
FFT=MKL Uses Intel MKL for FFT computations
PKG_MPIIO Enables MPI-IO support for parallel I/O
Note: Packages will vary between versions and targeted architectures. Use the command lmp_mpi -h to see the full list of packages for the loaded LAMMPS version. If you need to utilize a different package, please check the other installations before reaching out to HPC support to request a new LAMMPS build.
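
For example, to inspect the packages compiled into the Intel 2024.0 build shown earlier:

module load lammps/intel/2024.0/29Aug2024
lmp_mpi -h

The help output includes a list of the packages installed in that executable.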

Basic Usage

After loading the module, you can run LAMMPS using the following basic syntax (CPU builds provide the lmp_mpi executable, while GPU builds provide lmp_gpu):

mpirun -np <number_of_processes> lmp_mpi -in <input_file>

Key Parameters

Parameter Description
-in <file> Read input from specified file
-log <file> Write log information to specified file
-screen <file> Write screen output to specified file
-var <name> <value> Set a variable in the input script
-echo <style> Control how input script commands are echoed
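
For example, a value passed with -var can be referenced inside the input script as ${name}. A minimal sketch using a hypothetical variable named temp:

mpirun -np 40 lmp_mpi -in input.lammps -var temp 300 -log run_300K.log

Inside input.lammps, the value is then available as ${temp}, for instance in a thermostat command such as fix 1 all nvt temp ${temp} ${temp} 0.1.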

GPU Acceleration (for GPU-enabled builds)

When using a GPU-enabled build, you can control GPU usage with these parameters in your input script:

package gpu 1 neigh yes newton off

Where:

  • 1 is the number of GPUs per node to use
  • neigh yes performs neighbor list calculations on the GPU
  • newton off disables Newton's third law for pairwise interactions (the GPU package currently requires the off setting)
Important: For optimal performance on the SeaWulf cluster, match the number of MPI processes to the resources allocated in your job script. For GPU jobs, typically use 1 MPI process per GPU.
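
As an alternative to editing the input script, the same settings can be applied from the command line using the -sf (suffix) and -pk (package) switches, which recent LAMMPS versions, including those listed above, support. A sketch for a single-GPU run:

mpirun -np 1 lmp_gpu -sf gpu -pk gpu 1 -in input.lammps

The -sf gpu switch appends the gpu suffix to supported styles, and -pk gpu 1 is equivalent to the package gpu 1 command above.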

Examples

Basic CPU Run

mpirun -np 40 lmp_mpi -in input.lammps -log log.lammps

Runs a LAMMPS simulation using 40 MPI processes with the specified input file.
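
If you just want to verify that a build runs, the short Lennard-Jones melt input below (adapted from the standard melt example shipped with LAMMPS) completes in seconds. Save it as input.lammps:

# 3d Lennard-Jones melt
units           lj
atom_style      atomic
lattice         fcc 0.8442
region          box block 0 10 0 10 0 10
create_box      1 box
create_atoms    1 box
mass            1 1.0
velocity        all create 3.0 87287
pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5
neighbor        0.3 bin
neigh_modify    every 20 delay 0 check no
fix             1 all nve
thermo          50
run             250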

GPU-Accelerated Run

mpirun -np 4 lmp_gpu -in input.lammps -log log.lammps -var GPU 1

Runs a LAMMPS simulation using 4 MPI processes (typically one per GPU) with GPU acceleration. The input script should include the appropriate GPU package commands.
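
A common pattern is to let the input script itself switch GPU support on or off based on that variable. A minimal sketch, where the variable name GPU matches the command above and must be defined at launch (e.g. -var GPU 0 for a CPU run):

# near the top of input.lammps
if "${GPU} == 1" then &
  "package gpu 1 neigh yes newton off" &
  "suffix gpu"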

Example Job Script (CPU)

#!/bin/bash
#SBATCH --job-name=lammps_cpu
#SBATCH --output=lammps_%j.out
#SBATCH --error=lammps_%j.err
#SBATCH -p short-40core
#SBATCH --time=04:00:00
#SBATCH --nodes=1
#SBATCH --ntasks=40

# Load the LAMMPS module
module load lammps/intel/2024.0/29Aug2024

# Define input file
INPUT="input.lammps"
LOG="log.lammps"

# Run LAMMPS with all allocated cores
mpirun -np $SLURM_NTASKS lmp_mpi -in $INPUT -log $LOG
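
Assuming the script above is saved as lammps_cpu.slurm (the filename is arbitrary), submit it and check its status with the standard Slurm commands:

sbatch lammps_cpu.slurm
squeue -u $USER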

Example Job Script (GPU)

#!/bin/bash
#SBATCH --job-name=lammps_gpu
#SBATCH --output=lammps_%j.out
#SBATCH --error=lammps_%j.err
#SBATCH --time=2:00:00
#SBATCH --nodes=1
#SBATCH -p a100
#SBATCH --gres=gpu:4
#SBATCH --ntasks=4

# Load the LAMMPS module with GPU support
module load lammps/a100-gpu/2Aug2023

# Define input file
INPUT="input.lammps"
LOG="log.lammps" # Run LAMMPS with one process per GPU

mpirun -np $SLURM_NTASKS lmp_gpu -in $INPUT -log $LOG
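
The input script for this job should claim all four allocated GPUs. A minimal sketch of the corresponding header lines, assuming the pair styles in use have GPU-accelerated variants:

# at the top of input.lammps
package gpu 4 neigh yes newton off
suffix gpu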

Documentation

Official Documentation

Complete manual with all available commands and detailed explanations.

LAMMPS Documentation: https://docs.lammps.org

Tutorials

Step-by-step guides and examples for beginners and advanced users.

LAMMPS Tutorials: https://www.lammps.org/tutorials.html

Source Code

GitHub repository for the LAMMPS project.

GitHub Repository: https://github.com/lammps/lammps

Support

SeaWulf HPC Support

For issues related to running LAMMPS on the SeaWulf cluster, please contact the HPC support team.

LAMMPS Community Support

For questions specific to the LAMMPS software, the LAMMPS community forum is hosted on the Materials Science Discourse: https://matsci.org/lammps