Overview
Intel MPI is a high-performance, scalable implementation of the Message Passing Interface (MPI) standard used to develop parallel applications across distributed systems. On SeaWulf, Intel MPI is available both through standalone modules and as part of Intel's oneAPI toolkit, giving users the flexibility to choose the setup that best fits their application.
Available Intel MPI Modules
The following Intel MPI modules are available:
Haswell Nodes
- intel/mpi/32/2017/0.098
- intel/mpi/64/2017/0.098
- intel/mpi/64/2018/18.0.0
- intel/mpi/64/2018/18.0.1
- intel/mpi/64/2018/18.0.2
- intel/mpi/64/2019/19.0.0
- intel/mpi/64/2019/19.0.3
- intel/mpi/64/2019/19.0.4
- intel/mpi/64/2019/19.0.5
- intel/mpi/64/2020/20.0.0
- intel/mpi/64/2020/20.0.1
- intel/mpi/64/2020/20.0.2
Loading a Module
To use Intel MPI, load the version that matches your application requirements with the module load command. For example:
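For example, to load the most recent 2020 release from the list above:

```
module load intel/mpi/64/2020/20.0.2
```

Running module list afterwards confirms which modules are active.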
Make sure to load the version that fits your needs for optimal performance in your parallel applications.
Using Intel MPI with Intel oneAPI
Intel MPI is also available as part of the Intel oneAPI toolkit, which integrates a set of libraries, compilers, and tools designed for high-performance computing (HPC) and AI applications. If you're using Intel oneAPI, you can access Intel MPI as part of the toolkit.
Loading Intel oneAPI with MPI
To load the Intel oneAPI toolkit, which includes Intel MPI, load the corresponding module:
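The exact module name depends on which oneAPI releases are installed; module avail will show what is present. As a sketch (the module name below is illustrative, so substitute whatever module avail reports on SeaWulf):

```
# List oneAPI-related modules (module avail prints to stderr, hence the redirect)
module avail 2>&1 | grep -i oneapi

# Load the toolkit module; the name here is a placeholder
module load intel/oneapi
```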
This will load the oneAPI toolkit along with Intel MPI, allowing you to develop and execute parallel applications with MPI support.
Compiling with Intel MPI
Once the Intel MPI module is loaded, you can compile your MPI program with the mpiicc compiler wrapper. For example:
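A typical invocation, assuming a C source file named hello_mpi.c (both the source and executable names here are placeholders):

```
mpiicc -o hello_mpi hello_mpi.c
```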
This command compiles your program and links it against the Intel MPI library. Use the wrapper that matches your source language (mpiicc for C, mpiicpc for C++, mpiifort for Fortran), and make sure the module you loaded matches the architecture of the nodes you will run on.
Running MPI Jobs
Once your program is compiled, you can run your MPI jobs with the mpirun command. Here is an example of how to run an MPI program on multiple processors:
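For instance, to launch the hello_mpi executable built in the previous section with four processes:

```
mpirun -np 4 ./hello_mpi
```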
The -np 4 option specifies that 4 processes will run in parallel. Adjust this number based on your job's requirements and the number of available processors on your compute nodes.
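On SeaWulf, MPI jobs are normally submitted to the compute nodes through the Slurm scheduler rather than run interactively. The sketch below is a minimal batch script; the partition name, resource counts, module version, and executable are placeholders to adapt to your allocation:

```
#!/bin/bash
#SBATCH --job-name=mpi_test
#SBATCH --nodes=1
#SBATCH --ntasks=4            # matches the number of MPI processes launched below
#SBATCH -p short-28core       # placeholder partition; use one you have access to
#SBATCH -t 00:10:00

module load intel/mpi/64/2020/20.0.2
mpirun -np 4 ./hello_mpi
```

Submit the script with sbatch.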
Important Notes
Intel MPI provides optimized communication for HPC environments. Always select the appropriate version based on your application's needs to ensure maximum performance.
Be sure to verify your environment and dependencies to avoid compatibility issues. Incompatible versions of MPI or incorrect flags during compilation may lead to errors or performance degradation.
Additional Resources
If you need further assistance with Intel MPI or have any questions, please refer to the SeaWulf documentation or contact support for help with optimizing your use of Intel MPI.