This course is aimed at programmers seeking to deepen their understanding of MPI and explore some of its more recent and advanced features. We cover topics including exploiting shared-memory access from MPI programs, communicator management and neighbourhood collectives. We also look at performance aspects such as which MPI routines to use for scalability, MPI internal implementation issues, and overlapping communication and calculation.
Intended learning outcomes:
- Understanding of how internal MPI implementation details affect performance
- Techniques for overlapping communications and calculation
- Knowledge of MPI memory models for RMA operations
- Understanding of best practice for MPI+OpenMP programming
- Familiarity with neighbourhood collective operations in MPI
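To give a flavour of one of these topics, the short sketch below is an illustration written for this description (not an excerpt from the course materials). It shows a neighbourhood collective on a 1D periodic Cartesian communicator: each process exchanges a single integer with its two immediate neighbours using MPI_Neighbor_allgather.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* 1D periodic Cartesian topology: every rank has exactly two neighbours */
    int dims[1]    = {size};
    int periods[1] = {1};
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &cart);

    /* Each rank contributes its own rank number and receives one value
       from each of its two neighbours in the Cartesian topology */
    int sendval = rank;
    int recvvals[2];
    MPI_Neighbor_allgather(&sendval, 1, MPI_INT,
                           recvvals, 1, MPI_INT, cart);

    printf("rank %d received %d and %d from its neighbours\n",
           rank, recvvals[0], recvvals[1]);

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}
```

With a typical MPI installation this can be built and run with, for example, mpicc and mpirun on four processes. Compared with hand-coded point-to-point exchanges, a neighbourhood collective exposes the whole communication pattern to the MPI library, which is one of the performance themes the course explores.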
Prerequisites:
Attendees should be familiar with MPI programming in C, C++ or Fortran, for example through having attended the ARCHER2 MPI course.
Requirements:
Participants must have a laptop (not a tablet, Chromebook, etc.) running macOS, Linux or Windows, on which they have administrative privileges.
They are also required to abide by the ARCHER2 Code of Conduct.