This school is aimed at researchers who wish to learn and gain practical experience in parallel programming using MPI and OpenMP in C/Fortran. While the school will cover many of the basic concepts of MPI and OpenMP (on Days 1 and 3), some intermediate to advanced features of both paradigms will also be covered (e.g. parallel I/O, one-sided communication with MPI, nested OpenMP constructs, SIMD offloads; see below). The school may therefore be of particular interest to those who already have some experience in parallel programming (e.g. those who have completed introductory courses in MPI/OpenMP).
Topics to be covered:
- HPC architectures and parallel computing concepts and paradigms.
- Message Passing Interface (MPI): basic anatomy of an MPI program, collective communication, synchronisation, point-to-point communication (blocking, non-blocking, synchronous, buffered), derived datatypes, error handling, groups, communicators and environment management, virtual topologies and neighbourhood communication, one-sided communication, binding/affinity.
- OpenMP: fundamental directives, work-sharing constructs, data clauses, synchronisation constructs, nested parallelism, dynamic threads, performance considerations, task constructs, SIMD and target directives for GPU offload.
- Pitfalls and optimisation tips for both MPI and OpenMP.
- Hybrid MPI+OpenMP parallelisation.
- Introduction to profiling tools (Intel and GNU).
- Parallel filesystems and parallel I/O.
Each participant is required to bring their own laptop to the course for the practical sessions. Course accounts on ICHEC's HPC system, Fionn, will be allocated during the course. Tea/coffee at breaks and lunches will be provided for the duration of the course.
- Megan Fisher
- Adam Ralph