As stipulated by the Bavarian State Ministry of Science and the Arts on 10 March 2020, we have to cancel or postpone all upcoming on-site courses and workshops at LRZ until – as of now – 20 April 2020.
We have joined forces with VSC Vienna, HLRS Stuttgart, and RRZE Erlangen and will offer this course online on 17-19 June 2020; see https://events.prace-ri.eu/event/1009/ for details and registration.
Most HPC systems are clusters of shared-memory nodes. Such SMP nodes range from small multi-core CPUs to large many-core CPUs. Parallel programming may combine distributed-memory parallelization across the node interconnect (e.g., with MPI) with shared-memory parallelization inside each node (e.g., with OpenMP or MPI-3.0 shared memory). This course analyzes the strengths and weaknesses of several parallel programming models on clusters of SMP nodes. Multi-socket, multi-core systems in highly parallel environments are given special consideration. MPI-3.0 introduced a new shared-memory programming interface that can be combined with inter-node MPI communication. It can be used for direct neighbor accesses, similar to OpenMP, or for direct halo copies, and it enables new hybrid programming models. These models are compared with various hybrid MPI+OpenMP approaches and with pure MPI. Numerous case studies and micro-benchmarks demonstrate the performance-related aspects of hybrid programming.
Hands-on sessions are included on both days. Tools for hybrid programming, such as thread/process placement support and performance analysis, are presented in a "how-to" section. This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants. The course is a PRACE training event. It is organized by LRZ in cooperation with HLRS, RRZE, and VSC (Vienna Scientific Cluster).
Agenda & Content (preliminary)
First day

10:45 Programming Models
10:50 - MPI + OpenMP
11:30 Coffee Break
11:50 - continue: MPI + OpenMP
12:40 Practical (how to compile and start)
14:40 Practical (continued)
15:00 Practical (hybrid through OpenMP parallelization)
16:00 Coffee Break
16:20 - Overlapping Communication and Computation
16:40 Practical (taskloops)
17:20 - MPI + OpenMP Conclusions
17:30 - MPI + Accelerators
18:00 End of first day
19:00 Social Event at Gasthof Neuwirt (self-paying)
Second day

09:00 Programming Models (continued)
09:05 - MPI + MPI-3.0 Shared Memory
09:45 Practical (replicated data)
10:30 Coffee break
10:50 Practical (replicated data, continued)
11:50 - MPI Memory Models and Synchronization
13:30 - Pure MPI
13:50 - Topology Optimization
14:30 Coffee Break
14:50 Practical (application-aware Cartesian topology)
15:45 - Topology Optimization (Wrap up)
16:15 Q & A
16:30 End of second day (course)