Writing Scalable Parallel Applications with MPI
Learning basic MPI syntax and writing small example programs is relatively straightforward, but many questions only arise when you first tackle a large-scale parallel application. Typical topics include how best to avoid deadlock, overlap communication and computation, understand performance, debug effectively, and perform parallel I/O.
This hands-on course is an opportunity to learn how best to use MPI based on the experiences of the ARCHER CSE team at EPCC.
Rather than covering advanced MPI functions, it focuses on the practicalities of using MPI effectively in large-scale parallel scientific applications. It also covers the most common mistakes and misconceptions that occur in MPI programs.
Monday December 12th
09:30 Lecture: Introduction
09:45 Lecture: MPI Quiz (i)
10:30 Practical: log on and run a test job
11:30 Lecture: MPI history and internal design
12:15 Practical: ping-pong exercise
14:00 Lecture: scaling behaviour and synchronisation issues
15:00 Practical: traffic model thought experiment
16:00 Lecture: MPI optimisation techniques
16:45 Practical: traffic model
Tuesday December 13th
09:30 Lecture: Performance modelling
09:45 Lecture: MPI Datatypes
10:15 Practical: MPI Datatypes
11:30 Lecture: Performance tools: Scalasca + Craypat
12:00 Practical: CFD model
14:00 Lecture: Communicator management
14:45 Lecture: MPI Quiz (ii)
16:00 Individual consultancy session