17–19 Jul 2019
EPCC
Europe/London timezone

The world’s largest supercomputers are used almost exclusively to run applications which are parallelised using message passing. The course covers all the basic knowledge required to write parallel programs using this programming model, and is directly applicable to almost every parallel computer architecture.

Parallel programming by definition involves co-operation between processes to solve a common task. The programmer has to define the tasks that will be executed by the processors, and also how these tasks are to synchronise and exchange data with one another. In the message-passing model the tasks are separate processes that communicate and synchronise by explicitly sending each other messages. All these parallel operations are performed via calls to some message-passing interface that is entirely responsible for interfacing with the physical communication network linking the actual processors together. This course uses the de facto standard for message passing, the Message Passing Interface (MPI). It covers point-to-point communication, non-blocking operations, derived datatypes, virtual topologies, collective communication and general design issues.
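For a flavour of the programming model, the sketch below shows the simplest kind of point-to-point exchange in C. This is an illustrative example rather than course material, and the payload value is arbitrary:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, number;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0 && size > 1) {
            number = 42;    /* arbitrary payload */
            MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Rank 1 received %d from rank 0\n", number);
        }

        MPI_Finalize();
        return 0;
    }

Every process runs the same executable; behaviour differs only through the rank returned by MPI_Comm_rank. The program is compiled with an MPI wrapper such as mpicc and launched on two or more processes with mpirun (or the local equivalent).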

The course is normally delivered in an intensive three-day format using EPCC’s dedicated training facilities. It is taught using a variety of methods: formal lectures, practical exercises, programming examples and informal tutorial discussions. The tutored practical sessions support the lecture material and reinforce the key concepts.

If you are not already familiar with basic Linux commands, logging on to a remote machine using ssh, and compiling and running a program on a remote machine, then we strongly encourage you to also attend the Hands-on Introduction to HPC course running immediately prior to this course.

This course is free to all academics. 

Intended Learning Outcomes

On completion of this course students should be able to:

- Understand the message-passing model in detail.
- Implement standard message-passing algorithms in MPI.
- Debug simple MPI codes.
- Measure and comment on the performance of MPI codes.
- Design and implement efficient parallel programs to solve regular-grid problems.
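On the performance outcome: MPI provides a portable wall-clock timer, MPI_Wtime, which is the natural starting point for such measurements. A minimal sketch, with the code being timed left as a placeholder:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank;
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);    /* synchronise before timing */
        t0 = MPI_Wtime();

        /* ... code to be timed ... */

        t1 = MPI_Wtime();
        if (rank == 0)
            printf("Elapsed time: %f seconds\n", t1 - t0);

        MPI_Finalize();
        return 0;
    }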

Pre-requisite programming languages

Fortran, C or C++. It is not possible to do the exercises in Java.

Pre-requisite setup

Attendees should bring their own laptop (Windows, Mac or Linux) with the required software installed.

Timetable

Day 1

09:30  Message-Passing Concepts
10:15  Practical: Parallel Traffic Modelling
11:00  Break
11:30  MPI Programs
12:00  MPI on ARCHER
12:15  Practical: Hello World
13:00  Lunch
14:00  Point-to-Point Communication
14:30  Practical: Pi
15:30  Break
16:00  Communicators, Tags and Modes
16:45  Practical: Ping-Pong
17:30  Finish

Day 2

09:30  Pi Solution
10:00  Non-Blocking Communication
10:30  Practical: Message Round a Ring
11:00  Break
11:30  Collective Communication
12:00  Practical: Collective Communication
13:00  Lunch
14:00  Virtual Topologies
14:30  Practical: Message Round a Ring (cont.)
15:30  Break
16:00  Derived Data Types
16:45  Practical: Message Round a Ring (cont.)
17:30  Finish
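To give a flavour of the Day 2 material, the Message Round a Ring practical builds towards the kind of pattern sketched below: each process passes a value to its right-hand neighbour using a non-blocking send, so that all processes can communicate at once without deadlocking. This is a sketch of the general technique, not the course’s own solution; the variable names and the summing payload are illustrative choices:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, sendbuf, recvbuf, sum = 0;
        MPI_Request request;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int right = (rank + 1) % size;           /* neighbour to send to */
        int left  = (rank - 1 + size) % size;    /* neighbour to receive from */

        /* Pass every rank's number once around the ring, accumulating a sum */
        sendbuf = rank;
        for (int step = 0; step < size; step++) {
            MPI_Isend(&sendbuf, 1, MPI_INT, right, 0, MPI_COMM_WORLD, &request);
            MPI_Recv(&recvbuf, 1, MPI_INT, left, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Wait(&request, MPI_STATUS_IGNORE);    /* safe to reuse sendbuf */
            sum += recvbuf;
            sendbuf = recvbuf;    /* forward the received value next step */
        }

        printf("Rank %d: sum of all ranks = %d\n", rank, sum);
        MPI_Finalize();
        return 0;
    }

After size steps every process has received every rank’s value exactly once, so each prints the same total, size*(size-1)/2.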

Day 3

09:30  MPI Quiz
10:15  Introduction to the Case Study
10:30  Practical: Case Study
11:00  Break
11:30  Practical: Case Study (cont.)
13:00  Lunch
14:00  Designing MPI Programs
15:00  Individual Consultancy Session
15:30  Finish

Lunch and refreshments will NOT be provided but there are places locally where these can be purchased.

Slides and exercise materials will be available shortly.

EPCC
The University of Edinburgh, Bayes Centre, 47 Potterrow, Edinburgh EH8 9BT

This course is part-funded by the PRACE project and is free to all. Please register using the online form. If you have any questions, please consult the course forum page or contact support@archer.ac.uk.