The course introduces the basics of parallel programming with the message-passing interface (MPI) and OpenMP paradigms. MPI is the dominant parallelization paradigm in high-performance computing and enables one to write programs that run on distributed-memory machines, such as Puhti and Taito. OpenMP is a threading-based approach that enables one to parallelize a program within a single shared-memory machine, such as a single node of Puhti. The course consists of lectures and hands-on parallel programming exercises.
After the course the participants should be able to write simple parallel programs and parallelize existing programs with basic features of MPI or OpenMP. This course is also a prerequisite for the PTC course "Advanced Parallel Programming" in 2020.
The participants are assumed to have a working knowledge of the Fortran and/or C programming languages. In addition, fluency in a Linux/Unix environment is assumed.
Day 1, Wednesday 23.10
09:00-10:30 What is parallel computing
10:30-10:45 Coffee break
10:45-11:30 Introduction to MPI
11:30-12:00 Exercises
12:00-13:00 Lunch
13:00-14:00 Point-to-point communication
14:00-16:00 Exercises
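Day 1 closes with point-to-point communication, where one MPI task sends a message and another receives it. As an illustrative sketch only (not course material; the payload value is arbitrary), a minimal exchange in C might look like:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, message;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Run with at least two tasks, e.g. mpirun -np 2 ./a.out */
    if (rank == 0) {
        message = 42;  /* arbitrary payload for the illustration */
        MPI_Send(&message, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&message, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 received %d\n", message);
    }

    MPI_Finalize();
    return 0;
}
```

Each task runs the same program; the `rank` returned by `MPI_Comm_rank` determines which branch it executes.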
Day 2, Thursday 24.10
09:00-09:45 Collective communication
09:45-10:30 Exercises
10:30-10:45 Coffee break
10:45-11:15 User-defined communicators
11:15-12:00 Exercises
12:00-13:00 Lunch
13:00-13:45 Non-blocking communication
13:45-14:15 Exercises
14:15-14:30 Coffee break
14:30-15:15 User-defined data types
15:15-16:00 Exercises
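Day 2 opens with collective communication, where all tasks in a communicator participate in a single operation. As a hedged sketch (not course material), summing every task's rank onto task 0 with `MPI_Reduce` might look like:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, ntasks, total = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    /* Every task contributes its rank; the contributions are
       summed onto rank 0 in a single collective call. */
    MPI_Reduce(&rank, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("Sum of ranks 0..%d = %d\n", ntasks - 1, total);
    }

    MPI_Finalize();
    return 0;
}
```

Compared with hand-written point-to-point loops, a collective expresses the same communication pattern in one call and lets the MPI library choose an efficient implementation.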
Day 3, Friday 25.10
09:00-09:45 Introduction to OpenMP
09:45-10:30 Exercises
10:30-10:45 Coffee break
10:45-11:15 Work-sharing constructs and reductions
11:15-12:00 Exercises
12:00-13:00 Lunch
13:00-13:45 Synchronization
13:45-14:30 Exercises
14:30-14:45 Coffee break
14:45-15:15 Tasks
15:15-16:00 Exercises
Lecturers: Jussi Enkovaara (CSC), Sami Ilvonen (CSC)
Price: Free of charge