[ONLINE] Parallel Programming Workshop @ BSC


Registration for this course is closed. Participants will need their own laptop. All PATC courses at BSC are free of charge.

Course Convener: Xavier Martorell

Sessions will take place on October 13th-16th and 19th-22nd, from 2 pm to 5.30 pm, and will be delivered via Zoom.

Level: Intermediate: for trainees with some theoretical and practical knowledge and some programming experience.

Advanced: for trainees able to work independently, requiring guidance only for solving complex problems.

Attendants can bring their own applications and work with them during the course for parallelization and analysis.

Prerequisites: Fortran, C or C++ programming experience. All examples in the course will be written in C.

Software requirements: Zoom (recommended), an SSH client (to connect to HPC systems), and an X server (to enable remote visual tools).

Objectives: The objectives of this course are to understand the fundamental concepts supporting message-passing and shared-memory programming models. The course covers the two most widely used programming models: MPI for distributed-memory environments and OpenMP for shared-memory architectures. It also presents the main tools developed at BSC to collect information on and analyze the execution of parallel applications, Paraver and Extrae, as well as the Parallware Assistant tool, which can automatically parallelize a large number of program structures and provides hints to the programmer on how to change the code to improve parallelization. Finally, the course deals with debugging alternatives, including the use of GDB and Totalview.

The use of OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of current compute nodes in clustered architectures is also considered. Paraver will be used along the course as the tool to understand the behavior and performance of parallelized codes. The course is taught using formal lectures and practical/programming sessions to reinforce the key concepts and set up the compilation/execution environment.


Learning Outcomes: Students who finish this course will be able to develop benchmarks and applications with the MPI, OpenMP and mixed MPI/OpenMP programming models, as well as analyze their execution and tune their behavior on parallel architectures.


All sessions run from 2 pm to 5.30 pm, with two 15-minute breaks.

Tuesday 13/10/2020
1. Introduction to parallel architectures, algorithms design and performance parameters
2. Introduction to the MPI programming model
3. Practical: How to compile and run MPI applications

Wednesday 14/10/2020
1. Introduction to Paraver: tool to analyze and understand performance
2. Practical: Trace generation and trace analysis

Thursday 15/10/2020
1. MPI: Point-to-point communication, collective communication
2. Practical: Simple matrix computations
3. MPI: Blocking and non-blocking communications
4. Practical: matrix computations with non-blocking communication

Friday 16/10/2020
1. MPI: Collectives, Communicators, Topologies
2. Practical: Heat equation example

Monday 19/10/2020
1. OpenMP Fundamentals: the fork-join model (lecture)
2. OpenMP Fundamentals: the fork-join model (hands-on)
3. OpenMP Fundamentals: the data environment (lecture)
4. OpenMP Fundamentals: the data environment (hands-on)

Tuesday 20/10/2020
1. OpenMP Work-sharing: distributing work among threads (lecture) 
2. OpenMP Work-sharing: distributing work among threads (hands-on) 
3. OpenMP Work-sharing: loop distribution (lecture) 
4. OpenMP Work-sharing: loop distribution (hands-on) 

Wednesday 21/10/2020
1. OpenMP Tasking model: basics (lecture)
   The task construct
   The taskwait directive
2. OpenMP Tasking model: basics (hands-on)
3. OpenMP Tasking model: intermediate (lecture)
4. OpenMP Tasking model: intermediate (hands-on)

Thursday 22/10/2020
1. Hybrid MPI+OpenMP
   Standard (threading level, synchronous/asynchronous MPI)
2. Practical: Heat, nbody

End of Course
