Nov 6 – 8, 2018
University of Oxford
Europe/London timezone

Given a serial problem, how would you go about splitting it, conceptually, into many different parts that could run concurrently on the latest parallel computers?

The good news is that you don't need to reinvent the wheel. Instead, there are many different approaches (called parallel patterns) that have been developed by the community and can be used in a variety of situations. These patterns apply equally well regardless of whether your problem is computational or data-driven.

Understanding and being able to apply these patterns also helps in getting to grips with existing parallel codes and in optimising poorly performing computation and data codes. Whilst the lectures take a top-down approach, focusing on the patterns themselves, the practical exercises give you the opportunity to explore the concepts by implementing pattern-based solutions to problems using common HPC technologies.

The parallel patterns that we cover (together forming a pattern language) are split into two categories.

Closest to the problem domain (and most abstract) are the parallel algorithm strategy patterns, which include:

  • Task Parallelism
  • Recursive Splitting
  • Geometric Decomposition
  • Pipeline
  • Discrete Event
  • Actors

The other category of patterns is closer to the implementation and drives how the programmer should structure their code and data. These are implementation strategy patterns, and include:

  • Master/Worker
  • Loop Parallelism
  • Fork/Join
  • Shared Data and Queues
  • Active Messaging

Patterns are described at an abstract level, and we will also discuss enhancements that can be made to improve performance and scalability, albeit at the cost of code complexity. Practical implementations of these patterns are explored in depth in the hands-on exercises.

Programming exercises use C and Fortran, with MPI and OpenMP.


Nick Brown

Nick is involved with the MSc in High Performance Computing, is the course organiser for the Parallel Design Patterns module and also supervises student dissertation projects.

Course Pre-requisites

  • Ability to program in C, C++ or Fortran
  • Familiarity with using MPI
  • Some familiarity with OpenMP is beneficial but not essential

Pre-course setup

All course delegates will need to bring a wireless-enabled laptop computer with them on the course. If you have an eduroam account, please ensure this is set up beforehand.

Practical exercises will be done using a guest account on ARCHER. You will need to set up your laptop with the required software before the course. Setup information is available for Windows, Mac and Linux.

Learning outcomes

On completion of this course students should be able to:

  • Recognise different strategies for structuring the parallelism of the specific problem at hand
  • Understand the trade-offs between different approaches to, and specialisations of, parallelisation
  • Identify the most appropriate ways to structure code and data with respect to the parallel strategy adopted

Course Materials

Course materials page


University of Oxford
Conference Room
Oxford e-Research Centre, 7 Keble Road, Oxford, OX1 3QG
This course is part-funded by the PRACE project and is free to all. Please register using the online form. If you have any questions, please consult the course forum page or contact