PRACE Winter School 2011, Cyprus

Nicosia, Cyprus

Description
The Winter School agenda addresses the training needs of current and prospective HPC users from both Europe and the Eastern Mediterranean. During the school, researchers and students will be trained on advanced topics such as programming models and optimization techniques, MPI, OpenMP and hybrid programming, and profiling. A refresher course on modern programming languages will also be given, as well as an introduction to performance analysis and debugging tools, among others. The program also includes lectures on emerging HPC trends and paradigms. The Winter School combines theoretical sessions with hands-on training to enable a deep understanding of the HPC topics covered. Trainees will be given access to two prototype HPC systems at CaSToRC.
    • 09:00 10:30
      Registration
    • 10:30 10:35
      Welcome
    • 10:35 10:45
      Introduction and PRACE Overview
    • 10:45 11:00
      LinkSCEEM-2 Training objectives and Opportunities
    • 11:00 11:30
      Tea Break 30m
    • 11:30 13:00
      Core Skills
    • 13:00 14:00
      Lunch Break 1h
    • 14:00 16:00
      Programming Refresh

      This tutorial session is available for those who feel they need a refresher on writing a simple C or Fortran program. The session will ensure participants can write, compile and run serial programs.

    • 16:00 16:30
      Tea Break 30m
    • 16:30 17:30
      Parallel Programming Strategies

      Abstract: "Parallel Programming Strategies" provides an overview of the choices available to an HPC code developer, grounded in currently available computing technologies. The talk touches on memory hierarchy, loop unrolling, benchmarking, domain/functional decomposition, parallel communication, etc.; it also explains how the courses of the CyI school fit together as subjects.

    • 09:00 10:30
      Introduction to Parallel Programming, MPI & Threading with OpenMP

      During the morning session we will start by discussing a few basic concepts of parallel programming. We will then move on to the Message Passing Interface (MPI), a standard widely used for distributed-memory parallel programming. After a set of hands-on exercises we will discuss the OpenMP API, which is widely adopted for shared-memory parallel programming. The next set of hands-on exercises will include examples of mixed MPI/OpenMP codes.

    • 10:30 11:00
      Tea Break 30m
    • 11:00 13:00
      MPI/OpenMP Lab
    • 13:00 14:00
      Lunch Break 1h
    • 14:00 15:30
      Hands-on: Advanced MPI programming

      In this session we will review some more advanced aspects of MPI programming. Topics discussed include blocking, non-blocking and persistent communication, as well as an introduction to MPI I/O. A hands-on session with exercises follows.

    • 14:00 15:30
      Lecture: Experiences in Application-Specific Supercomputer Design: Reasons, Challenges and Lessons Learned

      This lecture provides an overview of, and insights into, the challenges of application-specific supercomputers, with a focus on implementation and the lessons learned. We first give a short introduction to exascale computing. We then present the reasons for such systems and describe the QPACE project in some detail. In particular, we discuss the challenges arising from combining several technologies, the use of an application-optimized network processor, and the differences between the traffic patterns used.

    • 15:30 15:45
      Tea Break 15m
    • 15:45 17:15
      Introduction to PGAS (Coarray Fortran and UPC)

      This talk discusses PGAS concepts as they appear in the Fortran 2008 standard (coarrays), as well as in an extension to the C standard (UPC, or Unified Parallel C). After an introduction to the basic PGAS features, the syntax for data distribution, intrinsic functions and synchronization primitives is discussed.

    • 09:00 10:30
      Introduction to OpenCL

      This lecture will introduce the participants to the basics of OpenCL, enabling them to fully understand the approach of the open standard and its API.

    • 10:30 11:00
      Tea Break 30m
    • 11:00 13:00
      Hands-on OpenCL & Introduction to PyOpenCL

      PyOpenCL is a Python programming environment for the OpenCL API that serves as a convenient and relatively simple tool for parallel programming on heterogeneous systems. The main goal of this lecture is to provide an introduction to PyOpenCL, focusing on its advantages over the conventional approach with respect to code-development complexity.

    • 13:00 14:00
      Lunch Break 1h
    • 14:00 15:30
      Application Performance Analysis - Tools and Techniques
    • 15:30 16:00
      Tea Break 30m
    • 16:00 17:30
      Applications Workshop & Special Interest Groups
    • 09:00 10:30
      Visualization Techniques, Part 1

      The open-source visualization software VisIt will be introduced using several parts of the tutorials presented at the Supercomputing and Visualization conferences. We will go from introductory-level training to more advanced topics, including in-situ visualization. After learning how to load data and the basic principles of visualization, we will explore data queries, data expressions for derived-data evaluation, Python scripting for batch-oriented data analysis, image capture and movie making. In the more advanced session, in-situ visualization will be explained with demonstrations coupling the visualization with running simulations. In-situ visualization is seen as a new method to interface with simulations at extreme scale.

    • 10:30 11:00
      Tea Break 30m
    • 11:00 13:00
      Visualization Techniques, Part 2
    • 13:00 14:00
      Lunch Break 1h
    • 14:00 15:30
      Debugging/Advanced Tools

      HPC users today face great challenges in exploiting the full capabilities of the available hardware: software is being taken to unanticipated scales, and to unexpected architectures, which forces considerable new development for many HPC applications. The bugs that arise during such development can often only be solved with help from developer tools such as debuggers, yet these tools have also had their limits in the past and have not scaled well. This talk presents Allinea's DDT debugging tool as a solution, as the only petascale debugger and the first debugger to offer MPI debugging for HPC systems with GPUs, and shows how it enables developers to tackle bugs with unprecedented ease and with performance measured in milliseconds.

    • 15:30 16:00
      Tea Break 30m
    • 16:00 17:00
      Debugging/Advanced Tools