Numerical simulations conducted on current high-performance computing (HPC) systems face an ever-growing need for scalability. Larger HPC platforms provide opportunities to push the limits on the size and properties of what can be accurately simulated. Applications therefore need to process larger data sets, be it reading input data or writing results. Serial approaches to handling I/O in a parallel application will dominate performance on massively parallel systems, leaving many computing resources idle during those serial application phases.
In addition to the need for parallel I/O, input and output data are often processed on different platforms. Such heterogeneity can impose a high maintenance burden when different data representations are needed. Portable, self-describing data formats such as HDF5 and netCDF are already widely used within certain communities.
This course will start with an introduction to the basics of I/O, including relevant terminology, an overview of parallel file systems with a focus on GPFS, and the HPC hardware available at JSC. Different I/O strategies will be presented. The course will introduce the HDF5, NetCDF (NetCDF4 and PnetCDF), and SIONlib library interfaces as well as MPI-I/O. Optimization potential and best practices are discussed.
Prerequisites: Experience in parallel programming with MPI and in either C/C++ or Fortran.
Registrations will only be considered until 1 March 2018; due to the available space, the maximum number of participants is limited. Applicants will be notified whether they are accepted for participation.
Instructors: Sebastian Lührs, Dr. Michael Stephan, Benedikt Steinbusch, Dr. Kay Thust, JSC
For any questions concerning the course please send an e-mail to firstname.lastname@example.org.
Accommodation in Jülich:
Participants are responsible for booking their own hotel accommodation.
Hotel suggestions can be found on the webpage "Travel information and access to Jülich Supercomputing Centre" at JSC.