GPU-accelerated computing drives much of current scientific research. Offloading the compute-intensive portions of an application to an NVIDIA GPU can yield substantial performance gains. This course covers the basic aspects of GPU architectures and programming, with a focus on the parallel programming language CUDA-C, which allows maximum control of NVIDIA GPU hardware. Examples of increasing complexity are used to demonstrate the optimization and tuning of scientific applications.
Topics covered will include:
- Introduction to GPU/Parallel computing
- The CUDA programming model
- GPU libraries such as cuBLAS and cuFFT
- Tools for debugging and profiling
- Performance optimizations
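To give a flavor of the CUDA programming model listed above, the following is a minimal vector-addition sketch, not course material: each GPU thread handles one array element. It assumes unified memory (`cudaMallocManaged`) purely to keep the example short; the course may well use explicit host/device copies instead.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)              // guard: the grid may be larger than n
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified memory is accessible from both host and device.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;  // round up
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();                         // wait for the GPU

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Compiled with `nvcc` and run on a CUDA-capable GPU, this launches enough 256-thread blocks to cover all elements; the index guard handles the last, partially filled block.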
Prerequisites: Basic knowledge of Linux (e.g. make, a command-line editor, the Linux shell) and experience in C/C++
Because the number of participants is limited by the available space, registrations will only be considered until 31 March 2018. Applicants will be notified whether they have been accepted for participation.
Instructors: Dr. Jan Meinke, Jochen Kreutz, Dr. Andreas Herten, JSC; Jiri Kraus, NVIDIA
For any questions concerning the course please send an e-mail to firstname.lastname@example.org