[ONLINE] Introduction to High-Performance Machine Learning @SURF




Would you like to learn cutting-edge, high-performance deep learning techniques?

Our online course on high-performance machine learning provides the necessary skills to train neural networks and extract the most relevant information from datasets. During our hands-on sessions you will have the opportunity to work on our high-performance systems with different types of data, and learn how to tune your model to obtain optimal results in the most efficient way. The high-performance machine learning team at SURFsara will guide you online during the presentations and exercises and indicate how to start applying machine learning to your projects.


In this course you will:

- Understand the fundamental theory of machine learning and the intuition behind the algorithms

- Work with a high-level machine learning API (Keras)

- Explore hyperparameter space to improve a neural network

- Understand the pitfalls of classic machine learning algorithms

- Scale up large machine learning models with parallel training on a supercomputer
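The last point is the core idea behind data-parallel training as used by frameworks such as Horovod: each worker computes gradients on its own shard of the data, and the shard gradients are averaged before the shared model is updated. A minimal NumPy sketch, using a toy linear model and squared loss as illustrative choices (not taken from the course material):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))           # full dataset: 64 samples, 3 features
true_w = np.array([1.0, -2.0, 0.5])    # ground-truth weights to recover
y = X @ true_w

def gradient(w, X_shard, y_shard):
    """Gradient of the mean squared error of a linear model on one shard."""
    residual = X_shard @ w - y_shard
    return 2.0 * X_shard.T @ residual / len(y_shard)

w = np.zeros(3)
n_workers = 4
shards = np.array_split(np.arange(len(y)), n_workers)  # equal-sized shards

for step in range(200):
    # Each "worker" computes a local gradient on its own shard...
    local_grads = [gradient(w, X[idx], y[idx]) for idx in shards]
    # ...then an allreduce-style average combines them into a single update,
    # exactly as if the gradient had been computed on the full batch.
    w -= 0.1 * np.mean(local_grads, axis=0)

print(np.round(w, 3))  # should approach true_w
```

With equal-sized shards the averaged gradient is identical to the full-batch gradient, so the parallel run follows the same optimization trajectory as a single-worker run; real frameworks add efficient communication (allreduce over MPI/NCCL) on top of this idea.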


This course is for:

- Everyone interested in getting familiar with machine learning at scale, from the basics up to more advanced topics


You should have:

- Basic knowledge of statistics

- Basic knowledge of general programming. Some experience with Python and Jupyter Notebooks is desirable.

- Basic knowledge of parallel computing. No specific experience with supercomputing systems is necessary.

- Your own laptop with an up-to-date browser and a terminal emulator. Linux and macOS are the preferred operating systems, but not mandatory; Windows users are advised to download MobaXterm (portable version) as a terminal emulator.

Programme

Day 1

    • Welcome & Introduction
    • Introduction to Neural Networks
    • Hands-on: Neural Networks with MNIST
    • 10:45 AM: Coffee break
    • Neural Networks - knobs and dials
    • Hands-on: Neural Networks - hyperparameter tuning for optimizing the MNIST prediction
    • 12:00 PM: Lunch break
    • Introduction to CNNs, RNNs, and generative models
    • Hands-on: CNNs with CIFAR
    • 3:00 PM: Coffee break
    • DNN inspection and result interpretation
    • Open discussion

Day 2

    • Introduction to Parallel Computing
    • Parallel Computing for Deep Learning: basic ideas, algorithms, frameworks, and hardware bottlenecks
    • 10:45 AM: Coffee break
    • Structure of Deep Learning Frameworks: computational graph, autodiff, and optimizers
    • Hands-on: Profiling TensorFlow with TensorBoard
    • 12:30 PM: Lunch break
    • Hands-on: Data Parallelism with Horovod (CIFAR10)
    • 3:00 PM: Coffee break
    • Introduction to Hybrid parallelism
    • Open discussion