Feb 17 – 19, 2020
Jülich Supercomputing Centre
CET timezone

The course offers the basics of analyzing data with machine learning and data mining algorithms in order to understand the foundations of learning from large quantities of data. It is especially oriented towards beginners who have no previous knowledge of machine learning techniques. The course covers general methods for data analysis with a focus on clustering, classification, and regression, including a thorough discussion of the training, validation, and test datasets required to learn from data with high accuracy. Simple application examples reinforce the theoretical course elements and illustrate problems such as overfitting, followed by mechanisms such as validation and regularization that prevent these problems.

The tutorial starts from a very simple application example in order to teach foundations such as the role of features in data, linear separability, and decision boundaries for machine learning models. In particular, the course points to key challenges in analyzing large quantities of data ('big data') in order to motivate the use of the parallel and scalable machine learning algorithms applied in the course. It targets specific challenges in analyzing large datasets that cannot be handled with the traditional serial methods provided by tools such as R, SAS, or Matlab, including challenges within the machine learning algorithms themselves, in the distribution of data, and in the validation process. The course introduces selected solutions to these challenges using parallel and scalable computing techniques based on the Message Passing Interface (MPI) and OpenMP that run on massively parallel High Performance Computing (HPC) platforms. The course ends with a more recent machine learning method, deep learning, which has emerged as a promising disruptive approach that enables knowledge discovery from large datasets with unprecedented effectiveness and efficiency.

Prerequisites:
Knowledge of job submission to large HPC machines using batch scripts; knowledge of mathematical basics in linear algebra is helpful.

Participants should bring their own notebooks (with an SSH client).

Learning outcome:
After this course, participants will have a general understanding of how to approach data analysis problems in a systematic way. In particular, the course provides insights into the key benefits of parallelization, for example during n-fold cross-validation, where significant speed-ups can be obtained compared to serial methods. Participants will gain a detailed understanding of why and how parallelization benefits a scalable data analysis process that uses machine learning methods for big data, as well as a general understanding of which problems deep learning algorithms are useful for and how parallel and scalable computing facilitates the learning process when facing big datasets. Participants will also learn that deep learning can perform 'feature learning', which has the potential to significantly speed up data analysis processes that previously required extensive feature engineering.
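The speed-up from parallelizing n-fold cross-validation comes from the fact that the folds are independent of one another. A minimal illustrative sketch of this idea, not taken from the course material, is shown below: a toy 1-D dataset and a nearest-class-mean classifier (both invented for illustration) are evaluated with 5-fold cross-validation, and the independent folds are distributed over worker processes with Python's standard library. On an HPC system the same pattern would typically map one fold to one MPI rank instead.

```python
# Illustrative sketch of parallel n-fold cross-validation.
# The dataset, the classifier, and the fold assignment are toy
# examples invented for this sketch, not course material.
from concurrent.futures import ProcessPoolExecutor

# Toy 1-D dataset: (feature value, class label)
DATA = [(0.1, 0), (0.3, 0), (0.2, 0), (0.4, 0), (0.9, 1),
        (0.8, 1), (1.0, 1), (0.7, 1), (0.25, 0), (0.85, 1)]

def evaluate_fold(fold, n_folds=5):
    """Train a nearest-class-mean classifier on all folds except
    `fold` and return its accuracy on the held-out fold."""
    train = [d for i, d in enumerate(DATA) if i % n_folds != fold]
    test = [d for i, d in enumerate(DATA) if i % n_folds == fold]
    # Class means computed from the training split only
    means = {}
    for label in (0, 1):
        vals = [x for x, y in train if y == label]
        means[label] = sum(vals) / len(vals)
    # Predict the class whose mean is closest to the test point
    correct = sum(1 for x, y in test
                  if min(means, key=lambda c: abs(x - means[c])) == y)
    return correct / len(test)

if __name__ == "__main__":
    # Each fold is independent, so all folds can run concurrently;
    # with n workers the wall-clock time approaches that of one fold.
    with ProcessPoolExecutor() as pool:
        accuracies = list(pool.map(evaluate_fold, range(5)))
    print(f"mean CV accuracy: {sum(accuracies) / len(accuracies):.2f}")
```

Because no fold depends on another's result, the serial loop over folds is replaced by a single `map` over workers; this embarrassingly parallel structure is what makes cross-validation such a natural candidate for the parallelization techniques taught in the course.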

Course slides from the last training in February 2019 can be found at

http://www.morrisriedel.de/prace-tutorial-parallel-and-scalable-machine-learning 

Application
Applicants will be notified one month before the course starts whether they have been accepted for participation.

Instructors: Prof. Dr. Morris Riedel, Dr. Gabriele Cavallaro, Dr. Jenia Jitsev, Jülich Supercomputing Centre

Contact
For any questions concerning the course please send an e-mail to g.cavallaro@fz-juelich.de.

Venue
Jülich Supercomputing Centre, Rotunda, building 16.4, room 301
Forschungszentrum Jülich, 52425 Jülich, Germany

Accommodation in Jülich:

Participants are responsible for booking their own hotel accommodation. Hotel suggestions can be found on the webpage "Travel information and access to Jülich Supercomputing Centre" at JSC.