With the rapid growth of data volumes used in data analysis tasks, processing them with standard single-machine methods becomes increasingly challenging. Enter Apache Spark, a high-performance distributed computing framework that lets us tackle big-data problems by distributing the workload across a cluster of machines.
This two-day course covers the technical architecture and use cases of Spark, setting it up for your own work, best practices, and programming aspects. The first day includes an overview, architectural concepts, and programming with Spark's fundamental data structure, the Resilient Distributed Dataset (RDD). The second day focuses on Spark's SQL module, which lets you analyse data held in Spark's distributed tabular collection (DataFrames) using traditional SQL queries.
After this course you should be able to write simple to intermediate Spark programs using RDDs and DataFrames/SQL.
Basic knowledge of programming in general is recommended (ideally in Python).
Please note: this is not a regular programming course. Participants will be expected to learn emerging concepts in big data / distributed processing, which may differ substantially from the concepts of a general-purpose programming language.
Day 1, Thursday 16.11
Day 2, Friday 17.11
Apurva Nandan (CSC), Teaching Assistant: Tommi Jalkanen (CSC)
Language: English
Price: Free of charge