The Hadoop ecosystem is the leading open-source platform for distributed storage and processing of big data. It is a popular choice for implementing data warehouses and data lakes. Spark has also emerged as one of the leading engines for data analytics. The Hadoop platform is available at CERN as a central service provided by the IT department.
By attending the session, participants will gain knowledge of the essential concepts needed to benefit from the parallel data processing offered by the Spark framework. The session is structured around practical examples and tutorials.
Main topics:
Architecture overview - work distribution and the roles of the driver and the workers
Computing concepts of transformations and actions (see the first sketch after this list)
Data processing APIs - RDD, DataFrame, and Spark SQL (see the second sketch after this list)
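To make the distinction between transformations and actions concrete, here is a minimal PySpark sketch (the application name and the sample numbers are invented for illustration). Transformations such as map and filter only build up a lineage of the computation, while actions such as count and collect trigger the actual distributed execution on the workers:

# Minimal sketch: lazy transformations vs. actions (sample data is hypothetical)
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("transformations-vs-actions").getOrCreate()
sc = spark.sparkContext

numbers = sc.parallelize(range(1, 11))        # RDD holding the numbers 1..10
squares = numbers.map(lambda x: x * x)        # transformation: nothing runs yet
evens = squares.filter(lambda x: x % 2 == 0)  # transformation: still lazy

print(evens.count())    # action: triggers the job, prints 5
print(evens.collect())  # action: returns [4, 16, 36, 64, 100]

spark.stop()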
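Along the same lines, the sketch below (with made-up column names and sample rows) expresses one simple aggregation with each of the three APIs listed above - RDD, DataFrame, and Spark SQL:

# Sketch of the three data processing APIs on the same (hypothetical) data
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-api-overview").getOrCreate()
sc = spark.sparkContext

rows = [("alice", 3), ("bob", 5), ("alice", 7)]

# RDD API: low-level, functional operations on Python objects
rdd_total = sc.parallelize(rows).map(lambda r: r[1]).sum()  # 15

# DataFrame API: declarative operations, optimized by the Catalyst optimizer
df = spark.createDataFrame(rows, ["user", "events"])
per_user = df.groupBy("user").sum("events")

# Spark SQL: the same aggregation expressed in SQL over a temporary view
df.createOrReplaceTempView("activity")
sql_per_user = spark.sql(
    "SELECT user, SUM(events) AS total_events FROM activity GROUP BY user"
)

per_user.show()
sql_per_user.show()
print(rdd_total)

spark.stop()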