The EuroHPC Summit Week (EHPCSW) 2020 will gather the main European HPC stakeholders from technology suppliers and HPC infrastructures to scientific and industrial HPC users in Europe. As in previous years, PRACE, the Partnership for Advanced Computing in Europe, organises the seventh edition of its Scientific and Industrial Conference (PRACEdays20) within the EHPCSW 2020. PRACEdays20 will bring together experts from academia and industry who will present their advancements in HPC-supported science and engineering. The EHPCSW 2020 will provide a great opportunity for the attendees to network.
The 2020 edition of the EuroHPC Summit Week has been cancelled.
The main organisers of the EHPCSW 2020 are the European Extreme Data & Computing Initiative (EXDCI-2), the Partnership for Advanced Computing in Europe (PRACE), and the European Technology Platform for High-Performance Computing (ETP4HPC). The European Commission (EC) will represent the EuroHPC Joint Undertaking in the organisation of the conference. The logistical organisation is supported by a local host: for the 2020 edition this will be Fundação para a Ciência e a Tecnologia - Computação Cientifica Nacional (FCT-FCCN).
Materials design at the exascale: success cases using HPC and HTC
Performance portability of legacy scientific codes on HPC architectures, co-design, and energy efficiency
In today's data-driven world, High-Performance Computing (HPC) is an emerging reference platform that drives scientific research and enables industrial innovations. This is particularly true for research in Materials Science, in which, by applying the equations of quantum mechanics in large HPC calculations, scientists are able to study and design new materials before running actual experiments, decreasing costs and enhancing performance.
MaX CoE – ‘Materials design at the eXascale’ – is devoted to enabling materials modelling, simulation, discovery and design at the frontiers of current and future pre-exascale and exascale HPC architectures.
The MaX workshop, which gathers scientists and organisations active in the field of materials modelling, aims to discuss the performance and portability of the MaX flagship codes (Quantum ESPRESSO, Yambo, Siesta, Fleur, CP2K, BigDFT and AiiDA) and recent advances in computational materials research based on quantum physics and electronic structure methods.
In particular, we will focus on:
● advances in high-performance computing for materials science,
● high throughput computing for materials discovery,
● new avenues from data analytics/artificial intelligence in materials science,
● trends in high-performance computing and codesign towards exascale,
● energy efficiency strategies in HPC systems,
● novel algorithms for first-principles simulations.
Furthermore, some success cases of materials simulations will be presented.
Machine/Deep Learning (ML/DL) and Artificial Intelligence (AI) have emerged over the last few years as disruptive technologies and are extensively used by several distinct communities to develop scientific and technological approaches targeting different application domains. Today a trend is emerging toward convergence between ML/DL, AI and HPC, with great benefits expected from this cross-fertilisation in terms of hardware, software and applications.
The aim of the workshop is to provide concrete feedback on how deeply AI has permeated the HPC ecosystem and vice versa.
The workshop will focus on the convergence between HPC and AI, encouraging analysis and discussion for a deeper understanding of why HPC and HTC need AI approaches and why AI needs HPC. To give just some examples, HPC and HTC need AI approaches:
● to infer data flows from large-scale scientific instruments, to better manage stream access and support end-to-end workflows,
● for hybrid modelling, based on combining deterministic modelling and machine learning components,
● in general, for coupling learnt models and simulation codes toward cognitive simulation,
● for in-situ and in-transit post-processing of numerical simulations, to optimise data movement and minimise energy use,
● to better exploit systems and computing centres, improving security, enabling preventive maintenance, optimising the infrastructure and developing AI-driven schedulers.
At the same time, AI needs HPC:
● to scale up the learning phase of NN/DL networks, given the huge amounts of data available,
● to develop AutoDL/AutoML networks and in general AutoAI, allowing auto-tuning of the choice of models,
● to use federated/transfer learning solutions,
● to develop new AI methods, such as eXplainable Artificial Intelligence (XAI) approaches.
The workshop will involve representatives from FETHPC projects as well as from the Centres of Excellence (CoEs) working in the area of AI and HPC convergence. Links to the CoEs will be established via FocusCoE representatives, as was done in the previous EXDCI-2 Workshop on HPDA (2019). Moreover, other initiatives such as BDVA and AI4EU will be taken into consideration due to their role in the overall European landscape on the workshop topics.
After the workshop, the organising committee will evaluate the possibility of preparing a journal contribution titled “A survey about EU key initiatives and efforts on HPC & AI convergence”, based on the gathered workshop feedback and results, as well as on the speakers' availability to join the editorial team of the manuscript.
Today's HPC system architecture is dominated by the standard CPU+GPU solution. This architecture has been effective in delivering the performance increases requested by HPC users, while challenging them to exploit the massive parallelism and heterogeneity it offers. We foresee little change in the 2020-2023 time frame, with the first exascale systems based on this approach. Afterwards, to sustain the growth in the number of operations per watt, new solutions will have to be found, as Moore's law will be fading and Dennard scaling is gone.
Progress can be made along three axes:
Most new approaches combine all three of these axes (or at least two of them). The workshop will introduce the potential technology paths and then focus on three technologies that could be integrated in HPC systems in the coming years. A discussion with the audience will conclude the workshop.
The EuroHPC state of play session will be the meeting point for participants interested in an update on the JU's activities in the past period and on what lies ahead in the coming months. It will cover both the Infrastructure and the Research and Innovation pillars of the JU. The tightly packed session includes presentations from the EuroHPC officers on the currently running procurements, the R&I calls closed in 2019, and the upcoming Work Programme 2020 calls. The chairs of the Research and Innovation (RIAG) and Infrastructure (INFRAG) Advisory Groups will share their experience and vision for EuroHPC. INFRAG will also report on its preparatory work on the definition of the Access Policy for the EuroHPC supercomputers currently under procurement. Finally, the Hosting Entities of the three pre-exascale EuroHPC systems (LUMI, MN5 and Leonardo) will provide an overview of the system architectures, the target application domains and the planned deployment timelines.
JU state of play. 2019 Call for Tenders and R&I Calls status
Upcoming calls in WP2020
RIAG state-of-play
INFRAG state-of-play
Findings of the INFRAG Access Policy Working Group
LUMI supercomputer report
Marenostrum5 supercomputer report
Leonardo supercomputer report
PRACE has recently engaged in the coordination of European HPC services and activities through a series of events and workshops. This includes access to HPC systems, HPC user support, training in HPC, HPC policy, HPC technology development, HPC operations and dissemination.
The objective of this session is to present and discuss the final conclusions of this initiative with the key stakeholders. The conclusions from this session will be used to structure the new "HPC in Europe" services portal that will collect the results from this initiative, with a special focus on User Support and Training.
Answering the policies of the European Commission ('High Performance Computing: Europe's place in a Global Race', followed by 'European Cloud Initiative - Building a competitive data and knowledge economy in Europe'), the European HPC stakeholders have recently engaged in the coordination of their HPC services and activities, with the objective of setting the role and responsibilities of each major European HPC actor with regard to access to HPC systems, HPC user support, training in HPC, HPC policy, HPC technology development, HPC operations and dissemination. To this end, a working group led by PRACE distributed a self-evaluation survey to more than 80 institutions, in order to identify their services and competences in HPC. The first analysis of the results was presented and discussed during the EuroHPC Summit Week 2019 in Poznan.
In this session, the final conclusions of this working group will be presented and discussed, including a proposal for the boundaries and responsibilities in support to HPC users and training in HPC among PRACE, the European Centres of Excellence in HPC and other actors. PRACE will organise this BoF and invite the relevant stakeholders to contribute.
The ultimate goal of this session will be to analyse collectively the conclusions of the working group, which will define the structure of the new "HPC in Europe" services portal. The new HPC in Europe portal will provide a framework to collect the complete catalogue of HPC services throughout Europe, following a user-driven and audience-oriented approach. A white paper collecting the conclusions of this BoF will be developed and published to share the results with the stakeholders.
Agenda:
1) The European HPC Ecosystem Follow-Up Report – Approval of this document
2) Live demo of the HPC in Europe portal
3) Discuss improvements
4) Collect new ideas
The current versions of the European HPC Ecosystem Follow-Up Report and the HPC in Europe portal will be provided in advance to support the discussion, and feedback will be requested.
A short session, part of the collaboration activities between HiPEAC, Eurolab4HPC, the PRACE university programme and EXDCI, to help students from the poster session present their research. In this session, students will be briefly introduced to how to prepare and structure a short pitch and will practise presenting their research to different audiences.
Pre-requisites: no technical pre-requisites, and no poster needed. Just bring your best mood to practise, discuss and contribute.
The PRACE User Forum provides a communication channel between PRACE and the researchers and users involved in PRACE computational projects. Its aim is to identify generic issues and needs that users encounter during all steps related to computational projects awarded by PRACE. The yearly general assembly is held during PRACEdays.
Europe will soon have a new infrastructure of accelerator-based pre-exascale machines. The challenge is to assist in transitioning the many research groups in Europe that deliver excellent science and development for industry but rely on traditional multi-core architectures. In the open session, we will give a user perspective on HPC and the challenges that we see in the near future, and provide a forum where users of the PRACE infrastructure can voice their opinions and bring up issues. The open discussion will also include a discussion of the Peer Review process by Maria Grazia Giuffreda (tbc) and Oriol Pineda (tbc).
Learn how to use TensorFlow and Keras to build your own Deep Neural Networks (DNNs) and train them in the HPC realm. We will explain in a nutshell how DNNs work and provide hands-on examples you can use after the course as a starting point for your own projects.
Furthermore, we will discuss the differences in training on different HPC architectures (CPU and GPU). We will also give you an overview of the metrics used to evaluate the performance of DNNs, and share best-known methods for preparing training data sets for best performance.
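To give a flavour of the hands-on material, the following is a minimal, self-contained sketch of the typical Keras workflow (build, compile, fit, evaluate) on synthetic data; it is not taken from the course material, and the network and data are illustrative assumptions only:

```python
# Minimal sketch (not course material): a small dense network trained on
# synthetic binary-classification data, showing the usual Keras workflow.
# TensorFlow transparently uses a GPU if one is visible, so the same script
# runs on both CPU-only and GPU-accelerated HPC nodes.
import numpy as np
from tensorflow import keras

# Synthetic data standing in for a real training set: the label depends
# only on the first two of 20 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(512, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])  # accuracy: one common DNN evaluation metric
model.fit(X, y, epochs=10, batch_size=64, verbose=0)

loss, acc = model.evaluate(X, y, verbose=0)
print(f"training accuracy: {acc:.2f}")
```

The same build/compile/fit pattern carries over to larger models; what changes on HPC systems is mainly the device placement, data pipeline and batch-size tuning discussed in the course.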
In this workshop you will learn the basics of OpenACC, a directive-based programming model for GPU acceleration. Discover how to accelerate the performance of your applications beyond the limits of CPU-only programming with simple pragmas. You will learn:
How to profile and optimise your CPU-only applications to identify hot spots for acceleration
How to use OpenACC directives to GPU accelerate your codebase
How to optimise data movement between the CPU and GPU accelerator
Upon completion, you will be ready to use OpenACC to GPU accelerate CPU-only applications. The lectures are interleaved with many hands-on sessions using Jupyter notebooks on fully configured GPU-accelerated cloud resources.
The workshop is co-organised by LRZ, PRACE and NVIDIA Deep Learning Institute (DLI).
This session is meant to raise awareness and to exchange experiences and opinions. Instead of trying to come up with superficial or cosmetic solutions, the idea is to make people think and discuss. Diversity is about ideas, not about demographic characteristics. Inclusivity is about respect and openness. In addition to focusing on diversity, organisations need to create inclusive environments in which employees feel comfortable bringing their authentic selves to work, for instance by strategically using language that says “welcome, you belong here”, not “enter at your own risk” or “those who enter here, abandon all authenticity”.
Has your organisation embraced diversity (also known as a colour-blind ideology) or strongly embraced differences (also known as a multicultural ideology)? Is it enough to be respectful and open, or does a safe working environment, where everyone can be their authentic self (and feel “at home”), need more than that?
Moderator: Marjolein Oorsprong, PRACE aisbl
The last 20 minutes of the session will be used to do a shoe shuffle discussion, where the panel members react to statements by moving to the left (disagree) or right (agree) of the discussion leader. They are then asked to comment on their “movement” and in reaction to this the others can then move again. The audience can participate after every statement. The discussion leader can ask members of the audience to comment as well. The idea is not to have an “adversarial discussion” (some are right, others are wrong), but a discussion that allows “argument repair” where the proponents of a position revise their argument in response to criticism. We are not looking for truth, we are looking for wisdom.
Several high-impact phenomena require on-demand execution of HPC applications under strong time constraints. Examples include, among others, the effects of geohazards (e.g. propagation of tsunami waves, earthquakes, volcanic eruptions), atmospheric or ocean toxic dispersal (e.g. accidental nuclear release, pathogen emission), or extreme weather events (e.g. triggering large floods). Prompt reaction to these scenarios requires tier-0 computing infrastructures, complicated data workflows, and engagement with stakeholders formally involved in emergency management (e.g. the European Emergency Response Coordination Centre, ERCC) through shared protocols and policies. Interest in setting up Urgent Computing (UC) services in Europe is growing through the ChEESE CoE, PRACE, and the recommendations of the EuroHPC INFRAG Access Policy Group.
Following the workshop on Urgent HPC held during SC19, we would like to organise a similar one with European HPC stakeholders, based on the need for these services in Europe. This half-day workshop will have around 5-6 talks of 20 minutes each, including contributions from other European HPC-related initiatives such as the ESiWACE CoE and the VESTEC and Lexis projects. The event will finish with an interactive panel discussion on the need for HPC in urgent decision making. The feedback will be reported to the EC in a joint report or common white paper by the various parties involved in this European workshop.
PRACE recently engaged in the coordination of European HPC services and activities through a series of events and workshops. This includes access to HPC systems, HPC user support, training in HPC, HPC policy, HPC technology development, HPC operations and dissemination. The objective of this session is to present and discuss the final conclusions of this initiative with the key stakeholders, including the new "HPC in Europe" services portal, arising from these coordination efforts.
Answering the policies of the European Commission ('High Performance Computing: Europe's place in a Global Race', followed by 'European Cloud Initiative - Building a competitive data and knowledge economy in Europe'), the European HPC stakeholders have recently engaged in the coordination of their HPC services and activities, with the objective of setting the role and responsibilities of each major European HPC actor with regard to access to HPC systems, HPC user support, training in HPC, HPC policy, HPC technology development, HPC operations and dissemination. To this end, a working group led by PRACE distributed a self-evaluation survey to more than 80 institutions, in order to identify their services and competences in HPC. The first analysis of the results was presented and discussed during the EuroHPC Summit Week 2019 in Poznan. A follow-up session was held in the form of a BoF during SC19.
In this session, the final conclusions of this working group will be presented and discussed, including a proposal for the boundaries and responsibilities in support to HPC users and training in HPC among PRACE, the European Centres of Excellence in HPC and other actors. The ultimate goal of this session will be to analyse collectively the conclusions of the working group and to promote the utilisation of the new HPC in Europe portal as a framework to collect the complete catalogue of HPC services throughout Europe, following a user-driven and audience-oriented approach.
The CUDA computing platform enables the acceleration of CPU-only applications to run on the world’s fastest massively parallel GPUs. Experience C/C++ application acceleration by:
Accelerating CPU-only applications, and refactoring them to run in parallel on GPUs
Utilizing essential CUDA memory management techniques to optimise accelerated applications
Exposing accelerated application potential for concurrency and exploiting it with CUDA streams
Leveraging command line and visual profiling to guide and check your work
Upon completion, you will be able to accelerate and optimise existing C/C++ CPU-only applications using the most essential CUDA tools and techniques. You will understand an iterative style of CUDA development that will allow you to ship accelerated applications fast. The lectures are interleaved with many hands-on sessions using Jupyter notebooks on fully configured GPU-accelerated cloud resources.
The workshop is co-organised by LRZ, PRACE and NVIDIA Deep Learning Institute (DLI).
Learn how to use the most important Machine Learning (ML) methods scikit-learn offers, such as classification, clustering, regression, dimensionality reduction and visualisation.
For every category, we will provide hands-on examples from different problem domains. We also visualise and explain the most important ML methods to help you decide where to use them in your own projects. Last but not least, we show you what is needed to run ML with scikit-learn efficiently on HPC architectures.
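As an illustration of the uniform scikit-learn API used throughout the course (this sketch, with its choice of dataset and estimators, is our own and not taken from the course material), three of the method categories above can be exercised in a few lines:

```python
# Minimal sketch (not course material): classification, clustering and
# dimensionality reduction on the classic iris dataset, using scikit-learn's
# uniform fit/predict/transform estimator API.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classification: fit on the training split, score on the held-out split.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
test_score = clf.score(X_test, y_test)

# Clustering: unsupervised, so the labels are ignored.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Dimensionality reduction to 2 components, e.g. for visualisation.
X_2d = PCA(n_components=2).fit_transform(X)

print(f"test accuracy: {test_score:.2f}, reduced shape: {X_2d.shape}")
```

Every estimator follows the same fit/predict (or fit/transform) pattern, which is what makes swapping methods between categories straightforward.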
The session provides an overview of the Fenix architecture, available and upcoming resources, as well as the resource access mechanisms. Details on how to request resources will also be provided. The session will show that Fenix enables science, as highlighted by lightning talks from users of the Fenix infrastructure.
Scientific Keynote
Panel Brief
Scientific and industrial research is taking ownership of the revolution, started a decade ago by online enterprises such as Google and Facebook, of data-centric discovery as a complement to the simulation-centric approach traditional to the HPC community. While data processing has always been at the heart of scientific discoveries relying on large-scale instruments, the need to increase the performance of the associated data processing with extreme computing, and the interdependence of accurate simulations and efficient design of experiments, have become more pressing. The advent of data science, resulting from the accessibility of huge new data sets of unprecedented detail, coming from traditional research as well as from new sources such as the Internet of Things, together with the ability to extract information from these efficiently, has also extended the scientific communities benefitting from the use of large cyberinfrastructures to include, for instance, the social sciences and humanities. This convergence of interests comes with the challenge of providing an infrastructure that is suited for this wider range of research topics and that supports new discoveries efficiently. Architectures of exascale computers relying on computing accelerators are a key element in this evolving landscape, providing, for instance, efficient platforms for AI-supported research. The challenge for the decade to come is to enable researchers to leverage the value of data from the edge, where a significant part of the data needs to be collected, curated and filtered, to the data centre, where it can be further processed at extreme scale. This is associated with the need to train researchers in new transverse disciplines such as AI, machine learning and data mining, and to provide them with tools that manage the associated data logistics and implement large-scale workflows across the future computing continuum.
The panel will discuss the evolution towards a full blending of traditional simulation-centric research with the new data-centric paradigms for scientific and industrial discovery and innovation, and what this will look like 10 years from now.
Panel Moderator
Panel Members
PRACE Ada Lovelace Award for HPC
Other awards to be confirmed