Next-generation platform architectures will require us to fundamentally rethink our programming models and environments due to a combination of factors including extreme parallelism, data locality issues, and resilience. Within the computational sciences community, asynchronous many-task (AMT) programming models and runtime systems are emerging as a leading new paradigm.
While there are some overarching similarities between existing AMT systems, the community lacks consistent 1) terminology to describe runtime components, 2) application- and component-level interfaces, and 3) requirements for the lower level runtime and system software stacks.
This panel will engage a group of community experts in a lively discussion of the current status of AMT systems and of ideas for establishing best practices in light of requirements such as performance portability, scalability, resilience, and interoperability. Additionally, we will consider the challenges of user adoption, with a focus on productivity, which is critical given the application code rewrites required to adopt this approach.
- Robert Clay (Moderator) – Sandia National Laboratories
- Alex Aiken – Stanford University
- Martin Berzins – University of Utah
- Matthew Bettencourt – Sandia National Laboratories
- Laxmikant Kale – University of Illinois at Urbana-Champaign
- Timothy Mattson – Intel Corporation
- Lawrence Rauchwerger – Texas A&M University
- Vivek Sarkar – Rice University
- Thomas Sterling – Indiana University
- Jeremiah Wilke – Sandia National Laboratories
The panel will discuss what an open software stack should contain, what would make it feasible, and what does not look achievable at the moment. The discussion is inspired by the fact that “this time, we have time”: the hardware will not reach the market until after 2020, so we can design a software stack accordingly.
We will cover questions such as: What would the software development costs be? Which industries will migrate first? Would a killer app accelerate the process? Should we focus on algorithms that save power? How heterogeneous would/should “your” exascale system be? Is there a role for co-design on the path to exascale? Is the Square Kilometre Array (SKA) project an example to follow? Would cloud computing be possible at exascale? Who will “own” the exascale era?
- Nicolás Erdödy (Moderator) – Open Parallel Ltd.
- Pete Beckman – Argonne National Laboratory
- Chris Broekema – Netherlands Institute for Radio Astronomy
- Jack Dongarra – University of Tennessee
- John Gustafson – Ceranovo, Inc.
- Thomas Sterling – Indiana University
- Robert Wisniewski – Intel Corporation
Procuring an HPC system is the challenging process of acquiring the most suitable machine under technical and financial constraints, aiming to maximize the benefit to users’ applications and minimize risk over the system’s lifetime.
In this panel, HPC leaders will discuss and debate the key requirements and lessons learned for successful supercomputer procurements.
How do we define the requirements of the system? Is the goal to maximize capacity and capability, to assess new and future technologies, to deliver a system designed for specific applications, or to provide an all-purpose solution for a broad range of applications? Is the system just a status symbol, or must it do useful work?
This panel will give the audience an opportunity to question panelists who are involved in procuring leadership-class supercomputers, capturing their lessons learned and turning that hindsight into best practices for acquiring the most suitable HPC system.
- Bilel Hadri (Moderator) – King Abdullah University of Science and Technology
- Katie Antypas – National Energy Research Scientific Computing Center
- Bill Kramer – University of Illinois at Urbana-Champaign
- Satoshi Matsuoka – Tokyo Institute of Technology
- Greg Newby – Compute Canada
- Owen Thomas – Red Oak Consulting