HPC Transforms DoD, DOE, and Industrial Product Design, Development, and Acquisition
Supercomputing has been shown to enable massive reductions in product development time, significant improvements in product capability, greater design innovation in new products, and effective systems engineering implementations.
Our panelists will share their intimate knowledge of the various methods and practices by which these results have been achieved in the U.S. Departments of Defense & Energy, and in industry.
Topics will include the Digital Thread & Twin of Air Force acquisition; the development and deployment of physics-based engineering analysis and design software for military aircraft, ships, ground vehicles, and antennas; high fidelity predictive simulation of challenging nuclear reactor conditions; accessibility in the era of hacking and exfiltration; STEM education using HPC; and cultural barriers to organizational adoption of HPC-based product development.
Audience questions and contributions to the list of key enablers and pitfalls for the implementation of HPC-based product development within both government and industry will be encouraged and discussed.
- Loren Miller (Moderator) – DataMetric Innovations, LLC
- Christopher Atwood – U.S. Department of Defense High Performance Computing Modernization Program and CREATE Program
- Col. Keith Bearden – United States Air Force
- Douglas Kothe – Oak Ridge National Laboratory
- Edward Kraft – United States Air Force
- Lt. Col. Andrew Lofthouse – United States Air Force Academy
- Douglass Post – U.S. Department of Defense High Performance Computing Modernization Program and CREATE Program
Return of HPC Survivor: Outwit, Outlast, Outcompute
Back by popular demand, this panel brings together HPC experts to compete for the honor of “HPC Survivor 2015”. Following up on the popular Xtreme Architectures (2004), Xtreme Programming (2005), Xtreme Storage (2007), Build Me an Exascale (2010), and Does HPC Really Matter? (2014) competitions, the theme for this year is “HPC Transformed: How to Reduce/Recycle/Reuse Your Outdated HPC System.”
The contest is a series of “rounds,” each posing a specific question about system characteristics and how those characteristics affect the system’s transformation to new and exciting uses. After the contestants answer, a distinguished commentator furnishes additional wisdom to help guide the audience. At the end of each round, the audience votes (applause, boos, etc.) to eliminate a contestant. The last contestant left wins.
While delivered in a light-hearted fashion, the panel pushes the boundaries of how HPC can/should affect society in terms of impact, relevancy, and ROI.
- Cherri Pancake (Moderator) – Oregon State University
- Robin Goldstone – Lawrence Livermore National Laboratory
- Steve Hammond – National Renewable Energy Laboratory
- Jennifer M. Schopf – Indiana University
- John E. West – The University of Texas at Austin
HPC and the Public Cloud
Where high-performance computing collides with cloud computing, just about the only point where most interested and informed parties agree is that the overlap is incomplete, complex, and dynamic. We are bringing together stakeholders on all sides of the issue to express and debate their points of view on questions such as:
- Which HPC workloads should be running in the public Cloud? Which should not?
- How will Cloud economics affect the choices of algorithms and tools?
- How does Cloud computing impact computational science?
- Is there a line to be drawn between “Big Data” and “HPC”? If so, where?
- Will Cloud HPC encourage or discourage innovation in HPC hardware and software?
- What is it about HPC that the Cloud providers “don’t get”?
- Kevin D. Kissell (Moderator) – Google
- Jeff Baxter – Microsoft Corporation
- Shane Canon – Lawrence Berkeley National Laboratory
- Brian Cartwright – MetLife Insurance Company
- Steve Feldman – CD-adapco
- Bill Kramer – University of Illinois at Urbana-Champaign
- Kanai Pathak – Schlumberger Limited
In Situ Methods: Hype or Necessity?
Due to the widening gap between the compute (FLOPS) and I/O capacities of HPC platforms, it is increasingly impractical for computer simulations to save full-resolution computations to disk for subsequent analysis.
“In situ” methods offer hope for managing this increasingly acute problem by performing as much analysis, visualization, and related processing as possible while data is still resident in memory. While in situ methods are not new, they are presently the subject of much active R&D, though they are not yet widespread in deployment or use.
This panel examines different aspects of in situ methods, with an eye towards increasing awareness of the current state of this technology, how it is used in practice, and challenges facing widespread deployment and use. The panel will also explore the issue of whether in situ methods are really needed or useful in the first place, and invites discussion and viewpoints from the SC community.
- Wes Bethel (Moderator) – Lawrence Berkeley National Laboratory
- Patrick O’Leary – Kitware, Inc.
- John Clyne – National Center for Atmospheric Research
- Venkat Vishwanath – Argonne National Laboratory
- Jacqueline Chen – Sandia National Laboratories
Please note: these panels and the rest of the Technical Program are open to exhibitors on November 20th.