Parallel Programming Languages, Libraries, Models and Notations
Time: Monday, November 13th, 8:30am - 5pm
Description: You will learn how modern supercomputers are organized and how highly parallel nodes will affect the design of your applications. Whether you use multicore, manycore, or GPU-accelerated nodes, the same basic concepts apply to your algorithm and data structure design: parallelism management and data management. For parallelism management, you will learn what kinds of parallelism can be productively exploited on different architectures. You will also learn how to write programs that use parallelism on different architectures effectively. Data management means managing the implicit or exposed memory hierarchy. On a multicore or manycore node, this includes improving cache utilization. On nodes with high bandwidth memory (manycore and GPU), data management includes minimizing traffic between the levels of the exposed memory hierarchy.
This tutorial will focus on the OpenACC API. Additional information about other programming models (OpenMP, CUDA, OpenCL, MPI) will be included where appropriate. The tutorial will include prepackaged hands-on sessions. We plan to provide access to three types of systems: x86 multicore nodes with attached GPUs, supporting both multicore and GPU programming; Power multicore nodes with attached GPUs, supporting the same; and Xeon Phi KNL manycore nodes, supporting manycore programming. This allows attendees to experiment with the different systems and port code between them.