Compute Canada

HPC Summer School 2014 Programme


 

Programme

Day 1 (Monday, July 7, 2014) 

0900-1200 & 1300-1600 Shared-Memory Programming With OpenMP

In recent years, a “multi-core revolution” has taken place, affecting virtually every computer from large SMP machines in supercomputing facilities down to smartphones. To exploit the capabilities of such systems, a programmer needs to learn the basic principles of shared-memory parallel programming, also called “multi-threading”. Multi-threading can benefit many applications even on a single-core system, through greater responsiveness and more efficient use of modern CPUs and memory.

In this course, we offer an introduction to the OpenMP compiler directives, which enable the conversion of a serial program into a parallel one by adding directive lines (pragmas) to the code. This is arguably the easiest approach to multi-threading, as it combines flexibility and power with conceptual simplicity. The course is directed at scientists and engineers who want to use multi-threading techniques in their applications to make use of the enhanced resources offered by shared-memory parallel computers. No prior knowledge of parallel programming is required, but some background in the C or Fortran languages is assumed. The course includes short practical lab sessions to give participants an opportunity for some hands-on experience. These sessions will be conducted on dedicated resources at HPCVL.

Instructor:  Hartmut Schmider, HPCVL, Queen's University.
Prerequisites: Basic Fortran or C programming.

Notes: OpenMP-Handouts
Source: OMPcode

 

Day 2 (Tuesday, July 8, 2014) 

0900-1200 & 1300-1600 Distributed-Memory Programming With MPI
Part 1: Basic MPI 

MPI (the Message Passing Interface) is a widely used standard API for programming parallel computers, from multicore laptops to large-scale SMP servers and clusters. Its versatility and broad applicability have made it the de facto standard for high-performance distributed-memory programming.

This workshop is directed at current or prospective users of parallel computers who want to significantly improve the performance of their programs by “parallelizing” the code on a wide range of platforms. We do not require any prior background in parallel computing, but some experience with programming in either Fortran or C is useful. 

The content of the course ranges from introductory to intermediate. On the first day, we give a brief introduction to parallel programming and introduce MPI using simple examples. We outline the usage of about a dozen routines to familiarize users with the basic concepts of MPI programming. We then discuss some simple parallel models that can be programmed with this limited set of routines, as well as how data are distributed across the separate memories of the processes.

Throughout this workshop we will perform simple exercises on a dedicated cluster to apply our newly gained knowledge practically.

Instructor:  Hartmut Schmider, HPCVL, Queen's University.
Prerequisites: Basic Fortran or C programming.

Notes: MPI-Handouts
Source: MPIcode

 

Day 3 (Wednesday, July 9, 2014)

0900-1200 & 1300-1600 Distributed-Memory Programming With MPI
Part 2: Intermediate MPI 

Continuation of the previous day: we move on to more advanced topics in MPI programming, such as the definition and use of user-defined data types and parallel input/output. In the afternoon we discuss at some length how combining MPI with simple multi-threading through compiler directives (OpenMP) can dramatically improve the utilization of modern clusters.

Instructors:  Hartmut Schmider and Gang Liu, HPCVL, Queen's University.
Prerequisites: Basic Fortran or C programming.

Notes: see Day 2
Source: see Day 2

 

Day 4 (Thursday, July 10, 2014)

0900-1200 & 1300-1600 CUDA Part 1

This is an introductory course on programming and computing with GPUs (graphics processing units), an increasingly common presence in massively parallel computing architectures, using NVIDIA’s C-like CUDA programming framework. The basics of GPU programming will be covered, and students will work through a number of hands-on examples. Demonstrations of profiling and debugging applications running on the GPU will also be included. The structuring of data and computations to make full use of the GPU will be discussed in detail. Students should leave the course with the knowledge necessary to begin developing their own GPU applications.

Instructors:  Pawel Pomorski, SHARCNET, University of Waterloo and Sergey Mashchenko, SHARCNET, McMaster University.
Prerequisites: C/C++ scientific programming, experience editing and compiling code in a Linux environment. Some experience with CUDA and/or OpenMP is a plus.

Notes: summer_school_CUDA1
Source: CUDA1_code_examples

 

Day 5 (Friday, July 11, 2014)

0900-1200 & 1300-1600 CUDA Part 2

See Day 4.

Instructors:  Pawel Pomorski, SHARCNET, University of Waterloo and Sergey Mashchenko, SHARCNET, McMaster University.
Prerequisites: C/C++ scientific programming, experience editing and compiling code in a Linux environment. Some experience with CUDA and/or OpenMP is a plus.

Notes: CUDA_advanced
Source: CUDA2_east

 

Please revisit this page for updates