Compute Canada

HPC Summer School 2013 Programme


Note: This event is now concluded.

Programme

Day 1 (Tuesday, May 28, 2013) (single stream)

0900-1230 Linux Command Line: A Primer

Working with many of the HPC systems in Ontario involves using the command line. This provides a very powerful interface, but it can be quite daunting for the uninitiated. Become initiated with this course. This hands-on session will cover basic commands and scripting, and touch on some powerful constructs like regular expressions. It could be a great boon for your productivity!
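
As a small taste of the material, here is a generic pipeline of the kind the session covers (the file names are made up for illustration):

    # Show the five most frequent ERROR/FATAL messages across a set of
    # log files, using a pipeline and an extended regular expression.
    # (run*.log is a hypothetical file pattern.)
    grep -hE 'ERROR|FATAL' run*.log | sort | uniq -c | sort -rn | head -5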

Instructor: Jonathan Dursi, SciNet, University of Toronto.
Prerequisites: None.

Notes: IntroToShell.pdf
Source: IntroToShell.tgz

 

1330-1630 Introduction to High Performance Computing

This session will provide a literacy-based introduction to basic concepts of high performance computing. It is intended to be a high-level primer for those largely new to HPC, and to serve as a foundation upon which to build over the coming days. Topics will include the motivation for HPC, essential issues, problem characteristics as they apply to parallelism, and a high-level overview of parallel programming models. Strategies for running large sets of serial processes, using e.g. GNU parallel, will also be presented.
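
As a rough illustration of that serial-farming strategy (program and file names below are hypothetical), a single GNU parallel invocation can keep every core busy with independent serial jobs:

    # Run one copy of ./simulate per input file; GNU parallel starts
    # one job per core by default.  {.} is the input name without
    # its extension.
    parallel './simulate {} > {.}.out' ::: input_*.dat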

Instructor: Jonathan Dursi, SciNet, University of Toronto.
Prerequisites: None.

Notes: IntroHPC.pdf
Source: n/a

 

Day 2 (Wednesday, May 29, 2013) (dual stream)

Stream 1

0900-1230 & 1330-1630 Introduction to Shared-Memory Programming With OpenMP

This workshop introduces the OpenMP compiler directives to users who want to write programs for shared-memory parallel computers, or to convert existing serial code to parallel. No previous knowledge of parallel programming is required, but some basic background in programming is assumed. OpenMP has become the de facto industry standard for parallel programming on shared-memory machines, such as large SMP servers or multicore desktops. Examples are in Fortran and C. Here is a basic outline of the contents; a brief illustrative sketch appears after it:

  • Introduction to parallel programming on shared-memory machines
  • Thread programming and OpenMP compiler directives
  • Problems and Pitfalls of shared-memory programming
  • Loop parallelism
  • Explicit parallel regions
  • Thread synchronization
  • A quick look at OpenMP 3.0

Lectures are combined with hands-on lab exercises that are run on HPCVL cluster nodes.
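
For orientation only, and not drawn from the course materials, here is a minimal C example of the loop-parallelism style listed above; compile with an OpenMP-capable compiler (e.g. gcc -fopenmp):

    #include <stdio.h>
    #include <omp.h>

    /* Sum an array in parallel: the loop iterations are split among
       threads, and the partial sums are combined by the reduction
       clause. */
    int main(void) {
        double a[1000], sum = 0.0;
        for (int i = 0; i < 1000; i++)
            a[i] = 0.001 * i;

        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < 1000; i++)
            sum += a[i];

        printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
    }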

Instructor: Hartmut Schmider, HPCVL, Queen's University.
Prerequisites: Basic Fortran or C programming.

Notes: OpenMP_Handouts.pdf
Source: OpenMP_Code.tar

 

Stream 2

0900-1230 & 1330-1630 Programming GPUs with CUDA (Part 1)

This is an introductory course covering programming and computing on GPUs (graphics processing units), which are an increasingly common presence in massively parallel computing architectures. This two-day session will cover the most commonly used C-like programming framework: NVIDIA's CUDA-C. The basics of GPU programming will be covered, and students will work through a number of hands-on examples. Demonstrations of profiling and debugging applications running on the GPU will also be included. The structuring of data and computations to make full use of the GPU will be discussed in detail. Students should leave the course with the knowledge necessary to begin developing their own GPU applications.
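
As a rough sketch of what CUDA-C looks like (a generic vector-add example, not taken from the course materials), compiled with nvcc:

    #include <stdio.h>
    #include <stdlib.h>

    /* Each GPU thread adds one pair of elements. */
    __global__ void add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

        /* Allocate device memory and copy the inputs over. */
        float *da, *db, *dc;
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        /* Launch enough 256-thread blocks to cover all n elements. */
        add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("hc[0] = %f (expect 3.0)\n", hc[0]);
        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }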

Instructor: Pawel Pomorski, SHARCNET, University of Waterloo.
Prerequisites: C/C++ scientific programming, experience editing and compiling code in a Linux environment. Some experience with CUDA and/or OpenMP is a plus.

Notes: CUDA_day1.pdf
Source: CUDA_day1.tar.gz

 

Day 3 (Thursday, May 30, 2013) (dual stream)

Stream 1

0900-1230 & 1330-1630 Introduction to Distributed-Memory Programming With MPI

This workshop introduces the Message Passing Interface (MPI) and is directed at users who want to acquire basic skills in parallelizing code for distributed-memory clusters. No prior knowledge of MPI or other message-passing systems is required. However, some background in Unix operating systems and programming in Fortran, C, or other languages would be helpful. The following subjects will be addressed; a minimal example appears after the list:

  • MPI Basics: Programming Environments, Data Types, Communication
  • Runtime Environments
  • Parallel Principles and Programming Steps
  • Some Parallel Models
  • Memory Distribution
  • User-Defined Data Types
  • Combination of MPI with OpenMP

Lectures are combined with hands-on lab exercises that are run on HPCVL cluster nodes.
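
As promised above, here is a minimal, generic MPI program in C (not taken from the course materials); it can be compiled with mpicc and launched with mpirun:

    #include <stdio.h>
    #include <mpi.h>

    /* Every rank contributes its rank number; rank 0 prints the sum. */
    int main(int argc, char **argv) {
        int rank, size, sum = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum of ranks 0..%d is %d\n", size - 1, sum);

        MPI_Finalize();
        return 0;
    }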

Instructor: Hartmut Schmider, HPCVL, Queen's University.
Prerequisites: Basic Fortran or C programming.

Notes: MPI_Handouts.pdf
Source: MPI_Code.tar

 

Stream 2

0900-1230 & 1330-1630 Programming GPUs with CUDA (Part 2)

This is the continuation of the previous day's workshop.

Instructor: Pawel Pomorski, SHARCNET, University of Waterloo.
Prerequisites: C/C++ scientific programming, experience editing and compiling code in a Linux environment. Some experience with CUDA and/or OpenMP is a plus.

Notes: CUDA_day2.pdf
Source: CUDA_day2.tar.gz

 

Day 4 (Friday, May 31, 2013) (dual stream)

Stream 1

0900-1230 Combining MPI with OpenMP: A Double-Layer Master-Slave Model

Recent years have seen the growing availability of multi-node clusters with multi-core nodes. For such clusters, a "hybrid approach" suggests itself: combining message passing using MPI with multithreading using OpenMP to achieve optimal utilization of these resources. In this half-day course we discuss basic principles and pitfalls of combining MPI with OpenMP, and illustrate them with the example of a recently developed, freely available library that implements a "double-layer master-slave" parallel model and requires a minimum of user programming.
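
The library itself is presented in the course; purely as a generic illustration of the hybrid idea (and not of the double-layer master-slave library), a C program can initialize MPI with thread support and open an OpenMP parallel region inside each process:

    #include <stdio.h>
    #include <mpi.h>
    #include <omp.h>

    /* Typical hybrid layout: one MPI process per node, several OpenMP
       threads per process.  MPI_THREAD_FUNNELED means only the master
       thread of each process makes MPI calls. */
    int main(int argc, char **argv) {
        int provided, rank;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }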

Instructor: Gang Liu, HPCVL, Queen's University.
Prerequisites: Basic programming (C or Fortran), basic knowledge of OpenMP compiler directives or MPI message passing (e.g. courses on Days 2 and 3).

Notes: Mixed_Handouts.pdf
Source: Mixed_Code.tar

 

1330-1630 The HPCVL Working Template: A Tool For Parallel Programming

In this half-day course, we will learn how to use the HPCVL Working Template (HWT), a tool for high-performance programmers that has been developed at HPCVL since 2002. The tool supports any combination of serial, OpenMP-multithreaded, or MPI-parallelized code in Fortran 90, C, and C++.

There are three main functionalities: management of multiple versions, simple profiling, and relative debugging.

In many cases, it is necessary to keep multiple versions of code, such as serial and various parallel versions, for different platforms, or for purposes such as debugging or profiling. It is best to maintain these versions from a single original source, which can be done using preprocessor constructs. The HWT helps users maintain such source code and generate an arbitrary number of versions from it.
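
As a generic sketch of that preprocessor idea (the macro names below are illustrative, not the HWT's own), a single C source file can yield serial, OpenMP, and MPI executables depending on compile-time flags:

    #include <stdio.h>
    #ifdef USE_MPI          /* hypothetical flag, e.g. -DUSE_MPI */
    #include <mpi.h>
    #endif
    #ifdef _OPENMP          /* defined automatically by OpenMP compilers */
    #include <omp.h>
    #endif

    int main(int argc, char **argv) {
        (void)argc; (void)argv;   /* only used by the MPI build */
    #ifdef USE_MPI
        MPI_Init(&argc, &argv);
        printf("MPI version\n");
        MPI_Finalize();
    #elif defined(_OPENMP)
        printf("OpenMP version, up to %d threads\n", omp_get_max_threads());
    #else
        printf("serial version\n");
    #endif
        return 0;
    }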

The HWT can also be used to collect CPU timing data for any section of the code from multiple test runs and eventually arrange them in the form of simple timing tables together with the corresponding speedups.

Finally, users can insert calls to the HWT library to generate identifiers that label data produced by the code for debugging purposes. The labels are stored together with the data in temporary files and later checked by the HWT debugger against reference data, for instance from a serial version of the code. The data to be compared may be private, shared, duplicated, or distributed, and generated in any order. The comparison with the reference is done by the debugger without the need for user intervention.

Instructor: Gang Liu, HPCVL, Queen's University.
Prerequisites: Basic programming (C or Fortran); some knowledge of parallel programming is useful (e.g. courses on Days 1-3).

Notes: HWT_Handouts.pdf
Source: HWT_Code.tar

 

Stream 2

0900-1230 & 1330-1630 Parallel Programming with the Posix Thread Library

This workshop is for programmers and scientists with a basic background in C programming who want to increase the flexibility and responsiveness of their code and take advantage of modern multicore and multi-threaded computer architectures. It is an introduction to the Posix Thread Library and its application to the parallelization of C programs. We assume no prior knowledge of multithreading or parallel programming, but some background in Unix operating systems and programming in C will be necessary. The lectures include demonstrations of example programs on a multicore machine. The following subjects will be covered; a minimal example appears after the list:

  • Parallel Programming and Multithreading
  • The Posix Thread Library, Basics of Thread Programming
  • Creating and Manipulating Threads
  • Synchronization, Locks and Condition Variables
  • Thread-Specific Data and Destructors
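
For a first taste (a generic sketch, not the course's own examples), the C program below creates four threads that increment a shared counter under a mutex, then joins them; compile with -pthread:

    #include <stdio.h>
    #include <pthread.h>

    #define NTHREADS 4

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Each thread increments the shared counter; the mutex prevents
       a race between the concurrent updates. */
    static void *work(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t t[NTHREADS];
        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, work, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        printf("counter = %ld (expect %d)\n", counter, NTHREADS);
        return 0;
    }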

Instructor: Hartmut Schmider, HPCVL, Queen's University.
Prerequisites: Basic programming in C.

Notes: Posix_Handouts.pdf
Source: Posix_Code.tar

 

Important: Most of the workshops have a hands-on lab component that will be conducted on servers supplied by the HPC consortia. Since the lecture rooms do not provide desktop computers, please bring your laptop if you want to participate in the exercises.
