Unit information: Advanced High Performance Computing in 2018/19

Unit name Advanced High Performance Computing
Unit code COMS30006
Credit points 10
Level of study H/6
Teaching block(s) Teaching Block 2 (weeks 13 - 24)
Unit director Professor McIntosh-Smith
Open unit status Not open
Pre-requisites

Students must have taken the “Introduction to High Performance Computing” companion course in TB1. This course also assumes students are competent C programmers.

Co-requisites

None

School/department Department of Computer Science
Faculty Faculty of Engineering

Description including Unit Aims

The aim of this unit is to explore advanced concepts and technologies relating to high performance computing, and to offer practical, hands-on experience with those technologies. Students completing the unit will have had an opportunity to integrate content from other units in the programme, for example by implementing high performance parallel versions of algorithms they have previously encountered. Students will be exposed to advanced trends in computer hardware that are driving development towards massive parallelism in hardware and software. They will also use the advanced feature sets of mainstream parallel programming languages, such as OpenMP, MPI and OpenCL, all in the context of a real supercomputer environment: the university’s Blue Crystal cluster.

Intended Learning Outcomes

On successful completion of this unit, students will be able to:

  • Understand state-of-the-art high performance computing technologies, and select the right one for a given task; 

  • Utilise these technologies through appropriate programming interfaces (e.g. specialist languages, extensions to standard languages, libraries, or compiler assistance);

  • Analyse, implement, debug and profile high performance algorithms as realised in software.

Specific learning outcomes will be tackled through focused coursework activities, including: 


  • Mastering shared memory multi-core parallelisation through approaches such as OpenMP 

  • Becoming experienced with advanced distributed memory parallelism concepts, through exploring message passing APIs such as MPI 

  • Learning to use cutting-edge parallel hardware, such as many-core CPUs and GPUs

  • Becoming familiar with emerging parallel programming approaches, such as parallel dialects of C++ (Kokkos, SYCL), and parallel tasking frameworks such as TBB, OpenMP 4.5 and HPX.

Teaching Information

Delivery via lectures (2 hours per week) and active learning labs (2 hours per week).

Assessment Information

100% coursework, assessed via two assignments: one weighted at 25%, the other at 75%. The assignments involve writing optimised parallel code using mainstream parallel programming languages, such as OpenMP, MPI and OpenCL. Working source code must be submitted, along with a good-quality report (2-3 pages) describing the work undertaken.

Reading and References

  • D.A. Patterson and J.L. Hennessy. Computer Organization and Design: The Hardware/Software Interface. Morgan Kaufmann, ISBN: 1-558-60604-1

  • A. Grama, G. Karypis, V. Kumar and A. Gupta. Introduction to Parallel Computing (2nd Edition). Addison Wesley, ISBN: 0201648652

  • B. Chapman, G. Jost and R. van der Pas. Using OpenMP: Portable Shared Memory Parallel Programming. MIT Press, ISBN: 0262533022

  • P. Pacheco. Parallel Programming with MPI. Morgan Kaufmann, ISBN: 1558603395

Feedback