
Unit information: High Performance Computing (Teaching Unit) in 2020/21

Please note: you are viewing unit and programme information for a past academic year. Please see the current academic year for up-to-date information.

Unit name High Performance Computing (Teaching Unit)
Unit code COMS30053
Credit points 0
Level of study H/6
Teaching block(s) Teaching Block 2 (weeks 13 - 24)
Unit director Professor McIntosh-Smith
Open unit status Not open
Pre-requisites

COMS10016 Imperative and Functional Programming and COMS10017 Object-Oriented Programming and Algorithms I or equivalent.

COMS10015 Computer Architecture or equivalent.

COMS20007 Programming Languages and Computation or equivalent.

COMS20008 Computer Systems A and COMS20012 Computer Systems B or equivalent.

COMS20010 Algorithms II or equivalent.

Strong programming skills, experience with the C programming language, and good knowledge of computer architecture.

Co-requisites

EITHER Year 3 undergraduate students must choose the Assessment Unit High Performance Computing

OR M-level students must choose the Assessment Unit High Performance Computing.

Please note, COMS30053 is the Teaching Unit for High Performance Computing. Students can take this unit in either their third or fourth year, and must also choose the Assessment Unit for their year group.

School/department School of Computer Science
Faculty Faculty of Engineering

Description including Unit Aims

The aim of this unit is to introduce and explore exciting technologies relating to high performance computing, and to offer practical, hands-on experience with those technologies. Students completing the unit will have learned how to develop fast, efficient applications on the very latest advanced processors, including many-core CPUs and GPUs. Students should also have had an opportunity to integrate content from other units in the programme, for example by implementing high performance parallel versions of algorithms encountered previously.

Students will be exposed to the underlying trends in computer hardware that are driving development towards massive parallelism in hardware and software. They will employ widely used parallel programming languages and tools, such as OpenMP, MPI, OpenCL, debuggers and profilers, all in the context of a real supercomputer environment: the university’s multi-million-pound Blue Crystal cluster.
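
For illustration only (this is not part of the unit materials), the following is a minimal sketch of the kind of shared-memory OpenMP parallelism the unit covers; the problem size and loop body are arbitrary assumptions.

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    /* Minimal shared-memory example: fill a vector in parallel.
       Compile with, e.g., gcc -fopenmp -O2 example.c */
    int main(void)
    {
        const int n = 1 << 20;              /* arbitrary problem size */
        double *a = malloc(n * sizeof(double));
        if (!a) return 1;

        #pragma omp parallel for            /* iterations split across threads */
        for (int i = 0; i < n; i++)
            a[i] = 2.0 * i;

        printf("max threads: %d, a[n-1] = %f\n",
               omp_get_max_threads(), a[n - 1]);
        free(a);
        return 0;
    }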

Intended Learning Outcomes

On successful completion of this unit, students will be able to:

  1. Understand state-of-the-art high performance computing technologies, and select the right one for a given task.
  2. Utilise these technologies through appropriate programming interfaces (e.g., specialist languages, additions to standard languages, or via libraries or compiler assistance).
  3. Analyse, implement, debug and profile high performance algorithms as realised in software.
  4. Understand how to optimise serial code on modern high-performance processors.
  5. Understand shared-memory multi-core parallelisation through approaches such as OpenMP.
  6. Develop massively parallel applications using the message passing parallel programming paradigm through the use of APIs such as MPI (a brief sketch follows this list).
  7. Use software tools such as debuggers, profilers, etc.
  8. Use cutting-edge parallel hardware, such as many-core CPUs and GPUs.
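
To illustrate outcome 6, here is a minimal message-passing sketch (again, not part of the unit materials); the use of MPI_Reduce and the rank-sum computation are assumptions chosen for brevity.

    #include <stdio.h>
    #include <mpi.h>

    /* Minimal message-passing example: every rank contributes its rank number
       and rank 0 receives the global sum.
       Compile with mpicc, run with e.g. mpirun -np 4 ./example */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int local = rank, total = 0;
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks 0..%d = %d\n", size - 1, total);

        MPI_Finalize();
        return 0;
    }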

In addition, students taking the unit at M-level will be able to:

  1. Understand advanced parallel programming approaches, such as parallel dialects of C++ (Kokkos, SYCL), and parallel tasking frameworks such as TBB, OpenMP 4.5 and HPX.
  2. Develop heterogeneous parallel programs, employing more than one type of processor at once (see the sketch below).
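
As a hedged illustration of the heterogeneous outcome above, one possible approach is OpenMP target offload, in which a loop is sent to an accelerator (e.g. a GPU) when one is available and otherwise falls back to the host CPU; the reduction computed here is an arbitrary example.

    #include <stdio.h>
    #include <omp.h>

    /* Sketch of heterogeneous execution: the loop may be offloaded to a device
       such as a GPU; without a device it runs on the host CPU. */
    int main(void)
    {
        const int n = 1 << 20;
        double sum = 0.0;

        #pragma omp target teams distribute parallel for map(tofrom:sum) reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += 0.5 * i;

        printf("devices visible: %d, sum = %f\n", omp_get_num_devices(), sum);
        return 0;
    }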

Teaching Information

Teaching will be delivered through a combination of synchronous and asynchronous sessions, including lectures, practical activities and self-directed exercises.

Assessment Information

Coursework (100%) at appropriate levels for Year 3 and M-level students.

Reading and References

  • Patterson, D.A. and Hennessy, J.L., Computer Organization and Design: The Hardware/Software Interface (Morgan Kaufmann, 2016) ISBN: 978-0128017333
  • Kumar, Vipin, et al., Introduction to Parallel Computing, 2nd Edition (Addison Wesley, 2003) ISBN: 978-0201648652
  • Chapman, Barbara, Jost, Gabriele and Van der Pas, Ruud, Using OpenMP: Portable Shared Memory Parallel Programming (MIT Press, 2007) ISBN: 978-0262533027
  • Mattson, Tim, He, Helen and Koniges, Alice, OpenMP Common Core: Making OpenMP Simple Again (MIT Press, 2019)
  • van der Pas, Ruud, Stotzer, Eric and Terboven, Christian, Using OpenMP – The Next Step (MIT Press, 2017)
  • Pacheco, Peter, Parallel Programming with MPI (Morgan Kaufmann, 1996) ISBN: 978-1558603394

Feedback