These courses enable users to get onto our systems, modify or create research code, and then run it. This is an introductory-level knowledge pathway, expected to be suitable for first- and second-year PhD students.

This concise, focused course introduces OpenMP, enabling you to get started quickly with parallel programming and to explore hybrid job submission using MPI and OpenMP. It begins with an overview of parallelism, covering essential concepts such as parallel paradigms, algorithm design, and the shared versus distributed memory models. These foundational topics prepare you to think in parallel and design efficient applications. The course then transitions into OpenMP-specific concepts, showing how to integrate OpenMP into your workflows, run parallel code, and leverage its advantages over low-level threading frameworks.

The program also delves into practical application writing, focusing on essential directives like parallel and for, managing shared and private variables, avoiding race conditions, and choosing loop schedules. Synchronisation mechanisms, such as barriers, locks, and atomics, are explored to maintain data integrity in complex computations. Finally, the course introduces hybrid parallelism, combining OpenMP with MPI to harness the best of both the shared and distributed memory models. By the end of the course, you’ll understand how to create, submit, and optimise hybrid applications, making it an ideal primer for researchers aiming to expand their computational capabilities.

Skill Level: Intermediate

As part of the HAI-End project, Durham University has developed this set of revision materials to support training events on performance analysis in high-performance computing. These mini-lectures are designed to help participants quickly revisit essential terminology and foundational concepts whenever needed, ensuring that everyone remains on the same page during workshops and practical sessions.

You are encouraged to explore these lectures and accompanying exercises to deepen your understanding of key topics such as the von Neumann architecture, cache memory, vectorisation, Flynn's Taxonomy, MPI, GPUs, the Roofline model, shared memory parallel paradigms, and both strong and weak scaling.

This course is designed to be flexible, allowing you to build your own personalised learning pathway. You may start from any topic of interest. Based on your choice, the accompanying knowledge graph will recommend prerequisite sessions you should be familiar with, as well as suggest subsequent sessions to continue your learning journey effectively.

By engaging with this content, you will strengthen your grasp of these fundamental building blocks, giving you a clearer and more complete context for analysing the performance of your code.

Skill Level: Beginner