- Software Engineering in a Cluster Environment
- Debugging, Profiling and Optimising Serial Codes in a Cluster Environment
- Thinking In Parallel
- Introduction to Parallel Programming with OpenMP
- Introduction to Parallel Programming with MPI
This course teaches how to use three important tools to assist the development of software in a number of programming languages. The first tool is make, an open-source dependency-tracking build system. Make allows you to automate the complicated process of compiling large programs whose source is spread across multiple files.
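As a sketch of the dependency-tracking idea (the file names are illustrative), a minimal makefile for a two-file C program might look like this; make rebuilds only the targets whose prerequisites have changed:

```make
# Hypothetical layout: prog is built from main.c and utils.c
CC     = gcc
CFLAGS = -O2 -Wall

prog: main.o utils.o
	$(CC) $(CFLAGS) -o prog main.o utils.o

# Each object depends on its source file and the shared header;
# note that recipe lines must begin with a tab character.
main.o: main.c utils.h
	$(CC) $(CFLAGS) -c main.c

utils.o: utils.c utils.h
	$(CC) $(CFLAGS) -c utils.c

clean:
	rm -f prog *.o
```

Running `make` after editing only utils.c recompiles utils.o and relinks prog, leaving main.o untouched.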
The second tool is subversion (or svn) which is an open source software versioning and revision control system. Subversion allows you to keep track of changes made to source code so that you can recover older versions of your code, or examine the history of changes made to the source code over time.
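A typical day-to-day workflow looks something like the following sketch (the repository URL and file names are hypothetical):

```shell
# Check out a working copy from a (hypothetical) repository
svn checkout https://svn.example.org/myproject/trunk myproject
cd myproject

# ... edit solver.c ...
svn status            # which files have local modifications?
svn diff solver.c     # review the changes before committing

svn commit -m "Fix boundary condition in solver"
svn log -l 5          # history: the last five commits
svn update            # merge in changes committed by colleagues
```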
The third part of the course covers how to report bugs effectively and provides an understanding of how software developers manage bugs through their lifecycle to resolution.
This course teaches how to use a number of tools to assist in the debugging, profiling, and optimising of serial codes in a cluster environment. A range of typical bugs will be explored with practical examples of how to locate and eliminate them using both the GNU command-line debugger (GDB) and a graphical debugger (Allinea DDT).
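As an illustration of the command-line workflow (the program and variable names are hypothetical), a typical GDB session starts from a binary compiled with debug symbols:

```shell
gcc -g -O0 -o mycode mycode.c   # -g keeps symbols; -O0 avoids confusing reordering
gdb ./mycode
(gdb) break main        # set a breakpoint at the entry to main
(gdb) run               # start the program under the debugger
(gdb) next              # execute the current line, stepping over calls
(gdb) print x           # inspect the value of variable x
(gdb) backtrace         # after a crash, show the call stack
(gdb) quit
```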
The course includes profiling a number of serial codes in order to discover hot spots using the GNU profiler (gprof) and explores a range of optimisation strategies to improve the performance of serial codes, including code restructuring and compiler optimisations using the Intel compilers.
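The gprof workflow can be sketched as follows (source and binary names are illustrative):

```shell
gcc -pg -O2 -o mycode mycode.c   # -pg inserts profiling instrumentation
./mycode                         # run as normal; writes gmon.out on exit
gprof ./mycode gmon.out > profile.txt
less profile.txt                 # flat profile: time and call counts per function
```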
This course provides a language-agnostic overview of how to parallelise problems, setting the scene for using OpenMP and MPI. It acquaints participants with the terminology of parallel application development, outlines the issues involved in developing parallel codes, and enables them to decide on an approach for developing a parallel version of their own application.
OpenMP is a standard for writing parallel codes to run on a shared memory computer, node or multi-core chip, also referred to as multi-threading. This course introduces the OpenMP compiler directives and library routines that can be added to an existing serial code and introduces the concepts and essential syntax of OpenMP, including functionality introduced at version 3.0. The course combines both theory and practical exercises and provides an overview of performance issues.
MPI (Message Passing Interface) is the de facto standard for distributed-memory parallel programming, defining how concurrent processes can communicate and hence work together to complete a given task in a shorter time.
The course provides a detailed introduction to programming using MPI. It includes an in-depth look at point-to-point and collective communication, as well as introducing the useful topics of MPI derived data types, groups and communicators. It gives an overview of available message passing libraries. The course combines both theory and practical exercises.
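The flavour of the interface can be sketched with a short example (compiled with an MPI wrapper such as `mpicc` and launched with `mpirun`; the division of work is illustrative):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime  */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total process count    */

    int local = rank + 1;   /* each rank contributes one value */
    int total = 0;

    /* Collective communication: sum 'local' across all ranks onto rank 0 */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of 1..%d = %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```

Launched with `mpirun -np 4`, rank 0 prints the sum 1+2+3+4 = 10.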
This course demonstrates to participants how to use a range of numerical and statistical analysis features available in the NAG C/C++ and Fortran numerical libraries.
This course introduces a range of visualisation techniques used with scientific codes, showing how existing source code can be amended to produce output suitable for visualisation. Appropriate visualisation tools will be introduced, including ParaView and VisIt, and participants will be instructed in their operation.
If you would like to know more or discuss a project idea, get in touch.