We are Wales' national supercomputing provider. Our distributed network of computer systems and clusters provides resilience, security, and a scalable, easily accessible, world-class resource for businesses and researchers.

We have two large compute hubs in South Wales and a number of Tier 1 and Tier 2 sites at universities and business centres across Wales. All systems are served by a dedicated network delivered over the Public Sector Broadband Aggregation (PSBA), featuring a 10 Gbit/s link between the two hubs and 1 Gbit/s and 100 Mbit/s links connecting all other sites.

Take a virtual tour of our technology with our interactive system map.

Cardiff Hub

The Cardiff hub is one of HPC Wales' two primary hubs and is located in the Advanced Research Computing at Cardiff (ARCCA) data centre at Cardiff University. It provides HPC Wales users with:

  • A mixed-mode High Throughput Computing (HTC) cluster (~2,000 Westmere cores)
  • A capacity system (~6,000 Sandy Bridge cores)
  • Dedicated ‘fat’ nodes / large-memory systems
  • A GPU system
  • A parallel, Lustre-based file system and long-term NFS storage
  • Test and development systems

High Throughput Computing (HTC) System

Best suited to: workloads made up of many small, independent jobs that run concurrently with little or no communication between them, rather than a single tightly coupled parallel computation. A sketch of this pattern follows the node specification below.

162 Nodes (BX922 dual-processor nodes)

  • Windows and Linux operating systems
  • 12 cores and 36GB memory per node
  • Intel Xeon Westmere X5650
    • 6-core, 2.67 GHz, 3GB/core
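
As a minimal sketch of this pattern (not HPC Wales' actual submission conventions), the C program below is one of many identical, independent tasks launched by a scheduler; the TASK_ID variable and the file names are illustrative assumptions:

    /* Hypothetical high-throughput task: many independent copies of this
     * program run concurrently, one per scheduler task. TASK_ID is an
     * assumed job-array index; the real variable name depends on the
     * scheduler in use. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *id = getenv("TASK_ID");   /* assumed array-index variable */
        int task = id ? atoi(id) : 0;

        /* Each task reads its own input and writes its own output;
         * no communication with other tasks is required. */
        char in[64], out[64];
        snprintf(in,  sizeof in,  "input.%d.dat",  task);
        snprintf(out, sizeof out, "output.%d.dat", task);
        printf("task %d: %s -> %s\n", task, in, out);
        return 0;
    }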

Cardiff Capacity System

Best suited to: workloads requiring high parallel scalability and performance (large numbers of processors, large memory, storage, and a very high speed network) with extended run times. An MPI sketch follows the node specification below.

384 Nodes (CX250S1 dual-processor nodes)

  • 16 cores and 64GB memory per node
  • Intel Xeon Sandy Bridge E5-2670
    • 8-core, 2.6 GHz, 4GB/core
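
For illustration, here is a minimal MPI program in C of the kind the capacity system targets: every rank computes a partial result and all ranks synchronise across the interconnect. It is a sketch only; compile with an MPI wrapper such as mpicc and launch one rank per core with mpirun:

    /* Minimal tightly coupled parallel job: each rank owns a slice of
     * the problem and all ranks combine their results collectively. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank contributes a partial value; MPI_Allreduce combines
         * them on every rank in one collective step over the network. */
        double local = (double)rank, global = 0.0;
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks = %g\n", size, global);

        MPI_Finalize();
        return 0;
    }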

Large Memory Systems

Best suited to: workloads that require a high amount of memory per processor core, but with a correspondingly low processor core requirement (a brief sketch follows the node specifications below).

1 Node (RX900 X7550 node)

  • 64 cores and 512GB memory per node
  • Intel Xeon Nehalem X7550
    • 8-core, 2.00 GHz, 8GB/core

4 Nodes (RX600 dual-processor nodes)

  • 16 cores and 128GB memory per node
  • Intel Xeon Nehalem X7550
    • 8-core, 2.00 GHz, 8GB/core
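
As a brief sketch of the workloads these nodes serve, the C program below holds its entire dataset in memory on a single node rather than partitioning it across many smaller ones; the size used is purely illustrative:

    /* Memory-bound sketch: keep the whole dataset resident in RAM.
     * 100ULL << 30 bytes (~100GB) is an illustrative size, chosen to
     * fit a 128GB or 512GB node but not a standard 64GB one. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t bytes = 100ULL << 30;
        double *data = malloc(bytes);
        if (data == NULL) {
            fprintf(stderr, "allocation failed; a large-memory node is needed\n");
            return 1;
        }
        size_t n = bytes / sizeof *data;
        for (size_t i = 0; i < n; i++)    /* touch every element in place */
            data[i] = (double)i;
        printf("held %zu doubles in memory\n", n);
        free(data);
        return 0;
    }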

GPU Systems

Best suited to: workloads that would benefit from increased numbers of processors / cores but with a reduced need for parallel communication (very large numbers of cores available on general-purpose graphics processing unit (GPU) cards). A device-query sketch follows the node specification below.

16 Nodes (CX270 dual-processor nodes)

  • 16 cores and 64GB memory per node
  • Intel Xeon Sandy Bridge E5-2670
    • 8-core, 2.6 GHz, 4GB/core
  • Tesla M2090 GPU
    • 1 Fermi GPU (512 CUDA cores)
    • 6GB memory, 177 GB/s memory bandwidth
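
As an illustrative check of what such a node exposes, the plain C program below queries the GPU through the CUDA runtime API; the only assumption is a CUDA installation (compile with nvcc, or with a C compiler pointed at the CUDA headers and linked with -lcudart). On the M2090, 16 Fermi multiprocessors of 32 CUDA cores each account for the 512 cores listed above:

    /* Query the node's GPU via the CUDA runtime API. */
    #include <stdio.h>
    #include <cuda_runtime_api.h>

    int main(void)
    {
        struct cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
            fprintf(stderr, "no CUDA device found\n");
            return 1;
        }
        printf("%s: %d multiprocessors, %.1f GB memory\n",
               prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / 1e9);
        return 0;
    }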

Storage

  • NFS - Long Term Storage
    • 75TB Permanent Storage
    • 90TB Archive Storage
  • Lustre - Parallel File System
    • 200TB storage
    • Data throughput of 3.5 GB/s
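
A parallel file system such as Lustre pays off when many ranks write disjoint blocks of one shared file in a single collective operation. The MPI-IO sketch below shows the pattern; the path /scratch/demo.dat is an assumption for illustration, not an actual HPC Wales mount point:

    /* Collective parallel I/O sketch: rank r writes block r of one
     * shared file; the parallel file system serves all ranks at once. */
    #include <mpi.h>

    #define N 1024  /* doubles per rank, illustrative */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double buf[N];
        for (int i = 0; i < N; i++)
            buf[i] = rank + i / (double)N;

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "/scratch/demo.dat",  /* assumed path */
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        MPI_File_write_at_all(fh, (MPI_Offset)rank * N * sizeof(double),
                              buf, N, MPI_DOUBLE, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        MPI_Finalize();
        return 0;
    }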

Network

  • InfiniBand non-blocking QDR network
    • 1.2 µs latency
    • 40 Gbit/s bandwidth
    • Compute interconnect (MPI message passing) and fast storage access
  • Ten Gigabit Ethernet
    • Permanent storage access
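
Latency figures like the 1.2 µs quoted above are typically measured with a ping-pong microbenchmark. A minimal MPI version in C, run with exactly two ranks placed on different nodes, looks like this:

    /* Ping-pong latency sketch: ranks 0 and 1 bounce one byte back and
     * forth; half the average round-trip time approximates the one-way
     * latency of the interconnect. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        char byte = 0;
        const int reps = 1000;
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t = MPI_Wtime() - t0;
        if (rank == 0)
            printf("one-way latency ~ %.2f usec\n",
                   t / (2.0 * reps) * 1e6);

        MPI_Finalize();
        return 0;
    }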

Swansea Hub

The Swansea Hub is located in the Dylan Thomas Centre in Swansea and provides HPC Wales users with:

  • A large capability cluster (~4,000 Sandy Bridge cores)
  • A medium-sized capacity cluster (~2,000 Sandy Bridge cores)
  • GPU and fat node systems
  • A parallel, Lustre-based file system and long-term NFS storage

Swansea Capacity System

Best suited to: workloads requiring high parallel scalability and performance (large numbers of processors, large memory, storage, and a very high speed network) with standard run times.

128 Nodes (CX250S1 dual-processor nodes)

  • 16 cores and 64GB memory per node
  • Intel Xeon Sandy Bridge E5-2670
    • 8-core, 2.6 GHz, 4GB/core

Swansea Capability System

Best suited to: mixed-size workloads (large or small processor counts, large or small memory, storage, and a very high speed network) with shorter run times.

240 Nodes (CX250S1 dual-processor nodes)

  • 16 cores and 64GB memory per node
  • Intel Xeon Sandy Bridge E5-2690
    • 8-core, 2.9 GHz, 4GB/core

Swansea Fat Node Systems

Best suited to: workloads that require a high amount of memory per processor core, but with a correspondingly low processor core requirement.

16 Nodes (CX250S1 dual-processor nodes)

  • 16 cores and 128GB memory per node
  • Intel Xeon Sandy Bridge E5-2670
    • 8-core, 2.6 GHz, 8GB/core

GPU Systems

Best suited to: workloads that would benefit from increased numbers of processors / cores but with a reduced need for parallel communication (very large numbers of cores available on general-purpose graphics processing unit (GPU) cards).

16 Nodes (CX270 dual-processor nodes)

  • 16 cores and 64GB memory per node
  • Intel Xeon Sandy Bridge E5-2670
    • 8-core, 2.6 GHz, 4GB/core
  • Tesla M2090 GPU
    • 1 Fermi GPU (512 CUDA cores)
    • 6GB memory, 177 GB/s memory bandwidth

Storage

  • NFS - Long Term Storage
    • 100TB Permanent Storage
  • Lustre - Parallel File System
    • 400TB storage
    • Minimum data throughput of 3.5 GB/s

Network

  • InfiniBand non-blocking QDR network
    • 1.2 µs latency
    • 40 Gbit/s bandwidth
    • Compute interconnect (MPI message passing) and fast storage access
  • Ten Gigabit Ethernet
    • Permanent storage access

Tier 1 sites

HPC Wales has three Tier 1 sites, located at Bangor, Aberystwyth and Glamorgan. Tier 1 sites are designed for small to moderate-sized jobs, as well as for the initial development and testing of complex, large-scale code, with the aim of migrating it to one of the larger hubs for production once it runs successfully. These facilities provide HPC Wales users with:

  • A medium-sized cluster (~650 Westmere cores)
  • Short Term NFS Storage

Medium Cluster System

54 Nodes (BX922 dual-processor nodes)

  • 12 cores and 36GB memory per node
  • Intel Xeon Westmere X5650
    • 6-core, 2.67 GHz, 3GB/core

Storage

  • NFS - Short Term Storage
    • 8TB Short Term Storage

Network

  • InfiniBand non-blocking QDR network
    • 1.2 µs latency
    • 40 Gbit/s bandwidth
    • Compute interconnect (MPI message passing) and fast storage access
  • Ten Gigabit Ethernet
    • Permanent storage access

Tier 2A sites

HPC Wales has two Tier 2A sites. Tier 2A deployments are designed for small jobs and are ideally used as proof-of-concept platforms. These facilities are small, dual-personality (Windows/Linux) clusters that allow HPC Wales users to test their code and gauge whether it is suitable for High Performance Computing while working with HPC Wales' dedicated specialists.

These systems are also ideal for applications that do not rely heavily on the interconnect (networking) or have high I/O requirements. The Tier 2A facilities are located at Swansea Met University and Glyndwr University.

Small Cluster System

13 Nodes (BX922 dual-processor nodes)

  • 12 cores and 36GB memory per node
  • Intel Xeon Westmere X5650
    • 6-core, 2.67 GHz, 3GB/core

Storage

  • NFS - Short Term Storage
    • 8TB Short Term Storage

Network

  • Ten Gigabit Ethernet
    • Permanent storage access
    • Compute interconnect

Interested?

If you would like to know more or discuss a project idea, get in touch.
