
Available HPC Systems

NUS IT manages and operates three (3) HPC systems, each designed with unique configurations to cater to a diverse range of computational workloads. These systems support research and academic needs across various disciplines, including AI, simulations, genomics, fluid dynamics, data analytics, and other scientific computations.

Each HPC system is optimized for specific use cases, offering different levels of compute power, storage capacity, and interconnect performance, for example GPUs for deep learning and InfiniBand for low-latency parallel computing.

Below, you will find detailed specifications of each system, including their hardware configurations and access procedures. Whether you are running large-scale simulations, training machine learning models, or conducting intensive data analysis, our HPC infrastructure is designed to support your computational needs efficiently.

Hopper System (HPC-AI)

The Hopper system is the flagship HPC-AI system, with NVIDIA H100 and H200 GPU accelerators to support large-scale AI workloads.

How to access: GPU resources are allocated twice a year through a call for project applications, typically around February and August. Please look out for the announcement.

Chargeback: TBA

Specifications:
No. of GPU Nodes: 6 (40 in progress)
GPU Type: 8 x NVIDIA H100 80GB
CPU Type: 2 x Intel 8480+ (56 cores)
Memory: 2 TB
Network: InfiniBand NDR (400 Gb/s) for GPU-to-GPU; 100 GbE Ethernet for the storage fabric
Storage: BeeGFS, 200 TB (scratch); Dell EMC Isilon, 1.6 PB (home & project)
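
For a sense of what an allocated Hopper node looks like from inside a job, the minimal Python sketch below simply enumerates the GPUs visible to the process. It assumes PyTorch is available in your environment (the software stack is not covered on this page), so treat it as an illustration rather than a prescribed workflow.

# Minimal sketch: list the GPUs visible inside a Hopper job.
# Assumes PyTorch is installed in your environment (not specified on this page).
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        # A full Hopper node should expose 8 H100 devices with roughly 80 GB each.
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB")
else:
    print("No CUDA devices visible - check your job's GPU allocation.")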

Vanda System (Mixed CPU & GPU)

The Vanda system is a liquid-cooled HPC system hosted in the STDCT, with a mix of CPU nodes and mid-range GPUs to support general scientific workloads.

How to access: CPU and GPU resources are allocated twice a year through a call for project applications, typically around February and August. Please look out for the announcement.

Chargeback: TBA

CPU Partition
No. of CPU Nodes: 168
CPU Type: 2 x Intel 8452Y (36 cores)
Memory: 512 GB
Network: 100 GbE Ethernet
Storage: Dell EMC Isilon, 400 TB (scratch); Dell EMC Isilon, 1.6 PB (home & project)

GPU Partition
No. of GPU Nodes: 102
GPU Type: 2 x NVIDIA A40 48GB
CPU Type: 2 x Intel 8452Y (36 cores)
Memory: 512 GB
Network: 100 GbE Ethernet
Storage: Dell EMC Isilon, 400 TB (scratch); Dell EMC Isilon, 1.6 PB (home & project)
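
As an illustration of a parallel workload on Vanda's CPU partition, the sketch below is a minimal MPI "hello world" in Python that reports each rank and the node it runs on. It assumes mpi4py and an MPI runtime are available, which this page does not specify; launch commands and partition names will depend on the system's scheduler setup.

# Minimal sketch: MPI "hello world" for Vanda's CPU partition.
# Assumes mpi4py and an MPI runtime are available (not specified on this page).
from mpi4py import MPI
import socket

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each Vanda CPU node has 2 x 36 cores, so up to 72 ranks can fit on one node.
print(f"Rank {rank} of {size} running on {socket.gethostname()}")

Launched under an MPI runtime (for example, mpirun -np 72 python hello_mpi.py), each rank prints its index and the hostname of the node it was placed on.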

Atlas System (CPU)

The Atlas system is a general-purpose HPC system with CPU resources to support lower-intensity scientific or educational workloads.

How to access: NUS staff and students can access the system by registering for an HPC account (via the following link).

Chargeback: Free Access

Specifications:
No. of CPU Nodes: 480+
CPU Type: Various (24 to 96 cores)
Memory: 128 - 384 GB
Network: 10 GbE Ethernet
Storage: Dell EMC Isilon, 300 TB (scratch); Dell EMC Isilon, 1.6 PB (home & project)
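
Because Atlas nodes come in several sizes (24 to 96 cores, 128 to 384 GB of memory), a job may want to check the node it actually lands on. The standard-library Python sketch below reports the core count and total memory of the current node; it reads /proc/meminfo, so it is Linux-specific and intended only as an illustration.

# Minimal sketch: inspect the Atlas node a job lands on.
# Uses only the Python standard library; reads /proc/meminfo, so Linux-only.
import os

cores = os.cpu_count()
with open("/proc/meminfo") as f:
    mem_kb = int(f.readline().split()[1])  # first line: "MemTotal: <value> kB"

print(f"This node has {cores} cores and {mem_kb / 1024**2:.0f} GB of memory")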