Milestones

HPC Development Milestones at NUS Computer Centre

2021

• Development and Deployment of an ART kit recognition system using Deep Learning.

• Deployment of AlphaFold2 software on the NUS HPC GPU cluster for protein structure prediction to aid in related research.

• Completed expansion of the GPU cluster to include four more nodes.

• Completed text mining of the staff annual survey.

2020

• Development and Deployment of the Campus Zoning Engine as part of NUS pandemic management measures.

• Development and Deployment of a campus WiFi analytics and map dashboard to aid in campus planning.

2019

• Introduction of high-speed, low-latency SSD storage and a parallel file system with a 100Gbps network to accelerate data-centric Machine Learning/Deep Learning.

• Completed Phase I of the HPC cluster migration to the cloud.

• Launch of the central data masking system to improve ease of use and enable secure data sharing in research collaborations.

2018

• Established a foundation for Deep Learning support with the introduction of an HPC-AI cluster with the latest GPU technologies.

• Launch of the 100Gbps research network, enabling high-speed data transfer among NUS research entities and with the National Supercomputing Centre. Researchers can also access the central storage system at NUS IT through this network.

2017

• Launch of HPC Cloud at AWS to accelerate HPC resource scaling and the introduction of new technologies.

• Launch of the Hadoop Data Repository and Analytics System to support Big Data Analytics.

• Formation of the Data Engineering team to support Data Analytics and AI/Machine Learning research computing.

2016

• Establishment of an HPC collaboration with the National Supercomputing Centre (NSCC) to support NUS research computing requirements through a multi-tier arrangement: NUS IT provides low- and mid-range HPC support, while NSCC caters for high-end requirements.

2015

• Establishment of a long-distance InfiniBand network connection with the National Supercomputing Centre (NSCC) to enable high-speed access to HPC resources by NUS researchers.

• Introduction of Data Analytics as a new application domain in HPC support, with the inclusion of R and Python support.

2014

• Establishment of a Data-Centric Infrastructure Development Strategy to deliver scalable, reliable storage and a high-speed network infrastructure to support data-intensive research.

2013

• Launch of the iRODS Data Management and Sharing system to enable research data archiving and collaboration.

• Launch of the Utility Storage Service with offsite replication support and a total capacity of around one petabyte.

2012

• Launch of the HPC Cloud Service (pay-per-use), which allows researchers to acquire dedicated HPC resources with quick turnaround and flexibility.

• Launch of the HPC managed service (Condominium service) to free researchers from HPC system operation and maintenance chores.

• Introduction of two new HPC clusters with a total of 2,240 CPU cores, expanding the HPC cluster capacity by more than 70%. The new clusters came with eight fat nodes, each with 40 cores and 256GB of memory.

• Introduction of a new GPU system with more than 16,000 GPU cores.

2011

• Completion of the HPC data centre development, which enabled the expansion of various HPC resources and services to meet demand.

• Introduction of a new InfiniBand network (40Gbps bandwidth) to integrate all HPC clusters and the parallel file system with a high-speed interconnect.

2010

• Introduction of a new HPC cluster with hexa-core CPUs, adding 1,152 cores, or more than 50% additional capacity, to the HPC cluster pool.

• The central HPC facility delivered more than 10,000 research simulations a month.

2009

• Implementation of a GPFS-based parallel file system with a capacity of 120TB as a high-performance workspace for data-intensive applications.

• Launch of the HPCBio Portal to provide convenient web-based access to more than 20 biomedical applications.

• Conclusion of the HPC Challenge, with some winning projects achieving up to an 80-fold speedup for their simulations.

2008

• User authentication and access control were integrated with the central Active Directory to enable single account and password access to both HPC and non-HPC resources and services.

• User home directories were expanded and integrated with the central storage and file system to enable seamless access to files and data across laptop, desktop and HPC systems.

• The HPC Portal was upgraded to enable online account registration, cutting the account application time from days to around one hour.

• The second multi-core HPC cluster was introduced with a total of 768 cores, expanding the overall cluster computing pool to more than 1,200 CPU cores.

2007

• The University introduced its first multi-core server cluster with a total of 336 processor cores, doubling its HPC capability to an aggregate computing power of 1.99 Teraflops for researchers.

• The first Windows-based HPC cluster was introduced to provide staff with parallel computing resources accessible directly from their desktop PCs.

2006

• TCG@NUS (Tera-scale Campus Grid at NUS) clinched the CIO Award 2006, beating more than 100 nominations from the public and private sectors in the region. The award recognised the cross-faculty effort in harnessing idle computing cycles from existing desktop PCs on campus.

2005

• Planned expansion of the PC Grid to include up to 1,000 PCs, and of the Data Grid to support bioinformatics applications.

2004

• The Grid Innovation Zone was established with IBM and Intel to promote Grid computing technology.

• The following were developed as part of the NUS Campus Grid project: the Grid Portal and the first Access Grid node on campus.

• Adoption of IA64 technology and further expansion of the cluster system raised the capacity to 844.84Gflops.

2003

• The first Grid computing system (a PC Grid with 120 PCs) was developed. The combination of Grid and cluster implementations further boosted the computing capacity threefold (593.80Gflops).

2002

• Adoption of open-source cluster technology boosted the HPC capacity more than threefold (193.16Gflops).

2001

• Implementation of the high-performance remote visualisation system.

2000

• The installation of the Compaq Alpha HPC systems boosted the HPC capacity more than fourfold (52.36Gflops).

• SAN storage was introduced to enhance the storage capacity for HPC.

1998

• The installation of the SGI Origin2000 HPC system and the adoption of the cc-NUMA architecture boosted the HPC capacity about fourfold (9.52Gflops).

1996

• The number of research projects supported exceeded 100 for the first time.

1995

• The Supercomputing & Visualisation Unit was set up at the Computer Centre to support and promote High Performance Computing on campus.

• NUS installed the first Cray Vector Supercomputer (Cray J90) in the region on campus (2.4Gflops). (Gflops – billions of floating-point operations per second.)

• NUS set up the Visualisation Laboratory at the Computer Centre to provide high-end scientific visualisation resources to support research activities on campus. The Laboratory was equipped with the state-of-the-art SGI Onyx visualisation system. An MOU was signed by NUS and SGI to promote high-end visualisation technology on campus.