2021: Development and deployment of an ART kit recognition system using Deep Learning. Deployment of the AlphaFold2 software on the NUS HPC GPU cluster for predicting protein structures to aid related research. Completed expansion of the GPU cluster with four additional nodes. Completed text mining of the staff annual survey.
2020: Development and deployment of the Campus Zoning Engine as part of NUS pandemic management measures. Development and deployment of a campus WiFi analytics and map dashboard to aid campus planning.
2019: Introduction of high-speed, low-latency SSD storage and a parallel file system with a 100Gbps network to accelerate data-centric Machine Learning/Deep Learning. Completed Phase I of the HPC cluster migration to the Cloud. Launch of the central data masking system to improve ease of use and enable secure data sharing in research collaborations.
2015: Establishment of a long-distance InfiniBand network connection with the National Supercomputing Centre (NSCC) to enable high-speed access to HPC resources by NUS researchers. Introduced Data Analytics as a new application domain in HPC support, with the inclusion of R and Python support.
2014: Establishment of a Data-Centric Infrastructure Development Strategy to deliver scalable, reliable storage and a high-speed network infrastructure to support data-intensive research.
2013: Launch of the iRODS Data Management and Sharing system to enable research data archiving and collaboration. Launch of the Utility Storage Service with offsite replication support and a total capacity of around one petabyte.
2012: Launch of the HPC Cloud Service (Pay-Per-Use service), which allows researchers to acquire dedicated HPC resources with quick turnaround and flexibility. Launch of the HPC managed service (Condominium service) to free researchers from HPC system operation and maintenance chores. Introduction of two new HPC clusters with a total of 2240 CPU cores, expanding HPC cluster capacity by more than 70%; the new clusters include eight fat nodes, each with 40 cores and 256GB of memory. Introduction of a new GPU system with more than 16,000 GPU cores.
2011: Completion of the HPC data centre development, which enabled the expansion of various HPC resources and services to meet demand. Introduction of a new InfiniBand network (40Gbps bandwidth) to integrate all HPC clusters and the parallel file system with a high-speed interconnect.
2010: Introduction of a new HPC cluster with hexa-core CPUs, adding a total of 1152 cores and expanding the HPC cluster pool by more than 50%. The central HPC facility delivered more than 10,000 research simulations a month.
2009: Implementation of a GPFS-based parallel file system with a capacity of 120TB as a high-performance workspace for data-intensive applications. Launch of the HPCBio Portal for convenient web-based access to more than 20 biomedical applications. Conclusion of the HPC Challenge, with some winning projects achieving up to an 80-fold speedup in their simulations.
2008: User authentication and access control were integrated with the central Active Directory, enabling single account and password access to both HPC and non-HPC resources and services. User home directories were expanded and integrated with the central storage and file system to enable seamless access to files and data across laptop, desktop and HPC systems. The HPC Portal was upgraded to support online account registration, cutting account application time from days to around one hour. A second multi-core HPC cluster was introduced with a total of 768 cores, expanding the overall cluster computing pool to more than 1200 CPU cores.
2007: The University introduced its first multi-core server cluster with a total of 336 processor cores, doubling its HPC capability to an aggregate computing power of 1.99 Teraflops for researchers. The first Windows-based HPC cluster was introduced to provide staff with parallel computing resources accessible directly from their desktop PCs.
2006: TCG@NUS (Tera-scale Campus Grid at NUS) clinched the CIO Award 2006, beating more than 100 nominations from the public and private sectors in the region. The award recognized the cross-faculty effort in harnessing idle computing cycles from existing desktop PCs on campus.
2005: Planned expansion of the PC Grid to include up to 1000 PCs, and a Data Grid to support bioinformatics applications.
2004: The Grid Innovation Zone was established with IBM and Intel to promote Grid computing technology. As part of the NUS Campus Grid project, the Grid Portal and the first Access Grid node on campus were developed. Adoption of IA64 technology and further expansion of the cluster system raised capacity to 844.84Gflops.
2003: The first Grid computing system (a PC Grid with 120 PCs) was developed. The combination of Grid and cluster implementations boosted computing capacity a further threefold (593.80Gflops).
2002: Adoption of open-source cluster technology boosted HPC capacity more than threefold (193.16Gflops). Implementation of the high-performance remote visualisation system.
2000: The installation of the Compaq Alpha HPC systems boosted HPC capacity more than fourfold (52.36Gflops). SAN storage was introduced to enhance storage capacity for HPC.
1998: The installation of the SGI Origin2000 HPC system and the adoption of the cc-NUMA architecture boosted HPC capacity about fourfold (9.52Gflops).
1995: The Supercomputing & Visualisation Unit was set up at the Computer Centre to support and promote High Performance Computing on campus. NUS installed the first Cray vector supercomputer in the region (Cray J90, 2.4Gflops; Gflops: billions of floating-point operations per second). NUS also set up the Visualisation Laboratory at the Computer Centre to provide high-end scientific visualisation resources in support of research activities on campus; the Laboratory was equipped with the state-of-the-art SGI Onyx visualisation system. An MOU was signed by NUS and SGI to promote high-end visualisation technology on campus.