High Performance Computing (HPC) is critical for supporting our NUS research community in their work to innovate and develop solutions that meet the present and emerging needs of Singapore, Asia and the world.
HPC combines hardware, software, tools and programming techniques to accelerate computation in a research environment, enabling the execution of large, cutting-edge simulations that speed up new discoveries.
Here’s what we did to make ‘great’ greater!
On top of the HPC resources and services provided to researchers through on-campus infrastructure, we also partner with commercial cloud service providers. Our intent is to promote advanced computing technologies that accelerate researchers’ work and help them gain a competitive edge in their research fields.
This programme aims to provide free Amazon Web Services (AWS) cloud credits for eligible research projects within the NUS community. Researchers can use these cloud credits to consume a wide range of AWS resources and services, such as EC2, S3 and Fargate, when conducting their project work.
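As a simple illustration of how such credits might be consumed programmatically, the sketch below uses the AWS SDK for Python (boto3) to launch a single EC2 compute instance. The region, AMI ID, instance type and tag are placeholder values chosen for illustration only; they are not prescribed by the programme, and actual choices depend on the project and the AWS account setup.

```python
import boto3

# Minimal sketch: launch one EC2 compute instance for a research workload.
# Region, AMI ID and instance type below are illustrative placeholders.
ec2 = boto3.client("ec2", region_name="ap-southeast-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI (e.g. an Amazon Linux image)
    InstanceType="c5.4xlarge",         # a compute-optimised type for simulation work
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Project", "Value": "research-project"}],  # hypothetical tag
    }],
)

print(response["Instances"][0]["InstanceId"])
```

A researcher would typically pair such an instance with S3 buckets for input and output data, with all usage billed against the project’s cloud credits.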
As research projects usually require substantial computing resources and involve large volumes of data, an infrastructure upgrade was carried out to expand the existing research data storage service into a centralised, secure and on-demand service for storing research data. With the new procurement model, we were able to offer researchers a more attractive option than the cloud offering, reducing the subscription cost by $9 per TB per month.
What’s better than having an HPC service at your disposal? Having that service come free for all researchers in NUS, whenever they require advanced computing resources to enable and accelerate their compute-intensive workloads.
Yet another free HPC service is provided to all researchers in NUS who require Graphics Processing Unit (GPU) computing resources to enable and accelerate their Artificial Intelligence (AI) / Machine Learning (ML) workloads. Training deep learning models, running AI inference in real time and analysing massive amounts of data all demand substantial computational power; GPUs provide the parallel architecture and high performance needed to speed up these workloads, especially those related to AI and ML models.
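As a small illustration of how a researcher’s workload might take advantage of such GPU resources, the sketch below uses PyTorch to place a model and a batch of data on a GPU when one is available. The model architecture and tensor sizes are arbitrary examples for illustration and are not part of the service itself.

```python
import torch
import torch.nn as nn

# Minimal sketch: run a forward pass on a GPU when one is available,
# falling back to the CPU otherwise. Model and input sizes are arbitrary.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(
    nn.Linear(1024, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
).to(device)                                     # place the model's parameters on the GPU

inputs = torch.randn(64, 1024, device=device)    # a batch of 64 example inputs
outputs = model(inputs)                          # the computation runs on the selected device
print(outputs.shape, device)
```

The same pattern scales up to full training loops, where the GPU’s parallel architecture delivers the speed-up that makes large AI/ML experiments practical.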