Rikky Purbojati, Research Computing, NUS IT
[ Disclaimer: this article has been written with the assistance and partial content generation of chatGPT ]
Artificial intelligence (AI) has been a popular topic of discussion for many decades, but with the advent of advanced algorithms and powerful computing systems, it has now become a reality. One of the most significant advancements in AI technology is the development of chatbots and language models like OpenAI’s ChatGPT. These systems have captured the public imagination and have the potential to revolutionise the way we interact with technology.
ChatGPT is a state-of-the-art language model that uses deep learning algorithms to generate human-like text responses. It is trained on a vast corpus of text data from the internet and can generate text on a wide range of topics with remarkable accuracy and fluency. The ability of ChatGPT to understand and generate human-like text has made it a popular tool for various applications, including customer service, content creation, and even as a language tutor.
One of the most remarkable things about ChatGPT is its ability to generate text that is not just grammatically correct but also semantically meaningful. This is a significant advancement over previous language models, which could produce grammatically correct text but often lacked context and meaning. ChatGPT’s ability to grasp the meaning of words and phrases and to generate coherent, relevant text has raised expectations of the technology far beyond what was thought possible.
The development of ChatGPT has important implications for AI research and for computing infrastructure. For AI researchers, it represents a major milestone in the development of natural language processing (NLP) and deep learning algorithms. The success of ChatGPT has inspired researchers to push the boundaries of what is possible with AI and has led to new developments in the field.
In terms of computing infrastructure, the development of ChatGPT requires significant computational resources. The model is trained on hundreds of GPUs and requires large amounts of memory and storage. The computing infrastructure required to support ChatGPT is a testament to the tremendous advancements in computing power in recent years.
Similarly, in other areas of research, this computing power has never been more essential to producing scientific breakthroughs that push the envelope of collective human knowledge. Taking cues from the advances in HPC/GPU architecture and technology that made ChatGPT possible, NUS IT has put forth a five-year development plan as part of the IT Strategic Planning 2023-2027 to make this computing power available to NUS researchers.
In 2023, we have immediate plans to upgrade our GPU cluster with the latest generation of GPU accelerators, capable of efficiently handling large-language-model-like computation. In addition, we are planning to refresh around 40% of our CPU cluster, replacing some of our older-generation CPUs with the latest generation of high-density CPUs. In the past year, we also conducted a survey of server-hosting requirements among the research community, in which some of you participated. The response was encouraging, and we are now working with our data centre team on a feasibility study that will lead to a proposal to expand the NUS IT data centre to host research servers.
We are only just starting, so stay tuned and expect new and exciting technologies coming your way soon.
Happy reading!