Welcome to HPC!
As a new user, you’re probably eager to get started. This introductory guide addresses the questions new users ask most often and provides helpful links to get you started smoothly.
At HPC, all systems are managed by the PBS Job Scheduler. Using PBS commands, you can submit jobs to various hosts, each with its own features and limitations. You can monitor your jobs in real-time and check the load and status of each host to determine the best queues and hosts (NUS internal link) for your needs. Before diving into your work, we recommend reviewing the PBS Usage Guide (NUS internal link, accessible via the HPC homepage) or checking the manual pages (by typing “man” followed by the command) to familiarize yourself with the commands, queues, and hosts in the PBS environment.
Remember that interactive and background jobs submitted with an ampersand (&) have a CPU limit of 30 minutes and will be terminated if they exceed this. For long-running, compute-intensive tasks, we recommend submitting batch jobs through PBS.
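For example, a long-running task can be wrapped in a minimal PBS batch script. This is only a sketch: the queue name, resource requests, and program name below are illustrative assumptions, so check the PBS Usage Guide or `qstat -q` for the actual queues on your host.

```shell
#!/bin/bash
#PBS -N example_job            # job name (any label you choose)
#PBS -q serial                 # queue name is an assumption; list real queues with "qstat -q"
#PBS -l select=1:ncpus=4       # request one chunk with 4 CPU cores
#PBS -l walltime=24:00:00      # maximum run time for the job
cd "$PBS_O_WORKDIR"            # start in the directory the job was submitted from
./my_program > output.log 2>&1 # "my_program" is a placeholder for your executable
```

Submit the script with `qsub example_job.pbs`; PBS then runs it on a compute node without the 30-minute CPU limit that applies to interactive and background jobs.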
If you’re new to the Linux operating system, there are plenty of resources available to help you get up to speed. A quick internet search for “linux commands” will yield numerous guides and tutorials. We’ve also compiled a list of essential Linux commands, which you can download [here].
As you get acquainted with the system, you might encounter issues from time to time. Our HPC homepage is available to help guide you through troubleshooting. If you need further assistance, please use the nTouch platform to provide detailed descriptions of any issues, including error messages, so we can resolve them as efficiently as possible.
We hope you enjoy using HPC resources and look forward to supporting your work!
HPC resources are open to all NUS staff and students who have a valid NUS-ID with the need to access high-performance computing resources.
At HPC, we provide various compute servers and graphical workstations to meet users’ requirements for different hardware platforms and operating systems. The compute servers run time-consuming, compute-intensive jobs, while the graphical workstations handle pre- and post-processing work and graphical display.
The following are the different ways to access these hosts:
Use ssh plus the hostname/ip address to access the following hosts/clusters:
Hostname | Note |
---|---|
atlas6-c01.nus.edu.sg | HP Xeon two-socket Hexa-Core 64-bit Linux cluster, CentOS 6 |
atlas7.nus.edu.sg | HP Xeon two-socket Hexa-Core 64-bit Linux cluster, CentOS 6 |
atlas8.nus.edu.sg | HP Xeon two-socket 12-Core 64-bit Linux cluster, CentOS 7.8 |
atlas9.nus.edu.sg | HP Xeon two-socket 20-Core 64-bit Linux cluster, CentOS 7.5 |
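For example, to open a terminal session on atlas9 (the user ID below is a made-up placeholder; use your own NUS-ID):

```shell
# Replace "e0123456" with your own NUS-ID.
ssh e0123456@atlas9.nus.edu.sg
```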
Web browser (Internet Explorer/Firefox):
https://hpcportal.nus.edu.sg/
Download PuTTY or MobaXterm Home Edition, install it on your PC, and use it to access the HPC hosts.
Launch graphical applications on the powerful workstations and display on your desktop via the HPC Portal.
The user guide is available here.
Secure FTP is recommended for file transfer; please check the availability of scp or sftp on your system.
• Your home directory is mounted as the U: drive when you log in to the NUS domain. You can transfer files to your HPC home directory by dragging and dropping them in Windows Explorer.
• If your computer is not joined to the NUS domain, you can mount your HPC home directory (\\hpcnas.nus.edu.sg\svu\username) from Windows Explorer. If you are outside the campus network, please use the method below for file transfer.
• Using SFTP in PuTTY, Filezilla, or other secure File Transfer Tool (instruction).
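From the command line, a transfer session might look like the sketch below (the file names and user ID are placeholders, and the target path follows the home-directory layout described later in this guide):

```shell
# Upload a local file to your HPC home directory (names are placeholders).
scp results.dat e0123456@atlas9.nus.edu.sg:/home/svu/e0123456/

# Or use an interactive SFTP session:
sftp e0123456@atlas9.nus.edu.sg
# sftp> put results.dat    # upload
# sftp> get output.log     # download
# sftp> quit
```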
After logging in to any of the above hosts, you will be able to access the following directories.
Directories | Feature | Disk Quota | Backup | Description |
---|---|---|---|---|
/home/svu/$USERID | Global | 20 GB | Snapshot | Home directory, mapped to the U: drive on your PC. Snapshot backups are kept for up to 10 days. |
/hpctmp | Local to all Atlas clusters | 500 GB | No | Working directory. Files older than 60 days are purged automatically. |
Details and instructions on how to use the working directories /hpctmp and /hpctmp2 are available on the High Performance Workspace for Computational Clusters page.
• Type “hpc s” to check the disk quota for your home directory; use the “df -h” command to check the free space left in a file system.
• Please do housekeeping from time to time to make sure your disk quota is not exceeded; otherwise, you will have problems accessing HPC hosts and running jobs.
• More disk quota can be granted if there is a need; submit the service request form to apply.
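A typical housekeeping check might look like this (“hpc s” is the site-specific quota command mentioned above; the `du` pipeline is just one way to find what is taking up space):

```shell
hpc s                          # site-specific command: report your home-directory quota usage
df -h /hpctmp                  # free space left in the shared working file system
du -sh ~/* | sort -h | tail    # your largest top-level home-directory items, biggest last
```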
Users are required to submit their time-consuming jobs to the compute servers through the PBS batch queue system, instead of running them in the background with an ampersand (&) or interactively in the foreground. Different hosts accept jobs from different batch queues. For a list and description of hosts and batch queues, please check the PBS Guide on the Technical Information page.
HPC hosts all run the Linux operating system, so familiarity with Linux commands will help you in your work. There are many ways to learn more about Linux:
• Linux books from library
You can find many books on Linux concepts and commands in the library.
• Web pages in the internet
Search the internet with keywords such as “Linux commands” and you will find dozens of web pages listing and explaining them.
• You can download the following command summary for your reference
The PBS Pro job scheduler is a powerful cluster management tool that enables users to make full use of the computing resources available in HPC. Some useful features that can be found in PBS job scheduler include:
• Ability to launch jobs from any system in HPC
• Job monitoring and management
• Recovery from system crashes
• Advance resource reservation
This tutorial will teach the user to perform the following operations using PBS:
• Checking status of HPC clusters and PBS queues
• Job Monitoring, Management and Termination
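The day-to-day commands for these operations look like the sketch below (all are standard PBS Pro commands; the job ID 12345 is a placeholder for an ID returned by `qsub`):

```shell
qstat -q          # list queues with their limits and current load
pbsnodes -a       # show the state of every compute node
qstat -u "$USER"  # show your own jobs
qstat -f 12345    # full details of job 12345 (placeholder ID)
qdel 12345        # terminate job 12345
```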
You can also view these instructions at any time by entering the command “hpc pbs help” after logging in to an HPC cluster.
Remember to check the HPC home page for any information regarding HPC resources and services.
For problems or queries, please contact us through nTouch (search for “HPC Enquiries”).