Overview
The University of Manitoba’s Information Services and Technology (IST) department manages several research computing environments. The CC Compute Cluster is available to anyone with a UM computer account.
Features/benefits
This compute environment consists of 15 Dell servers arranged as follows:
- 4 login servers: accessed as ccl.cc.umanitoba.ca; you will be connected at random to one of four servers named after planets: Mercury, Venus, Gaia, or Mars.
- 11 compute servers: d1l-cc01 through d1l-cc11.cc.umanitoba.ca, also reachable without the d1l- prefix (i.e., cc01 through cc11); see the example session below.
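For example, a first connection might look like this (a sketch; jsmith stands in for your own UMNetID, and cc05 is just one of the eleven compute servers):

```
# Connect to the login pool; you will land on one of the four login servers
ssh jsmith@ccl.cc.umanitoba.ca

# Or connect to a specific compute server by name
ssh jsmith@cc05.cc.umanitoba.ca
```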
We ask users not to run compute-intensive jobs on the login servers, and we reserve the right to terminate any such jobs with little or no notice. Please see the Computationally-Intensive Processes policy for more information.
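If you are unsure whether something you started on a login server counts as compute-intensive, you can inspect your own processes with standard Linux tools, for example:

```
# List your own processes, busiest first, to spot anything CPU-heavy
ps -u "$USER" -o pid,etime,pcpu,pmem,comm --sort=-pcpu | head
```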
Each compute server has:
- 80 CPU threads (2 x 20-core Intel Xeon Cascade Lake CPUs with Hyper-Threading)
- 1024 GB RAM (DDR4-2933)
- 10 Gbit/s networking
Each login server has:
- 40 CPU threads (2 x 10-core Intel Xeon Haswell CPUs with Hyper-Threading)
- 768 GB RAM (DDR4-2133)
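Once logged in, you can confirm which class of machine you landed on; a quick sketch using standard Linux utilities:

```
hostname    # which server you are on
nproc       # CPU threads: 40 on login servers, 80 on compute servers
free -h     # installed RAM in human-readable units
```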
There is limited local disk space; storage is provided via NFS. Users receive a small default storage allocation, but we will work directly with individuals or research groups to try to accommodate their storage needs. If you need additional storage, contact the IST Service Desk with a request for the Unix team.
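To check your current usage against your allocation, the standard quota and disk tools are a reasonable first step (exact reporting depends on how quotas are configured on the NFS storage):

```
quota -s         # per-user quota summary, where quotas are enabled
df -h "$HOME"    # space on the filesystem backing your home directory
du -sh "$HOME"   # total size of your home directory
```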
How does it work?
Access
The compute service is available to all UM employees and students, but requires a valid UMNetID with the Unix/Linux entitlement claimed. If you already have your UMNetID, you can check whether you have the entitlement by trying to log in to one of the login servers.
If you need help getting your account, or getting it set up for this access, contact the IST Service Desk and explain that you would like your account to have the Unix/Linux entitlement set. The Service Desk also has information about sponsoring accounts, should that be necessary.
- Refer to Accessing CC Unix/Linux for information on the various ways of accessing the general CCL environment.
Batch job management
For researchers familiar with HPC resources: we do not presently have a batch/job scheduler. You need to log in to the compute nodes, as described above, to launch your jobs.
You can use the supcc command to show the current load of the available cluster nodes, and the sshcc command to ssh to the most lightly loaded compute node.
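A minimal sketch of launching a long-running job by hand using these helpers (./myjob is a placeholder for your own program):

```
supcc    # show the current load on the compute nodes
sshcc    # ssh to the most lightly loaded compute node

# On the compute node, detach the job from the terminal so it keeps
# running after you log out; output is captured in myjob.log
nohup ./myjob > myjob.log 2>&1 &
```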
We do hope to eventually add a job scheduler to the compute cluster to handle resource contention; for now, contention is managed according to the Computationally-Intensive Processes policy.
More information
If you have additional questions or feedback, please don’t hesitate to ask: create an IT Ticket or contact the IST Service Desk with an info request for the Unix team.
- General information on CC Unix/Linux
- Information on quotas and workspaces
- More information on ThinLinc
- Listing of CC Unix/Linux servers and their current status
- Policy on Computationally-Intensive Processes