bwUniCluster¶
Overview¶
bwUniCluster 3.0 is a Tier-3, heterogeneous regional cluster with NVIDIA and AMD GPUs, available to the University of Stuttgart for general-purpose computing and teaching. For research purposes, consider using bwForCluster instead, which offers more resources.
See the cluster status page for outage notifications.
See the bwUniCluster3.0/Hardware and Architecture page for more information.
| Compute node | Nodes | Sockets | Cores | Clock speed | RAM | Local SSD | Bus | Accelerators | VRAM | Interconnect |
|---|---|---|---|---|---|---|---|---|---|---|
| IceLake | 272 | 2 | 64 | 2.6 GHz | 256 GB | 1.8 TB | NVMe | - | - | IB HDR200 |
| IceLake GPU x4 | 15 | 2 | 64 | 2.6 GHz | 512 GB | 6.4 TB | NVMe | 4x A100 / H100 | 80/94 GB | IB 2x HDR200 |
| Standard | 70 | 2 | 96 | 2.75 GHz | 384 GB | 3.84 TB | NVMe | - | - | IB 2x NDR200 |
| High Memory | 4 | 2 | 96 | 2.75 GHz | 2.3 TB | 15.4 TB | NVMe | - | - | IB 2x NDR200 |
| GPU NVIDIA x4 | 12 | 4 | 96 | 2.75 GHz | 768 GB | 15.4 TB | NVMe | 4x H100 | 94 GB | IB 4x NDR200 |
| GPU AMD x4 | 1 | 4 | 96 | 3.7 GHz | 512 GB | 7.68 TB | NVMe | 4x MI300A | 128 GB | IB 2x NDR200 |
| Login | 2 | 2 | 96 | 2.75 GHz | 384 GB | 7.68 TB | SATA | - | - | IB 1x NDR200 |
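For quick sanity checks it can help to have the node specifications in machine-readable form. The following is a minimal sketch (not part of the cluster tooling; the dictionary and helper are hypothetical) that encodes a few columns of the table above and picks the smallest node type that fits a per-node memory and GPU request.

```python
# Hypothetical sketch: a few columns of the compute-node table above as a
# Python dictionary, plus a helper that returns the node type with the
# least RAM that still fits a per-node memory and GPU requirement.
NODE_TYPES = {
    "IceLake":        {"cores": 64, "ram_gb": 256,  "gpus": 0},
    "IceLake GPU x4": {"cores": 64, "ram_gb": 512,  "gpus": 4},
    "Standard":       {"cores": 96, "ram_gb": 384,  "gpus": 0},
    "High Memory":    {"cores": 96, "ram_gb": 2300, "gpus": 0},
    "GPU NVIDIA x4":  {"cores": 96, "ram_gb": 768,  "gpus": 4},
    "GPU AMD x4":     {"cores": 96, "ram_gb": 512,  "gpus": 4},
}

def pick_node_type(mem_gb: float, gpus: int = 0) -> str:
    """Smallest-RAM node type that satisfies the per-node request."""
    fitting = [(spec["ram_gb"], name) for name, spec in NODE_TYPES.items()
               if spec["ram_gb"] >= mem_gb and spec["gpus"] >= gpus]
    if not fitting:
        raise ValueError("no node type satisfies this request")
    return min(fitting)[1]

print(pick_node_type(mem_gb=1000))          # High Memory
print(pick_node_type(mem_gb=600, gpus=4))   # GPU NVIDIA x4
```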
Partitions and nodes¶
This cluster uses queues instead of partitions.
dev_* queues are reserved for development work, i.e. debugging or performance optimization.
| Queue | Compute node | Default resources | Minimal resources | Maximum resources |
|---|---|---|---|---|
| | IceLake | mem-per-cpu=2gb | | time=72:00:00, nodes=30, mem=249600mb, ntasks-per-node=64, (threads-per-core=2) |
| | Standard | mem-per-cpu=2gb | | time=72:00:00, nodes=20, mem=380gb, ntasks-per-node=96, (threads-per-core=2) |
| | High Memory | mem-per-cpu=12090mb | mem=380gb | time=72:00:00, nodes=4, mem=2300gb, ntasks-per-node=96, (threads-per-core=2) |
| | GPU NVIDIA x4 | mem-per-gpu=193300mb cpus-per-gpu=24 | gres=gpu:1 | time=72:00:00, nodes=12, mem=760gb, ntasks-per-node=96, (threads-per-core=2) |
| | GPU AMD x4 | mem-per-gpu=128200mb cpus-per-gpu=24 | gres=gpu:1 | time=72:00:00, nodes=1, mem=510gb, ntasks-per-node=40, (threads-per-core=2) |
| | IceLake GPU x4 | mem-per-gpu=127500mb cpus-per-gpu=16 | gres=gpu:1 | time=48:00:00, nodes=9 (A100) / nodes=5 (H100), mem=510gb, ntasks-per-node=64, (threads-per-core=2) |
| | IceLake GPU x4 | mem-per-gpu=94gb cpus-per-gpu=12 | gres=gpu:1 | time=30, nodes=12, mem=376gb, ntasks-per-node=48, (threads-per-core=2) |
| | IceLake | mem-per-cpu=2gb | | time=30, nodes=8, mem=249600mb, ntasks-per-node=64, (threads-per-core=2) |
| | Standard | mem-per-cpu=2gb | | time=30, nodes=1, mem=380gb, ntasks-per-node=96, (threads-per-core=2) |
| | GPU NVIDIA x4 | mem-per-gpu=193300mb cpus-per-gpu=24 | gres=gpu:1 | time=30, nodes=1, mem=760gb, ntasks-per-node=96, (threads-per-core=2) |
| | GPU NVIDIA x4 | mem-per-gpu=127500mb cpus-per-gpu=16 | gres=gpu:1 | time=30, nodes=1, mem=510gb, ntasks-per-node=64, (threads-per-core=2) |
Source: bwUniCluster3.0/Queues, which also lists the queue names.
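As an illustration of the limits in the table, the sketch below assembles the Slurm directives for a four-GPU job on a GPU NVIDIA x4 node and rejects requests that exceed the listed maxima. It assumes the usual Slurm options (`--partition`, `--time`, `--nodes`, `--ntasks-per-node`, `--gres`); `<queue>` is a placeholder that must be replaced by one of the queue names from the Queues page.

```python
# Minimal sketch, assuming standard Slurm sbatch directives. The maxima are
# taken from the "GPU NVIDIA x4" row of the table above; "<queue>" is a
# placeholder for the queue name listed on the Queues page.
MAX_GPU_NVIDIA_X4 = {"nodes": 12, "ntasks_per_node": 96, "gpus_per_node": 4}

def sbatch_header(queue, time, nodes, ntasks_per_node, gpus_per_node):
    for key, value in (("nodes", nodes), ("ntasks_per_node", ntasks_per_node),
                       ("gpus_per_node", gpus_per_node)):
        if value > MAX_GPU_NVIDIA_X4[key]:
            raise ValueError(f"{key}={value} exceeds the queue maximum")
    return "\n".join([
        "#!/bin/bash",
        f"#SBATCH --partition={queue}",
        f"#SBATCH --time={time}",                 # at most 72:00:00 here
        f"#SBATCH --nodes={nodes}",
        f"#SBATCH --ntasks-per-node={ntasks_per_node}",
        f"#SBATCH --gres=gpu:{gpus_per_node}",
    ])

print(sbatch_header("<queue>", time="24:00:00", nodes=1,
                    ntasks_per_node=96, gpus_per_node=4))
```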
GPU data sheets:
NVIDIA A100 (80 GB HBM2e, sm_80)
NVIDIA H100 (94 GB HBM, sm_90)
AMD Instinct MI300A (128 GB HBM3)
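On the NVIDIA GPU nodes it can be useful to check which accelerator model a job actually received, for example to pick the matching compile target (sm_80 for the A100, sm_90 for the H100). A minimal sketch, assuming an NVIDIA node with nvidia-smi on the PATH; the MI300A node would need ROCm tooling instead.

```python
# Minimal sketch, assuming an NVIDIA GPU node with nvidia-smi on the PATH.
# Maps the reported device name to the compute capability from the data
# sheets above.
import subprocess

SM_TARGET = {"A100": "sm_80", "H100": "sm_90"}  # from the data sheets above

csv = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout
for line in csv.strip().splitlines():
    name, mem = (field.strip() for field in line.split(",", 1))
    sm = next((t for model, t in SM_TARGET.items() if model in name), "unknown")
    print(f"{name}: {mem}, compile target {sm}")
```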
Filesystems¶
Access¶
For University of Stuttgart personnel, applications are processed by the HLRS. Follow the instructions outlined on the HLRS page bwUniCluster Access. You need to provide your personal information and write a short abstract of your research or teaching project. Once your application is approved, you will need to register an account at KIT and fill out a questionnaire. The review phase takes a few working days.
Be advised that entitlements are time-limited: 1 year for students, or the contract
end date for academic staff. No reminder is sent before an entitlement is
revoked by TIK. Students need to request an extension before the cutoff date.
Academic staff whose contract gets renewed need to request an extension before the
end date of the old contract (mention the new contract end date in the e-mail).
To check your entitlements, log into bwIDM,
open the “Shibboleth” tab and look for http://bwidm.de/entitlement/bwUniCluster.
Afterwards, create an account on the bwHPC service by following the instructions in Registration/bwUniCluster. You need 2FA, and SMS is not an option. If you don't have a YubiKey or a device capable of managing software tokens, you can use the KeePassXC software instead (see TOTP).
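For reference, a software token simply evaluates the TOTP algorithm (RFC 6238) over a shared secret; this is what KeePassXC's TOTP feature computes for you. A minimal sketch using only the Python standard library, with a made-up placeholder secret:

```python
# Minimal TOTP sketch (RFC 6238) using only the standard library. The
# base32 secret below is a made-up placeholder; the real secret is the one
# issued during 2FA token registration and must never be shared.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // period                 # 30 s time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # prints the current 6-digit one-time code
```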
Once access is granted, refer to the bwUniCluster3.0 user documentation. See also Using bwUniCluster for building software and submitting jobs.
Obligations¶
Use of the cluster must be acknowledged in scientific publications. Citation details of these publications must be communicated to the bwHPC-S5 project (publications@bwhpc.de). The following formulation can be used:
This work was performed on the computational resource bwUniCluster funded by the Ministry of Science, Research and the Arts Baden-Württemberg and the Universities of the State of Baden-Württemberg, Germany, within the framework program bwHPC.
For details, refer to bwUniCluster3.0/Acknowledgement.
Publications¶
[Kuron et al., 2019]: ICP ESPResSo simulations on bwUniCluster
[Zeman et al., 2021]: ICP GROMACS simulations on Hazel Hen with support from bwHPC