Node Type | Total Nodes | Slots Per Node | Additional Resource | Memory Per Node | Node Name Prefix | Total Cores |
---|---|---|---|---|---|---|
Login | 1 | 20 | none | 128GB | Turing | 20 |
Standard Compute | 220 | 16-32 | none | 128GB | coreV1-22-###, coreV2-22-###, coreV2-25-###, coreV3-23-###, coreV4-21-### | 5456 |
GPU | 16 | 28-32 | Nvidia K40, K80, P100, and V100 GPUs | 128GB | coreV3-23-k40-###, coreV4-21-k80-###, coreV4-22-p100-###, coreV4-24-v100-### | 512 |
Xeon Phi | 10 | 20 | Intel Xeon Phi MICs (2250) | 128GB | coreV2-25-knc-### | 200 |
High Memory | 7 | 32 | none | 512GB-768GB | coreV2-23-himem-###, coreV4-21-himem-### | 224 |
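To see how these node types map onto the live Slurm configuration, you can query the scheduler directly. The sketch below uses standard `sinfo` options; the exact output depends on the current state of the cluster.

```
# List each partition with its node count, CPUs per node, and memory per node.
sinfo -o "%P %D %c %m"

# Show the individual nodes in a given partition (e.g. gpu) and their state.
sinfo -p gpu -N -l
```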
The Intel Xeon Phi co-processors are no longer supported. The Phi partition still exists for compatibility purposes, but it has already been merged into the main partition; do not submit jobs to it in new job scripts.
Partitions on Turing are defined by machine type.
Name | Use case | Slurm Submission Options |
---|---|---|
main | No specialized hardware requirement; needs less than 128GB of memory per node | -p main, or nothing (all jobs are submitted to main by default) |
himem | Needs more than 128GB of memory per node | -p himem |
gpu | Needs a GPU accelerator | -p gpu --gres gpu:1 |

For example, to start an interactive GPU session:

    salloc -p gpu --gres gpu:1 -c 4

Or, in a job script:

    #SBATCH -p gpu
    #SBATCH --gres gpu:1
    #SBATCH -c 4
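Putting these options together, a complete GPU batch script might look like the sketch below. The job name, output file, and the application ./my_gpu_app are placeholders; only the partition, --gres, and core-count options come from the example above.

```
#!/bin/bash
#SBATCH -p gpu                    # GPU partition (see table above)
#SBATCH --gres gpu:1              # request one GPU
#SBATCH -c 4                      # four CPU cores for the host process
#SBATCH --job-name=gpu-example    # placeholder job name
#SBATCH --output=gpu-example.out  # placeholder output file

# Placeholder application; replace with your own GPU-enabled program.
./my_gpu_app
```

Submit it with `sbatch gpu-example.sh`.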
In addition, there is a timed partition corresponding to each partition above: timed-main, timed-himem, and timed-gpu. The differences between the timed partitions and their non-timed counterparts are:

* Any job can execute on a timed partition for a maximum of 2 hours. Jobs are terminated after 2 hours, so the timed partitions are best suited to short jobs.
* Any job submitted to a timed partition receives high priority to start execution. We believe that if your job is very short, you should not have to wait long.
* The timed partitions are larger than their non-timed counterparts. Our investors kindly lend their resources for community use and have agreed to let all Turing users run short tasks on their hardware.

To submit a job to a timed partition, the following option is mandatory in your submission:

    -p timed-main|timed-himem|timed-gpu    # pick one
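As a sketch, a short job submitted to timed-main might look like the following. The job name, core count, and command are placeholders; the time limit simply mirrors the 2-hour maximum described above.

```
#!/bin/bash
#SBATCH -p timed-main            # timed partition: high priority, 2-hour cap
#SBATCH -t 02:00:00              # request no more than the 2-hour maximum
#SBATCH --job-name=short-task    # placeholder job name
#SBATCH -c 1                     # single core; adjust as needed

# Placeholder command; replace with your own short task.
./my_short_task
```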
Turing has several mounted storage resources available:
Mount Point | Purpose | Backup |
---|---|---|
/home | User home directory; primary storage for users | Yes, once per day |
/RC | Archive and long-term storage; only available on the login node | Yes, once per day |
/scratch-lustre | Scratch space (fast) | No |
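Because /scratch-lustre is fast but not backed up, a common pattern is to stage input data from /home to scratch at the start of a job and copy results back at the end. The sketch below assumes a per-user directory under /scratch-lustre and placeholder input and application names; adjust the paths to your own layout.

```
#!/bin/bash
#SBATCH -p main

# Assumed per-user scratch layout; not a documented path.
WORKDIR=/scratch-lustre/$USER/$SLURM_JOB_ID
mkdir -p "$WORKDIR"

# Stage input data from backed-up home storage to fast scratch space.
cp -r "$HOME/my_input_data" "$WORKDIR/"

cd "$WORKDIR"
./my_analysis my_input_data        # placeholder application

# Copy results back to /home, since /scratch-lustre is not backed up.
cp -r results "$HOME/"
```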
Compute Platform | # of Nodes | Hosts | CPU | Memory | Special Features |
---|---|---|---|---|---|
General Purpose | 30 | coreV1-22-### | Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz (16 slots) | 128GB | AVX1 |
General Purpose | 40 | coreV2-22-### | Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz (20 slots) | 128GB | AVX1 |
General Purpose | 80 | coreV2-25-### | Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz (20 slots) | 128GB | AVX1 |
General Purpose | 50 | coreV3-23-### | Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz (32 slots) | 128GB | AVX2 |
General Purpose | 30 | coreV4-21-### | Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz (32 slots) | 128GB | AVX2 |
High Memory | 4 | coreV2-23-himem-### | Intel(R) Xeon(R) CPU E5-4610 v2 @ 2.30GHz (32 slots) | 768GB | AVX1 |
High Memory | 3 | coreV4-21-himem-### | Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz (32 slots) | 512GB | AVX2 |
GPU | 10 | coreV3-23-k40-### | Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz (28 slots) | 128GB | AVX2, Tesla™ K40m |
GPU | 5 | coreV4-21-k80-### | Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz (32 slots) | 128GB | AVX2, Tesla™ K80 |
GPU | 2 | coreV4-22-p100-### | Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz (24 slots) | 128GB | AVX2, Tesla™ P100 |
GPU | 4 | coreV4-24-v100-### | Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz (28 slots) | 128GB | AVX2, Tesla™ V100 |
MIC | 10 | coreV2-25-knc-### | Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz (20 slots) | 128GB | AVX1, Xeon Phi MIC (8GB, 1.053 GHz, 60 cores) |
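The AVX level matters when you build optimized software: a binary compiled for AVX2 will typically fail with an illegal-instruction error on the AVX1-only nodes. The check below is a generic Linux one, not a Turing-specific tool, and can be run inside a job to confirm what the allocated node supports.

```
# Report which AVX extensions the current node's CPU supports.
if grep -q avx2 /proc/cpuinfo; then
    echo "AVX2 supported"
elif grep -q avx /proc/cpuinfo; then
    echo "AVX1 only"
else
    echo "No AVX support detected"
fi
```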