You can find the whole table that you can send to the FNS {{:

Users of a given PI are entitled to 100k CPU hours per year free of charge (per PI, not per user). See [[hpc:

==== Cost of Renting a Compute Node ====

Users are entitled to utilize up to 60% of the computational resources they own or rent within the cluster. For example, if you rent a compute node with 128 CPU cores for one year, you will receive a total credit of **128 (cores) × 24 (hours) × 365 (days) × 0.6 (max usage rate) = 672,768 core-hours**. This credit can be used across any of our three clusters -- Bamboo, Baobab, and Yggdrasil -- regardless of where the compute node was rented or purchased.
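The calculation above can be reproduced with a short sketch (Python, for illustration only); the 24 hours per day, 365 days per year and 60% maximum usage rate are the values stated above, and the function name ''core_hour_credit'' is ours.

<code python>
def core_hour_credit(cores: int, days: int = 365, max_usage_rate: float = 0.6) -> int:
    """Core-hour credit for owned or rented hardware: cores x 24 h x days x max usage rate."""
    return int(cores * 24 * days * max_usage_rate)

# Example from the text: a rented 128-core compute node kept for one year
print(core_hour_credit(128))  # 672768 core-hours
</code>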
The main advantage is that you are not restricted to using your private nodes: you can access all three clusters and even the GPUs.

We are developing scripts that let you check your usage and the number of hours you are entitled to use, based on the hardware your group owns.

The key distinction when using your own resources is that you benefit from a higher scheduling priority, ensuring quicker access to computational resources.

For more details, please contact the HPC support team.
===== Purchasing or Renting Private Compute Nodes =====
  * ~ 14'

  * 2 x 96 Core AMD EPYC 9754 2.4GHz Processor
  * 768GB DDR5 4800MHz Memory (24x32GB)
  * 100G IB EDR card
  * 960GB SSD
  * ~ 16'464 CHF incl. VAT

Key differences:
  * + the 9754 has higher memory bandwidth: up to 460.8 GB/s vs 190.73 GB/s for the 7763
  * + the 9754 has a larger cache
  * - the 9754 is more expensive
  * - power consumption is 400W for the 9754 vs 240W for the 7763
  * - the 9754 is harder to cool, as the inlet temperature for air cooling must not exceed 22°C
=== GPU H100 with AMD ===
If you want to ask for a financial contribution from UNIGE, you must complete a COINF application: https://
+ | |||
+ | ====== Use Baobab for teaching ====== | ||
+ | |||
+ | Baobab, our HPC infrastructure, | ||
+ | |||
+ | Teachers can request access via [dw.unige.ch](final link to be added later, use hpc@unige.ch in the meantime), and once the request is fulfilled, a special account named < | ||
+ | |||
+ | A shared storage space can also be created optionally, accessible at ''/ | ||
+ | |||
+ | **All student usage is free of charge if they submit their job to the correct account**. | ||
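In practice this means students pass the course account to Slurm when submitting a job. The snippet below is only an illustrative sketch: ''course_demo'' and ''exercise.sh'' are placeholder names, and the real account name is the one provided once the teacher's request is fulfilled.

<code python>
import subprocess

COURSE_ACCOUNT = "course_demo"  # placeholder: use the account name provided by the HPC team
JOB_SCRIPT = "exercise.sh"      # placeholder: the student's batch script

# Submit the job with Slurm, charging it to the course account so the usage stays free of charge
result = subprocess.run(
    ["sbatch", f"--account={COURSE_ACCOUNT}", JOB_SCRIPT],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())  # e.g. "Submitted batch job 123456"
</code>

Equivalently, the directive ''#SBATCH --account=course_demo'' can be placed at the top of the batch script itself.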
+ | |||
+ | We strongly recommend that teachers use and promote our user-friendly web portal at [[hpc: | ||
+ | |||
====== How do I use your clusters? ======
| V8 | EPYC-7742 | 2.25GHz | 128 cores | "
| V10 | EPYC-72F3 | 3.7GHz |
| V10 | EPYC-7763 | 2.45GHz | 128 cores | "
| V8 | EPYC-7302P | 3.0GHz |
=== GPUs on Bamboo ===
| V3 | E5-2660V0 | 2.20GHz | 16 cores | "Sandy Bridge-EP" |
| V3 | E5-2660V0 | 2.20GHz | 16 cores | "Sandy Bridge-EP" |
| V3 | E5-2660V0 | 2.20GHz | 16 cores | "Sandy Bridge-EP"
| V3 | E5-2670V0 | 2.60GHz | 16 cores | "Sandy Bridge-EP"
| V3 | E5-4640V0 | 2.40GHz | 32 cores | "Sandy Bridge-EP"
| V4 | E5-2650V2 | 2.60GHz | 16 cores | "Ivy Bridge-EP"
| V5 | E5-2643V3 | 3.40GHz | 12 cores | "
| V6 | E5-2630V4 | 2.20GHz | 20 cores | "
| V6 | E5-2637V4 | 3.50GHz | 8 cores | "
| V6 | E5-2643V4 | 3.40GHz | 12 cores | "
| V7 | EPYC-7601 | 2.20GHz | 64 cores | "
| V8 | EPYC-7742 | 2.25GHz | 128 cores | "
| V9 | GOLD-6240 | 2.60GHz | 36 cores | "
| V10 | EPYC-7763 | 2.45GHz | 128 cores | "
| V11 | EPYC-9554 | 3.10GHz | 128 cores | "