===== Cost model =====
  * **Purchase or rent** compute nodes for more intensive workloads.
**Summary: how this model is implemented**

  * Starting this year, you receive a **CPU hours credit** based on the hardware you own (if any) in the cluster (private partition).
  * You can find instructions on how to check your annual credit here: [[accounting#…]].
  * The credit calculation in the provided script assumes a **5-year hardware ownership period**. However, **if** this policy was introduced after your compute nodes were purchased, we have extended the production duration by two years (see the sketch after this list).
  * To ensure **flexibility and simplicity**, …
  * You can use your credit across all three clusters (**Baobab, Yggdrasil, and Bamboo**), not just on your private compute nodes. However, when using your own compute nodes, you will receive a **higher priority**.
  * To check your group's …
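The ownership-period rule above can be illustrated with a minimal Python sketch. This is **not** the official accounting script: the function, the policy-introduction year, and the single-purchase-year model are assumptions made here for illustration only.

<code python>
# Assumption: the year the CPU-hours-credit policy was introduced.
# Replace with the real value; it is not stated on this page.
POLICY_YEAR = 2025

def production_end_year(purchase_year: int) -> int:
    """Return the last year a node earns CPU hours credit.

    Nodes are credited over a 5-year ownership period; if the policy
    was introduced after the nodes were purchased, the production
    duration is extended by two years (7 years in total).
    """
    duration = 5
    if purchase_year < POLICY_YEAR:
        duration += 2  # extension for nodes bought before the policy
    return purchase_year + duration

# Nodes bought in 2022 would be credited through 2029 under this
# reading, while nodes bought after the policy get the plain 5 years.
for year in (2022, 2026):
    print(year, "->", production_end_year(year))
</code>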
==== Price per hour ====
<WRAP center round important 60%>
=== Progressive Pricing for HPC Compute Hours ===

A tiered pricing model applies to compute-hour billing. Discounts increase as usage grows: once you reach 200,000, 500,000, and 1,000,000 compute hours, an additional 10% reduction is applied at each threshold. This ensures cost efficiency for large-scale workloads.

^ Usage (Compute Hours) ^ Discount Applied ^
| 0 – 199,999 | 0% |
| 200,000 – 499,999 | 10% |
| 500,000 – 999,999 | 20% |
| 1,000,000 and above | 30% |
===== Purchasing or Renting Private Compute Nodes =====
We usually order and install the nodes twice per year.
If you want to request a financial contribution from UNIGE, you must complete a [[https://…|COINF]] application.
====== Use Baobab for teaching ======
Both clusters contain a mix of "public" compute nodes (funded by the University, for example through [[https://…|COINF]]) and "private" compute nodes (funded by individual research groups). In general, everyone can request compute resources on any node (public and private), but a research group that owns "private" nodes has a higher priority on its "private" nodes.
Since our clusters are regularly expanded, the nodes are not all from the same generation. You can see the details in the following table.
^ Generation ^ Model ^ Freq ^ Nb cores ^ Architecture ^
| V5 | E5-2643V3 | 3.40GHz | 12 cores | "Haswell-EP" |
| V6 | E5-2630V4 | 2.20GHz | 20 cores | "Broadwell-EP" |
| V6 | E5-2637V4 | 3.50GHz | 8 cores | "Broadwell-EP" |
| V6 | E5-2643V4 | 3.40GHz | 12 cores | "Broadwell-EP" |
| V6 | E5-2680V4 | 2.40GHz | 28 cores | "Broadwell-EP" |
| V7 | EPYC-7601 | 2.20GHz | 64 cores | "Naples" |
| V8 | EPYC-7742 | 2.25GHz | 128 cores | "Rome" |
| V9 | SILVER-4210R | 2.40GHz | 20 cores | "Cascade Lake" |
| V9 | GOLD-6240 | 2.60GHz | 36 cores | "Cascade Lake" |
| V9 | GOLD-6244 | 3.60GHz | 16 cores | "Cascade Lake" |
| V10 | EPYC-7763 | 2.45GHz | 128 cores | "Milan" |
| V11 | EPYC-9554 | 3.10GHz | 128 cores | "Genoa" |
| V12 | EPYC-9654 | 3.70GHz | 192 cores | "Genoa" |
| V12 | EPYC-9654 | 3.70GHz | 96 cores | "Genoa" |
| The " | The " | ||
| Titan X | Pascal |
| RTX 2080 Ti | Turing |
| RTX 2080 Ti | Turing |
| RTX 2080 Ti | Turing |
| RTX 2080 Ti | Turing |
| RTX 3090 | Ampere |