hpc:hpc_clusters · Last modified: 2025/12/16 17:04 by Yann Sagon
  * **Purchase or rent** compute nodes for more intensive workloads.
  
**Summary:**

  * Starting this year, you receive a **CPU hours credit** based on the hardware you own (if any) in the cluster (private partition).
  * You can find instructions on how to check your annual credit here: [[accounting#resources_available_for_research_group|Resources Available for Research Groups]]. If you know your research group has bought some compute nodes but your PI doesn't appear in the report, please contact us.
  * The credit calculation in the provided script assumes a **5-year hardware ownership period**. However, **if** this policy was introduced after your compute nodes were purchased, we have extended the production duration by two years.
  * To ensure **flexibility and simplicity**, we have standardized resource usage by converting CPU, memory, and GPU hours into CPU hours, using different conversion ratios depending on the GPU type. More details can be found here: [[accounting#resource_accounting_uniformization|Resource Accounting Uniformization]].
  * You can use your credit across all three clusters (**Baobab, Yggdrasil, and Bamboo**), not just on your private compute nodes. However, when using your own compute nodes, you will receive a **higher priority**.
  * To check your group's current resource usage, visit: [[accounting#report_and_statistics_with_sreport|Report and Statistics with sreport]].
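The uniformization described above can be sketched as a single conversion: fold CPU, memory, and GPU usage into one CPU-hour figure. Note that the conversion ratios and GPU type names below are hypothetical placeholders, not the official UNIGE values; consult the Resource Accounting Uniformization page linked above for the real ratios.

```python
# Sketch of resource-accounting uniformization: express all usage in
# CPU hours. All ratios here are PLACEHOLDER values for illustration
# only -- the official ratios are on the accounting page.
GPU_CPUH_PER_HOUR = {        # CPU-hour equivalents per GPU hour (hypothetical)
    "rtx_2080_ti": 20,
    "a100": 60,
}
MEM_GB_PER_CPUH = 4          # GB-hours of RAM counted as one CPU hour (hypothetical)

def usage_in_cpu_hours(cpu_hours, mem_gb_hours, gpu_hours_by_type):
    """Fold CPU, memory, and GPU usage into a single CPU-hour figure."""
    total = cpu_hours + mem_gb_hours / MEM_GB_PER_CPUH
    for gpu_type, hours in gpu_hours_by_type.items():
        total += hours * GPU_CPUH_PER_HOUR[gpu_type]
    return total

# Example with the placeholder ratios: 10,000 CPU hours, 20,000 GB-hours
# of RAM, and 100 A100 GPU hours -> 10_000 + 5_000 + 6_000 = 21_000
```

The credited amount would then simply be compared against this uniformized total when the annual report is generated.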
==== Price per hour ====
<WRAP center round important 60%>
  
  
=== Progressive Pricing for HPC Compute Hours ===
A tiered pricing model applies to compute hour billing. Discounts increase as usage grows: once you reach 200,000, 500,000, and 1,000,000 compute hours, an additional 10% reduction is applied at each threshold. This ensures cost efficiency for large-scale workloads.

^ Usage (Compute Hours) ^ Discount Applied ^
| 0 – 199,999           | Base Rate       |
| 200,000 – 499,999     | Base Rate -10%  |
| 500,000 – 999,999     | Base Rate -20%  |
| 1,000,000+            | Base Rate -30%  |
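The tiers above can be sketched as a small lookup. One caveat: the text does not state whether each discount applies retroactively to the whole usage or only to hours above the threshold; this sketch assumes the bracket's discount applies to every hour, matching a straightforward reading of the table.

```python
def discount(total_hours: int) -> float:
    """Discount on the base rate for a given cumulative usage,
    following the tiered-pricing table above."""
    if total_hours >= 1_000_000:
        return 0.30
    if total_hours >= 500_000:
        return 0.20
    if total_hours >= 200_000:
        return 0.10
    return 0.0

def billed_cost(total_hours: int, base_rate: float) -> float:
    # Assumption: the bracket's discount applies to all hours used,
    # not only to the hours above each threshold.
    return total_hours * base_rate * (1.0 - discount(total_hours))
```

For example, under this assumption a group consuming 600,000 compute hours would pay `600_000 * base_rate * 0.8`.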
===== Purchasing or Renting Private Compute Nodes =====
  
We usually order and install new nodes twice per year.
  
If you want to request a financial contribution from UNIGE, you must submit a request to [[https://www.unige.ch/rectorat/commissions/coinf/appel-a-projets|COINF]].
====== Use Baobab for teaching ======
  
  
Both clusters contain a mix of "public" nodes provided by the University of Geneva and "private" nodes, in general funded 50% by the University through [[https://www.unige.ch/rectorat/commissions/coinf/appel-a-projets|COINF]] and 50% by a research group. Any user of the clusters can request compute resources on any node (public and private), but a research group that owns "private" nodes has a higher priority on its "private" nodes and can request a longer execution time.
  
^ Generation ^ Model        ^ Freq    ^ Nb cores ^ Architecture               ^ Nodes                                             ^Extra flag      ^ Status                       |
| V5         | E5-2643V3    | 3.40GHz | 12 cores | "Haswell-EP" (22 nm)       | gpu[002]                                          |                | on prod                      |
| V6         | E5-2630V4    | 2.20GHz | 20 cores | "Broadwell-EP" (14 nm)     | cpu[173-185,187-201,205-213,220-229,237-264],gpu[004-009]|         | on prod                      |
| Titan X     | Pascal       | 12GB  | 6.1               | nvidia_titan_x             | titan                | 8         | gpu[009-010]     |
| RTX 2080 Ti | Turing       | 11GB  | 7.5               | nvidia_geforce_rtx_2080_ti | turing               | 2         | gpu[011]         |
| RTX 2080 Ti | Turing       | 11GB  | 7.5               | nvidia_geforce_rtx_2080_ti | turing               | 8         | gpu[015]         |
| RTX 2080 Ti | Turing       | 11GB  | 7.5               | nvidia_geforce_rtx_2080_ti | turing               | 8         | gpu[013,016]     |
| RTX 2080 Ti | Turing       | 11GB  | 7.5               | nvidia_geforce_rtx_2080_ti | turing               | 4         | gpu[018-019]     |