hpc:hpc_clusters
Differences
This shows you the differences between two versions of the page.
Previous revision: hpc:hpc_clusters [2025/11/07 09:13] – [Free CPU Hour Allocation] Yann Sagon
Current revision: hpc:hpc_clusters [2025/12/16 17:04] (current) – [Cost model] Yann Sagon

Line 82: Line 82:
* **Purchase or rent** compute nodes for more intensive workloads.
- You can as well find a summary of how this model is implemented yet: https://
+ **Summary:**
+
+ * Starting this year, you receive a **CPU hours credit** based on the hardware you own (if any) in the cluster (private partition).
+ * You can find instructions on how to check your annual credit here: [[accounting#
+ * The credit calculation in the provided script assumes a **5-year hardware ownership period**. However, **if** this policy was introduced after your compute nodes were purchased, we have extended the production duration by two years (illustrated in the sketch just after this list).
+ * To ensure **flexibility and simplicity**,
+ * You can use your credit across all three clusters (**Baobab, Yggdrasil, and Bamboo**), not just on your private compute nodes. However, when using your own compute nodes, you will receive a **higher priority**.
+ * To check your group's
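
To make the arithmetic behind the credit rule concrete, here is a minimal sketch in Python. It assumes the yearly credit is proportional to the number of cores you own and that a credit year counts 365 × 24 wall-clock hours; the policy introduction date, the constants, and the helper name ''annual_cpu_hour_credit'' are hypothetical. The authoritative figures come from the script linked on the [[accounting]] page.

<code python>
from datetime import date

# Illustrative constants only — the real values come from the script on the accounting page.
HOURS_PER_YEAR = 365 * 24        # assumption: a credit year counts full wall-clock hours
BASE_OWNERSHIP_YEARS = 5         # the 5-year hardware ownership period mentioned above
EXTENSION_YEARS = 2              # extension when the nodes predate the policy
POLICY_START = date(2025, 1, 1)  # assumed introduction date of the policy

def annual_cpu_hour_credit(cores_owned: int, purchase: date, today: date) -> int:
    """Yearly CPU hours credit for a group owning `cores_owned` cores (hypothetical helper)."""
    years = BASE_OWNERSHIP_YEARS
    if purchase < POLICY_START:
        years += EXTENSION_YEARS          # production duration extended by two years
    end_of_production = purchase.replace(year=purchase.year + years)
    if today >= end_of_production:
        return 0                          # hardware is outside its accounting period
    return cores_owned * HOURS_PER_YEAR

# Example: a group owning two 64-core nodes purchased in June 2022
print(annual_cpu_hour_credit(2 * 64, date(2022, 6, 1), date(2025, 12, 16)))
</code>

Under this reading, a group whose nodes were bought before the policy was introduced keeps receiving the credit for 5 + 2 = 7 years after the purchase date, and the credit can be spent on Baobab, Yggdrasil or Bamboo alike.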
==== Price per hour ====
<WRAP center round important 60%>
Line 200: Line 208:
We usually order and install the nodes twice per year.
- If you want to ask a financial contribution from UNIGE you must complete a COINF application : https://
+ If you want to ask for a financial contribution from UNIGE, you must complete a COINF application: [[https://|COINF]].
====== Use Baobab for teaching ======

Line 240: Line 248:
Both clusters contain a mix of "
- general
+ general
+ |COINF]]
request compute resources on any node (public and private), but a research group who owns "private" nodes has
a higher priority on its "private" nodes.
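
As a rough illustration of the paragraph above, the sketch below targets the group's private partition when it is visible and falls back to a public one otherwise. The partition names ''private-mygroup-cpu'' and ''public-cpu'' are placeholders, not the real partition names on the clusters; the only Slurm commands used are ''sinfo'' and ''sbatch''.

<code python>
import subprocess

# Placeholder partition names — check `sinfo` on the cluster for the real ones.
PRIVATE_PARTITION = "private-mygroup-cpu"  # assumed name of the group's private partition
PUBLIC_PARTITION = "public-cpu"            # assumed name of a public partition

def visible_partitions() -> set[str]:
    """Partition names visible to the current user, as reported by sinfo."""
    out = subprocess.run(["sinfo", "-h", "-o", "%P"],
                         capture_output=True, text=True, check=True)
    # sinfo flags the default partition with a trailing '*'
    return {line.strip().rstrip("*") for line in out.stdout.splitlines() if line.strip()}

def submit(script: str) -> None:
    """Submit a job, preferring the partition where the group has a higher priority."""
    partitions = visible_partitions()
    target = PRIVATE_PARTITION if PRIVATE_PARTITION in partitions else PUBLIC_PARTITION
    subprocess.run(["sbatch", "--partition", target, script], check=True)

submit("job.sh")
</code>

As the paragraph above notes, everybody can request resources on any node; owning the nodes only grants a higher scheduling priority on them.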
Line 321: Line 330:
^ Generation ^ Model ^ Freq ^ Nb cores ^ Architecture ^
- | V2 | X5650 | 2.67GHz | 12 cores | "
- | V3 | E5-2660V0
- | V3 | E5-2660V0
- | V3 | E5-2660V0
- | V3 | E5-2670V0
- | V3 | E5-4640V0
- | V4 | E5-2650V2
| V5 | E5-2643V3
| V6 | E5-2630V4
Line 370: Line 372:
| Titan X | Pascal
| RTX 2080 Ti | Turing
- | RTX 2080 Ti | Turing
+ | RTX 2080 Ti | Turing
| RTX 2080 Ti | Turing
| RTX 2080 Ti | Turing