===== Resource accounting uniformization =====

We apply uniform resource accounting by converting GPU hours and memory usage into CPU-hour equivalents, using the [[https://slurm.schedmd.com/tres.html|TRESBillingWeights]] feature provided by SLURM.
A CPU hour represents one hour of processing time on a single CPU core.

We use this model because our cluster is heterogeneous, and both the computational power and the cost of GPUs vary significantly depending on the model. To ensure fairness and transparency, each GPU type is assigned a weight that reflects its relative performance compared to a CPU core. Similarly, memory usage is converted into CPU-hour equivalents based on predefined weights.

We also bill memory usage because some jobs consume very little CPU but require large amounts of memory, which can leave an entire compute node occupied. This ensures that jobs using significant memory resources are accounted for fairly.

Example: a job using a GPU with a weight of 10 for 2 hours, plus memory equivalent to 5 CPU hours, would be billed as 25 CPU hours. This approach guarantees consistent, transparent, and fair resource accounting across all heterogeneous components of the cluster.
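
To make the conversion concrete, here is a minimal sketch of the arithmetic (the function ''billed_cpu_hours'' and all weight values are made up for illustration, not taken from the actual conversion table):

<code python>
# A rough sketch of the billing arithmetic described above. The function name
# and all weights/values are illustrative, NOT the real conversion table.

def billed_cpu_hours(cpu_cores, gpus, gpu_weight, mem_gb, mem_weight_per_gb, hours):
    """CPU-hour equivalents: each resource is weighted, then scaled by wall time."""
    rate = cpu_cores + gpus * gpu_weight + mem_gb * mem_weight_per_gb
    return rate * hours

# The example from the text: 1 GPU with weight 10 for 2 hours (= 20 CPU hours),
# plus memory equivalent to 5 CPU hours (here: 10 GB at a made-up 0.25/GB
# weight for 2 hours = 5), gives 25 CPU hours billed.
print(billed_cpu_hours(cpu_cores=0, gpus=1, gpu_weight=10,
                       mem_gb=10, mem_weight_per_gb=0.25, hours=2))  # 25.0
</code>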

You can check the up-to-date conversion details by inspecting the parameters of any partition on the clusters; the same conversion table is applied everywhere.

<code>
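# A minimal sketch: "shared-cpu" is a placeholder partition name, and the
# weights shown below are illustrative, not the actual conversion table.
(baobab)-[user@login1 ~]$ scontrol show partition shared-cpu | grep TRESBillingWeights
   TRESBillingWeights=CPU=1.0,Mem=0.25G,GRES/gpu=...
</code>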
===== Resources available for research group =====

Research groups that have invested in the HPC cluster by purchasing private CPU or GPU nodes benefit from **high-priority access** to these resources.

Although these nodes remain available to all users, owners receive **priority scheduling** and a predefined annual allocation of compute hours, referred to as [[accounting#resource_accounting_uniformization|billings]].
The advantage of this approach is flexibility: you are free to use any resource on any cluster, rather than being restricted to your own nodes. When doing so, your billings will be consumed.

To view the details of owned resources, users can run the script ''ug_getNodeCharacteristicsSummary.py'', which provides a summary of the node characteristics within the cluster.

**Note:** This model ensures **fairness** across all users. Even if some groups own nodes, resources remain shared. Usage beyond the included billings will be **charged according to the standard accounting model**, ensuring equitable access for everyone.

Example output of the script:
<code>
ug_getNodeCharacteristicsSummary.py --partitions private-<group>-gpu private-<group>-cpu --cluster <cluster> --summary
host    sn           cpu    mem    gpunumber    gpudeleted    gpumodel                      gpumemory    purchasedate    months remaining in prod. (Jan 2025)    billing
------  -----------  -----  -----  -----------  ------------  --------------------------  -----------  --------------  --------------------------------------  ---------
cpu084  N-20.02.151  36     187    0            0                                          0            2020-02-01      1                                       79
[...]
cpu088  N-20.02.155  36     187    0            0                                          0            2020-02-01      1                                       79
[...]
cpu226  N-19.01.161  20     94     0            0                                          0            2019-01-01      0                                       41
[...]
cpu229  N-19.01.164  20     94     0            0                                          0            2019-01-01      0                                       41
cpu277  N-20.11.131  128    503    0            0                                          0            2020-11-01      10                                      251
</code>

OpenXDMoD is integrated into our SI. When you connect to it, you get the "user" profile and the data are filtered to your user by default. If you are a PI, you can ask us to change your profile to PI.

<note important>OpenXDMoD currently supports only CPUh and GPUh metrics, not the [[accounting#resource_accounting_uniformization|billing]] metrics (yet?). For this reason, you need to use [[accounting#report_and_statistics_with_sreport|sreport or our script]] if you want to view the billed metrics.</note>
==== sacct ====
You can see your job history using ''sacct'':
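
A minimal sketch (the start date and format fields are just one possible selection; when TRESBillingWeights is configured, the billed amount appears inside ''AllocTRES'' as ''billing=N''):

<code>
(baobab)-[user@login1 ~]$ sacct --start=2025-01-01 --format=JobID,JobName,Elapsed,State,AllocTRES%60
</code>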
Usage example showing the resource usage since the beginning of 2025 for all the PIs and associated users of the group private_xxx, which owns several compute nodes:
<code>
(baobab)-[sagon@login1 ~]$ ug_slurm_usage_per_user.py --group private_xxx --start=2025-01-01 --report_type=account
--------------------------------------------------------------------------------

Cluster/Account/User Utilization 2025-01-01T00:00:00 - 2025-08-21T14:59:59 (20095200 secs)

Usage reported in TRES Hours

--------------------------------------------------------------------------------

Cluster    Login    Proper Name    Account    TRES Name      Used
---------  -------  -------------  ---------  -----------  -------
baobab                             PI1        billing        56134
yggdrasil                          PI1        billing       105817
bamboo                             PI2        billing         5416
baobab                             PI2        billing      1517001
yggdrasil                          PI2        billing        23775
bamboo                             PI3        billing            0
baobab                             PI3        billing      1687963
yggdrasil                          PI3        billing      1344599
[...]
Total usage: 7.36M
</code>

=== sreport examples ===