hpc:accounting (current revision: 2025/12/10 07:41, Yann Sagon)
{{METATOC 1-8}}
====== Utilization and accounting ======
When you submit jobs, they use physical resources such as CPUs, memory, network, GPUs, energy, etc. We keep track of the usage of some of those resources. This page explains how to consult your resource usage with the tools available: ''sacct'', ''sreport'', and OpenXDMoD.
===== Resource accounting uniformization =====
  
We apply uniform resource accounting by converting GPU hours and memory usage into CPU-hour equivalents, using the [[https://slurm.schedmd.com/tres.html|TRESBillingWeights]] feature provided by SLURM.
A CPU hour represents one hour of processing time on a single CPU core.

We use this model because our cluster is heterogeneous, and both the computational power and the cost of GPUs vary significantly depending on the model. To ensure fairness and transparency, each GPU type is assigned a weight that reflects its relative performance compared to a CPU core. Similarly, memory usage is converted into CPU-hour equivalents based on predefined weights.

We also **account for memory usage** because some jobs consume very little CPU but require large amounts of memory, which can occupy an entire compute node. This ensures that jobs using significant memory resources are accounted for fairly.
==== Conversion Rules extract (see below for details) ====
  * **1 CPU core = 1 CPUh per hour**
  * **1 GB RAM = 0.25 CPUh per hour**
  * **1 GPU A100 (40 GB) = 5 CPUh per hour**

==== Example Calculation ====
Suppose you request:
  * **2 CPUs**
  * **20 GB RAM**
  * **1 GPU A100**

The cost per hour is calculated as:
  * CPU: 2 × 1 CPUh = **2 CPUh**
  * RAM: 20 GB × 0.25 CPUh = **5 CPUh**
  * GPU: 1 × 5 CPUh = **5 CPUh**

**Total per hour = 2 + 5 + 5 = 12 CPUh**

This approach guarantees consistent, transparent, and fair resource accounting across all heterogeneous components of the cluster.
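The calculation above can be sketched in a few lines of Python. This is a minimal illustration using the weight values from the extract above; the authoritative values are those defined by each partition's TRESBillingWeights.

```python
# Billing weights from the conversion-rules extract above (illustrative only;
# the authoritative values come from each partition's TRESBillingWeights).
WEIGHTS = {"cpu": 1.0, "mem_gb": 0.25, "gpu_a100": 5.0}

def billing_per_hour(cpus: int, mem_gb: float, gpus_a100: int = 0) -> float:
    """Return the CPU-hour equivalents (billing units) charged per elapsed hour."""
    return (cpus * WEIGHTS["cpu"]
            + mem_gb * WEIGHTS["mem_gb"]
            + gpus_a100 * WEIGHTS["gpu_a100"])

# The worked example from this page: 2 CPUs, 20 GB RAM, 1 A100 GPU.
print(billing_per_hour(2, 20, 1))  # → 12.0
```

Multiply the result by the job's elapsed hours to obtain the total billing for the job.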

You can check the up-to-date conversion details by inspecting the parameters of any partition on the clusters. The same conversion table is applied on all our clusters and partitions.
  
<code>
[...]
</code>
===== Resources available for research groups =====
  
Research groups that have invested in the HPC cluster by purchasing private CPU or GPU nodes benefit from **high-priority access** to these resources.

Although these nodes remain available to all users, owners receive **priority scheduling** and a predefined annual allocation of compute hours, referred to as [[accounting#resource_accounting_uniformization|billings]].
The advantage of this approach is flexibility: you are free to use any resource on any cluster, rather than being restricted to your own nodes. When doing so, your billings are consumed.

To view details of owned resources, users can run the script ''ug_getNodeCharacteristicsSummary.py'', which provides a summary of the node characteristics within the cluster.

**Note:** This model ensures **fairness** across all users. Even if some groups own nodes, resources remain shared. Usage beyond the included billings will be **charged according to the standard accounting model**, ensuring equitable access for everyone.

Output example of the script:
<code>
ug_getNodeCharacteristicsSummary.py --partitions private-<group>-gpu private-<group>-cpu --cluster <cluster> --summary
------  -----------  -----  -----  -----------  ------------  --------------------------  -----------  --------------  --------------------------------------  ---------
cpu084  N-20.02.151     36    187            0                                                    0  2020-02-01                                                   79
[...]
cpu088  N-20.02.155     36    187            0                                                    0  2020-02-01                                                   79
[...]
cpu226  N-19.01.161     20     94            0                                                    0  2019-01-01                                                   41
[...]
cpu229  N-19.01.164     20     94            0                                                    0  2019-01-01                                                   41
cpu277  N-20.11.131    128    503            0                                                    0  2020-11-01                                          10        251
</code>
  
<code>
(baobab)-[sagon@login1] $ ug_slurm_usage_per_user.py --help
usage: ug_slurm_usage_per_user.py [-h] [--user USER] [--start START] [--end END] [--pi PI] [--group GROUP] [--cluster {baobab,yggdrasil,bamboo}] [--all-users] [--aggregate] [--report-type {user,account}]
                                  [--time-format {Hours,Minutes,Seconds}] [--verbose]
  
Retrieve HPC utilization statistics for a user or group of users.
  --cluster {baobab,yggdrasil,bamboo}
                        Cluster name (default: all clusters).
  --all-users           Include all users under the PI account.
  --aggregate           Aggregate the usage per user.
  --report-type {user,account}
                        Type of report: user (default) or account.
  --time-format {Hours,Minutes,Seconds}
                        Time format: Hours (default), Minutes, or Seconds.
  --verbose             Verbose output.
</code>
  
By default, when you run this script, it prints your past usage for the current month, for all the accounts you are a member of.
=== Usage details of a given PI ===
<code>
(baobab)-[sagon@login1] $ ug_slurm_usage_per_user.py --pi **** --report-type account --start 2025-01-01
--------------------------------------------------------------------------------

Cluster/Account/User Utilization 2025-01-01T00:00:00 - 2025-12-08T13:59:59 (29512800 secs)

Usage reported in TRES Hours

--------------------------------------------------------------------------------

Cluster    Login    Proper Name    Account    TRES Name      Used
---------  -------  -------------  ---------  -----------  ------
bamboo                             krusek     billing      176681
baobab                             krusek     billing      961209
yggdrasil                          krusek     billing           0
Total usage: 1.14M
</code>
  
=== Usage details of all PIs associated with a private group ===
  
Usage example to see the resource usage from the beginning of 2025 for all the PIs and associated users of the group private_xxx. The group private_xxx owns several compute nodes:
<code>
(baobab)-[sagon@login1 ~]$ ug_slurm_usage_per_user.py --group private_xxx --start=2025-01-01 --report-type=account
--------------------------------------------------------------------------------
  
[...]
Total usage: 7.36M
</code>

=== Aggregate usage by all users of a given PI ===
<code>
$ ug_slurm_usage_per_user.py --pi ***** --report-type account --start 2025-01-01 --all-users --aggregate
--------------------------------------------------------------------------------

Cluster/Account/User Utilization 2025-01-01T00:00:00 - 2025-12-08T13:59:59 (29512800 secs)

Usage reported in TRES Hours

--------------------------------------------------------------------------------

Login       Used
--------  ------
a***u    547746
d***i    272634
d***on    91178
d***l     86860
e***j     60649
v***d0    37962
w***r     29886
s***o      9120
k***k      1853
m***l         1
Total usage: 1.14M
</code>

=== sreport examples ===

<note important>By default, the TRES (trackable resource) shown by ''sreport'' is CPUh. If you want to see what will be accounted and billed, you need to use the TRES "billing".</note>
  
Here are some examples that can give you a starting point:
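For instance, the following command reports account utilization in billing TRES hours rather than the default CPU hours (''my_account'' is a placeholder; replace it with your own account name):

```shell
# Per-user utilization of an account in billing TRES, reported in hours,
# for the whole of 2025 ("my_account" is a hypothetical account name).
sreport -T billing -t hours cluster AccountUtilizationByUser \
    account=my_account start=2025-01-01 end=2026-01-01
```

The ''-T billing'' option selects the billing TRES, which reflects the CPU-hour equivalents described in the resource accounting uniformization section above.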