This is the storage space we offer on our clusters:
  
^ Cluster   ^ Path                     ^ Total storage size  ^ Disk Type   ^ Backup     ^ Quota size              ^ Quota number of files ^
| Baobab    | ''/home/''               | 138 TB              | HDD         | Yes (tape) | 1 TB                    | -                     |
| :::       | ''/srv/beegfs/scratch/'' | 1.0 PB              | HDD         | No         | -                       | 10 M                  |
| :::       | ''/srv/fast''            | 5 TB                | SSD         | No         | 500 GB/user, 1 TB/group | -                     |
| Yggdrasil | ''/home/''               | 495 TB              | HDD         | Yes (tape) | 1 TB                    | -                     |
| :::       | ''/srv/beegfs/scratch/'' | 1.2 PB              | HDD         | No         | -                       | 10 M                  |
| Bamboo    | ''/home/''               | 378 TB              | SSD         | Yes (tape) | 1 TB                    | -                     |
| :::       | ''/srv/beegfs/scratch/'' | 1.1 PB              | HDD         | No         | -                       | 10 M                  |
  
We realize you all have different needs in terms of storage. To guarantee storage space for all users, we have **set a quota on the home and scratch directories**; see the table above for details. Beyond this limit, you will not be able to write to the filesystem. We count on all of you to store only research data on the clusters. We also count on you **to periodically delete old or unneeded files** and to **clean up everything when you leave UNIGE**. Please keep reading to understand when you should use each type of storage.
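If you want to know how close you are to the limit, you can query your current usage. The exact command depends on the filesystem; as a minimal sketch for the BeeGFS scratch space (assuming the BeeGFS client tools are available on the login node), something like the following reports both the space used and the number of files:

<code bash>
# Sketch: show BeeGFS quota usage (space and file count) for your own
# numeric user ID. Assumes beegfs-ctl is installed on the login node;
# ask the HPC team if the command is not available.
beegfs-ctl --getquota --uid $(id -u)
</code>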
===== Temporary shared space =====
If you need to access the same data from more than one job, you can use a space reachable from all your jobs running on the same compute node. When you have no more jobs running on the node, the content of this storage is erased.
<WRAP center round important 60%>
The new path for the shared space is ''/srv/share/users/'' instead of ''/share/users''.
</WRAP>
  
The path is the following: ''/srv/share/users/${SLURM_JOB_USER:0:1}/${SLURM_JOB_USER}''
  
See here for a usage example: https://hpc-community.unige.ch/t/local-share-directory-beetween-jobs-on-compute/2893
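As a complementary minimal sketch (the job name and file name below are illustrative, not part of the cluster setup), a batch job could place intermediate data in this space so that your other jobs on the same compute node can read it:

<code bash>
#!/bin/bash
#SBATCH --job-name=share-demo   # illustrative name
#SBATCH --ntasks=1

# Per-user shared directory on the local node: first letter of the
# user name, then the user name (see the path described above).
SHARED_DIR="/srv/share/users/${SLURM_JOB_USER:0:1}/${SLURM_JOB_USER}"
mkdir -p "${SHARED_DIR}"

# Write an intermediate file; other jobs you run on the SAME node can
# read it as long as at least one of your jobs is still running there.
echo "intermediate result from job ${SLURM_JOB_ID}" > "${SHARED_DIR}/result_${SLURM_JOB_ID}.txt"
</code>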