hpc:storage_on_hpc

This is the storage space we offer on our clusters:
  
^ Cluster   ^ Path                     ^ Total storage size  ^ Nb of servers ^ Nb of targets per server ^ Backup     ^ Quota size                      ^ Quota number of files ^
| Baobab    | ''/home/''               | 138 TB              | 4             | 1 meta, 2 storage        | Yes (tape) | 1 TB                            | -                     |
| :::       | ''/srv/beegfs/scratch/'' | 1.0 PB              | 2             | 1 meta, 6 storage        | No         | -                               | 10 M                  |
| :::       | ''/srv/fast''            | 5 TB                | 1             | 1                        | No         | 500 GB per user, 1 TB per group | -                     |
| Yggdrasil | ''/home/''               | 495 TB              | 2             | 1 meta, 2 storage        | Yes (tape) | 1 TB                            | -                     |
| :::       | ''/srv/beegfs/scratch/'' | 1.2 PB              | 2             | 1 meta, 6 storage        | No         | -                               | 10 M                  |
  
We realize you all have different needs in terms of storage. To guarantee storage space for all users, we have **set a quota on the home and scratch directories**; see the table above for details. Beyond this limit, you will not be able to write to the filesystem. We count on all of you to store only research data on the clusters. We also count on you **to periodically delete old or unneeded files** and to **clean up everything when you leave UNIGE**. Please keep reading to understand when you should use each type of storage.
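To check how close you are to these limits, the commands below are a minimal sketch using standard Linux tools; the ''beegfs-ctl'' call assumes the BeeGFS client tools are available on the node you are logged in to.

<code>
# Overall size and usage of the filesystems
df -h $HOME /srv/beegfs/scratch

# Size of your own data in your home directory
du -sh $HOME

# Per-user quota report for the BeeGFS scratch (assumes BeeGFS client tools)
beegfs-ctl --mount=/srv/beegfs/scratch --getquota --uid $(id -u)
</code>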
To sum up the situation: you should clean up some data in your scratch directory.
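As a sketch of what such a cleanup can look like, using standard tools; the per-user path below is only an assumption, adapt it to your actual scratch directory.

<code>
# List files not accessed for more than 90 days (the path is an assumption)
find /srv/beegfs/scratch/users/$USER -type f -atime +90

# Remove a finished project you no longer need
rm -rf /srv/beegfs/scratch/users/$USER/old_project
</code>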
  
===== Fast directory =====

A new fast storage space is available. It is dedicated to jobs that run on multiple nodes and need the equivalent of the local scratch shared between nodes.

^ Cluster   ^ Path              ^ Total storage size  ^ Nb of servers ^ Backup ^ Quota size                      ^ Quota number of files ^
| Baobab    | ''/srv/fast''     | 5 TB                | 1             | No     | 500 GB per user, 1 TB per group | -                     |

<note important>This storage is erased at each maintenance.</note>
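For illustration, here is a minimal sketch of a multi-node job that stages its data on the fast storage and removes it at the end. It assumes the Slurm batch system; the solver name, the input/output file names and the per-user, per-job directory layout under ''/srv/fast'' are hypothetical.

<code>
#!/bin/sh
#SBATCH --job-name=fast-demo
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1

# Working directory on the fast storage, visible from every node of the job
# (the per-user/per-job layout is an assumption)
WORKDIR=/srv/fast/$USER/$SLURM_JOB_ID
mkdir -p "$WORKDIR"

# Stage input data in, run the job on all allocated nodes, stage results out
cp "$HOME"/input.dat "$WORKDIR"/
srun ./my_solver "$WORKDIR"/input.dat
cp "$WORKDIR"/result.dat "$HOME"/

# Clean up: the space is small, shared, and erased at each maintenance
rm -rf "$WORKDIR"
</code>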

==== Quota ====

As this storage is shared by everyone, we set a quota based on its total size. This ensures fair usage and prevents users from filling it up.

You should clean up the data in your fast directory as soon as your jobs have finished.
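To see how much of this space you are using, standard tools are enough; the per-user subdirectory below is only an assumption.

<code>
# Overall usage of the fast storage
df -h /srv/fast

# Size of your own data (the per-user directory layout is an assumption)
du -sh /srv/fast/$USER
</code>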
  
====== Local storage ======
  
  
The content is mounted using autofs under the path ''/cvmfs''. This means that the root directory ''/cvmfs'' may appear empty as long as you
have not explicitly accessed one of its child directories. Doing so mounts the repository for a couple of
minutes and then unmounts it automatically.
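A quick way to observe this autofs behaviour, using the ''software.eessi.io'' repository described further below as an example:

<code>
# The autofs root appears empty until a repository is accessed
ls /cvmfs

# Explicitly accessing a child directory mounts the repository on demand
ls /cvmfs/software.eessi.io

# After a few minutes without access, autofs unmounts it again
mount | grep cvmfs
</code>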
</code>
  
We are using a Squid proxy on app1.baobab to reduce the required file transfers.

The EESSI project wrote a nice tutorial about CVMFS, available in the [[https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices/|multixscale]] git repo.

==== Troubleshoot ====

Two client-side tools are useful here (see the examples below):
  * ''cvmfs_talk'': inspect the runtime configuration of a mounted repository
  * ''cvmfs_config'': inspect the configuration stored on disk
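A few illustrative invocations of these tools, again using the ''software.eessi.io'' repository as an example:

<code>
# Check that a repository can be mounted and reached
cvmfs_config probe software.eessi.io

# Show the configuration stored on disk for a repository
cvmfs_config showconfig software.eessi.io

# Query the running client: which proxy and which server are in use
cvmfs_talk -i software.eessi.io proxy info
cvmfs_talk -i software.eessi.io host info
</code>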

Check whether the proxy is responding:
<code>
nc -vz PROXY_IP 3128
# or
tcptraceroute PROXY_IP 3128
# or
curl --proxy app1:3128 --head URL
</code>
Example:
<code>
[root@admin1 ~]$ curl --proxy app1:3128 --head http://52.210.8.2/cvmfs/software.eessi.io/.cvmfspublished
</code>

See the cache state (the repository must already be mounted):
<code>
(baobab)-[root@cpu002 ~]$ cvmfs_config stat -v software.eessi.io
Version: 2.10.1.0
PID: 527813
Uptime: 0 minutes
Memory Usage: 28744k
File Catalog Revision: 203 (expires in 3 minutes)
File Catalog ID: 38758963141a14961037b7bc7759648f9a95cdf3
No. Active File Catalogs: 1
Cache Usage: 54581k / 30720000k
File Descriptor Usage: 0 / 130560
No. Open Directories: 0
No. IO Errors: 0
Connection: http://aws-eu-central-s1.eessi.science/cvmfs/software.eessi.io through proxy http://192.168.100.13:3128 (online)
Usage: 0 open() calls (hitrate 0.000%), 3 opendir() calls
Transfer Statistics: 10k read, avg. speed: 11k/s
</code>


==== EESSI ====

One of the repositories served by CVMFS is the software compiled by the [[https://www.eessi-hpc.org/|EESSI (easy) initiative]].

The EESSI ("easy") software distribution:
  * supports multiple architectures (ARM, RISC-V, Intel, AMD, NVIDIA, etc.)
  * helps compensate for the lack of local manpower, since software is built once and shared
  * is usable in commercial clouds worldwide
  * is optimized for specific generations of microprocessors (AVX, AVX-512, ARM SVE)
  * is integrated with Lmod
  * selects the right build through CPU detection (archspec)

Usage:

<code>
(baobab)-[sagon@login2 ~]$ source /cvmfs/software.eessi.io/versions/2023.06/init/bash
Found EESSI repo @ /cvmfs/software.eessi.io/versions/2023.06!
archdetect says x86_64/generic
Using x86_64/generic as software subdirectory.
Using /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/generic/modules/all as the directory to be added to MODULEPATH.
Found Lmod configuration file at /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/generic/.lmod/lmodrc.lua
Initializing Lmod...
Prepending /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/generic/modules/all to $MODULEPATH...
Environment set up to use EESSI (2023.06), have fun!
</code>
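Once the environment is initialized, the EESSI stack is used through Lmod like any other module tree. The module name below (GROMACS) is only an illustration; check ''module avail'' for what is actually provided.

<code>
# List the modules provided by the EESSI stack
module avail

# Load one of them and check that it works (GROMACS is just an example)
module load GROMACS
gmx --version
</code>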
  
  