This is the storage space we offer on our clusters:
^ Cluster |
| Baobab |
| ::: | ''/ |
| ::: | ''/ |
| Yggdrasil | ''/ |
| ::: | ''/ |
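To check which filesystem a given directory actually lives on (and how full it is overall), ''df'' can be pointed at the path. ''$HOME'' is used below purely as an example; substitute your scratch directory as needed:

```shell
# Show the filesystem backing a directory and its overall usage.
# $HOME is just an example path; point df at your scratch directory as needed.
df -hP "$HOME"
```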
We realize you all have different needs in terms of storage. To guarantee storage space for all users, we have **set a quota on the home and scratch directories**.
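To see what counts against your quota, ''du'' sums up the space used under a directory tree. The sketch below runs on a throwaway directory so it is safe to try anywhere; on the cluster you would run the same ''du'' pipeline against your home or scratch directory:

```shell
# Sum up disk usage under a directory, largest entries first.
# Demonstrated on a throwaway directory; on the cluster, target $HOME
# or your scratch directory instead.
demo=$(mktemp -d)
dd if=/dev/zero of="$demo/big.dat" bs=1024 count=2048 status=none  # ~2 MiB file
du -sk "$demo"/* | sort -rn | head                                 # sizes in KiB
rm -rf "$demo"
```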
To resolve the situation, you should clean up some data in your scratch directory.
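One way to find cleanup candidates is to list files that have not been accessed for a while, biggest first. ''SCRATCH'' below is a hypothetical variable, not an official path on our clusters; substitute your real scratch directory:

```shell
# List files not accessed for more than 30 days, biggest first.
# SCRATCH is a hypothetical placeholder; use your real scratch path.
SCRATCH="${SCRATCH:-$HOME/scratch}"
find "$SCRATCH" -type f -atime +30 -printf '%s\t%p\n' 2>/dev/null | sort -rn | head

# After reviewing the list, the same match can delete the files:
# find "$SCRATCH" -type f -atime +30 -delete
```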
===== Fast directory =====

A new fast storage is available, dedicated to jobs that use multiple nodes, where ''scratchlocal'' cannot be used because the space needs to be shared between the nodes.

^ Cluster |
| Baobab |

<note important>

==== Quota ====

As the storage is shared by everyone, we set up a quota based on the total size. This ensures fair usage and prevents users from filling it up.

You should clean up the data in your fast directory as soon as your jobs are finished.
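An end-of-job cleanup can be as simple as copying the results you want to keep, then freeing the shared fast storage. The sketch below uses throwaway directories so it can be run anywhere; on the cluster, ''FAST'' would be your job's fast-storage directory and ''DEST'' a location in your home:

```shell
# Sketch of an end-of-job cleanup, shown on throwaway directories.
# On the cluster, FAST would be your fast-storage job directory and
# DEST a directory in your home.
FAST=$(mktemp -d); DEST=$(mktemp -d)
echo "result data" > "$FAST/output.dat"

cp -a "$FAST/." "$DEST/"   # keep the results...
rm -rf "$FAST"             # ...then free the shared fast storage
ls "$DEST"                 # → output.dat
```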
====== Local storage ======
reference: (([[https://
===== CVMFS =====

All the compute nodes of our clusters have the CernVM-FS client installed. CernVM-FS, the CernVM File System (also known as CVMFS), is a file distribution service that is particularly well suited to distributing software installations across a large number of systems worldwide in an efficient way.

A couple of repositories are mounted on the compute and login nodes, such as:
  * atlas.cern.ch
  * grid.cern.ch
The content is mounted using autofs. This means the root directory of a repository may appear empty if you
didn't explicitly access one of its child directories. Doing so will mount the repository for a couple of
minutes and unmount it automatically.
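This on-demand mounting can be observed directly. The repository name below is just one of the mounted repositories; on a machine without the CVMFS client these commands simply print nothing:

```shell
# /cvmfs itself may look empty: nothing is mounted until first access.
ls /cvmfs 2>/dev/null || true

# Accessing a child path triggers autofs to mount that repository...
ls /cvmfs/atlas.cern.ch 2>/dev/null | head -n 3

# ...after which it appears in the mount table (until auto-unmounted).
mount 2>/dev/null | grep cvmfs || true
```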
+ | |||
+ | Other flaghship repository available without further configuration: | ||
+ | |||
  * unpacked.cern.ch
  * singularity.opensciencegrid.org (container registry)
  * software.eessi.io
<code>
cvmfs-config.cern.ch
</code>
+ | |||
+ | The EESSI did a nice tutorial about CVMFS readable on [[https:// | ||
+ | |||
====== Robinhood ======