This is the storage space we offer on our clusters
^ Cluster
| Baobab
| ::: | ''/
| ::: | ''/
| Yggdrasil | ''/
| ::: | ''/
We realize you all have different needs in terms of storage. To guarantee storage space for all users, we have **set a quota on the home and scratch directories**,
To resolve the situation, you should clean up some data in your scratch directory.
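For instance, a generic way to spot what takes up the space (a sketch only; the scratch path below is a placeholder, use your own scratch directory):

<code console>
# list the largest sub-directories of your scratch space (placeholder path)
$ du -h --max-depth=1 /path/to/your/scratch 2>/dev/null | sort -rh | head -20

# list files older than 90 days, biggest first, as candidates for deletion
$ find /path/to/your/scratch -type f -mtime +90 -printf '%s %p\n' | sort -rn | head -20
</code>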
===== Fast directory =====

A new fast storage space is available, dedicated to jobs that run on multiple nodes and need the local scratch to be shared between the nodes.

^ Cluster
| Baobab

<note important>

==== Quota ====

As this storage is shared by everyone, we set up a quota based on the total size; this ensures fair usage and prevents users from filling it up.

You should clean up the data in your fast directory as soon as your jobs are finished.
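As a minimal sketch (the ''FAST_DIR'' and destination paths below are placeholders; the real fast-storage path is the one listed in the table above), a job script can copy the results it needs out of the fast storage and remove the rest as its last step:

<code bash>
#!/bin/bash
#SBATCH --job-name=fast-storage-example
#SBATCH --nodes=2

# placeholder: replace with your directory on the fast storage
FAST_DIR=/path/to/fast/$USER/$SLURM_JOB_ID
mkdir -p "$FAST_DIR"

# ... run the multi-node computation, writing intermediate data to $FAST_DIR ...

# keep only the final results on scratch, then free the fast storage
cp -r "$FAST_DIR/results" /path/to/your/scratch/
rm -rf "$FAST_DIR"
</code>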
====== Local storage ======
</
===== Temporary private space =====
On **each** compute node, you can use the following private ephemeral spaces:
Those spaces are private and only accessible by your job.
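A minimal sketch of how such a private space is typically used, assuming the job exposes it through ''$TMPDIR'' (the program name and scratch paths are placeholders):

<code bash>
#!/bin/bash
#SBATCH --job-name=local-scratch-example

# assumption: $TMPDIR points to one of the private ephemeral spaces of the job
WORKDIR=${TMPDIR:-/tmp}/$SLURM_JOB_ID
mkdir -p "$WORKDIR"

# stage the input close to the compute, run, then copy the results back
cp /path/to/your/scratch/input.dat "$WORKDIR/"
cd "$WORKDIR"
./my_program input.dat > output.dat
cp output.dat /path/to/your/scratch/

# the private space is wiped automatically when the job ends
</code>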
===== Temporary shared space =====
If you need to access the same data from more than one of your jobs, you can use a space reachable from all your jobs running on the same compute node. When you have no more jobs running on the node, the content of the storage is erased.
The path is the following: ''/
See here for a usage example: https://
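As a rough sketch of the idea (''SHARED_DIR'' is a placeholder for the path given above):

<code bash>
# placeholder: replace with the shared path given above
SHARED_DIR=/path/to/shared/$USER
mkdir -p "$SHARED_DIR"

# one of your jobs running on the node writes a file there...
cp result.dat "$SHARED_DIR/"

# ...and another of your jobs on the same node can read it back
cat "$SHARED_DIR/result.dat"

# once you have no more jobs on the node, the content is erased automatically
</code>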
====== Sharing files with other users ======
===== Check disk usage on the clusters =====
==== Check disk usage on home and scratch ====

Since ''/
The script ''beegfs-get-quota-home-scratch.sh'' reports your current usage and quota on your home and scratch directories:
<code console>
(baobab)-[sagon@login2 ~]$ beegfs-get-quota-home-scratch.sh
home dir: /home/sagon
scratch dir: /

user/
storage
----------------------------|------||------------|------------||---------|---------
home | sagon|240477||
scratch
</code>
<WRAP center round tip 60%>
This includes all your data in ''
</WRAP>
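If the reported usage is higher than expected, a generic way to see which directories (hidden ones included) take the most space in your home is:

<code console>
# show the size of every entry in your home directory, hidden entries included
$ du -sh ~/.[!.]* ~/* 2>/dev/null | sort -rh | head -20
</code>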
+ | |||
+ | < | ||
+ | |||
+ | ==== Check disk usage on NASAC ==== | ||
If you have space as well in ''/
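A generic way to check the usage of such a share (the path below is a placeholder for your own NASAC mount point):

<code console>
# total size of your share (placeholder path)
$ du -sh /path/to/your/nasac/share

# free and used space on the underlying mount
$ df -h /path/to/your/nasac/share
</code>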
reference: (([[https://
===== CVMFS =====
All the compute nodes of our clusters have the CernVM-FS client installed. CernVM-FS, the CernVM File System (also known as CVMFS), is a file distribution service that is particularly well suited to distributing software installations across a large number of systems worldwide in an efficient way.

A couple of repositories are mounted on the compute and login nodes, such as:
  * atlas.cern.ch
  * grid.cern.ch
The content is mounted using autofs. It means that the root directory will look empty if you didn't explicitly access one of the child directories. Doing so will mount the repository for a couple of minutes and unmount it automatically.
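For example, a short sketch of the autofs behaviour (output omitted):

<code console>
# the repository may not be listed yet...
$ ls /cvmfs

# ...but accessing it explicitly triggers the mount
$ ls /cvmfs/atlas.cern.ch

# it is unmounted again automatically after a few minutes of inactivity
</code>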
+ | |||
+ | Other flaghship repository available without further configuration: | ||
+ | |||
+ | * unpacked.cern.ch | ||
+ | * singularity.opensciencegrid.org (container registry) | ||
+ | * software.eessi.io ( | ||
<
cvmfs-config.cern.ch
</
+ | |||
+ | The EESSI did a nice tutorial about CVMFS readable on [[https:// | ||
+ | |||
====== Robinhood ======