====== Sharing files with other users ======
  
Sometimes you may need to share files with colleagues or members of your research group.
  
We offer two types of shared folders:

  * **In the "home" directory** (''/home/share/''): ideal for sharing scripts and common libraries related to a project.
  * **In the "scratch" directory** (''/srv/beegfs/scratch/shares/''): suitable for sharing larger files, such as datasets.
  
To request a shared folder, please fill out the form at [[https://dw.unige.ch/openentry.html?tid=hpc|DW]]. As part of the request, you'll be asked if you already have a //group// you'd like to use. If this isn't the case, you'll need to create one ([[https://dw.unige.ch/openentry.html?tid=adaccess|link]] on the form).
  
A **group** is a collection of users used to manage shared access to resources. These groups are defined and stored in the **Active Directory** and allow us to control who can access specific folders.
If you need more details about groups, please contact your **CI** (//correspondant informatique//).

If you are an //Outsider// user and do not have access to DW, please ask your **PI** to submit the request on your behalf.
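If you are unsure which groups you already belong to, you can check directly from a login node. A minimal sketch; ''hpc_mygroup'' is a hypothetical group name used only for illustration:

<code console>
# list the groups your account is a member of
[sagon@login1 ~] $ id -Gn

# show the members of a given group and the group owning a shared folder
[sagon@login1 ~] $ getent group hpc_mygroup
[sagon@login1 ~] $ ls -ld /home/share/hpc_mygroup
</code>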
<note important>
You are not allowed to change the permissions of your ''$HOME'' or ''$SCRATCH'' folder on the clusters. Even if you do, our automation scripts will revert your changes.
</note>
  
You can check your current usage and quota on home and scratch with the following script:

<code console>
(baobab)-[sagon@login1 ~]$ beegfs-get-quota-home-scratch.sh
home dir: /home/sagon
scratch dir: /srv/beegfs/scratch/users/s/sagon
...
</code>
===== NASAC =====

To mount external storage such as a NASAC share with ''gio'', first start a DBus session on the node:

<code console>
[sagon@login1 ~] $ dbus-launch bash
</code>
**If you are using sbatch, add a sleep after ''dbus-launch'' to make sure the initialisation is done:**

<code>
dbus-launch bash
sleep 3
gio mount ....
</code>
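In a non-interactive batch script, the DBus session variables must also be available to the script's own shell. A minimal sketch (not taken from the original page) using the ''--sh-syntax'' option of ''dbus-launch''; the job options, share and credentials file are placeholders:

<code>
#!/bin/bash
#SBATCH --job-name=mount_share    # hypothetical job options
#SBATCH --time=00:15:00

# export DBUS_SESSION_BUS_ADDRESS into this shell, then give the daemon time to start
eval $(dbus-launch --sh-syntax)
sleep 3

# non-interactive mount; see the credentials file approach described further down
gio mount smb://nasac-evs2.unige.ch/hpc_exchange/ < ~/.smb_credentials
</code>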
  
Mount the share, SMB in this example:
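The exact server and share depend on your storage; a minimal sketch, reusing the ''nasac-evs2.unige.ch'' server and ''hpc_exchange'' share shown further down this page:

<code console>
# prompts interactively for user, domain and password
[sagon@login1 ~] $ gio mount smb://nasac-evs2.unige.ch/hpc_exchange/

# the mounted share is then typically reachable under the gvfs directory of your session
[sagon@login1 ~] $ ls /run/user/$(id -u)/gvfs/
</code>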
  
<note important>The data are only available where gio has been mounted.
If you need to access the data on other nodes, you need to mount them there as well in your sbatch script.</note>
  
If you need to script this, you can put your credentials in a file in your home directory.
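For example, ''gio mount'' reads its authentication answers (user, domain, password) from standard input, so a small protected file can be redirected into it. A sketch under that assumption; the file name and its contents are placeholders:

<code console>
# create a credentials file readable only by you, one answer per line:
#   s-hpc-share
#   ISIS
#   my_secret_password
[sagon@login1 ~] $ chmod 600 ~/.smb_credentials

[sagon@login1 ~] $ gio mount smb://nasac-evs2.unige.ch/hpc_exchange/ < ~/.smb_credentials
</code>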
  
reference: (([[https://hpc-community.unige.ch/t/howto-access-external-storage-from-baobab/551|How to access external storage from Baobab]]))
=== Sometimes mount is not available, but you can browse/copy/interact with gio commands ===

<code console>
$ dbus-launch bash

$ gio mount smb://nasac-evs2.unige.ch/hpc_exchange/backup
Authentication Required
Enter user and password for share “hpc_exchange” on “nasac-evs2.unige.ch”:
User [rossigng]: s-hpc-share
Domain [SAMBA]: ISIS
Password:

$ gio mount -l
Drive(0): SAMSUNG MZ7L3480HBLT-00A07
  Type: GProxyDrive (GProxyVolumeMonitorUDisks2)
Drive(1): SAMSUNG MZ7L3480HBLT-00A07
  Type: GProxyDrive (GProxyVolumeMonitorUDisks2)
Mount(0): hpc_exchange on nasac-evs2.unige.ch -> smb://nasac-evs2.unige.ch/hpc_exchange/
  Type: GDaemonMount

$ gio list smb://nasac-evs2.unige.ch/hpc_exchange/
backup

$ gio list smb://nasac-evs2.unige.ch/hpc_exchange/backup
toto
titi
tata.txt

$ gio cp smb://nasac-evs2.unige.ch/hpc_exchange/backup/tata.txt /tmp

...
</code>
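When you are done, the same tool can unmount the share with the ''-u'' option (a short sketch using the share above):

<code console>
$ gio mount -u smb://nasac-evs2.unige.ch/hpc_exchange/
</code>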
===== CVMFS =====
All the compute nodes of our clusters have the CernVM-FS client installed. CernVM-FS, the CernVM File System (also known as CVMFS), is a file distribution service that is particularly well suited to distributing software installations across a large number of systems world-wide in an efficient way.
  
EESSI provides a nice tutorial about CVMFS, readable in the [[https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices/|multixscale]] git repo.
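Repositories appear under ''/cvmfs'' and are mounted on demand: simply accessing a path triggers the mount. A minimal sketch, assuming the EESSI software repository is among the repositories configured on the compute nodes:

<code console>
# run this on a compute node, where the CVMFS client is installed;
# the first access automounts the repository, which then behaves like a read-only directory
$ ls /cvmfs/software.eessi.io
</code>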
===== EOS =====
You can mount remote EOS filesystems (accessed via the XRootD protocol) using the EOS FUSE client.

<code console>
(bamboo)-[sagon@login1 ~]$ export EOS_MGM_URL=root://eospublic.cern.ch
(bamboo)-[sagon@login1 ~]$ export EOS_HOME=/eos/opendata
(bamboo)-[sagon@login1 ~]$ eos fuse mount /tmp/sagon/opendata
</code>

<note important>Do not mount the filesystem in your home or scratch space: this does not work because they are not standard filesystems.</note>
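Once mounted, the data can be browsed like a normal read-only directory; a short sketch, assuming the ''eos fuse umount'' subcommand of the same client is available for cleanup:

<code console>
(bamboo)-[sagon@login1 ~]$ ls /tmp/sagon/opendata
(bamboo)-[sagon@login1 ~]$ eos fuse umount /tmp/sagon/opendata
</code>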
====== Robinhood ======
Robinhood Policy Engine is a versatile tool to manage the contents of large file systems. It scans the scratch BeeGFS filesystems daily, and makes it possible to schedule mass actions on filesystem entries by defining attribute-based policies.