====== Sharing files with other users ======
  
Sometimes you may need to share files with colleagues or members of your research group.
  
We offer two types of shared folders:
  
  * **In the "home" directory** (''/home/share/''): ideal for sharing scripts and common libraries related to a project.
  * **In the "scratch" directory** (''/srv/beegfs/scratch/shares/''): suitable for sharing larger files, such as datasets.
  
To request a shared folder, please fill out the form at [[https://dw.unige.ch/openentry.html?tid=hpc|DW]]. As part of the request, you'll be asked if you already have a //group// you'd like to use. If this isn't the case, you'll need to create one ([[https://dw.unige.ch/openentry.html?tid=adaccess|link]] on the form).

A **group** is a collection of users used to manage shared access to resources. These groups are defined and stored in the **Active Directory** and allow us to control who can access specific folders.
If you need more details about groups, please contact your **CI** (//correspondant informatique//).

If you are an //Outsider// user and do not have access to DW, please ask your **PI** to submit the request on your behalf.
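
Once the shared folder has been created, you can check from a login node that you are a member of its Active Directory group and that the folder permissions are as expected. A minimal sketch, with ''hpc_myproject'' and ''/home/share/myproject'' as hypothetical group and folder names:

<code console>
# groups your account belongs to
(baobab)-[sagon@login1 ~]$ id -Gn

# look up the (hypothetical) Active Directory group
(baobab)-[sagon@login1 ~]$ getent group hpc_myproject

# owner, group and permissions of the (hypothetical) shared folder
(baobab)-[sagon@login1 ~]$ ls -ld /home/share/myproject
</code>
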
<note important>
You are not allowed to change the permissions of your ''$HOME'' or ''$SCRATCH'' folder on the clusters. Even if you do, our automation scripts will revert your changes.
</note>
  
<code console>
(baobab)-[sagon@login1 ~]$ beegfs-get-quota-home-scratch.sh
home dir: /home/sagon
scratch dir: /srv/beegfs/scratch/users/s/sagon
...
</code>
===== NASAC =====
  
  
If you need to mount an external share (a NAS, for example) on Baobab from the command line, you can proceed as follows.
  
First, launch a D-Bus session:

<code console>
[sagon@login1 ~] $ dbus-launch bash
</code>

**If you are using sbatch, add a ''sleep'' after ''dbus-launch'' to be sure the initialisation is done:**

<code>
dbus-launch bash
sleep 3
gio mount ....
</code>
  
Mount the share, SMB in this example:
  
<code console>
[sagon@login1 ~] $ gio mount smb://server_name/share_name
</code>
  
  
Unmount the share:

<code console>
[sagon@login1 ~] $ gio mount -u smb://server_name/share_name
</code>
  
<note important>The data are only available where gio has been mounted.
If you need to access the data on other nodes, you need to mount the share there as well in your sbatch script.</note>
  
If you need to script this, you can put your credentials in a file in your home directory.
Mount example using credentials in a script:
<code console>
[sagon@login1 ~] $ gio mount smb://server_name/share_name < .credentials
</code>
  
  
Check that the gvfsd-fuse process is running:

<code console>
[sagon@login1 ~] $ ps ux | grep -e '[g]vfsd-fuse'
sagon    196919  0.0  0.0 387104  3376 ?        Sl   08:49   0:00 /usr/libexec/gvfsd-fuse /home/sagon/.gvfs -f -o big_writes
</code>
  
Check which D-Bus session daemons are running for your user:

<code console>
[sagon@login1 ~] $ pgrep -a -U $(id -u) dbus
196761 /usr/bin/dbus-daemon --fork --print-pid 4 --print-address 6 --session
224317 /usr/bin/dbus-daemon --fork --print-pid 4 --print-address 6 --session
</code>
  
reference: (([[https://hpc-community.unige.ch/t/howto-access-external-storage-from-baobab/551|How to access external storage from Baobab]]))
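
If you need the whole mount/copy/unmount cycle inside a job, the steps above can be combined in an sbatch script. The following is only a sketch, not an official template: the partition, the share name and the ''.credentials'' file are placeholders that you must adapt to your case.

<code>
#!/bin/sh
#SBATCH --job-name=mount_share      # example values, adapt to your needs
#SBATCH --partition=shared-cpu
#SBATCH --time=00:15:00

# run the gio commands inside a private D-Bus session
dbus-launch bash << 'EOF'
sleep 3                                              # wait for the session to initialise
gio mount smb://server_name/share_name < "$HOME/.credentials"
ls "$HOME/.gvfs/"                                    # the mounted share shows up here via gvfsd-fuse
# ... copy or process your data here ...
gio mount -u smb://server_name/share_name
EOF
</code>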

=== Sometimes the mount is not available, but you can still browse/copy/interact with gio commands ===

<code>
$ dbus-launch bash

$ gio mount smb://nasac-evs2.unige.ch/hpc_exchange/backup
Authentication Required
Enter user and password for share “hpc_exchange” on “nasac-evs2.unige.ch”:
User [rossigng]: s-hpc-share
Domain [SAMBA]: ISIS
Password:

$ gio mount -l
Drive(0): SAMSUNG MZ7L3480HBLT-00A07
  Type: GProxyDrive (GProxyVolumeMonitorUDisks2)
Drive(1): SAMSUNG MZ7L3480HBLT-00A07
  Type: GProxyDrive (GProxyVolumeMonitorUDisks2)
Mount(0): hpc_exchange on nasac-evs2.unige.ch -> smb://nasac-evs2.unige.ch/hpc_exchange/
  Type: GDaemonMount

$ gio list smb://nasac-evs2.unige.ch/hpc_exchange/
backup

$ gio list smb://nasac-evs2.unige.ch/hpc_exchange/backup
toto
titi
tata.txt

$ gio copy smb://nasac-evs2.unige.ch/hpc_exchange/backup/tata.txt /tmp

...
</code>
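
The same commands can be scripted, for instance to fetch every file of a remote folder into your scratch space without mounting the share. A minimal sketch, assuming the (hypothetical) remote file names contain no spaces:

<code>
# copy each file of the remote folder into the local scratch space
# (gio copy is not recursive, so directories would need separate handling)
for f in $(gio list smb://nasac-evs2.unige.ch/hpc_exchange/backup); do
    gio copy "smb://nasac-evs2.unige.ch/hpc_exchange/backup/$f" "$SCRATCH/"
done
</code>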

===== CVMFS =====
All the compute nodes of our clusters have the CernVM-FS client installed. CernVM-FS, the CernVM File System (also known as CVMFS), is a file distribution service that is particularly well suited to distribute software installations across a large number of systems worldwide in an efficient way.
  
EESSI wrote a nice tutorial about CVMFS, available in the [[https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices/|multixscale]] git repo.
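
Repositories are mounted on demand under ''/cvmfs'' the first time they are accessed. A minimal sketch, assuming the EESSI repository ''software.eessi.io'' is configured on the cluster:

<code>
# on a compute node: accessing the path triggers the automount of the repository
ls /cvmfs/software.eessi.io
</code>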
===== EOS =====
You can mount EOS filesystems, accessed through the XRootD ''root://'' protocol, as FUSE mounts.

<code>
(bamboo)-[sagon@login1 ~]$ export EOS_MGM_URL=root://eospublic.cern.ch
(bamboo)-[sagon@login1 ~]$ export EOS_HOME=/eos/opendata
(bamboo)-[sagon@login1 ~]$ eos fuse mount /tmp/sagon/opendata
</code>

<note important>Do not mount the filesystem in your home or scratch space: this does not work because they are not standard filesystems.</note>
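
Once mounted, the data can be browsed like a regular directory, and the mount point can be released when you are done (assuming the ''eos fuse umount'' counterpart of the command above is available in the installed EOS client):

<code>
(bamboo)-[sagon@login1 ~]$ ls /tmp/sagon/opendata
(bamboo)-[sagon@login1 ~]$ eos fuse umount /tmp/sagon/opendata
</code>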
====== Robinhood ======
Robinhood Policy Engine is a versatile tool to manage the contents of large file systems. It scans the scratch BeeGFS filesystems daily. It makes it possible to schedule mass actions on filesystem entries by defining attribute-based policies.