{{METATOC 1-5}}

====== Introduction ======
This page gives best practices and tips on how to use the clusters **Baobab** and **Yggdrasil**.

An HPC cluster is an advanced, complex and always-evolving piece of technology. It's easy to forget details and make mistakes when using one, so don't hesitate to check this section every now and then, yes, even if you are the local HPC guru in your team! There's always something new to learn.
====== First steps ======
For your first steps we recommend the following:
  * Check the [[hpc:
  * Connect to the login node of the cluster you are planning to use (see the SSH example below): [[hpc:
  * Check the rest of this page for best practices and smart use of the HPC resources.
  * [[hpc:
  * Understand how to load your libraries/applications: [[hpc:applications_and_libraries|Applications and libraries]]
  * Learn how to write a Slurm ''sbatch'' script: [[hpc:slurm|Slurm and job management]]
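For example, an SSH connection to Baobab looks like this (a minimal sketch; ''toto'' is a placeholder username, and the hostname should be checked against the access documentation linked above):
<code>
# connect to the Baobab login node
ssh toto@login2.baobab.hpc.unige.ch
</code>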
====== Rules and etiquette ======
  * Loading applications and libraries should always be done using ''module''.
    * Pro tip: you can even force the version to have consistent results or to survive an OS migration, for example.
  * When an application is not available through ''module'':
    * You can request the HPC team to install new software or a new version of a library in order to load it through ''module''.
    * You **cannot** install new software with ''sudo''.
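For illustration, a typical ''module'' session looks like this (a minimal sketch; the module name and version are illustrative, use ''module avail'' to see what actually exists on the cluster):
<code>
# list the modules available on the cluster
module avail

# load a specific version for reproducible results
module load GCC/10.2.0

# show what is currently loaded
module list

# unload everything before switching to another toolchain
module purge
</code>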
The same goes for the storage. As you [[hpc/
But even with scientific data, if you are storing thousands of files and using hundreds of GB that you don't really need, at some point we will have to buy more storage. The storage servers are no different from compute nodes: they also need electricity to run and AC to be cooled down. So deleting files you no longer need makes a real difference.

Besides the quantity of data, remember it is important //where// you store your data. For instance, we back up the content of your ''home'' directory.
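To see what is taking up space before deciding what to delete, something like this helps (a minimal sketch; the project path is illustrative):
<code>
# size of each top-level directory in your home, biggest last
du -h --max-depth=1 $HOME | sort -h

# find files larger than 1 GB in an illustrative project directory
find $HOME/my_projects -type f -size +1G -exec ls -lh {} \;
</code>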
  * CPUs, which are grouped in [[hpc/
  * [[hpc/hpc_clusters#compute_nodes|GPGPUs]], which are accelerators for software that supports them
  * memory (RAM) per core or per node, 3GB by default
  * disk space
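For example, such resources are requested from Slurm like this (a minimal sketch; the partition name and sizes are illustrative, check the documentation of your cluster):
<code>
# interactive job with 4 CPUs, 8 GB of RAM and one GPU
srun --partition=shared-gpu --cpus-per-task=4 --mem=8G --gres=gpu:1 --pty bash
</code>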
===== Single thread vs multi thread vs distributed jobs =====
See [[hpc:slurm#
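As a quick reminder, this is how the usual job categories map to ''sbatch'' resource requests (a minimal sketch; the numbers are illustrative, see the Slurm page for details):
<code>
# single threaded (e.g. plain Python or R): one task, one CPU
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1

# multi threaded (e.g. Matlab): one task using several CPUs on one node
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8

# distributed (e.g. MPI software): several tasks, possibly on several nodes
#SBATCH --ntasks=16
</code>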
===== Bad CPU usage =====
Let's take an example of a **single threaded job**. You should clearly use a partition which allows you to request a single CPU, such as ''
{{ :
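Once a job has finished, you can verify that the CPUs you requested were actually used (a minimal sketch; replace the job id with your own):
<code>
# efficiency summary of a finished job
seff 12345678

# more detail from the accounting database
sacct -j 12345678 --format=JobID,AllocCPUS,TotalCPU,Elapsed,MaxRSS
</code>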
    * This will help you choose the parameters ''<
  * [[hpc/
    * This will help you choose the parameters ''
  * How much memory does my job need?
    * This will help you choose the parameters ''<
  * Do I want to receive email notifications?
    * This is optional, but you can specify the level of detail you want with the ''<
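Putting the answers together, a complete submission script could look like this (a minimal sketch; the partition, times and sizes are illustrative, adapt them to your own job):
<code>
#!/bin/bash
#SBATCH --job-name=my_job            # a name to find the job in the queue
#SBATCH --partition=shared-cpu       # illustrative partition name
#SBATCH --ntasks=1                   # single threaded job: one task...
#SBATCH --cpus-per-task=1            # ...and one CPU
#SBATCH --time=01:00:00              # how long the job may run
#SBATCH --mem-per-cpu=3000           # memory per CPU, in MB
#SBATCH --mail-type=END,FAIL         # optional email notifications
#SBATCH --output=slurm-%j.out        # where stdout goes (%j = job id)

# load the software through module (illustrative module name)
module load GCC/10.2.0

srun ./my_program
</code>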
+ | |||
+ | ====== Transfer data from cluster to another with ====== | ||
+ | ===== Rsync ===== | ||
This guide assumes you want to transfer the directory ''the_best_project_ever'' from Baobab to Yggdrasil.
__**Rsync options:**__
  * ''-a'' : archive mode; preserves permissions, ownership, timestamps and symlinks
  * ''-v'' : verbose output
  * ''-i'' : itemize the changes made to each file
  * ''-u'' : update; skip files that are newer on the destination
  * ''-z'' : compress data during the transfer
  * ''-P'' : keep partially transferred files and show progress (same as ''--partial --progress'')
  * ''-r'' : recurse into directories (already implied by ''-a'')
  * ''-g'' : preserve group (already implied by ''-a'')
+ | |||
+ | 1) Go to your directory containing ''< | ||
+ | < | ||
+ | (baobab)-[toto@login2 ~]$cd $HOME/ | ||
+ | </ | ||
+ | |||
+ | 2) Set the variables (or not) | ||
+ | < | ||
+ | (baobab)-[toto@login2 my_projects]$ DST=$HOME/ | ||
+ | (baobab)-[toto@login2 my_projects]$ DIR=the_best_project_ever | ||
+ | (baobab)-[toto@login2 my_projects]$ YGGDRASIL=login1.yggdrasil | ||
+ | </ | ||
3) Run the rsync:
<code>
(baobab)-[toto@login2 my_projects]$ rsync -aviuzPrg ${DIR} ${YGGDRASIL}:${DST}
</code>