The /scratch is a temporary workspace where you can copy your input data and write your calculation outputs.
The pros of working in /scratch are the following: more space than in /home (many /scratch workspaces are available).
Scratch spaces are shared, common storage: their performance depends on your proper use. Hence, they must not contain data that should live in /home (⇒ /home).
Two types of /scratch are available:
As we have various clusters of different processor generations (Lake, the newest; E5 and X5, the older ones), we have several global scratches.
The /scratch/Lake/, common to the CLG* and SLG* queues, is available on all Lake cluster nodes. It contains /scratch/Lake/lake-gfs-scratch as an empty check file. It is accessible from the Lake cluster front machines, where /scratch/Lake is available:
/scratch/
├── Lake/          (common to Lake cluster)
├── disk/          (local to some servers)
├── Chimie/        (reserved to chimie usage)
├── Bio/           (reserved to biologie usage)
...
└── Project_name/  (reserved to some servers, with dedicated hardware)
The /scratch/E5N/, common to the E5-* queues, is available on all E5 cluster nodes. It contains /scratch/E5N/E5N-gfs-scratch as an empty check file. It is accessible from the E5 cluster front machines:
/scratch/
├── E5N/           (E5 scratch, common to E5 cluster)
├── nvme/          (local to some servers)
├── ssd/           (local to some servers)
...
└── Project_name/  (local to some servers, with dedicated hardware)
The /scratch/X5/, common to the X5-* queues, is available on all x55/x56 cluster nodes. It contains /scratch/X5/x5570-gfs-scratch as an empty check file. It is accessible from the x55/x56 cluster front machines:
/scratch/
└── X5/            (X5 scratch, common to X5 cluster)
Some local /scratch are available only on some nodes of the cluster. See the more complete description or ask the staff.
For a global /scratch, before submitting your job, copy your inputs into a folder on the /scratch/ (i.e. /scratch/<cluster>/<login>/whatever/). The simplest way is to reproduce the same hierarchy of folders and files, replacing /home with /scratch/<cluster>. Do not forget to modify your submission files accordingly.
At the end of the run, retrieve your results manually from the /scratch, from any of the corresponding front nodes.
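As a minimal sketch of the steps above (the cluster name Lake, the user jdoe and the folder my_job are placeholders, not real accounts or paths; adapt them to your case):

```shell
# Hypothetical sketch: derive the /scratch twin of a /home folder by
# replacing the /home prefix with /scratch/<cluster>.
# CLUSTER, jdoe and my_job are placeholders -- adapt them to your case.
CLUSTER=Lake
SRC=/home/jdoe/my_job                  # inputs prepared under /home
DEST="/scratch/$CLUSTER${SRC#/home}"   # same hierarchy under the scratch
echo "$DEST"                           # -> /scratch/Lake/jdoe/my_job

# Before submitting (from a front node where this scratch is mounted):
#   mkdir -p "$DEST" && rsync -a "$SRC/" "$DEST/"
# At the end of the run, retrieve your results the other way:
#   rsync -a "$DEST/" "$SRC/"
```

The copy commands are left commented out because the real paths only exist on the cluster; the path mapping itself is what your submission files must reflect.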
For a local /scratch, you do not know in advance which node will be assigned to your job (this also holds for the global scratch, but is less of a problem there). Thus, you cannot copy your data to a particular node beforehand: you must integrate the /scratch ↔ /home copies into the submitted script itself.
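A minimal sketch of such a submitted script, under loud assumptions: SCRATCH and DATA are placeholders that default to temporary directories so the sketch runs anywhere, and a `tr` call stands in for the real computation. On the cluster you would set SCRATCH to the local scratch mount and DATA to your /home job folder:

```shell
#!/bin/bash
set -e
# Hypothetical job-script sketch: stage in, compute, stage out, clean up.
# SCRATCH and DATA are placeholders; they default to temp dirs so the
# sketch runs anywhere. USER may be unset in batch contexts, hence ${USER:-me}.
SCRATCH="${SCRATCH:-$(mktemp -d)}"          # stands in for a local /scratch
DATA="${DATA:-$(mktemp -d)}"                # stands in for /home/<login>/my_job
echo "input" > "$DATA/input.txt"            # stands in for your real inputs

WORKDIR="$SCRATCH/${USER:-me}/job_$$"       # one folder per job
mkdir -p "$WORKDIR"
cp "$DATA/input.txt" "$WORKDIR/"            # stage in: /home -> local /scratch
( cd "$WORKDIR" && tr a-z A-Z < input.txt > result.txt )  # placeholder solver
cp "$WORKDIR/result.txt" "$DATA/"           # stage out: /scratch -> /home
rm -rf "$WORKDIR"                           # leave the local scratch clean
echo "result: $(cat "$DATA/result.txt")"
```

The key point is that both copies run inside the script, on whatever node the scheduler picks.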
Examples are available here.
We don't have a sample script that cleans up automatically. If you don't feel comfortable with these scripts, do not hesitate to come and talk about them during a Tuesday coffee session.
The global /scratch spaces (GlusterFS) were deployed thanks to the upstream work, recipes and integrations made on the CBP experimental platform.