Disk Quota and Network-Mounted File Systems (NFS)
The Tier 1 Facility provides several file systems for its users. These file
systems are configured in different ways for different purposes.
The information below covers their purpose and configuration differences,
quota and usage statistics, how to check your quota, and the options
available to you if you run out of quota.
What is the maximum quota?
The LPC coordinators have decided to set an individual user quota of 2 TB per user in EOS /store/user, 100GB in NFS /uscms_data, and 2GB in the /uscms home area.
- Follow the linked instructions for an EOS quota increase.
- For an NFS quota increase, create a ServiceNow Request to "Scientific Computing: Distributed Computing", giving your existing quota, the requested increase, the reason the increase is needed for your analysis, and an end date.
NFS Disk Space
/uscms
Purpose: Home areas, small quota (2GB), backed up to tape.
Backup snapshots for /uscms are taken daily starting at 18:00 and kept for 4 days.
Note that as of May 4, 2017, the home directory path is
/uscms/homes/u/username, where u is the first letter of your
username. There is a soft link to the previous path in place.
Nobackup Data Areas: /uscms_data/d1
There are several data areas located on the NFS disk. The /uscms_data/d1 area is simply a collection of symbolic links that point to a user's actual data area.
It is best for users to use the ~username/nobackup path to access their data, in case the actual data area needs to be moved to a different file system for space reasons.
Individual user quotas are set to a default of 100GB in the data areas; the LPC collaborative group default quota is 1TB. There are NO BACKUPS for this area.
/uscms_data/d2 - No tape backups
Purpose: Data area with quotas (large for LPC groups), not backed up to tape
/uscms_data/d3 - No tape backups
Purpose: Data area with quotas (large for LPC groups), not backed up to tape
/uscmst1b_scratch/lpc1/3DayLifetime - No tape backups
Purpose: Data area WITHOUT quota for LPC groups and for 3DayLifetime; not backed up in any fashion. Do not store unreproducible work in this area - there are NO BACKUPS. The 3DayLifetime area (/uscmst1b_scratch/lpc1/3DayLifetime) is accessible by ALL users. If you do not have a directory, create one with mkdir /uscmst1b_scratch/lpc1/3DayLifetime/username (substituting your username). As the name suggests, files stored here are automatically removed after 3 days. Since there are no quotas on this file system, we expect users to clean up their directories on a regular basis. If the file system begins to get full, we will send email to users asking them to clean up their areas.
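A minimal sketch of the steps above, assuming the scratch mount is present on your machine (the "username" path segment and the directory guard are illustrative, not from the original page):

```shell
scratch_root=/uscmst1b_scratch/lpc1/3DayLifetime
if [ -d "$scratch_root" ]; then      # run only where the mount exists
  mkdir -p "$scratch_root/username"  # substitute your own username
  # Files older than 2 days will be auto-removed within the next day:
  find "$scratch_root/username" -type f -mtime +2
fi
```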
Checking your quota usage on NFS mounted areas
Users can use the UNIX "quota" command to check their disk/quota usage. The output includes information on all NFS-mounted file systems, even those that may not have quotas enforced.
An example - over quota on ~/nobackup:

[username@cmslpc42 ~]$ quota -s
Disk quotas for user username (uid 55555):
     Filesystem   blocks   quota   limit   grace   files   quota   limit   grace
cms-nas-0.fnal.gov:/uscms
                   1158M       0   2048M           25746k       0       0
cmsnfs-scratch1.fnal.gov:/uscms/data3/
                    105G*    100G    120G  6 days    199k       0       0
- The -s option tells the quota command to use human-readable units for the usage/limit output.
- The most pertinent number fields in this output are the first and third. The first shows how much disk is being used and the third shows what your limit is set to.
- In this example, the * indicates that this user is over quota (second field) on ~/nobackup; however, they will still be able to write files until they reach the limit, within the grace period (fourth field).
- The grace period listed in this example is 6 days, as the user just went over quota: once that time has passed, the user will not be able to write more than 100GB (their quota).
- If, instead, grace lists none, the user will be unable to write over the quota.
- Note that the /uscms area for your home directory has only a hard limit, so there is no warning or grace; once you are over the limit you will not be able to write to that area.
- The first mounted disk listed is your home area. The second is your (soft-linked) nobackup area, which is centrally linked as /uscms_data/d1, mounted as /uscms/data3, and soft linked as ~username/nobackup.
Here is an example of a user who is over quota on their home area (/uscms):

[username@cmslpc42 ~]$ quota -s
Disk quotas for user username (uid 55555):
     Filesystem   blocks   quota   limit   grace   files   quota   limit   grace
cms-nas-0.fnal.gov:/uscms
                   2048M       0   2048M           25960k       0       0
cmsnfs-scratch1.fnal.gov:/uscms/data3/
                  47720M    100G    120G            424k       0       0
Note that the "blocks" used is the same size as the "limit", and there is no *. This is a hard cutoff, and if you exceed it, you will see an error message like this:

[username@cmslpc42 ~/temp]$ cp -pr testFile.root test.root
cp: closing `test.root': Disk quota exceeded
Going over quota on your home directory has unintended consequences, such as not being able to write an ~/.Xauthority file upon login and use X-window forwarding, so you will need to clean up and/or move files to other file systems.
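To decide what to clean up or move, the largest files in your home area can be listed with standard tools (a generic sketch; no site-specific paths assumed):

```shell
# List the ten largest files and directories under your home area,
# largest first, so you know what to delete or move elsewhere.
du -ah "$HOME" 2>/dev/null | sort -rh | head -n 10
```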
What to do if you exceed your quota limit on NFS:
- Remove unneeded files
- Utilize the 3DayLifetime area in /uscmst1b_scratch/lpc1/3DayLifetime/username. As the name implies, any data stored here is automatically removed after 3 days.
- Copy data to EOS (recursive examples)
- If you are a member of one of the LPC Collaborative groups, you can utilize the storage areas allocated to that group on NFS or EOS. These LPC Collaborative group areas are located at: ~lpcgroupname/nobackup (quota limit set; ksu lpcgroupname to the LPC Collaborative group user before writing)
- Note: To request LPC Collaborative group access, fill out the ServiceNow form in Scientific Computing: Add Account to LPC Group. Be sure to include the reason to join.
- This is the link with instructions for EOS quota increase, as well as how to find out your current quota.
- There is no longer a ServiceNow ticket for NFS disk quota increases, and LPC coordination does not approve NFS quota increase requests.
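The "copy data to EOS" option above can be sketched as follows. This is a hedged example: mydir and username are placeholders, a valid grid proxy is assumed, and the recursive flag needs a reasonably recent XRootD client; the command is guarded so nothing runs on machines without xrdcp.

```shell
# Recursively copy a local directory into your EOS user area.
# "mydir" and "username" are placeholders; obtain a grid proxy first.
if command -v xrdcp >/dev/null 2>&1; then
  xrdcp --recursive mydir root://cmseos.fnal.gov//store/user/username/
fi
```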
LPC Collaborative Group
- To check membership in an LPC Collaborative group (access for EOS and/or NFS space in the group), run getent group | grep lpcgroupname at the cmslpc-sl6 command line.
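As a sketch of the output format only (the group name lpcmygroup, the GID, and the usernames below are hypothetical placeholders), getent group lines have the form name:password:GID:member-list, and the member list can be split out in the shell:

```shell
# Hypothetical getent group line; real output has the same colon-separated
# layout: group_name:password:GID:comma-separated-members
line='lpcmygroup:*:5555:alice,bob,carol'
members=${line##*:}           # keep only the final member-list field
echo "$members" | tr ',' '\n' # one username per line
```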
- Group members of lpcgroupname are listed in the comma-separated field at the end of the first line of output.
- To request LPC Collaborative group access (for EOS and/or NFS space usage), fill out the ServiceNow form in Service Request Catalog: Scientific Computing: Add Account to LPC Group. Be sure to include the reason to join; this will be approved by the specific LPC Collaborative Group Convener(s). Once the account is added, it takes ~1-2 hours (during Fermilab business hours) to propagate to all the cmslpc systems.
- If you are a member of one of the LPC Collaborative groups, you can utilize the storage areas allocated to that group on NFS or EOS. These LPC Collaborative group areas are located at: ~lpcgroupname/nobackup (quota limit set; ksu lpcgroupname to the LPC Collaborative group user before writing). Note that for xrdcp transfers you do NOT ksu; do them as yourself.
- Note that the /store/group/lpcgroupname area is a soft link to /store/user/lpcgroupname on EOS T3_US_FNALLPC, as CRAB requires the /store/group path, but the local filesystem puts everything in /store/user.
- File permissions for files made in the group account NFS areas by default allow writing only by LPC Collaborative group members (either as ksu lpcgroupname, or as yourself from Crab3 jobs or xrdcp). By default, all CMS users with a Fermilab account can read the files on NFS, and all EOS files can be read by all CMS users with a valid grid certificate via xrootd.
- LPC Collaborative group areas are created for broad collaboration across institutions. All LPC Collaborative group requests, both for new accounts and existing quota increases have to be approved by LPC coordinators.
- To check a list of existing LPC Collaborative groups, in Fermilab ServiceNow, after authenticating with your Services username and password, use the form:
Membership of LPC Collaborative Group
Add user to LPC Collaborative Group
LPC Collaborative Group usage
Request a new LPC Collaborative Group Account - policies
- Group area name (must now start with "lpc"):
- Requested allocation of space in EOS (give actual and logical)
- Names and institutions of the users, with the number of users (split as US and international)
- FNAL usernames of users
- Who will be the approver(s)/convener(s) of the group?
- Reason for request (has to be collaborative in nature):
EOS disk space
To understand how much space a user or LPC Collaborative group has on the EOS (T3_US_FNALLPC /store/user) filesystem, consult the dedicated EOS Mass Storage page.
NFS disk space and condor batch
The cmslpc condor batch system has worker nodes which, as of October 1, 2017, do not have any of the above NFS disks mounted on them. This page describes examples of modifying cmslpc condor batch scripts to not use the above NFS disks.
AFS mounts
As of Spring 2018, the LPC CAF (cmslpc cluster) no longer mounts /afs directories on interactive or worker nodes. You may still access the /afs filesystem from CERN lxplus, but be aware that it is being phased out.