Disk Quota and Network-Mounted File Systems (NFS)
The FNAL LPC Analysis Cluster provides several file systems for its users. These file systems are configured in different ways for different purposes.
Information is provided below on purpose and configuration differences,
quota and usage statistics, instructions on how to check your quota, and
options available to you if you run out of quota.
What is the maximum personal quota?
The LPC Coordinators have decided to
set an individual user quota of 2 TB per user in EOS
`/store/user`, 200GB in NFS
`/uscms_data`, and 2GB in your NFS home area.
- This is the link with instructions for EOS quota increase.
- NFS quota increases are not given.
NFS Disk Space
`/uscms`
Backup snapshots for `/uscms` are taken daily starting at 18:00 and are kept for 4 days.
Purpose: Home areas; small quota (2GB); backed up to tape.
Note that as of May 4, 2017, the home directory path includes
`/u/`, where `u` is the first letter of your
username. There is a soft link to the previous path in place.
To recover accidentally removed files, submit an "I'm having a problem" trouble ticket with the Fermilab LPC Service Portal, using your Fermilab single sign-on; report which file(s) were removed and on what date, so they can be recovered. Note that files in
`~username/nobackup` are on a different disk and are not backed up, as discussed below. Files in EOS are not backed up either.
Nobackup Data Areas: `/uscms_data/d1`
There are several data areas located on the NFS disk. The
`/uscms_data/d1` area is simply a collection of symbolic links that point to a user's actual data area (currently on one of the `/uscms_data` disks described below).
It is best if users use the
`~username/nobackup` path to access their data, in case the actual data area needs to be moved to a different file system for space reasons.
Individual user quotas are set to a default of 200GB in the data areas; the LPC Collaborative group default quota is 1TB. There are NO BACKUPS for this area, and no way to recover accidentally deleted files.
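This symlink indirection can be sketched on any machine; the paths under `/tmp` below are stand-ins for the real `/uscms_data` mounts, and show why accessing data through the stable symlink keeps working even if admins move the underlying area.

```shell
# Local sketch of the d1 symlink scheme (paths are stand-ins, not the real mounts).
base=$(mktemp -d)
mkdir -p "$base/data3/username"                 # actual data area (like /uscms/data3)
ln -s "$base/data3/username" "$base/nobackup"   # stable user-facing link (like ~username/nobackup)
echo hello > "$base/nobackup/file.txt"

# Admins move the data to another disk and repoint the link;
# the symlink path keeps working unchanged:
mkdir -p "$base/data4"
mv "$base/data3/username" "$base/data4/username"
ln -sfn "$base/data4/username" "$base/nobackup"
cat "$base/nobackup/file.txt"                   # still prints: hello
```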
`/uscms_data/d2` (no tape backups)
Purpose: Data area with quotas (large for LPC groups); not backed up to tape.
`/uscms_data/d3` (no tape backups)
Purpose: Data area with quotas (large for LPC groups); not backed up to tape.
`/uscmst1b_scratch/lpc1/3DayLifetime` (no tape backups)
Purpose: Data area WITHOUT quotas, for LPC groups and for the 3DayLifetime area; not backed up in any fashion. Do not store unreproducible work in this area - there are NO BACKUPS. The 3DayLifetime area (
`/uscmst1b_scratch/lpc1/3DayLifetime`) is accessible by ALL users. If you do not have a directory, create one with
`mkdir /uscmst1b_scratch/lpc1/3DayLifetime/username` (with your
username). As the name suggests, files stored here are automatically removed after 3 days. Since there are no quotas on this file system, we expect users to clean up their directories on a regular basis. If the file system begins to get full, we will send email to the users asking them to clean up their areas.
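A minimal sketch of setting up your scratch directory: the path is assembled locally from your username, and the `mkdir` itself is left commented since the mount only exists on cmslpc nodes.

```shell
# Assemble your personal 3DayLifetime path from your username.
me=${USER:-$(id -un)}
SCRATCH_ROOT=/uscmst1b_scratch/lpc1/3DayLifetime
MYSCRATCH="$SCRATCH_ROOT/$me"
echo "$MYSCRATCH"
# On a cmslpc node, create it (safe to re-run thanks to -p):
# mkdir -p "$MYSCRATCH"
```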
Checking your quota usage on NFS mounted areas
Users can use the UNIX `quota` command to check their disk/quota usage. The output includes information on all NFS-mounted file systems, even those that may not have quotas enforced.
An example - over quota on `~/nobackup`:

```
[username@cmslpc142 ]$ quota -s
Disk quotas for user username (uid 55555):
     Filesystem   blocks   quota   limit   grace   files   quota   limit   grace
cms-nas-0.fnal.gov:/uscms
                  1158M        0   2048M           25746k      0       0
cmsnfs-scratch1.fnal.gov:/uscms/data3/
                   212G*    200G    220G   6days    538k       0       0
```
- The `-s` option tells the quota command to use human-readable units when showing the usage/limit output.
- The most pertinent number fields in this output are the first and third. The first shows how much disk is being used and the third shows what your limit is set to.
- In this example, the
`*` indicates that this user is over quota (second field) on
`~/nobackup`; however, they will still be able to write files until they reach the limit within the grace period (fourth field).
- The grace period listed in this example is
`6days`, as the user just went over quota: once that time has passed, the user will not be able to write more than 200GB (their quota).
- If, instead, grace lists
`none`, the user will be unable to write over the quota.
- Note that the
`/uscms` area for your home directory has only a hard limit, so there will be no warning or grace; once you are over the limit you will not be able to write to that area.
- The first mounted disk listed is your home area; the second is your (soft-linked)
`nobackup` area, which is centrally linked as
`/uscms_data/d1`, which is mounted as
`/uscms/data3`, and soft-linked as `~/nobackup`.
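If you want the pertinent fields without reading the whole table, the data-area line of `quota -s` can be picked apart with `awk`. The sample line below is the one from the output above, so the pipeline runs anywhere; on cmslpc you would pipe `quota -s` into it directly.

```shell
# Extract used (1st), soft limit (2nd), hard limit (3rd) and grace (4th) fields;
# the trailing '*' on the blocks field marks being over the soft quota.
line='212G* 200G 220G 6days'
parsed=$(echo "$line" | awk '{gsub(/\*/, "", $1); print "used=" $1, "soft=" $2, "hard=" $3, "grace=" $4}')
echo "$parsed"   # -> used=212G soft=200G hard=220G grace=6days
```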
Here is an example of a user who is over quota on their home area:

```
[username@cmslpc142 ]$ quota -s
Disk quotas for user username (uid 55555):
     Filesystem   blocks   quota   limit   grace   files   quota   limit   grace
cms-nas-0.fnal.gov:/uscms
                  2048M        0   2048M           25960k      0       0
cmsnfs-scratch1.fnal.gov:/uscms/data3/
                 47720M     200G    220G            424k       0       0
```
Note that the "blocks" used is the same size as the "limit", and there is no
`*`. This is a hard cutoff, and if you exceed it, you will see an error message like this:
```
[username@cmslpc142 ~/temp]$ cp -pr testFile.root test.root
cp: closing `test.root': Disk quota exceeded
```
Going over quota on your home directory will have unintended consequences, like not being able to write a
`~/.Xauthority` file upon login and use X-window forwarding, so you will need to clean up and/or move files to other file systems.
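To see what is actually filling the area before cleaning up, a standard `du`/`sort` pipeline works on any Linux host; on cmslpc, run it against your home directory.

```shell
# List the ten largest items directly under your home area, biggest first.
du -h --max-depth=1 "$HOME" 2>/dev/null | sort -rh | head -n 10
```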
What to do if you exceed your quota limit on NFS:
- Remove unneeded files
- Utilize the 3DayLifetime area in
`/uscmst1b_scratch/lpc1/3DayLifetime/username`. As the name implies, any data stored here is automatically removed after 3 days.
- Copy data to EOS (recursive examples)
- If you are a member of one of the LPC Collaborative groups, you can utilize the storage areas allocated to that LPC Collaborative group on NFS or EOS. These LPC Collaborative group areas are located at:
`~lpcgroupname/nobackup` (quota limit set;
`ksu lpcgroupname` to the LPC Collaborative group user before writing)
- Note: To request LPC Collaborative group access, fill out the LPC Service Portal form "Modify Account on LPC Collaborative Group"; see the LPC Collaborative Group section below for details.
- This is the link with instructions for EOS quota increase, as well as how to find out your current quota.
- There is no longer a ServiceNow ticket for NFS disk quota increases, and LPC coordination does not approve NFS quota increase requests.
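One common way to free NFS space is to move results to your EOS area with `xrdcp`. A sketch: the source directory name is hypothetical, and the `xrdcp` line itself must be run on a cmslpc node with a valid grid proxy (as yourself, not via `ksu`).

```shell
# Build the xrootd destination URL for your EOS user area.
me=${USER:-$(id -un)}
SRC="$HOME/nobackup/myresults"                            # hypothetical source directory
DEST="root://cmseos.fnal.gov//store/user/$me/myresults"
echo "would copy: $SRC -> $DEST"
# On cmslpc, with a valid grid proxy, the recursive copy would be:
# xrdcp -r "$SRC" "$DEST"
```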
LPC Collaborative Group
- To check membership in an LPC Collaborative group (access for EOS and/or NFS space in the group), at the cmslpc-sl7 command line do:
`getent group | grep ^lpcgroupname`.
- Group members of
`lpcgroupname` are listed at the end of the first line of the output.
- To request LPC Collaborative group access (for EOS and/or NFS space usage), fill out the LPC Service Portal form: "Modify Account on LPC Collaborative Group".
- Use your Fermilab Services username and password, or Kerberos (after one-time browser configuration following instructions on the Fermilab Single Sign On page).
- Choose "Add" to add a user to the group. You can fill out this form for yourself or another user.
- Be sure to include the reason to join. The LPC Collaborative group convener(s) will approve or reject.
- Once the account is added, it takes ~1-3 hours (during Fermilab business hours, therefore up to 1 business day depending on timing) to propagate to all the cmslpc systems.
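Membership can be checked from any Linux shell like this; the group name `lpcmygroup` is a placeholder, and on cmslpc `getent` queries the cluster's group database.

```shell
# Look up one group by name; fall back to a message if it does not exist here.
getent group | grep "^lpcmygroup" || echo "lpcmygroup: no such group on this host"
# Or list every group your own account belongs to:
id -nG
```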
LPC Collaborative Group usage
- If you are a member of one of the LPC Collaborative groups, you can utilize the storage areas allocated to that LPC Collaborative group on NFS or EOS. These LPC Collaborative group areas are located at:
`~lpcgroupname/nobackup` (quota limit set;
`ksu lpcgroupname` to the LPC Collaborative group user before writing). Note that for
`xrdcp` transfers you do NOT
`ksu`; do them as yourself.
- Note that the
`/store/group/lpcgroupname` area is a soft link to
`/store/user/lpcgroupname` on EOS T3_US_FNALLPC, as CRAB requires the
`/store/group` path, but the local file system puts everything in `/store/user`.
- File permissions for files made in the group account NFS areas by default allow writing only by LPC Collaborative group members (either as
`ksu lpcgroupname`, or as yourself from CRAB3 jobs or `xrdcp`). By default, all CMS users with a Fermilab account can read the files on NFS, and all EOS files can be read by all CMS users with a valid grid certificate via xrootd.
- LPC Collaborative group areas are created for broad collaboration across institutions. All LPC Collaborative group requests, both for new accounts and for quota increases on existing areas, have to be approved by LPC coordinators.
- To check a list of existing LPC Collaborative groups, in the Fermilab ServiceNow LPC Service Portal, after authenticating with your Fermilab Single Sign On:
- In the LPC Service Portal, choose: Modify Account on LPC Collaborative Group.
- The pull-down menu lists the currently available LPC Collaborative groups.
- Note that at the cluster command line you may use
`getent group`, but that lists system groups as well as LPC Collaborative Groups.
- You may also Contact LPC Support for more information about existing groups
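The default group-area permission model described above (writable by group members, readable by everyone) corresponds to a mode-2775 directory. A local sketch under a temporary directory, which only illustrates the mode, not the real group areas:

```shell
# Create a directory that is group-writable, world-readable, with setgid set
# so new files inherit the group - the shape of a shared group area.
d=$(mktemp -d)
mkdir "$d/groupdir"
chmod 2775 "$d/groupdir"
stat -c '%a' "$d/groupdir"    # shows the octal mode
```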
Membership of LPC Collaborative Group
Add user to LPC Collaborative Group
Request a new LPC Collaborative Group Account - policies
- Group area name (must now start with "lpc"):
- Your institution
- Requested allocation of space in EOS (logical space only, that is, not including replication)
- Approximate number of members (this can include future members who don't yet have FNAL accounts)
- Number of members from non-US institutions
- Who will be the approver(s)/convener(s) of the group? You can check the box if that includes yourself; enter names, which will be matched against the FNAL database, for the convener(s)
- Members: enter names, matched against the FNAL database, for the users to add to the group when it is created
- Reason for request (has to be collaborative in nature across multiple institutes):
Conveners (Approvers) of LPC Collaborative Group
- Each LPC Collaborative group has one or more conveners, who are in charge of the following:
- Approve or reject tickets to add or remove members.
- Note that email approvals ONLY work from actual @fnal.gov emails
- Those without a real @fnal.gov email must instead use the email links (
"Click here to view Approval Request: LINK" or
"Click here to view the details of the Requested Item: RITM") after authenticating with Fermilab Single Sign On - you will see a menu to "Approve" or "Deny".
- If you cannot find your approvals, you can also see a list in the LPC Service Portal under "Approvals" in the upper right; alternatively, a co-convener may have already approved the request.
- Marguerite Tonjes, as LPC Computing Support, is added as co-convener for all groups to assist in case of missing approvals or missing conveners.
- Information about how to find LPC group EOS quota at this link
- Advanced tools to find out EOS usage (see the eosdu script as well as the "find" commands to locate files more than a year old or large files). In all cases do NOT use the fuse mount for file removal, as that may crash EOS.
- Conveners can be added or removed for a group with the LPC Service Portal ticket Add/Remove an LPC Convener. This action will be approved by LPC Coordinators
EOS disk space
To understand how much space a user or LPC Collaborative group has on the EOS (T3_US_FNALLPC
`/store/user`) filesystem, consult the dedicated EOS Mass Storage page.
NFS disk space and condor batch
The cmslpc condor batch system has worker nodes which, as of October 1, 2017, do not have any of the above NFS disks mounted on them. This page describes good file I/O practices for condor batch jobs at the CMS LPC CAF.
AFS mounts
As of Spring 2018, the LPC CAF (cmslpc cluster) no longer mounts
`/afs` directories on interactive or worker nodes. You may still access the
`/afs` filesystem from CERN lxplus, but be aware that it is being phased out.