Disk Quota and Network-mounted File Systems (NFS)
The FNAL LPC Analysis Cluster provides several file systems for its users. These file systems are configured in different ways for different purposes.
Information is provided below on purpose and configuration differences, quota and usage statistics, instructions on how to check your quota, and options available to you if you run out of quota.
What is the maximum personal quota?
The LPC Coordinators have set an individual user quota of 2 TB per user in EOS /store/user, 200GB in NFS /uscms_data, and 2GB in the /uscms home area.
Requests for increase in space of EOS must be approved by LPC Coordinators and can be submitted as a ServiceNow Request.
- This is the link with instructions for EOS quota increase.
- NFS quota increases are not given.
NFS Disk Space
/uscms
Backup snapshots for /uscms are taken daily starting at 18:00 and kept for 4 days.
Purpose: Home areas; small quota (2GB); backed up to a hidden snapshot directory.
Note that as of May 4, 2017, the home directory path is /uscms/home/u/username, where /u/ is the first letter of your username. A soft link from the previous path is in place.
To recover accidentally removed files, submit an "I'm having a problem" trouble ticket via the Fermilab LPC Service Portal, using your Fermilab single sign on, and report which file(s) to recover and the date they were removed. Note that files in ~username/nobackup are on a different disk and not backed up, as discussed below. Files in EOS are not backed up either.
Nobackup Data Areas: /uscms_data/d1
There are several data areas located on the NFS disk. The /uscms_data/d1 area is simply a collection of symbolic links that point to a user's actual data area (currently /uscms_data/d2 or /uscms_data/d3). It is best if users use the /uscms_data/d1 (or ~username/nobackup) path to access their data, in case the actual data area needs to be moved to a different file system for space reasons. Individual user quotas are set to a default of 200GB in the data areas; the LPC Collaborative group default quota is 1TB. There are NO BACKUPS for this area, and no way to recover accidentally deleted files.
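For example (a quick illustration; username below is a placeholder for your own account), you can check which physical area your symbolic link currently resolves to:
readlink -f /uscms_data/d1/username    # prints the physical location, e.g. /uscms/data3/username
Whatever it resolves to today, keep using the /uscms_data/d1 (or ~/nobackup) path in your scripts so they keep working if the area is moved.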
/uscms_data/d2
No backups. No snapshots.
Purpose: Data area with quotas, not backed up to snapshot.
/uscms_data/d3
No tape backups. No snapshots.
Purpose: Data area with quotas, not backed up to snapshot.
/uscmst1b_scratch/lpc1/3DayLifetime
No tape backups. No snapshots.
Purpose: Data area WITHOUT quota; not backed up in any fashion. Do not store unreproducible work in this area - there are NO BACKUPS. The 3DayLifetime area (/uscmst1b_scratch/lpc1/3DayLifetime) is accessible by ALL users.
If you do not have a directory, create one with mkdir /uscmst1b_scratch/lpc1/3DayLifetime/username
(with your username
).
As the name suggests, files stored here are automatically removed after 3 days. Since there are no quotas on this file
system we expect users to clean up their directories on a regular basis. If the file system begins to get full we
will send out email to the users asking them to clean up their areas.
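A minimal sketch of typical use (file names here are placeholders):
mkdir -p /uscmst1b_scratch/lpc1/3DayLifetime/username    # create your directory (substitute your username)
cp big_intermediate.root /uscmst1b_scratch/lpc1/3DayLifetime/username/    # park a reproducible temporary file
# anything left here is removed automatically after 3 days; never keep the only copy of something here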
Checking your quota usage on NFS mounted areas:
Users can use the UNIX "quota" command to check their disk/quota usage. The output includes information on all NFS mounted file systems, even those that may not have quotas enforced. An example - over quota on nobackup:
[username@cmslpc333 ]$ quota -s
Disk quotas for user username (uid 55555):
Filesystem blocks quota limit grace files quota limit grace
cms-nas-0.fnal.gov:/uscms
1158M 0 2048M 25746k 0 0
cmsnfs-scratch1.fnal.gov:/uscms/data3/
212G* 200G 220G 6days 538k 0 0
- The -s option tells the quota command to use human-readable units when showing the usage/limit output.
- The most pertinent number fields in this output are the first and third. The first shows how much disk is being used and the third shows what your limit is set to.
- In this example, the * indicates that this user is over quota (the second field) on ~/nobackup; however, they will still be able to write files until they reach the limit, within the grace period (fourth field).
- The grace period listed in this example is 6 days, as the user just went over quota: once that time has passed, the user will not be able to write more than 200GB (their quota).
- If, instead, grace lists none, the user will be unable to write over the 200GB quota.
- Note that the /uscms area for your home directory has only a hard limit, so there will be no warning or grace; once you are over the limit you will not be able to write to that area.
- The first mounted disk listed is your home area; the second is your (soft-linked) nobackup area, which is centrally linked as /uscms_data/d1, mounted as /uscms/data2 or /uscms/data3, and soft linked as /uscms_data/d2 or /uscms_data/d3.
Here is an example of a user who is over quota on their /uscms home area:
[username@cmslpc333 ]$ quota -s
Disk quotas for user username (uid 55555):
Filesystem blocks quota limit grace files quota limit grace
cms-nas-0.fnal.gov:/uscms
2048M 0 2048M 25960k 0 0
cmsnfs-scratch1.fnal.gov:/uscms/data3/
47720M 200G 220G 424k 0 0
Note that the "blocks" used is the same size as the "limit", and there is no
*
. This is a hard cutoff,
and if you exceed it, you will see an error message is like this:
[username@cmslpc333 ~/temp]$ cp -pr testFile.root test.root
cp: closing `test.root': Disk quota exceeded
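If this happens on your home area, a quick way to see what is taking up the space (a minimal sketch assuming the standard GNU du and sort tools on the interactive nodes) is:
du -sh ~/* ~/.[!.]* 2>/dev/null | sort -h | tail -n 20    # 20 largest items in your home area, including hidden dot-directories
This shows which files or directories to delete or move to nobackup or EOS.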
Going over quota on your home directory will have unintended consequences, such as not being able to write a ~/.Xauthority file upon login and use X-window forwarding, so you will need to clean up and/or move files to other file systems.
What to do if you exceed your quota limit on NFS:
- Remove unneeded files
- Utilize the 3DayLifetime area in /uscmst1b_scratch/lpc1/3DayLifetime/username. As the name implies, any data stored here is automatically removed after 3 days.
- Copy data to EOS (recursive examples; see the sketch after this list)
- If you are a member of one of the LPC collaborative groups, you can utilize the storage areas allocated to that LPC Collaborative group on NFS or EOS. These LPC Collaborative group areas are located at:
~lpcgroupname/nobackup (quota limit set; ksu lpcgroupname to the LPC Collaborative group user before writing)
/uscmst1b_scratch/lpc1/3DayLifetime/lpcgroupname
/store/group/lpcgroupname
- Note: To request LPC Collaborative group access, fill out the LPC Service Portal form: "Modify Account on LPC Collaborative Group". Use Fermilab Single Sign on (choosing Services username/password or Kerberos authentication after proper browser configuration for Kerberos). (Show me how - outdated 9/25/2018 [β]). Choose "Add" to add a user to the group. Be sure to include the reason to join. The LPC Collaborative group convener(s) will approve or reject. Once the account is added, it takes ~1-3 hours (during Fermilab business hours, therefore up to 1 business day depending on timing) to propagate to all the cmslpc systems.
- If all of the above are not sufficient, you can request more disk space in EOS with the ServiceNow ticket below, which will be approved or rejected by LPC Coordinators. Ensure you have exhausted all the possibilities above, including the 3DayLifetime, LPC Collaborative groups, and EOS areas.
- This is the link with instructions for EOS quota increase, as well as how to find out your current quota.
- There is no longer a ServiceNow ticket for NFS disk quota increases, and LPC coordination does not approve NFS quota increase requests.
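As referenced above, a possible sketch of copying data to EOS recursively (the redirector hostname cmseos.fnal.gov and the paths are assumptions here; see the EOS Mass Storage page for the supported recipes):
xrdcp -r myResults/ root://cmseos.fnal.gov//store/user/username/    # recursive copy of a local directory to your EOS area
# once the copy is verified, remove the local copy to free your NFS quota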
LPC Collaborative Group
- To check membership in an LPC Collaborative group (access for EOS and/or NFS space in the group), at the cmslpc-el9 command line do: getent group | grep ^lpcgroupname. The output will look something like the following (partial output):
lpcgroupname:x:9955:fnalusername1,fnalusername2,fnalusername3,fnalusername4
us_cms:x:5063:cms1,cmsfnal,cmsmuon,cmspxl,cmsroc_hcal,cmstb04,cmsvbf,lpcanex
- Group members of lpcgroupname are listed in the first line and are: fnalusername1,fnalusername2,fnalusername3,fnalusername4
- To request LPC Collaborative group access (for EOS and/or NFS space usage), fill out the LPC Service Portal form: "Modify Account on LPC Collaborative Group".
- Use your Fermilab Services username and password, or Kerberos (after a one-time browser configuration following the instructions on the Fermilab Single Sign On page).
- Choose "Add" to add a user to the group. You can fill out this form for yourself or another user.
- Be sure to include the reason to join. The LPC Collaborative group convener(s) will approve or reject.
- Once the account is added, it takes ~1-3 hours (during Fermilab business hours, therefore up to 1 business day depending on timing) to propagate to all the cmslpc systems.
LPC Collaborative Group usage
- If you are a member of one of the LPC Collaborative groups, you can utilize the storage areas allocated to that LPC Collaborative group on NFS or EOS. These LPC Collaborative group areas are located at:
~lpcgroupname/nobackup (quota limit set; ksu lpcgroupname to the LPC Collaborative group user before writing) - note that for xrdcp transfers you do NOT ksu; do them as yourself. (See the sketch after these notes.)
/uscmst1b_scratch/lpc1/3DayLifetime/lpcgroupname
/store/group/lpcgroupname
- Note that the /store/group/lpcgroupname area is a soft link to /store/user/lpcgroupname on EOS T3_US_FNALLPC, as CRAB requires the group path, but the local filesystem puts everything in user.
- File permissions for files made in the group account NFS areas by default allow writing only by LPC Collaborative group members (either as ksu lpcgroupname or as yourself from Crab3 jobs or xrdcp). By default, all CMS users with a Fermilab account can read the files on NFS, and all EOS files can be read by all CMS users with a valid grid certificate via xrootd.
- LPC Collaborative group areas are created for broad collaboration across institutions. All LPC Collaborative group requests, both for new accounts and existing quota increases, have to be approved by LPC coordinators.
- To check a list of existing LPC Collaborative groups, in the Fermilab ServiceNow LPC Service Portal, after authenticating with your Fermilab Single Sign on, use the form:
- In the LPC Service Portal, choose: Modify Account on LPC Collaborative Group (Show me how - outdated 9/25/2018[β]).
- The pull-down menu lists the currently available LPC Collaborative groups.
- Note that on the cluster command line you may use getent group, but that lists system groups as well as LPC Collaborative Groups.
- You may also Contact LPC Support for more information about existing groups
- To request a new LPC Collaborative group, in the LPC Service Portal, choose "Add LPC Collaborative Group". Do note that we expect you to review existing groups to see if one fits your needs, and we may have unused groups that would work. This request will go to LPC Coordinators for approval. The request will contain:
- Group area name (must now start with "lpc"): lpcgroupname
- Your institution
- Requested allocation of space in EOS (logical space only, that is, not including replication)
- Approximate number of members (this can include future members who don't yet have FNAL accounts)
- Number of members from non-US institutions
- Who will be the approver(s)/convener(s) of the group? You can check the box if that includes yourself, and you will put in names which will be matched against the FNAL database for convener(s)
- Members. You will put in names that are matched against the FNAL database for users to add to the group when it's created
- Reason for request (has to be collaborative in nature across multiple institutes):
Membership of LPC Collaborative Group
Add user to LPC Collaborative Group
Request a new LPC Collaborative Group Account - policies
Convener(Approvers) of LPC Collaborative Group
- Each LPC Collaborative group has one or more conveners who are in charge of the following:
- Approve or reject tickets to add or remove members.
- Note that email approvals ONLY work from actual @fnal.gov emails
- Those without a real @fnal.gov email must instead use the email links: Click here to view Approval Request: LINK or Click here to view the details of the Requested Item: RITM, after authenticating with Fermilab Single Sign On - you will see a menu for "Approve" or "Deny".
- If you cannot find your approvals, you can also see a list in the LPC Service Portal in the upper right under "Approvals"; alternately, a co-convener may have already approved the request
- Marguerite Tonjes as LPC Computing Support is added as co-convener for all groups to assist in case of missing approvals or missing conveners
- Information about how to find LPC group EOS quota is at this link
- Advanced tools to find out EOS usage (see the eosdu script as well as the "find" commands to locate files more than a year old or large files; a brief sketch follows this list). In all cases do NOT use the fuse mount for file removal, as that may crash EOS.
- Conveners can be added to or removed from a group with the LPC Service Portal ticket Add/Remove an LPC Convener. This action will be approved by LPC Coordinators.
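As a quick illustration (the eosdu invocation and the redirector hostname are assumptions; consult the linked pages for the current options):
eosdu /store/user/lpcgroupname                                 # LPC-provided script summarizing EOS usage for a path
xrdfs root://cmseos.fnal.gov ls -l /store/user/lpcgroupname    # plain directory listing with sizes via the xrootd client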
EOS disk space
To understand how much space a user or LPC Collaborative group has on the EOS (T3_US_FNALLPC /store/user) filesystem, consult the dedicated EOS Mass Storage page.
NFS disk space and condor batch
The cmslpc condor batch system has worker nodes which, as of October 1, 2017, do not have any of the above NFS disks mounted on them. This page describes good file I/O for condor batch jobs at the CMS LPC CAF.
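Since the worker nodes do not mount these NFS areas, a common pattern (a sketch only; the linked page has the full recipe, and the redirector hostname is an assumption) is to write output inside the job sandbox and copy it to EOS at the end of the job script:
# at the end of a condor job wrapper script, after the job has produced output.root locally:
xrdcp -f output.root root://cmseos.fnal.gov//store/user/username/output.root   # -f overwrites any existing copy
rm output.root    # so the file is not also transferred back to the submit node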
AFS mounts
As of Spring 2018, the LPC CAF (cmslpc cluster) no longer mounts /afs
directories on interactive or worker nodes.
You may still access the /afs
filesystem from CERN lxplus, but be
aware that it is being phased out.