
Disk Quota and Network-Mounted File Systems (NFS)

The Tier 1 Facility provides several file systems for its users. These file systems are configured in different ways for different purposes. Information is provided below on their purpose and configuration differences, quota and usage statistics, instructions on how to check your quota, and the options available to you if you run out of quota.

What is the maximum quota?

The LPC coordinators have decided to set an individual user quota of 2 TB per user in EOS /store/user, 100GB in NFS /uscms_data, and 2GB in the /uscms home area.

Most requests for an increase in space must be approved by the LPC Coordinators and can be submitted via a ServiceNow Request.


NFS Disk Space

/uscms 

Backup snapshots for /uscms are taken daily starting at 18:00 and kept for 4 days.
  Purpose: Home areas; small quota (2GB); backed up to tape.
Note that as of May 4, 2017, the home directory path is /uscms/home/u/username, where /u/ is the first letter of your username. There is a soft link to the previous path in place.

To recover accidentally removed files, submit an "I'm having a problem" trouble ticket via the Fermilab LPC Service Portal, using your Fermilab Services username and password, and report which file(s) were removed and on what date. Note that files in ~username/nobackup are on a different disk and not backed up, as discussed below.

Nobackup Data Areas: /uscms_data/d1

There are several data areas located on the NFS disk. The /uscms_data/d1 area is simply a collection of symbolic links that point to a user's actual data area (currently /uscms_data/d2 or /uscms_data/d3).
It is best if users use the /uscms_data/d1 (or ~username/nobackup) path to access their data, in case the actual data area needs to be moved to a different file system for space reasons. A sketch of checking where your link points is shown below.
Individual user quotas are set to a default of 100GB in the data areas; the LPC collaborative group default quota is 1TB. There are NO BACKUPS for this area, and no way to recover accidentally deleted files.
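
To see which file system your nobackup area actually lives on, you can inspect the symbolic link. A minimal sketch (the d3 target shown is just one possibility; your link may point to d2 instead):

[username@cmslpc42 ~]$ ls -ld /uscms_data/d1/username
lrwxrwxrwx 1 username us_cms 23 May  4  2017 /uscms_data/d1/username -> /uscms_data/d3/username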

/uscms_data/d2

No tape backups
No snapshots

  Purpose: Data area with quotas (large for LPC groups), not backed up to tape

/uscms_data/d3

No tape backups
No snapshots

  Purpose: Data area with quotas (large for LPC groups), not backed up to tape

/uscmst1b_scratch/lpc1/3DayLifetime

No tape backups
No snapshots

  Purpose: Data area WITHOUT quotas, for LPC groups and for 3-day-lifetime scratch space; not backed up in any fashion. Do not store unreproducible work in this area - there are NO BACKUPS. The 3DayLifetime area (/uscmst1b_scratch/lpc1/3DayLifetime) is accessible by ALL users. If you do not have a directory, create one with mkdir /uscmst1b_scratch/lpc1/3DayLifetime/username (with your username), as in the example below. As the name suggests, files stored here are automatically removed after 3 days. Since there are no quotas on this file system, we expect users to clean up their directories on a regular basis. If the file system begins to get full, we will send email to users asking them to clean up their areas.
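
For example, to create your scratch directory and stage a large temporary file there (a minimal sketch; this assumes your shell's $USER variable holds your Fermilab username, and large_temp.root is a hypothetical file):

[username@cmslpc42 ~]$ mkdir -p /uscmst1b_scratch/lpc1/3DayLifetime/$USER
[username@cmslpc42 ~]$ cp -p large_temp.root /uscmst1b_scratch/lpc1/3DayLifetime/$USER/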

Checking your quota usage on NFS-mounted areas:

  Users can use the UNIX "quota" command to check their disk and quota usage. The output includes information on all NFS-mounted file systems, even those that may not have quotas enforced.
An example - over quota on nobackup:  

[username@cmslpc42 ]$ quota -s
Disk quotas for user username (uid 55555): 
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
cms-nas-0.fnal.gov:/uscms
                  1158M    0  2048M          25746k       0       0        
cmsnfs-scratch1.fnal.gov:/uscms/data3/
                  105G*   100G    120G    6 days     199k       0       0


  • The -s option tells the quota command to display the usage and limit output in human-readable units.
  • The most pertinent number fields in this output are the first and third. The first shows how much disk is being used and the third shows what your limit is set to.
  • In this example, the * indicates that this user is over quota (second field) on ~/nobackup; however, they will still be able to write files until they reach the limit within the grace period (fourth field).
  • The grace period listed in this example is 6 days, as the user just went over quota: once that time has passed, the user will not be able to write more than 100GB (their quota).
    • If, instead, grace lists none, the user will be unable to write over the 100GB quota.
  • Note that the /uscms area for your home directory has only a hard limit, so there will be no warning or grace; once you are over the limit you will not be able to write to that area.
  • The first mounted disk listed is your home area; the second is your (soft-linked) nobackup area, which is centrally linked as /uscms_data/d1, mounted as /uscms/data2 or /uscms/data3, and soft-linked as /uscms_data/d2 or /uscms_data/d3.

Here is an example of a user who is over quota in their /uscms home area:

[username@cmslpc42 ]$ quota -s
Disk quotas for user username (uid 55555): 
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
cms-nas-0.fnal.gov:/uscms
                  2048M       0   2048M          25960k       0       0     
cmsnfs-scratch1.fnal.gov:/uscms/data3/
                 47720M    100G    120G            424k       0       0 
                   

Note that the "blocks" used is the same size as the "limit", and there is no *. This is a hard cutoff, and if you exceed it, you will see an error message like this:

[username@cmslpc42 ~/temp]$ cp -pr testFile.root test.root
cp: closing `test.root': Disk quota exceeded

Going over quota on your home directory will have unintended consequences, such as not being able to write a ~/.Xauthority file upon login and thus not being able to use X-window forwarding, so you will need to clean up and/or move files to other file systems. A sketch of finding your largest files is shown below.
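
To find what is taking up the most space in your home area, one option is du with sort (a minimal sketch; big_output_dir is a hypothetical directory you might move to nobackup):

[username@cmslpc42 ~]$ du -sh ~/* ~/.??* 2>/dev/null | sort -h | tail
[username@cmslpc42 ~]$ mv ~/big_output_dir ~/nobackup/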

What to do if you exceed your quota limit on NFS:

  1. Remove unneeded files
  2. Utilize the 3DayLifetime area in /uscmst1b_scratch/lpc1/3DayLifetime/username. As the name implies, any data stored here is automatically removed after 3 days.
  3. Copy data to EOS (a sketch of a recursive copy is shown after this list)
  4. If you are a member of one of the LPC collaborative groups, you can utilize the storage areas allocated to that LPC Collaborative group on NFS or EOS. These LPC Collaborative group areas are listed in the LPC Collaborative Group section below.
  5. If all of the above are not sufficient, you can request more disk space in EOS with the ServiceNow ticket below, which will be approved or rejected by the LPC Coordinators. Ensure you have exhausted all the possibilities above, including the 3DayLifetime, LPC Collaborative group, and EOS areas.
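
Here is a sketch of copying a directory recursively from NFS to your EOS area. This assumes a recent xrootd client with recursive copy support, and the directory names shown are hypothetical:

[username@cmslpc42 ~]$ xrdcp -r /uscms_data/d3/username/mydir/ root://cmseos.fnal.gov//store/user/username/mydir/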

LPC Collaborative Group

    Membership of LPC Collaborative Group

  • To check membership in an LPC Collaborative group (access for EOS and/or NFS space in the group), at the cmslpc-sl6 command line do: getent group | grep ^lpcgroupname. The output will look something like the following (partial output):
    • lpcgroupname:x:9955:fnalusername1,fnalusername2,fnalusername3,fnalusername4
      us_cms:x:5063:cms1,cmsfnal,cmsmuon,cmspxl,cmsroc_hcal,cmstb04,cmsvbf,lpcanex
    • Group members of lpcgroupname are listed in the first line and are: fnalusername1,fnalusername2,fnalusername3,fnalusername4
  • Add user to LPC Collaborative Group

  • To request LPC Collaborative group access (for EOS and/or NFS space usage), fill out the form in ServiceNow: search for "CMS" and choose Add Account to LPC Group. Be sure to include the reason to join; this will be approved by the specific LPC Collaborative Group Convener(s). Once the account is added, it takes ~1-2 hours (during Fermilab business hours) to propagate to all the cmslpc systems.
  • LPC Collaborative Group usage

  • If you are a member of one of the LPC Collaborative groups, you can utilize the storage areas allocated to that LPC Collaborative group on NFS or EOS. These LPC Collaborative group areas are located at:
    • ~lpcgroupname/nobackup  (quota limit set; run ksu lpcgroupname to switch to the LPC Collaborative group account before writing; see the sketch after this list) - note that for xrdcp transfers you do NOT ksu, do them as yourself.
    • /uscmst1b_scratch/lpc1/3DayLifetime/lpcgroupname
    • /store/group/lpcgroupname
      • note that the /store/group/lpcgroupname area is a soft link to /store/user/lpcgroupname on EOS T3_US_FNALLPC, as CRAB requires the group, but the local filesystem puts everything in user
    • File permissions for files made in the group account NFS areas by default only allow writing by LPC Collaborative group members (either as ksu lpcgroupname or as yourself from Crab3 jobs or xrdcp). By default, all CMS users with a Fermilab account can read the files on NFS, and all EOS files can be read by all CMS users with a valid grid certificate via xrootd.
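
For example, to write into a group's NFS nobackup area (a minimal sketch; lpcgroupname and myproject are hypothetical, and ksu will only succeed if you are already a member of the group):

[username@cmslpc42 ~]$ ksu lpcgroupname
[lpcgroupname@cmslpc42 ~]$ mkdir -p ~lpcgroupname/nobackup/myproject
[lpcgroupname@cmslpc42 ~]$ exit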

    Request a new LPC Collaborative Group Account - policies

  • LPC Collaborative group areas are created for broad collaboration across institutions. All LPC Collaborative group requests, both for new accounts and for quota increases on existing accounts, have to be approved by the LPC coordinators.
    • To check the list of existing LPC Collaborative groups, use the corresponding form in Fermilab ServiceNow, after authenticating with your Services username and password.
    • A ServiceNow form is being developed for creating a new LPC Collaborative group account (early 2018), in the meantime you can send your request to Marguerite Tonjes who will forward it to LPC Coordinators for approval. The request should contain:
      • Group area name (must now start with "lpc"): lpcgroupname
      • Requested allocation of space in EOS (give actual and logical)
      • Names and institutions of the users, with the number of users (split as US and international)
      • FNAL usernames of users
      • Who will be the approver(s)/convener(s) of the group?
      • Reason for request (has to be collaborative in nature):

EOS disk space

To understand how much space a user or LPC Collaborative group has on the EOS (T3_US_FNALLPC /store/user) filesystem, consult the dedicated EOS Mass Storage page.
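
As a quick check from a cmslpc interactive node, the EOS client can report your quota (a hedged sketch; consult the EOS Mass Storage page for the currently supported commands):

[username@cmslpc42 ~]$ eos root://cmseos.fnal.gov quota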

NFS disk space and condor batch

The cmslpc condor batch system has worker nodes which, as of October 1, 2017, do not have any of the above NFS disks mounted on them. This page describes examples of modifying cmslpc condor batch scripts to avoid the above NFS disks; a minimal sketch is shown below.
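
As one illustration, a condor submit file can ship its inputs with the job instead of reading them from NFS at runtime (a minimal sketch; all file names here are hypothetical):

[username@cmslpc42 ~]$ cat > nfs_free.jdl <<'EOF'
universe = vanilla
executable = run.sh
# Ship inputs with the job rather than reading /uscms_data on the worker node
should_transfer_files = YES
transfer_input_files = config.py, inputs.tar.gz
when_to_transfer_output = ON_EXIT
output = job_$(Cluster).out
error = job_$(Cluster).err
log = job_$(Cluster).log
queue
EOF
[username@cmslpc42 ~]$ condor_submit nfs_free.jdl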

AFS mounts

As of Spring 2018, the LPC CAF (cmslpc cluster) no longer mounts /afs directories on interactive or worker nodes. You may still access the /afs file system from CERN lxplus, but be aware that it is being phased out.