
Computing Environment Setup: Batch Systems

The batch system available to users of the CMS LPC CAF is condor, which allows you to submit jobs to the lpc batch farm. This page describes how to use this batch system.

NOTE: If you are changing batch code from pre-2019 to the latest, check out the Condor refactor guide

The preferred way to access the most CPU (including the CMS LPC) is through CRAB

For any information not covered below, visit the condor user's manual. Find the version of condor running on lpc with condor_q -version

How do I use CRAB to submit batch jobs?

Note that this is the preferred method to access the most CPU

Guides:

How do I use CMS Connect to submit batch jobs?

How do I use Condor to submit to the cmslpc batch farm?

What are the prerequisites for all condor jobs on the cmslpc farm?

  • Submitting and running jobs, as well as all condor queries now require the user to have a valid grid proxy in the CMS VO
  • When you obtain your proxy (voms-proxy-init --valid 192:00 -voms cms), it will be saved in your home directory where it can be read by the condor job on the worker node (see the example after this list)
  • The CMS LPC CAF system must know about the association of your grid certificate and FNAL username. This is usually done as part of the Enable EOS area ticket. You must do this at least once for your grid certificate to be associated with your account, which also lets you write to your EOS area from CRAB.
    • Go to the LPC Service Portal: https://fermi.servicenowservices.com/lpc
      • Use Fermilab SSO, paying attention to instructions to configure your browser once for Kerberos login. Note that your Services credentials are different from Kerberos
      • Choose "CMS Storage Space Request", and select "Enable" under "Action Required"
      • It will prompt you for your DN (your DN is the result of voms-proxy-info --identity) and CERN username. Submit that to register your DN. The grid certificate will be known to the nodes within 1-3 hours during FNAL business hours.
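
For reference, the proxy commands mentioned above look like this on a cmslpc interactive node:

voms-proxy-init --valid 192:00 -voms cms     # obtain a CMS VO proxy, saved in your home directory
voms-proxy-info --identity                   # print your DN, needed for the LPC Service Portal request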

A simple condor example: jdl file

The first step to using the condor system is writing the condor jdl (job description language) file. This file tells the system what you want it to do and how. Below is an example, which runs a system program that sleeps for one minute and then quits. Use your favorite text editor to create the sleep-condor.jdl file with the following contents. Each line is explained in the "Details of the condor submit file (jdl)" section further down the page. Note that condor will automatically transfer individual files (not directories) from your job on the worker node to your job submission area unless you specify otherwise.


universe = vanilla
Executable = sleep.sh
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
Output = sleep_$(Cluster)_$(Process).stdout
Error = sleep_$(Cluster)_$(Process).stderr
Log = sleep_$(Cluster)_$(Process).log
Arguments = 60
Queue 2

A simple condor example: sh file

The next step is to create the executable, in this case a shell script called sleep.sh. Create the file below in your favorite text editor in the same directory as your sleep-condor.jdl file.
#!/bin/bash
set -x
# Sleep
sleep $1
echo "##### HOST DETAILS #####"
echo "I slept for $1 seconds on:"
hostname
date

Submit, monitor, and remove your condor job


After you've created the jdl file sleep-condor.jdl and the shell script sleep.sh, and authenticated your grid certificate to the CMS VO, you can submit the job to the condor system using the command condor_submit followed by the name of your submit description file (jdl), in this example "sleep-condor.jdl":

condor_submit sleep-condor.jdl

Your output should look something like this:

[username@cmslpc132 ~]$ condor_submit sleep-condor.jdl
Querying the CMS LPC pool and trying to find an available schedd...
Attempting to submit jobs to lpcschedd3.fnal.gov
Submitting job(s)..
2 job(s) submitted to cluster 76596545.

You can see the status of all jobs you have submitted (unless they have completed) to all the possible schedulers (lpcschedd3.fnal.gov for example) with the following command:

condor_q

Your queue ought to show the processes you just submitted. They may be idle for up to a minute or so, maybe longer if the system is very busy:

[username@cmslpc132 condor]$ condor_q

-- Schedd: lpcschedd3.fnal.gov : <131.225.188.235:9618?... @ 09/08/22 15:12:37
 ID          OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD
76596545.0   tonjes          9/8  15:12   0+00:00:00 I  0    0.0 RunAN.sh 2018 MC

Total for query: 1 jobs; 0 completed, 0 removed, 1 idle, 0 running, 0 held, 0 suspended 
Total for tonjes: 1 jobs; 0 completed, 0 removed, 1 idle, 0 running, 0 held, 0 suspended 
Total for all users: 1208 jobs; 0 completed, 41 removed, 898 idle, 203 running, 66 held, 0 suspended



-- Schedd: lpcschedd4.fnal.gov : <131.225.189.251:9618?... @ 09/08/22 15:12:37
 ID          OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD

Total for query: 0 jobs; 0 completed, 0 removed, 0 idle, 0 running, 0 held, 0 suspended 
Total for tonjes: 0 jobs; 0 completed, 0 removed, 0 idle, 0 running, 0 held, 0 suspended 
Total for all users: 1588 jobs; 0 completed, 18 removed, 1446 idle, 114 running, 10 held, 0 suspended



-- Schedd: lpcschedd5.fnal.gov : <131.225.204.62:9618?... @ 09/08/22 15:12:37
 ID          OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD

Total for query: 0 jobs; 0 completed, 0 removed, 0 idle, 0 running, 0 held, 0 suspended 
Total for tonjes: 0 jobs; 0 completed, 0 removed, 0 idle, 0 running, 0 held, 0 suspended 
Total for all users: 1518 jobs; 0 completed, 43 removed, 902 idle, 497 running, 76 held, 0 suspended

Note that the -allusers option lets you see condor jobs submitted by all users on the schedulers (condor_q -allusers). To see just your own, omit that option.

  • To understand job status (ST) from condor_q, you can refer to the condor user manual (8.8) - see the condor job troubleshooting page to understand why a job is in each status:
    • "Current status of the job, which varies somewhat according to the job universe and the timing of updates. H = on hold, R = running, I = idle (waiting for a machine to execute on), C = completed, X = removed, S = suspended (execution of a running job temporarily suspended on execute node), < = transferring input (or queued to do so), and > = transferring output (or queued to do so)."
  • Be sure to know which scheduler your job was submitted to and is running on.

You can also monitor jobs with the Landscape at FNAL LPC web monitor; keep in mind the results may be delayed by up to 5 minutes.

You can get a list of all your jobs and their status from any interactive machine using this command:

condor_q

You can find the job numbers for all users with this command:

condor_q -allusers

If you want to view the entire queue, you can use the following command:

condor_status -submitters

This gives all the jobs from all users, including jobs coming in from CRAB and CMS Connect as pilots, and jobs on the T1_US_FNAL workers. Note that cmslpc jobs submitted through local condor will not run on T1_US_FNAL workers. See the condor system status web page for some command-line methods to find out only the jobs for the local batch.

If you need job numbers for a user who isn't yourself, you will need to specify the account group with the username (the group can be read from the output of the command above):

condor_q -submitter group_cmslpc.username

To cancel a job, type condor_rm followed by the job number; for this example, 60000042.0 and 60000042.1:

condor_rm -name lpcschedd3.fnal.gov 60000042
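
To remove only a single process within that cluster, give the full job id instead, for example:

condor_rm -name lpcschedd3.fnal.gov 60000042.1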

If you don't remember the scheduler name for that job, use the condor_q command from above; it will tell you which scheduler your job was submitted from.

If you want to remove all your jobs on all the schedulers, you would have to do the following:
condor_rm -name lpcschedd3.fnal.gov -all; condor_rm -name lpcschedd4.fnal.gov -all; condor_rm -name lpcschedd5.fnal.gov -all

If you wish to see the end of a currently running job's stdout, you can use the condor_tail command together with the job number. The result gives you ~20 lines of output; it is cut here for space in this example:

condor_tail 60000042.1

== CMSSW: 09-Mar-2019 22:26:32 CST  Successfully opened file root://cmseos.fnal.gov//store/user/username/myfile.root
== CMSSW: Begin processing the 1st record. Run 1, Event 254, LumiSection 6 at 09-Mar-2019 22:30:12.001 CST
== CMSSW: Begin processing the 101st record. Run 1, Event 353, LumiSection 8 at 09-Mar-2019 22:30:28.673 CST
== CMSSW: Begin processing the 201st record. Run 1, Event 451, LumiSection 10 at 09-Mar-2019 22:30:32.670 CST

Note that you can also view the stderr instead of the default stdout with the -stderr option, for instance:

condor_tail -stderr 60000042.1

More options can be found with condor_tail -help


Consult the condor batch troubleshooting page for more on how to troubleshoot job problems.
Consult the condor system status web page for more on how to monitor the condor batch system.

Details of the condor submit file (jdl)


universe = vanilla
The universe variable defines an execution environment for your job. In this example we use the vanilla universe, which has the fewest built-in services but also the fewest restrictions. For a complete list of universes and what they do, see the condor user's manual. The cmslpc does NOT support the standard universe, and thus does not have condor_compile.

Executable = sleep.sh
This is the program you want to run. If the program is in the same directory as your batch file, just the name will work, example: yourscript.sh. If it is in a different directory than your batch file then you must give the pathname, example: myscripts/yourscript.sh runs the script yourscript.sh located in the directory myscripts.

  • Note that we do not use /bin/sleep as our executable, as system level executables are read from the remote scheduler nodes. The lpcschedd*.fnal.gov nodes run SL7, and your jobs run on SL7 Docker containers.
  • In general, either use a shell script (best because you can add useful comments about the worker node environment), or your own compiled executable
  • The executable you specify in the Executable line in the condor jdl is automatically sent to the worker node, you do not have to specify it in transfer_input_files


should_transfer_files = YES
when_to_transfer_output = ON_EXIT

These options tell condor to transfer files to/from the batch job. If these options are not activated then you must provide input through some other means and extract the output yourself. Examples below show how to handle input/output to/from FNAL EOS within a shell script. Users cannot do direct read/write from the NFS filesystems within a condor job, as NFS is not mounted on the condor worker nodes. See the section further down on this page to learn more about condor I/O and how to best manage it on the cmslpc systems (transfer files to/from FNAL EOS within the shell script).

Output = sleep_$(Cluster)_$(Process).stdout
This directs the standard output of the program (everything that would normally be displayed on the screen) to a file, so that you can read it after the job has finished running. Where you see $(Cluster), condor will substitute the cluster (job) number, and $(Process) will become the process number, in this case 0 or 1.

Error = sleep_$(Cluster)_$(Process).stderr
This is the same as the Output line, except it applies to standard error. This is extremely useful for debugging or figuring out what is going wrong (almost always something). Where you see $(Cluster), condor will substitute the cluster (job) number, and $(Process) will become the process number, in this case 0 or 1.

Log = sleep_$(Cluster)_$(Process).log
The log file contains information about the job in the condor system: the IP address of the computer that processed the job, the times it started and finished, how many attempts were made to start the job, and other such data. It is recommended to use a log file. Where you see $(Cluster), condor will substitute the cluster (job) number, and $(Process) will become the process number, in this case 0 or 1.

Arguments = 60
Here you put any command line arguments for your program; if you have none, exclude this line. In this example the program needs one argument for the number of seconds to wait. This argument tells the program to wait for one minute, and is used within the shell script as $1.
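
If your executable takes more than one argument, list them on the same line separated by spaces; they are then available in a shell script as $1, $2, and so on. A sketch with hypothetical argument values:

Arguments = 2018 MC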

Queue 2
This is how many times you want to run the program; without this line the program will not run. The processes are numbered starting at zero, so in this example they will be 0 and 1. In case you only want one job, use Queue 1.

Details of the condor shell script (sh)


Note that a shell script is used here because we are using a system executable. When condor creates the job on the worker node, it transfers the executable from the scheduler, lpcschedd*.fnal.gov (where * is a number). The scheduler nodes run SL7, so they would transfer an SL7 system executable even if you wish to run in a different operating system Singularity container on the worker node. A shell script will use the proper operating system for the container.

#!/bin/bash
This line is required at the start of all shell scripts within the condor refactor, to tell the Docker container what type of shell to use. You cannot have anything before this line, not even comments (#). The errors seen when this line is missing are described on the troubleshooting page.

set -x
This line is optional. It has the benefit of reporting to you in your .stderr file the contents of the shell script that the condor worker node is running.

# Sleep
A line which starts with # is a comment and is ignored by the shell script.

sleep $1
The system executable sleep command is used with the argument $1, which is passed from the Arguments = 60 line of the sleep-condor.jdl file.

echo "##### HOST DETAILS #####"
echo "I slept for $1 seconds on:"
hostname
date
These echo, hostname, and date commands print details about the worker node the job ran on and when it finished, which can be useful when debugging.

How do I manage file input/output from within Condor?

Overall input/output guidelines

  • Input to condor from NFS (the automatic file transfer sandbox) is limited to 1GB (as of Nov. 20, 2019)
  • Your running job is limited to 40GB of disk usage on the condor worker node
  • Do not add disk requirements in your condor.jdl file unless you are sure you need more as it affects your priority and books resources that aren't used
  • Condor creates its working area on the condor worker node in the environment variable _CONDOR_SCRATCH_DIR (see the snippet after this list)
  • Note that condor will automatically transfer individual files (not directories) from your job on the worker node to your job submission area on NFS and possibly overload NFS unless you specify otherwise, or remove the files from the condor scratch area in your shell script. Wherever possible only transfer files to FNAL EOS within the script.
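
To illustrate the last two bullets, here is a sketch of a few lines you might put in your shell script (the CMSSW examples further down on this page use the same cd):

echo "condor scratch area: ${_CONDOR_SCRATCH_DIR}"   # the condor working area on the worker node
cd ${_CONDOR_SCRATCH_DIR}
# ... your work goes here ...
rm -f *.root   # remove anything you do not want condor to copy back to your NFS submission area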

EOS file input and/or output

The FNAL EOS filesystem is accessible on the condor worker nodes through xrootd. It is not mounted as the EOS FUSE mount on the lpc schedulers (lpcschedd*.fnal.gov). Therefore, we cannot use condor file I/O for EOS, and must instead do that I/O ourselves within the condor batch scripts on the condor worker node. Recall that EOS works best with individual files of 1-5GB in size.
  • Input: a sketch follows the output example below
  • During the job:
  • Output
    • This example, to be used in a bash .sh script, will loop over all root files in a single directory and copy them to a user-specified EOS directory, removing them from the local condor working area after they are transferred.
      ### Now that the cmsRun is over, there is one or more root files created
      echo "what directory am I in?"
      pwd
      echo "List all root files = "
      ls *.root
      echo "List all files"
      ls -alh
      echo "*******************************************"
      OUTDIR=root://cmseos.fnal.gov//store/user/username/MyCondorOutputArea/
      echo "xrdcp output for condor to "
      echo $OUTDIR
      for FILE in *.root
      do
        echo "xrdcp -f ${FILE} ${OUTDIR}/${FILE}"
        echo "${FILE}" 
        echo "${OUTDIR}"
        xrdcp -f ${FILE} ${OUTDIR}/${FILE} 2>&1
        XRDEXIT=$?
        if [[ $XRDEXIT -ne 0 ]]; then
          rm *.root
          echo "exit code $XRDEXIT, failure in xrdcp"
          exit $XRDEXIT
        fi
        rm ${FILE}
      done
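
    For input, a sketch along the same lines would copy a hypothetical input file from your EOS area into the condor scratch area near the start of the shell script, again with xrdcp:

      ### Hypothetical example: fetch an input file from EOS before running
      INDIR=root://cmseos.fnal.gov//store/user/username/MyCondorInputArea
      xrdcp -f ${INDIR}/myinput.root .
      XRDEXIT=$?
      if [[ $XRDEXIT -ne 0 ]]; then
        echo "exit code $XRDEXIT, failure in xrdcp of input"
        exit $XRDEXIT
      fi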
      

NFS file input and/or output

  • Condor has options available to automatically transfer input files to the worker where the job runs and then copy any output files back to the directory you submit from (the output file copying happens by default)
  • You are limited to 1GB input transfer through condor from local NFS directories
  • Any large files should be transferred to/from EOS, as described above; also compress your CMSSW area and exclude caches and large files even when using EOS
  • The options for telling condor to copy files into and out of your job are:
    • Should_Transfer_Files = YES
      Transfer_Input_Files = file1, file2
      Transfer_Output_Files = outfile.root
      WhenToTransferOutput = ON_EXIT 
      
    • In addition, you need to make sure you use the correct pathname for input files; output files will be transferred back to the directory you submitted jobs from (see the sketch after this list). It is best to transfer large files to FNAL EOS using xrdcp instead.
    • Beware that if you do not specify which files are to be transferred out, you may transfer large core dumps, tarballs, and other files, which, if this happens in too many jobs in parallel, will overload and freeze up the FNAL NFS system.
    • Please take into account that the NFS disks are not mounted on the condor worker nodes as of Oct. 1, 2017
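
    For instance, a sketch with a hypothetical small input file kept in a subdirectory mylists of your submission directory (the named output file is copied back to the submission directory on exit):

      Should_Transfer_Files = YES
      Transfer_Input_Files = mylists/lists.txt
      Transfer_Output_Files = outfile.root
      WhenToTransferOutput = ON_EXIT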



    How do I compress (tar) my custom CMSSW so that it's small and does not overload the network when transferred to many jobs at the same time?

    Simple compression (tar)

    Example (use your own CMSSW)
    tar -zcvf CMSSW_10_6_4.tgz CMSSW_10_6_4
    

    Exclude large files from your tar, like root files, caches, etc...

    • Note that you can exclude large files for instance from your tar with the following argument:
      --exclude="Filename*.root"
    • Note that you can exclude CMSSW caches for instance with a command like this one:
      tar --exclude-caches-all --exclude-vcs -zcf CMSSW_10_6_4.tar.gz -C CMSSW_10_6_4/.. CMSSW_10_6_4 --exclude=src --exclude=tmp
    • When using --exclude-caches-all, you should mark directories you want to exclude with a CACHEDIR.TAG file; see the tar documentation for more information and examples
    • Tips: always check the size of your tar.gz file with ls -alh after making it (see the sketch after this list)
    • Common errors: Beware of tarring your CMSSW area and writing the tar file into that same CMSSW area. Beware of large .git directories. Do not save the condor output and/or log files (.stdout, .stderr, .log, .root) inside the CMSSW area you are tarring.
    • WARNING: Large files being transferred from NFS disk ~/nobackup to many parallel condor jobs will cause Input/Output errors on the condor worker nodes
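
    Putting these tips together, a typical sequence might look like the following sketch (the EOS destination matches the hypothetical user area used in the examples below; adjust it to your own area):

    tar --exclude-caches-all --exclude-vcs --exclude="*.root" -zcf CMSSW_10_6_4.tgz CMSSW_10_6_4
    ls -alh CMSSW_10_6_4.tgz    # always check the size before shipping it to many jobs
    xrdcp -f CMSSW_10_6_4.tgz root://cmseos.fnal.gov//store/user/username/CMSSW_10_6_4.tgz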

    How do I run a condor job which sets up a bare CMSSW environment?

    In your shell script (bash_condor.sh), have the following content. Note that some of the early lines are there to report more about the worker node that the particular job runs on. A sketch of a matching jdl file follows the script.
    
    #!/bin/bash
    echo "Starting job on " `date` #Date/time of start of job
    echo "Running on: `uname -a`" #Condor job is running on this node
    echo "System software: `cat /etc/redhat-release`" #Operating System on that node
    source /cvmfs/cms.cern.ch/cmsset_default.sh 
    scramv1 project CMSSW CMSSW_10_6_4 # cmsrel is an alias not on the workers
    ls -alrth
    cd CMSSW_10_6_4/src/
    eval `scramv1 runtime -sh` # cmsenv is an alias not on the workers
    echo $CMSSW_BASE "is the CMSSW we created on the local worker node"
    cd ${_CONDOR_SCRATCH_DIR}
    pwd
    echo "Arguments passed to the job, $1 and then $2: "
    echo $1
    echo $2
    ### cmsRun mycode.py $1 $2
    
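    A jdl to go with this script might look like the following sketch, modeled on the sleep example earlier on this page (the two argument values are hypothetical placeholders for whatever your script expects as $1 and $2):

    universe = vanilla
    Executable = bash_condor.sh
    should_transfer_files = YES
    when_to_transfer_output = ON_EXIT
    Output = bash_condor_$(Cluster)_$(Process).stdout
    Error = bash_condor_$(Cluster)_$(Process).stderr
    Log = bash_condor_$(Cluster)_$(Process).log
    Arguments = arg1 arg2
    Queue 1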

    How do I run a condor job which sets up a custom compiled CMSSW environment?

    In your shell script (bash_condor.sh), have the following content. Note that some of the early lines are there to report more about the worker node that the particular job runs on.
    
    #!/bin/bash
    echo "Starting job on " `date` #Date/time of start of job
    echo "Running on: `uname -a`" #Condor job is running on this node
    echo "System software: `cat /etc/redhat-release`" #Operating System on that node
    # bring in the tarball you created before with caches and large files excluded:
    xrdcp -s root://cmseos.fnal.gov//store/user/username/CMSSW_10_6_4.tgz .
    source /cvmfs/cms.cern.ch/cmsset_default.sh 
    tar -xf CMSSW_10_6_4.tgz
    rm CMSSW_10_6_4.tgz
    cd CMSSW_10_6_4/src/
    scramv1 b ProjectRename # this handles linking the already compiled code - do NOT recompile
    eval `scramv1 runtime -sh` # cmsenv is an alias not on the workers
    echo $CMSSW_BASE "is the CMSSW we have on the local worker node"
    cd ${_CONDOR_SCRATCH_DIR}
    pwd
    echo "Arguments passed to the job, $1 and then $2: "
    echo $1
    echo $2
    ### cmsRun mycode.py $1 $2
    

    How do I run SL6 (slc6) jobs or Singularity jobs on condor batch nodes?

    Singularity
    Note: For all Run2 analyses, CMSSW software versions should be working in SL7, so for most users you do NOT need Singularity containers.
    Keep in mind that batch singularity may not work the same as it does interactively (this is true of the cmslpc-cc6 command), so develop your singularity condor jobs with single-job tests until you have them working properly (unit testing). Note: The HTCondor user working directory (_CONDOR_SCRATCH_DIR) is mounted under /srv within the container, and environment variables are correctly updated in the container based on the new path.
    Some documentation about CMSSW singularity used in software development can be found at this link: http://cms-sw.github.io/singularity.html. A one line command usable for cmslpc can be found at the Setup Software: Singularity link. Keep in mind that the CMSSW development singularity containers are very large and intended to be called from the command line. We use the smaller Singularity containers which have been developed for MC production: /cvmfs/singularity.opensciencegrid.org/cmssw/cms:rhel6

    • Send the Singularity Image via the Condor ClassAd
      In the condor jdl file (the one you condor_submit job.jdl), you can add the cvmfs reference to the SingularityImage. Keep in mind that transfers from NFS are limited, and if the same file is accessed by many individual condor jobs it can severely degrade the NFS (nobackup) area, so it is best to load a singularity image from /cvmfs. Add this line to the jdl (see the sketch below):
      +SingularityImage = "/cvmfs/singularity.opensciencegrid.org/cmssw/cms:rhel6"
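
      For concreteness, a sketch of a complete jdl using this line, based on the sleep example from earlier on this page:

      universe = vanilla
      Executable = sleep.sh
      +SingularityImage = "/cvmfs/singularity.opensciencegrid.org/cmssw/cms:rhel6"
      should_transfer_files = YES
      when_to_transfer_output = ON_EXIT
      Output = sleep_$(Cluster)_$(Process).stdout
      Error = sleep_$(Cluster)_$(Process).stderr
      Log = sleep_$(Cluster)_$(Process).log
      Arguments = 60
      Queue 1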

    Note: Singularity can be run interactively with commands such as cmssw-cc6. Documentation about CMSSW singularity can be found at this link: http://cms-sw.github.io/singularity.html


    How do I troubleshoot my condor job problems?

    In a separate page: Condor batch system troubleshooting



    Status and monitoring of condor batch jobs

    A separate page has the information for Condor status and monitoring



    Advanced condor topics (multicore, more memory, etc.)

    Advanced condor topics such as: higher memory, more cpu, more disk space, and partitionable slots can be found in the separate Batch System Advanced Topics web page.


