
User Software and Computing

Computing Environment Setup : Software Setup

General CMS Software Environment

The CMS software environment is set by sourcing the environment setup script appropriate for your login shell.

In tcsh, csh:

source /cvmfs/cms.cern.ch/cmsset_default.csh

In bash, sh:

source /cvmfs/cms.cern.ch/cmsset_default.sh

This will set general CMS software environment variables, extend the user's $PATH to include CMS-specific utilities and tools, and define aliases used in the CMS software projects.
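As a quick sanity check (a sketch only; exact paths vary by node), you can verify in bash that the script took effect:

which scram    # should resolve to a path under /cvmfs/cms.cern.ch
type cmsenv    # should report cmsenv as an alias or function defined by the script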


Note: if you wish to change your default login shell, follow the instructions later on this page to open a ServiceNow request.

Platform specific environment

SCRAM automatically sets the correct architecture based on the version of Scientific Linux installed on a node. The default (which may change) is slc7_amd64_gcc820. You only need to redefine the architecture if you want to change the default, for example to access newer releases. For a more detailed example, see the CMS WorkBook. To override the default, do the following:

In tcsh:

setenv SCRAM_ARCH slc7_amd64_gcc820

Or in bash:

export SCRAM_ARCH=slc7_amd64_gcc820
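To see which releases exist for a given architecture (a quick check, assuming the standard cvmfs layout), you can list the release area or ask scram directly:

ls /cvmfs/cms.cern.ch/slc7_amd64_gcc820/cms/cmssw/
scram list CMSSW    # releases available for the current SCRAM_ARCH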

Note that CMSSW work areas are architecture-specific: an area set up in an slc6 environment will NOT work on slc7 nodes. See the slc6/slc7 section below for details.

Build work area and set runtime environment

Instructions for building your work area are described in the CMS WorkBook twiki. Keep in mind that you will probably want to work in your ~/nobackup/ area, as it has more space available than your home directory.

For example, cmsrel CMSSW_10_6_14 creates a work area; this needs to be done only once per CMSSW version. You will need to be within the CMSSW_10_6_14/src directory to set up the runtime environment.

A runtime environment for applications is generated by the scramv1 runtime -sh (or -csh) command, executed in the area associated with a particular version of a product. This may be a development area created by a user or a public release area. The output of that command differs between versions, and the runtime environment itself is not set automatically. To set it, the user needs to evaluate the result of the command, most conveniently via the aliased cmsenv command, which is available only after sourcing /cvmfs/cms.cern.ch/cmsset_default.[c]sh. A complete worked example follows below.
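Putting the steps together, a minimal end-to-end sketch in bash (the release name, architecture, and job count are examples; adjust them to your needs):

source /cvmfs/cms.cern.ch/cmsset_default.sh
export SCRAM_ARCH=slc7_amd64_gcc820
cd ~/nobackup
cmsrel CMSSW_10_6_14     # create the work area (once per release)
cd CMSSW_10_6_14/src
cmsenv                   # equivalent to: eval `scramv1 runtime -sh`
scram b -j 4             # build any checked-out code with 4 parallel jobs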

slc6, slc7, and "non-production version"

Note that a CMSSW work area that is set up in an slc6 environment will NOT work on slc7 nodes, such as cmslpc-sl7.fnal.gov. You must make a new CMSSW work area with an appropriate slc7 SCRAM_ARCH to have the code run on slc7; you can then copy over your code and recompile with scram b.

If you get the following warning (example), you may safely ignore it. It means that the particular CMSSW version is available on slc7, but the "production architecture" (used to produce central MC samples, ultra-legacy, etc.) is slc6. You should continue to use the slc7 version and ignore the warning.

WARNING: Release CMSSW_9_4_15 is not available for architecture slc7_amd64_gcc820.
         Developer's area is created for available architecture slc7_amd64_gcc630.
WARNING: Developer's area is created for non-production architecture slc7_amd64_gcc630. Production architecture for this release is slc6_amd64_gcc630.

GitHub

Version control for CMSSW is handled with GitHub. This link describes how to set up and access CMSSW on GitHub.

Additionally, there is the Github HATS at the LPC to learn more about how to use GitHub with CMSSW.
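As an illustration, here is a minimal sketch of checking out and rebuilding a single package with the cms-sw git tools (the release and package names are examples only):

cd CMSSW_10_6_14/src
cmsenv
git cms-init                    # one-time initialization of the git area
git cms-addpkg FWCore/Version   # sparse checkout of a single package
scram b                         # recompile the checked-out code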

Login shell

The default login shell for new users on the cmslpc cluster has been bash since June 4, 2019; before then it was tcsh.

To find out what your current shell is, use the command:

[username@cmslpc117 ~]$ echo $0
-tcsh

  • To permanently change your default login shell, use the LPC Service Portal
    • Login with Fermilab single sign-on (to use Kerberos, be sure to do a one-time configuration of your browser per the directions on the sign-on page; otherwise use your Services username/password, which is different from Kerberos)
    • Choose the "Modify default shell on CMS LPC nodes" ticket and fill it out
    • The ticket will usually process and close automatically within a minute
      • Note: It may take up to 1 business day (FNAL hours) for this change to propagate to all nodes
  • Note that changing your shell manually after you log in may not process default EOS aliases unless you use the -l option, e.g. bash -l
  • As of Nov. 4, 2019, any new user gets a default ~/.bash_profile file with the contents:

    # Source global definitions
    if [ -f /etc/bashrc ]; then
        . /etc/bashrc
    fi

  • Users with accounts made before Nov. 4, 2019 do not have a ~/.bash_profile by default
  • Modifying ~/.bash_profile is best for most uses; note that ~/.bashrc is also sourced by non-login shells such as the one scp uses, so output-producing commands there can break file transfers. For more on that, there are many web discussions; a sketch of a combined setup follows this list.
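For instance, a common combined ~/.bash_profile pattern (a sketch only; apart from the /etc/bashrc block shown above, these lines are illustrative personal additions, not an LPC default):

# ~/.bash_profile -- read by login shells
# Source global definitions (matches the LPC default file above)
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi
# Share interactive settings with non-login shells
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
# Illustrative personal addition: CMS environment setup (silent, so scp still works)
source /cvmfs/cms.cern.ch/cmsset_default.sh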

Singularity

Singularity is available in unprivileged mode on both the cmslpc interactive nodes and the condor batch worker nodes. Users can run an slc6 image on the slc7 interactive nodes, for instance with cmssw-cc6. Documentation about CMSSW singularity can be found at this link: http://cms-sw.github.io/singularity.html
Here is an example of how to use the command to mount the directories you may need. Note that we intentionally do NOT mount the EOS fuse mount directory.

cmssw-cc6  --bind `readlink $HOME` --bind `readlink -f ${HOME}/nobackup/` --bind /cvmfs
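For example (a sketch; the Singularity> prompt and exact output depend on the image and node), once the container starts you can confirm the OS and then exit back to the slc7 host:

[username@cmslpc132 ~]$ cmssw-cc6  --bind `readlink $HOME` --bind `readlink -f ${HOME}/nobackup/` --bind /cvmfs
Singularity> cat /etc/redhat-release    # should report a Scientific Linux 6 release
Singularity> exit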

Here is a link to the software and documentation for the latest Docker/Singularity HATS@LPC

LCG software and standalone ROOT

Many additional software packages are available through the LHC Computing Grid (LCG). For more information on what is available, consult the LCG software information, and you can find a list of packages at lcginfo.cern.ch. Note that /cvmfs/sft.cern.ch/lcg/mapfile.txt contains the list of all the possible software/directories. To set up the software from cvmfs, do the following; a specific example version is given, so pick your own preferred LCG_** release.

bash:

[username@cmslpc132 ~]$ source /cvmfs/sft.cern.ch/lcg/views/LCG_103swan/x86_64-centos7-gcc11-opt/setup.sh

tcsh:

[username@cmslpc132 ~]$ source /cvmfs/sft.cern.ch/lcg/views/LCG_103swan/x86_64-centos7-gcc11-opt/setup.csh
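After sourcing a view, a quick check (paths and versions will vary with the release you picked) confirms that ROOT and python now come from the LCG area:

which root      # should resolve under /cvmfs/sft.cern.ch/lcg/views/...
which python    # likewise; note this may shadow any CMSSW python
root-config --version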

Note that this may set different versions of python and ROOT than your typical CMSSW release, so you may wish NOT to use this command in the same shell as a CMSSW software environment. You can also configure a standalone ROOT (not in CMSSW!); be sure your gcc compiler is a similar version. (The LCG environment above already picks up a standalone ROOT.) Note that the standalone setup will NOT work for python: for python you want the complete LCG setup, in which compatible ROOT and python versions are both in your environment. This technique works for ROOT only. Instructions are on the specific ROOT release page.

bash:

[username@cmslpc132 ~]$  . /cvmfs/sft.cern.ch/lcg/app/releases/ROOT/6.08.04/x86_64-centos7-gcc48-opt/root/bin/thisroot.sh

tcsh:

[username@cmslpc132 ~]$  . /cvmfs/sft.cern.ch/lcg/app/releases/ROOT/6.08.04/x86_64-centos7-gcc48-opt/root/bin/thisroot.csh
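Because the standalone build should roughly match your compiler (a minimal check; the gcc48 tag in the path above is the version to compare against), verify both before serious use:

gcc --version           # host compiler; should be close to the gcc48 in the build tag
root-config --version   # the standalone ROOT just configured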


For the most recent version of ROOT, browse here:
ls /cvmfs/sft.cern.ch/lcg/releases/LCG_latest/ROOT

Python3

  • Python3 is installed by default on all the cmslpc interactive nodes; to find the version, type python3 --version.
  • CMSSW releases may come with their own python: CMSSW_10_1_0 and above come with both python (2) and python3. To find out which python you are using, run which python (see also the check after this list).
  • You can run it with:

    [username@cmslpc137 ~]$ python3
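To confirm exactly which interpreter and version you are getting (a one-line sketch; output varies by node and environment):

[username@cmslpc137 ~]$ python3 -c 'import sys; print(sys.executable, sys.version.split()[0])'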
    

FNAL LPC CVMFS area

Fermilab LPC has its own cvmfs area: /cvmfs/cms-lpc.opensciencegrid.org
  • Useful scripts synchronized from the FNALLPC github lpc-scripts area are in /cvmfs/cms-lpc.opensciencegrid.org/FNALLPC/lpc-scripts
  • LPC Collaborative group software can be found in /cvmfs/cms-lpc.opensciencegrid.org/group
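For example, you can browse what is available directly (contents change as the area is synchronized from GitHub):

ls /cvmfs/cms-lpc.opensciencegrid.org/FNALLPC/lpc-scripts
ls /cvmfs/cms-lpc.opensciencegrid.org/group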

References

For further reference about using scram commands, please consult the CMS WorkBook SetComputerNode page. For questions concerning this page or the CMS software environment at FNAL, consult the LPC Computing Get Help list of resources.
