
User Software and Computing

Computing Environment Setup : Software Setup

General CMS Software Environment

The CMS software environment is set by sourcing the setup script that matches your login shell.

In tcsh, csh:

source /cvmfs/cms.cern.ch/cmsset_default.csh

In bash, sh:

source /cvmfs/cms.cern.ch/cmsset_default.sh

This will set general CMS software environment variables, extend the user's $PATH to include CMS-specific utilities and tools, and define aliases used in the CMS software projects.
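Which script to source follows from your login shell. A minimal bash sketch of the choice — the shell-detection logic here is illustrative, not part of the official setup:

```shell
# Pick the matching cmsset_default script for the current login shell.
# $SHELL holds the login shell; the case statement is an illustrative sketch.
case "$(basename "${SHELL:-/bin/sh}")" in
  csh|tcsh) setup_script=/cvmfs/cms.cern.ch/cmsset_default.csh ;;
  *)        setup_script=/cvmfs/cms.cern.ch/cmsset_default.sh  ;;
esac
echo "source $setup_script"
```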

Note: if you wish to change your default login shell, follow the instructions later on this page to open a ServiceNow request.

Platform specific environment

SCRAM automatically sets the correct architecture based on the version of Scientific Linux installed on a node. The default (which may change) is slc6_amd64_gcc472; you only need to redefine the architecture if you want something other than the default. For a more detailed example, see the CMS WorkBook. For example, to access newer releases, do the following:

In tcsh:

setenv SCRAM_ARCH slc6_amd64_gcc630

Or in bash:

export SCRAM_ARCH=slc6_amd64_gcc630
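The architecture string is built from three underscore-separated fields — OS release, CPU architecture, and compiler. Splitting one apart in bash makes the convention explicit (the string below is the example from the text):

```shell
# Split a SCRAM architecture string into its three fields
# (the standard os_cpu_compiler naming convention).
arch=slc6_amd64_gcc630
IFS=_ read -r os cpu compiler <<< "$arch"
echo "OS=$os CPU=$cpu compiler=$compiler"   # prints: OS=slc6 CPU=amd64 compiler=gcc630
```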

Build work area and set runtime environment

Instructions for building your work area are described in the CMS WorkBook twiki. Keep in mind that you will probably want to work in your ~/nobackup/ area, as it has more space available than your home directory.

For example, cmsrel CMSSW_9_3_2 creates a working area; this only needs to be done once per CMSSW version. You will need to be within the CMSSW_9_3_2/src directory to set the runtime environment.

A runtime environment for applications is generated by the scramv1 runtime -[c]sh command, executed in the area associated with a particular version of the product. This may be a development area created by a user or a public release area. The output of the command differs between versions, and the runtime environment itself is not set automatically. To set it, the user needs to evaluate the result of the command above using the cmsenv alias, which is available only after sourcing /cvmfs/cms.cern.ch/cmsset_default.[c]sh.
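Putting the pieces together, a typical sequence looks like the sketch below (written for bash; cmsrel and cmsenv are the aliases defined by cmsset_default.sh, and cmsenv evaluates the scramv1 runtime output for you):

```shell
# One-time per release: create the work area (in ~/nobackup for space).
source /cvmfs/cms.cern.ch/cmsset_default.sh
cd ~/nobackup
cmsrel CMSSW_9_3_2

# Every new shell: enter the src directory and set the runtime environment.
cd ~/nobackup/CMSSW_9_3_2/src
cmsenv          # evaluates the output of: scramv1 runtime -sh
```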



Version control for CMSSW is handled through GitHub. This link describes how to set up and access the CMSSW GitHub.

Additionally, the 2017 GitHub HATS at the LPC covers how to use GitHub with CMSSW.

Login shell

The default login shell at the cmslpc cluster is tcsh.

To find out what your current shell is, use the command:

[username@cmslpc37 ~]$ echo $0

To permanently change your default login shell, open a ServiceNow General request, being sure to:
  • Put in your experiment as E-892/919 (CMS)
  • Specify which shell you want to change to (bash, for instance)
  • Specify that this is for the cmslpc cluster

Note that changing your shell manually may mean that default EOS aliases are not processed.

LCG software and standalone ROOT

Many additional software packages are available through the LHC Computing Grid (LCG). For more information on what is available, consult the LCG list of packages for release 89 and the PersistencyReleaseNotes#LCG_89 twiki. To set up the software from cvmfs, do the following (a specific example version is given; substitute your preferred LCG_** release):

In tcsh:

[username@cmslpc37 ~]$ source /cvmfs/sft.cern.ch/lcg/views/LCG_89/x86_64-slc6-gcc62-opt/setup.csh

In bash:

[username@cmslpc37 ~]$ source /cvmfs/sft.cern.ch/lcg/views/LCG_89/x86_64-slc6-gcc62-opt/setup.sh

Note that this may set different versions of python and ROOT than your typical CMSSW release, so you may not want to use this command in the same shell as a CMSSW software environment. You can also configure a standalone ROOT (not in CMSSW); be sure your gcc compiler is a similar version. Note that the LCG environment above will already pick up a standalone ROOT.

In tcsh:

[username@cmslpc37 ~]$ source /cvmfs/sft.cern.ch/lcg/releases/ROOT/6.10.02-19565/x86_64-slc6-gcc62-opt/bin/thisroot.csh

In bash:

[username@cmslpc37 ~]$ source /cvmfs/sft.cern.ch/lcg/releases/ROOT/6.10.02-19565/x86_64-slc6-gcc62-opt/bin/thisroot.sh

You can find similar versions for different architectures, for instance on a SL7 node, use: x86_64-centos7-gcc7-opt
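The platform piece of these paths changes with the node's OS. A hypothetical helper that picks the platform string from the OS major version — the two strings are the ones quoted in the text, while the hard-coded version and the mapping itself are illustrative assumptions:

```shell
# Map an OS major version to the LCG platform directory name.
# os_major would normally be parsed from /etc/redhat-release on the node;
# it is hard-coded here for illustration.
os_major=7
case "$os_major" in
  6) platform=x86_64-slc6-gcc62-opt ;;
  7) platform=x86_64-centos7-gcc7-opt ;;
esac
echo "$platform"   # prints: x86_64-centos7-gcc7-opt
```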


  • Most versions of CMSSW come with Python 2; CMSSW_10_1_X_2018-03-08-1100 comes with both python (2) and python3.
  • However, if you want to run python3 outside CMSSW, you can pick it up from cvmfs by running python3.6 directly from this area. Note that this does not change your PYTHONPATH, which you may find useful:

    [username@cmslpc37 ~]$ /cvmfs/sft.cern.ch/lcg/releases/Python/3.6.3-c2eb8/i686-slc6-gcc49-opt/bin/python3.6

  • Or prepend the bin directory to your PATH (tcsh example given):

    [username@cmslpc37 ~]$ setenv PATH /cvmfs/sft.cern.ch/lcg/releases/Python/3.6.3-c2eb8/i686-slc6-gcc49-opt/bin:${PATH}

    [username@cmslpc37 ~]$ python3

  • Alternately, you can get it by setting up the complete LCG Python3 environment, for example:

    In tcsh:

    [username@cmslpc37 ~]$ source /cvmfs/sft.cern.ch/lcg/views/LCG_92python3/x86_64-slc6-gcc62-opt/setup.csh

    In bash:

    [username@cmslpc37 ~]$ source /cvmfs/sft.cern.ch/lcg/views/LCG_92python3/x86_64-slc6-gcc62-opt/setup.sh

    Then you can run

    [username@cmslpc37 ~]$ python3
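When prepending to PATH as above, note that it is the bin directory, not the python3.6 binary itself, that belongs on PATH. A minimal bash demonstration, using a scratch directory to stand in for the cvmfs path:

```shell
# Demonstrate PATH precedence with a scratch directory standing in
# for the cvmfs .../bin directory.
dir=$(mktemp -d)
printf '#!/bin/sh\necho fake-python3\n' > "$dir/python3"
chmod +x "$dir/python3"
PATH="$dir:$PATH"            # prepend the *directory*
command -v python3           # resolves to $dir/python3
python3                      # prints: fake-python3
```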


Fermilab LPC has its own cvmfs area: /cvmfs/cms-lpc.opensciencegrid.org
  • Useful scripts synchronized from the FNALLPC github lpc-scripts area are in /cvmfs/cms-lpc.opensciencegrid.org/FNALLPC/lpc-scripts
  • LPC Collaborative group software can be found in /cvmfs/cms-lpc.opensciencegrid.org/group
  • GPU software (under development) can be found in /cvmfs/cms-lpc.opensciencegrid.org/sl7/gpu, with Setup.csh and Setup.sh to set up the working environment for tcsh and bash shells, respectively.


For further references about using scram commands, please consult the CMS WorkBook SetComputerNode. For questions concerning this page or the CMS software environment at FNAL, consult the LPC Computing Get Help list of resources.

Webmaster | Last modified: Monday, 26-Mar-2018 10:40:54 CDT