Computing Environment Setup : Software Setup
General CMS Software Environment
The CMS software environment is set by sourcing the environment setup script appropriate for your login shell.
In bash, sh:
source /cvmfs/cms.cern.ch/cmsset_default.sh
In tcsh, csh:
source /cvmfs/cms.cern.ch/cmsset_default.csh
This will set general CMS software environment variables, extend the user's $PATH to include CMS specific utilities and tools, and define aliases used in the CMS software projects.
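To quickly verify that the environment is set, you can check that the CMS tools are on your PATH and that the cmsenv alias is defined (a minimal sketch, assuming bash; the exact paths reported will vary):
[username@cmslpc333 ~]$ type scram     # should resolve to a CMS-provided command under /cvmfs
[username@cmslpc333 ~]$ type cmsenv    # should be reported as an alias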
Note: if you wish to change your default login shell, follow the instructions later on this page to open a ServiceNow Request.
Platform specific environment
SCRAM automatically sets the correct architecture based on the operating system installed on a node. The default (which may change) is el9_amd64_gcc12. You only need to redefine the architecture if you want to change the default. For a more detailed example, see the CMS WorkBook.
For example, to access newer releases do the following:
In bash:
export SCRAM_ARCH=el9_amd64_gcc12
Or in tcsh:
setenv SCRAM_ARCH el9_amd64_gcc12
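To confirm which architecture is in effect, and which CMSSW releases are available for it, a quick check (a sketch; the scram list output depends on the architecture) is:
[username@cmslpc333 ~]$ echo $SCRAM_ARCH     # architecture SCRAM will use
[username@cmslpc333 ~]$ scram list CMSSW     # CMSSW releases available for that architecture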
Note that a CMSSW work area that is set up in an slc7 environment will NOT work on Alma9 (el9) nodes, such as cmslpc-el9.fnal.gov. To run on el9 you must make a new CMSSW work area with an appropriate el9 SCRAM_ARCH; you can then copy over your code and recompile with scram b. Alternately, slc7 software can run in an slc7 apptainer container (see the Apptainer (Singularity) section below for an example).
Build work area and set runtime environment
To build your work area, follow the instructions in the CMS WorkBook twiki. Keep in mind that you will probably want to work in your ~/nobackup/ area, as it has more space available than your home directory.
For example, cmsrel CMSSW_13_3_1_patch1 creates a work area; you only need to do this once per CMSSW version. You will need to be within the CMSSW_13_3_1_patch1/src directory to set up the runtime environment.
A runtime environment for applications is generated by the scramv1 runtime -[c]sh command, executed in the area associated with a particular version of a product. That may be a development area created by a user, or a public release area. The output of that command differs between versions, and the runtime environment itself is not set automatically. To set it, the user needs to evaluate the result of the command above using the aliased cmsenv command, which is available only after sourcing /cvmfs/cms.cern.ch/cmsset_default.[c]sh.
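Putting the steps above together, a typical bash session to create a work area and set the runtime environment might look like the following sketch (using the example release and architecture from this page; adjust to your needs):
source /cvmfs/cms.cern.ch/cmsset_default.sh   # general CMS environment, once per login
export SCRAM_ARCH=el9_amd64_gcc12             # only needed if you want a non-default architecture
cd ~/nobackup                                 # more space available here than in your home area
cmsrel CMSSW_13_3_1_patch1                    # create the work area, once per CMSSW version
cd CMSSW_13_3_1_patch1/src
cmsenv                                        # set the runtime environment for this release
scram b                                       # build any code you add under src/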
slc7, el8, el9, and "non-production version"
Note that a CMSSW work area that is set up in an slc7 environment will NOT work on el9 nodes, such as cmslpc-el9.fnal.gov. To run on el9 you must make a new CMSSW work area with an appropriate el9 SCRAM_ARCH; you can then copy over your code and recompile with scram b.
If you get the following warning (example), you may safely ignore it. It means that the particular CMSSW version is available on el9, but the "production architecture" (used to produce central MC samples, ultra-legacy, etc.) is el8. You should continue to use the el8 version and ignore the warning.
WARNING: Developer's area is created for non-production architecture el9_amd64_gcc12.
Production architecture for this release is el8_amd64_gcc12
Note that this warning example means that you have set SCRAM_ARCH to a different operating system than the machine you are logged into. You should ensure that SCRAM_ARCH and the CMSSW area are both created for the operating system of the node you are on. If you require a different operating system, use an apptainer container to run it.
WARNING: You are trying to use SCRAM architecture 'slc7' on host with operating system 'el9'.
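If you see this warning, a quick way to compare the operating system of the node with the architecture your shell is set up for (a minimal sketch; /etc/redhat-release is present on the EL-based cmslpc nodes) is:
[username@cmslpc333 ~]$ cat /etc/redhat-release   # operating system of the node you are logged into
[username@cmslpc333 ~]$ echo $SCRAM_ARCH          # architecture your current shell is set up for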
GitHub
CMSSW version control is handled through GitHub. This link describes how to set up and access the CMSSW GitHub. Additionally, there is the GitHub HATS at the LPC to learn more about how to use GitHub with CMSSW.
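As a rough sketch of the usual workflow (assuming the standard cms-git-tools commands described in the linked documentation; the package name is only an example), checking out and rebuilding a CMSSW package looks like:
cd CMSSW_13_3_1_patch1/src
cmsenv
git cms-init                     # one-time git setup of this CMSSW area
git cms-addpkg FWCore/Version    # check out an example package to modify
scram b                          # recompile the checked-out code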
Login shell
The default login shell for new users on the cmslpc cluster is bash (as of June 4, 2019); it was tcsh until then. To find out what your current shell is, use the command:
[username@cmslpc333 ~]$ echo $0
-tcsh
- To permanently change your default login shell, use the LPC Service Portal
- Log in with Fermilab single sign-on (to use Kerberos, be sure to do a one-time configuration of your browser per the directions on the sign-on page; otherwise use your Services username/password, which is different from Kerberos)
- Choose the "Modify default shell on CMS LPC nodes" ticket and fill it out
- The ticket will process and close automatically, usually within a minute
- Note: It may take up to 1 business day (FNAL hours) for this change to propagate to all nodes
- Note that changing your shell manually after you log in may not process default EOS aliases unless you use the -l option, like bash -l
- As of Nov. 4, 2019, any new users get a default ~/.bash_profile file with the contents:
# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi
(Accounts created before then may not have a ~/.bash_profile by default.) If your prompt looks like bash-4.2$, you may need to put the above in your .bash_profile, and sometimes also in your .bashrc file. Once fixed, it should look like [username@cmslpc333 ~]$ upon new connections. ~/.bash_profile is best for most uses; ~/.bashrc, for instance, is used with scp. For more on that, there are many web discussions.
Apptainer (Singularity)
Singularity (Apptainer) is available in unprivileged mode on both the cmslpc interactive nodes and the condor batch worker nodes. Users can run an slc7 image on the el9 interactive nodes, for instance: cmssw-el7. Documentation about CMSSW singularity/apptainer can be found at this link: http://cms-sw.github.io/singularity.html
Here is an example of how to use the command to mount directories you may need. Note we do NOT mount the eos fuse mount directory on purpose. This command also chooses the bash shell and acts as if you logged in (loading the ~/.bash_profile):
cmssw-el7 -p --bind `readlink $HOME` --bind `readlink -f ${HOME}/nobackup/` --bind /uscms_data --bind /cvmfs -- /bin/bash -l
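To confirm the container gives you an slc7 environment, you can run a single command inside it using the same wrapper (a sketch based on the command above; the output is only indicative):
cmssw-el7 -- cat /etc/redhat-release   # should report an EL7 / Scientific Linux 7 release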
Here is a link to the software and documentation for the latest Docker/Singularity HATS@LPC
LCG software and standalone ROOT
There are many additional software packages available through the LHC Computing Grid (LCG). For more information on what is available, consult: LCG software information, and you can find a list of packages here: lcginfo.cern.ch. To set up the software from cvmfs, do the following (a specific example version is given; please pick your own preferred LCG_** version).
Note: /cvmfs/sft.cern.ch/lcg/mapfile.txt contains the list of all the possible software/directories:
bash:
[username@cmslpc333 ~]$ source /cvmfs/sft.cern.ch/lcg/views/LCG_105swan/x86_64-el9-gcc13-opt/setup.sh
tcsh:
[username@cmslpc333 ~]$ source /cvmfs/sft.cern.ch/lcg/views/LCG_105swan/x86_64-el9-gcc13-opt/setup.csh
Note that this may set different versions of python and ROOT than your typical CMSSW, so you may wish to NOT use this command in the same shell as a CMSSW software environment. You can also configure a standalone ROOT (not in CMSSW!); be sure your gcc compiler is a similar version (note the LCG environment above will pick up a standalone ROOT). Note that this will NOT work with python, as you want the complete LCG setup in which the appropriate ROOT and python versions are both in your environment; this technique works for ROOT only. Instructions are on the specific ROOT release page.
bash:
[username@cmslpc333 ~]$ source /cvmfs/sft.cern.ch/lcg/app/releases/ROOT/6.30.06/x86_64-almalinux9.3-gcc114-opt/bin/thisroot.sh
tcsh:
[username@cmslpc333 ~]$ source /cvmfs/sft.cern.ch/lcg/app/releases/ROOT/6.30.06/x86_64-almalinux9.3-gcc114-opt/bin/thisroot.csh
For the most recent version of ROOT, browse here:
ls /cvmfs/sft.cern.ch/lcg/releases/LCG_latest/ROOT
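After sourcing one of the LCG setup scripts above, you can check which ROOT and python you picked up; a small sketch (the exact versions depend on the LCG release you chose, and the PyROOT check assumes a full LCG view rather than standalone ROOT):
[username@cmslpc333 ~]$ which root
[username@cmslpc333 ~]$ root-config --version
[username@cmslpc333 ~]$ python3 -c 'import ROOT; print(ROOT.gROOT.GetVersion())'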
Python3
- Python3 is installed by default on all the cmslpc interactive nodes; to find the version, type python3 --version.
- CMSSW code may possibly be associated with a different version. CMSSW_10_1_0 and above come with python (2) and python3. To find out which python you are using, use: which python (see also the sketch after this list).
You can run it with:
[username@cmslpc333 ~]$ python3
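As a quick sketch of how the python3 in your environment changes once you set a CMSSW runtime environment (paths and versions will differ by release; the release name is just the example from this page):
[username@cmslpc333 ~]$ which python3     # system python3 on the interactive node
[username@cmslpc333 ~]$ cd CMSSW_13_3_1_patch1/src
[username@cmslpc333 ~]$ cmsenv
[username@cmslpc333 ~]$ which python3     # python3 shipped with the CMSSW release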
FNAL LPC CVMFS area
Fermilab LPC has its own cvmfs area: /cvmfs/cms-lpc.opensciencegrid.org (see the example listing after the items below)
- Useful scripts synchronized from the FNALLPC github lpc-scripts area are in
/cvmfs/cms-lpc.opensciencegrid.org/FNALLPC/lpc-scripts
- LPC Collaborative group software can be found in
/cvmfs/cms-lpc.opensciencegrid.org/group
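You can browse these areas directly from any node with cvmfs mounted, for example (the exact contents will change over time):
[username@cmslpc333 ~]$ ls /cvmfs/cms-lpc.opensciencegrid.org/FNALLPC/lpc-scripts
[username@cmslpc333 ~]$ ls /cvmfs/cms-lpc.opensciencegrid.org/group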
References
For further references about using scram commands please consult the CMS WorkBook SetComputerNode. For questions concerning this page and CMS software environment at FNAL consult the LPC Computing Get Help list of resources.