NOTE: these pages are outdated; the site has moved to https://uscms-software-and-computing.github.io

No. Name Note
1 USCMS Activities  
1.1 Tier-1 Facilities at FNAL  
1.1.1 Tier-1 Development  
1.1.1.1 Extend/develop 2005 dCache and SRM functionality Continue development of the dCache and SRM infrastructure, especially for the service and data challenges.
1.1.1.2 Develop 2005 network infrastructure Evaluate and develop network plan that meets needs and has an adequate safety margin.
1.1.1.3 Evaluate/choose 2005 data disk and system integration Evaluate and select a data disk vendor that provides reliable and affordable data storage
1.1.1.4 Benchmark and choose 2005 worker node platform Benchmarking the 2005 Opteron nodes against the current Intel Nodes
1.1.1.5 Continue development of 2005 network monitoring tools Provide web based monitoring tools describing network performance and state, including plots showing IO aggregation to sites as well as detailed network flows grouped by protocol.
1.1.1.6 Develop 2006 network infrastructure Evaluate and develop network plan that meets needs and has an adequate safety margin.
1.1.1.7 Evaluate/choose 2006 data disk and system integration Evaluate and select a data disk vendor that provides reliable and affordable data storage
1.1.1.8 Improved monitoring of jobs running on facility Provide better plots of activities, including users and task information, for jobs running on the facilities.
1.1.1.9 Improve Tier-1 deployment of LCG and OSG interfaces Improvements to OSG and LCG gateways on the FNAL Tier1
1.1.1.10 Improved facility monitoring Better facility monitoring that does three things: (a) accurately and completely reports out-of-tolerance situations on a web page for the primary on call to correct, (b) takes programmed palliative actions once a situation has been investigated, (c) pages us for situations that need immediate attention. (A minimal check-loop sketch follows this development list.)
1.1.1.11 Verification of robot technologies Decision to move forward on central CD STK 8500 silo
1.1.1.12 Protect facilities through improved automated shutdown Protect our assets - replicated and automated shutdown of node in any environmentally unsafe situation, and also proactive vigilant security measures.
1.1.1.13 Benchmark and choose 2006 worker node platform Select the best 2006 worker node platform that meets and is within budget
1.1.1.14 Extend/develop 2006 dCache/SRM data movement functionality Continue development of the dCache and SRM including better cost balancing in the dCache, improved reliability and functionality in the replica manager, demonstrated scalability and implementation of all SRM v2.1 features.
1.1.1.15 Continue development of 2006 network monitoring tools Provide network tools allowing easy monitoring of aggregated GE links to the storage nodes.
1.1.1.16 Facility accounting Better accounting and monitoring of computing resources, especially via condor tools. Accurately report numbers and plots.
1.1.1.17 Decision of IPMI vs. Cyclades Decision of IPMI vs. Cyclades for controlling and monitoring worker nodes
1.1.1.18 Improved reload of worker nodes to a base configuration Improve ROCKS deployment by extending central cache to a cache on each rack of workers thereby decreasing reload time. Add automated update of all packages, not just the ones flagged by security releases, to maintain a common and up-to-date OS.
1.1.1.19 Verification of LTO drive technologies Verification that LTO drive deployment meets experiment needs
1.1.1.20 SRM V2 Development Work on SRM V2.2 Interface
1.1.1.21 Verify IBRIX as reliable choice for global user disk Make final decision on IBRIX as the vendor for the major portion of our ~20 TB user disk deployment. Keep other possibilities (already proven but much more expensive choices) open and at a low development level
1.1.1.22 Extend/develop 2007 dCache and SRM functionality Continue development of the dCache and SRM infrastructure
1.1.1.23 Develop 2007 network infrastructure Evaluate and develop network plan that meets needs and has an adequate safety margin.
1.1.1.24 Evaluate/choose 2007 data disk and system integration Evaluate and select a data disk vendor that provides reliable and affordable data storage
1.1.1.25 Benchmark and choose 2007 worker node platform Select the best 2007 worker node platform that meets and is within budget
1.1.1.26 Develop 2008 network infrastructure Evaluate and develop network plan that meets needs and has an adequate safety margin.
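The improved facility monitoring item (1.1.1.10) calls for three behaviours: reporting out-of-tolerance conditions, taking programmed palliative actions, and paging for urgent problems. The sketch below shows one minimal way such a check loop could be structured; the check, threshold, and actions are placeholders, not the monitoring system actually deployed at the Tier-1.

    # Minimal sketch of the three behaviours in 1.1.1.10 (checks and actions are hypothetical).
    import subprocess, time

    CHECKS = {
        # name: (shell command returning a numeric reading, tolerance)
        "root_fs_percent_used": ("df --output=pcent / | tail -1 | tr -d ' %'", 90.0),
    }

    def read_value(cmd):
        out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return float(out.stdout.strip())

    def report(name, value, limit):
        print(f"OUT OF TOLERANCE: {name} = {value} (limit {limit})")   # (a) report for the primary

    def palliative(name):
        print(f"running programmed cleanup for {name}")                # (b) pre-approved action

    def page(name):
        print(f"PAGE: {name} needs immediate attention")               # (c) page for urgent cases

    while True:
        for name, (cmd, limit) in CHECKS.items():
            value = read_value(cmd)
            if value > limit:
                report(name, value, limit)
                palliative(name)
                if value > 1.1 * limit:
                    page(name)
        time.sleep(300)  # re-check every five minutes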
1.1.2 Tier-1 Deployment  
1.1.2.1 Deployment of functional global filesystem A global filesystem (IBRIX) that works as expected. Tests also need to be done to see whether an NFS read-only-exported filesystem (running on a Nexsan array) that can be used for CMS software distribution will scale to 1500 nodes, and what its performance is.
1.1.2.2 Network deployment for 2005 Deployment of CISCO 6509 Switch
1.1.2.3 CPU deployment for 2005 Deployment of 280 Opteron Nodes in 3 batches
1.1.2.4 Data disk deployment for 2005 Deployment of 3 Nexsan arrays
1.1.2.5 User disk deployment for 2005 Deployment of 2 Infortrend arrays
1.1.2.6 Improve diagnosis and routing Improve diagnostic capability for routing user problems to correct person.
1.1.2.7 Network deployment for 2006 Deployment of CISCO 6509 Switch
1.1.2.8 Deployment of modern ssh Deployment of a modern SSH server, and any required gateway for cryptocard users
1.1.2.9 Pass site functional tests and GridCAT monitoring Improvements to OSG and LCG Gateways on the FNAL Tier1
1.1.2.12 CPU deployment for 2006 Deployment of 280 Opteron Nodes in batches
1.1.2.13 SRM V2 Deployment Work on SRM V2.2 Interface
1.1.2.14 Interface Deployment and Debugging Debugging and deployment of grid and facility interfaces. Scaling work
1.1.2.15 Network deployment for 2007 Deployment of CISCO 6509 Switch
1.1.2.16 Data disk deployment for 2007 Deployment of 1.2 PB of disk arrays
1.1.2.17 Data disk deployment for 2007 Deployment of 8 TB of disk arrays
1.1.2.18 CPU deployment for 2007 Deployment of 280 Opteron Nodes in batches
1.1.3 Tier-1 Operations  
1.1.3.1 Ongoing 2005 security tasks  
1.1.3.2 Processing operations 2005  
1.1.3.3 Data disk operations 2005  
1.1.3.4 User disk operations 2005  
1.1.3.5 Tape operations 2005  
1.1.3.6 dCache operations 2005  
1.1.3.7 Ongoing 2005 software upgrades  
1.1.3.8 Tier-1 facility management  
1.1.3.9 User services coordinator 2005  
1.1.3.10 SC3 operations at the Tier-1  
1.1.3.11 SC3 recap and prep  
1.1.3.12 Ongoing 2006 software upgrades  
1.1.3.13 Improved documentation in 2006  
1.1.3.14 Data disk operations 2006  
1.1.3.15 User disk operations 2006  
1.1.3.16 Tape operations 2006  
1.1.3.17 dCache operations 2006  
1.1.3.18 User services coordinator 2006  
1.1.3.19 Processing operations 2006  
1.1.3.20 Tier-1 facility management  
1.1.3.21 Ongoing 2006 security tasks  
1.1.3.22 SRM V2 Operations Work on SRM V2.2 Interface
1.1.3.23 SC4 operations at the Tier-1  
1.1.3.24 Demonstrate ability to reload rack quickly Demonstrate the ability to reload a rack of workers within 20 minutes. Keep all workers identical, at least on a weekly basis.
1.1.3.25 CSA06 operations  
1.1.3.26 CSA06 recap and development  
1.1.3.27 Data Management Operations  
1.1.3.28 Ongoing 2007 security tasks  
1.1.3.29 Ongoing 2007 software upgrades  
1.1.3.30 Data disk operations 2007  
1.1.3.31 User disk operations 2007  
1.1.3.32 Tape operations 2007  
1.1.3.33 dCache operations 2007  
1.1.3.34 User services coordinator 2007  
1.1.3.35 Tier-1 facility management  
1.1.3.36 Processing operations 2007  
1.2 Tier-2 Program  
1.2.1 Tier-2 Development  
1.2.1.1 Architecture Development  
1.2.1.1.1 Initial Deployment of Resilient dCache at UCSD Installation of new resilient dCache core at UCSD
1.2.1.1.2 Resilient dCache Installation Day for Tier-2 Centers Roll out Resilient dCache on US-CMS Tier-2 Centers
1.2.1.1.3 Reconciliation of Configuration Systems For better Tier-2 configuration support, we will attempt to reconcile the Tier-2 configuration systems
1.2.1.1.4 Evaluation of fine-grained authentication at UCSD Evaluation of fine-grained authentication; needs a write-up.
1.2.1.1.5 Development of dCache behind firewall at UCSD Development of dCache behind firewall; needs a write-up.
1.2.1.1.6 Publish Facility Configuration and Architecture To offer Grid support through FNAL, it is important for everyone to understand the details of the site configuration. This will be done on the site Web pages.
1.2.1.1.7 Documentation of local architectures Provide document describing implementation of Tier-2 facility at local site, including interaction with existing infrastructure and IT organization.
1.2.1.2 Interface Development  
1.2.1.2.1 SRM Transfer Demonstrations at UCSD Successfully demonstrate SRM-to-SRM transfers between FNAL and the UCSD Tier-2 center; srmcp should succeed with the site as both a source and a destination, and third-party replication should also succeed
1.2.1.2.2 SRM Transfer Demonstrations at UFL Successfully demonstrate SRM-to-SRM transfers between FNAL and the UFL Tier-2 center; srmcp should succeed with the site as both a source and a destination, and third-party replication should also succeed
1.2.1.2.3 SRM Transfer Demonstrations at Caltech Successfully demonstrate SRM-to-SRM transfers between FNAL and the Caltech Tier-2 center; srmcp should succeed with the site as both a source and a destination, and third-party replication should also succeed
1.2.1.2.4 SRM Transfer Demonstrations at UW Successfully demonstrate SRM-to-SRM transfers between FNAL and the UW Tier-2 center; srmcp should succeed with the site as both a source and a destination, and third-party replication should also succeed
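For items 1.2.1.2.1-1.2.1.2.4, the demonstration amounts to running the dCache srmcp client in both directions between the Tier-1 and Tier-2 SRM endpoints. The sketch below shows the shape of such a test driven from Python; the hostnames, ports, and pnfs paths are placeholders, not the real site endpoints.

    # Sketch of an SRM-to-SRM copy test using srmcp (endpoints below are placeholders).
    import subprocess

    TIER1 = "srm://tier1-srm.example.gov:8443/srm/managerv1?SFN=/pnfs/example.gov/data/demo/test.root"
    TIER2 = "srm://tier2-srm.example.edu:8443/srm/managerv1?SFN=/pnfs/example.edu/data/demo/test.root"

    def srmcp(src, dst):
        """Invoke the dCache SRM copy client; raises CalledProcessError on failure."""
        subprocess.run(["srmcp", src, dst], check=True)

    srmcp(TIER1, TIER2)   # Tier-1 as source, Tier-2 as destination
    srmcp(TIER2, TIER1)   # Tier-2 as source; both directions plus third-party copies must succeed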
1.2.1.3 Hardware Development  
1.2.1.3.1 Evaluation of dual-core Opteron systems at UNL Evaluation of Opteron dual-core chips for Tier-2 use. Complete when writeup is available.
1.2.1.3.2 Evaluation of Irwindale systems at UNL Evaluation of Irwindales from Intel for Tier-2 use. Complete when writeup is available.
1.2.1.4 Environment Development  
1.2.1.4.1 Development of knowledge base at UCSD Development of knowledge base on the environment at UCSD
1.2.1.4.2 Validation jobs for Tier-2 facilities Development of a suite of validation jobs that can be submitted to Tier-2 facilities to test functionality from perspective of average users. No one yet assigned to develop this, but will probably try to leverage off SC3 work.
1.2.2 Tier-2 Deployment  
1.2.2.1 Facility Deployment  
1.2.2.1.1 2005 Deployment The US-CMS Tier-2 expectations in terms of facility scope in 2005 are approximately 20TB of disk based storage deployed as a dCache storage element and 60-80 dual CPU worker nodes. The goal in 2005 for US-CMS Facilities is 20% of the total (combination of complexity and capacity)
1.2.2.1.1.1 Operational Storage at UW Arrival, deployment, and commissioning of 50TB of SATA disk based storage for deployment as dCache Pools
1.2.2.1.1.2 Operational Processing Resources at UW Arrival, deployment, and commissioning of 200 dual Xeon systems
1.2.2.1.1.3 Operational Processing Resources at UCSD Arrival, deployment, and commissioning of 94+20 dual CPU systems
1.2.2.1.1.4 Operational Storage at UNL Arrival, deployment, and commissioning of 16TB (functional out of 19.2 TB) of SATA disk based storage for deployment as dCache Pools
1.2.2.1.1.5 Operational Processing Resources at UNL Arrival, deployment, and commissioning of 64 dual CPU dual core AMD Opteron systems
1.2.2.1.1.6 Operational Processing Resources at Purdue Arrival, deployment, and commissioning of 64+50 dual CPU Intel Xeon systems
1.2.2.1.1.7 Operational Storage at Purdue Arrival, deployment, and commissioning of 25TB of SATA disk based storage for deployment as dCache Pools
1.2.2.1.1.8 Operational Storage at Caltech Arrival, deployment, and commissioning of 40TB of SATA disk based storage for deployment as dCache Pools
1.2.2.1.1.9 Operational Processing Resources at UFL Arrival, deployment, and commissioning of 45+62 dual Opteron systems
1.2.2.1.1.10 Operational Processing Resources at Caltech Arrival, deployment, and commissioning of 40 dual Opteron CPU systems
1.2.2.1.1.11 Operational Processing Resources at MIT Arrival, deployment, and commissioning of 64 dual CPU systems
1.2.2.1.1.12 Operational Storage at UFL Arrival, deployment, and commissioning of 73 TB of SATA disk based storage for deployment as dCache Pools
1.2.2.1.1.13 Operational Storage at MIT Arrival, deployment, and commissioning of 20TB of MIT disk based storage for deployment as dCache Pools
1.2.2.1.1.14 Operational Storage at UCSD Arrival, deployment, and commissioning of 37.5 TB of SATA disk based storage for deployment as dCache Pools
1.2.2.1.2 2006 Deployment The US-CMS Tier-2 expectations in terms of facility scope in 2006 are approximately 40 additional TB of disk based storage deployed as a dCache storage element and 60-80 additional dual CPU worker nodes. The goal in 2006 for US-CMS Facilities is 50% of the total (combination of complexity and capacity). Total resources deployed should be about 500 kSI2000 of CPU and 60-100 TB of storage. The 2006 procurement schedule should be planned within the context of CSA06.
1.2.2.1.3 2007 Deployment The US-CMS Tier-2 expectations in terms of facility scope in 2007 are the resources required for start-up. This corresponds to 200 TB of storage space, 1000 kSI2k of processing, and 2.5-10 Gb/s of wide area networking. The goal in 2007 for US-CMS Facilities is 100% of the total (combination of complexity and capacity). The 2007 procurement schedule should be planned within the context of the running schedule.
1.2.2.2 Grid Deployment  
1.2.2.2.1 Integration Testbed Deployment for OSG Initial Deployment for OSG-0.1 Testbed
1.2.2.2.2 Initial Deployment of OSG on production clusters  
1.2.2.2.3 dCache Service Deployment for Analysis Use dCache deployed to host datasets for analysis
1.2.2.2.4 Deployment of OSG V0.2  
1.2.2.2.4.1 Deployment of OSG First Production Release on CE at Caltech Deployment of OSG V0.2 Computing Element at Caltech
1.2.2.2.4.2 Deployment of OSG First Production Release on CE at Purdue Deployment of OSG V0.2 Computing Element at Purdue
1.2.2.2.4.3 Deployment of OSG First Production Release on CE at UFL Deployment of OSG V0.2 Computing Element at UFL
1.2.2.2.4.4 Deployment of OSG First Production Release on CE at UW Deployment of OSG V0.2 Computing Element at UW
1.2.2.2.4.5 Deployment of OSG First Production Release on CE at UCSD Deployment of OSG V0.2 Computing Element at UCSD
1.2.2.2.4.6 Deployment of OSG First Production Release on CE at UNL Deployment of OSG V0.2 Computing Element at UNL
1.2.2.2.4.7 Deployment of OSG First Production Release on CE at MIT Deployment of OSG V0.2 Computing Element at MIT
1.2.2.2.5 Have system report to resource broker (BDII and GIP)  
1.2.2.2.6 Deployment of OSG V0.4 Deployment of OSG 0.4 Computing Elements and Storage Elements at all US-CMS Tier-2 sites
1.2.2.2.7 Storage Element Deployment All sites deploy an OSG storage element
1.2.2.2.8 Deployment of OSG V0.6  
1.2.2.2.9 dCache with SRM v2 functionality deployment This release will have SRM v2 functionality for the first time
1.2.2.2.10 Deployment of OSG V0.8  
1.2.2.3 CMS Services  
1.2.2.3.1 Storage Services This item describes the US-CMS Tier-2 expectations for storage serving, including dCache, SRM, and PhEDEx deployments.
1.2.2.3.1.1 Phedex Deployment for Analysis CMS Transfer Service Deployed at Tier-2 centers
1.2.2.3.1.2 Deployment of Improved Environment Packaging at Tier-2 Centers for analysis Deployment of simple PubDB for data discovery at Tier-2 centers
1.2.2.3.1.3 dCache Virtual Workshop for Tier-2s Workshop with FNAL experts on dCache, SRM, and PhEDEx Deployment
1.2.2.3.1.4 PubDB Deployment at Tier-2 Centers Deployment of simple PubDB for data discovery at Tier-2 centers
1.2.2.3.1.5 1 MB/s storage throughput test All sites demonstrate 1 MB/s/batch slot out of storage system
1.2.2.3.1.6 200 MB/s throughput test All sites demonstrate aggregate 200 MB/s from storage system to applications, either by achieving 2 MB/s/slot, or 1 MB/s for twice as many slots. (A simple per-slot throughput probe is sketched after this storage services list.)
1.2.2.3.1.7 dCache upgrade for SRM 2.2  
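A per-slot read probe for the throughput milestones above (1.2.2.3.1.5 and 1.2.2.3.1.6) could be as simple as the sketch below: time a sequential read of a large test file from the storage system, report MB/s, and run one copy per batch slot to measure the aggregate. The file path is a placeholder; in practice the file would be served by dCache and read via dcap or a mounted pnfs namespace.

    # Rough per-slot throughput probe; the test file path is a placeholder.
    import time

    TEST_FILE = "/pnfs/example/data/throughput_test_2GB.dat"
    CHUNK = 8 * 1024 * 1024  # 8 MB sequential reads

    nbytes = 0
    start = time.time()
    with open(TEST_FILE, "rb") as f:
        while True:
            block = f.read(CHUNK)
            if not block:
                break
            nbytes += len(block)
    elapsed = time.time() - start
    print(f"{nbytes / 1e6 / elapsed:.1f} MB/s in this slot")
    # Target: >= 1 MB/s per slot (1.2.2.3.1.5); 200 MB/s aggregate (1.2.2.3.1.6),
    # e.g. 100 slots at 2 MB/s or 200 slots at 1 MB/s.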
1.2.2.3.2 Software Environment Support This item describes the US-CMS Tier-2 expectations for supporting a consistent software environment on the Tier-2 sites.
1.2.2.3.2.1 Evaluation of Central Software installation system Demonstrate usage of US-CMS centralized installation techniques for software
1.2.2.3.2.2 Consistent software environment available for SC3 All Tier-2 centers participating in the SC3 service phase should have a consistent software environment.
1.2.2.3.2.3 Ready for use of DBS/DLS All sites ready for use of the DBS/DLS system
1.2.2.3.2.4 CMSSW installation All sites have production version of CMSSW installed
1.2.2.3.2.5 Ready for simulation with new infrastructure All sites ready for new simulation
1.2.2.3.3 User Access This item describes the US-CMS Tier-2 expectations for enabling user access. This involves interactive login, grid submission, or both.
1.2.2.3.3.1 Definition and publication of site policy regarding user access Documented policy on who can receive an interactive account and under what circumstances
1.2.2.3.3.2 Demonstration of grid access for user analysis Demonstrate the successful use of each tier-2 center for user analysis through the deployed OSG interface.
1.2.2.3.3.3 Deployment of DISUN Toolkit for a user analysis Deployment of DISUN developed user analysis components at Tier2s
1.2.2.3.3.4 Sustained use of 75% of processing resources through Grid interfaces Three days at 75% constant load at all sites
1.2.2.3.3.5 Establish user support system Each site identifies a person responsible for user support, including CRAB feedback, handling of GGUS tickets, and a means for users to contact the site
1.2.2.3.3.6 80% availability in SAM Sites maintain 80% availability as monitored by Site Availability Monitor for at least a one-week period
1.2.2.3.3.7 Sustained use of 75% of processing resources through Grid interfaces Three days at 75% constant load at all sites, now at 2007 scale
1.2.2.3.3.8 90% availability in SAM Sites maintain 90% availability as monitored by Site Availability Monitor for at least a one-week period
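As a point of reference for the SAM availability milestones above, the allowed downtime over the one-week measurement window works out as follows; this is simple arithmetic, not an official definition of the SAM availability metric.

    # Downtime budget implied by the weekly SAM availability targets.
    week_hours = 7 * 24  # 168 hours
    for target in (0.80, 0.90):
        allowed_down = (1 - target) * week_hours
        print(f"{target:.0%} availability -> at most {allowed_down:.1f} hours of failed SAM tests per week")
    # 80% allows about 33.6 hours; 90% allows about 16.8 hours.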
1.2.2.3.4 Data Hosting This item describes the US-CMS Tier-2 expectations for hosting experiment data for analysis
1.2.2.3.4.1 Each Tier-2 center has demonstrated hosting a single dataset Each Tier-2 center should host a single CMS official simulated dataset
1.2.2.3.4.2 Available datasets visible from PubDB (DLS) Hosted CMS event samples should be visible from a locally installed PubDB instance.
1.2.2.3.4.3 Each Tier-2 center has demonstrated static or dynamic hosting of 10TB of data Maintain 10 TB of hosted CMS data. This data may be static or refreshed in a dynamic way.
1.2.2.3.4.4 All sites host at least one production dataset Dataset can be chosen by each site; must be properly published
1.2.2.3.4.5 All sites host at least one new-EDM production dataset Dataset can be chosen by each site, must be properly published
1.2.2.3.4.6 All sites host datasets for CSA06  
1.2.2.3.4.7 All sites host at least one dataset from global data taking  
1.2.2.3.4.8 All sites host at least one dataset for CSA07  
1.2.2.3.4.9 All sites host at least one simulated dataset for physics papers  
1.2.2.3.4.10 All sites host at least one dataset from MTCC3  
1.2.2.3.4.11 All sites host at least one dataset from pilot run  
1.2.2.4 Monitoring Services  
1.2.2.4.1 Report to CMS dashboard For now, properly running MonALISA reporting to the DISUN repository is all that is necessary
1.2.2.4.2 All sites reporting to Gratia  
1.2.2.5 Network Services  
1.2.2.5.1 Network Monitoring  
1.2.2.5.1.1 Deploy MonALISA modules for network monitoring This is the AwBv module, which uses Pathload
1.2.2.5.2 Capacity and Reliability Tests  
1.2.2.5.2.1 PhEDEx Load Tests  
1.2.2.5.2.2 CSA06 transfer rates All sites demonstrate 5 TB/day transfer rate from FNAL and 1 TB/day to FNAL. In addition, demonstrate use of 50% of available networking in hour-long bursts. (Sustained-rate equivalents of these daily targets are sketched after this list of tests.)
1.2.2.5.2.3 Continuous 50% bandwidth use at individual sites 3-day continuous use of 50% of baseline bandwidth at each site, one site at a time
1.2.2.5.2.4 Continuous 50% bandwidth use at all sites 3-day continuous use of 50% of baseline bandwidth at each site, at all sites simultaneously
1.2.2.5.2.5 3-day continuous use of 75% of baseline bandwidth  
1.2.2.5.2.6 7-day continuous use of 75% of baseline bandwidth  
1.2.2.5.2.7 Sites demonstrate 80% success rate of PhEDEx transfers  
1.2.2.5.2.8 Demonstrate 2008-scale download and upload rates 10 TB/day download, 2 TB/day upload, assuming bandwidth available
1.2.2.5.2.9 Sites demonstrate 90% success rate of PhEDEx transfers  
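To make the daily transfer targets above easier to compare with link capacities, the snippet below converts them to sustained rates (decimal units, flat 24-hour averaging; actual transfers would of course run in bursts).

    # Sustained-rate equivalents of the daily transfer targets (1 TB = 1e12 bytes).
    def tb_per_day(tb):
        mb_s = tb * 1e12 / 86400 / 1e6      # megabytes per second
        return mb_s, mb_s * 8 / 1000        # and gigabits per second

    for label, tb in [("CSA06 download from FNAL", 5), ("CSA06 upload to FNAL", 1),
                      ("2008-scale download", 10), ("2008-scale upload", 2)]:
        mb_s, gb_s = tb_per_day(tb)
        print(f"{label}: {tb} TB/day ~ {mb_s:.0f} MB/s sustained (~{gb_s:.2f} Gb/s)")
    # 5 TB/day is roughly 58 MB/s (~0.46 Gb/s); 10 TB/day is roughly 116 MB/s (~0.93 Gb/s).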
1.2.2.5.3 Topology Tests  
1.2.2.5.3.1 Demonstrate data transfer from all available non-FNAL Tier-1 sites  
1.2.2.5.3.2 Measure transfer rates from all non-FNAL Tier-1 sites  
1.2.2.5.4 Bandwidth Deployment  
1.2.2.5.4.1 10 Gb/s capacity All sites have 10 Gb/s connections to the Tier-1 site
1.2.2.6 Ready for SC3 Service Phase All sites running all hardware and services needed for SC3 service phase.
1.2.3 Tier-2 Operations  
1.2.3.1 UCSD Operations  
1.2.3.1.1 Participation in SC2 Service Challenge The SC2 LCG service challenge will consist of 500 Mb/s from the Tier-0 to Tier-1 centers. The aggregate Tier-2 transfer rate should be at least this much. All transfers are expected to run end-to-end, tape to tape in the case of T0 to T1, and disk to disk in the case of T1 to T2 transfers.
1.2.3.1.2 Service Challenge 3  
1.2.3.1.3 2006 UCSD Operations Ongoing UCSD operations for 2006
1.2.3.1.4 Service Challenge 4  
1.2.3.1.5 Computing, Software and Analysis Challenge 2006  
1.2.3.1.6 2007 UCSD Operations Ongoing UCSD operations for 2007
1.2.3.2 Caltech Operations  
1.2.3.2.1 Participation in SC2 Service Challenge The SC2 LCG service challenge will consist of 500 Mb/s from the Tier-0 to Tier-1 centers. The aggregate Tier-2 transfer rate should be at least this much. All transfers are expected to run end-to-end, tape to tape in the case of T0 to T1, and disk to disk in the case of T1 to T2 transfers.
1.2.3.2.2 Service Challenge 3  
1.2.3.2.3 2006 Caltech Operations Ongoing Caltech operations for 2006
1.2.3.2.4 Service Challenge 4  
1.2.3.2.5 Computing, Software and Analysis Challenge 2006  
1.2.3.2.6 2007 Caltech Operations Ongoing Caltech operations for 2007
1.2.3.3 UNL Operations  
1.2.3.3.1 Service Challenge 3  
1.2.3.3.2 2006 UNL Operations Ongoing UNL operations for 2006
1.2.3.3.3 Service Challenge 4  
1.2.3.3.4 Computing, Software and Analysis Challenge 2006  
1.2.3.3.5 2007 UNL Operations Ongoing UNL operations for 2007
1.2.3.4 UW Operations  
1.2.3.4.1 Service Challenge 3  
1.2.3.4.2 2006 UW Operations Ongoing UW operations for 2006
1.2.3.4.3 Service Challenge 4  
1.2.3.4.4 Computing, Software and Analysis Challenge 2006  
1.2.3.4.5 2007 UW Operations Ongoing UW operations for 2007
1.2.3.5 Purdue Operations  
1.2.3.5.1 Service Challenge 3  
1.2.3.5.2 2006 Purdue Operations Ongoing Purdue operations for 2006
1.2.3.5.3 Service Challenge 4  
1.2.3.5.4 Computing, Software and Analysis Challenge 2006  
1.2.3.5.5 2007 Purdue Operations Ongoing Purdue operations for 2007
1.2.3.6 UFL Operations  
1.2.3.6.1 Service Challenge 3  
1.2.3.6.2 2006 UFL Operations Ongoing UFL operations for 2006
1.2.3.6.3 Service Challenge 4  
1.2.3.6.4 Computing, Software and Analysis Challenge 2006  
1.2.3.6.5 2007 UFL Operations Ongoing UFL operations for 2007
1.2.3.7 MIT Operations  
1.2.3.7.1 2006 MIT Operations Ongoing MIT operations for 2006
1.2.3.7.2 Service Challenge 4  
1.2.3.7.3 Computing, Software and Analysis Challenge 2006  
1.2.3.7.4 2007 MIT Operations Ongoing MIT operations for 2007
1.2.4 Tier-2 Coordination Tier-2 management activities
1.2.4.1 Workshops  
1.2.4.1.1 Tier-2 Kick-off Workshop Workshop with all new and prototype Tier-2 computing centers at FNAL to establish time line and agree upon program of work for 2005
1.2.4.1.2 Workshop on Tier-2 Networking Issues Caltech workshop with network experts
1.2.4.1.3 Spring 2006 Tier-2 Workshop Annual Tier-2 workshop in Lincoln, NE
1.2.4.1.4 Spring 2007 Tier-2 Workshop Annual Tier-2 workshop in San Diego, CA
1.2.4.2 Web Pages Web pages for Tier-2 information
1.2.4.2.1 First version of Tier-2 Web Pages Post first version of Web pages, with as many links as possible
1.2.4.2.2 Web documentation of OSG policies, access policies Links to policies at the different sites. Complete 2005-09-12.
1.2.4.2.3 First version of status/monitor pages Status/monitor pages based on OSG tools, probably
1.2.4.3 Reports  
1.2.4.3.1 FY05 Q4 Report Quarterly report.
1.2.4.3.2 FY06 Q1 Report Quarterly report.
1.2.4.3.3 FY06 Q2 Report Quarterly report.
1.2.4.3.4 FY06 Q3 Report Quarterly report.
1.2.4.3.5 FY06 Q4 Report Quarterly report.
1.2.4.3.6 FY07 Q1 Report Quarterly report.
1.2.4.3.7 FY08 Q1 Report Quarterly report.
1.2.4.3.8 FY07 Q2 Report Quarterly report.
1.2.4.3.9 FY07 Q3 Report Quarterly report.
1.2.4.3.10 FY07 Q4 Report Quarterly report.
1.3 Grid Services and Interfaces The following tasks describe the grid services and interfaces for US CMS and the interface with the Open Science Grid. Much of the development work is done off-project; for such tasks the work needed is integration, deployment and use. - US CMS will make resources accessible to the Open Science Grid, which provides for the sharing of US CMS and other VO owned Grid resources subject to policy and agreements. - US CMS plans include making opportunistic and shared use of the Open Science Grid resources. - The Open Science Grid will be maintained as a production infrastructure after the initial deployment milestone. The OSG interfaces may support opportunistic use of US CMS resources by non-CMS VOs.
1.3.1 Interoperability with LCG Grid Services Maintain and test interoperability with the LCG Grid Services, especially during the deployment of the EGEE gLite middleware and the evolution to GT4 and web services. This activity includes collaboration with OSG and LCG on the evaluation of new services, integration, deployment and interoperability tests at the Service and Application level.
1.3.2 Grid Services Development Develop, acquire, and/or interface Grid Services.
1.3.2.1 Virtual Organization Services This task comprises the development and deployment of tools to manage Authentication, Authorization, Accounting, and multi-VO Registration of the pool of users and institutions. This is considered a component part of the DPE since it is likely that the VO management tools can be developed in a package of tools that interact with the overall common security infrastructure. These will evolve to integrate with the LCG and Fermilab user registration. - Work on VO Services is expected to go through several phases as the number of users and services increases and the interfaces with other Grid infrastructures used by CMS are fully developed.
1.3.2.1.1 VO Privilege Project definition accepted

1.3.2.1.2 VO Privilege Phase 1 (execution callout) Delivered to OSG and US CMS: Grid authorization callout from pre-WS GRAM (GT2), development of Prima, and testing with GRAM.
1.3.2.1.3 VO Privilege Phase 2 (Storage callout) Readiness plan delivered to OSG Integration

Coding of gPlazma phase 1 and update of Prima to work with GT4.

gPlazma integrated into the new dCache release. Prima/GT4 in test.

1.3.2.1.4 VO Privilege Phase 3 Service level authorization, Policy Expression and implementation delivered to OSG Integration. 1/1/07: Delayed start due to lack of effort and other priorities. Still needed.
1.3.2.2 Clarens Service Develop and support. 1/1/07: Support no longer needed for Grid services
1.3.2.3 Information Publishing Infrastructure and Discovery Services For the short term this task includes upgrade, deployment and use of Glue Schema and information publishing infrastructure on the US CMS data grid and Open Science Grid. This includes publishing information and deploying the OSG Discovery services. We expect this infrastructure to evolve considerably over the next couple of years as services mature and, as an example, are able to publish their availability and state dynamically. We plan to evaluate and possibly contribute to the solutions for the wider CMS grid infrastructure across OSG and the LCG.
1.3.2.3.1 Merge of Grid3 and LCG information providers on US CMS Tier-1/Tier-2s and select (US ATLAS?) OSG sites

1.3.2.4 Accounting Services Interface to Facility accounting and provide Grid accounting information service. We will need to provide information of the use of the data grid and Open Science Grid resources by the different US CMS users and analysis groups. This is needed to show that policies have been applied appropriately, and give quantitative information on the efficiency and utilization of the resources - both dedicated to CMS and shared with other communities. - Each facility - CMS Tier-1, Tier-2s etc - will have internal accounting systems. These will be interfaced to a common Grid accounting schema, infrastructure and presentation layer. In addition CMS will need to keep VO based information to measure and show the value provided by resources. - It is expected that these services will be provided in phases to meet the increasing scale in the number of users, applications and resources.
1.3.2.4.1 Grid Accounting Interfaces and Schema Definition Accepted

1.3.2.4.2 Grid Accounting Readiness plan for Delivery to OSG Integration

1.3.2.4.3 US CMS Grid Accounting integrated into OSG Release

1/1/07: Delayed due to effort shortfall - developer left the lab

1.3.2.5 Grid Operations Services Building services to support the operations of the US CMS data grid. This will include operational monitoring of the US CMS grid infrastructure (health checks, performance etc). The Grid operations services will interface to the US CMS user and facility support services. The services include services and interfaces for problem reporting, alarm generation etc. Publishing information, monitoring and accounting information. Acquiring diagnostic and troubleshooting tools. This task also includes developing user and administrator help guides and publishing operations information to the web.
1.3.2.6 Edge Service Framework

Develop an edge services framework based on virtual machine technology (Xen) that will allow dynamic deployment of VO-specific workspaces (implemented via virtual machines) as VO-specific edge servers.

Initial development of the Globus Workspace environment within Xen. This gave us good experience with Xen, which is now being used to good effect on the Tier-1 Facility and test beds. It gave practical experience with the issues that must be addressed for production use, especially security and I/O performance. Support for this effort has migrated to the CS community itself -- e.g. SciDAC-2 CEDS Scalable Service. More work is needed to bring this to production; the community is moving forward with integrated support for VMs in Linux, and for the moment this is low priority for initial data taking.

1.3.2.7 Monitoring Services

This task tracks advances in monitoring that need to take place. It supports deployment of tools to monitor the health and status of infrastructure components (CPU, network usage, queue lengths, aliveness of servers, etc.) on top of what is already provided in the facilities. In addition, this task supports development of tools to do new kinds of monitoring, such as monitoring of the configuration of the DPE. Finally, this task also addresses the work needed to monitor application parameters such as application state, number of events processed, etc. - It includes the upgrade, deployment and use of MonALISA and the interface to OSG monitoring services.

1/1/07 This task is ongoing with contributions from Grid Services, Tier-1 and Tier-2s.

1.3.2.7.1 MonALISA Development MonALISA development
1.3.2.8 Security Services Security Services includes security processes and procedures for incident response and handling, and identifying and fixing vulnerabilities in the systems and software. Accept deliverables from the Privilege project. Deploy Authz at Fermilab, and possibly the Tier-2 sites; deployment may be delayed until call-outs from SRM are working. Accept the security incident response and handling plan, registration and AUP from OSG for use of US CMS sites on OSG.
1.3.2.9 Distributed Batch Job Scheduling Services This task describes work on scheduling and execution of jobs including schedulers and systems to return job related metadata, migration, troubleshooting, and archiving. Also addresses research and development into execution planning. - Extension of existing job execution services to meet US CMS performance needs.
1.3.2.9.1 Remove Grid Head Node bottleneck

Evaluate Condor-C and alternative job execution models to remove Grid Head node bottlenecks.

1/1/07: Effort delayed due to other priorities

1.3.3 Grid Facility Integration Release and deployment of Grid Middleware services and integration of the US CMS data Grid are a collaboration between US CMS, Fermilab, and the joint Grid projects as part of the common infrastructure. Initial integrations were done on the CMS testbed, then on Grid3 in 2003, and then as contributions to and in collaboration with the Open Science Grid Consortium deployments.
1.3.3.1 Grid3 Contributions to and participation in the Grid2003 Project, building and operation of the Grid3 common infrastructure.
1.3.3.2 OSG Release Spring 2005 Contributions to and participation in the readiness activities, the integration testbed, the provisioning and first release and deployment of Open Science Grid. As a major stakeholder in the OSG US CMS contributes to the testing, decision making, procedures, documentation, provisioning and operations of the Open Science Grid. Such tasks are leveraged with the equivalent services needed for the US CMS data grid, with participation of the Tier-1 and Tier-2 sites managed through the software and computing project. This release includes deployment of the new Authorization/Privilege components for access to Processing Resources, as well as a coordinated deployment of managed storage through the SRM interface.
1.3.3.3 OSG Release 0.4 Autumn 2005 Authorization/privilege management of access to Storage Resources will be added to the common infrastructure. Some extended data replica management services may be added. New monitoring and information publishing services are expected. The Condor-C job management system will be deployed in production, enabling a more efficient architecture for the use of grid resources. It is expected that this OSG release will support a demonstration of multi-VO use of the OSG to meet extended performance metrics and an interoperability demonstration with the LCG and EGEE for the SC2005 conference. This release will be used to support the CMS DC06 data challenge.
1.3.3.4 Interoperability Demonstration with OSG and LCG for SC2005 Conference

1.3.3.5 OSG Release 0.6 July 2006

We expect to add an extended Grid Accounting System for processing, storage and network resources to the OSG common grid infrastructure and US CMS data grid. This will be used to report and audit use of resources - both assured and opportunistic - by the VOs participating in the OSG Consortium. This release of OSG should see the inclusion of relevant services from the EGEE gLite middleware. In particular, we expect to support the EGEE gLite dynamic account service and some aspects of the Dynamic Workspace Management and Policy services from the Globus security teams. This release will be used to support the CMS 2007 production system.

1/1/07: OSG 0.4.1 released and 0.6 delayed to Spring 07 to enable increased functionality

1.3.3.6 OSG Release 2007

This release of OSG will focus on increasing the scale and performance of the infrastructure to meet the data taking needs for CMS in 2007 and 2008.

1/1/07: testing and provisioning in progress.

1.3.4 Grid Operations Operations and support activities cover the tasks needed to operate Grid services and the overall Grid infrastructure. Such support will start in 2005, will ramp up with the scale of the system and the number of users, and will be ongoing through the life of the program.
1.3.4.1 Virtual Organization Management Register users and input policy and role change requests. React to user problems - renewal of certificates, revocation, user reports and audits
1.3.4.2 Information Publishing Check the information published by the US CMS Grid sites. Support site administrators in publishing information. Provide updates to the information providers.
1.3.4.3 Job Management Monitor Grid jobs, analyse and identify problems and failures. Answer users questions and issues with grid job submission and execution.
1.3.4.4 Monitoring Services Operate the monitoring services. Analyse the monitoring information and provide reports and feedback.
1.3.4.5 Security Services

Respond to requests from the Grid and Fermilab security teams.

1/1/07: Effort transferred to OSG funded effort

1.3.4.6 Grid Support Ongoing support for CMS use and CMS sites on the Grid
1.3.4.7 Accounting Services Ensure the accounting services are active and performing. Make accounting reports, analyse the accounting information, compare to the policies and agreements. Resolve discrepancies and problems. 1/1/07: Add metrics to this project to analyse accounting reports and answer WLCG and OSG questions
1.3.5 Workload Management Evaluate, implement and support Workload Management Services
1.3.5.1 Evaluate workload management  
1.3.5.2 WMS Integration Integrate WMS solution with production tools
1.3.5.3 WMS Deployment and Support on OSG Deploy and evolve the WMS solution as needed for functionality, scalability and robustness on OSG for US CMS.
1.4 Application Services  
1.4.1 Integration  
1.4.1.1 Workload and Data Management Integration  
1.4.2 Dataset Placement Development  
1.4.2.1 Data Placement Service  
1.4.2.2 Phedex Development  
1.4.2.2.1 Phedex V1 Agent Developments for FNAL Develop and deploy agents to support file import from CERN based on Phedex database schema version 1. This is a natural progression from the tools used in TMDB for DC04
1.4.2.2.2 Phedex V2 Agent Developments for FNAL Develop and deploy agents to support file import from CERN based on Phedex database schema version 2.
1.4.2.2.3 Develop Export Agents for FNAL Develop export agents for Phedex V2 that are used to make datasets created and published at FNAL available for export to Tier-2, CERN, and international Tier-1 centers.
1.4.2.2.4 Develop Phedex agents for US Tier-2 Centers Develop agents for Tier-1 Tier-2 transfers in the US
1.4.3 Dataset Bookkeeping Development  
1.4.3.1 Dataset Bookkeeping Prototype  
1.4.3.2 Development of DBS Design  
1.4.3.3 Development of APIs between Data Placement Service and Dataset Bookkeeping Service Development of interfaces between dataset bookkeeping services and data placement services.
1.4.3.4 Publishing Services  
1.4.3.4.1 Dataset Webpage  
1.4.3.4.1.1 PubDB-Based Dataset Web Page Datasets published and validated with entries placed onto a web page. The web page contains a fragment that allows an analysis job to be run.
1.4.3.4.2 Frontier Client Development Work on Coral client
1.4.3.4.3 Calibration DB Development Work on calibration database
1.4.3.4.4 Calibration DB Development Work on calibration database
1.4.3.5 DBS testing Program  
1.4.3.6 DBS2 Development Program  
1.4.3.7 DBS Discovery Development  
1.4.4 Operations  
1.4.4.1 PubDB Operations  
1.4.4.1.1 PubDB Operations in 2004 Operations for Pubdb at FNAL in 2004
1.4.4.1.2 PubDB Operations in 2005 Operations for Pubdb at FNAL in 2005
1.4.4.1.3 Frontier Deployment  
1.4.4.2 Phedex Operations  
1.4.4.2.1 Phedex Tier-1 Import and Export Operations for 2005 Operations of import and export functionality for FNAL in 2005
1.4.4.2.2 Phedex Support for US Tier-2 Centers Operations of Phedex to Support Tier-2 centers in 2005
1.4.4.2.3 Phedex Operations Operations support for PhEDEx
1.4.4.3 CRAB Support  
1.4.4.3.1 Start of CRAB Support for FNAL Successful demonstration of a CRAB-submitted analysis application to the LCG-enabled resources at the FNAL Tier-1 center
1.4.4.3.2 CRAB Support and Operations in 2005

Successful demonstration of a CRAB-submitted analysis application to the LCG-enabled resources at the FNAL Tier-1 center

1.4.4.4 Dataset Bookkeeping Service  
1.4.4.4.1 DBS2 Operations  
1.4.4.4.2 DBS Discovery Operations  
1.5 Distributed Computing Tools The distributed computing model being developed for the LHC by the LCG and Grid projects is ambitious. It involves making efficient use of computing resources that are globally distributed. It requires the development of smart resource brokers, advanced job schedulers and efficient data movers. The expectation is that the manpower involved in the development of grid infrastructure will come from the LCG and grid projects. The CMS core software responsibilities involve the development of CMS software, which runs in a grid environment. Whether the grid model is a set of services called by CMS software or a high level environment that calls CMS applications, there are interfaces and functionality that need to be developed in the CMS core software. The first step is to establish the mutual expectations for functionality and interfaces between CMS core software developers and grid developers. The planning of the implementation is critical to avoid adding a feature that breaks or limits the already rich feature set of CMS software. The final step is the implementation of Grid interfaces and grid functionality.
1.5.1 Core CMS Distributed Production Integration This project describes the effort US-CMS spends on integration of the CMS specific production elements into a coherent system. This involves the integration of RunJob, RefDB, PubDB, BOSS and the other components described in the OCTOPUS production package.
1.5.2 Facility and Grid Integration This project describes the effort US-CMS spends on integration of production components with Grid components and various dedicated and opportunistically available facilities. This is Distributed Production Environment Integration, Release and Testing, as well as integration of grid interface components.
1.5.3 WorkLoad Management Tools necessary for physicists to do data analysis on WLCG.
1.5.3.1 User Desktop Client interfaces at the user desktop
1.5.3.2 Monitoring  
1.5.3.3 Integration  
1.5.3.4 Execution Environment Tools that define the runtime environment on the WLCG.
1.5.3.4.1 Shreek Development Development of a runtime environment that allows chaining of jobs, has hooks for monitoring, and interfaces to application services. The runtime environment is generic enough to be used for workload management and Monte Carlo production
1.5.3.4.2 CMSsoftDB XML database to store the site-specific information that CMS tools need to define the runtime environment after they arrive on the worker node, including the CMS application area at the grid sites. Completed and in production on OSG sites.
1.5.3.4.3 CMS application software installation tools Tools to maintain the CMS software application installations on grid sites via a centrally managed set of services. Completed and in production on OSG sites.
1.5.3.4.4 Automated Validation of CMS ORCA application installations Tools to validate the application software installations automatically.
1.5.3.4.5 Automated Validation of CMS cmssw application installations Tools to validate the application software installations automatically.
1.5.4 Operations This WBS item describes the US-CMS contribution to managing, supporting, and performing CMS simulated event production, as well as the preparation and performance of CMS data challenges.
1.5.4.1 Simulation Operations  
1.5.4.2 Simulation Operations CERN US-CMS devotes some effort to production operations and management at CERN
1.5.4.3 Simulation Operations US Operations of the US-CMS Distributed Production environment. This involves submitting, tracking and collecting events run with MOP on the Open Science Grid. As of 2004 it also involves creating published datasets for user analysis at FNAL locally and through grid interfaces.
1.5.4.3.1 Simulation Operations in the US 2006 Simulation Operations including CSA06
1.5.4.3.2 Simulation Operations in the US 2007 Simulation Operations in 2007
1.5.5 Simulation Production Tools CMS needs tools for production job specification and submission over local and distributed resources. The 2004 data challenges require the production of approximately fifty million simulated events. Large simulation samples were produced for the DAQ TDR and will be needed for the physics TDR as well. In order to produce the required large datasets, CMS has used globally distributed clusters of computing resources. Common production tools are used to ensure consistency between events produced at different regional centers. The production tools also automate the process to reduce the effort required at each regional center.
1.5.5.1 Monte Carlo Production Service  
1.5.5.2 Development of Prod_Agent  
1.5.5.3 Development of Prod_Agent  
1.5.6 DISUN Tier2C Program Data Intensive Science University Network
1.5.6.1 Start of DISUN Tier-2C program Initial conception and prototype of the DISUN Physics Analysis Architecture: integration and deployment of Condor-C based job submission interfaces at DISUN sites; integrate and deploy a first implementation of the application monitoring services; test and integrate the first round of CMS data management services
1.5.6.2 Analyzing DC06 data with the first functional Physics Analysis Architecture Development program for the DISUN Physics Analysis Architecture, first functionality milestone: automatic parallelization of workloads; track completion of workloads; initial multi-user interactive ROOT ntuple analysis environment based on PROOF deployed; integration of GLOW resources at Madison into DISUN
1.5.6.3 DISUN baseline service, getting ready for CMS Data Taking Second functionality milestone for the DISUN Physics Analysis Architecture: develop and integrate functionality to manage workloads split across 1000 jobs; integration and deployment of pull-model infrastructure based on Condor-C; more advanced scheduling allowing quick turn-around for high-priority jobs; integration of technologies developed by Wisconsin
1.5.6.4 DISUN ramp-up for CMS physics discovery phase Reliability milestone for the DISUN Physics Analysis Architecture: focus on stability, reliability, and scalability of the DISUN Physics Analysis Architecture; reliably support workload parallelization for a single user on the order of 1000 partial workloads; improved integration of PROOF-based analysis; include other Tier-2 centers, through integration with OSG
1.6 Software and Support Task for the development and support of CMS software
1.6.1 Support of Existing CMS code for use in USCMS  
1.6.1.1 Software Maintenance  
1.6.1.2 Software Maintenance  
1.6.1.3 Software Testing  
1.6.2 Base release coordination and support New releases of CMS software projects need to be tracked, and made available locally.
1.6.2.1 Software Release and Support 2005  
1.6.2.2 Software Release and Support 2006  
1.6.3 Debugger and other performance tools These tools must be made available for all US collaborators.
1.6.3.1 Deployment of Valgrind  
1.6.3.2 Maintenance of TotalView 2005  
1.6.3.3 Maintenance of TotalView 2005  
1.6.4 Re-engineering of CMS Core Software In the fall of 2004 a workshop was held at FNAL involving members of the CCS department and USCMS. These meetings resulted in a proposal to redesign the CMS framework and EDM. The design document contains a plan of work, from which the following subtasks have been taken. Note that all of this work is now pending final approval.
1.6.4.1 Framework Evaluation and Recommendation CMS Commissioned Study on Framework Work
1.6.4.2 Assessment of work required on pool and root Some of the desired features of the redesigned EDM will require additional features in dependent products such as root and pool. We believe these changes are small, but a real assessment should be made to ensure this is true, and to negotiate these changes with the product providers.
1.6.4.3 Work required on pool and root During the above assessment, the features needed that were easy to provide were provided and integrated into the new pool release POOL_2_0_0. Investigation of this release's performance remains. We have concluded that root branch aliasing will be needed. An implementation will be provided and integrated into the root product. This project is an extension of the US-CMS work done at CERN on initial Pool development
1.6.4.3.1 Initial Deployment of Pool and Root Interfaces  
1.6.4.3.2 Continuing Maintenance of Pool and Root Interfaces 2005  
1.6.4.3.3 Continuing Maintenance of Pool and Root Interfaces 2006  
1.6.4.4 Example classes at the Module, and Object Layers This is a demonstration project that will use simplified ORCA classes to show how to integrate the above developments into the existing CMS code base.
1.6.4.5 EDM classes Write the code for the redesigned EDM classes. The designs of this element and the next are mature, so it is believed these can be written in one month.
1.6.4.5.1 EDM Classes Initial Development  
1.6.4.5.2 Initial Release  
1.6.4.5.3 EDM Class Development MTCC Release  
1.6.4.5.4 EDM Class Development MTCC3 Release  
1.6.4.5.5 Online Serializer  
1.6.4.5.6 MTCC Release Initial Release Capable for MTCC
1.6.4.6 Parameter Set System This system will be modeled on the existing rcp package used by CDF, D0, and MiniBooNE. The lessons learned from this first package are well understood, so it is claimed that reworking this package to fulfill the design requirements of CMS should take one month. BTeV has expressed interest in making this a joint project.
1.6.4.6.1 Parameter Set System Initial  
1.6.4.6.2 Initial Release  
1.6.4.6.3 Parameter Set System Development  
1.6.4.6.4 Production System Release  
1.6.4.6.5 Parameter M&O  
1.6.4.6.6 System Evaluation  
1.6.4.6.7 Parameter Develop python API  
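As an illustration of what the Python API for the parameter set system (1.6.4.6.7) looks like from a user's point of view, the fragment below follows the configuration style later adopted in CMSSW (FWCore.ParameterSet.Config); the analyzer module and its parameter are invented for the example, and it requires a CMSSW environment to run.

    # Illustrative parameter-set configuration in the CMSSW Python style.
    import FWCore.ParameterSet.Config as cms

    process = cms.Process("DEMO")
    process.source = cms.Source("PoolSource",
        fileNames = cms.untracked.vstring("file:input.root")
    )
    process.maxEvents = cms.untracked.PSet(input = cms.untracked.int32(100))
    process.demoAnalyzer = cms.EDAnalyzer("DemoAnalyzer",   # hypothetical analyzer module
        minPt = cms.double(5.0)                             # hypothetical tracked parameter
    )
    process.p = cms.Path(process.demoAnalyzer)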
1.6.4.7 Event Setup (non-event data)  
1.6.4.7.1 Initial Event Setup Development  
1.6.4.7.2 Initial Event Setup Development  
1.6.4.7.3 Event Setup MTCC Release  
1.6.4.7.4 Initial Release of Event Setup  
1.6.4.7.5 MTCC Release Initial Release Capable for MTCC
1.6.4.7.6 Event Setup M&O  
1.6.4.8 Framework and IO modules Write the code for the framework and IO modules.
1.6.4.8.1 FrameWork Classes  
1.6.4.8.2 Input Output Modules  
1.6.4.8.3 Initial Release  
1.6.4.8.4 FrameWork Class Development MTCC Release  
1.6.4.8.5 Online System Framework  
1.6.4.8.6 MTCC Release Initial Release Capable for MTCC
1.6.4.8.7 FrameWork Classes  
1.6.4.8.8 FrameWork Classes  
1.6.4.8.9 Production FrameWork Classes CSA06  
1.6.4.8.10 FrameWork Classes M&O  
1.6.4.9 EDM Support Utilities  
1.6.4.9.1 Data Integrity Checker  
1.6.4.9.2 Initial Release  
1.6.4.9.3 Refactoring Development  
1.6.4.10 Message Logger  
1.6.4.10.1 Initial Development  
1.6.4.10.2 Initial Release  
1.6.4.10.3 Message Logger M&O  
1.6.5 Local and remote CVS support CVS access must be maintained for all US collaborators in both the FNAL and CMS repositories.
1.6.5.1 Local and Remote CVS support for 2005  
1.6.5.2 Local and Remote CVS support for 2005  
1.6.5.3 Local and Remote CVS support for 2006  
1.6.6 Support LPC development environment Given the existence of the LPC, we anticipate a large increase in the number of people contributing to CMS software from the US. This activity will require support beyond what has previously been available.
1.6.6.1 Support LPC Development Environment 2005  
1.6.6.2 Support LPC Development Environment 2006  
1.6.7 Evaluation of SCRAM V1 SCRAM was rewritten to be more supportable in the long term. Additional functionality was not the highest priority, so SCRAM V1 needs to be evaluated to determine what features are needed by USCMS that are still missing.
1.6.8 Review/fix of the run time environment at USCMS sites There are known interferences between software distributions as they come from CERN and necessary local configurations of products such as dcap. A long term solution to these problems must be found and implemented.
1.6.9 Engineering support for LPC groups Provide C++ consulting and related computing professional support as requested by the LPC subgroup leaders.
1.6.9.1 Consulting help for the Simulation Group Help LPC physicists develop a framework for automated tests of new OSCAR releases.
1.6.9.2 Consulting help for the Jet/Met rewrite Help LPC physicists convert needs and requirements into class designs. Provide technical help in determining how the current CMS reconstruction software can be used as input for the new ideas.
1.6.9.3 Consulting help for the Simulation Group Help LPC physicists develop a framework for automated tests of new OSCAR releases.
1.6.9.4 Consulting help for the Tracker Group Help LPC physicists convert needs and requirements into class designs. Provide technical help in determining how the current CMS reconstruction software can be used as input for the new ideas.
1.6.10 Support of software help desk While there are CMS help mailing lists, there is also a need for help regarding issues specific to the US installations, configurations, and integration. The LPC has established an lpc-howto mailing list to fill this need and to provide cross-project (OSCAR, ORCA, IGUANA) help. This list will need manpower behind it to be successful.
1.6.10.1 Helpdesk work 2005  
1.6.10.2 Helpdesk work 2006  
1.6.11 Software and Development Tools Development  
1.6.12 Online Serializer  
1.6.13 Helpdesk work 2007  
1.6.14 Event Setup M&O  
1.6.15 FrameWork IO Classes M&O  
1.6.16 EDM Service System M&O  
1.6.17 Support for ROC and LPC desktops 2007  
1.6.18 Software and Development Tools Development  
1.6.19 Conditions database infrastructure M&O  
1.6.20 Message Logger M&O  
1.6.21 FrameWork Classes M&O  
1.6.22 Online System Framework  
1.6.23 EDM performance and correctness testing  
1.6.24 A Framework Analysis Tool  
1.6.25 Software Release and Support 2007  
1.6.26 Engineering support for LPC groups Provide C++ consulting and related computing professional support as requested by the LPC subgroup leaders.
1.6.27 Visualization Software Development  
1.6.28 Geometry Infrastructure M&O  
1.6.29 Software and Support Management  
1.6.30 Deploy python API  
1.6.31 EDM Class M&O  
1.6.32 Parameter M&O  
2 Major MileStones of 2005  
2.1 US-CMS Participation in Computing TDR  
2.2 Analysis Environment for PTDR  
2.3 Data Management Prototype Release  
2.4 Production System for DC06  
2.5 US-CMS Contribution to Magnet Test  
2.6 25% Complexity  
3 Software Engineering Support at CERN Software Support by US-CMS at CERN
3.1 Optimization Task Force  
3.2 Tier0 Workflow Management  
3.3 Optimization Implementation  
4 Major MileStones of 2006  
4.1 Production for CSA06  
4.2 Service Challenge 4  
4.3 Computing Software and Analysis Challenge (CSA06)  
4.4 Transition to New Framework  
4.5 50% Complexity  
5 Major MileStones of 2007  
5.1 Start of Global Data Taking  
5.2 Production for Physics Notes  
5.3 Computing Software and Analysis Challenge (CSA07)  
5.4 Magnet Test and Cosmic Challenge 3  
5.5 Facilities Ready to accept data  
5.6 Start of the Pilot Run  

Version 1.0 - Created on 2008-02-10 21:01 with TaskJuggler v2.3.0