An L1 Skim configuration set up to save events that pass the L1 physics trigger bits of the new L1 menu.
The targeted environment is Lxplus and assumes a standard session:
ssh -XY <username>@lxplus.cern.ch
Setup the environment according to the official instructions.
cmsrel CMSSW_15_0_3
cd CMSSW_15_0_3/src/
cmsenv
git cms-init
git cms-checkdeps -A -a
git cms-addpkg L1Trigger/L1TGlobal
mkdir -p L1Trigger/L1TGlobal/data/Luminosity/startup
cd L1Trigger/L1TGlobal/data/Luminosity/startup
wget https://raw.githubusercontent.com/cms-l1-dpg/L1MenuRun3/refs/heads/master/development/L1Menu_Collisions2025_v1_0_0/L1Menu_Collisions2025_v1_0_0.xml
wget https://raw.githubusercontent.com/cms-l1-dpg/L1MenuRun3/refs/heads/master/development/L1Menu_Collisions2025_v1_0_0/PrescaleTable/UGT_BASE_RS_FINOR_MASK_L1MenuCollisions2025_v1_0_0.xml
wget https://raw.githubusercontent.com/cms-l1-dpg/L1MenuRun3/refs/heads/master/development/L1Menu_Collisions2025_v1_0_0/PrescaleTable/UGT_BASE_RS_PRESCALES_L1Menu_Collisions2025_v1_0_0.xml
cd -
git cms-addpkg L1Trigger/Configuration
★ Edit the file L1Trigger/Configuration/python/customiseUtils.py by changing the L1TriggerMenuFile:
- process.TriggerMenu.L1TriggerMenuFile = cms.string('L1Menu_Collisions2022_v1_1_0.xml')
+ process.TriggerMenu.L1TriggerMenuFile = cms.string('L1Menu_Collisions2025_v1_0_0.xml')
scram b -j 8
Now you are ready with the L1 setup. The next step is to clone the L1PhysicsSkim repository and run the L1 emulation using cmsDriver.py (e.g. running on Zero Bias data).
git clone https://github.com/sanuvarghese/L1PhysicsSkim.git
scram b -j 8
cd L1PhysicsSkim/L1PhysicsFilter/test/
The L1 Skim should be run either on Zero Bias samples or on MC. Do not run the skimmer on the EphemeralHLTPhysics dataset, because an L1 menu has already been applied to it. Here we will consider Zero Bias. Since most (if not all) Zero Bias datasets are not available locally on EOS, you need to create your own list_cff.py containing the paths of the runs you are considering, taken from DAS. You can obtain the file names directly from the command line using a dasgoclient query:
voms-proxy-init --voms cms --valid 168:00
cp /tmp/x509up_<user proxy> /afs/cern.ch/user/<letter>/<username>/private/
dasgoclient --query="file dataset=/EphemeralZeroBias0/Run2024I-v1/RAW and run=386615" > ZB1.txt
Repeat this for EphemeralZeroBias{1-8} and combine the file paths into a single text file
(e.g. $ cat ZB1.txt ZB2.txt .. ZB8.txt > ZB.txt)
The next step is to create a list_cff.py file in the following format (edit ZB.txt accordingly and rename it to list_cff.py):
inputFileNames=[
'/store/data/Run2024I/EphemeralZeroBias0/RAW/v1/000/386/615/00000/67b4f3ff-ae49-4d3c-9357-fd7b13e42745.root',
]
As an example, the list_cff.py for the EphemeralZeroBias samples for run 323755 is already available in the test directory.
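If your ZB.txt contains one /store/... path per line, a quick way to produce this format is the following sketch (a toy ZB.txt is created here purely for illustration; use your real one instead):

```shell
# Sketch: generate list_cff.py from ZB.txt (one /store/... path per line).
# The path below is a placeholder, not a real file.
printf "/store/data/Run2024I/EphemeralZeroBias0/RAW/v1/000/000/000/00000/toy.root\n" > ZB.txt
echo "inputFileNames=[" > list_cff.py
sed "s|^|'|; s|$|',|" ZB.txt >> list_cff.py   # quote each path and add a trailing comma
echo "]" >> list_cff.py
cat list_cff.py
```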
The L1T emulation is invoked via the cmsDriver.py command. For more details about cmsDriver and its options, see this twiki.
cmsDriver.py l1Ntuple -s L1REPACK:uGT --python_filename=data.py -n 500 --no_output --era=Run3 --data --conditions=140X_dataRun3_HLT_v3 --customise=L1Trigger/Configuration/customiseUtils.L1TGlobalMenuXML --filein=/store/data/Run2022F/EphemeralZeroBias0/RAW/v1/000/361/468/00000/52351179-2329-47d8-bffc-a01833bb1704.root --nThreads=4 --processName=HLT2
If you get an "import commands" error (this happens with 12_0_X and is taken care of in later releases), replace the "import commands" line in L1Trigger/Configuration/python/customiseUtils.py with "import subprocess" (the "commands" module was removed in Python 3, and it does not appear to be used anywhere).
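The replacement can be done with a sed one-liner; point it at L1Trigger/Configuration/python/customiseUtils.py in your release. It is demonstrated here on a toy copy of the file:

```shell
# Toy stand-in for L1Trigger/Configuration/python/customiseUtils.py
echo "import commands" > customiseUtils.py
# Swap the removed Python 2 module for its Python 3 replacement
sed -i 's/import commands/import subprocess/' customiseUtils.py
cat customiseUtils.py
```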
Note that our purpose here is not to obtain the emulated L1 ntuples, but to obtain the data.py config file on which we will apply the L1 Skim filter (which is why the --customise=L1Trigger/L1TNtuples/customiseL1Ntuple.L1NtupleRAWEMU option was omitted).
Add the following lines at the end of the newly created data.py config file:
process.load('L1Trigger.L1TGlobal.simGtStage2Digis_cfi')
process.load('L1Trigger.L1TGlobal.hackConditions_cff')
process.L1TGlobalPrescalesVetosFract.PrescaleXMLFile = cms.string('UGT_BASE_RS_PRESCALES_L1Menu_Collisions2025_v1_0_0.xml')
process.L1TGlobalPrescalesVetosFract.FinOrMaskXMLFile = cms.string('UGT_BASE_RS_FINOR_MASK_L1MenuCollisions2025_v1_0_0.xml')
process.simGtStage2Digis.AlgorithmTriggersUnmasked = cms.bool(False)
process.simGtStage2Digis.AlgorithmTriggersUnprescaled = cms.bool(False)
process.simGtStage2Digis.PrescaleSet = cms.uint32(1) # 1 corresponds to the prescale column at 2E34
process.simGtStage2Digis.resetPSCountersEachLumiSec = cms.bool(False)
process.simGtStage2Digis.semiRandomInitialPSCounters = cms.bool(True)
cmsRun runFilter_cfg.py
You will get the skimmed file L1.root as output.
Verify that the filter actually worked!
edmFileUtil L1.root
Create an output directory for your future root files:
mkdir -p /path/to/output/dir
Change the nEvents in runFilter_cfg.py to -1
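If runFilter_cfg.py follows the usual CMSSW convention (an assumption; check the actual parameter name in the file), the change amounts to:

```python
# -1 = process all events in the input files (standard CMSSW convention)
process.maxEvents = cms.untracked.PSet(input = cms.untracked.int32(-1))
```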
Create condor jobs for data by running the cmsCondorData.py script, or for MC by running cmsCondorMC.py (will be available soon). There are 3 mandatory arguments and 3 options:
- The 1st argument is always runFilter_cfg.py.
- The 2nd argument is the path to the top of your CMSSW release.
- The 3rd argument is the path to the output directory for your root files.
- The -n option allows you to set the number of input root files processed per job.
- The -q option allows you to set the "flavour" of your job. Each flavour corresponds to a different maximum running time; the default is "workday" (= 8 h). It is recommended to set n = 10-20 and flavour = workday to get decent statistics in each output root file.
- You can use the -p option to attach your grid proxy to your jobs (specify the path to your proxy after -p).
./cmsCondorData.py runFilter_cfg.py <path to your CMSSW src directory> <path to your output directory> -n 20 -q workday -p /afs/cern.ch/user/<first letter>/<username>/private/x509up_<user proxy>
Submit all jobs on Condor:
./sub_total.jobb
When the jobs are done, new filtered RAW root files (containing only events that pass the L1 trigger with the new menu, named L1_0.root, L1_1.root, etc.) will be produced in the output directory.