Tile Calorimeter: Experience with read-out, DCS and online databases
Carlos Solans
IFIC - Universitat de Valencia
On behalf of the TileCal community
ATLAS Overview Week - 3rd October 2006
Outline
DAQ
Databases for DAQ
DVS tests
ROS integration
DCS
Databases for DCS
DCS remote access
Combined run with MDT and RPC
Combined run with LAr
Conclusions
Figure: schematic view of read-out, DCS and databases for the Tile Calorimeter (dataflow: FE, ROD, ROS, CASTOR; DCS: PVSS, Oracle archive; offline: COOL, Athena; configuration: OKS and COOL).
DAQ
Objective
Have personalized DAQ software for the Tile Calorimeter.
Always use the latest TDAQ version available in Point1.
Status
IGUI: We read and publish our own information in IS (Information Service) for run configuration.
ED (Event Dump): We display Tile data in a user-defined panel.
ROD monitoring: We produce test histograms for the moment.
Gnam: We produce histograms at ROS level.
Event Filter: We produce histograms at EB level running HLT.
RCD controllers for ROD, TBM, TTCvi, … to interact with the hardware.
Experience
Reading 50% of the long barrel.
Taking physics and calibration runs every day
Using ROD and ROS systems
Transferring data to CASTOR
Figures: ED user-defined panel; number of events vs ROD input channel (Gnam monitoring histogram); Event Filter ATLANTIS event display; IGUI user-defined panel and main window.
Databases for DAQ
Configuration Database (OKS) via RDB
Objective
Describe the hardware and software used for the DAQ of the Tile Calorimeter.
Status
Only half of the detector described
Size: 2.5 MB
Number of objects: ~1200
Number of segments: 33
Experience
We are using OKS2COOL to store in COOL a copy of the database used for the run.
Every day we produce at least one new version, thanks to the flexibility of the database.
Commissioning Databases
Objective
Provide overview and storage of the commissioning status.
Status
Run Information: MySQL database storing our own run information.
Elog: electronic logbook storing activities in Point1.
WIS: web interface for shifters.
TileComm Analysis: commissioning results storage.
Experience
We have an everyday picture of the commissioning status.
Figures: Run Information Database, ELOG, web interface for shifters, TileComm Analysis; schematic view of the uses of the Configuration Database (ROD, TTC, MON, ROS, OKS).
Databases for DAQ II
Optimal Filtering Constants for ROD
Objective
On the prepareForRun transition the ROD loads the optimal filtering constants to be used in physics runs.
The constants are stored in a file.
The location of the file is defined in the OKS Configuration Database.
The file is written from the values extracted from COOL.
Values in COOL are updated after a calibration run and the offline computation of the optimal filtering constants.
Granularity of the data: one set of optimal filtering constants per Super Drawer.
Status
We can store the values into COOL and read them from Point1.
The experience is complicated: we need to use both offline and online database software.
Figure: schematic view of the ROD Optimal Filtering Constants path (COOL, OKS Configuration Database, Athena, ROD, MON, ROS, CASTOR).
Data volume for the whole Tile Calorimeter (Optimal Filtering, no iterations): 6.4 kB/Super Drawer * 256 Super Drawers = 1.6 MB
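A minimal read-back sketch in Python (PyCool), assuming a hypothetical COOL folder /TILE/OFC with one channel per Super Drawer and a payload field named OfcData; the folder, field and file names are illustrative, not the actual TileCal conventions:

# Sketch only: read per-Super-Drawer optimal filtering constants from COOL
# and dump them to the file whose location the OKS database points the ROD to.
from PyCool import cool

dbSvc = cool.DatabaseSvcFactory.databaseService()
db = dbSvc.openDatabase('sqlite://;schema=tileofc.db;dbname=COMP200', True)
folder = db.getFolder('/TILE/OFC')            # hypothetical folder name

run = 12345
since = run << 32                             # run/LB folders encode the IOV as (run << 32 | LB)

with open('ofc_constants.dat', 'w') as out:   # location would come from OKS
    for channel in range(256):                # one COOL channel per Super Drawer (assumption)
        obj = folder.findObject(since, channel)
        out.write(str(obj.payload()['OfcData']) + '\n')   # 'OfcData' is a made-up payload field

db.closeDatabase()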
DVS tests
Objectives
Trace problematic super drawer modules without running the whole TDAQ software.
Check stability of Super Drawers and LVPS
Requirements
The tests are described in the OKS configuration Database.
Results should provide quick problem spotting via
Filename tag
Colors
Experience
DVS has proven to be a stable way of running tests on multiple computers
TTC crate
ROD crate
The use of ROOT for the analysis of the results is slow.
Other, more optimal, analysis methods are under investigation.
The number of objects in the database will be large.
4 types * 8 sectors * 4 partitions = 64 objects for ROD tests.
4 types * 64 modules * 4 partitions = 512 objects for integrator tests.
Figures: DVS GUI; CIS sample and noise results.
ROS integration
Objectives
Use ROS system for commissioning
Gradually test high-rate acquisition.
Status
Data taking at 1kHz
Keep ROS upgraded to latest firmware
Always using latest TDAQ release.
Experience
Straightforward integration
64 ROLs needed for the whole Tile Calorimeter (64 ROLs * 4 Super Drawers/ROL = 256 Super Drawers)
16 ROLs being used (64 Super Drawers)
Only found one dirty ROL, which was fixed by cleaning.
High rate tests: 30kHz reached.
Figure: schematic view of the ROS integration for TileCal (Super Drawers, ROD, ROL, ROBin, PU, ROS); 4 Super Drawers = 1 ROL.
ROL data bandwidth at 1 kHz:
Physics: ~180 kB/s
Calibration: ~300 kB/s
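A few lines of Python reproducing the ROL bookkeeping on this slide; the per-ROL bandwidth figures are the ones quoted above, everything else is plain arithmetic:

# ROL bookkeeping for the TileCal ROS integration (numbers from this slide)
SUPER_DRAWERS_TOTAL = 256
SD_PER_ROL = 4

rols_full_detector = SUPER_DRAWERS_TOTAL // SD_PER_ROL   # 64 ROLs for the whole Tile Calorimeter
rols_in_use = 64 // SD_PER_ROL                            # 16 ROLs for the 64 Super Drawers read now

physics_rol_kbs = 180        # per-ROL bandwidth at 1 kHz, physics
calibration_rol_kbs = 276    # per-ROL bandwidth at 1 kHz, calibration (~300 kB/s quoted above)

print(rols_full_detector, rols_in_use)
print('Physics into ROS:     %d kB/s' % (rols_in_use * physics_rol_kbs))
print('Calibration into ROS: %d kB/s' % (rols_in_use * calibration_rol_kbs))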
DCS
Currently the PVSS project running on the computers in USA15 controls:
LVPS
Cooling of Super Drawers.
The PVSS project was re-started from scratch around March.
New features:
View of the whole side of the barrel.
Yellow colouring of the LVPS when powered but not correctly trimmed.
The software is still under development.
Next steps:
Control the HVPS from the same project.
Communication with the DAQ.
Figure: DCS LVPS PVSS project panel.
Databases for DCS
Configuration Database
Objectives
Store configuration constants for the LVPS in the Database.
Access the Database from the Lab in building 512 and from USA15.
Status
Already working for the long barrel (sides A and C)
We can store and retrieve calibration constants for the LVPS from the Oracle Database using JCOP Framework.
Experience
8 calibration constants / LVPS * 64 LVPS installed = 512 calibration constants already in the Configuration Database.
Next step
Storing and retrieving configuration constants (DAC values) for the LVPS. Also 8 DAC values/LVPS.
Figure: Configuration Database for PVSS, accessed from the lab in building 512 and from USA15.
Data volume for the whole Tile Calorimeter: 16 constants/LVPS * 256 LVPS = 4096 constants
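The real access path is the JCOP Framework configuration-DB tools inside PVSS; purely to illustrate the kind of data involved, a hedged Python sketch against a hypothetical table TILE_LVPS_CONFIG (one row per LVPS and constant); table, column and connection names are assumptions:

# Sketch with a hypothetical schema; in reality the JCOP Framework handles this inside PVSS.
import cx_Oracle

conn = cx_Oracle.connect('tile_dcs', 'secret', 'atlas_dcs_db')   # placeholder credentials/DSN
cur = conn.cursor()

# 8 calibration constants + 8 DAC values per LVPS, 256 LVPS for the whole Tile Calorimeter.
cur.execute(
    'SELECT constant_name, value FROM TILE_LVPS_CONFIG WHERE lvps_id = :id',
    id=42)
for name, value in cur:
    print(name, value)

cur.close()
conn.close()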
Databases for DCS II
Conditions Database
Objectives
Replace the home-made ROOT analysis tool used to check the stability of the LVPS every day.
Store DCS data into Oracle archive.
Copy part of the Oracle archive data into COOL for offline analysis.
Status
We have Oracle archive working for long barrel side A.
We are already producing 12M entries a day.
Experience
Performance problems using external database access for PVSS (too much CPU load).
Next steps
Move data from Oracle archive to COOL.
This has been decided but not yet tried.
Figure: current home-made conditions tool for DCS (text files, local SQL server) and the online/offline path.
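A minimal sketch, under assumptions (a single-version, time-indexed COOL folder /TILE/DCS/LVPS with one channel per LVPS and illustrative payload fields), of what moving one DCS reading from the Oracle archive into COOL could look like:

# Sketch only: insert one DCS measurement into a hypothetical COOL folder.
from PyCool import cool

dbSvc = cool.DatabaseSvcFactory.databaseService()
db = dbSvc.createDatabase('sqlite://;schema=tiledcs.db;dbname=COMP200')   # fresh sqlite file for the sketch

spec = cool.RecordSpecification()
spec.extend('voltage', cool.StorageType.Float)        # illustrative payload fields
spec.extend('temperature', cool.StorageType.Float)

folderSpec = cool.FolderSpecification(cool.FolderVersioning.SINGLE_VERSION, spec)
folder = db.createFolder('/TILE/DCS/LVPS', folderSpec,
                         'LVPS readings copied from the Oracle archive', True)

payload = cool.Record(spec)
payload['voltage'] = 7.1
payload['temperature'] = 32.5

since = 1159870000 * 1000000000         # DCS folders are usually indexed by time in ns (assumption)
until = cool.ValidityKeyMax
folder.storeObject(since, until, payload, 3)   # channel 3 = one particular LVPS (assumption)

db.closeDatabase()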
DCS remote access
Objective
Control and monitor the DCS PVSS project remotely.
Similar to the TDAQ software components (igui_start, is_monitor, …).
Start using the satellite control room.
Experience
From Linux: remote desktop access to cerntsatldcs01 allows one to visualize the PVSS project and interact with it.
Next step
From Windows: create a user interface panel that connects to an existing PVSS project already running on another machine.
Figure: remote desktop access to a PVSS project.
Combined run with MDT and RPC
Readout
Tile LBC13-20 and LBA45-52.
Using ROD and ROS systems for Tile
Trigger
RPC trigger from Sector 13
Rate ~ 10Hz
Trigger/busy signal distributed from CTP
No BCID synchronization due to different handling of the BCID reset for each system.
Tile: BCID reset every orbit
Muon: Free BCID
Experience
We were able to see in the Tile Calorimeter the muons triggered by the RPC (low-energy muons).
We need a mechanism to synchronize with the muon systems.
Saw muons using GNAM monitoring at ROS level.
Very little time to run.
Figure: offline event display for a run with muons.
Combined run with LAr
Readout
Tile LBC13-20 and LBA 45-52
Using ROD/ROS for both Tile and LAr
Trigger: Using Tile trigger towers
4-Top: LBC 15,16,17,18 4-Bottom: LBA 47,48,49,50
We lost HV in LBC18, LBC16 tripped HV, and now we lost LBC17
2 muons / minute ~ 100% purity
Using special coincidence boards
Synchronization via master-slave LTP chain
Experience
Configuration databases integration issue:
After changing something in the segments & resources panel we had to commit the changes several times.
Sometimes it timed out; probably the RDB server takes more time than what is specified in the timeout parameters.
Online histogramming and event display tested successfully.
Constant difference in BCID (26 BC) for events coming into the EB from the Tile and LAr segments.
This indicates that all subdetectors should have the same policy for BCID resets, i.e. issued at the same time after the orbit clock.
Data could be reconstructed offline successfully.
Figures: event display for the combined run with LAr; energy deposition in TileCal towers (MeV); GNAM monitoring at ROS level.
Conclusions
DAQ status for the Tile Calorimeter is close to ATLAS operation.
ROD and ROS systems being used daily.
DCS is evolving in the right direction; more development has to be done.
Starting to use online Databases.
Combined runs, especially with LAr, have been very encouraging for the future.
Looking forward to new combined runs as soon as possible.
General policy for BCID reset handling should be defined.
Backup slides
Read-out status
Reading 50% of the long barrel
Using ROD and ROS systems
Using TDAQ software for Read-out
Always the latest version available in Point1
Predefined set of tests
Predefined list of runs
Data transfer to CASTOR
Running at 1kHz
Using PVSS project for LVPS control
DAQ OKS Database
The TileSegment structure is still runtime dependent.
Many changes can be made to the segments to have different hardware running.
We should think about how to remove from the constant structure the runtime dependencies, which should not be copied to Oracle.
Figure: DAQ OKS database, September 2006.
WIS and TileCommAnalysis
LTP chain
Tile Barrel A LTP master drives 4 slaves.
Long cable between Ext Barrel and LAr.
L1As on channel 0, as will be from CTP.
Orbit drives the BCR; difference without adjustment: 26 BC. When trying to adjust inhibits/delays, we get the difference to 0, but sometimes miss a full orbit.
Next steps: understand the TTCvi settings, then move to the CTP.
Combined runs milestones
August 24: first combined cosmic-ray muon run LAr + TileCal
Following tests of combined DAQ runs with pedestals
LAr:
HV system @ 2 kV for many hours (overnight)
LVPS operation
Final services
TileCal:
LVPS electronics noise level back to nominal
After addition of ferrite coils
Muon signal clearly visible
Final services
First use of L1 Calo trigger patch panels
Trigger via master-slave LTP chain
(also used CTP for TileCal + MDT + RPC)
Readout via final ROD/ROS
Combined DAQ partitions, TDAQ-01-06-00
Following previous tests of combined pedestal runs
See talk in TMB by H.Wilkens & O.Solovianov
Trigger via TileCal Chicago coincidence trigger boards
Typical rate 2 muons / minute, purity ~ 100%, 10 K evts
Central operation from control room
Online monitoring
Real-time event display in control room
Real-time display of Landau peak from muon energy loss in TileCal
Thanks to efforts & collaboration of many LAr + TileCal colleagues
First Combined TileCal + LAr cosmic run: 24-Aug-06
TileCal trigger (single-tower): 1-Top: LBC 15, 17, 16 (no HV), 18 (BCID problem), readout LBC13-20; 4-Bottom: LBA 47, 48, 49, 50, readout LBA 45-52
Plan: ramp LAr HV, start trigger, time in LAr pulse, try combined run, test monitoring, reconstruct offline, …
Figure: setup sketch (Tile LBA and LAr A-side, LAr crate with LVPS, bottom LBA); 6/8 modules triggered, 16 read out.
Data volume rates at 1 kHz
Physics
ROD input from Super Drawers
0xB6 words * ¼ B/word * 1kHz = 45 kB/s
ROD output to the ROS Read-Out Link (ROL)
4 SD/ROL * 45 kB/s = 180 kB/s
Calibration
ROD input from Super Drawers
0x116 words * ¼ B/word * 1kHz = 69 kB/s
ROD output to the ROS Read-Out Link (ROL)
4 SD/ROL * 69 kB/s = 276 kB/s
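The same numbers in a few lines of Python, turning the quoted per-event word counts into rates (the byte-per-word factor is taken from the slide as written):

RATE_HZ = 1000
BYTES_PER_WORD = 0.25          # as quoted on this slide (1/4 B/word)

physics_words = 0xB6           # 182 words per Super Drawer per event
calibration_words = 0x116      # 278 words per Super Drawer per event

physics_kbs = physics_words * BYTES_PER_WORD * RATE_HZ / 1000          # ~45 kB/s per Super Drawer
calibration_kbs = calibration_words * BYTES_PER_WORD * RATE_HZ / 1000  # ~69 kB/s per Super Drawer

# The slide rounds the per-Super-Drawer values first, giving 180 and 276 kB/s per ROL.
print('Per ROL (4 Super Drawers): %.0f kB/s physics, %.0f kB/s calibration'
      % (4 * physics_kbs, 4 * calibration_kbs))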
Optimal Filtering Constants
No iterations
32 phases * 34 words/phase/channel * 48 channels/module = 51 kbit/module = 6.4 kB/module
6.4 kB/module * 256 modules = 1.6 MB
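And the optimal-filtering data volume, again just restating the slide's arithmetic; the 6.4 kB/module figure is taken directly from the line above:

PHASES = 32
WORDS_PER_PHASE_PER_CHANNEL = 34
CHANNELS_PER_MODULE = 48
MODULES = 256

per_module = PHASES * WORDS_PER_PHASE_PER_CHANNEL * CHANNELS_PER_MODULE   # 52224, quoted above as ~51 kbit
per_module_kb = 6.4                                                        # per Super Drawer, from the slide

print(per_module, per_module_kb * MODULES / 1000, 'MB')   # ~1.6 MB for the whole Tile Calorimeter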