CDMS Event Builder


Event building refers to the process of assembling data from the various components of an experiment following a trigger, and writing those data to permanent storage. For CDMS, the principal components are the digitizers which record the phonon and charge channels of our detectors, the digitizers which record the stretched signals from the muon veto, and various digital history buffers and scalers.

The schematic below shows some of the components involved with event building on CDMS. Joerger VME digitizers are used to record the phonon and charge channels. Bit3 VME-to-PCI bridges are used to move data from the VME digitizers to "crate PC's". The crate PC's send data via gigabit ethernet to the Event Builder PC. When a trigger occurs, each crate PC waits for the digitizers in its VME crate to finish. Then, the recorded data are DMA'd to local memory, reformatted into the CDMS standard record format, and sent as event fragments to the Event Builder. The Event Builder assembles complete events, consisting of these VME subevents plus other data (veto digitizer readouts, asynchronous monitoring data, error data). These events are accumulated in event files on disk. These files are written to tape at set time intervals and/or after specified event counts.

The critical components of event building for CDMS are the device drivers and software used to move data from the digitizers to the crate PC's, the gigabit ethernet network used to move subevents to the Event Builder, and the event building software which assembles and logs the events.

VME Software

To control and access the data on the digitizers and other VME-based electronics, we use a VME-to-PCI bridge manufactured by SBS. The specific adapter is the Model 620. The programming manual is available here.

Most of the VME interfaces used by CDMS support two VME addressing modes: short I/O, or A16 addressing, and A24 or A32 addressing. Short I/O space is used to access control registers, and A24/A32 are used to access data memory on the cards.

In our Linux driver, we support general access to short I/O space via the mmap system call, executed against an fd resulting from opening /dev/bit3. All of short I/O space is available with this call, so care should be taken to restrict access to the specific registers used by a given card. Here's a small example of code illustrating how to access a CSR on a Comet digitizer, with a base address of 0xA000:

  #include &lt;fcntl.h&gt;
  #include &lt;stdio.h&gt;
  #include &lt;stdlib.h&gt;
  #include &lt;sys/mman.h&gt;

  int mfd;
  unsigned char *physMem;
  unsigned char *csrh;

  mfd = open("/dev/bit3", O_RDWR);
  if (mfd == -1) {
    perror("open /dev/bit3");
    exit(1);
  }

  /* Map one page of short I/O space at the Comet base address 0xA000. */
  physMem = (unsigned char *)mmap(NULL, 4096, PROT_READ | PROT_WRITE,
				  MAP_SHARED, mfd, 0x0000a000);
  if (physMem == MAP_FAILED) {
    perror("mmap /dev/bit3");
    exit(1);
  }
  printf("physMem = %p\n", (void *)physMem);

  csrh = physMem;
  printf("Comet register dump:\n csrh  = 0x%02x\n", *csrh);

Access to A24/A32 data space is implemented in specific drivers. We currently have drivers for Comet and Joerger digitizers. The interface to these drivers is similar. First, the user opens a device corresponding to the digitizer, for example, /dev/joerger2. After a trigger, the user waits for the digitizer to complete its sampling (by reading the appropriate CSR). Then, an ioctl against the device is issued, giving a bit mask specifying which channels to transfer from VME to PCI, and the first and last sample numbers of the transfer. Finally, an mmap system call is given for each channel; the resulting pointer is dereferenced to access the individual samples.
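The open/wait/ioctl/mmap sequence described above can be sketched as follows. The ioctl request code and the layout of its argument structure are hypothetical stand-ins (the real driver defines its own), and the CSR polling step is elided to a comment:

```cpp
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstdio>

// Hypothetical ioctl argument: which channels to DMA from VME to PCI,
// and the first and last sample numbers of the transfer.
struct joerger_xfer {
  unsigned int channel_mask;
  unsigned int first_sample;
  unsigned int last_sample;
};
#define JOERGER_IOC_XFER 0x4a01  /* placeholder request code */

// Returns a pointer to the mapped samples of one channel, or nullptr on
// error. On success, *fd_out holds the open device fd.
unsigned short* read_channel(const char* dev, unsigned int channel,
                             unsigned int first, unsigned int last,
                             int* fd_out) {
  int fd = open(dev, O_RDWR);             // e.g. "/dev/joerger2"
  if (fd == -1) { perror(dev); return nullptr; }

  // (In real use: poll the digitizer's CSR here until sampling is done.)

  struct joerger_xfer xfer = { 1u << channel, first, last };
  if (ioctl(fd, JOERGER_IOC_XFER, &xfer) == -1) { close(fd); return nullptr; }

  // One mmap per channel; the resulting pointer is dereferenced to
  // access the individual 2-byte samples.
  size_t bytes = (size_t)(last - first + 1) * sizeof(unsigned short);
  void* p = mmap(nullptr, bytes, PROT_READ, MAP_SHARED, fd, 0);
  if (p == MAP_FAILED) { close(fd); return nullptr; }
  *fd_out = fd;
  return static_cast<unsigned short*>(p);
}
```

This only illustrates the calling sequence; the real drivers' request codes, argument structures, and mmap offset conventions should be taken from the driver headers.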

In the drawing above, our VME software is used to move data from the VME digitizers into the crate PC's. The data from a given crate forms a subevent. A CDMS event consists of the assembled subevents from each crate, together with other information such as asynchronous monitoring data.

R2DM (Event Builder) Software

The process of assembling subevents into full events is called event building. Most high energy physics experiments require event building, and we have been able to re-use a very general event building software framework from Fermilab's Run II experiments (CDF and D0). This framework is called R2DM. Considerable documentation about R2DM is available here.

R2DM is written in C++. It consists of a number of C++ classes which define the logical pieces of an event builder: Subevent, Subevent Collector, Event Collector, EventManager, EventTimer, EventAssembler, EventDispatcher, ConfigManager, StatisticsCollector, ErrorReporter, StateManager. By itself R2DM is incomplete - it is simply a framework. To use the framework, we write a new set of C++ classes which inherit from the base R2DM classes and which implement details specific to the experiment. As examples, we define a CDMS_Subevent, which describes how CDMS events are organized. We create a CDMS_EventDispatcher, which implements how our data files are organized and written to disk. We define a CDMS_EventAssembler, which details how subevents are concatenated together.
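The inheritance pattern can be sketched as below. The method signatures here are invented for illustration (the real R2DM base classes are considerably richer); only the concatenation behavior of the CDMS assembler is taken from the text above:

```cpp
#include <vector>

// Simplified stand-in for an R2DM framework base class: the framework
// declares the abstract interface, the experiment supplies the policy.
class EventAssembler {
public:
  virtual ~EventAssembler() = default;
  // Combine subevents (raw byte buffers) into one complete event.
  virtual std::vector<char>
  assemble(const std::vector<std::vector<char>>& subevents) = 0;
};

// CDMS-specific policy: subevents are simply concatenated in order.
class CDMS_EventAssembler : public EventAssembler {
public:
  std::vector<char>
  assemble(const std::vector<std::vector<char>>& subevents) override {
    std::vector<char> event;
    for (const auto& s : subevents)
      event.insert(event.end(), s.begin(), s.end());
    return event;
  }
};
```

The framework code holds an EventAssembler pointer and never needs to know which experiment's subclass it is driving; CDMS_Subevent and CDMS_EventDispatcher plug in the same way.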

R2DM is built on top of several other Fermilab-supported products. The use of these products enables R2DM to abstract communications between computers and allows essentially arbitrary scaling of an event builder. The requirements placed on R2DM by the Run II experiments are considerably greater than those of CDMS - for example, both experiments use on the order of 100 VME crates to collect subevents.

Test Results

In terms of deadtime, the most time-critical pieces of event building on CDMS are the transfer of digitizer data to the crate PC's, and the subsequent transfer of the reformatted data (subevents) over gigabit ethernet to the Event Builder PC. In the simplest implementation, these are serial processes - that is, all digitizer data from a given crate are transferred, then transmission over ethernet occurs. Rearming occurs after the completed event is successfully written to disk.

VME to PCI Transfers

The graph below shows the performance of DMA transfers from our Joerger digitizers to a crate PC, as a function of transfer size. The anticipated transfer size for each channel of the digitizer is 2000 samples. Each sample is 2 bytes in size. By design the Joerger interleaves sample pairs in memory, so data transfers always move two channels of data simultaneously. So, the expected transfer rate of approximately 18 MB/sec is found on the graph at a transfer size of 8000 bytes (2000 samples X 2 channels X 2 bytes/sample).

Gigabit Ethernet Transfers

To determine transfer rates over gigabit ethernet, we cabled two PC's back-to-back and transferred subevents of various sizes using a version of R2DM which pulled data "out of the air" (i.e., simply allocated a subevent-sized block of memory and transferred the random data found there) and dropped the data "on the floor" (i.e., the event collector simply received and acknowledged the subevents, and ignored the data). For subevents of size 32 KBytes (8 Joerger channels, 2K samples per channel) R2DM sustained rates of approximately 80 MB/sec (Linux 2.4.4 kernel).
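A rough single-machine analogue of this "out of the air / on the floor" measurement can be made with a socket pair. This measures kernel and memory copy speed over loopback rather than real gigabit ethernet, and it is an illustration of the method, not the R2DM test harness:

```cpp
#include <sys/socket.h>
#include <unistd.h>
#include <chrono>
#include <thread>
#include <vector>

// Send `count` subevents of `size` bytes through a socket pair: the
// sender transmits an arbitrary buffer ("out of the air"), the receiver
// reads and discards the bytes ("on the floor"). Returns MB/sec.
double measure_rate(size_t size, int count) {
  int fds[2];
  if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0) return -1.0;

  std::thread receiver([&] {
    std::vector<char> buf(size);
    size_t total = size * (size_t)count, got = 0;
    while (got < total) {
      ssize_t n = read(fds[1], buf.data(), buf.size());
      if (n <= 0) break;      // discard the data, just count it
      got += (size_t)n;
    }
  });

  std::vector<char> block(size);  // "random" data, never initialized
  auto t0 = std::chrono::steady_clock::now();
  for (int i = 0; i < count; ++i) {
    size_t sent = 0;
    while (sent < size) {
      ssize_t n = write(fds[0], block.data() + sent, size - sent);
      if (n <= 0) break;
      sent += (size_t)n;
    }
  }
  receiver.join();
  auto t1 = std::chrono::steady_clock::now();
  close(fds[0]); close(fds[1]);
  double secs = std::chrono::duration<double>(t1 - t0).count();
  return size * (double)count / secs / 1e6;
}
```

For the 32 KByte subevent size used in the real test, one would call measure_rate(32 * 1024, count); the absolute numbers depend entirely on the host, so only the shape of the rate-versus-size curve is comparable.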


If transfers are treated serially (VME to PCI, followed by PCI to PCI over ethernet), our tests predict an end-to-end rate of 14.7 MB/sec (18 MB/sec combined in series with 80 MB/sec). This exceeds the CDMS specification of 10 MB/sec throughput from VME to disk. This estimate is optimistic for several reasons. If these effects reduce livetime excessively, several solutions are available.
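The 14.7 MB/sec figure follows from treating the two transfers as stages in series, whose per-byte times add:

```cpp
// Two transfer stages in series add their per-byte times, so the
// combined rate is the harmonic combination of the individual rates:
//   rate = 1 / (1/r1 + 1/r2)
double series_rate(double r1_mb_s, double r2_mb_s) {
  return 1.0 / (1.0 / r1_mb_s + 1.0 / r2_mb_s);
}
// series_rate(18.0, 80.0) -> approximately 14.7 (MB/sec)
```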