
Ciencia e Ingeniería Neogranadina

Print version ISSN 0124-8170

Cienc. Ing. Neogranad. vol.28 no.2 Bogotá July/Dec. 2018

https://doi.org/10.18359/rcin.2854 

Artículos

CONFIGURATION OF OPERATION MODES FOR A SCINTILLATING FIBER SUB-DETECTOR IN THE LHCb EXPERIMENT

CONFIGURACIÓN DE LOS MODOS DE OPERACIÓN PARA UN SUBDETECTOR DE FIBRAS CENTELLEANTES EN EL EXPERIMENTO LHCb

Tomás Sierra-Polanco* 

Diego Milanés** 

Carlos E. Vera*** 

* Electronic Engineer, Master's in Science, Physics Associate Professor. Universidad del Tolima / Universidad de Ibagué. E-mail: tomassierrapolanco@gmail.com. ORCID: 0000-0001-5765-8928

** Physicist, PhD, Staff Professor. Universidad Nacional de Colombia. E-mail: damilanesc@unal.edu.co. ORCID: 0000-0001-7450-1121

*** Physicist, PhD, Staff Professor. Universidad del Tolima. E-mail: cvera@ut.edu.co. ORCID: 0000-0002-3337-4362


ABSTRACT

Data acquisition boards will take advantage of the LHCb upgrade between 2018 and 2019, during the Long Shutdown 2 of the CERN experiments. These improvements aim at the configuration and restructuring of the data acquisition techniques in view of the increased luminosity and the current center-of-mass energy. Accordingly, we document the current condition of the detector, its acquisition techniques and its protocols. This paper emphasizes the Scintillating Fiber (SciFi) detector, one of LHCb's future sub-detectors, which is in charge of trace pattern recognition and of recording significant events in data transmission. It also shows, step by step, the modifications made to the codes to move from Standard Mode to Wide Bus Mode, increasing the data rate by reducing control bits. This improvement enlarges the amount of analyzable event data.

Keywords: code configuration; LHCb; particle detectors; scintillating fibers; VHDL programming

RESUMEN

Las tarjetas de adquisición se servirán de la actualización del LHCb, que tendrá lugar en el periodo entre 2018 y 2019, durante el Long Shutdown 2 de los experimentos del CERN. Estas mejoras apuntan a la configuración y reestructuración de las técnicas de adquisición de datos, debido al incremento en la luminosidad y correspondientemente con su energía de centro de masa actual. Por lo tanto, se documenta la condición del detector, sus técnicas de adquisición y sus protocolos. Se hará énfasis en las Fibras de Centelleo (SciFi), uno de los futuros subdetectores del LHCb, encargado del reconocimiento de patrones de trazas, basado en los cruces o impactos que ocurren sobre sus fibras, además del registro de eventos significativos en la transmisión de datos. Este artículo presenta paso a paso las modificaciones aplicadas a los códigos para pasar del Modo Estándar al Modo de Bus Ampliado, incrementando la tasa de datos mediante la reducción de bits de control, para acrecentar el número de información analizable sobre los eventos.

Palabras Clave: configuración de código; detectores de partículas; fibras centelleantes; LHCb; programación en VHDL

INTRODUCTION

This article is part of the technological development and new findings in the field of high energy physics applied to data acquisition for the detection of elementary particles. These developments were achieved in cooperation with CERN, based on the LHC experiments, particularly LHCb, and carried out together with Universidad del Tolima, Universidad Nacional de Colombia and Universidad de Ibagué. The work took place at the LPNHE (Laboratoire de Physique Nucléaire et de Hautes Énergies) in Paris, France, to assist in the development of programming codes that optimize data acquisition.

To take advantage of the modifications planned for the LHCb upgrade during the Long Shutdown 2 (LS2), from 2018 to 2019, which will improve acquisition capacity and storage, the event-detection codes were reworked. This paper presents the operation mode configurations for acquisition boards to achieve higher efficiency using shorter sampling times. The main feature of the LHCb upgrade will be an increase in the instantaneous luminosity by a factor of five compared to the current one, enhancing the number of collisions and increasing the accumulation of possible valid events [1-2]. This will strengthen the experiment's statistical reach: with more data, the mathematical treatment and analysis become more accurate, so the particle physics conclusions drawn at CERN rest on more meaningful information.

There are several types of sub-detectors in LHCb. This paper focuses on the Scintillating Fiber (SciFi) tracker, one of the new intermediate layers of the future detection system. The SciFi is in charge of trace pattern recognition based on the hits it receives, and of recording significant events in data transmission. The proximity of the detection layers to the beam-pipe determines their event density, so more than one operation mode is needed, chosen according to the occupancy at each Scintillating Fiber module's location [2-3]. For this reason, an alternative operation mode is tested as an improvement over the standard one, as discussed below.

Code configurations were written in the VHDL programming language used in CERN's experimental processes because of its short sampling times. This language is useful given the near-light-speed particles circulating at CERN: it makes it possible to collect more data and, with more data, the statistics issue discussed above becomes easier to solve [4-5]. The experimental factor is decisive in consolidating scientific theory, which is the raison d'être of specialized research centers in different areas. Consequently, this work must take place at CERN's laboratories within the LHCb collaboration [6-7-8-9-10].

Two operation modes were studied in this work: Standard Mode and Wide Bus Mode. Each has different features useful for detecting and recognizing events; the difference between them is the distribution of bits in the transmission and reception frames. The Standard Mode has a greater number of control bits, while the Wide Bus Mode has a greater number of event data bits. Both modes use 120 bits per frame, distributed into four sections: Header (H, 4 bits), which identifies the first bits of a frame for synchronization; Slow Control (SC, 4 bits), used for processes without demanding recognition times; User Data (D, width depends on the mode), for event transmission; and Extra User Data (ED, width depends on the mode), as shown in Fig. 1 [11-12]. The ED field can be used as a control scheme instead of enlarging the User Data. Because of Single Event Upsets (SEUs), the mode setting is defined by the radiation levels to which the boards are exposed.

Fig. 1 Encoding frame. Adapted from CERN/LHCb, 2014 [11] 
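As a rough illustration of this layout, the following VHDL package slices a 120-bit frame into the sections described above. It is only a sketch: the constant and subtype names are hypothetical and the bit placement (Header in the most significant bits, then SC, then the payload) is an assumption, not taken from the GBT-FPGA code.

    library ieee;
    use ieee.std_logic_1164.all;

    package gbt_frame_layout is
      -- Field widths taken from the frame description above
      constant FRAME_WIDTH   : natural := 120;
      constant HEADER_WIDTH  : natural := 4;   -- H
      constant SC_WIDTH      : natural := 4;   -- SC (IC + EC)
      -- User Data and Extra User Data share the remaining 112 bits;
      -- how they are split depends on the operation mode
      constant PAYLOAD_WIDTH : natural := FRAME_WIDTH - HEADER_WIDTH - SC_WIDTH;

      subtype gbt_frame_t   is std_logic_vector(FRAME_WIDTH - 1 downto 0);
      -- Assumed bit placement: Header in the MSBs, then SC, then the payload
      subtype header_field  is std_logic_vector(119 downto 116);
      subtype sc_field      is std_logic_vector(115 downto 112);
      subtype payload_field is std_logic_vector(111 downto 0);
    end package gbt_frame_layout;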

The aim of the LHCb experiment is to study CP violation in the decays of flavor-changing particles, based on the CKM matrix. These studies require data acquisition methods. Most of them belong to B physics, where the probability that an event leads to conclusions on matter-antimatter asymmetry is higher than elsewhere, at a comparatively lower operational cost [13-14-15]. There are different types of rare decay channels for studying heavy-flavor physics through the transformation of B mesons into other particles. Their lifetimes are extremely short, so a high sampling frequency is needed for trace reconstruction [7-16-17-18-19]. Plentiful data are advisable for analyzing particle behavior, since they yield more reliable statistics. The sub-detectors are arranged perpendicular to the incident beam-pipe, which eases the study of electroweak interactions, charged currents and QCD [6-7-18].

Due to the increased luminosity after the LS2, the data acquisition boards have to be improved not only in their hardware but also in their inner codes. Currently, an integrated luminosity of 8 fb⁻¹ is expected by the end of RUN2, adding up the RUN1 and RUN2 results of 3 fb⁻¹ and 5 fb⁻¹, respectively. This project is part of the LHCb upgrade, in which the integrated luminosity is expected to reach 50 fb⁻¹, with an instantaneous luminosity of 2×10³³ cm⁻²s⁻¹ for RUN3. The greater the luminosity, the higher the probability of finding valid events. In RUN2 the readout frequency is 1 MHz but, because of the increased luminosity, it is necessary to raise it to 40 MHz, i.e. one data frame every 25 ns, which will widen the accessible decay channel range. These changes matter for the trigger system because of the implementation times: if the data acquisition rate is higher, the trigger must be managed to keep robust control of the events and their classification [1-3-20]. This matter is discussed in the next chapter.
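For reference, the frame period implied by the 40 MHz readout and the integrated-luminosity sum quoted above work out as

\[
T_{\text{frame}} = \frac{1}{40\,\text{MHz}} = 25\,\text{ns}, \qquad
\mathcal{L}_{\text{RUN1}} + \mathcal{L}_{\text{RUN2}} = 3\,\text{fb}^{-1} + 5\,\text{fb}^{-1} = 8\,\text{fb}^{-1}.
\]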

The document is divided into three chapters describing the procedure of the project. The introduction explains the purpose of the changes and the tools considered for their consolidation. Chapter one, "Materials and methods", studies the Scintillating Fiber sub-detector and the digital electronics used, its operation modes and the respective modifications. The second chapter, "Results and discussion", describes the programming configurations in detail; the results are explained in an accessible way, so it is not necessary to check the code itself. Finally, the conclusions gather the hypothesis, background and results.

2. MATERIALS AND METHODS

The boards used in transmission and reception were Stratix V GX FPGAs from Altera, equipped with high-speed optical fiber connections. The Stratix V GX is radiation efficient and is thus built like an ASIC [3-21-22]. To handle the new sampling frequency, a new tracker must be implemented in the intermediate stages of detection, which introduces the SciFi in this upgrade. Furthermore, the SciFi aims to improve the detector's acceptance. The SciFi is composed of Silicon Photomultipliers (SiPMs) coupled to 2.5 m long, 250 µm diameter fibers, arranged in parallel so that, when a particle trace or any event crosses them, they generate an oscillating signal, processed with a PLL for better definition and precision. The SiPMs sit inside the Readout Boxes at the top and bottom of the detector, where the pixels are located [1-3-20-23-24-25]. All the modifications in this project were made for this sub-detector and, for this purpose, many protocols and sub-routines were changed.

The improved signal trigger affords a higher quantity of recorded data from the boards and better use of storage resources. For that purpose, a Low-Level Trigger (LLT) has been considered for implementation. The LLT determines the readout capacity and the usefulness of events, saving buffer space by discarding first-trigger decisions that do not seem to correspond to relevant data [3-20-26-27].

2.1. Scintillating fibers

Considering the difficulties of transporting large components into the LHCb cavern, the SciFi must be constructed in a modular way. It is built in three stations with four layers per station, and each layer with four quadrants. Each quadrant is composed of 96 SiPM arrays, with 128 channels per array, for higher precision in event location. The SiPMs are solid-state trace-detection devices, considered active elements because they produce an optical oscillating signal [1]. The SiPM modules are built from two matrices of 64 silicon channels coupled in packages. Each pixel measures 57.5 x 62.5 µm and, since a channel has 96 pixels, its dimensions are 0.25 mm x 1.5 mm, depending on the pixel to be implemented [1-28-29]. These SiPM arrays are 2.5 m long, so they can be organized as a 5 m x 6 m module for better acceptance. The SciFi characterization is defined through simulation and the implementation of commercial photomultipliers in order to weigh any advantages or disadvantages in detection [23-24-30].

As can be noted in Fig. 2, the noise corresponds to after-pulses, one of the least desired and most frequent contributions to the SiPM background noise, caused by the emission of a photoelectron. One of the most important mechanisms producing this phenomenon is residual-gas ionization, which accelerates a photoelectron within the photomultiplier. Back-scattered electrons also generate after-pulses in the dynodes (photomultiplier tube electrodes), returning after traveling for a while inside the SiPM [30-31]. SciFi layers are equidistant from one another within the global system for better detection resolution [1].

Fig. 2 Scintillating Fibers arrangement with occupancy levels versus SiPM ID chart. Adapted from CERN/LHCb, 2014 [30] 

Each station is distributed into four detection layers whose coordinate labels (X,U,V,X) correspond to SiPM orientations of (0°,+5°,-5°,0°) with respect to the y axis, for better precision in data acquisition. The first and last layers show no inclination, whereas the intermediate ones are tilted by +5° and -5°, respectively. These inclinations allow a more detailed determination of the actual points where traces occur, by computing their intersections and comparing the results with the stations close to each channel.

The size of each SciFi sub-module is 540 mm x 4,835 mm, as displayed in Fig. 3 [1-23]. The signal of one particle in the SciFi is typically registered by two or more detection channels, and a clustering algorithm is needed to combine the signals of those channels because each pixel can detect only one photon. Collisions are detected in different layers of the experiment, each with a specific task in this complex endeavor. On average, there are 1.6 interactions per Bunch Crossing (BCX), even though this is expected to rise to between 3.8 and 7.6 [6].

Fig. 3 The dimensions of a module, as described in the simulation, and the definition of stereo angles. The size of the dead material is increased to make it visible. Adapted from CERN/LHCb, 2014 [1] 

2.2. FE - BE communication through the GBT

Data acquisition at CERN is implemented in VHDL because of its advantages in sampling and code-load efficiency. The Stratix V GX works under the name MiniDAQ and is mounted on structures known as AMC40. These boards are part of the three stages of data transmission and reception at the LHCb cavern: Front End (FE), Back End (BE) and the Gigabit Transceiver (GBT) bidirectional trigger and data link [32-33]. The codes in the final programming system can be written in different languages such as Verilog or C but, in the end, the final code is translated into VHDL. There are two central repositories, GIT and FORGE, in which the codes are built in collaboration with people around the world [12-34-35].

There are two big detection regions in LHCb: On-detector and Off-detector, as shown in Fig. 4. The FE is located On-detector and represents the interface between the experiment and data transmission. The Off-detector part is in the control rooms, where the BE is situated. ASICs (Application Specific Integrated Circuits) must be used in the On-detector region because of the SEUs there; ASIC devices are highly robust against radiation. For the BE, COTS (Commercial Off-The-Shelf) components are used to reduce costs, given their location. Their connections are made through versatile links [36].

Fig. 4 Link architecture with the GBT chip set and the versatile link. Adapted from CERN/LHCb, 2015 [36] 

There are some blocks in the codes that have to be implemented for synchronization, trigger, data acquisition and slow control (SC). Some others are used in the coupling of modules, such as TIA (Trans-Impedance Amplifier), PD (PIN Diode) and LD (Laser Driver) [36].

The control system acts on the FE, BE and GBT to handle transmission and reception errors. The GBT requires control systems, such as the Timing and Fast Control (TFC) and the Experiment Control System (ECS), for data synchronization and organization before sending the frames. The most important processes for the transmission of 120-bit frames happen in the BE: decoding, data alignment, BCX ID, LLT and MEP (Multi Event Packet) building. The LLT is useful for collecting and regulating buffers, and a throttle is in charge of moderating data validity [37].

Communication between FE and BE through the GBT runs over 10-Gigabit Ethernet. Two links are analyzed, the uplink (from FE to BE) and the downlink (from BE to FE), shown in Fig. 5. The GBTIA for the downlink and the GBLD for the uplink serve as couplings. Several processes take place between these stages. In the uplink, the Scrambler/Encoder is implemented to achieve DC balance; the Forward Error Correction (FEC) then sends the signal into the Serializer for transmission through the GBLD. The downlink is established through the GBTIA, coupled to the Clock and Data Recovery (CDR) as high-speed serial information, and a Decoder/Descrambler performs tasks similar to the Scrambler/Encoder, but in reverse. The downlink transmits the TFC+ECS protocols. Versatile links are used in the FE-GBT and BE-GBT interconnections, using LVDS for signal control [36].

Fig. 5 GBT architecture and interfaces. Adapted from CERN/LHCb, 2015 [36] 

The FEC is built from a Reed-Solomon (RS) Encoder and Decoder with double interleaving to deal with burst errors. It has great correction capacity but uses part of the frame for control bits. This is a disadvantage because 32 of the 120 bits carry something other than event data, reducing the number of possible clusters per frame and making it less efficient; nevertheless, this control protects the transmitted bits. The Scrambler uses a balancing system that takes pseudo-random values with defined patterns from Boolean operations. This module, known as Scrambling Constants, ensures a proper distribution of 0's and 1's in the data flow. Scrambling must take place before the RS encoder and descrambling after the RS decoder [36].

The GBT can be configured as a bidirectional transceiver or as a unidirectional transmitter or receiver [38]. For the LHCb upgrade there is a new TFC, named Super TFC (S-TFC), which controls every state of the readout and synchronization links and oversees valid-data control and throttle management. A Stand-Alone system allows the autonomous operation of one sub-detector or any group of them, with a special operation mode independent from the others, also known as partitioning [37].

For the communication between boards, a standard module has been designed under the name of Advanced Mezzanine Card 40 (AMC40). These structures are built to organize external links and cabling for better control and synchronization in the readout. One AMC40 is composed of Stratix V GX, configuration ports, a power supply, Ethernet communication, data acquisition ports for the GBT, clock circuitry and LED user interfaces. This configuration enables easier connections and standardizes the modeling system so that the connections remain even when the operation configuration varies [12].

2.3. Operation modes

Operation modes define how the data acquisition boards are implemented, so their configuration is of the utmost importance to this research. Current acquisition models and the changes needed to improve data collection were studied. Even though the boards share a similar hardware construction, they are configured depending on their occupancies. Each of them works with transmission times of 25 ns per frame, in accordance with the 40 MHz sampling frequency. The principal aim of this work is to suppress the large number of control bits in the frame, so that more information is sent as User Data [36].

Tight data-transfer control is the central feature of the Standard Mode, which can be costly because the bits devoted to control cannot be used for data transmission or clusters. Nonetheless, it is relevant due to the high radiation levels in the high-occupancy regions of the detectors. The bit layout of the frame is described in Fig. 6 and organized as follows: 4 bits of Header; 4 bits of SC, divided into Internal Control (IC) and External Control (EC); 80 bits of User Data (clusters and events); and, finally, a 32-bit FEC. The efficiency is therefore about 66.6 % per cycle, corresponding to the ratio between the 80 bits available for clusters and the total number of bits (80/120). The user transmission bandwidth is 3.2 Gb/s, while the bandwidth for the total frame is 4.8 Gb/s [36].

Fig. 6 GBT frame structure. Adapted from CERN/LHCb, 2015 [36] 

The Wide Bus Mode has a different distribution: the frame keeps the same 8 control bits (Header and SC) and carries 112 bits of User Data. This change is relevant because the data-rate efficiency increases from 66 % to over 93 %, given by the ratio between user data and the total number of frame bits (112/120). Although the efficiency increases, control decreases drastically. This distribution is shown in Fig. 7. The new efficiency allows the buffer to discard all the idle data and enables the system to transmit a higher user bandwidth of 4.48 Gb/s. With this modification, the code blocks vary notably. The Header carries a data-valid signal, useful in frame transmission and reception for precise processing. The signals must be DC balanced, which is one reason why scrambling must be present whether there is a FEC or not. Although the Wide Bus Mode is used for the uplink, the downlink still uses the Standard Mode format to send control to the caverns; in the uplink, the Wide Bus Mode is used for cluster transmission from FE to BE, since there is no need to send clusters through the downlink [36].

Fig. 7 Wide frame format on uplink. Adapted from CERN/LHCb, 2015 [36] 
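The efficiencies and user bandwidths quoted for the two modes follow from the same 120-bit frame and the 4.8 Gb/s line rate:

\[
\epsilon_{\text{Standard}} = \frac{80}{120} \approx 66.6\,\%, \quad \epsilon_{\text{Standard}} \times 4.8\,\text{Gb/s} = 3.2\,\text{Gb/s};
\qquad
\epsilon_{\text{Wide Bus}} = \frac{112}{120} \approx 93.3\,\%, \quad \epsilon_{\text{Wide Bus}} \times 4.8\,\text{Gb/s} = 4.48\,\text{Gb/s}.
\]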

The Stratix V GX is capable of immediate simulation. In this respect, building data acquisition codes has the advantage of a preview analysis to verify compliance with CERN protocols. Test benches are proposed to estimate how these systems would behave under protocol conditions. The simulations are driven from a CCPC (Credit Card PC) with SC by a Generic Data Generator, which sends information to the FE; this information is configured via a .txt document. Simulations are indispensable to avoid unnecessary execution costs, thanks to the preliminary revision of the established formats. The configuration is loaded through a USB Blaster, so there is no need to disconnect the boards from the AMC40 to simulate, compile and program [36]. These boards can work Stand-Alone to simulate, configure, control and operate small-scale tests of the FE. Test benches must use emulated data to check the proper operation of the codes [12].

3. RESULTS AND DISCUSSION

This section will describe the configurations implemented in the relevant modules for data transfer. For better understanding, refer to the block diagrams of both operation modes. The Standard Mode in Fig. 8 shows the blocks in the transmission stage on the top and the reception stage below. Some of these modules were described before. The Bit Interleaver reorganizes the information for the serialization process before transmission. This information is transferred through the GBT link which is represented by the dotted line.

Fig. 8 Standard Mode GBT encoding and decoding block diagram. Adapted from CERN/LHCb, 2015 [36] 

In the reception stage, the process is completely inverted, as the information has to be de-serialized. The Bit De-Interleaver reorganizes the information for RS decoder recognition. The frame then enters the Descrambler, which restores the signal as it was originally written into the transmission lines, according to the pattern constants, so that the clusters are arranged as they were first sent.

Fig. 9 shows the Wide Bus Mode block diagram with the corresponding modifications. In essence, the changes concern the number of bits assigned to clusters and control. Some changes were made to the blocks inside this process and, as a result, the RS Encoder and Decoder had to be cut out. To modify the GBT, each of its parts has to be studied; this preliminary stage requires understanding and inspecting each module and subroutine before they can be implemented in the final program. Some of the modules are the product of firmware or of VHDL and Verilog templates created by compilers such as MegaWizard. The principal aim of these configurations is to suppress the FEC control in transmission. The modifications are presented from the most general program to the most specific, even though the changes were made the other way around.

Fig. 9 Wide Bus Mode GBT encoding and decoding block diagram. Created by the author, 2015. 

3.1. Code modifications

One condition that must be considered during the whole process is the change in the number of bits at inputs and outputs. The input word has to be changed from i_word(83:0) to i_word(115:0), with the SC left unmodified. These corrections were made to all the modules and sub-routines, starting with gbt_0, which is in charge of initial values, constants and arguments.
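A minimal sketch of this kind of width change is shown below, with the word width held in one generic so the same entity can serve both modes. The entity and port names are illustrative, not the actual gbt_0 interface.

    library ieee;
    use ieee.std_logic_1164.all;

    entity gbt_tx_stage is
      generic (
        -- 84 bits in Standard Mode (i_word(83 downto 0)),
        -- 116 bits in Wide Bus Mode (i_word(115 downto 0))
        WORD_WIDTH : natural := 116
      );
      port (
        clk_40mhz : in  std_logic;
        reset     : in  std_logic;
        i_word    : in  std_logic_vector(WORD_WIDTH - 1 downto 0);
        o_frame   : out std_logic_vector(119 downto 0)   -- full 120-bit frame
      );
    end entity gbt_tx_stage;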

3.1.1. General changes

Due to the size of the program, some sub-routines were used to divide the code into smaller blocks or packages. Inside these packages, some component declarations were made. Three of the fundamental packages are:

- Work.gbt_tx

- Work.multi_gigabit_transceivers

- Work.gbt_rx

These sub-codes are included as if they were libraries at the beginning of the top-level code. There are three of them: transmission, GBT and reception, one for each stage. Some other useful modules in the final code are declared as:

- Gbt_bank

- Vendor_specific_gbt_bank_package

- Gbt_banks_user_setup

- Gbt_bank_ID

These four refer to the GBT, the manufacturer's characteristics, the adjustments to the program conditions and the package identification, respectively. In Gbt_bank, the data word width is defined as 84 bits for Standard Mode, with extra bits added for the Wide Bus Mode. The central code, Gbt_bank.vhd, is where connections and parameters are declared, some of which are vendor-specific, in this case for Altera. Control ports are inside a GBT bank module, classified into records in <vendor>_<specific>_gbt_bank_package.vhd. There is a user-modifiable file, <vendor>_<specific>_gbt_bank_user_setup.vhd, in which adjustments are made to run with a specific board.

User setup is in charge of simple input changes. It selects the operation mode and some other features. An MGT (Multi Gigabit Transceiver) is implemented inside the GBT, defined as a hard block. The MGT is in charge of serialization in transmission and deserialization in reception. The GBT receptor aligns, decodes and descrambles the input signal.

In the construction of every code, sub-routine and module, several parameters have to be identified straight away. Two features of the modules are described and organized in Gbt_bank_user_setup_R, which are configured through two constants as follows:

- 0 for GBT_FRAME or 1 for WIDE_BUS or 2 for GBT_8B_10B

- 0 for STANDARD or 1 for LATENCY_OPTIMIZED

These define the operation mode to be implemented, as well as whether latency optimization is used. With latency optimization, timing resources become a critical factor because of the high number of clock domains in the multi-link implementation.
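A sketch of how these two selectors could be collected in a user-setup package is given below; the selector values follow the list above, while the package name and the chosen constants (Wide Bus encoding, standard latency) are illustrative assumptions.

    package gbt_bank_user_setup_example is
      -- Encoding selector described above
      constant GBT_FRAME  : integer := 0;
      constant WIDE_BUS   : integer := 1;
      constant GBT_8B_10B : integer := 2;
      -- Implementation selector described above
      constant STANDARD          : integer := 0;
      constant LATENCY_OPTIMIZED : integer := 1;

      -- Hypothetical user choices: Wide Bus encoding, standard latency
      constant TX_ENCODING       : integer := WIDE_BUS;
      constant TX_IMPLEMENTATION : integer := STANDARD;
    end package gbt_bank_user_setup_example;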

3.1.2. Specific changes and port description

There are two significant modules for all the codes: gbt_tx and gbt_rx, for transmission and reception, respectively. These are part of the GBT code, which is compiled over the top level. The modules will be described below.

- Transmission

VHDL programming is aided by a block representation known as .BDF (Block Design File), which makes input and output code modifications easier. There is an input to the transmission block that warns the program about events. The clocks have to be declared at 40 MHz and 120 MHz for the serializing and de-serializing stages. The bit quantity per port was changed. The input and output signals had to be replaced by intermediate variables so that they would not disturb the functioning of the initialized vectors, such as i and o.

The GBT has a GearBox with multiplexer and demultiplexer (MUX/DEMUX) functions to prepare the signal for the MGT. The transmission block contains a few sub-modules, the first of them being the Scrambling. It has a 40 MHz clock, a data-valid input, a reset, and a 112-bit input for clusters plus the SC. In Standard Mode, the Scrambling covers 84 bits and is therefore divided into four scramblers of 21 bits each. For Wide Bus Mode, each Scrambler must handle 29 bits, so that a 116-bit Scrambling is constructed. Some scrambling patterns are selected at the beginning within the Scrambling_Constants to perform bit interleaving inside this Scrambler code, i.e. a sequence of bits allowing the transmission of a DC-balanced frame with a simple coding or decoding scheme in the signal. The size of each Scrambling_Constants block has been modified from the Standard Mode to meet the requirements of the Wide Bus Mode.

Inside the Scrambling blocks, several flip-flops with different writing and reading times are used for data storage. The Scrambling_Constants block only has one reset, four Reset_Pattern outputs, and a Header connected to a D flip-flop controlled by a 40 MHz clock. The Scrambling_Constants block defines the ports mentioned for the Scrambler_Reset_pattern implementation. The Header must be data valid to enable the signals.

There are four Scramblers configured in the same way, arranged to divide the information into four groups of 29 bits each. Some logic functions are added to complete the scrambling processes inside Scrambling_Constants. At the end of the Scrambler, the four Scrambled_word outputs are concatenated to form a 116-bit output. The outputs Scrambled_word and S_Header are connected to two flip-flops, one for data and the other for header synchronization.
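The sketch below only illustrates this 4 x 29 grouping and the idea of combining each group with a fixed pattern through Boolean operations: the real GBT scrambler uses a multiplicative polynomial and the actual Scrambling_Constants values, whereas here the entity name, ports and patterns are all placeholders.

    library ieee;
    use ieee.std_logic_1164.all;

    -- Hypothetical sketch: a 116-bit Wide Bus word split into four 29-bit
    -- groups, each XOR-ed with a fixed pattern on every clock edge.
    entity widebus_scrambler_sketch is
      port (
        clk_40mhz  : in  std_logic;
        reset      : in  std_logic;
        data_valid : in  std_logic;
        d_in       : in  std_logic_vector(115 downto 0);
        d_out      : out std_logic_vector(115 downto 0)
      );
    end entity;

    architecture rtl of widebus_scrambler_sketch is
      type pattern_array is array (0 to 3) of std_logic_vector(28 downto 0);
      -- Placeholder scrambling constants (illustrative values only)
      constant RESET_PATTERNS : pattern_array :=
        (0 => "10101010101010101010101010101",
         1 => "01100110011001100110011001100",
         2 => "00011110000111100001111000011",
         3 => "11110000111100001111000011110");
    begin
      process (clk_40mhz)
      begin
        if rising_edge(clk_40mhz) then
          if reset = '1' then
            d_out <= (others => '0');
          elsif data_valid = '1' then
            for i in 0 to 3 loop
              -- group i occupies bits 29*i+28 downto 29*i
              d_out(29*(i+1) - 1 downto 29*i) <=
                d_in(29*(i+1) - 1 downto 29*i) xor RESET_PATTERNS(i);
            end loop;
          end if;
        end if;
      end process;
    end architecture rtl;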

The second block in gbt_tx for the Standard Mode is the Encoder. It is not included in the Wide Bus format, given that mode's lighter control. This RS Encoder has to be suppressed from the final code because no FEC is required; understanding how it works allows its suppression, which is not a matter of simple omission. Inside the Encoder there are two RS Encoder modules with the same structure of 44 bits each. They are split in two to provide a double-error correction of 4 bits each, which is what makes a 16-consecutive-bit correction possible. Encoded and non-encoded outputs are declared for comparison. Their internal processes are carried out by instantiating a polynomial divider to apply the RS functions.
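One way to express this mode-dependent step is with conditional generate statements, as in the sketch below: in Standard Mode a 32-bit FEC would complete the frame, while in Wide Bus Mode the scrambled 116-bit word simply follows the 4-bit header. The entity, ports and the rs_fec placeholder are illustrative assumptions, not the real Reed-Solomon implementation.

    library ieee;
    use ieee.std_logic_1164.all;

    entity tx_mode_select_sketch is
      generic (
        GBT_FRAME   : integer := 0;
        WIDE_BUS    : integer := 1;
        TX_ENCODING : integer := 1   -- Wide Bus selected here
      );
      port (
        tx_header    : in  std_logic_vector(3 downto 0);
        scrambled_wb : in  std_logic_vector(115 downto 0);  -- Wide Bus: SC + user data
        scrambled_sd : in  std_logic_vector(83 downto 0);   -- Standard: SC + user data
        frame_out    : out std_logic_vector(119 downto 0)
      );
    end entity;

    architecture rtl of tx_mode_select_sketch is
      -- Placeholder for the real RS encoder output (32-bit FEC)
      function rs_fec(w : std_logic_vector) return std_logic_vector is
      begin
        return (31 downto 0 => '0');  -- stand-in value only
      end function;
    begin
      standard_gen : if TX_ENCODING = GBT_FRAME generate
        frame_out <= tx_header & scrambled_sd & rs_fec(scrambled_sd);
      end generate standard_gen;

      widebus_gen : if TX_ENCODING = WIDE_BUS generate
        -- No FEC: all 112 payload bits plus the 4 SC bits carry scrambled data
        frame_out <= tx_header & scrambled_wb;
      end generate widebus_gen;
    end architecture rtl;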

The next block is the Interleaver, which has a "for" routine that interleaves the bits of the frame in order to encode the information and deliver it properly for serialization. The Interleaver allows the RS to apply its double-interleaved control and, even though the RS is not implemented in the Wide Bus Mode, the Interleaver should be kept to prepare the bits for serial transfer.

The last module of the transmission stage is a MUX, which reorganizes 120-bit packages into 40-bit words at a higher transmission frequency. This is needed for the MGT hard block that serializes the data. The MUX consists of two internal modules. RW_TX_DP_RAM eases the organization of the writing and reading signals for the MUX; some intermediate signals are defined for the writing and reading processes, without interfering with the definitive ones. This block organizes the bits in the MUX function using a 40 MHz clock, according to the original sampling time of 25 ns. The 120 MHz clock is then used to divide the frame into four sections; three of them hold the 120 bits, while the fourth exists only because the programming logic requires power-of-two sizes, so the last 40 bits are not read. Some counters inside the code supervise the MUX addressing function.

The second module of the MUX is TX_DP_RAM, with a 160-bit input of which only the 120 MSB (Most Significant Bits) are read. It is generated through the MegaWizard and defines the 120 and 40 MHz clocks. It is in charge of transmitting 40 parallel bits at a higher rate in order to move the same amount of information within the required 25 ns per frame. In the MUX and DEMUX, the MegaWizard is invoked to compile specific functions inside the FPGA blocks. After the MUX, data are sent to the MGT hard block, which converts the 40-bit signal to serial.
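The gearbox principle can be sketched as below: a 120-bit frame captured at 40 MHz leaves as three 40-bit words at 120 MHz, so the whole frame still fits in one 25 ns period. The real design uses the MegaWizard-generated dual-port RAM with a 160-bit input whose fourth 40-bit section is never read, and handles the 40/120 MHz clock-domain crossing properly; this fragment, with hypothetical names, only shows the word sequencing.

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity tx_gearbox_sketch is
      port (
        clk_120mhz : in  std_logic;
        frame_in   : in  std_logic_vector(119 downto 0);  -- updated at 40 MHz
        word_out   : out std_logic_vector(39 downto 0)    -- toward the MGT
      );
    end entity;

    architecture rtl of tx_gearbox_sketch is
      signal sel : unsigned(1 downto 0) := (others => '0');  -- cycles 0, 1, 2
    begin
      process (clk_120mhz)
      begin
        if rising_edge(clk_120mhz) then
          case to_integer(sel) is
            when 0      => word_out <= frame_in(119 downto 80);
            when 1      => word_out <= frame_in(79 downto 40);
            when others => word_out <= frame_in(39 downto 0);
          end case;
          if sel = 2 then
            sel <= (others => '0');   -- a fourth, unused slot would make 160 bits
          else
            sel <= sel + 1;
          end if;
        end if;
      end process;
    end architecture rtl;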

- Reception

The second big module of the top level is gbt_rx. In reception, as in transmission, similar processes occur with the input and output. Some intermediate variables are declared so as not to disturb the initialization values during the logic operations of the code.

Some event-identification flags are created inside the code for valid or idle data, so three signals are declared: Data_valid, Start_of_packet and End_of_packet. An output with the non-corrected data is used to compare it with the corrected one during reception, in order to decide whether the received data are correct and, in any case, where the original data frame is needed. There is an input bit to check frame alignment. Before reception, the frame passes through the MGT hard block, which de-serializes the data and forms a 40-bit signal at 120 MHz so that the frame can be reorganized as first sent.

The first block inside the reception is the Manual Frame Alignment (MFA). All of the information goes into the MFA in 40-bit packages through RX_Parallel after passing through the MGT receptor. Bit_Slip_Cmd moves the information along the serial line and is used to indicate where Start_Bit should feed back the inputs to synchronize the frame; the synchronization is confirmed by GX_Alignment_Done. The MFA consists of several sub-modules, starting with Modulo_40_Counter, a counter from 0 to 39 that organizes the 40-bit packages and reviews the patterns in Pattern_Search before they go into the DEMUX. Later in the process, a Write_RX_DP_RAM moves the information from left to right for package building; inside this block, the writing addresses for reception are generated. Something like the organization that takes place in the MUX happens here, but in reverse: the bits are organized into groups of 40 by shifting the original signal. After this process, the signal goes into Pattern_Search and the DEMUX.
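A minimal sketch of such a modulo-40 counter is shown below. The interpretation of the bit-slip command as "skip one increment to shift the word boundary" is an assumption for illustration, and the port names are not taken from the actual firmware.

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity modulo_40_counter_sketch is
      port (
        clk          : in  std_logic;
        reset        : in  std_logic;
        bit_slip_cmd : in  std_logic;            -- assumed: hold the count for one cycle
        count        : out unsigned(5 downto 0)  -- 0 .. 39
      );
    end entity;

    architecture rtl of modulo_40_counter_sketch is
      signal cnt : unsigned(5 downto 0) := (others => '0');
    begin
      count <= cnt;
      process (clk)
      begin
        if rising_edge(clk) then
          if reset = '1' then
            cnt <= (others => '0');
          elsif bit_slip_cmd = '0' then     -- when asserted, the increment is skipped
            if cnt = 39 then
              cnt <= (others => '0');
            else
              cnt <= cnt + 1;
            end if;
          end if;
        end if;
      end process;
    end architecture rtl;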

The pattern search has no sub-modules, so it is somewhat longer and more complex. This module checks the Header in the 4 MSB of the frame: first, it verifies whether the data are valid or idle. Some constants must be declared at the beginning so they can be compared with the incoming frame, purely for control purposes. This is useful for recognizing the number of incoming clusters.

To ensure robust synchronization, there is a frame-lock and a frame-tracking routine. During frame lock, the receiver must lock in the least amount of time possible in order to minimize dead time in case of losses during normal operation. During frame tracking, the receiver must avoid resetting the lock cycle unless there are many consecutive errors, so that the buffers do not get overloaded.
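This lock/track behaviour can be pictured as a small two-state machine, sketched below. The error threshold, state names and ports are illustrative assumptions and do not come from the actual firmware.

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity frame_sync_sketch is
      port (
        clk          : in  std_logic;
        reset        : in  std_logic;
        header_valid : in  std_logic;   -- '1' when the 4-bit header matches
        locked       : out std_logic
      );
    end entity;

    architecture rtl of frame_sync_sketch is
      type state_t is (LOCKING, TRACKING);
      signal state      : state_t := LOCKING;
      signal bad_frames : unsigned(2 downto 0) := (others => '0');
      constant MAX_BAD  : unsigned(2 downto 0) := to_unsigned(4, 3);  -- illustrative threshold
    begin
      locked <= '1' when state = TRACKING else '0';
      process (clk)
      begin
        if rising_edge(clk) then
          if reset = '1' then
            state      <= LOCKING;
            bad_frames <= (others => '0');
          else
            case state is
              when LOCKING =>
                if header_valid = '1' then       -- lock on the first good header
                  state      <= TRACKING;
                  bad_frames <= (others => '0');
                end if;
              when TRACKING =>
                if header_valid = '1' then
                  bad_frames <= (others => '0'); -- isolated errors are forgiven
                elsif bad_frames = MAX_BAD then
                  state <= LOCKING;              -- too many consecutive errors
                else
                  bad_frames <= bad_frames + 1;
                end if;
            end case;
          end if;
        end if;
      end process;
    end architecture rtl;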

The next module is the DEMUX, which performs the opposite task of the MUX and has modules inside it. The first one is Read_RX_DP_RAM, which detects the value of Write_address, confirms that the first valid data were written, and then starts reading the signal. Another intermediate block, named RX_DP_RAM, is managed by MegaWizard; there, the information is reorganized to be sent through a 120-bit parallel port.

The next block is reverse_interleaving, which performs the inverse function of the interleaver, accommodating the signal without following clock cycles. Because the decoding applies only to the Standard Mode, it is not implemented in this work; therefore, the same considerations made for the encoding stage also apply to the decoding. The next module is the Descrambling, which has four sub-modules divided into 29-bit groups, just like the Scrambling, but carrying out the inverse operation: it reorganizes the information and receives it properly. As in the Scrambler, pattern correspondence between the input signals is achieved through several Boolean operations for DC balancing; a Verilog code adaptation had to be made in this case. The Boolean operations in the Descrambler are very similar to those of the Scrambler. When the four sub-modules produce their outputs, they must be concatenated into one parallel output for a full Descrambling that includes all the data.

The last block in the reception stage is responsible for confirming data alignment with a Start/End Packet, defined at the beginning and at the end of the packets, to be read at 40 MHz. All these changes were made to configure the operation modes in order to reach a higher data transmission efficiency. The inner codes fulfilled the requirements for simulation, and the formats were proven to work under CERN protocols.

4. CONCLUSIONS

The scintillating fibers are the central subject of this article. The modifications presented are applied to data acquisition boards in order to improve event analyses, and the increase in data makes it possible to apply the corresponding statistics. The resolution will improve thanks to the changes already made and those that will take place during the LS2. In the study of elementary particles, the construction of these fibers and their data recovery methods will aid the recognition processes.

The modification of the GBT code between FE and BE represents an improvement in cluster transmission efficiency. The operation-mode configuration from Standard Mode to Wide Bus Mode took place in the VHDL routine implementation, establishing a data collection scheme for the regions farther from the beam-pipe, also known as low-occupancy regions, which represent almost 80 % of the experiment's acceptance. The application of the Wide Bus Mode is not justified in high-occupancy regions due to the high radiation levels there: given the large number of SEUs in data acquisition, it is unavoidable to use control as part of the frame. Therefore, the Standard Mode is needed in high-occupancy regions because of its FEC.

The implementation of this operation mode is highly important for data acquisition, given the amount of data needed to draw conclusions in high energy physics through the statistical processes involved. The higher data rate of the new mode means that storage capacity would suffer without careful handling of valid and idle data inside the codes: the data-valid flag allows only useful data to be stored by ignoring the idle ones. The buffers may discard unnecessary information thanks to these programming sub-routines, which allows the data transmission efficiency to increase from about 66.6 % to 93.3 % of the frame.

VHDL is used in CERN acquisition processes because its hardware-oriented programming makes more efficient reception and transmission possible at high sampling frequencies. This tool is fundamental to the development of the LHCb upgrade in order to achieve higher robustness.

ACKNOWLEDGEMENTS

The work of Tomás Sierra, Diego Milanés and Carlos Vera took place within the LPNHE-LHCb Collaboration; LPNHE is part of the Université Pierre et Marie Curie (UPMC). This work was guided by Olivier Le Dortz, the engineer in charge of the technical details of the SciFi, and supported by professors Eli Ben-Haim and Francesco Polci, who placed this experiment in the context of high energy physics. The work of Tomás Sierra was supported by the LPNHE-LHCb Collaboration, #14-305-INT, Universidad de Ibagué and the Comité Central de Investigaciones - Universidad del Tolima. Diego Milanés acknowledges the support of Universidad Nacional de Colombia, and Carlos Vera was supported by the Comité Central de Investigaciones - Universidad del Tolima.

REFERENCES

[1] CERN/LHCC, "LHCb Tracker Upgrade Technical Design Report," 2014 [Online]. Available: https://cds.cern.ch/record/1647400/files/LHCB-TDR-015.pdf

[2] L. Beaucourt et al., "Évolution de la contribution française à l'upgrade de LHCb," 2014 [Online]. Available: https://hal.inria.fr/in2p3-00943386/document

[3] A. Gallas, "The LHCb Upgrade," Physics Procedia, vol. 37, pp. 151-163, 2012. https://doi.org/10.1016/j.phpro.2012.02.364

[4] S. Brown and Z. Vranesic, Fundamentals of Digital Logic with VHDL Design. Toronto, Canada: McGraw-Hill, 1st ed., 2003, pp. 821-837.

[5] T. L. Floyd, Fundamentos de Sistemas Digitales. Madrid, Spain: Pearson Educación S.A., 9th ed., 2006, pp. 328-386.

[6] K. Wyllie et al., "Electronics Architecture of the LHCb Upgrade," 2013 [Online]. Available: https://cds.cern.ch/record/1340939/files/LHCb-PUB-2011-011.pdf

[7] L. Evans, The Large Hadron Collider: a Marvel of Technology. Boca Raton, United States: CERN Publications & EPFL Press, 2009, pp. 216-243.

[8] G. Kane, Modern Elementary Particle Physics: The Fundamental Particles and Forces. Michigan, United States: Addison-Wesley Publishing Company, 1993, pp. 15-79.

[9] D. J. Griffiths, Introduction to Elementary Particles. New York, United States: John Wiley and Sons, Inc., 1987, pp. 551-43.

[10] G. Kane, "The Dawn of Physics Beyond the Standard Model," Scientific American, pp. 68-75, 2003 [Online]. Available: http://particle-theory.physics.lsa.umich.edu/kane/Kane5p.pdf

[11] S. Baron, M. Barrios Marin, and J. Mendez, "Draft: GBT-FPGA User Guide," 2016 [Online]. Available: https://wiki.to.infn.it/lib/exe/fetch.php?media=elettronica:projects:cms_dt:gbtsystem:-baron-gbtfpgaug.pdf

[12] G. Vouters et al., "LHCb Upgrade MiniDAQ HandBook," LHCb Tech. Rep., 2014 [Online]. Available: https://lbredmine.cern.ch/documents/8

[13] F. J. Gilman, "The Determination of the CKM Matrix," Nuclear Instruments and Methods in Physics Research A, vol. 462, pp. 301-303, 2001 [Online]. Available: https://arxiv.org/pdf/hep-ph/0102345.pdf

[14] P. Nolan, Fundamentals of Modern Physics. New York, United States: Physics Curriculum and Instruction, Inc., 1st ed., 2014, pp. 279-313.

[15] C. Cohen-Tannoudji, B. Diu, and F. Laloë, Quantum Mechanics. New York, United States: John Wiley and Sons, Inc., 1992, vol. 1, pp. 96-121.

[16] I. Bigi and A. Sanda, CP Violation. New York, United States: Cambridge University Press, 2nd ed., 2009, pp. 10-55.

[17] G. C. Branco, L. Lavoura, and P. Silva, CP Violation. Oxford, England: Clarendon Press, 1999, pp. 3-49.

[18] S. Gori, "Three Lectures of Flavor and CP Violation Within and Beyond the Standard Model," Department of Physics, University of Cincinnati, 2016 [Online]. Available: https://arxiv.org/abs/1610.02629

[19] T. Hambye, "CP Violation and the Matter-Antimatter Asymmetry of the Universe," Comptes Rendus Physique, vol. 13, pp. 193-203, 2012. https://doi.org/10.1016/j.crhy.2011.09.007

[20] F. Muheim, "LHCb Upgrade Plans," Nuclear Physics B (Proceedings Supplements), vol. 170, pp. 317-322, 2007. https://doi.org/10.1016/j.nuclphysbps.2007.05.015

[21] M. Van Beuzekom et al., "VeloPix ASIC Development for LHCb VELO Upgrade," Nuclear Instruments and Methods in Physics Research A, vol. 731, pp. 92-96, 2013. https://doi.org/10.1016/j.nima.2013.04.016

[22] P. Collins, "The LHCb VELO (VErtex LOcator) and the LHCb VELO Upgrade," Nuclear Instruments and Methods in Physics Research A, vol. 699, pp. 160-165, 2013. https://doi.org/10.1016/j.nima.2012.03.047

[23] C. Joram, G. Haefeli, and B. Leverington, "Scintillating Fibre Tracking at High Luminosity Colliders," IOP Science Publishing & Sissa Medialab, 2015 [Online]. Available: http://iopscience.iop.org/article/10.1088/1748-0221/10/08/C08005#references

[24] C. Alfieri and M. Marangoni, "R&D on the LHCb SciFi Tracker: Characterisation of Scintillating Fibres and SiPM Photo-Detectors," M. S. thesis, Industrial Engineering and Informatics Faculty, Physics Engineering, Politecnico di Milano, 2014.

[25] N. Durussel, "Signal Modeling and Verification with a Cosmic Ray Telescope for Scintillating Fibre Tracker in the Context of the LHCb Upgrade," M. S. thesis, École Polytechnique Fédérale, Lausanne, 2013.

[26] Y. Guz, "LHCb Calorimeter Upgrade," Proceedings of CHEF, Calorimetry for High Energy Frontiers, pp. 355-362, 2013 [Online]. Available: https://cds.cern.ch/record/1602198/files/CHEF2013_Yury_Guz.pdf

[27] S. Easo, "Upgrade of LHCb-RICH Detectors," Nuclear Instruments and Methods in Physics Research A, vol. 766, pp. 110-113, 2014. https://doi.org/10.1016/j.nima.2014.04.084

[28] LHCb Public Website, "Detector: Tracking System," 2008 [Online]. Available: http://lhcb-public.web.cern.ch/lhcb-public/en/Detector/Trackers2-en.html

[29] T. Abajyan et al., "Observation of a New Particle in the Search for the Standard Model Higgs Boson with the ATLAS Detector at the LHC," Physics Letters B, vol. 716, pp. 1-29, 2012. https://doi.org/10.1016/j.physletb.2012.08.020

[30] E. Cogneras et al., "The Digitisation of the Scintillating Fibre Detector," LHCb-PUB-2014-003, 2014 [Online]. Available: https://cds.cern.ch/record/1641930/files/LHCb-PUB-2014-003.pdf

[31] K. J. Ma et al., "Time and Amplitude of Afterpulse Measured with a Large Size Photomultiplier Tube," Nuclear Instruments and Methods in Physics Research A, vol. 629, pp. 93-100, 2011. https://doi.org/10.1016/j.nima.2010.11.095

[32] ALTERA, "Stratix V GX FPGA Development Board, Reference Manual," 2014 [Online]. Available: https://www.altera.com/content/dam/altera-www/global/en_US/pdfs/literature/manual/rm_svgx_fpga_dev_board.pdf

[33] S. Baron et al., "Implementing the GBT Data Transmission Protocol in FPGAs," 2009 [Online]. Available: https://cds.cern.ch/record/1236361/files/p631.pdf

[34] F. Alessio, P. Yves Duval, and G. Vouters, "Draft: LHCb Upgrade GIT Repository for AMC40 Firmware," LHCb Tech. Rep., 2016 [Online]. Available: https://lbredmine.cern.ch/documents/6

[35] P. Govoni, "The Computing Grids," Nuclear Physics B (Proceedings Supplements), vol. 197, pp. 346-348, 2009. https://doi.org/10.1016/j.nuclphysbps.2009.10.100

[36] P. Moreira, J. Christiansen, and K. Wyllie, "Draft: GBT Manual," Version 0.6, 2015 [Online]. Available: https://es.scribd.com/document/314941295/gbtx-Manual

[37] F. Alessio and R. Jacobsson, "System-level Specifications of the Timing and Fast Control System for the LHCb Upgrade," 2011 [Online]. Available: http://cds.cern.ch/record/1424363/files/LHCb-PUB-2012-001.pdf?version=5

[38] G. Vouters et al., "Front-End and Back-End Data Format of the LHCb Upgrade," Revision 4.1, 2015 [Online]. Available: https://lbredmine.cern.ch/documents/7

Cómo citar: T. Sierra Polanco, D. Milanés, C. Vera, "Configuración de los modos de operación para un subdetector de fibras centelleantes en el experimento LHCb", Ciencia e Ingeniería Neogranadina, vol. 28, no. 2, pp. 43-62. DOI: https://doi.org/10.18359/rcin.2854

Received: April 26, 2017; Revised: February 08, 2018; Accepted: March 23, 2018

This is an open-access article distributed under the terms of the Creative Commons Attribution License.