
    A FPGA-based architecture for real-time cluster finding in the LHCb silicon pixel detector

    The data acquisition system of the LHCb experiment has been substantially upgraded for LHC Run 3, with the unprecedented capability of reading out and fully reconstructing, in real time, all proton-proton collisions, which occur at an average rate of 30 MHz, for a total data flow of approximately 32 Tb/s. The high demand for computing power required by this task has motivated a transition to a hybrid heterogeneous computing architecture, in which a farm of graphics processing units (GPUs) is used alongside general-purpose processors (CPUs) to speed up the execution of reconstruction algorithms. In a continuing effort to improve the real-time processing capabilities of this new DAQ system, also with a view to further luminosity increases in the future, low-level, highly parallelizable tasks are increasingly being addressed at the earliest stages of the data acquisition chain using special-purpose computing accelerators. A promising solution is offered by custom-programmable FPGA devices, which are well suited to performing high-volume computations with high throughput and a high degree of parallelism at limited power consumption and latency. In this context, a two-dimensional, FPGA-friendly cluster-finding algorithm has been developed to reconstruct hit positions in the new vertex pixel detector (VELO) of the LHCb Upgrade experiment. The associated firmware architecture, implemented in VHDL, has been integrated within the VELO readout, without the need for extra cards, as a further enhancement of the DAQ system. This pre-processing allows the first level of the software trigger to accept an 11% higher rate of events, as the ready-made hit coordinates accelerate track reconstruction, while also reducing electrical power consumption, since the FPGA implementation requires O(50x) less power than the GPU one. The tracking performance of this novel system is indistinguishable from that of a full-fledged software implementation, allowing the raw pixel data to be dropped immediately at the readout level and yielding the additional benefit of a 14% reduction in data flow. The clustering architecture was commissioned during the start of LHCb Run 3 and currently runs in real time during physics data taking, reconstructing VELO hit coordinates on the fly at the LHC collision rate.
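
    The clustering firmware itself is written in VHDL, but the core idea is easy to state in software. Below is a minimal Python sketch, not the LHCb implementation, of two-dimensional connected-component cluster finding on a sparse pixel matrix, taking the cluster centroid as the reconstructed hit coordinate; the 8-connectivity choice and all names are illustrative assumptions.

```python
from collections import deque

def find_clusters(active_pixels):
    """Group active pixels into 8-connected clusters and return the
    centroid of each cluster as the reconstructed hit coordinate."""
    remaining = set(active_pixels)
    hits = []
    while remaining:
        seed = remaining.pop()
        queue, members = deque([seed]), [seed]
        while queue:
            row, col = queue.popleft()
            # Visit the 8 neighbouring pixels of the current one.
            for d_row in (-1, 0, 1):
                for d_col in (-1, 0, 1):
                    neighbour = (row + d_row, col + d_col)
                    if neighbour in remaining:
                        remaining.remove(neighbour)
                        queue.append(neighbour)
                        members.append(neighbour)
        # The cluster centroid is the hit position sent downstream.
        hits.append((sum(m[0] for m in members) / len(members),
                     sum(m[1] for m in members) / len(members)))
    return hits

# Two separated clusters: a 3-pixel L-shape and an isolated pixel.
print(find_clusters({(0, 0), (0, 1), (1, 1), (5, 5)}))
```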

    Towards Real-Time Anomaly Detection within X-ray Security Imagery: Self-Supervised Adversarial Training Approach

    Automatic threat detection is an increasingly important area in X-ray security imaging, since it is critical to aid screening operators in identifying concealed threats. Due to the cluttered and occluded nature of X-ray baggage imagery and limited dataset availability, few studies in the literature have systematically evaluated automated X-ray security screening. This thesis provides an exhaustive evaluation of the use of deep Convolutional Neural Networks (CNN) for the image classification and detection problems posed within the field. Transfer learning is used to overcome the limited availability of training examples for the objects of interest. A thorough evaluation reveals the superiority of CNN features over conventional hand-crafted features. Further experimentation also demonstrates the capability of supervised deep object detection techniques as object localization strategies within cluttered X-ray security imagery. By addressing the limitations of current X-ray datasets, such as annotation and class imbalance, the thesis subsequently transitions towards deep unsupervised techniques for the detection of anomalies, trained on normal (benign) X-ray samples only. The proposed anomaly detection models employ a conditional encoder-decoder generative adversarial network that jointly learns generation in the high-dimensional image space and inference in the latent space; minimizing the distance between these images and the latent vectors during training aids in learning the data distribution of the normal samples. As a result, a larger distance from this learned data distribution at inference time is indicative of an outlier, i.e. an anomaly. Experimentation over several benchmark datasets from varying domains shows the model's efficacy and superiority over previous state-of-the-art approaches. Based on the current approaches and open problems in deep learning, the thesis finally provides a discussion and future directions for X-ray security imagery.
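
    As a rough illustration of the scoring idea described above, and not the thesis code, the following PyTorch sketch builds a GANomaly-style encoder-decoder-encoder and scores a sample by the L1 distance between the latent code of the input and that of its reconstruction; all layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class EncoderDecoderEncoder(nn.Module):
    """GANomaly-style generator: encode -> decode -> re-encode.
    Layer sizes are illustrative, not those used in the thesis."""
    def __init__(self, img_dim=64 * 64, latent_dim=100):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU(),
                                  nn.Linear(256, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, img_dim), nn.Tanh())
        self.enc2 = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU(),
                                  nn.Linear(256, latent_dim))

    def forward(self, x):
        z = self.enc1(x)          # latent code of the input
        x_hat = self.dec(z)       # reconstruction
        z_hat = self.enc2(x_hat)  # latent code of the reconstruction
        return x_hat, z, z_hat

def anomaly_score(model, x):
    """Trained on normal data only, the re-encoder cannot reproduce the
    latent code of anomalous inputs, so ||z - z_hat|| is large for outliers."""
    with torch.no_grad():
        _, z, z_hat = model(x)
        return torch.mean(torch.abs(z - z_hat), dim=1)  # L1 per sample

model = EncoderDecoderEncoder()
scores = anomaly_score(model, torch.rand(4, 64 * 64))  # higher = more anomalous
```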

    Systems genomics analysis of complex cognitive traits

    The study of the genetic underpinnings of human cognitive traits is deemed an important tool to increase our understanding of molecular processes related to physiological and pathological cognitive functioning. The polygenic architecture of such complex traits implies that multiple naturally occurring genetic variations, each of small effect size, are likely to jointly influence the biological processes underlying cognitive ability. Genetic association results are as yet devoid of biological context, limiting both the identification and the functional interpretation of susceptibility variants. This biological gap can be reduced by the integrative analysis of intermediate molecular traits, as mediators of genomic action. In this thesis, I present results from two such systems genomics analyses, as attempts to identify molecular patterns underlying cognitive trait variability. In the first study, we adopted a system-level approach to investigate the relationship between global age-related patterns of epigenetic variation and cortical thickness, a brain morphometric measure linked to cognitive functioning. The integration of genome-wide methylomic and genetic profiles allowed the identification of a peripheral molecular signature associated with both cortical thickness and episodic memory performance. In the second study, we explicitly modeled the interdependencies between local genetic markers and peripherally measured epigenetic variation. We thus generated robust estimators of epigenetic regulation and showed that these estimators enable the identification of epigenetic underpinnings of schizophrenia, a common genetically complex disorder. These results underscore the potential of systems genomics approaches, capitalizing on the integration of high-dimensional multi-layered molecular data, for the study of brain-related complex traits.
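
    One way to picture the "estimators of epigenetic regulation" in the second study, purely as an illustrative sketch rather than the thesis pipeline, is a penalized regression that predicts a CpG methylation level from nearby SNP genotypes; the data, the model choice (elastic net via scikit-learn) and all names below are assumptions.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

# Placeholder data: genotypes (0/1/2 allele counts) for SNPs near a CpG site,
# and the measured methylation level at that CpG across the same individuals.
rng = np.random.default_rng(0)
n_individuals, n_snps = 500, 50
genotypes = rng.integers(0, 3, size=(n_individuals, n_snps)).astype(float)
methylation = genotypes[:, 3] * 0.4 + rng.normal(0.0, 0.5, n_individuals)

# Penalized regression yields a sparse estimator of the genetically
# regulated component of methylation, usable in downstream association tests.
model = ElasticNetCV(cv=5).fit(genotypes, methylation)
predicted_methylation = model.predict(genotypes)
print("nonzero SNP weights:", np.sum(model.coef_ != 0))
```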

    Clustering techniques for base station coordination in a wireless cellular system

    In this project, we propose enhancements for future mobile communication systems through a thorough study of base-station coordination in cellular MIMO systems. Our work is divided in two main blocks. In the first part, we focus on linear MIMO signal processing techniques such as linear spatial precoding and linear spatial filtering. Starting from the state of the art in that area, we develop novel MMSE precoders that include per-cell transmit power constraints, together with a new formulation which, apart from minimizing the intra-cluster MSE, keeps inter-cluster interference at sufficiently low levels. In the second part, we study the impact that the particular mapping of cells to clusters, which defines which base stations can be coordinated with one another, has on the overall performance of the mobile radio access network. The applicability of clustering algorithms from machine learning is studied, resulting in a set of novel algorithms developed by adapting existing general-purpose clustering solutions to the problem of dynamically partitioning the set of cells according to the instantaneous signal propagation conditions. All our contributions are verified by simulation of a cellular mobile communication system based on 3GPP signal propagation models for LTE. According to the results obtained, the techniques proposed in this project provide a considerable increase in both the average and median user rates with respect to existing solutions. The inter-cluster interference awareness introduced in the formulation of the MMSE precoders dramatically improves performance in coordinated cellular MIMO systems compared with traditional Wiener precoders. In addition, our dynamic base-station clustering algorithms significantly enhance user rates while using smaller clusters than existing solutions based on static partitions of the base-station deployment.
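
    As a hedged illustration of the precoding part, the following Python sketch computes the classical closed-form MMSE (regularized zero-forcing) precoder and then scales each cell's antennas to satisfy a per-cell power budget; this naive scaling stands in for, and is not, the constrained optimization and inter-cluster interference terms developed in the thesis.

```python
import numpy as np

def mmse_precoder(H, noise_var, p_max_per_cell, antennas_per_cell):
    """Minimal MMSE precoder sketch for one cluster of coordinated cells.

    H: (n_users, n_tx) downlink channel matrix. The closed form
    W = H^H (H H^H + a I)^-1 minimizes the MSE; the per-cell scaling
    below is a simple projection onto the power constraint.
    """
    n_users, n_tx = H.shape
    a = noise_var * n_users  # regularization from the MSE criterion
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + a * np.eye(n_users))
    # Enforce the per-cell power constraint cell by cell.
    for cell in range(n_tx // antennas_per_cell):
        rows = slice(cell * antennas_per_cell, (cell + 1) * antennas_per_cell)
        power = np.sum(np.abs(W[rows]) ** 2)
        if power > p_max_per_cell:
            W[rows] *= np.sqrt(p_max_per_cell / power)
    return W

# 4 users served by a cluster of 4 cells with 2 antennas each.
H = (np.random.randn(4, 8) + 1j * np.random.randn(4, 8)) / np.sqrt(2)
W = mmse_precoder(H, noise_var=0.1, p_max_per_cell=1.0, antennas_per_cell=2)
```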

    A Triplet Track Trigger for Future High Rate Collider Experiments

    The Large Hadron Collider (LHC) will undergo a major upgrade, the High Luminosity LHC (HL-LHC), after which the proton beams will collide at around 7 times the design luminosity of the LHC (L = 10^34 cm^-2 s^-1). Studies are also being conducted for a 100 km circular hadron collider for the post-LHC era, the hadron-hadron Future Circular Collider (FCC-hh), which aims to collide proton beams at √s = 100 TeV and L ∼ 30 x 10^34 cm^-2 s^-1. High luminosities allow for a detailed study of elusive processes, for example Higgs pair production, enabling a direct measurement of the trilinear Higgs self-coupling (λ). In this regard, a generator-level study using the HH->bbbb physics channel, assuming trigger-less readout at the FCC-hh, is presented in the thesis. An average pile-up of ⟨μ⟩ ∼ 1000 (200) is expected at the FCC-hh (HL-LHC) within a vertex region of ∼10 cm. Such vast pile-up complicates object reconstruction and forces trigger systems to raise the thresholds on trigger objects to satisfy the bandwidth and storage limitations of an experiment. Hence, a trigger is needed that makes a smart selection of hard collision events from a sea of pile-up collisions at the earliest possible stage of the trigger system. Track triggers are attractive candidates for such demanding situations, as they have very good pointing resolution (unlike calorimeter triggers) in addition to good momentum resolution. A new concept, the Triplet Track Trigger (TTT), is proposed for the very first trigger level. It consists of three closely spaced, highly granular pixel detector layers (preferably monolithic sensors) at large radii (∼1 m), and uses a very simple and fast track reconstruction algorithm that can easily be implemented in hardware. TTT tracking performance studies are presented using full Geant4 simulation and reconstruction for the ATLAS Inner Tracker (at the HL-LHC) and the reference tracker of the FCC-hh. Very good momentum and z-vertex resolution allows TTT tracks to be grouped into several bins along the beam axis, in which jet clustering algorithms run in parallel to form TTT jets. The TTT provides excellent pile-up suppression for the HH->bbbb multi-jet signature under the ⟨μ⟩ = 1000 conditions of the FCC-hh: a rate reduction from the 40 MHz bunch collision frequency to 1 MHz (4 MHz) is achieved for a trigger efficiency of ∼60% (80%), and a corresponding rough estimate of S/√B ∼ 16 (19) is obtained, with negligible systematic uncertainties, for a total integrated luminosity of 30 ab^-1.
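
    The triplet reconstruction itself can be illustrated with a few lines of geometry. The sketch below, an assumption-laden illustration rather than the thesis algorithm, estimates a track's transverse momentum from the circumradius of its three hits and its z-vertex from a straight-line extrapolation in the (r, z) plane, which is what allows TTT tracks to be sorted into z-bins; the layer radii, field strength and all names are placeholders.

```python
import numpy as np

def triplet_track(hit1, hit2, hit3, b_field=4.0):
    """Form a track candidate from one hit (x, y, z, in metres) per TTT layer.

    pT follows from the radius of the circle through the three transverse
    positions (pT [GeV] ≈ 0.3 * B [T] * R [m]); the z-vertex comes from a
    straight-line extrapolation in the (r, z) plane. Illustrative only.
    """
    p1, p2, p3 = (np.asarray(h, dtype=float) for h in (hit1, hit2, hit3))
    # Circumradius of the triangle spanned by the transverse positions.
    a = np.linalg.norm(p2[:2] - p1[:2])
    b = np.linalg.norm(p3[:2] - p2[:2])
    c = np.linalg.norm(p3[:2] - p1[:2])
    s = 0.5 * (a + b + c)
    area = np.sqrt(max(s * (s - a) * (s - b) * (s - c), 1e-12))  # Heron
    p_t = 0.3 * b_field * (a * b * c) / (4.0 * area)
    # Extrapolate to r = 0 for the z-vertex used to bin tracks along z.
    r1, r3 = np.hypot(p1[0], p1[1]), np.hypot(p3[0], p3[1])
    z0 = p1[2] - (p3[2] - p1[2]) / (r3 - r1) * r1
    return p_t, z0

# Three hits on layers near r = 0.9, 1.0, 1.1 m: a few-GeV track from z ~ 0.
print(triplet_track((0.90, 0.010, 0.18), (1.00, 0.020, 0.20), (1.10, 0.026, 0.22)))
```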

    Search for the production of a single excited b quark in the Wt final state with a single lepton in pp collisions at √s = 13 TeV with the ATLAS detector

    A search for an excited b quark, b*, in events containing a top quark and a W boson is presented. These b* are predicted to have anomalous couplings to Standard Model bosons, aiding their production in high-energy proton-proton collisions. The search targets events in which one of the two W bosons decays into an electron or muon, while the other decays hadronically. With only one neutrino in the final state, the event can be kinematically fully reconstructed, which enables the use of the mass of the b* as the discriminating variable. The data under investigation were taken by the ATLAS detector at the LHC in the years 2015 and 2016 at a center-of-mass energy of √s = 13 TeV, corresponding to an integrated luminosity of L_int = 36.1 fb^-1. The analysis targets high-mass excited b* quarks, for which the products of the hadronically decaying W are contained within a large-radius jet. No significant excess over the expected background is observed, and upper limits on the cross-section times branching ratio, as well as coupling limits, are derived. Assuming unit coupling, b* decaying into Wt are excluded up to m_b*,obs = 2.5 TeV, with an expected exclusion limit of m_b*,exp = 2.4 TeV.
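
    The single-neutrino kinematic reconstruction mentioned above typically constrains the lepton-neutrino system to the W mass, which yields a quadratic equation for the unmeasured neutrino pz. The Python sketch below shows this standard textbook construction, not the analysis code; the root convention and all names are assumptions.

```python
import numpy as np

M_W = 80.4  # W boson mass in GeV

def neutrino_pz(lep_px, lep_py, lep_pz, lep_e, met_x, met_y):
    """Solve m(lepton, neutrino) = M_W for the neutrino pz.

    The missing transverse energy fixes the neutrino px, py; the W mass
    constraint then gives a quadratic in pz. Complex roots (off-shell W)
    are handled by taking the real part, a common convention.
    """
    mu = 0.5 * M_W**2 + lep_px * met_x + lep_py * met_y
    pt_lep2 = lep_px**2 + lep_py**2
    a = mu * lep_pz / pt_lep2
    disc = a**2 - (lep_e**2 * (met_x**2 + met_y**2) - mu**2) / pt_lep2
    if disc < 0:
        return a  # real part of the complex-conjugate pair
    root = np.sqrt(disc)
    # Choose the smaller-|pz| solution, the standard default.
    return min(a + root, a - root, key=abs)

# Example with placeholder lepton momentum (GeV) and missing ET components.
print(neutrino_pz(40.0, 10.0, 20.0, 45.83, 30.0, -5.0))
```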