    Deep Learning fast inference on FPGA for CMS Muon Level-1 Trigger studies

    With the advent of the High-Luminosity phase of the LHC (HL-LHC), the instantaneous luminosity of the Large Hadron Collider at CERN is expected to increase to ≈7.5⋅10³⁴ cm⁻²s⁻¹. New strategies for data acquisition and processing will therefore be necessary to cope with the much higher number of signals produced inside the detectors. In the context of the upgrade of the trigger system of the Compact Muon Solenoid (CMS), new reconstruction algorithms aiming at improved performance are being developed. For the online tracking of muons, one of the figures of merit being improved is the accuracy of the transverse momentum (pT) measurement. Machine Learning techniques have already been identified as a promising solution to this problem: by exploiting more of the information collected by the detector, they make it possible to build models that predict pT with improved precision. This work implements such models on an FPGA, which promises lower latency than traditional inference algorithms running on a CPU, an important requirement for a trigger system. The analysis uses data obtained from Monte Carlo simulations of muons crossing the barrel region of the CMS muon chambers, and compares the results with the pT assigned by the current CMS Level-1 Barrel Muon Track Finder (BMTF) trigger system.
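
    As a minimal sketch of the kind of model involved, the Python/Keras snippet below trains a small dense network to regress pT from per-muon detector features; the feature count, architecture, and training data are illustrative assumptions, not the authors' actual setup. Networks of this size are the usual starting point for FPGA conversion tools such as hls4ml.

```python
# A minimal sketch, not the authors' model: a small dense regression network
# of the kind that FPGA conversion tools (e.g. hls4ml) typically accept.
# Feature count and training data below are illustrative placeholders.
import numpy as np
import tensorflow as tf

n_features = 8  # assumed: stub coordinates and bending angles from the barrel stations

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),  # regressed pT (often 1/pT is used as the target)
])
model.compile(optimizer="adam", loss="mse")

# Placeholder arrays standing in for the Monte Carlo muon samples.
X = np.random.rand(1000, n_features).astype("float32")
y = np.random.rand(1000, 1).astype("float32")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
print(model.predict(X[:3], verbose=0))  # predicted pT for three sample muons
```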

    Tor: modular search with hookable disjunction

    Horn Clause Programs have a natural exhaustive depth-first procedural semantics. However, for many programs this semantics is ineffective. In order to compute useful solutions, one needs the ability to modify the search method that explores the alternative execution branches. Tor, a well-defined hook into Prolog disjunction, provides this ability. It is lightweight thanks to its library approach and efficient because it is based on program transformation. Tor is general enough to mimic search-modifying predicates like ECLiPSe's search/6. Moreover, Tor supports modular composition of search methods and other hooks. The Tor library is already provided and used as an add-on to SWI-Prolog.
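
    To illustrate the concept outside Prolog, the Python sketch below models alternatives as lazy thunks and routes every disjunction through a single rebindable hook, so that swapping the hook changes the search method. This only mirrors Tor's idea; the actual library is a program-transformation-based SWI-Prolog add-on.

```python
# A Python analogue of Tor's hookable disjunction (illustrative only; Tor
# itself is a Prolog library based on program transformation). Alternatives
# are lazy thunks yielding solutions; the hook decides how branches combine.
from itertools import chain

def dfs_disj(left, right):
    # Default Prolog-style semantics: exhaust the left branch, then the right.
    return chain(left(), right())

def interleave_disj(left, right):
    # An alternative search method plugged in through the same hook:
    # fair interleaving of the two branches.
    a, b = iter(left()), iter(right())
    while True:
        for it in (a, b):
            try:
                yield next(it)
            except StopIteration:
                yield from (b if it is a else a)
                return

DISJ = dfs_disj  # the hook: rebinding it changes the search method

def member(xs):
    # x = head ; member(tail) -- the disjunction is routed through the hook
    if not xs:
        return iter(())
    return DISJ(lambda: iter([xs[0]]), lambda: member(xs[1:]))

print(list(member([1, 2, 3])))  # [1, 2, 3] under depth-first search
```

    Rebinding DISJ to interleave_disj changes the traversal order without touching member at all, which is the kind of modular composability Tor provides at the Prolog level.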

    Multi-objective optimisation of safety-critical hierarchical systems

    Achieving high reliability, particularly in safety-critical systems, is an important and often mandatory requirement. At the same time, costs should be kept as low as possible. Finding an optimum balance between maximising a system's reliability and minimising its cost is a hard combinatorial problem. As the size and complexity of a system increase, so does the scale of the problem faced by the designers. To address these difficulties, meta-heuristics such as Genetic Algorithms and Tabu Search have been applied in the past to determine automatically the optimal allocation of redundancies in a system, as a mechanism for optimising its reliability and cost characteristics. In all cases, simple reliability block diagrams with restrictive assumptions, such as failure independence and limited two-state failure modes, were used to evaluate the reliability of the candidate designs produced by the various algorithms. This thesis argues that a departure from this restrictive evaluation model is possible by using a new model-based reliability evaluation technique called Hierarchically Performed Hazard Origin and Propagation Studies (HiP-HOPS). HiP-HOPS overcomes the limitations imposed by reliability block diagrams by providing automatic analysis of complex engineering models with multiple failure modes. The thesis demonstrates that, used as the fitness-evaluating component of a multi-objective Genetic Algorithm, HiP-HOPS can solve the redundancy allocation problem effectively and with relative efficiency. Furthermore, its ability to model and automatically analyse complex engineering models with multiple failure modes allows the Genetic Algorithm to optimise systems using more flexible strategies than simple series-parallel arrangements. The results of this thesis show the feasibility of the approach and point to a number of directions for future work.
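
    The following Python sketch illustrates such an optimisation loop under strong simplifying assumptions: a toy series-parallel reliability formula stands in for the much richer HiP-HOPS fitness evaluation, and all component reliabilities, costs, and GA parameters are invented for illustration.

```python
# A minimal sketch of multi-objective redundancy allocation (illustrative
# only): a toy series-parallel reliability model replaces the HiP-HOPS
# fitness component, so the GA mechanics can be shown self-contained.
import random

N_SUBSYS = 4
MAX_REDUNDANCY = 3
COMP_RELIABILITY = [0.90, 0.85, 0.95, 0.80]  # assumed per-component reliability
COMP_COST        = [5.0, 3.0, 8.0, 2.0]      # assumed per-component cost

def evaluate(design):
    # design[i] = number of parallel replicas in subsystem i
    reliability, cost = 1.0, 0.0
    for i, k in enumerate(design):
        reliability *= 1.0 - (1.0 - COMP_RELIABILITY[i]) ** k  # parallel block
        cost += COMP_COST[i] * k
    return reliability, cost

def dominates(f, g):
    # Pareto dominance: higher reliability and lower cost are both better.
    (rf, cf), (rg, cg) = f, g
    return rf >= rg and cf <= cg and (rf > rg or cf < cg)

def step(population):
    scored = [(d, evaluate(d)) for d in population]
    # Keep the non-dominated designs, refill with mutated copies of them.
    front = [d for d, f in scored
             if not any(dominates(g, f) for _, g in scored if g != f)]
    children = []
    while len(front) + len(children) < len(population):
        child = list(random.choice(front))
        child[random.randrange(N_SUBSYS)] = random.randint(1, MAX_REDUNDANCY)
        children.append(child)
    return front + children

population = [[random.randint(1, MAX_REDUNDANCY) for _ in range(N_SUBSYS)]
              for _ in range(20)]
for _ in range(50):
    population = step(population)
for d in sorted(set(map(tuple, population))):
    print(d, evaluate(list(d)))  # surviving designs and their (reliability, cost)
```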

    Designing Approximate Computing Circuits with Scalable and Systematic Data-Driven Techniques

    Semiconductor feature sizes have been shrinking significantly over the past decades, a trend that brings faster processing speeds as well as lower area and power consumption. Among these attributes, power consumption has emerged as the primary concern in integrated circuit design in recent years, owing to the rapidly increasing demand for energy-efficient Internet of Things (IoT) devices. As a result, low-power design approaches for digital circuits have attracted great interest. To this end, approximate computing has emerged as a promising hardware design technique: it creates opportunities to improve timing and energy efficiency by relaxing computing quality. The technique is feasible because many emerging resource-hungry applications, such as multimedia processing and machine learning, are error-resilient, so it is reasonable to trade an acceptable amount of computing quality for energy savings. Most prior work on approximate circuit design focuses on manual strategies for redesigning fundamental computational blocks such as adders and multipliers. Manual techniques, however, are unsuitable for system-level hardware because of its much higher design complexity. To tackle this challenge, we focus on scalable, systematic, and general design methodologies applicable to arbitrary circuits. In this work, we present two novel approximate circuit design methods based on machine learning techniques. Both methods skip the complicated manual analysis steps and instead generate approximate circuits primarily from the given input-error pattern. Our first method is a framework, based on feature selection, for designing the compensation block, an essential component of many approximate circuits. Our second method extends and optimizes this framework and integrates data-driven considerations into the design. Several case studies on fixed-width multipliers and other approximate circuits demonstrate the effectiveness of the proposed methods. The experimental results show that both methods can automatically and efficiently design low-error approximate circuits.
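
    As a minimal sketch of this data-driven flow, under assumptions that are ours rather than the authors': the Python snippet below simulates the error of a truncated fixed-width multiplier, selects the input bits that best predict that error (a crude form of feature selection), and tabulates a compensation value over just those bits.

```python
# Illustrative sketch, not the paper's framework: learn a compensation block
# for a fixed-width multiplier from its input-error pattern.
import random
from collections import defaultdict

N = 8  # operand width; the product is truncated to its upper N bits

def truncated_product(a, b):
    return ((a * b) >> N) << N  # drop the lower N bits, as a fixed-width multiplier does

def bits(a, b):
    return [(a >> i) & 1 for i in range(N)] + [(b >> i) & 1 for i in range(N)]

# 1. Collect the input-error pattern on random samples.
samples = [(random.randrange(1 << N), random.randrange(1 << N)) for _ in range(5000)]
errors = [a * b - truncated_product(a, b) for a, b in samples]
mean_err = sum(errors) / len(errors)

# 2. Feature selection: score each input bit by how far the mean error
#    shifts when that bit is set (a crude single-bit relevance measure).
scores = []
for j in range(2 * N):
    on = [e for (a, b), e in zip(samples, errors) if bits(a, b)[j]]
    scores.append(abs(sum(on) / max(len(on), 1) - mean_err))
top = sorted(range(2 * N), key=lambda j: -scores[j])[:4]  # keep the 4 most predictive bits

# 3. Compensation block: mean residual error for each pattern of selected bits.
table = defaultdict(list)
for (a, b), e in zip(samples, errors):
    table[tuple(bits(a, b)[j] for j in top)].append(e)
comp = {k: round(sum(v) / len(v)) for k, v in table.items()}

def approx_mult(a, b):
    key = tuple(bits(a, b)[j] for j in top)
    return truncated_product(a, b) + comp.get(key, round(mean_err))

a, b = 200, 173
print(a * b, approx_mult(a, b))  # exact vs compensated approximate product
```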