
    Massively Parallel Computing and the Search for Jets and Black Holes at the LHC

    Massively parallel computing could be the next leap necessary to reach an era of new discoveries at the LHC after the Higgs discovery. Scientific computing is a critical component of the LHC experiments, spanning operations, the trigger, the LHC Computing Grid, simulation, and analysis. One way to improve the physics reach of the LHC is to take advantage of the flexibility of the trigger system by integrating coprocessors based on Graphics Processing Units (GPUs) or the Many Integrated Core (MIC) architecture into its server farm. This cutting-edge technology provides not only the means to accelerate existing algorithms, but also the opportunity to develop new algorithms that select events in the trigger that previously would have evaded detection. In this article we describe new algorithms that would allow new topological signatures, including non-prompt jets and black-hole-like objects in the silicon tracker, to be selected in the trigger.

    Comment: 15 pages, 11 figures, submitted to NIM

    Massively Parallel Computing at the Large Hadron Collider up to the HL-LHC

    As the Large Hadron Collider (LHC) continues its upward progression in energy and luminosity towards the planned High-Luminosity LHC (HL-LHC) in 2025, the challenges of the experiments in processing increasingly complex events will also continue to increase. Improvements in computing technologies and algorithms will be a key part of the advances necessary to meet this challenge. Parallel computing techniques, especially those using massively parallel computing (MPC), promise to be a significant part of this effort. In these proceedings, we discuss these algorithms in the specific context of a particularly important problem: the reconstruction of charged particle tracks in the trigger algorithms in an experiment, in which high computing performance is critical for executing the track reconstruction in the available time. We discuss some areas where parallel computing has already shown benefits to the LHC experiments, and also demonstrate how an MPC-based trigger at the CMS experiment could not only improve performance, but also extend the reach of the CMS trigger system to capture events which are currently not practical to reconstruct at the trigger level.

    Comment: 14 pages, 6 figures. Proceedings of the 2nd International Summer School on Intelligent Signal Processing for Frontier Research and Industry (INFIERI2014), to appear in JINST. Revised version in response to referee comment

    First Evaluation of the CPU, GPGPU and MIC Architectures for Real Time Particle Tracking based on Hough Transform at the LHC

    Recent innovations focused around parallel processing, either through systems containing multiple processors or processors containing multiple cores, hold great promise for enhancing the performance of the trigger at the LHC and extending its physics program. The flexibility of the CMS/ATLAS trigger system allows for easy integration of computational accelerators, such as NVIDIA's Tesla Graphics Processing Unit (GPU) or Intel's Xeon Phi, in the High Level Trigger. These accelerators have the potential to provide faster or more energy-efficient event selection, thus opening up possibilities for new complex triggers that were not previously feasible. At the same time, it is crucial to explore the performance limits achievable on the latest generation of multicore CPUs with the use of the best software optimization methods. In this article, a new tracking algorithm based on the Hough transform will be evaluated for the first time on a multi-core Intel Xeon E5-2697v2 CPU, an NVIDIA Tesla K20c GPU, and an Intel Xeon Phi 7120 coprocessor. Preliminary timing performance will be presented.

    Comment: 13 pages, 4 figures, accepted to JINST
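    The core idea of Hough-transform tracking can be illustrated with a toy example: every hit votes for all track parameters it is compatible with, and a real track appears as a peak in the accumulator. The straight-line model, binning, and values below are illustrative assumptions, not the algorithm or parameters evaluated in the article.

```python
import numpy as np

# Toy Hough transform for straight-line "tracks" y = m*x + c.
# Each hit votes along its line of compatible (slope, intercept) pairs;
# the true track shows up as the accumulator maximum.
m_true, c_true = 0.5, 2.0
xs = np.linspace(0.0, 10.0, 8)
hits = [(x, m_true * x + c_true) for x in xs]     # 8 noiseless hits

m_bins = np.linspace(-1.0, 1.0, 101)              # candidate slopes
c_min, c_max, n_c = 0.0, 5.0, 101
dc = (c_max - c_min) / (n_c - 1)                  # intercept bin width
acc = np.zeros((len(m_bins), n_c), dtype=int)

for x, y in hits:
    for i, m in enumerate(m_bins):
        c = y - m * x                             # intercept implied by this hit
        j = int(round((c - c_min) / dc))
        if 0 <= j < n_c:
            acc[i, j] += 1

i, j = np.unravel_index(np.argmax(acc), acc.shape)
print(f"peak: m≈{m_bins[i]:.2f}, c≈{c_min + j * dc:.2f}, votes={acc[i, j]}")
```

    The same voting structure carries over to charged-particle tracking, where the parameter plane is typically curvature versus azimuth rather than slope versus intercept; the accumulator fill is also what parallelizes naturally on GPUs and many-core devices, since hits vote independently.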

    Online Pattern Recognition for the ALICE High Level Trigger

    The ALICE High Level Trigger has to process data online, in order to select interesting (sub)events or to compress data efficiently by modeling techniques. Focusing on the main data source, the Time Projection Chamber (TPC), we present two pattern recognition methods under investigation: a sequential approach ("cluster finder" and "track follower") and an iterative approach ("track candidate finder" and "cluster deconvoluter"). We show that the former is suited for pp and low-multiplicity PbPb collisions, whereas the latter might be applicable for high-multiplicity PbPb collisions, if it turns out that more than 8000 charged particles would have to be reconstructed inside the TPC. Based on the developed tracking schemes, we show that using modeling techniques a compression factor of around 10 might be achievable.

    Comment: Realtime Conference 2003, Montreal, Canada; to be published in IEEE Transactions on Nuclear Science (TNS); 6 pages, 8 figures

    A method to search for long duration gravitational wave transients from isolated neutron stars using the generalized FrequencyHough

    We describe a method to detect gravitational waves lasting O(hours-days) emitted by young, isolated neutron stars, such as those that could form after a supernova or a binary neutron star merger, using advanced LIGO/Virgo data. The method is based on a generalization of the FrequencyHough (FH), a pipeline that performs hierarchical searches for continuous gravitational waves by mapping points in the time/frequency plane of the detector to lines in the frequency/spindown plane of the source. We show that signals whose spindowns are related to their frequencies by a power law can be transformed to coordinates in which the behavior of these signals is always linear, and can therefore be searched for by the FH. We estimate the sensitivity of our search across different braking indices, and describe the portion of the parameter space we could explore in a search using varying fast Fourier transform (FFT) lengths.

    Comment: 15 figures
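    The linearization the abstract refers to can be sketched under the standard power-law spindown model; the symbols below are generic illustrations, not the pipeline's own notation.

```latex
% Power-law spindown with braking index n (n \neq 1):
\dot{f} = -k\, f^{\,n}
% Change of coordinate: define x \equiv f^{\,1-n}. Then
\dot{x} = (1-n)\, f^{-n}\, \dot{f}
        = (1-n)\, f^{-n} \left(-k\, f^{\,n}\right)
        = k\,(n-1)
% The slope is constant, so the signal evolves linearly:
x(t) = x_{0} + k\,(n-1)\, t
```

    In the transformed coordinate every such signal is a straight line, which is exactly the pattern a Hough-style point-to-line mapping is built to detect, regardless of the braking index.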

    A Flexible Image Processing Framework for Vision-based Navigation Using Monocular Image Sensors

    On-Orbit Servicing (OOS) encompasses all operations related to servicing satellites and performing other work on-orbit, such as reduction of space debris. Servicing satellites includes repairs, refueling, attitude control, and other tasks which may be needed to put a failed satellite back into working condition. A servicing satellite requires accurate position and orientation (pose) information about the target spacecraft. A wide range of sensor families is available to accommodate this need; however, when it comes to minimizing the mass, space, and power required for a sensor system, monocular imaging sensors generally perform very well. A disadvantage, compared to LIDAR sensors, is that costly computations are needed to process the sensor data. The method presented in this paper addresses these problems through three design principles. First: keep the computational burden as low as possible. Second: utilize different algorithms and choose among them, depending on the situation, to retrieve the most stable results. Third: stay modular and flexible. The software is designed primarily for On-Orbit Servicing tasks in which, for example, a servicer spacecraft approaches an uncooperative client spacecraft that cannot aid in the process in any way, as it is assumed to be completely passive. Image processing is used for navigating to the client spacecraft. In this specific scenario, it is vital to obtain accurate distance and bearing information until, in the last few meters, all six degrees of freedom need to be known. The smaller the distance between the spacecraft, the more accurate the pose estimates must be. The algorithms used here are tested and optimized on a sophisticated rendezvous and docking simulation facility, the second-generation European Proximity Operations Simulator (EPOS 2.0), located at the German Space Operations Center (GSOC) in Weßling, Germany.
    This simulation environment is real-time capable and provides an interface for testing sensor system hardware in a closed-loop configuration. The results from these tests are summarized in the paper as well. Finally, an outlook on future work is given, with the intention of providing some long-term goals, as the paper presents a snapshot of ongoing work that is far from complete. Moreover, it serves as an overview of additions that could improve the presented method further.

    Studies of the CMS Level 1 Trigger and Tracker Upgrade


    Novel Methodologies for Pattern Recognition of Charged Particle Trajectories in the ATLAS Detector

    By 2029, the Large Hadron Collider will enter its High Luminosity phase (HL-LHC) in order to achieve an unprecedented capacity for discovery. As this phase is entered, it is essential for many physics analyses that the efficiency of the reconstruction of charged particle trajectories in the ATLAS detector is maintained. With levels of pile-up expected to reach ⟨μ⟩ = 200, the number of track candidates that must be processed will increase exponentially in the current pattern matching regime. In this thesis, a novel method for charged particle pattern recognition is developed, based on the popular computer vision technique known as the Hough Transform (HT). Our method differs from previous attempts to use the HT for tracking in its data-driven choice of track parameterisation using Principal Component Analysis (PCA), and in the division of the detector space into very narrow tunnels known as sectors. This results in well-separated Hough images across the layers of the detector and relatively little noise from pile-up. Additionally, we show that the memory requirements for a pattern-based track finding algorithm can be reduced by approximately a factor of 5 through a two-stage compression process, without sacrificing any significant track finding efficiency. The new tracking algorithm is compared with an existing pattern matching algorithm, which matches detector hits to a collection of pre-defined patterns of hits generated from simulated muon tracks. The performance of our algorithm is shown to achieve similar track finding efficiency while reducing the number of track candidates per event.
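    A data-driven parameterisation via PCA can be sketched as follows: stack simulated hit-coordinate vectors for a sector (one row per track) and keep the leading principal components as the effective track parameters. The toy response model, dimensions, and noise level below are invented for illustration and are not the thesis's actual detector geometry or training sample.

```python
import numpy as np

# Toy model: the hit coordinate at each of 8 layers depends linearly on
# two latent track parameters (e.g. azimuth and curvature) plus noise,
# so the hit vectors live near a 2-dimensional subspace that PCA recovers.
rng = np.random.default_rng(7)
n_tracks, n_layers = 500, 8
params = rng.normal(size=(n_tracks, 2))          # latent track parameters
response = rng.normal(size=(2, n_layers))        # assumed linear response
hits = params @ response + 0.01 * rng.normal(size=(n_tracks, n_layers))

# PCA via SVD of the mean-centred hit matrix.
mean = hits.mean(axis=0)
_, s, vt = np.linalg.svd(hits - mean, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()            # variance fraction per component

# In this toy, two components capture nearly all the variance, so two
# numbers per track candidate suffice as the sector's parameterisation.
print("explained variance fractions:", np.round(explained[:3], 4))
```

    The appeal of such a choice is that the parameterisation adapts to whatever correlations the sector's geometry actually produces, rather than being fixed a priori.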

    An FPGA-based track finder for the L1 trigger of the CMS experiment at the high luminosity LHC

    A new tracking system is under development for operation in the CMS experiment at the High Luminosity LHC. It includes an outer tracker which will construct stubs, built by correlating clusters in two closely spaced sensor layers to reject hits from low transverse momentum tracks, and transmit them off-detector at 40 MHz. If tracker data are to contribute to keeping the Level-1 trigger rate at around 750 kHz under increased luminosity, a crucial component of the upgrade will be the ability to identify tracks with transverse momentum above 3 GeV/c by building tracks out of stubs. A concept for an FPGA-based track finder using a fully time-multiplexed architecture is presented, where track candidates are identified using a projective binning algorithm based on the Hough Transform. A hardware system based on the MP7 MicroTCA processing card has been assembled, demonstrating a realistic slice of the track finder in order to help gauge the performance and requirements for a full system. This paper outlines the system architecture and algorithms employed, highlights some of the first results from the hardware demonstrator, and discusses the prospects and performance of the completed track finder.
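    The projective binning idea can be sketched in the r-phi plane: under a small-curvature helix approximation each stub defines a line in the (q/pT, phi0) track-parameter plane, and stubs from one track intersect in a single accumulator cell. The constants, layer radii, and binning below are toy assumptions, not the CMS firmware implementation.

```python
import numpy as np

# Small-curvature approximation: phi ≈ phi0 - k * (q/pT) * r,
# with r in cm, pT in GeV/c, and k set by the magnetic field.
B = 3.8                                    # solenoid field, tesla (assumed)
k = 0.0015 * B                             # curvature constant for these units

q_over_pt_true, phi0_true = 0.2, 0.9       # a 5 GeV/c positive track (toy)
radii = [25.0, 35.0, 50.0, 70.0, 90.0, 110.0]   # toy barrel layer radii, cm
stubs = [(r, phi0_true - k * q_over_pt_true * r) for r in radii]

qpt_bins = np.linspace(-1 / 3, 1 / 3, 33)  # |q/pT| < 1/3  <=>  pT > 3 GeV/c
phi_lo, n_phi = 0.0, 256
dphi = 2.0 / n_phi
acc = np.zeros((len(qpt_bins), n_phi), dtype=int)

for r, phi in stubs:                       # each stub votes along its line
    for i, qpt in enumerate(qpt_bins):
        phi0 = phi + k * qpt * r           # invert the stub relation
        j = int((phi0 - phi_lo) / dphi)
        if 0 <= j < n_phi:
            acc[i, j] += 1

i, j = np.unravel_index(np.argmax(acc), acc.shape)
print(f"candidate: q/pT≈{qpt_bins[i]:+.3f}, phi0≈{phi_lo + (j + 0.5) * dphi:.3f}")
```

    Restricting the q/pT axis to |q/pT| < 1/3 is what bakes the 3 GeV/c momentum threshold directly into the accumulator; on an FPGA the per-stub votes map naturally onto parallel pipelines.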
