
    Report of the Topical Group on Micro-Pattern Gaseous Detectors for Snowmass 2021

    This report summarizes white papers on micro-pattern gaseous detectors (MPGDs) that were submitted to the Instrumentation Frontier Topical Group IF05, as part of the Snowmass 2021 decadal survey of particle physics.

    A neural network z-vertex trigger for Belle II

    We present the concept of a track trigger for the Belle II experiment, based on a neural network approach, that is able to reconstruct the z (longitudinal) position of the event vertex within the latency of the first level trigger. The trigger will thus be able to suppress a large fraction of the dominating background from events outside of the interaction region. The trigger uses the drift time information of the hits from the Central Drift Chamber (CDC) of Belle II within narrow cones in polar and azimuthal angle as well as in transverse momentum (sectors), and estimates the z-vertex without explicit track reconstruction. The preprocessing for the track trigger is based on the track information provided by the standard CDC trigger. It takes input from the 2D (r-φ) track finder, adds information from the stereo wires of the CDC, and finds the appropriate sectors in the CDC for each track in a given event. Within each sector, the z-vertex of the associated track is estimated by a specialized neural network, with a continuous output corresponding to the scaled z-vertex. The input values for the neural network are calculated from the wire hits of the CDC.
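    The sector-wise estimation described above can be pictured with a short sketch. The following is a minimal, hypothetical Python version of one "expert" network: a small multilayer perceptron that maps scaled drift-time features from the CDC wire hits of one sector to a scaled z-vertex in [-1, 1]. The layer sizes, feature layout, and sector handling are illustrative assumptions, not the Belle II implementation.

    ```python
    # Minimal sketch (assumed sizes and feature layout, not the Belle II code):
    # one sector-wise expert MLP estimating a scaled z-vertex from drift-time
    # inputs of the CDC wire hits in that sector.
    import numpy as np

    class ExpertMLP:
        def __init__(self, n_inputs=27, n_hidden=81, seed=0):
            rng = np.random.default_rng(seed)
            # One tanh hidden layer; the random weights stand in for the
            # trained parameters that would be loaded onto the FPGA.
            self.w1 = rng.normal(0.0, 0.1, (n_hidden, n_inputs))
            self.b1 = np.zeros(n_hidden)
            self.w2 = rng.normal(0.0, 0.1, (1, n_hidden))
            self.b2 = np.zeros(1)

        def predict(self, drift_features):
            """drift_features: vector of scaled drift times and related wire inputs."""
            h = np.tanh(self.w1 @ drift_features + self.b1)
            return float(np.tanh(self.w2 @ h + self.b2)[0])  # scaled z-vertex

    # Hypothetical usage: one expert per sector, selected from the 2D finder
    # track; the scaled output is converted back to centimetres downstream.
    experts = {sector_id: ExpertMLP(seed=sector_id) for sector_id in range(4)}
    z_scaled = experts[2].predict(np.zeros(27))
    ```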

    Real-Time Trigger and online Data Reduction based on Machine Learning Methods for Particle Detector Technology

    Modern particle accelerator experiments generate immense amounts of data at runtime. Storing the entire data volume produced quickly exceeds the available budget for the data readout infrastructure. This problem is usually addressed by a combination of trigger and data reduction mechanisms. Both mechanisms are placed as close as possible to the detectors in order to achieve the desired reduction of the outgoing data rates as early as possible. Methods traditionally used in such systems, however, struggle to achieve an efficient reduction in modern experiments. The reasons for this lie partly in the complex distributions of the background events that occur. The situation is aggravated, when designing the detector readout, by the fact that the properties of the accelerator and detector during operation at high luminosity are not known in advance. For this reason, a robust and flexible algorithmic alternative is needed, which can be provided by machine learning methods. Since such trigger and data reduction systems must operate under demanding conditions such as a tight latency budget, a large number of data transmission links, and general real-time requirements, FPGAs are often used as the technological basis for their implementation. Within this work, several FPGA-based approaches were developed and implemented that address the prevailing problems of the Belle II experiment. These approaches are presented and discussed throughout this thesis.

    Real-time Graph Building on FPGAs for Machine Learning Trigger Applications in Particle Physics

    We present a design methodology that enables the semi-automatic generation of hardware-accelerated graph-building architectures for locally constrained graphs, based on formally described detector definitions. In addition, we define a similarity measure in order to compare our locally constrained graph-building approaches with commonly used k-nearest-neighbour building approaches. To demonstrate the feasibility of our solution for particle physics applications, we implemented a real-time graph-building approach in a case study for the Belle II central drift chamber using Field-Programmable Gate Arrays (FPGAs). Our solution adheres to all throughput and latency constraints currently present in the hardware-based trigger of the Belle II experiment. We achieve constant time complexity at the expense of linear space complexity and thus prove that our automated methodology generates online graph-building designs suitable for a wide range of particle physics applications. By enabling hardware-accelerated pre-processing of graphs, we enable the deployment of novel Graph Neural Networks (GNNs) in the first-level triggers of particle physics experiments.
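    As a rough illustration of the locally constrained building scheme summarized above, the sketch below links each hit only to hits in the adjacent layer whose cell index falls inside a fixed window, so the number of candidate neighbours per hit is constant regardless of occupancy. The layout, window size, and data representation are assumptions for illustration, not the formally described detector definitions used by the methodology.

    ```python
    # Minimal sketch (illustrative assumptions, not the generated FPGA design):
    # locally constrained graph building for a drift-chamber-like layout. Each
    # hit is linked only to hits in the next layer within a fixed cell window,
    # so the candidate neighbours per hit are bounded by a constant, unlike
    # k-nearest-neighbour building, which has to consider all hit pairs.
    def build_local_graph(hits, window=2):
        """hits: list of (layer, cell) tuples; returns directed edges as index pairs."""
        # Index hits by position; a drift chamber has at most one hit per cell.
        index = {(layer, cell): i for i, (layer, cell) in enumerate(hits)}
        edges = []
        for i, (layer, cell) in enumerate(hits):
            # Inspect only a fixed window of cells in the adjacent layer.
            for dc in range(-window, window + 1):
                j = index.get((layer + 1, cell + dc))
                if j is not None:
                    edges.append((i, j))
        return edges

    # Hypothetical usage: two hits per layer on two adjacent layers.
    print(build_local_graph([(0, 10), (0, 40), (1, 11), (1, 42)]))  # [(0, 2), (1, 3)]
    ```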

    Efficient physics signal selectors for the first trigger level of the Belle II experiment based on machine learning

    A neural-network-based z-vertex trigger is developed for the first-level trigger of the upgraded flavor physics experiment Belle II at the high-luminosity B factory SuperKEKB in Tsukuba, Japan. Using the hit and drift-time information from the central drift chamber, a pool of expert neural networks estimates the 3D track parameters of the single tracks found by a 2D Hough finder. The neural networks are already implemented on parallel FPGA hardware for real-time data processing and run pipelined in the online first-level trigger of Belle II. Due to the anticipated high luminosity of up to 8 × 10³⁵ cm⁻²s⁻¹, Belle II will have to face severe levels of background tracks with vertices displaced along the beamline. The neural z-vertex algorithm presented in this thesis makes it possible to reject displaced background tracks, such that the requirements of the standard track trigger can be strongly relaxed. Especially for physics decay channels with a low track multiplicity in the final state, like τ pair production, or for initial state radiation events with reduced center-of-mass energies, the trigger efficiencies can be significantly increased. As an upgrade of the present 2D Hough finder in the neural network preprocessing, a model-independent 3D track finder is developed that uses the additional stereo hit information of the drift chamber. Thus, the trigger efficiencies improve for tracks in the phase space of low transverse momenta and shallow polar angles. Since the cross sections of the physics signal events typically increase towards shallow polar angles, this enlarged acceptance of the track trigger provides a substantial gain in the signal efficiencies. By using an adapted pool of expert networks, the enlarged phase space provided by the 3D finder can be efficiently covered. Studies on simulated MC background, on simulated initial state radiation events, and on recorded data from early Belle II runs demonstrate the high performance of the novel trigger algorithms. With the 3D finder, an increase of the track-finding rate of about 50% is confirmed for signal tracks, while displaced background tracks are actively suppressed prior to the neural network. Based on z-vertex cuts on the tracks processed by the neural networks, a two-track event efficiency of more than 99% can be achieved with a purity of around 80%.
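    The z-vertex based event selection mentioned at the end can be pictured with a short, purely hypothetical selection function: an event is kept if enough of its neural-network z estimates are compatible with the interaction region. The cut value and the two-track requirement below are placeholders, not the tuned numbers from the thesis.

    ```python
    # Minimal sketch (placeholder cut values, not the thesis's tuned selection):
    # keep an event if at least two neural-network z-vertex estimates lie in a
    # window around the interaction point, rejecting events whose tracks are
    # displaced along the beamline.
    def accept_event(track_z_cm, z_cut_cm=15.0, min_prompt_tracks=2):
        """track_z_cm: per-track z-vertex estimates in centimetres."""
        n_prompt = sum(abs(z) < z_cut_cm for z in track_z_cm)
        return n_prompt >= min_prompt_tracks

    # Hypothetical usage: a partly displaced event vs. a prompt two-track event.
    print(accept_event([3.1, 42.0]))   # False: only one prompt track
    print(accept_event([3.1, -1.4]))   # True: two prompt tracks
    ```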

    A Roadmap for HEP Software and Computing R&D for the 2020s

    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.

    Testing Lepton Flavor Universality and CKM Unitarity with Rare Pion Decays in the PIONEER experiment

    The physics motivation and the conceptual design of the PIONEER experiment, a next-generation rare pion decay experiment testing lepton flavor universality and CKM unitarity, are described. Phase I of the PIONEER experiment, which was proposed and approved at Paul Scherrer Institut, aims at measuring the charged-pion branching ratio to electrons vs. muons, Re/μ, 15 times more precisely than the current experimental result, reaching the precision of the Standard Model (SM) prediction at 1 part in 10⁴. Considering several inconsistencies between the SM predictions and data pointing towards the potential violation of lepton flavor universality, the PIONEER experiment will probe non-SM explanations of these anomalies through sensitivity to quantum effects of new particles up to the PeV mass scale. The later phases of the PIONEER experiment aim at improving the experimental precision of the branching ratio of pion beta decay (BRPB), π⁺ → π⁰e⁺ν(γ), currently at 1.036(6) × 10⁻⁸, by a factor of three (Phase II) and an order of magnitude (Phase III). Such precise measurements of BRPB will allow for tests of CKM unitarity in light of the Cabibbo Angle Anomaly and the theoretically cleanest extraction of |Vud| at the 0.02% level, comparable to the deduction from superallowed beta decays.
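    For reference, the quantity targeted in Phase I can be written out explicitly; the notation below is the standard definition of the branching ratio and is not quoted from the abstract itself.

    ```latex
    % Standard definition of the measured ratio (notation assumed, not quoted
    % from the abstract above):
    R_{e/\mu} = \frac{\Gamma\bigl(\pi^{+}\to e^{+}\nu_{e}(\gamma)\bigr)}
                     {\Gamma\bigl(\pi^{+}\to \mu^{+}\nu_{\mu}(\gamma)\bigr)},
    \qquad
    \frac{\delta R_{e/\mu}}{R_{e/\mu}} \sim 10^{-4}
    \quad\text{(the Phase I precision goal quoted above).}
    ```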