37 research outputs found

    Clustering of B -> D(*) tau- nu(tau) kinematic distributions with ClusterKinG

    Get PDF
    New Physics can manifest itself in kinematic distributions of particle decays. The parameter space defining the shape of such distributions can be large, which is challenging for both theoretical and experimental studies. Using clustering algorithms, the parameter space can however be dissected into subsets (clusters) that correspond to similar kinematic distributions. Clusters can then be represented by benchmark points, which allow for less involved studies and a concise presentation of the results. We demonstrate this concept using the Python package ClusterKinG, an easy-to-use framework for the clustering of distributions that particularly aims to make these techniques more accessible in a High Energy Physics context. As an example we consider B -> D(*) tau- nu(tau) distributions and discuss various clustering methods and possible implications for future experimental analyses.
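    The underlying idea is simple enough to sketch in a few lines. The following toy example is not the ClusterKinG API; the parameter ranges, spectrum shape, and cluster count are invented for illustration. It clusters binned kinematic distributions with SciPy's hierarchical clustering and picks one benchmark point per cluster:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Toy setup: each New Physics parameter point yields a binned kinematic
# distribution (e.g. dGamma/dq^2 in 10 bins). Here we generate fake shapes.
rng = np.random.default_rng(0)
params = rng.uniform(-1, 1, size=(200, 2))       # 200 points in a 2D parameter space
bins = np.linspace(0, 1, 11)
centers = 0.5 * (bins[:-1] + bins[1:])

def distribution(p):
    """Hypothetical normalized spectrum whose shape depends on the parameters."""
    shape = 1 + p[0] * centers + p[1] * centers ** 2
    return shape / shape.sum()

dists = np.array([distribution(p) for p in params])

# Group parameter points by the similarity of their normalized distributions;
# a chi2-like metric could replace the plain Euclidean distance used here.
Z = linkage(dists, method="ward")
labels = fcluster(Z, t=4, criterion="maxclust")  # dissect the space into 4 clusters

# One benchmark point per cluster: the member closest to the cluster mean.
for k in np.unique(labels):
    members = np.where(labels == k)[0]
    mean = dists[members].mean(axis=0)
    bench = members[np.argmin(np.linalg.norm(dists[members] - mean, axis=1))]
    print(f"cluster {k}: {len(members)} points, benchmark parameters {params[bench]}")
```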

    An Object Condensation Pipeline for Charged Particle Tracking at the High Luminosity LHC

    Get PDF
    Recent work has demonstrated that graph neural networks (GNNs) trained for charged particle tracking can match the performance of traditional algorithms while improving scalability in preparation for the High Luminosity LHC (HL-LHC). Most approaches are based on the edge classification (EC) paradigm, wherein tracker hits are connected by edges, and a GNN is trained to prune edges, resulting in a collection of connected components representing tracks. These connected components are usually collected by a clustering algorithm, and the resulting hit clusters are passed to downstream modules that may assess track quality or fit track parameters. In this work, we consider an alternative approach based on object condensation (OC), a multi-objective learning framework designed to cluster points belonging to an arbitrary number of objects (in this context, tracks) and regress the properties of each object. We demonstrate that OC shows very promising results when applied to the pixel detector of the TrackML dataset and can, in some cases, recover tracks that are not reconstructable when relying on the output of an EC alone. The results have been obtained with a modular and extensible open-source implementation that allows us to efficiently train and evaluate the performance of various OC architectures and related approaches.
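    For readers unfamiliar with OC, the following is a simplified sketch of the potential term of the condensation loss, loosely following the original object condensation formulation; it omits the auxiliary condensation-strength and property-regression terms, and the variable names and simplifications are ours, not this pipeline's code:

```python
import torch

def condensation_loss(x, beta, labels, q_min=0.1):
    """Simplified object condensation potential loss (sketch only).

    x:      (N, D) learned latent coordinates of the hits
    beta:   (N,)   condensation strengths in (0, 1)
    labels: (N,)   truth particle id per hit, -1 for noise
    """
    # Each hit gets a "charge" that grows with its condensation strength.
    q = torch.arctanh(beta.clamp(max=1 - 1e-4)) ** 2 + q_min
    loss = x.new_zeros(())
    for k in labels.unique():
        if k < 0:                              # noise hits exert no attraction
            continue
        mask = labels == k
        alpha = torch.argmax(beta * mask)      # condensation point of object k
        d = torch.norm(x - x[alpha], dim=1)
        # Attract hits of the same object, repel all others within a unit hinge.
        attractive = (q[mask] * d[mask] ** 2).mean()
        repulsive = (q[~mask] * torch.relu(1 - d[~mask])).mean()
        loss = loss + q[alpha] * (attractive + repulsive)
    return loss / max(len(labels.unique()), 1)
```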

    High Pileup Particle Tracking with Object Condensation

    Full text link
    Recent work has demonstrated that graph neural networks (GNNs) can match the performance of traditional algorithms for charged particle tracking while improving scalability to meet the computing challenges posed by the HL-LHC. Most GNN tracking algorithms are based on edge classification and identify tracks as connected components from an initial graph containing spurious connections. In this talk, we consider an alternative based on object condensation (OC), a multi-objective learning framework designed to cluster points (hits) belonging to an arbitrary number of objects (tracks) and regress the properties of each object. Building on our previous results, we present a streamlined model and show progress toward a one-shot OC tracking algorithm in a high-pileup environment. Comment: 8 pages, 6 figures, 8th International Connecting The Dots Workshop (Toulouse 2023).

    Train to Sustain

    Get PDF
    The HSF/IRIS-HEP Software Training group provides software training to new researchers in High Energy Physics (HEP) and related communities. These skills are essential to produce the high-quality and sustainable software needed to do the research. Given the thousands of users in the community, sustainability, though challenging, is the centerpiece of its approach. The training modules are open source and collaborative. Tools and platforms such as GitHub enable technical continuity and collaboration and foster the habit of developing software that is reproducible and reusable. This contribution describes these efforts and their broader impacts.

    Inverted CERN School of Computing 2020

    No full text
    Ever been to the point where even small improvements require large refactoring efforts or dirty hacks? If only one had made the right choices along the way! This course discusses programming paradigms (functional vs. object-oriented, imperative vs. declarative programming, ...) and introduces common design patterns (reusable solutions to common problems). While theory alone is unlikely to make you a better developer, having an overview of common principles, paired with the vocabulary to describe them, will make it easier to come up with the right solutions. In the exercises, we refactor various code snippets using the ideas of the lecture and discuss different approaches and their applicability to HEP problems. Basic familiarity with Python is advised for the exercises. This series is aimed at scientists with little formal training in software engineering, but veterans are always welcome to join for additional input in the discussions.
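    As a flavor of the paradigms discussed, the contrast between imperative and declarative style fits in a few lines of Python; this is a hypothetical toy example, not taken from the course material:

```python
# Imperative style: spell out *how* to compute the result, mutating state.
def selected_pts_imperative(tracks, pt_cut):
    result = []
    for track in tracks:
        if track["pt"] > pt_cut:
            result.append(track["pt"])
    return result

# Declarative style: state *what* the result is.
def selected_pts_declarative(tracks, pt_cut):
    return [t["pt"] for t in tracks if t["pt"] > pt_cut]

tracks = [{"pt": 0.4}, {"pt": 2.1}, {"pt": 5.3}]
assert selected_pts_imperative(tracks, 1.0) == selected_pts_declarative(tracks, 1.0)
```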

    A Performance Analysis Plugin for DAQPIPE

    No full text
    In 2020 the Data Acquisition (DAQ) system of the LHCb experiment will be upgraded to feature a trigger-free readout. This requires an event builder network consisting of about 500 nodes with a total network capacity of 4 TB/s [1]. DAQPIPE (Data Acquisition Protocol Independent Performance Evaluator) is a tool to simulate and evaluate the performance of such a DAQ system. The current implementation of DAQPIPE only gives rough feedback about the event building rate. The aim of this 10-week summer student project was to implement network monitoring for a more detailed performance evaluation of different transport protocols and to spot potential bottlenecks.
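    To illustrate the kind of monitoring involved, here is a minimal sketch of a per-link throughput monitor; the class, method, and node names are hypothetical and do not reflect the actual DAQPIPE plugin:

```python
import time
from collections import defaultdict

class ThroughputMonitor:
    """Hypothetical per-link throughput monitor for an event builder network.

    record() logs bytes sent from one node to another; rates() turns the
    counters into MB/s over the elapsed window, which is the kind of
    per-link feedback needed to spot bottlenecks.
    """

    def __init__(self):
        self.start = time.monotonic()
        self.bytes_sent = defaultdict(int)   # (src, dst) -> cumulative bytes

    def record(self, src, dst, nbytes):
        self.bytes_sent[(src, dst)] += nbytes

    def rates(self):
        elapsed = time.monotonic() - self.start
        return {link: b / elapsed / 1e6 for link, b in self.bytes_sent.items()}

mon = ThroughputMonitor()
mon.record("node01", "node02", 8 * 1024 ** 2)   # pretend 8 MiB were shipped
time.sleep(0.1)
print(mon.rates())                               # MB/s per (src, dst) link
```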

    Calibration of machine learning based hadronic tagging in preparation for a Vcb measurement and clustering of kinematic distributions

    Get PDF
    Measurements of the decay rate ratios R(D(*)) of B -> D(*) tau nu and B -> D(*) ℓ nu decays (ℓ = e, mu) show a substantial tension with the predictions of the Standard Model of particle physics. This thesis explores multiple topics that can facilitate a better understanding of these so-called flavor anomalies. The anomalies may point to contributions of physics beyond the Standard Model; however, the large, unexplored parameter space defining such contributions makes exploratory physics studies challenging. Clustering algorithms can divide this parameter space into subsets featuring similar decay kinematics, which helps to simplify studies. I present an open-source software package that makes such techniques accessible and demonstrate its application to the B -> D(*) tau nu decay, which is speculated to be responsible for the R(D(*)) anomalies. Furthermore, I show preparations for a study of the decay e-e+ -> Upsilon(4S) -> Btag (-> hadrons) Bsig (-> D* ℓ nu) with the dataset of the Belle experiment. Besides being of interest as the normalization channel of the R(D*) ratio, this decay also allows for the measurement of the CKM matrix element Vcb and for fits to hadronic form factors. One of the significant improvements of this study over previous studies at Belle is the use of the Full Event Interpretation (FEI), a machine learning algorithm that is able to reconstruct several thousand possible Btag decay channels. However, this algorithm is sensitive to inaccuracies in the modeling of the Monte Carlo simulation used throughout the analysis, leading to different efficiencies on simulated and recorded data. To correct these efficiency differences, I reconstruct the decay e-e+ -> Upsilon(4S) -> Btag (-> hadrons) Bsig (-> X ℓ nu). Assuming that the reconstruction efficiencies of the X ℓ nu decay are well understood, any efficiency difference between data and Monte Carlo simulation can be attributed to the FEI and hence used for its calibration. Along with its effect on the overall reconstruction efficiency, the calibration also affects important observables used for background subtraction in Vcb and R(D(*)) analyses. I validate the calibration with a sample of B -> D ℓ nu decays and perform additional studies to confirm the validity of core assumptions of the calibration procedure. The calibration factors for Btag mesons with correctly and incorrectly reconstructed flavor are found to differ, which was not accounted for in previous analyses. I successfully explore several correction strategies on a preliminary dataset. As the FEI is used heavily at both Belle and Belle II, this result has significant implications for many past and upcoming analyses. The success of the FEI also highlights the importance of using state-of-the-art software technologies in modern measurements. To deliver the best possible science, large experimental collaborations are increasingly focusing on software education. I present recent developments in the software training activities that I have coordinated at Belle II and at the High Energy Physics Software Foundation (HSF).
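    The core of such a calibration can be illustrated with a toy computation: if the X ℓ nu side is well understood, the per-bin ratio of background-subtracted data yields to the MC expectation gives the calibration factors applied to the FEI. The yields below are invented for illustration and are not results from the thesis:

```python
import numpy as np

# Hypothetical per-bin yields of FEI-tagged B -> X l nu candidates,
# e.g. binned in the FEI signal-probability classifier output.
n_data = np.array([1520.0, 980.0, 610.0, 340.0])  # background-subtracted data
n_mc = np.array([1700.0, 1010.0, 590.0, 300.0])   # corresponding MC expectation

# Attribute any residual data/MC difference to the FEI tag side and
# absorb it into per-bin calibration factors.
eps = n_data / n_mc

# Statistical uncertainty, assuming independent Poisson-dominated yields.
err = eps * np.sqrt(1.0 / n_data + 1.0 / n_mc)

for i, (e, de) in enumerate(zip(eps, err)):
    print(f"bin {i}: calibration factor {e:.3f} +- {de:.3f}")

# MC events would then be reweighted by eps[bin] before the signal extraction.
```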

    Tracking with Graph Neural Networks

    No full text
    Recent work has demonstrated that graph neural networks (GNNs) trained for charged particle tracking can match the performance of traditional algorithms while improving scalability. This project uses a learned clustering strategy: GNNs are trained to embed the hits of the same particle close to each other in a latent space, such that they can easily be collected by a clustering algorithm. The project is fully open source and available at https://github.com/gnn-tracking/gnn_tracking/. In this talk, we will present the basic ideas while demonstrating the execution of our pipeline with a Jupyter notebook. We will also show how participants can plug in their own models.
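    The collection step can be sketched with an off-the-shelf density-based clustering. In the sketch below, the embedding is faked with Gaussian blobs rather than produced by a trained GNN, and the hyperparameters are placeholders, not the pipeline's tuned values:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Pretend the trained network has mapped each hit to a point in an
# 8-dimensional latent space, with hits of the same particle close together.
rng = np.random.default_rng(42)
n_particles, hits_per_particle = 50, 10
centers = rng.normal(size=(n_particles, 8))
latent = np.repeat(centers, hits_per_particle, axis=0) + 0.05 * rng.normal(
    size=(n_particles * hits_per_particle, 8)
)

# Collect hits into track candidates with density-based clustering;
# eps and min_samples would be tuned on a validation sample.
labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(latent)
print(f"reconstructed {labels.max() + 1} track candidates, "
      f"{np.sum(labels == -1)} noise hits")
```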