    Measurement of the top quark pair production cross section in proton-antiproton collisions at √s = 1.96 TeV: hadronic top decays with the D0 detector

    Of the six quarks in the standard model, the top quark is by far the heaviest: 35 times more massive than its partner, the bottom quark, and more than 130 times heavier than the average of the other five quarks. Its correspondingly large decay width means it tends to decay before forming a bound state. Of all quarks, therefore, the top is the least affected by quark confinement, behaving almost as a free quark. Since in the standard model top quarks couple almost exclusively to bottom quarks (t → Wb), top quark decays provide a window on the standard model through the direct measurement of the Cabibbo-Kobayashi-Maskawa quark-mixing matrix element Vtb. By the same token, a deficit of top quark decays into W bosons could imply the existence of decay channels beyond the standard model, for example the charged Higgs bosons expected in two-Higgs-doublet models: t → H⁺b. This thesis sets out to measure the top-antitop quark pair production cross section at a center-of-mass energy of √s = 1.96 TeV in the fully hadronic decay channel. The analysis is performed on 1 fb⁻¹ of Tevatron Run IIa data taken with the D0 detector between July 2002 and February 2006. A neural network is used to identify jets from b-quarks, and a likelihood ratio method is used to separate signal from background. To avoid reliance on possibly imperfect Monte Carlo models of the QCD background, the background was modelled using a dedicated data sample. The tt̄ signal was modelled using the ALPGEN and PYTHIA Monte Carlo event generators. The generated signal sample was passed through the full, GEANT-based D0 detector simulation and reconstructed using the default D0 reconstruction software.
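
    The abstract does not spell out the discriminant, but a likelihood-ratio separation of the kind mentioned above is conventionally built from products of per-variable probability densities. A minimal sketch, assuming independent input variables x_i with signal and background densities p_i^s and p_i^b (the actual variable set is specific to the thesis and not given here):

        \mathcal{L}_{s,b}(\mathbf{x}) = \prod_i p_i^{s,b}(x_i), \qquad
        D(\mathbf{x}) = \frac{\mathcal{L}_s(\mathbf{x})}{\mathcal{L}_s(\mathbf{x}) + \mathcal{L}_b(\mathbf{x})} \in [0, 1]

    Events with D near 1 are signal-like, so a cut on D (or a fit to its shape) separates the tt̄ signal from the QCD multijet background.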

    Interpreting and Correcting Medical Image Classification with PIP-Net

    Part-prototype models are explainable-by-design image classifiers, and a promising alternative to black-box AI. This paper explores the applicability and potential of interpretable machine learning, in particular PIP-Net, for automated diagnosis support on real-world medical imaging data. PIP-Net learns human-understandable prototypical image parts, and we evaluate its accuracy and interpretability for fracture detection and skin cancer diagnosis. We find that PIP-Net's decision-making process is in line with medical classification standards, while only provided with image-level class labels. Because of PIP-Net's unsupervised pretraining of prototypes, data quality problems such as undesired text in an X-ray or labelling errors can be easily identified. Additionally, we are the first to show that humans can manually correct the reasoning of PIP-Net by directly disabling undesired prototypes. We conclude that part-prototype models are promising for medical applications due to their interpretability and potential for advanced model debugging.
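
    A minimal sketch of the correction mechanism described above, assuming a PIP-Net-style scoring layer in which class scores are a non-negative linear combination of prototype presence values (the class and method names below are hypothetical illustrations, not the actual PIP-Net API):

        import torch
        import torch.nn as nn

        class PartPrototypeHead(nn.Module):
            """Toy stand-in for a part-prototype scoring layer: class scores
            are a non-negative linear combination of prototype presences."""
            def __init__(self, num_prototypes: int, num_classes: int):
                super().__init__()
                # Non-negative weights connect each prototype to each class.
                self.weight = nn.Parameter(torch.rand(num_classes, num_prototypes))

            def forward(self, presence: torch.Tensor) -> torch.Tensor:
                # presence: (batch, num_prototypes), each entry in [0, 1].
                return presence @ self.weight.t()

            def disable_prototype(self, proto_idx: int) -> None:
                # "Disabling" an undesired prototype (e.g. one that fires on
                # text overlaid on an X-ray) zeroes its weight to every class,
                # so it can no longer contribute to any decision.
                with torch.no_grad():
                    self.weight[:, proto_idx] = 0.0

        head = PartPrototypeHead(num_prototypes=16, num_classes=2)
        presence = torch.rand(4, 16)
        head.disable_prototype(3)          # manual correction by a human
        assert torch.all(head.weight[:, 3] == 0)
        scores = head(presence)            # prototype 3 no longer contributes

    Zeroing a prototype's weights removes its influence on every class, which is the sense in which a human can switch off a prototype that fires on clinically irrelevant image parts.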

    MiniDAQ-3: Providing concurrent independent subdetector data-taking on CMS production DAQ resources

    The data acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at CERN collects data for events accepted by the Level-1 Trigger from the different detector systems, assembles them in an event builder before making them available for further selection in the High Level Trigger, and finally stores the selected events for offline analysis. In addition to the central DAQ providing global acquisition functionality, several separate, so-called “MiniDAQ” setups allow operating independent data-acquisition runs using an arbitrary subset of the CMS subdetectors. During Run 2 of the LHC, MiniDAQ setups ran their event builder and High Level Trigger applications on dedicated resources, separate from those used for the central DAQ. This cleanly separated the MiniDAQ setups from the central DAQ system, but it also meant limited throughput and a fixed number of possible MiniDAQ setups. In Run 3, MiniDAQ-3 setups share production resources with the new central DAQ system, allowing each setup to operate at the maximum Level-1 rate thanks to the reuse of resources and network bandwidth. The configuration management tools had to be significantly extended to support the synchronization of the DAQ configurations needed for the various setups. We report on the new configuration-management features and on the first year of operational experience with the new MiniDAQ-3 system.
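
    As an illustrative sketch only (the setup and subdetector names are hypothetical; the abstract does not describe the actual CMS configuration tools), concurrent MiniDAQ-style setups must each claim a disjoint subset of subdetectors, since a subdetector can feed only one run at a time. That constraint can be expressed as a simple validation step:

        # Hypothetical illustration: reject configurations in which two
        # concurrent setups request the same subdetector.
        from typing import Dict, Set

        def validate_setups(setups: Dict[str, Set[str]]) -> None:
            claimed: Dict[str, str] = {}
            for setup, subdetectors in setups.items():
                for det in subdetectors:
                    if det in claimed:
                        raise ValueError(
                            f"{det} requested by both {claimed[det]} and {setup}")
                    claimed[det] = setup

        validate_setups({
            "minidaq_tracker": {"PIXEL", "STRIPS"},
            "minidaq_muon": {"DT", "CSC", "RPC"},
        })  # passes: the two setups use disjoint subdetector sets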

    The CMS Orbit Builder for the HL-LHC at CERN

    The Compact Muon Solenoid (CMS) experiment at CERN incorporates one of the highest-throughput data acquisition systems in the world and is expected to increase its throughput by more than a factor of ten for the High-Luminosity phase of the Large Hadron Collider (HL-LHC). To achieve this goal, most components of the system will be upgraded. Among them, the event builder software, in charge of assembling all the data read out from the different sub-detectors, is planned to evolve from a single event builder to an orbit builder that assembles multiple events at the same time. The throughput of the event builder will be increased from the current 1.6 Tb/s to 51 Tb/s for the HL-LHC orbit builder. This paper presents preliminary network transfer studies in preparation for the upgrade. The key conceptual characteristics are discussed, concerning differences between the CMS event builder in Run 3 and the CMS Orbit Builder for the HL-LHC. For the feasibility studies, a pipestream benchmark mimicking event-builder-like traffic has been developed. Preliminary performance tests and results are discussed.
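
    A minimal sketch of the orbit-building idea described above, assuming (hypothetically) that fragments arrive tagged with an orbit number and that all events belonging to one LHC orbit are assembled and handed off as a single unit, rather than event by event as in the Run 3 event builder:

        # Hypothetical sketch: group per-event fragments by orbit number and
        # yield each orbit once every source has delivered all its fragments.
        from collections import defaultdict
        from typing import Dict, Iterable, List, Tuple

        # (orbit_number, event_id, source_id, payload)
        Fragment = Tuple[int, int, int, bytes]

        def build_orbits(fragments: Iterable[Fragment],
                         n_sources: int,
                         events_per_orbit: int):
            pending: Dict[int, List[Fragment]] = defaultdict(list)
            for frag in fragments:
                orbit = frag[0]
                pending[orbit].append(frag)
                # Simplified completeness test: one fragment per source
                # per event means the orbit can be dispatched as a unit.
                if len(pending[orbit]) == n_sources * events_per_orbit:
                    yield orbit, pending.pop(orbit)

        # Toy traffic: 2 sources, 3 events per orbit, 2 orbits.
        traffic = [(o, e, s, b"x")
                   for o in range(2) for e in range(3) for s in range(2)]
        for orbit, frags in build_orbits(traffic, n_sources=2, events_per_orbit=3):
            print(f"orbit {orbit} assembled from {len(frags)} fragments")

    Amortizing the assembly and transfer overhead over a whole orbit of events, instead of paying it per event, is one way to approach the roughly 32-fold throughput step from 1.6 Tb/s to 51 Tb/s quoted above.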

    Towards a container-based architecture for CMS data acquisition

    The CMS data acquisition (DAQ) system is implemented as a service-oriented architecture in which DAQ applications, as well as general applications such as monitoring and error reporting, run as self-contained services. Deployment and operation of these services is currently achieved using several heterogeneous facilities, custom configuration data, and scripts in several languages. In this work, we restructure the existing system into a homogeneous, scalable cloud architecture adopting a uniform paradigm, where all applications are orchestrated in a common environment with standardized facilities. In this new paradigm, DAQ applications are organized as groups of containers and the required software is packaged into container images. Automation of all aspects of coordinating and managing containers is provided by the Kubernetes environment, where a set of physical and virtual machines is unified in a single pool of compute resources. We demonstrate that a container-based cloud architecture provides an across-the-board solution that can be applied to DAQ in CMS. We show the strengths and advantages of running DAQ applications in a container infrastructure as compared to a traditional application model.
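
    As a hedged sketch of the container paradigm described above (the image name, labels, namespace, and replica count are hypothetical; the abstract does not specify them), a containerized DAQ application could be scheduled onto the shared machine pool through the standard Kubernetes Python client:

        # Hypothetical sketch using the official `kubernetes` Python client:
        # run a containerized DAQ application as a Deployment and let the
        # orchestrator place it on the unified pool of compute resources.
        from kubernetes import client, config

        config.load_kube_config()  # use load_incluster_config() inside the cluster

        labels = {"app": "daq-eventbuilder"}  # hypothetical application name
        deployment = client.V1Deployment(
            metadata=client.V1ObjectMeta(name="daq-eventbuilder"),
            spec=client.V1DeploymentSpec(
                replicas=4,  # scale out by adjusting the replica count
                selector=client.V1LabelSelector(match_labels=labels),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels=labels),
                    spec=client.V1PodSpec(containers=[
                        client.V1Container(
                            name="eventbuilder",
                            image="registry.example.org/daq/eventbuilder:latest",
                        )
                    ]),
                ),
            ),
        )
        client.AppsV1Api().create_namespaced_deployment(namespace="daq",
                                                        body=deployment)

    Packaging the application into an image and declaring the desired replica count moves scheduling, restart, and placement decisions from custom scripts to the orchestrator, which is the uniformity argument made above.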