
    Future Experimental Improvement for the Search of LNV Process in eμ Sector

    Exploring the leptonic sector in frontier experiments is increasingly important, since the conservation of lepton flavor and total lepton number are no longer guaranteed in the Standard Model after the discovery of neutrino oscillations. μ− + N(A,Z) → e+ + N(A,Z−2) conversion in a muonic atom is one of the most promising channels for investigating the lepton number violation process, and the measurement of this process is planned in future μ−−e− conversion experiments with a muonic atom in a muon-stopping target. This paper discusses how to maximize the experimental sensitivity to the μ−−e+ conversion by introducing the new requirement of the mass relation M(A,Z−2) < M(A,Z−1), where M(A,Z) is the mass of the muon-stopping target nucleus, to eliminate the background from radiative muon capture. The sensitivity to the μ−−e+ conversion is anticipated to improve by four orders of magnitude in forthcoming experiments using a proper target nucleus that satisfies the mass relation. The most promising isotopes found are 40Ca and 32S. Comment: 8 pages, 4 figures; figures, some numbers, and a reference in the text are modified

    GPU-Accelerated Event Reconstruction for the COMET Phase-I Experiment

    This paper discusses a parallelized event reconstruction for the COMET Phase-I experiment. The experiment aims to discover charged lepton flavor violation by observing 104.97 MeV electrons from neutrinoless muon-to-electron conversion in muonic atoms. The event reconstruction of electrons with multiple helix turns is a challenging problem because hit-to-turn classification requires a high computational cost. The introduced algorithm finds an optimal seed of position and momentum for each turn partition by investigating the residual sum of squares based on the distance of closest approach (DCA) between hits and a track extrapolated from the seed. Hits with a DCA less than a cutoff value are classified into the turn represented by the seed. The classification performance was optimized by tuning the cutoff value and refining the set of classified hits. The workload was parallelized over the seeds and the hits by defining two GPU kernels, which record track parameters extrapolated from the seeds and find the DCAs of the hits, respectively. Reasonable efficiency and momentum resolution were obtained for a wide momentum region that covers both signal and background electrons. The event reconstruction results from the CPU and GPU were identical to each other. The benchmarked GPUs achieved an order of magnitude of speedup over a CPU with 16 cores, although the exact speed gains varied depending on their architectures.
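    The DCA-based hit-to-turn classification can be sketched as follows. This is a minimal illustration, not the experiment's code: the straight sampled "track" and the helper names are assumptions, and the DCA is approximated by the distance to the nearest sampled track point.

    ```python
    import numpy as np

    def classify_hits(hits, track_points, cutoff):
        """Assign hits to a turn if their distance of closest approach (DCA)
        to the extrapolated track is below `cutoff`.

        hits:         (N, 2) array of hit positions
        track_points: (M, 2) array of points sampled along the extrapolated track
        cutoff:       DCA threshold for accepting a hit
        """
        # Pairwise distances between every hit and every sampled track point;
        # the minimum over track points approximates each hit's DCA.
        d = np.linalg.norm(hits[:, None, :] - track_points[None, :, :], axis=2)
        dca = d.min(axis=1)
        return dca < cutoff

    # Toy example: a straight "track" along y = 0 (illustrative only)
    track = np.stack([np.linspace(0.0, 10.0, 101), np.zeros(101)], axis=1)
    hits = np.array([[1.0, 0.1], [5.0, -0.2], [7.0, 3.0]])
    mask = classify_hits(hits, track, cutoff=0.5)
    print(mask)  # the third hit lies too far from the track and is rejected
    ```

    In the paper's GPU version, the outer loop over seeds and the inner loop over hits are mapped onto two kernels, so a vectorized formulation like the distance matrix above parallelizes naturally.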

    Evaluating Portable Parallelization Strategies for Heterogeneous Architectures in High Energy Physics

    High-energy physics (HEP) experiments have developed millions of lines of code over decades that are optimized to run on traditional x86 CPU systems. However, we are seeing a rapidly increasing fraction of floating point computing power in leadership-class computing facilities and traditional data centers coming from new accelerator architectures, such as GPUs. HEP experiments are now faced with the untenable prospect of rewriting millions of lines of x86 CPU code for the increasingly dominant architectures found in these computational accelerators. This task is made more challenging by the architecture-specific languages and APIs promoted by manufacturers such as NVIDIA, Intel and AMD. Producing multiple, architecture-specific implementations is not a viable scenario, given the available person power and code maintenance issues. The Portable Parallelization Strategies team of the HEP Center for Computational Excellence is investigating the use of Kokkos, SYCL, OpenMP, std::execution::parallel and alpaka as potential portability solutions that promise to execute on multiple architectures from the same source code, using representative use cases from major HEP experiments, including the DUNE experiment of the Long Baseline Neutrino Facility, and the ATLAS and CMS experiments of the Large Hadron Collider. This cross-cutting evaluation of portability solutions using real applications will help inform and guide the HEP community when choosing their software and hardware suites for the next generation of experimental frameworks. We present the outcomes of our studies, including performance metrics, porting challenges, API evaluations, and build system integration. Comment: 18 pages, 9 figures, 2 tables

    HFSS Simulation on Cavity Coupling for Axion Detecting Experiment

    In the resonant cavity experiment, it is vital to maximize the signal power at the detector while minimizing the reflection from the source. The return loss is minimized when the impedances of the source and the cavity are matched to each other; this is called impedance matching. Establishing a tunable antenna on the source is required to achieve impedance matching. The geometry and position of the antenna vary depending on the electromagnetic field of the cavity. This research is dedicated to simulations to find such a proper design of the coupling antenna, especially for an axion dark matter detection experiment. The HFSS solver was used for the simulation.
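    The matching condition can be made concrete with the standard transmission-line relations (textbook formulas, not taken from this abstract): the voltage reflection coefficient Γ = (Z_load − Z_source)/(Z_load + Z_source) vanishes when the impedances match, and the return loss −20·log10|Γ| is then maximal.

    ```python
    import math

    def reflection_coefficient(z_load, z_source):
        """Voltage reflection coefficient at the source-cavity interface."""
        return (z_load - z_source) / (z_load + z_source)

    def return_loss_db(z_load, z_source):
        """Return loss in dB; larger values mean better matching."""
        gamma = abs(reflection_coefficient(z_load, z_source))
        return math.inf if gamma == 0.0 else -20.0 * math.log10(gamma)

    # Perfect match: no power reflected back to the source
    print(reflection_coefficient(50.0, 50.0))  # 0.0
    # Mismatch: 20% of the voltage amplitude is reflected
    print(round(abs(reflection_coefficient(75.0, 50.0)), 2))  # 0.2
    ```

    Tuning the antenna geometry and position in HFSS amounts to minimizing |Γ| at the cavity's resonant frequency.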

    Future experimental improvement for the search of lepton-number-violating processes in the eμ sector

    The conservation of lepton flavor and total lepton number are no longer guaranteed in the Standard Model after the discovery of neutrino oscillations. The μ−+N(A,Z)→e++N(A,Z−2) conversion in a muonic atom is one of the most promising channels to investigate the lepton number violation processes, and measurement of the μ−−e+ conversion is planned in future μ−−e− conversion experiments with a muonic atom in a muon-stopping target. This article discusses experimental strategies to maximize the sensitivity of the μ−−e+ conversion experiment by introducing the new requirement of the mass relation of M(A,Z−2)<M(A,Z−1), where M(A,Z) is the mass of the muon-stopping target nucleus, to eliminate the backgrounds from radiative muon capture. The sensitivity of the μ−−e+ conversion is expected to be improved by 4 orders of magnitude in forthcoming experiments using a proper target nucleus that satisfies the mass relation. The most promising isotopes found are 40Ca and 32S. © 2017 American Physical Society
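    The target-selection rule reduces to a simple predicate on the nuclear masses. The masses below are placeholder values for illustration only, not evaluated nuclear data:

    ```python
    def passes_mass_relation(m_az_minus_2, m_az_minus_1):
        """True if M(A, Z-2) < M(A, Z-1), the condition under which the
        radiative-muon-capture background is kinematically suppressed."""
        return m_az_minus_2 < m_az_minus_1

    # Hypothetical masses in arbitrary units (illustration only):
    # M(A, Z-2) = 39.95, M(A, Z-1) = 39.96
    print(passes_mass_relation(39.95, 39.96))  # True: target would be suitable
    ```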

    The derivation of Jacobian matrices for the propagation of track parameter uncertainties in the presence of magnetic fields and detector material

    In high-energy physics experiments, the trajectories of charged particles are reconstructed using track reconstruction algorithms. Such algorithms need both to identify the set of measurements from a single charged particle and to fit the parameters by propagating tracks along the measurements. The propagation of the track parameter uncertainties is an important component of the track fitting to obtain the optimal precision in the fitted parameters. The error propagation is performed at intersections between the track and local coordinate frames defined on detector components by calculating a Jacobian matrix corresponding to the local-to-local frame transport. This paper derives the Jacobian matrix in a general manner to harmonize with semi-analytical numerical integration methods developed for inhomogeneous magnetic fields and materials. The Jacobian and transported covariance matrices are validated by simulating the propagation of charged particles between two frames and comparing with the results of numerical methods.
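    Once the Jacobian J of the local-to-local transport is known, the covariance matrix C of the track parameters propagates by the standard linear error-propagation rule C' = J C Jᵀ. The 2×2 toy matrices below are illustrative; real track states are typically 5- or 6-dimensional:

    ```python
    import numpy as np

    def transport_covariance(C, J):
        """Propagate a track-parameter covariance matrix through a
        local-to-local frame transport with Jacobian J: C' = J C J^T."""
        return J @ C @ J.T

    C = np.diag([0.04, 0.01])          # initial variances (toy values)
    J = np.array([[1.0, 0.5],          # illustrative transport Jacobian
                  [0.0, 1.0]])
    C_new = transport_covariance(C, J)
    print(C_new)
    # The transported covariance must remain symmetric
    print(np.allclose(C_new, C_new.T))  # True
    ```

    The shear term in J illustrates how an off-diagonal Jacobian entry correlates previously independent parameters after transport.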

    Fast DAQ system with image rejection for axion dark matter searches

    A fast data acquisition (DAQ) system for axion dark matter searches utilizing a microwave resonant cavity, also known as axion haloscope searches, has been developed with a two-channel digitizer that can sample 16-bit amplitudes at rates up to 180 MSamples/s. First, we realized a practical DAQ efficiency of greater than 99% for a single DAQ channel, where the DAQ process includes the online fast Fourier transforms (FFTs). Using an IQ mixer and two parallel DAQ channels, we then also implemented a software-based image rejection without losing the DAQ efficiency. This work extends our continuing effort to improve the figure of merit in axion haloscope searches, the scanning rate.
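    The idea behind software image rejection can be illustrated with a minimal IQ example: the FFT of the complex baseband signal I + jQ distinguishes a tone at +f from its image at −f, which a single real channel cannot do. The tone frequency and sample rate below are toy parameters, not the DAQ's actual configuration:

    ```python
    import numpy as np

    fs, n = 1000.0, 1000          # sample rate (Hz) and number of samples
    t = np.arange(n) / fs
    f_sig = 100.0                 # tone placed at +100 Hz

    # An IQ mixer delivers two quadrature copies of the same tone
    i = np.cos(2 * np.pi * f_sig * t)
    q = np.sin(2 * np.pi * f_sig * t)

    # The complex FFT of I + jQ has a single peak at +f_sig; a real-only
    # channel would show identical peaks at +f_sig and -f_sig.
    spectrum = np.abs(np.fft.fft(i + 1j * q))
    freqs = np.fft.fftfreq(n, 1 / fs)

    peak = freqs[np.argmax(spectrum)]
    print(peak)                   # 100.0: the image at -100 Hz is rejected
    ```

    In the DAQ described above, this combination is done online on the two digitized channels, so the image band is suppressed without any loss of efficiency.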

    ACTS GPU Track Reconstruction Demonstrator for HEP

    In future HEP experiments, there will be a significant increase in the computing power required for track reconstruction due to the large data size. As track reconstruction is inherently parallelizable, heterogeneous computing with GPU hardware is expected to outperform conventional CPUs. To achieve better maintainability and high-quality track reconstruction, a host-device compatible event data model and tracking geometry are necessary. However, such a flexible design can be challenging because many GPU APIs restrict the usage of modern C++ features and also have complicated user interfaces. To overcome those issues, the ACTS community has launched several R&D projects: traccc as a GPU track reconstruction demonstrator, detray as a GPU geometry builder, and vecmem as a GPU memory management tool. The event data model of traccc is designed using the vecmem library, which provides an easy user interface for host and device memory allocation through C++ standard containers. For a realistic detector design, traccc utilizes the detray library, which applies compile-time polymorphism in its detector description. A detray detector can be shared between the host and the device, as the detector subcomponents are serialized in a vecmem-based container. Within traccc, tracking algorithms including hit clusterization and seed finding have been ported to multiple GPU APIs. In this presentation, we highlight the recent progress in traccc and present benchmarking results of the tracking algorithms.