205 research outputs found

    Energy-aware Successor Tree Consistent EDF Scheduling for PCTGs on MPSoCs

    Multiprocessor System-on-Chip (MPSoC) computing architectures are gaining popularity due to their high-performance capabilities and exceptional Quality-of-Service (QoS), making them a particularly well-suited computing platform for computationally intensive workloads and applications. Nonetheless, the scheduling and allocation of even a single task set with precedence constraints on MPSoCs remains a persistent research challenge when energy-efficient solutions are sought. The complexity of this scheduling problem escalates under conditional precedence constraints between tasks, which yield what is known as a Conditional Task Graph (CTG). Scheduling sets of Periodic Conditional Task Graphs (PCTGs) on MPSoC platforms poses even greater challenges. This paper tackles the scheduling of a set of PCTGs on MPSoCs equipped with shared memory. The primary goal is to minimize the overall expected energy consumption under two distinct power models: a dynamic and a static power model. To address this challenge, the paper introduces a novel scheduling method named Energy Efficient Successor Tree Consistent Earliest Deadline First (EESEDF). EESEDF is designed primarily to maximize the worst-case processor utilization. Once tasks are assigned to processors, it orders the tasks on each processor using the successor-tree-consistent earliest-deadline-first strategy. To minimize the overall expected energy consumption, EESEDF solves a convex Non-Linear Program (NLP) to determine the optimal speed for each task. Additionally, the paper presents a highly efficient online Dynamic Voltage Scaling (DVS) heuristic, which runs in O(1) time and adjusts task speeds at runtime. Compared to EESEDF alone, EESEDF+Online-DVS achieves average, maximum, and minimum improvements of 15%, 17%, and 12%, respectively.
Furthermore, in a second set of experiments, we compared EESEDF against the state-of-the-art techniques LESA and NCM. The results showed that EESEDF+Online-DVS outperformed these existing approaches, achieving notable energy-efficiency improvements of 25% and 20% over LESA and NCM, respectively. Our proposed scheduler, EESEDF+Online-DVS, also achieves significant gains over other existing methods: it outperforms IOETCS-Heuristic by approximately 13% and surpasses BESS and CAP-Online by margins of 25% and 35%, respectively.
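The successor-tree-consistent deadline idea described in this abstract can be sketched as follows: each task's deadline is tightened so that every successor can still meet its own deadline, and tasks on a processor are then dispatched in earliest-deadline-first order. This is a minimal illustrative sketch, not the paper's EESEDF implementation; the `Task` fields, the helper names, and the reverse-topological adjustment are assumptions inferred from the abstract.

```python
# Hedged sketch of successor-tree-consistent EDF ordering.
# All names and structures here are illustrative assumptions,
# not the EESEDF code from the paper.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    wcet: float                # worst-case execution time
    deadline: float            # original deadline, tightened in place
    succs: list = field(default_factory=list)  # successor Task objects

def topo_order(tasks):
    """Return tasks with every predecessor before its successors."""
    order, seen = [], set()
    def visit(t):
        if id(t) in seen:
            return
        seen.add(id(t))
        for s in t.succs:
            visit(s)
        order.append(t)
    for t in tasks:
        visit(t)
    return order[::-1]

def adjust_deadlines(tasks):
    """Tighten each deadline so every successor can still finish:
    a task must end by (successor deadline - successor WCET)."""
    for t in reversed(topo_order(tasks)):   # successors first
        for s in t.succs:
            t.deadline = min(t.deadline, s.deadline - s.wcet)
    return tasks

def edf_order(tasks):
    """Dispatch order on one processor: earliest adjusted deadline first."""
    return sorted(tasks, key=lambda t: t.deadline)
```

For example, a task with deadline 10 whose successor has deadline 8 and WCET 3 gets its deadline tightened to 5, which is what puts it ahead of the successor in the EDF order.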

    Sea quark effects in B_K from N_f=2 clover-improved Wilson fermions

    We report calculations of the parameter B_K appearing in the Delta S=2 neutral kaon mixing matrix element, whose uncertainty limits the power of unitarity triangle constraints for testing the standard model or looking for new physics. We use two flavours of dynamical clover-improved Wilson lattice fermions and look for dependence on the dynamical quark mass at fixed lattice spacing. We see some evidence for dynamical quark effects; in particular, B_K decreases as the sea quark masses are reduced towards the up/down quark mass. Comment: 17 pages, 4 figures, uses JHEP3.cls; added comments and reference.

    Detector Description and Performance for the First Coincidence Observations between LIGO and GEO

    For 17 days in August and September 2002, the LIGO and GEO interferometer gravitational wave detectors were operated in coincidence to produce their first data for scientific analysis. Although the detectors were still far from their design sensitivity levels, the data can be used to place better upper limits on the flux of gravitational waves incident on the earth than previous direct measurements. This paper describes the instruments and the data in some detail, as a companion to analysis papers based on the first data. Comment: 41 pages, 9 figures; 17 Sept 03: author list amended, minor editorial change.

    Search for flavour-changing neutral currents in processes with one top quark and a photon using 81 fb⁻¹ of pp collisions at √s = 13 TeV with the ATLAS experiment

    A search for flavour-changing neutral current (FCNC) events via the coupling of a top quark, a photon, and an up or charm quark is presented using 81 fb⁻¹ of proton–proton collision data taken at a centre-of-mass energy of 13 TeV with the ATLAS detector at the LHC. Events with a photon, an electron or muon, a b-tagged jet, and missing transverse momentum are selected. A neural network based on kinematic variables differentiates between events from signal and background processes. The data are consistent with the background-only hypothesis, and limits are set on the strength of the tqγ coupling in an effective field theory. These are also interpreted as 95% CL upper limits on the cross section for FCNC tγ production via a left-handed (right-handed) tuγ coupling of 36 fb (78 fb) and on the branching ratio for t→γu of 2.8×10⁻⁵ (6.1×10⁻⁵). In addition, they are interpreted as 95% CL upper limits on the cross section for FCNC tγ production via a left-handed (right-handed) tcγ coupling of 40 fb (33 fb) and on the branching ratio for t→γc of 22×10⁻⁵ (18×10⁻⁵). © 2019 The Author(s).

    Long-Baseline Neutrino Facility (LBNF) and Deep Underground Neutrino Experiment (DUNE) Conceptual Design Report Volume 2: The Physics Program for DUNE at LBNF

    The Physics Program for the Deep Underground Neutrino Experiment (DUNE) at the Fermilab Long-Baseline Neutrino Facility (LBNF) is described

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype
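The parallelization pattern this abstract describes, computing the induced current on each pixel independently, can be sketched as an embarrassingly parallel map. In the actual GPU chain each iteration would become one CUDA thread (e.g. the body of a Numba `@cuda.jit` kernel indexed by `cuda.grid(1)`); the pure-Python loop below shows the equivalent structure. The function names and the toy exponential current model are illustrative assumptions, not the experiment's simulator.

```python
# Hedged sketch of a per-pixel current computation as an
# embarrassingly parallel map. The toy response model and all
# names are assumptions; on a GPU, the list comprehension body
# would be one thread of a Numba @cuda.jit kernel.
import math

def induced_current(t, t0, tau):
    """Toy current response of one pixel to charge arriving at t0."""
    if t < t0:
        return 0.0                      # charge has not arrived yet
    return math.exp(-(t - t0) / tau)    # simple exponential decay

def simulate_pixels(arrival_times, t=1.0, tau=0.5):
    # Each pixel is independent of every other pixel: this is the
    # property that lets the GPU version run ~10^3 pixels at once.
    return [induced_current(t, t0, tau) for t0 in arrival_times]
```

Because no iteration reads another pixel's result, the loop maps directly onto one-thread-per-pixel execution, which is the source of the four-orders-of-magnitude speed-up quoted above.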

    The DUNE far detector vertical drift technology. Technical design report

    DUNE is an international experiment dedicated to addressing some of the questions at the forefront of particle physics and astrophysics, including the mystifying preponderance of matter over antimatter in the early universe. The dual-site experiment will employ an intense neutrino beam focused on a near and a far detector as it aims to determine the neutrino mass hierarchy and to make high-precision measurements of the PMNS matrix parameters, including the CP-violating phase. It will also stand ready to observe supernova neutrino bursts, and seeks to observe nucleon decay as a signature of a grand unified theory underlying the standard model. The DUNE far detector implements liquid argon time-projection chamber (LArTPC) technology, and combines the many tens-of-kiloton fiducial mass necessary for rare event searches with the sub-centimeter spatial resolution required to image those events with high precision. The addition of a photon detection system enhances physics capabilities for all DUNE physics drivers and opens prospects for further physics explorations. Given its size, the far detector will be implemented as a set of modules, with LArTPC designs that differ from one another as newer technologies arise. In the vertical drift LArTPC design, a horizontal cathode bisects the detector, creating two stacked drift volumes in which ionization charges drift towards anodes at either the top or bottom. The anodes are composed of perforated PCB layers with conductive strips, enabling reconstruction in 3D. Light-trap-style photon detection modules are placed both on the cryostat's side walls and on the central cathode where they are optically powered. This Technical Design Report describes in detail the technical implementations of each subsystem of this LArTPC that, together with the other far detector modules and the near detector, will enable DUNE to achieve its physics goals

    J/psi production as a function of charged-particle pseudorapidity density in p-Pb collisions at root s(NN)=5.02 TeV

    We report measurements of the inclusive J/ψ yield and average transverse momentum as a function of charged-particle pseudorapidity density dNch/dη in p–Pb collisions at √sNN = 5.02 TeV with ALICE at the LHC. The observables are normalised to their corresponding averages in non-single-diffractive events. An increase of the normalised J/ψ yield with normalised dNch/dη, measured at mid-rapidity, is observed at mid-rapidity and backward rapidity. At forward rapidity, a saturation of the relative yield is observed for high charged-particle multiplicities. The normalised average transverse momentum at forward and backward rapidities increases with multiplicity at low multiplicities and saturates beyond moderate multiplicities. In addition, the forward-to-backward nuclear modification factor ratio is reported, showing an increasing suppression of J/ψ production at forward rapidity with respect to backward rapidity for increasing charged-particle multiplicity.

    Software performance of the ATLAS track reconstruction for LHC run 3

    Charged-particle reconstruction in the presence of many simultaneous proton–proton (pp) collisions in the LHC is a challenging task for the ATLAS experiment's reconstruction software due to the combinatorial complexity. This paper describes the major changes made to adapt the software to reconstruct high-activity collisions with an average of 50 or more simultaneous pp interactions per bunch crossing (pile-up) promptly using the available computing resources. The performance of the key components of the track reconstruction chain and its dependence on pile-up are evaluated, and the improvement achieved compared to the previous software version is quantified. For events with an average of 60 pp collisions per bunch crossing, the updated track reconstruction is twice as fast as the previous version, without significant reduction in reconstruction efficiency and while reducing the rate of combinatorial fake tracks by more than a factor of two.