32 research outputs found

    Explorations of the viability of ARM and Xeon Phi for physics processing

    We report on our investigations into the viability of the ARM processor and the Intel Xeon Phi co-processor for scientific computing. We describe our experience porting software to these processors and running benchmarks using real physics applications to explore the potential of these processors for production physics processing. Comment: Submitted to proceedings of the 20th International Conference on Computing in High Energy and Nuclear Physics (CHEP13), Amsterdam.
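
    As an illustration of the porting-and-benchmarking workflow described above, here is a minimal, hypothetical timing harness; physics_kernel is a stand-in of our own invention, not code from the study, and real benchmarks would run actual experiment software.

    ```python
    import time

    def physics_kernel(n):
        # Hypothetical stand-in for a real physics workload
        # (e.g., a track-fit inner loop); not from the paper.
        s = 0.0
        for i in range(1, n + 1):
            s += 1.0 / (i * i)
        return s

    def benchmark(func, arg, repeats=5):
        # Best-of-N wall-clock timing; identical source runs unmodified
        # on ARM, Xeon Phi (host side) and x86 for a like-for-like check.
        times = []
        for _ in range(repeats):
            t0 = time.perf_counter()
            func(arg)
            times.append(time.perf_counter() - t0)
        return min(times)

    if __name__ == "__main__":
        print(f"best time: {benchmark(physics_kernel, 1_000_000):.4f} s")
    ```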

    A study of CP violation in $B^\pm \to DK^\pm$ and $B^\pm \to D\pi^\pm$ decays with $D \to K^0_{\rm S} K^\pm \pi^\mp$ final states

    A first study of CP violation in the decay modes $B^\pm\to [K^0_{\rm S} K^\pm \pi^\mp]_D h^\pm$ and $B^\pm\to [K^0_{\rm S} K^\mp \pi^\pm]_D h^\pm$, where $h$ labels a $K$ or $\pi$ meson and $D$ labels a $D^0$ or $\overline{D}^0$ meson, is performed. The analysis uses the LHCb data set collected in $pp$ collisions, corresponding to an integrated luminosity of 3 fb$^{-1}$. The analysis is sensitive to the CP-violating CKM phase $\gamma$ through seven observables: one charge asymmetry in each of the four modes and three ratios of the charge-integrated yields. The results are consistent with measurements of $\gamma$ using other decay modes.
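
    For context, a per-mode charge asymmetry of the kind counted among the seven observables is conventionally defined as (the standard definition, written schematically; it is not quoted in the abstract):

    A_h^{f} = \frac{\Gamma(B^-\to [f]_D\, h^-) - \Gamma(B^+\to [\bar{f}]_D\, h^+)}{\Gamma(B^-\to [f]_D\, h^-) + \Gamma(B^+\to [\bar{f}]_D\, h^+)}, \qquad f = K^0_{\rm S} K^\pm \pi^\mp .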

    Measurement of $\Upsilon$ production in $pp$ collisions at $\sqrt{s} = 2.76$ TeV

    The production of $\Upsilon(1S)$, $\Upsilon(2S)$ and $\Upsilon(3S)$ mesons decaying into the dimuon final state is studied with the LHCb detector using a data sample corresponding to an integrated luminosity of 3.3 pb$^{-1}$ collected in proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=2.76$ TeV. The differential production cross-sections times dimuon branching fractions are measured as functions of the $\Upsilon$ transverse momentum and rapidity. Integrated over the measured ranges, the products of cross-sections and branching fractions are $\sigma(pp \to \Upsilon(1S)\,X) \times {\cal B}(\Upsilon(1S) \to \mu^+\mu^-) = 1.111 \pm 0.043 \pm 0.044$ nb, $\sigma(pp \to \Upsilon(2S)\,X) \times {\cal B}(\Upsilon(2S) \to \mu^+\mu^-) = 0.264 \pm 0.023 \pm 0.011$ nb, and $\sigma(pp \to \Upsilon(3S)\,X) \times {\cal B}(\Upsilon(3S) \to \mu^+\mu^-) = 0.159 \pm 0.020 \pm 0.007$ nb, where the first uncertainty is statistical and the second systematic.
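
    As a quick consistency check, one can form ratios of the quoted values; this is simple arithmetic on the numbers above, with the assumption (ours, not the paper's) that statistical and systematic uncertainties are uncorrelated and combine in quadrature:

    ```python
    import math

    # Quoted sigma x B values in nb: (value, stat, syst)
    y1s = (1.111, 0.043, 0.044)
    y2s = (0.264, 0.023, 0.011)
    y3s = (0.159, 0.020, 0.007)

    def ratio(num, den):
        # Combine stat and syst in quadrature and propagate to the
        # ratio, assuming uncorrelated uncertainties.
        r = num[0] / den[0]
        rel = math.sqrt(sum((e / v) ** 2
                            for v, e in ((num[0], math.hypot(num[1], num[2])),
                                         (den[0], math.hypot(den[1], den[2])))))
        return r, r * rel

    for label, state in (("2S/1S", y2s), ("3S/1S", y3s)):
        r, dr = ratio(state, y1s)
        print(f"R({label}) = {r:.3f} +/- {dr:.3f}")  # e.g. R(2S/1S) ~ 0.238
    ```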

    Trigger and Data Acquisition System (TriDAS) for KM3NeT-Italy

    <p>TriDAS is a software which implements the Trigger and Data Acquisition system for the KM3NeT- Italy underwater neutrino telescope. The detector is based on ”all data to shore” approach in order to reduce the complexity of the submarine hardware. At the shore station the TriDAS collects, processes and filters all the data coming from the detector, storing triggered events to a permanent storage for subsequent analysis.</p

    Large-scale DAQ tests for the LHCb upgrade

    The Data Acquisition (DAQ) system of the LHCb experiment [1] will be upgraded in 2020 to a high-bandwidth trigger-less readout system. In the new DAQ, event fragments will be forwarded to the Event Builder (EB) computing farm at 40 MHz. The front-end boards will therefore be connected directly to the EB farm through optical links and PCI Express based interface cards. The EB is required to provide a total network capacity of 32 Tb/s, exploiting about 500 nodes. In order to reach the required network capacity, we are testing various technologies and network protocols on large-scale clusters. For this purpose we developed an Event Builder implementation designed for an InfiniBand interconnect infrastructure. We present the results of throughput and scalability measurements performed on HPC-scale facilities.
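
    The headline figures imply a per-node budget that is easy to check with simple arithmetic on the numbers quoted above:

    ```python
    # Back-of-envelope check on the quoted DAQ figures.
    total_bandwidth_tbps = 32        # total EB network capacity, Tb/s
    nodes = 500                      # EB farm size
    event_rate_mhz = 40              # fragment/bunch-crossing rate

    per_node_gbps = total_bandwidth_tbps * 1000 / nodes
    event_size_kb = total_bandwidth_tbps * 1e12 / (event_rate_mhz * 1e6) / 8 / 1000

    print(f"per-node bandwidth: {per_node_gbps:.0f} Gb/s")   # -> 64 Gb/s
    print(f"average event size: {event_size_kb:.0f} kB")     # -> 100 kB
    ```

    The roughly 64 Gb/s per node, sustained in both directions given the all-to-all event-building traffic, is what motivates 100 Gb/s-class fabrics such as InfiniBand; that last inference is ours, not a statement from the abstract.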

    Implementation of a pilot Cloud infrastructure based on OpenStack

    This contribution is part of the Marche Cloud project, which foresees the development of a cloud infrastructure based on open-source software for the Regione Marche. We present the implementation of an IaaS (Infrastructure as a Service) prototype of this infrastructure, initially installed at CNAF and subsequently moved to the Regione Marche data centre in Ancona. The infrastructure is based on the OpenStack software, installed and configured in its identity service, image repository, compute node, object storage and dashboard components. The project supports several operating systems and image formats for the VMs. The GlusterFS distributed file system was used to enable the live-migration functionality and to obtain redundancy, performance and high availability for some components of the infrastructure itself. A flexible monitoring and alerting system was developed by exploiting the integration of external frameworks into OpenStack, specifically Ganglia and Nagios.

    FLIT-level InfiniBand network simulations of the DAQ system of the LHCb experiment for Run-3

    The Large Hadron Collider beauty (LHCb) experiment is designed to study the differences between particles and antiparticles as well as very rare decays in the charm and beauty sector at the Large Hadron Collider (LHC). The detector will be upgraded in 2019, and a new trigger-less readout system will be implemented in order to significantly increase its efficiency and fully take advantage of the machine luminosity provided at the LHCb collision point. In the upgraded system, both event building and event filtering will be performed in software for all the data produced in every bunch-crossing of the LHC. In order to transport the full data rate of 32 Tb/s, we will use custom field-programmable gate array (FPGA) readout boards (PCIe40) and state-of-the-art off-the-shelf network technologies. The full event-building system will require around 500 servers interconnected together. From a networking point of view, event-building traffic has an all-to-all pattern, requiring careful design of the network architecture to avoid congestion at the data rates foreseen. In order to maximize link utilization, different techniques can be adopted in areas such as traffic shaping, network topology and routing optimization. The size of the system makes it very difficult to test at production scale before the actual procurement. We therefore resort to network simulations as a powerful tool for finding the optimal configuration. We present an accurate low-level description of an InfiniBand-based network with event-building-like traffic, show a comparison between simulated and reduced-scale real systems, and discuss how changes in the input parameters affect the performance.
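
    One classic traffic-shaping technique for all-to-all event-building traffic is a "barrel shift" schedule: in phase k, node i sends only to node (i + k) mod N, so no destination is ever oversubscribed. The sketch below illustrates the general technique; it is our illustration, not code from the simulation described above.

    ```python
    def barrel_shift_schedule(n_nodes):
        """Return a list of phases; each phase maps sender -> receiver.

        In phase k, node i sends its fragment to node (i + k) % n_nodes,
        so every node sends exactly once and receives exactly once per
        phase: the all-to-all exchange completes in n_nodes phases with
        no receiver ever contended.
        """
        return [{i: (i + k) % n_nodes for i in range(n_nodes)}
                for k in range(n_nodes)]

    # Example with 4 nodes: 4 phases, each a perfect matching.
    for k, phase in enumerate(barrel_shift_schedule(4)):
        print(f"phase {k}: {phase}")
    ```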

    SuperB production system for simulated events

    The SuperB asymmetric $e^+e^-$ collider and detector, to be built at the newly founded Nicola Cabibbo Lab, will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab$^{-1}$ and a peak luminosity of $10^{36}$ cm$^{-2}$ s$^{-1}$. © 2012 IEEE
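
    The two quoted figures fix the minimum running time, a useful sanity check; this arithmetic is ours, not stated in the abstract:

    ```python
    # 1 ab^-1 = 1e42 cm^-2  (1 barn = 1e-24 cm^2, atto = 1e-18)
    target_lumi_cm2 = 75 * 1e42          # 75 ab^-1
    peak_rate_cm2_s = 1e36               # 1e36 cm^-2 s^-1

    seconds = target_lumi_cm2 / peak_rate_cm2_s
    years_at_peak = seconds / 3.15e7     # ~3.15e7 s per year
    print(f"{years_at_peak:.1f} years at peak luminosity")  # -> ~2.4 years
    ```

    With realistic duty cycles and luminosity ramp-up, this corresponds to several calendar years of data taking.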

    SuperB Simulation Production System

    The SuperB asymmetric-energy $e^+e^-$ collider and detector, to be built at the newly founded Nicola Cabibbo Lab, will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab$^{-1}$ and a luminosity target of $10^{36}$ cm$^{-2}$ s$^{-1}$. Since 2009 the SuperB Computing group has been working on a simulation production framework capable of satisfying the experiment's needs. It provides access to distributed resources in order to support both the definition of the detector design and its performance evaluation studies. During the last year the framework has evolved in terms of job workflow, Grid service interfaces and technology adoption. A complete code refactoring and the porting of sub-components to new languages now allow the framework to sustain distributed productions involving resources from three continents and multiple Grid flavors. In this paper we give a complete description of the current status of the production system, its evolution and its integration with Grid services; in particular, we focus on the use of new features of Grid components such as LB and WMS version 3. The last official SuperB production cycle has been completed; results and summaries will be reported.