    The use of the tau in new particle searches at DELPHI

    Several new particle searches have been performed in the DELPHI experiment involving tau leptons in the final state. The topology and special characteristics of the tau leptons have been used to discriminate the signal from the Standard Model background. Limits on new particles have been set, with the channels containing tau leptons playing an important role.
    Comment: Invited talk at the Seventh International Workshop on Tau Lepton Physics (TAU02), Santa Cruz, CA, USA, Sept 2002; 10 pages, LaTeX, 9 EPS figures

    Measurement of the forward-backward asymmetry in low-mass bottom-quark pairs produced in proton-antiproton collisions

    We report a measurement of the forward-backward asymmetry, $A_{FB}$, in $b\bar{b}$ pairs produced in proton-antiproton collisions and identified by muons from semileptonic b-hadron decays. The event sample is collected at a center-of-mass energy of $\sqrt{s} = 1.96$ TeV with the CDF II detector and corresponds to 6.9 fb$^{-1}$ of integrated luminosity. We obtain an integrated asymmetry of $A_{FB}(b\bar{b}) = (1.2 \pm 0.7)\%$ at the particle level for b-quark pairs with invariant mass, $m_{b\bar{b}}$, down to 40 GeV/$c^2$, and measure the dependence of $A_{FB}(b\bar{b})$ on $m_{b\bar{b}}$. The results are compatible with expectations from the standard model.
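    As a worked illustration of the quantity being measured: the integrated asymmetry is $A_{FB} = (N_F - N_B)/(N_F + N_B)$, where events are classified as forward or backward, e.g. by the rapidity difference between the b and the anti-b quark. The Python sketch below is a toy illustration only; the event sample and the choice of ordering variable are assumptions, not the CDF analysis code.

        import numpy as np

        def afb(delta_y):
            """A_FB = (N_F - N_B) / (N_F + N_B): an event is 'forward' when
            the ordering variable (here a rapidity difference) is positive."""
            n_f = np.count_nonzero(delta_y > 0)
            n_b = np.count_nonzero(delta_y < 0)
            return (n_f - n_b) / (n_f + n_b)

        # Toy sample: a small forward shift produces a percent-level asymmetry.
        rng = np.random.default_rng(0)
        toy_delta_y = rng.normal(loc=0.02, scale=1.0, size=1_000_000)
        print(f"A_FB = {afb(toy_delta_y):.2%}")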

    File-based data flow in the CMS Filter Farm

    During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small "documents" using the JSON encoding, either by services in the flow of the HLT execution (for rates etc.) or by watchdog processes. These "files" can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.
    National Science Foundation (U.S.); United States. Department of Energy
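    To make the bookkeeping idea concrete, here is a hedged sketch of how a service in the HLT flow might drop a small JSON document per lumisection. The field names, file naming and atomic-rename convention are illustrative assumptions, not the actual CMS DAQ schema.

        import json, os, tempfile

        def write_bookkeeping_doc(run, lumisection, events_in, events_out, outdir):
            doc = {
                "run": run,
                "lumisection": lumisection,
                "events_in": events_in,    # events read in this lumisection
                "events_out": events_out,  # events accepted by the HLT
            }
            # Write atomically: dump to a temp file, then rename, so a watchdog
            # or merger process never observes a half-written document.
            fd, tmp = tempfile.mkstemp(dir=outdir, suffix=".tmp")
            with os.fdopen(fd, "w") as f:
                json.dump(doc, f)
            final = os.path.join(outdir, f"run{run}_ls{lumisection:04d}.jsn")
            os.rename(tmp, final)
            return final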

    Online data handling and storage at the CMS experiment

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ~62 sources produced with an aggregate rate of ~2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices for the three components of the STS: the distributed file system, the merger service and the transfer system.
    United States. Department of Energy; National Science Foundation (U.S.)
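    A hedged sketch of the merging step described above: once every source has produced its per-lumisection data file and accompanying JSON document, the merger concatenates the data into a single output file. The source count is taken from the text; the naming scheme, the "data_file" field and the completeness check are invented for illustration only.

        import glob, json, os, shutil

        N_SOURCES = 62  # approximate number of HLT sources quoted in the text

        def try_merge(indir, run, ls, outdir):
            # One JSON document per source signals that its data file is complete.
            docs = sorted(glob.glob(os.path.join(indir, f"run{run}_ls{ls:04d}_src*.jsn")))
            if len(docs) < N_SOURCES:
                return None  # not all sources have reported yet
            out = os.path.join(outdir, f"run{run}_ls{ls:04d}_merged.dat")
            with open(out, "wb") as dst:
                for doc in docs:
                    with open(doc) as f:
                        data_file = json.load(f)["data_file"]
                    with open(data_file, "rb") as src:
                        shutil.copyfileobj(src, dst)  # stream; avoid loading files into RAM
            return out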

    Conceptual, preliminary and detailed design of the recoverable solid-propellant sounding rocket "ARISTARCO I"

    This document presents the conceptual, preliminary and detailed design of a solid-propellant sounding rocket, named ARISTARCO I, whose primary mission is to obtain atmospheric data (pressure and temperature) above one kilometer of altitude; in addition, with the systems developed, the rocket can also measure the vehicle's altitude, time, acceleration and inclination. First, the rocket's components are conceptualized and, using qualitative criteria, the type of each component to be implemented in the vehicle is determined. The preliminary design follows, in which the components that make up ARISTARCO I are sized and evaluated. In this way, parameters fundamental to the development of the rocket's mission are determined quantitatively.

    CMS computing operations during Run 1

    During the first run, CMS collected and processed more than 10 billion data events and simulated more than 15 billion events. Up to 100k processor cores were used simultaneously and 100 PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, and we discuss the tools and services developed and the operations and shift models used to sustain the system. Many procedures followed the original computing plan, but some evolved as reactions to difficulties and opportunities. We also address the lessons learned from an operational perspective, and how they are shaping our thoughts for 2015.

    The new CMS DAQ system for LHC operation after 2014 (DAQ2)

    The Data Acquisition system of the Compact Muon Solenoid experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GByte/s. We present the design of the second-generation DAQ system, including studies of the event builder based on advanced networking technologies such as 10 and 40 Gbit/s Ethernet and 56 Gbit/s FDR Infiniband, and the exploitation of multicore CPU architectures. By the time the LHC restarts after the 2013/14 shutdown, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime. In order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The second-generation DAQ system, foreseen for 2014, will need to accommodate the readout of both existing and new off-detector electronics and provide an increased throughput capacity. Advances in storage technology could make it feasible to write the output of the event builder to (RAM or SSD) disks and implement the HLT processing entirely file-based.
    United States. Dept. of Energy; National Science Foundation (U.S.); Marie Curie International Fellowship
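    A quick back-of-the-envelope consistency check of the figures quoted above (not part of the paper): a 100 kHz event rate at 100 GByte/s aggregate throughput implies an average event size of roughly 1 MByte.

        # 100 GByte/s at a 100 kHz event rate ~ 1 MByte per event on average.
        rate_hz = 100e3          # level-1 accept rate
        throughput_Bps = 100e9   # event-builder throughput, bytes/s
        print(f"average event size = {throughput_Bps / rate_hz / 1e6:.1f} MB")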

    10 Gbps TCP/IP streams from the FPGA for High Energy Physics

    The DAQ system of the CMS experiment at CERN collects data from more than 600 custom detector Front-End Drivers (FEDs). During 2013 and 2014 the CMS DAQ system will undergo a major upgrade to address the obsolescence of current hardware and the requirements posed by the upgrade of the LHC accelerator and various detector components. For lossless data collection from the FEDs, a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. To limit the complexity of the TCP hardware implementation, the DAQ group developed a simplified, unidirectional, but RFC 793-compliant version of the TCP protocol. This allows a PC with the standard Linux TCP/IP stack to be used as a receiver. We present the challenges and the protocol modifications made to TCP in order to simplify its FPGA implementation. We also describe the interaction between the simplified TCP and the Linux TCP/IP stack, including performance measurements.
    United States. Dept. of Energy; National Science Foundation (U.S.); Marie Curie International Fellowship
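    Because the FPGA-side TCP is simplified but RFC 793 compliant and the stream is unidirectional, the receiving end can be an ordinary program on the stock Linux TCP/IP stack. Below is a minimal, hypothetical receiver sketch; the port number and buffer size are arbitrary choices, not taken from the paper.

        import socket

        def receive_stream(port=10000, bufsize=1 << 20):
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", port))
            srv.listen(1)
            conn, _ = srv.accept()          # the FPGA sender connects here
            total = 0
            while True:
                chunk = conn.recv(bufsize)  # plain blocking reads suffice:
                if not chunk:               # the stream is one-way, sender to PC
                    break
                total += len(chunk)
            conn.close()
            srv.close()
            return total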