8,802 research outputs found

    The use of the tau in new particle searches at DELPHI

    Several new particle searches have been performed in the DELPHI experiment involving tau leptons in the final state. The topology and special characteristics of the tau leptons have been used to discriminate the signal from the Standard Model background. Limits on new particles have been set, with the channels containing tau leptons playing an important role.
    Comment: Invited talk at the Seventh International Workshop on Tau Lepton Physics (TAU02), Santa Cruz, CA, USA, Sept 2002; 10 pages, LaTeX, 9 eps figures

    Measurement of the forward-backward asymmetry in low-mass bottom-quark pairs produced in proton-antiproton collisions

    We report a measurement of the forward-backward asymmetry, A_FB, in bb̄ pairs produced in proton-antiproton collisions and identified by muons from semileptonic b-hadron decays. The event sample is collected at a center-of-mass energy of √s = 1.96 TeV with the CDF II detector and corresponds to 6.9 fb⁻¹ of integrated luminosity. We obtain an integrated asymmetry of A_FB(bb̄) = (1.2 ± 0.7)% at the particle level for b-quark pairs with invariant mass, m_bb̄, down to 40 GeV/c², and measure the dependence of A_FB(bb̄) on m_bb̄. The results are compatible with expectations from the standard model.
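    For reference, the asymmetry quoted above is conventionally built from forward and backward event counts; a minimal statement of the usual definition via the rapidity difference of the quark pair (this convention is standard for Tevatron bb̄ measurements, though the abstract itself does not spell it out):

    ```latex
    % With \Delta y = y_b - y_{\bar b} the rapidity difference of the b and
    % anti-b quarks, the forward-backward asymmetry is usually defined as
    A_{FB} = \frac{N(\Delta y > 0) - N(\Delta y < 0)}{N(\Delta y > 0) + N(\Delta y < 0)}
    ```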

    File-based data flow in the CMS Filter Farm

    During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms, and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small "documents" using the JSON encoding, either by services in the flow of the HLT execution (for rates etc.) or by watchdog processes. These "files" can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.
    National Science Foundation (U.S.); United States. Department of Energy
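    To illustrate the bookkeeping scheme described above, here is a minimal Python sketch of a service emitting one small JSON "document" per luminosity section; the file-naming pattern, field names, and atomic-rename convention are our own assumptions for illustration, not taken from the CMS code:

    ```python
    import json
    import os
    import tempfile

    def write_metadata_document(out_dir, run, lumisection, events, rate_hz):
        """Write a small JSON bookkeeping document, in the spirit of the
        file-based dataflow described above. All names are illustrative."""
        doc = {"run": run, "ls": lumisection, "events": events, "rate_hz": rate_hz}
        # Write atomically: fill a temp file, then rename, so that watching
        # services (e.g. an output aggregator) never see a partial document.
        fd, tmp_path = tempfile.mkstemp(dir=out_dir, suffix=".tmp")
        with os.fdopen(fd, "w") as f:
            json.dump(doc, f)
        final_path = os.path.join(out_dir, f"run{run:06d}_ls{lumisection:04d}.jsn")
        os.rename(tmp_path, final_path)
        return final_path
    ```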

    Online data handling and storage at the CMS experiment

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms, and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily, and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT, produced by ~62 sources at an aggregate rate of ~2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices for the three components of the STS: the distributed file system, the merger service, and the transfer system.
    United States. Department of Energy; National Science Foundation (U.S.)
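    A back-of-the-envelope check shows how the quoted figures fit together; the duty-cycle assumption below is ours, not from the abstract:

    ```python
    # Rough consistency check of the STS numbers quoted above.
    write_rate_gb_s = 2.0      # HLT output aggregated by the merger (~2 GB/s)
    seconds_per_day = 86_400

    tb_per_day = write_rate_gb_s * seconds_per_day / 1_000  # ~173 TB/day at 100% duty
    # The LHC does not deliver collisions around the clock; assuming an
    # illustrative ~30% stable-beams duty cycle, 250 TB buys roughly:
    days_buffered = 250 / (tb_per_day * 0.3)
    print(f"{tb_per_day:.0f} TB/day at full duty; ~{days_buffered:.1f} days at 30% duty")
    ```

    The ~7 GB/s concurrent read/write figure is then plausible as the sum of several simultaneous streams over the same disks: the HLT writing, the merger reading and rewriting, and the transfer system reading merged output.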

    Conceptual, preliminary, and detailed design of the recoverable sounding rocket "ARISTARCO I" propelled by solid propellant

    This document presents the conceptual, preliminary, and detailed design of a solid-propellant sounding rocket, named ARISTARCO I, whose primary mission is to obtain atmospheric data (pressure and temperature) above one kilometer of altitude; in addition, with the systems developed, the rocket can also measure the vehicle's altitude, time, acceleration, and inclination. First, the rocket's components are conceptualized and, through qualitative variables, the type of each component to be implemented in the vehicle is determined. Subsequently, the preliminary design is carried out, in which the components making up ARISTARCO I are sized and evaluated. In this way, parameters fundamental to the development of the rocket's mission are known quantitatively.
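    Since the mission pairs onboard pressure readings with altitude, the standard tool for relating the two is the ISA barometric formula; a minimal sketch (the function name is ours, the constants are the usual International Standard Atmosphere values for the troposphere):

    ```python
    def pressure_altitude_m(p_pa, p0_pa=101_325.0, t0_k=288.15, lapse_k_per_m=0.0065):
        """Altitude from static pressure via the ISA barometric formula for
        the troposphere: h = (T0/L) * (1 - (p/p0)^(R*L/(g*M)))."""
        g = 9.80665        # gravitational acceleration, m/s^2
        r = 8.31446        # universal gas constant, J/(mol*K)
        m = 0.0289644      # molar mass of dry air, kg/mol
        exponent = r * lapse_k_per_m / (g * m)
        return (t0_k / lapse_k_per_m) * (1.0 - (p_pa / p0_pa) ** exponent)

    # Example: the ~1 km target altitude corresponds to roughly 89.9 kPa.
    print(f"{pressure_altitude_m(89_875.0):.0f} m")
    ```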

    CMS computing operations during run 1

    During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100 PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, and we discuss the tools and services developed, as well as the operations and shift models used to sustain the system. Many techniques followed the original computing plan, but some emerged as reactions to difficulties and opportunities. We also address the lessons learned from an operational perspective, and how these are shaping our thoughts for 2015.

    Binge Drinking in Young University Students Is Associated with Alterations in Executive Functions Related to Their Starting Age

    Our aim was to evaluate whether or not alcohol consumption in the form of binge drinking is associated with alterations of memory and executive functions in a population of university students. At the same time, we have studied the role of potential modulating factors, such as the APOE genotype or physical exercise. University students enrolled in academic year 2013-2014 at Escuelas Universitarias Gimbernat-Cantabria, affiliated with the University of Cantabria, were invited to participate in the study. We gathered sociodemographic data and details regarding the lifestyle of 206 students (mean age 19.55 ± 2.39; 67.5% women). We evaluated memory and executive functions via a series of validated cognitive tests. Participants were classified as binge drinkers (BD) and non-BD. Using Student's t-distribution we studied the association between cognitive test results and BD patterns. Multivariate analyses were carried out via multiple linear regression. 47.6% of the students were found to be BD. The BD differed significantly from the non-BD in their results on the executive functions test TMT B (43.41 ± 13.30 vs 37.40 ± 9.77; p = 0.0003). Adjusting for age, sex, academic record, age at which they started consuming alcohol, cannabis consumption, level of physical activity, and other possible modifying variables, the association remained statistically significant (p = 0.009). We noticed a statistically significant inverse correlation (Pearson's r = -0.192; p = 0.007) between TMT B and the starting age of alcohol consumption. Differences were also observed in another executive functions test, TMT A, but only in the group of women (19.73 ± 6.1 BD vs 17.78 ± 5.4 non-BD; p = 0.05). In spite of the young age of our participants, BD was associated with lower performance on the executive functions test (TMT B). These deficits were related to the age at which they started drinking alcohol, suggesting a cumulative effect.
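    The two statistical comparisons described above (a group t-test on TMT B scores and a Pearson correlation with starting age) can be reproduced on any such dataset with standard tools; a minimal sketch using scipy, with random stand-in data and variable names of our choosing:

    ```python
    import numpy as np
    from scipy import stats

    # Illustrative random data standing in for the study's measurements:
    # TMT B completion times for the two groups, and age of first alcohol use.
    rng = np.random.default_rng(0)
    tmt_b_bd = rng.normal(43.4, 13.3, 98)    # binge drinkers (47.6% of 206)
    tmt_b_non = rng.normal(37.4, 9.8, 108)   # non binge drinkers
    start_age = rng.normal(15.5, 1.5, 98)    # starting age, BD group

    # Two-sample t-test between BD and non-BD groups, as in the abstract.
    t, p = stats.ttest_ind(tmt_b_bd, tmt_b_non, equal_var=False)

    # Pearson correlation between TMT B time and starting age.
    r, p_r = stats.pearsonr(tmt_b_bd, start_age)
    print(f"t={t:.2f}, p={p:.4f}; r={r:.3f}, p={p_r:.3f}")
    ```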

    The new CMS DAQ system for LHC operation after 2014 (DAQ2)

    The Data Acquisition system of the Compact Muon Solenoid experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GByte/s. We present the design of the second-generation DAQ system, including studies of the event builder based on advanced networking technologies such as 10 and 40 Gbit/s Ethernet and 56 Gbit/s FDR Infiniband, and the exploitation of multicore CPU architectures. By the time the LHC restarts after the 2013/14 shutdown, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime. In order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The second-generation DAQ system, foreseen for 2014, will need to accommodate the readout of both existing and new off-detector electronics and provide an increased throughput capacity. Advances in storage technology could make it feasible to write the output of the event builder to (RAM or SSD) disks and implement the HLT processing entirely file-based.
    United States. Dept. of Energy; National Science Foundation (U.S.); Marie Curie International Fellowship
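    Taken together, the two headline numbers imply an average assembled event size of about 1 MB; a one-line check:

    ```python
    # Implied average event size from the quoted rate and throughput.
    rate_hz = 100_000           # 100 kHz Level-1 accept rate
    throughput_b_s = 100e9      # 100 GByte/s aggregate event-builder throughput
    print(f"{throughput_b_s / rate_hz / 1e6:.1f} MB/event")   # -> 1.0 MB/event
    ```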