
    B flavour tagging with leptons and measurement of the CP violation phase phi_s in the B_s^0 -> J/psi phi decay at the CMS experiment

    This thesis presents the development and optimization of an algorithm that determines the flavour at production time of neutral B^0 and B_s^0 mesons. The flavour tagging algorithm exploits muons and electrons produced in the semileptonic decay of the additional b hadron produced in pp -> bbX collisions at the LHC; the charge of the lepton is used to infer the flavour of the neutral B meson. Three simulated samples of B_s^0 -> J/psi phi, B^+ -> J/psi K^+ and B^0 -> J/psi K* decays are used to develop and test the algorithm. Two independent neural networks, one for muons and one for electrons, are trained on 24 000 and 20 400 simulated B_s^0 -> J/psi phi events respectively to parametrize the probability omega of a wrong flavour tag. The tagging performance is then measured and calibrated on a sample of self-tagging B^+ -> J/psi K^+ decays collected by the CMS experiment during 2012, corresponding to 20 fb^-1. The charge of the kaon unambiguously determines the flavour of the B^+ at production time and allows a direct measurement of the mistag probability. A tagging power P_tag = epsilon_tag (1 - 2 omega)^2 of 0.833 +/- 0.024 (stat.) +/- 0.012 (syst.) % is measured using muons and 0.483 +/- 0.020 (stat.) +/- 0.003 (syst.) % using electrons; combining the two algorithms yields an overall tagging power of 1.307 +/- 0.031 (stat.) +/- 0.007 (syst.) %. The combined lepton flavour tagging algorithm is used in the measurement of the charge-parity (CP) violation parameters phi_s and DeltaGamma_s, which are sensitive to potential new physics processes not included in the standard model description. A time-dependent, flavour-tagged, full angular analysis of the mu^+ mu^- K^+ K^- final state of the B_s^0 -> J/psi phi decay is performed on the 2012 CMS dataset. A total of 49 000 reconstructed B_s^0 decays are used to extract the weak phase phi_s and the decay width difference DeltaGamma_s: phi_s = -0.075 +/- 0.097 (stat.) +/- 0.031 (syst.) rad and DeltaGamma_s = 0.095 +/- 0.013 (stat.) +/- 0.007 (syst.) ps^-1.
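    The tagging power quoted above is the figure of merit P_tag = epsilon_tag (1 - 2 omega)^2, combining the fraction of events the tagger classifies (epsilon_tag) with its mistag probability (omega). The sketch below simply evaluates this formula; the epsilon_tag and omega values in it are hypothetical placeholders, not the thesis measurements.

```python
# Minimal sketch of the tagging-power figure of merit quoted above.
# The epsilon_tag and omega values below are hypothetical placeholders.

def tagging_power(epsilon_tag: float, omega: float) -> float:
    """P_tag = epsilon_tag * (1 - 2*omega)**2: the effective fraction of
    events that contribute statistical power to a flavour-tagged fit."""
    return epsilon_tag * (1.0 - 2.0 * omega) ** 2

# Example: a tagger that fires on 4% of events with a 30% mistag rate.
eps, w = 0.04, 0.30
print(f"P_tag = {100.0 * tagging_power(eps, w):.3f} %")  # -> P_tag = 0.640 %
```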

    A fast and flexible machine learning approach to data quality monitoring

    We present a machine-learning-based approach for real-time monitoring of particle detectors. The proposed strategy evaluates the compatibility between incoming batches of experimental data and a reference sample representing the data behavior in normal conditions by implementing a likelihood-ratio hypothesis test. The core model is powered by recent large-scale implementations of kernel methods, nonparametric learning algorithms that can approximate any continuous function given enough data. The resulting algorithm is fast, efficient and agnostic about the type of potential anomaly in the data. We show the performance of the model on multivariate data from a muon detector based on drift-tube chambers.
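    As a rough illustration of the monitoring strategy described above, the sketch below trains a kernel-based classifier to separate an incoming batch from a reference sample and uses the summed learned log-density-ratio over the batch as a test statistic. It is only a schematic stand-in on toy Gaussian data, not the paper's large-scale kernel implementation; the features, sample sizes and kernel settings are assumptions.

```python
# Schematic stand-in for the likelihood-ratio monitoring idea above:
# a kernel-based classifier separates a data batch from a reference
# sample, and the summed learned log-ratio over the batch acts as a
# test statistic. Illustration only, not the paper's implementation.
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(5000, 4))   # nominal detector behavior
batch = rng.normal(0.3, 1.0, size=(500, 4))        # incoming batch with a small shift

X = np.vstack([reference, batch])
y = np.concatenate([np.zeros(len(reference)), np.ones(len(batch))])

clf = make_pipeline(Nystroem(gamma=0.5, n_components=200, random_state=0),
                    LogisticRegression(max_iter=1000))
clf.fit(X, y)

# decision_function approximates log[n_batch*p_batch / (n_ref*p_ref)], so adding
# log(n_ref/n_batch) recovers the log density ratio log(p_batch/p_ref).
log_ratio = clf.decision_function(batch) + np.log(len(reference) / len(batch))
t_stat = 2.0 * np.sum(log_ratio)
print(f"test statistic t = {t_stat:.1f} (calibrate on batches drawn from the reference)")
```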

    Using Big Data Technologies for HEP Analysis

    The HEP community is approaching an era where the excellent performance of the particle accelerators in delivering collisions at high rate will force the experiments to record a large amount of information. The growing size of the datasets could potentially become a limiting factor in the capability to produce scientific results timely and efficiently. Recently, new technologies and new approaches have been developed in industry to answer the need to retrieve information as quickly as possible when analyzing PB- and EB-scale datasets. Providing scientists with these modern computing tools will lead to rethinking the principles of data analysis in HEP, making the overall scientific process faster and smoother. In this paper, we present the latest developments and the most recent results on the usage of Apache Spark for HEP analysis. The study aims at evaluating the efficiency of the new tools both quantitatively, by measuring the performance, and qualitatively, by focusing on the user experience. The first goal is achieved by developing a data reduction facility: working together with CERN Openlab and Intel, CMS replicates a real physics search using Spark-based technologies, with the ambition of reducing 1 PB of public data collected by the CMS experiment to 1 TB of data in a format suitable for physics analysis within 5 hours. The second goal is achieved by implementing multiple physics use cases in Apache Spark, using as input preprocessed datasets derived from official CMS data and simulation. By performing different end analyses, up to the publication plots, on different hardware, feasibility, usability and portability are compared to those of a traditional ROOT-based workflow.
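    The following PySpark fragment sketches the data-reduction pattern described above: read a large columnar dataset, apply an event selection, keep only the columns needed downstream, and write out a much smaller copy. The paths, column names and cuts are hypothetical placeholders, not the facility's actual configuration.

```python
# Minimal PySpark sketch of the data-reduction pattern described above.
# Paths, column names and cuts are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("hep-data-reduction").getOrCreate()

events = spark.read.parquet("hdfs:///cms/open-data/events.parquet")  # hypothetical input

reduced = (events
           .filter(F.col("nMuon") >= 2)                 # keep dimuon candidates
           .filter(F.col("Muon_pt")[0] > 25.0)          # leading-muon pT cut (GeV)
           .select("run", "luminosityBlock", "event",
                   "Muon_pt", "Muon_eta", "Muon_phi"))  # keep only analysis columns

reduced.write.mode("overwrite").parquet("hdfs:///user/analysis/reduced.parquet")
spark.stop()
```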

    Report from Working Group 3: Beyond the standard model physics at the HL-LHC and HE-LHC

    This is the third out of five chapters of the final report [1] of the Workshop on Physics at HL-LHC, and perspectives on HE-LHC [2]. It is devoted to the study of the potential, in the search for Beyond the Standard Model (BSM) physics, of the High Luminosity (HL) phase of the LHC, defined as 3 ab^-1 of data taken at a centre-of-mass energy of 14 TeV, and of a possible future upgrade, the High Energy (HE) LHC, defined as 15 ab^-1 of data at a centre-of-mass energy of 27 TeV. We consider a large variety of new physics models, both in a simplified-model fashion and in a more model-dependent one. A long list of contributions from the theory and experimental (ATLAS, CMS, LHCb) communities has been collected and merged to give a complete, wide, and consistent view of future prospects for BSM physics at the considered colliders. On top of the usual standard candles, such as supersymmetric simplified models and resonances, considered for the evaluation of future collider potential, this report contains results on dark matter and dark sectors, long-lived particles, leptoquarks, sterile neutrinos, axion-like particles, heavy scalars, vector-like quarks, and more. Particular attention is placed, especially in the study of the HL-LHC prospects, on the detector upgrades, the assessment of the future systematic uncertainties, and new experimental techniques. The general conclusion is that the HL-LHC, on top of extending the present LHC mass and coupling reach by 20-50% in most new physics scenarios, will also be able to constrain, and potentially discover, new physics that is presently unconstrained. Moreover, compared to the HL-LHC, the reach in most observables will generally more than double at the HE-LHC, which may represent a good candidate future facility for a final test of TeV-scale new physics.

    Differential cross section measurements for the production of a W boson in association with jets in proton–proton collisions at √s = 7 TeV

    Measurements are reported of differential cross sections for the production of a W boson, which decays into a muon and a neutrino, in association with jets, as a function of several variables, including the transverse momenta (pT) and pseudorapidities of the four leading jets, the scalar sum of jet transverse momenta (HT), and the difference in azimuthal angle between the directions of each jet and the muon. The data sample of pp collisions at a centre-of-mass energy of 7 TeV was collected with the CMS detector at the LHC and corresponds to an integrated luminosity of 5.0 fb^-1. The measured cross sections are compared to predictions from the Monte Carlo generators MadGraph + pythia and sherpa, and to next-to-leading-order calculations from BlackHat + sherpa. The differential cross sections are found to be in agreement with the predictions, apart from the pT distributions of the leading jets at high pT values, the HT distributions at high HT and low jet multiplicity, and the distribution of the difference in azimuthal angle between the leading jet and the muon at low values. Funding: United States Department of Energy; National Science Foundation (U.S.); Alfred P. Sloan Foundation.
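    As a small illustration of two of the observables defined above, the toy snippet below computes HT (the scalar sum of jet pT) and the azimuthal separation between each jet and the muon from made-up jet and muon kinematics; it is not the CMS analysis code.

```python
# Illustration of two observables defined above, computed from toy
# jet and muon kinematics (not the CMS analysis code).
import numpy as np

jet_pt  = np.array([85.0, 42.0, 31.0, 27.0])  # GeV, leading four jets (toy values)
jet_phi = np.array([0.4, 2.9, -1.7, 1.1])     # rad
muon_phi = -2.6                               # rad

ht = jet_pt.sum()                             # HT: scalar sum of jet transverse momenta

# Difference in azimuthal angle between each jet and the muon, folded into [0, pi]
dphi = np.abs(jet_phi - muon_phi)
dphi = np.where(dphi > np.pi, 2.0 * np.pi - dphi, dphi)

print(f"HT = {ht:.1f} GeV")
print("delta phi(jet, muon) =", np.round(dphi, 2))
```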

    Risk Portfolio Optimization Using the Markowitz MVO Model, Related to Human Limitations in Predicting the Future from the Perspective of the Al-Qur'an

    Risk portfolio management in modern finance has become increasingly technical, requiring the use of sophisticated mathematical tools in both research and practice. Since companies cannot insure themselves completely against risk, given the human inability to predict the future precisely, as written in the Al-Qur'an, surah Luqman, verse 34, they have to manage it to yield an optimal portfolio. The objective is to minimize the variance among all portfolios, or alternatively, to maximize the expected return among all portfolios that have at least a certain expected return. This study focuses on optimizing the risk portfolio with the so-called Markowitz MVO (Mean-Variance Optimization) model. The theoretical framework for the analysis includes the arithmetic mean, geometric mean, variance, covariance, linear programming, and quadratic programming. Finding a minimum-variance portfolio leads to a convex quadratic program: minimize the objective function x^T Q x subject to the constraints mu^T x >= r and Ax = b. The outcome of this research is the solution of the optimal risk portfolio for several investments, obtained with MATLAB R2007b together with its graphical analysis.
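    A minimal sketch of the mean-variance quadratic program described above, solved here with scipy's SLSQP optimizer instead of MATLAB. The expected returns, covariance matrix, required return and no-short-selling bounds are toy assumptions for illustration.

```python
# Minimal sketch of the Markowitz mean-variance QP described above:
# minimize x^T Q x subject to mu^T x >= r_min and sum(x) = 1, x >= 0.
# Returns, covariances and r_min are toy values (scipy instead of MATLAB).
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])            # expected returns (toy)
Q = np.array([[0.10, 0.02, 0.01],
              [0.02, 0.15, 0.03],
              [0.01, 0.03, 0.12]])            # covariance matrix (toy)
r_min = 0.10                                  # required expected return

def variance(x):
    return x @ Q @ x                          # portfolio variance x^T Q x

constraints = [{"type": "eq",   "fun": lambda x: np.sum(x) - 1.0},  # budget constraint (Ax = b)
               {"type": "ineq", "fun": lambda x: mu @ x - r_min}]   # mu^T x >= r_min
bounds = [(0.0, 1.0)] * len(mu)               # no short selling

res = minimize(variance, x0=np.full(len(mu), 1.0 / len(mu)),
               method="SLSQP", bounds=bounds, constraints=constraints)
print("optimal weights:", np.round(res.x, 3), " variance:", round(res.fun, 4))
```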

    Juxtaposing BTE and ATE – on the role of the European insurance industry in funding civil litigation

    One of the ways in which legal services are financed, and indeed shaped, is through private insurance arrangements. Two contrasting types of legal expenses insurance (LEI) contracts seem to dominate in Europe: before-the-event (BTE) and after-the-event (ATE) legal expenses insurance. Notwithstanding institutional differences between legal systems, BTE and ATE insurance arrangements may be instrumental if government policy is geared towards strengthening a market-oriented system of financing access to justice for individuals and businesses. At the same time, emphasizing the role of a private industry as a keeper of the gates to justice raises issues of accountability and transparency that are not readily reconcilable with the demands of competition. Moreover, multiple actors (clients, lawyers, courts, insurers) are involved, causing behavioural dynamics that are not easily predicted or influenced. Against this background, this paper looks into BTE and ATE arrangements by analysing the particularities of the BTE and ATE arrangements currently available in some European jurisdictions and by painting a picture of their respective markets and legal contexts. This allows for some reflection on the performance of BTE and ATE providers as both financiers and keepers. Two issues emerge from the analysis that are worthy of further reflection. Firstly, there is the problematic long-term sustainability of some ATE products. Secondly, there are the challenges faced by policymakers who would like to nudge consumers into voluntarily taking out BTE LEI.

    Assessment of the Financial Performance of Cooperatives in Pelalawan Regency

    This paper describes the development and financial performance of cooperatives in Pelalawan Regency during 2007-2008. The study covers primary and secondary cooperatives in 12 sub-districts. The method measures cooperative performance in terms of productivity, efficiency, growth, liquidity, and solvability. The productivity of cooperatives in Pelalawan was high but their efficiency was still low. Profit and income were high, liquidity was very high, and solvability was good.

    Search for stop and higgsino production using diphoton Higgs boson decays

    Results are presented of a search for a "natural" supersymmetry scenario with gauge-mediated symmetry breaking. It is assumed that only the supersymmetric partners of the top quark (stop) and of the Higgs boson (higgsino) are accessible. Events are examined in which there are two photons forming a Higgs boson candidate and at least two b-quark jets. In 19.7 fb^-1 of proton-proton collision data at sqrt(s) = 8 TeV recorded with the CMS experiment, no evidence for a signal is found, and lower limits at the 95% confidence level are set, excluding stop masses below 360 to 410 GeV, depending on the higgsino mass.