
    Commissioning and Operation of the New CMS Phase-1 Pixel Detector

    The Phase-1 upgrade of the CMS pixel detector is built out of four barrel layers (BPix) and three forward disks in each endcap (FPix). It comprises a total of 124M pixel channels in 1,856 modules, and it is designed to withstand instantaneous luminosities of up to 2×10³⁴ cm⁻²s⁻¹. Different parts of the detector were assembled over the last year and later brought to CERN for installation inside the CMS tracker. At various stages during the assembly, tests were performed to ensure that the readout and power electronics and the cooling system meet the design specifications. After tests of the individual components, system tests were performed before the installation inside CMS. In addition to reviewing these tests, we also present results from the final commissioning of the detector in situ using the central CMS DAQ system. Finally, we review results from the initial operation of the detector, first with cosmic rays and then with pp collisions. Comment: Talk presented at the APS Division of Particles and Fields Meeting (DPF 2017), July 31-August 4, 2017, Fermilab. C17073

    Automatic log analysis with NLP for the CMS workflow handling

    The central Monte Carlo production of the CMS experiment utilizes the WLCG infrastructure and manages thousands of tasks daily, each comprising up to thousands of jobs. The distributed computing system is bound to sustain a certain rate of failures of various types, which are currently handled by computing operators a posteriori. Within the context of computing operations and operational intelligence, we propose a machine learning technique to learn from the operators, with a view to reducing the operational workload and delays. This work continues CMS efforts on operational intelligence aimed at reaching accurate predictions with machine learning. We present an approach that treats the log files of the workflows as regular text in order to leverage modern techniques from Natural Language Processing (NLP). In general, log files contain a substantial amount of text that is not human language. Therefore, different log parsing approaches are studied in order to map the words of the log files to high-dimensional vectors. These vectors are then used as a feature space to train a model that predicts the action the operator has to take. This approach has the advantage that the information in the log files is extracted automatically and the format of the logs can be arbitrary. In this work the performance of the log file analysis with NLP is presented and compared to previous approaches.
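
    The pipeline sketched in this abstract, tokenizing raw log text, mapping it to high-dimensional vectors, and training a classifier over those vectors to predict the operator action, can be illustrated with a minimal example. The snippet below is not the CMS implementation; it uses a generic TF-IDF vectorizer and a logistic-regression classifier on made-up log lines and action labels purely to show the shape of such an approach.

```python
# Minimal sketch of NLP-based log classification (illustrative only,
# not the CMS pipeline): log lines -> TF-IDF vectors -> action classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: raw log excerpts and the action an operator took.
logs = [
    "FileOpenError: failed to open input file on storage element",
    "Job exceeded wallclock time limit, killed by batch system",
    "FileOpenError: input file not found at site",
    "Out of memory: job killed after exceeding RSS limit",
]
actions = ["resubmit", "extend-runtime", "resubmit", "raise-memory"]

# TF-IDF maps each log to a high-dimensional sparse vector; a linear
# model then predicts the recovery action from that feature space.
model = make_pipeline(TfidfVectorizer(token_pattern=r"[A-Za-z]+"),
                      LogisticRegression(max_iter=1000))
model.fit(logs, actions)

print(model.predict(["job killed: memory limit exceeded"]))
```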

    Learning Adaptive Regularization for Image Labeling Using Geometric Assignment

    We study the inverse problem of model parameter learning for pixelwise image labeling, using the linear assignment flow and training data with ground truth. This is accomplished by a Riemannian gradient flow on the manifold of parameters that determine the regularization properties of the assignment flow. Using the symplectic partitioned Runge-Kutta method for numerical integration, we show that deriving the sensitivity conditions of the parameter learning problem commutes with its discretization. A convenient property of our approach is that learning is based on exact inference. Carefully designed experiments demonstrate the performance of our approach, the expressiveness of the mathematical model, as well as its limitations, from the viewpoint of statistical learning and optimal control.
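
    Symplectic partitioned Runge-Kutta methods are the integrator class behind the commutation property mentioned above. As a generic illustration of that class (not of the assignment-flow code itself), the sketch below implements the Störmer-Verlet scheme, the simplest symplectic partitioned Runge-Kutta method, for a separable Hamiltonian system; the harmonic-oscillator Hamiltonian is an assumed stand-in.

```python
# Störmer-Verlet: the simplest symplectic partitioned Runge-Kutta method,
# shown on a harmonic oscillator (an assumed stand-in, not the paper's flow).

def grad_V(q):
    return q          # potential V(q) = q^2 / 2, so dV/dq = q

def grad_T(p):
    return p          # kinetic  T(p) = p^2 / 2, so dT/dp = p

def stormer_verlet(q, p, h, n_steps):
    """Integrate dq/dt = dT/dp, dp/dt = -dV/dq with a partitioned scheme:
    half kick, full drift, half kick."""
    for _ in range(n_steps):
        p = p - 0.5 * h * grad_V(q)   # half step in momentum
        q = q + h * grad_T(p)         # full step in position
        p = p - 0.5 * h * grad_V(q)   # half step in momentum
    return q, p

q, p = stormer_verlet(1.0, 0.0, h=0.1, n_steps=1000)
# Energy is conserved up to O(h^2) over long times, the hallmark of
# symplectic integration.
print(q, p, 0.5 * (p**2 + q**2))
```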

    Macroscopic coherent structures in a stochastic neural network: from interface dynamics to coarse-grained bifurcation analysis

    We study coarse pattern formation in a cellular automaton modelling a spatially extended stochastic neural network. The model, originally proposed by Gong and Robinson (Phys Rev E 85(5):055101(R), 2012), is known to support stationary and travelling bumps of localised activity. We pose the model on a ring and study the existence and stability of these patterns in various limits using a combination of analytical and numerical techniques. In a purely deterministic version of the model, posed on a continuum, we construct bumps and travelling waves analytically using standard interface methods from neural field theory. In a stochastic version with Heaviside firing rate, we construct approximate analytical probability mass functions associated with bumps and travelling waves. In the full stochastic model posed on a discrete lattice, where a coarse analytic description is unavailable, we compute patterns and their linear stability using equation-free methods. The lifting procedure used in the coarse time-stepper is informed by the analysis in the deterministic and stochastic limits. In all settings, we identify the synaptic profile as a mesoscopic variable, and the width of the corresponding activity set as a macroscopic variable. Stationary and travelling bumps have similar meso- and macroscopic profiles but different microscopic structure; hence we propose lifting operators which use microscopic motifs to disambiguate them. We provide numerical evidence that waves are supported by a combination of high synaptic gain and long refractory times, while meandering bumps are elicited by short refractory times.
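
    To make the setting concrete, the toy sketch below implements a generic stochastic cellular automaton on a ring with quiescent, active, and refractory states, in the spirit of (but not identical to) the Gong-Robinson model; the neighbourhood kernel, gain, threshold, and refractory length are assumed parameters, not values from the paper.

```python
# Toy stochastic cellular automaton on a ring with quiescent (0), active (1),
# and refractory (>1) states -- in the spirit of, not identical to, the
# Gong-Robinson model. Kernel width, gain, and refractory time are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, STEPS = 200, 500
GAIN, THETA, REFRACTORY = 4.0, 1.5, 3     # assumed parameters

state = np.zeros(N, dtype=int)
state[N // 2 - 5: N // 2 + 5] = 1         # seed a localized bump of activity

kernel = np.ones(7) / 7.0                 # local synaptic footprint

for _ in range(STEPS):
    active = (state == 1).astype(float)
    # synaptic input: local average of activity with periodic wrap-around
    padded = np.concatenate([active[-3:], active, active[:3]])
    synaptic = np.convolve(padded, kernel, mode="valid")
    # firing probability: sigmoidal in the number of active neighbours
    fire_prob = 1.0 / (1.0 + np.exp(-GAIN * (synaptic * kernel.size - THETA)))

    new_state = state.copy()
    fires = (state == 0) & (rng.random(N) < fire_prob)
    new_state[fires] = 1                  # quiescent -> active
    new_state[state == 1] = 2             # active -> refractory
    recovering = state >= 2               # count down the refractory period
    new_state[recovering] = np.where(state[recovering] < 1 + REFRACTORY,
                                     state[recovering] + 1, 0)
    state = new_state

print("active cells after", STEPS, "steps:", int((state == 1).sum()))
```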

    Controllability and Qualitative properties of the solutions to SPDEs driven by boundary L\'evy noise

    Let $u$ be the solution to the stochastic evolution equation (1) $du(t,x) = A u(t,x)\,dt + B\,\sigma(u(t,x))\,dL(t)$, $t>0$, $u(0,x)=x$, taking values in a Hilbert space $H$, where $L$ is an $\mathbb{R}$-valued Lévy process, $A: H \to H$ is the infinitesimal generator of a strongly continuous semigroup, $\sigma: H \to \mathbb{R}$ is bounded from below and Lipschitz continuous, and $B: \mathbb{R} \to H$ is a possibly unbounded operator. A typical example of such an equation is a stochastic partial differential equation with boundary Lévy noise. Let $\mathcal{P} = (\mathcal{P}_t)_{t \ge 0}$ be the corresponding Markovian semigroup. We show that if the system (2) $du(t) = A u(t)\,dt + B v(t)$, $t>0$, $u(0)=x$, is approximately controllable in time $T>0$, then, under some additional conditions on $B$ and $A$, for any $x \in H$ the probability measure $\mathcal{P}_T^\star \delta_x$ is positive on open sets of $H$. Secondly, as an application, we investigate under which conditions on the Lévy process $L$ and on the operators $A$ and $B$ the solution of Equation (1) is asymptotically strong Feller, respectively has a unique invariant measure. We apply these results to the damped wave equation driven by Lévy boundary noise.

    Risk Portfolio Optimization Using the Markowitz MVO Model, Linked to Human Limitations in Predicting the Future, from the Perspective of the Al-Qur'an

    Risk portfolio management in modern finance has become increasingly technical, requiring the use of sophisticated mathematical tools in both research and practice. Since companies cannot insure themselves completely against risk, owing to human inability to predict the future precisely, as written in Al-Quran surah Luqman verse 34, they have to manage it to yield an optimal portfolio. The objective is to minimize the variance, or alternatively to maximize the expected return, among all portfolios that have at least a certain expected return. Furthermore, this study focuses on optimizing the risk portfolio via the Markowitz MVO (Mean-Variance Optimization) model. The theoretical frameworks for the analysis are the arithmetic mean, geometric mean, variance, covariance, linear programming, and quadratic programming. Finding a minimum-variance portfolio then amounts to a convex quadratic program: minimize the objective function $x^T Q x$ subject to the constraints $\mu^T x \ge r$ and $Ax = b$. The outcome of this research is the solution of the optimal risk portfolio for some investments, obtained using MATLAB R2007b software together with its graphical analysis.
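
    As a concrete illustration of the quadratic program above (a minimal sketch, not the study's MATLAB code), the snippet below minimizes the portfolio variance $x^T Q x$ subject to $\mu^T x \ge r$ and a fully-invested budget constraint using SciPy; the covariance matrix Q, expected returns mu, and target return r are made-up example data.

```python
# Minimal Markowitz mean-variance sketch: minimize x'Qx subject to
# mu'x >= r and sum(x) = 1. Q, mu, and r below are made-up example data,
# not figures from the study.
import numpy as np
from scipy.optimize import minimize

Q = np.array([[0.10, 0.02, 0.04],        # assumed covariance matrix
              [0.02, 0.08, 0.01],
              [0.04, 0.01, 0.12]])
mu = np.array([0.07, 0.05, 0.09])        # assumed expected returns
r = 0.06                                 # assumed target return

objective = lambda x: x @ Q @ x          # portfolio variance
constraints = [
    {"type": "ineq", "fun": lambda x: mu @ x - r},   # mu'x >= r
    {"type": "eq",   "fun": lambda x: x.sum() - 1},  # fully invested
]
bounds = [(0, 1)] * 3                    # long-only weights

res = minimize(objective, np.full(3, 1 / 3), bounds=bounds,
               constraints=constraints, method="SLSQP")
print("weights:", res.x.round(4), "variance:", round(res.fun, 5))
```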

    Measurement of tt̄ normalised multi-differential cross sections in pp collisions at √s = 13 TeV, and simultaneous determination of the strong coupling strength, top quark pole mass, and parton distribution functions

    Peer reviewed

    Measurement of the top quark forward-backward production asymmetry and the anomalous chromoelectric and chromomagnetic moments in pp collisions at √s = 13 TeV

    The parton-level top quark (t) forward-backward asymmetry and the anomalous chromoelectric (d̂_t) and chromomagnetic (μ̂_t) moments have been measured using LHC pp collisions at a center-of-mass energy of 13 TeV, collected in the CMS detector in a data sample corresponding to an integrated luminosity of 35.9 fb⁻¹. The linearized variable A_FB^(1) is used to approximate the asymmetry. Candidate tt̄ events decaying to a muon or electron and jets in final states with low and high Lorentz boosts are selected and reconstructed using a fit of the kinematic distributions of the decay products to those expected for tt̄ final states. The values found for the parameters are A_FB^(1) = 0.048 +0.095/−0.087 (stat) +0.020/−0.029 (syst) and μ̂_t = −0.024 +0.013/−0.009 (stat) +0.016/−0.011 (syst), and a limit is placed on the magnitude of the chromoelectric moment, |d̂_t| < 0.03 at 95% confidence level.
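
    For reference, the parton-level forward-backward asymmetry is conventionally defined in terms of the rapidity difference between the top quark and antiquark; the exact linearized form A_FB^(1) used in this analysis is defined in the paper itself.

```latex
% Standard definition of the ttbar forward-backward asymmetry in terms of
% the top-antitop rapidity difference; the linearized variant used in the
% paper approximates this quantity.
\[
  A_{FB} \;=\;
  \frac{N(\Delta y > 0) \;-\; N(\Delta y < 0)}
       {N(\Delta y > 0) \;+\; N(\Delta y < 0)},
  \qquad \Delta y \;=\; y_t - y_{\bar t}
\]
```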

    An embedding technique to determine ττ backgrounds in proton-proton collision data

    An embedding technique is presented to estimate standard model ττ backgrounds from data with minimal simulation input. In the data, the muons are removed from reconstructed μμ events and replaced with simulated τ leptons with the same kinematic properties. In this way, a set of hybrid events is obtained that does not rely on simulation except for the decay of the τ leptons. The challenges in describing the underlying event or the production of associated jets in the simulation are thereby avoided. The technique described in this paper was developed for CMS. Its validation and the inherent uncertainties are also discussed. The demonstration of the performance of the technique is based on a sample of proton-proton collisions collected by CMS in 2017 at √s = 13 TeV, corresponding to an integrated luminosity of 41.5 fb⁻¹.
    Peer reviewed
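
    The replacement step at the heart of the method, re-using the kinematics of each reconstructed muon for a simulated τ lepton, can be sketched at the four-vector level. The toy code below is only a schematic of the idea, not CMS software; the event format and the decay step (delegated here to a placeholder) are assumptions.

```python
# Toy sketch of the tau-embedding idea at the four-vector level: take a
# reconstructed mu-mu event, remove the muons, and insert tau leptons that
# inherit their kinematics. Schematic only, not CMS software; the event
# format and the decay placeholder are assumptions.
from dataclasses import dataclass, replace as dc_replace

TAU_MASS = 1.77686  # GeV
MU_MASS = 0.10566   # GeV

@dataclass(frozen=True)
class Lepton:
    pt: float    # transverse momentum [GeV]
    eta: float   # pseudorapidity
    phi: float   # azimuthal angle
    mass: float  # [GeV]
    pdg_id: int  # 13 = muon, 15 = tau (sign encodes charge)

def embed_taus(mumu_event):
    """Replace each muon by a tau with identical (pt, eta, phi) kinematics."""
    hybrid = []
    for lep in mumu_event:
        if abs(lep.pdg_id) == 13:                     # it is a muon
            tau = dc_replace(lep, mass=TAU_MASS,
                             pdg_id=15 if lep.pdg_id > 0 else -15)
            hybrid.append(simulate_tau_decay(tau))    # only this is simulated
        else:
            hybrid.append(lep)                        # rest of event untouched
    return hybrid

def simulate_tau_decay(tau):
    # Placeholder: a real implementation would hand the tau to a decay
    # generator and return its decay products.
    return tau

event = [Lepton(45.0, 0.4, 1.2, MU_MASS, 13),
         Lepton(38.0, -1.1, -2.0, MU_MASS, -13)]
print(embed_taus(event))
```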