
    Feedback and generalized logic

    Although the distinction between software and hardware is a posteriori, there is an a priori distinction that masquerades as the software–hardware distinction. This is the distinction between procedure interconnection, the semantics of flow chart diagrams, which is known to be described by the regular expression calculus; and system interconnection, the semantics of network diagrams, which is described by a certain logical calculus, dual to a calculus of regular expressions. This paper presents a proof of the duality in a special case, and gives the interpretation of the logical calculus for sequential machine interconnection. A minimal realization theorem for feedback systems is proved, which specializes to known open-loop minimal realization theorems.

    A principled approach to programming with nested types in Haskell

    Initial algebra semantics is one of the cornerstones of the theory of modern functional programming languages. For each inductive data type, it provides a Church encoding for that type, a build combinator which constructs data of that type, a fold combinator which encapsulates structured recursion over data of that type, and a fold/build rule which optimises modular programs by eliminating from them data constructed using the build combinator, and immediately consumed using the fold combinator, for that type. It has long been thought that initial algebra semantics is not expressive enough to provide a similar foundation for programming with nested types in Haskell. Specifically, the standard folds derived from initial algebra semantics have been considered too weak to capture commonly occurring patterns of recursion over data of nested types in Haskell, and no build combinators or fold/build rules have until now been defined for nested types. This paper shows that standard folds are, in fact, sufficiently expressive for programming with nested types in Haskell. It also defines build combinators and fold/build fusion rules for nested types. It thus shows how initial algebra semantics provides a principled, expressive, and elegant foundation for programming with nested types in Haskell.
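    To make "standard folds over nested types" concrete, the sketch below gives a classic nested data type and a rank-2 fold for it in GHC Haskell (RankNTypes). The names Nest, foldNest, and Size are illustrative only and are not taken from the paper; its actual combinators and fusion rules may be formulated differently.

    ```haskell
    {-# LANGUAGE RankNTypes #-}

    -- A classic nested (non-uniform) data type: each level stores pairs of
    -- the element type of the level above it.
    data Nest a = NilN | ConsN a (Nest (a, a))

    -- A standard fold for Nest. Because the recursive occurrence is at type
    -- Nest (a, a), the two arguments must be polymorphic in the element type,
    -- hence the rank-2 signature. Illustrative sketch, not the paper's API.
    foldNest :: (forall b. f b)                   -- interprets NilN
             -> (forall b. b -> f (b, b) -> f b)  -- interprets ConsN
             -> Nest a
             -> f a
    foldNest nil _    NilN         = nil
    foldNest nil cons (ConsN x xs) = cons x (foldNest nil cons xs)

    -- Example consumer: count the elements stored in a Nest.
    newtype Size a = Size { getSize :: Int }

    sizeNest :: Nest a -> Int
    sizeNest = getSize . foldNest (Size 0) (\_ (Size n) -> Size (1 + 2 * n))

    main :: IO ()
    main = print (sizeNest (ConsN (1 :: Int) (ConsN (2, 3) NilN)))  -- prints 3
    ```

    The rank-2 signature is what lets the fold follow the recursive occurrence Nest (a, a) at a different element type, which is the essential difficulty with recursion over nested types.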

    Integrating sequence and array data to create an improved 1000 Genomes Project haplotype reference panel

    A major use of the 1000 Genomes Project (1000GP) data is genotype imputation in genome-wide association studies (GWAS). Here we develop a method to estimate haplotypes from low-coverage sequencing data that can take advantage of single-nucleotide polymorphism (SNP) microarray genotypes on the same samples. First the SNP array data are phased to build a backbone (or 'scaffold') of haplotypes across each chromosome. We then phase the sequence data 'onto' this haplotype scaffold. This approach can take advantage of relatedness between sequenced and non-sequenced samples to improve accuracy. We use this method to create a new 1000GP haplotype reference set for use by the human genetic community. Using a set of validation genotypes at SNPs and bi-allelic indels we show that these haplotypes have lower genotype discordance and improved imputation performance into downstream GWAS samples, especially at low-frequency variants. © 2014 Macmillan Publishers Limited. All rights reserved.

    Measurement of the azimuthal anisotropy of Y(1S) and Y(2S) mesons in PbPb collisions at √s_NN = 5.02 TeV

    The second-order Fourier coefficients (v2) characterizing the azimuthal distributions of Υ(1S) and Υ(2S) mesons produced in PbPb collisions at √s_NN = 5.02 TeV are studied. The Υ mesons are reconstructed in their dimuon decay channel, as measured by the CMS detector. The collected data set corresponds to an integrated luminosity of 1.7 nb⁻¹. The scalar product method is used to extract the v2 coefficients of the azimuthal distributions. Results are reported for the rapidity range |y| < 2.4, in the transverse momentum interval 0 < pT < 50 GeV/c, and in three centrality ranges of 10–30%, 30–50% and 50–90%. In contrast to the J/ψ mesons, the measured v2 values for the Υ mesons are found to be consistent with zero.
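    For orientation, the scalar product method mentioned above is conventionally written along the following lines; this is a generic three-subevent form of the estimator, not necessarily the exact convention of this analysis:

    ```latex
    % Generic scalar-product estimator for the second-order coefficient
    % (three-subevent form; conventions here are illustrative).
    v_2\{\mathrm{SP}\} =
      \frac{\big\langle\big\langle u_2\, Q_2^{A*} \big\rangle\big\rangle}
           {\sqrt{\dfrac{\langle Q_2^{A} Q_2^{B*}\rangle\,
                         \langle Q_2^{A} Q_2^{C*}\rangle}
                        {\langle Q_2^{B} Q_2^{C*}\rangle}}}\,,
    \qquad
    u_2 = e^{2i\varphi}, \qquad
    Q_2 = \sum_k w_k\, e^{2i\varphi_k}
    ```

    Here u2 is the unit flow vector of the Υ candidate, Q2^{A,B,C} are flow vectors built from particles in three separate detector regions, and the double angle brackets denote an average over candidates and over events.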

    Measurement of prompt D⁰ and D̄⁰ meson azimuthal anisotropy and search for strong electric fields in PbPb collisions at √s_NN = 5.02 TeV

    The strong Coulomb field created in ultrarelativistic heavy ion collisions is expected to produce a rapidity-dependent difference (Δv2) in the second Fourier coefficient of the azimuthal distribution (elliptic flow, v2) between D⁰ (ūc) and D̄⁰ (uc̄) mesons. Motivated by the search for evidence of this field, the CMS detector at the LHC is used to perform the first measurement of Δv2. The rapidity-averaged value is found to be ⟨Δv2⟩ = 0.001 ± 0.001 (stat) ± 0.003 (syst) in PbPb collisions at √s_NN = 5.02 TeV. In addition, the influence of the collision geometry is explored by measuring the D⁰ and D̄⁰ mesons' v2 and triangular flow coefficient (v3) as functions of rapidity, transverse momentum (pT), and event centrality (a measure of the overlap of the two Pb nuclei). A clear centrality dependence of prompt D⁰ meson v2 values is observed, while the v3 is largely independent of centrality. These trends are consistent with expectations of flow driven by the initial-state geometry. © 2021 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license.
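    As a point of reference, the quantity Δv2 referred to above is just the difference of the elliptic flow coefficients of the two charge-conjugate states, averaged over the measured rapidity range; a schematic definition (not quoted from the paper) is:

    ```latex
    % Schematic definition of the charge-conjugate flow difference and its
    % rapidity average (illustrative only).
    \Delta v_2(y) = v_2^{D^0}(y) - v_2^{\overline{D}{}^{0}}(y),
    \qquad
    \langle \Delta v_2 \rangle =
      \frac{1}{Y_{\max}-Y_{\min}}
      \int_{Y_{\min}}^{Y_{\max}} \Delta v_2(y)\, \mathrm{d}y
    ```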

    Performance of the CMS Level-1 trigger in proton-proton collisions at √s = 13 TeV

    At the start of Run 2 in 2015, the LHC delivered proton-proton collisions at a center-of-mass energy of 13 TeV. During Run 2 (years 2015–2018) the LHC eventually reached a luminosity of 2.1 × 10³⁴ cm⁻²s⁻¹, almost three times that reached during Run 1 (2009–2013) and a factor of two larger than the LHC design value, leading to events with up to a mean of about 50 simultaneous inelastic proton-proton collisions per bunch crossing (pileup). The CMS Level-1 trigger was upgraded prior to 2016 to improve the selection of physics events in the challenging conditions posed by the second run of the LHC. This paper describes the performance of the CMS Level-1 trigger upgrade during the data taking period of 2016–2018. The upgraded trigger implements pattern recognition and boosted decision tree regression techniques for muon reconstruction, includes pileup subtraction for jets and energy sums, and incorporates pileup-dependent isolation requirements for electrons and tau leptons. In addition, the new trigger calculates high-level quantities such as the invariant mass of pairs of reconstructed particles. The upgrade reduces the trigger rate from background processes and improves the trigger efficiency for a wide variety of physics signals.
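    As an example of such a high-level quantity, the pair invariant mass follows standard relativistic kinematics; for trigger objects treated as massless it reduces to a function of the transverse momenta and the angular separation of the pair (standard formula, not specific to this paper):

    ```latex
    % Invariant mass of a pair of trigger objects; the massless approximation
    % is the form typically used for calorimeter and muon trigger objects.
    m^2 = (E_1 + E_2)^2 - \left|\vec{p}_1 + \vec{p}_2\right|^2
    \;\approx\;
    2\, p_{\mathrm{T}1}\, p_{\mathrm{T}2}
      \left[\cosh(\eta_1 - \eta_2) - \cos(\varphi_1 - \varphi_2)\right]
    ```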

    Studies of charm and beauty hadron long-range correlations in pp and pPb collisions at LHC energies


    Measurement of the Y(1S) pair production cross section and search for resonances decaying to Y(1S)μ⁺μ⁻ in proton-proton collisions at √s = 13 TeV

    The fiducial cross section for Y(1S) pair production in proton-proton collisions at a center-of-mass energy of 13 TeV in the region where both Y(1S) mesons have an absolute rapidity below 2.0 is measured to be 79 ± 11 (stat) ± 6 (syst) ± 3 (B) pb assuming the mesons are produced unpolarized. The last uncertainty corresponds to the uncertainty in the Y(1S) meson dimuon branching fraction. The measurement is performed in the final state with four muons using proton-proton collision data collected in 2016 by the CMS experiment at the LHC, corresponding to an integrated luminosity of 35.9 fb⁻¹. This process serves as a standard model reference in a search for narrow resonances decaying to Y(1S)μ⁺μ⁻ in the same final state. Such a resonance could indicate the existence of a tetraquark that is a bound state of two b quarks and two b̄ antiquarks. The tetraquark search is performed for masses in the vicinity of four times the bottom quark mass, between 17.5 and 19.5 GeV, while a generic search for other resonances is performed for masses between 16.5 and 27 GeV. No significant excess of events compatible with a narrow resonance is observed in the data. Limits on the production cross section times branching fraction to four muons via an intermediate Y(1S) resonance are set as a function of the resonance mass.
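    Schematically, a fiducial cross section of this kind is obtained from the observed four-muon signal yield, with the squared dimuon branching fraction accounting for both Y(1S) decays. The sketch below is illustrative, with acceptance and efficiency corrections folded into a single factor ε; the paper's exact procedure may differ:

    ```latex
    % Schematic extraction of the fiducial cross section from the observed
    % four-muon yield (epsilon collects acceptance and efficiency corrections).
    \sigma_{\mathrm{fid}} =
      \frac{N_{\mathrm{sig}}}
           {\varepsilon\, \mathcal{L}_{\mathrm{int}}\,
            \big[\mathcal{B}(\Upsilon(1S) \to \mu^{+}\mu^{-})\big]^{2}}
    ```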

    Pileup mitigation at CMS in 13 TeV data

    With increasing instantaneous luminosity at the LHC come additional reconstruction challenges. At high luminosity, many collisions occur simultaneously within one proton-proton bunch crossing. The isolation of an interesting collision from the additional "pileup" collisions is needed for effective physics performance. In the CMS Collaboration, several techniques capable of mitigating the impact of these pileup collisions have been developed. Such methods include charged-hadron subtraction, pileup jet identification, isospin-based neutral particle "δβ" correction, and, most recently, pileup per particle identification. This paper surveys the performance of these techniques for jet and missing transverse momentum reconstruction, as well as muon isolation. The analysis makes use of data corresponding to 35.9 fb⁻¹ collected with the CMS experiment in 2016 at a center-of-mass energy of 13 TeV. The performance of each algorithm is discussed for up to 70 simultaneous collisions per bunch crossing. Significant improvements are found in the identification of pileup jets, the jet energy, mass, and angular resolution, missing transverse momentum resolution, and muon isolation when using pileup per particle identification.
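    To illustrate the δβ idea mentioned above: the neutral pileup contribution to a lepton isolation cone is commonly approximated as half of the charged pileup contribution, reflecting the isospin-motivated neutral-to-charged ratio. A generic δβ-corrected relative isolation, as commonly used in CMS analyses rather than a formula quoted from this paper, takes the form:

    ```latex
    % Generic delta-beta-corrected relative isolation for a muon: the neutral
    % pileup contribution is approximated as half of the charged pileup sum
    % in the isolation cone (isospin-motivated factor of 0.5).
    I_{\mathrm{rel}}^{\,\delta\beta} =
      \frac{\sum p_{\mathrm{T}}^{\mathrm{ch.\,had.\,(PV)}}
            + \max\!\Big(0,\;
              \sum p_{\mathrm{T}}^{\mathrm{neut.\,had.}}
              + \sum p_{\mathrm{T}}^{\gamma}
              - 0.5 \sum p_{\mathrm{T}}^{\mathrm{ch.\,had.\,(PU)}}\Big)}
           {p_{\mathrm{T}}^{\mu}}
    ```

    The sums run over charged hadrons from the primary vertex (PV), neutral hadrons, photons, and charged hadrons associated with pileup vertices (PU) inside the isolation cone.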