228 research outputs found

    An assessment of the statistical distribution of Random Telegraph Noise Time Constants

    As transistor sizes are downscaled, a single trapped charge has a larger impact on smaller devices and Random Telegraph Noise (RTN) becomes increasingly important. To optimize circuit design, one needs to assess the impact of RTN on the circuit, and this can only be accomplished with an accurate statistical model of RTN. Dynamic Monte Carlo modelling requires the statistical distribution functions of both the amplitude and the capture/emission time (CET) of traps. Early works focused on the amplitude distribution, and the experimental data on CETs were typically too limited to establish their statistical distribution reliably. In particular, the time window used has often been small, e.g. 10 s or less, so that there are few data on slow traps. It is not known whether the CET distribution extracted from such a limited time window can be used to predict the RTN beyond the test time window. The objectives of this work are threefold: to provide long-term RTN data and use them to test the CET distributions proposed by early works; to propose a methodology for characterizing the CET distribution of a fabrication process efficiently; and, for the first time, to verify the long-term prediction capability of a CET distribution beyond the time window used for its extraction.
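The dynamic Monte Carlo modelling mentioned in the abstract can be sketched in a few lines: a two-state trap toggles a current shift, with dwell times drawn from exponential distributions whose means are the capture/emission time constants. All parameter values below (the lognormal CET spread, the amplitude, the time window) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rtn(tau_c, tau_e, amplitude, t_total, dt):
    """Simulate a single two-state RTN trap as a telegraph signal.

    tau_c, tau_e : mean capture / emission time constants (s)
    amplitude    : current shift when the trap is occupied
    Dwell times in each state are exponential, as for a memoryless trap.
    """
    n = int(t_total / dt)
    signal = np.zeros(n)
    occupied = False
    i = 0
    while i < n:
        # Dwell time in the current state (exponential for a Markov trap).
        dwell = rng.exponential(tau_e if occupied else tau_c)
        j = min(n, i + max(1, int(dwell / dt)))
        signal[i:j] = amplitude if occupied else 0.0
        occupied = not occupied
        i = j
    return signal

# Hypothetical trap: mean time constants drawn from a lognormal CET
# distribution, one of the candidate statistical models discussed here.
tau_c = rng.lognormal(mean=0.0, sigma=1.0)   # seconds (illustrative)
tau_e = rng.lognormal(mean=0.0, sigma=1.0)
sig = simulate_rtn(tau_c, tau_e, amplitude=1e-6, t_total=100.0, dt=0.01)
print(len(sig))
```

In a full simulation, many such traps with independently sampled amplitudes and time constants would be summed per device, which is why the statistical distributions of both quantities are needed.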

    Extracting statistical distributions of RTN originating from both acceptor-like and donor-like traps

    The impact of Random Telegraph Noise (RTN) on devices increases as device sizes are downscaled. Against a reference level, it is commonly observed that RTN can fluctuate both below and above this level. The modelling of RTN, however, was typically carried out only in the direction where the drain current reduces. In reality, this current reduction can be compensated by simultaneous current increases. This calls the accuracy of one-directional RTN modelling into question. Separating the fluctuation in one direction from the other is difficult experimentally. In this paper, we review the recently proposed integral methodology for achieving this separation. In contrast with early works, the integral methodology does not require selecting devices with fluctuation in only one direction. The RTN in all devices is measured and grouped together to form one dataset. It is then statistically analyzed by assuming the presence of fluctuation in both directions. In this way, the separation is carried out numerically, rather than experimentally. Based on maximum likelihood estimation, popular statistical distributions are tested against experimental data. It is found that the Generalized Extreme Value (GEV) distribution agrees best with the experimental threshold voltage shift, when compared with the Exponential and Lognormal distributions.
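The maximum-likelihood comparison of candidate distributions can be sketched as follows. This is a minimal illustration on synthetic data (generated here from a GEV, so the outcome is by construction illustrative, not the paper's experimental result); the parameter values are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical stand-in for measured threshold-voltage shifts:
# sampled from a GEV so the comparison below is purely illustrative.
data = stats.genextreme.rvs(c=-0.2, loc=1.0, scale=0.5, size=2000,
                            random_state=rng)
data = data[data > 0]            # keep positive shifts only

def log_likelihood(dist, sample):
    """Fit a scipy distribution by maximum likelihood and return
    its total log-likelihood on the sample."""
    params = dist.fit(sample)
    return np.sum(dist.logpdf(sample, *params))

candidates = {
    "GEV": stats.genextreme,
    "Lognormal": stats.lognorm,
    "Exponential": stats.expon,
}
scores = {name: log_likelihood(d, data) for name, d in candidates.items()}
best = max(scores, key=scores.get)
print(best, scores)
```

The distribution with the highest log-likelihood (equivalently, lowest AIC at equal parameter counts) is preferred, which is the kind of test the abstract describes.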

    On the accuracy in modelling the statistical distribution of Random Telegraph Noise Amplitude

    The power consumption of digital circuits is proportional to the square of the operation voltage, and the demand for low-power circuits pushes the operation voltage towards the threshold of MOSFETs. A weak voltage signal makes circuits vulnerable to noise, and the optimization of circuit design requires modelling noise. Random Telegraph Noise (RTN) is the dominant noise for modern CMOS technologies, and Monte Carlo modelling has been used to assess its impact on circuits. This requires statistical distributions of RTN amplitude, and three different distributions were proposed by early works: Lognormal, Exponential, and Gumbel. They give substantially different RTN predictions, and agreement has not been reached on which distribution should be used, calling the modelling accuracy into question. The objective of this work is to assess the accuracy of these three distributions and to explore other distributions for better accuracy. A novel criterion has been proposed for selecting distributions, which requires a monotonic reduction of modelling errors with an increasing number of traps. The three existing distributions do not meet this criterion, and thirteen other distributions are explored. It is found that the Generalized Extreme Value (GEV) distribution has the lowest error and meets the new criterion. Moreover, to reduce modelling errors, early works used bimodal Lognormal and Exponential distributions, which have more fitting parameters. Their errors, however, are still higher than those of the monomodal GEV distribution. GEV has a long distribution tail and predicts a substantially worse RTN impact. This work highlights the uncertainty in predicting the RTN distribution tail by different statistical models.
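The practical consequence of the GEV's long tail can be seen numerically: even when two distributions agree on typical values, their extreme quantiles (the worst-case devices that dominate circuit yield) diverge. The shape/scale parameters below are illustrative assumptions, not fitted values from the paper.

```python
import math
from scipy import stats

# Illustrative comparison: a heavy-tailed GEV versus an Exponential
# matched to the same median, showing how the distribution choice
# changes the predicted worst-case RTN amplitude.
gev = stats.genextreme(c=-0.3, loc=1.0, scale=0.5)  # c < 0: heavy tail
median = gev.median()
expo = stats.expon(scale=median / math.log(2))      # same median

q = 0.9999                                          # a 1-in-10,000 device
print(gev.ppf(q), expo.ppf(q))                      # GEV quantile is far larger
```

Matching at the median while differing by a large factor at the 99.99th percentile is exactly the tail uncertainty the abstract warns about.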

    An integrated method for extracting the statistical distribution of RTN time constants

    Modelling Random Telegraph Noise (RTN) is a challenging task and its accuracy is generally unknown. Monte Carlo modelling in the time domain requires the statistical distribution of the capture and emission time (CET) constants of traps. Although many efforts were made in early works to extract the CET of individual traps, the number of traps measured is generally too limited to establish the statistical distribution of CET reliably, and there are disagreements on the statistical models of CET. Two models proposed by early works are the Log-normal and Log-uniform distributions, which give very different predictions for the RTN, and this difference increases as the time window becomes wider. As accurate modelling of RTN cannot be achieved without a trustworthy statistical distribution of CET, it is important to find a method that allows extracting the CET distribution reliably. In contrast with early works that focus on measuring the CET of individual traps, this work proposes an integrated method for extracting the statistical distribution of CET.
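The growing disagreement between the two CET models as the time window widens can be demonstrated with a quick sampling experiment: compare the fraction of traps slower than a given measurement window under each model. The parameters (lognormal sigma, log-uniform decade range) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Candidate CET models from the text, with illustrative parameters.
n = 100_000
tau_lognormal = rng.lognormal(mean=0.0, sigma=2.0, size=n)  # seconds
tau_loguniform = 10 ** rng.uniform(-4, 4, size=n)           # 1e-4..1e4 s

for window in (10.0, 1000.0):   # measurement windows in seconds
    slow_ln = np.mean(tau_lognormal > window)   # traps missed by the window
    slow_lu = np.mean(tau_loguniform > window)
    print(window, slow_ln, slow_lu)
```

At a 10 s window the two models already disagree on how many slow traps are missed; at 1000 s the log-uniform model predicts orders of magnitude more slow traps than the lognormal, which is why extrapolating beyond the test window depends so strongly on the assumed distribution.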

    An Integral Methodology for Predicting Long Term RTN

    Random Telegraph Noise (RTN) adversely impacts circuit performance, and this impact increases for smaller devices and lower operation voltages. To optimize circuit design, many efforts have been made to model RTN. RTN is highly stochastic, with significant device-to-device variations. Early works often characterize individual traps first and then group them together to extract their statistical distributions. This bottom-up approach suffers from limitations in the number of traps it is possible to measure, especially for the capture and emission time constants, calling the reliability of the extracted distributions into question. Several compact models have been proposed, but their ability to predict long-term RTN is not verified. Many early works measured RTN only for tens of seconds, although a longer time window increases RTN by capturing slower traps. The aim of this work is to propose an integral methodology for modelling RTN and, for the first time, to verify its capability of predicting long-term RTN. Instead of characterizing the properties of individual traps/devices, the RTN of multiple devices is integrated to form one dataset for extracting their statistical properties. This allows using the concept of effective charged traps (ECT) and transforms the need for a time-constant distribution into obtaining the kinetics of ECT, making long-term RTN prediction similar to predicting ageing. The proposed methodology opens the way for assessing the RTN impact within a window of 10 years by efficiently evaluating the probability of a device parameter being at a given level.

    A Pragmatic Model to Predict Future Device Aging

    To predict long-term device aging under use bias, models extracted from voltage-accelerated tests must be extrapolated into the future. The traditional model uses a power law to linearly fit the test data on a log-log plot and then extrapolates the aging kinetics. The challenge is that the measured data do not always follow a straight line on the log-log plot, calling the accuracy of such predictions into question. Although there are models that can fit the test data well in this case, their capability to predict future aging is typically not verified. The key advance of this work is the development of a methodology for extracting models that can verifiably predict future aging over a wide (Vg, Vd) bias space, when the aging kinetics do not follow a simple power law. This is achieved by experimentally separating aging into four types of traps and modelling each of them with a straight line individually. The applicability of this methodology is verified on three different CMOS processes, where it can predict aging at least three orders of magnitude into the future. The contributions of each trap type across the (Vg, Vd) space are mapped. It is also shown that good fitting to the test data does not guarantee good prediction, so good fitting should not be used as the only criterion for validating a model.
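The traditional power-law extrapolation the abstract starts from is simple to state: fit ΔVth = A·tⁿ as a straight line in log-log space, then evaluate it far beyond the measured stress time. The synthetic data below are an assumed, idealized example (an exact power law), not measurements from the paper.

```python
import numpy as np

# Synthetic, exactly power-law "aging" data: delta_Vth = A * t^n.
t_meas = np.array([10.0, 100.0, 1e3, 1e4])   # stress times (s)
dvth = 2.0e-3 * t_meas ** 0.25               # threshold-voltage shift (V)

# Straight-line fit on the log-log plot recovers the exponent n
# and the prefactor A.
n, logA = np.polyfit(np.log10(t_meas), np.log10(dvth), 1)
A = 10 ** logA

t_future = 3.15e8                            # roughly 10 years in seconds
print(A * t_future ** n)                     # extrapolated shift
```

When real data bend away from a straight line on the log-log plot, this single-line extrapolation fails, which is what motivates the paper's separation into four trap types, each fitted by its own straight line.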

    Measurement of B_{s}^{0} meson production in pp and PbPb collisions at \sqrt{s_{NN}} = 5.02 TeV

    The production cross sections of B_{s}^{0} mesons and charge conjugates are measured in proton-proton (pp) and PbPb collisions via the exclusive decay channel B_{s}^{0}→J/ψϕ→μ^{+}μ^{−}K^{+}K^{−} at a center-of-mass energy of 5.02 TeV per nucleon pair and within the rapidity range |y| < 2.4, using the CMS detector at the LHC. The pp measurement is performed as a function of the transverse momentum (p_{T}) of the B_{s}^{0} mesons in the range of 7 to 50 GeV/c and is compared to the predictions of perturbative QCD calculations. The B_{s}^{0} production yield in PbPb collisions is measured in two p_{T} intervals, 7 to 15 and 15 to 50 GeV/c, and compared to the yield in pp collisions in the same kinematic region. The nuclear modification factor (R_{AA}) is found to be 1.5 ± 0.6 (stat) ± 0.5 (syst) for 7–15 GeV/c and 0.87 ± 0.30 (stat) ± 0.17 (syst) for 15–50 GeV/c, respectively. Within current uncertainties, the B_{s}^{0} results are consistent with models of strangeness enhancement and suppression by parton energy loss, as observed for the B^{+} mesons.
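The statement that the results are "consistent within current uncertainties" can be checked by simple arithmetic on the quoted numbers: combine the statistical and systematic uncertainties in quadrature and see how far each R_{AA} value lies from unity (no nuclear modification). This uses only the values quoted above.

```python
import math

# (R_AA, stat, syst) for the two p_T intervals quoted in the abstract.
for raa, stat, syst in [(1.5, 0.6, 0.5), (0.87, 0.30, 0.17)]:
    total = math.hypot(stat, syst)        # quadrature sum of uncertainties
    n_sigma = abs(raa - 1.0) / total      # deviation from unity in sigma
    print(raa, round(total, 2), round(n_sigma, 2))
```

Both intervals come out well under one standard deviation from R_{AA} = 1, so no significant modification can be claimed either way with the present precision.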

    Measurement of the tt¯ production cross section, the top quark mass, and the strong coupling constant using dilepton events in pp collisions at √s = 13 TeV

    A measurement of the top quark–antiquark pair production cross section σ_{tt¯} in proton–proton collisions at a centre-of-mass energy of 13 TeV is presented. The data correspond to an integrated luminosity of 35.9 fb^{−1}, recorded by the CMS experiment at the CERN LHC in 2016. Dilepton events (e^{±}μ^{∓}, μ^{+}μ^{−}, e^{+}e^{−}) are selected and the cross section is measured from a likelihood fit. For a top quark mass parameter in the simulation of m_{t}^{MC} = 172.5 GeV, the fit yields a measured cross section σ_{tt¯} = 803 ± 2 (stat) ± 25 (syst) ± 20 (lumi) pb, in agreement with the expectation from the standard model calculation at next-to-next-to-leading order. A simultaneous fit of the cross section and the top quark mass parameter in the POWHEG simulation is performed. The measured value of m_{t}^{MC} = 172.33 ± 0.14 (stat) ^{+0.66}_{−0.72} (syst) GeV is in good agreement with previous measurements. The resulting cross section is used, together with the theoretical prediction, to determine the top quark mass and to extract a value of the strong coupling constant with different sets of parton distribution functions.

    Search for dark matter produced in association with a Higgs boson decaying to a pair of bottom quarks in proton-proton collisions at √s = 13 TeV

    A search for dark matter produced in association with a Higgs boson decaying to a pair of bottom quarks is performed in proton-proton collisions at a center-of-mass energy of 13 TeV collected with the CMS detector at the LHC. The analyzed data sample corresponds to an integrated luminosity of 35.9 fb^{−1}. The signal is characterized by a large missing transverse momentum recoiling against a bottom quark-antiquark system that has a large Lorentz boost. The number of events observed in the data is consistent with the standard model background prediction. Results are interpreted in terms of limits both on parameters of the type-2 two-Higgs-doublet model extended by an additional light pseudoscalar boson a (2HDM+a) and on parameters of a baryonic Z′ simplified model. The 2HDM+a model is tested experimentally for the first time. For the baryonic Z′ model, the presented results constitute the most stringent constraints to date.