    A novel methodology to create generative statistical models of interconnects

    This paper addresses the problem of constructing a generative statistical model for an interconnect starting from a limited set of S-parameter samples, which are obtained by simulating or measuring the interconnect for a few random realizations of its stochastic physical properties. These original samples are first converted into a pole-residue representation with common poles. The corresponding residues are modeled as a correlated stochastic process by means of principal component analysis and kernel density estimation. The obtained model allows generating new samples with statistics similar to those of the original data. A passivity check is performed on the generated samples to retain only passive data. The proposed approach is applied to a representative coupled microstrip line example.
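    The residue-modelling step lends itself to a compact illustration. Below is a minimal Python sketch of principal component analysis followed by kernel density estimation and resampling, assuming the fitted residues are arranged as a samples-by-features matrix; the data, dimensions and variable names are invented placeholders, not the authors' code.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical stand-in for the fitted residues: 50 interconnect
# realizations, each described by 20 residues of the common poles.
residues = rng.normal(size=(50, 20)) @ rng.normal(size=(20, 20))

# PCA via SVD: decorrelate the residues and keep the leading components.
mean = residues.mean(axis=0)
_, _, Vt = np.linalg.svd(residues - mean, full_matrices=False)
k = 5                                   # retained components (illustrative)
scores = (residues - mean) @ Vt[:k].T   # projections onto the first k PCs

# KDE on the low-dimensional scores captures their joint distribution.
kde = gaussian_kde(scores.T)            # gaussian_kde wants (dims, samples)

# Draw new score vectors and map them back to residue space.
new_residues = kde.resample(1000, seed=1).T @ Vt[:k] + mean
```

    In the paper's flow, each generated residue vector would then be mapped back to an S-parameter model and screened by the passivity check, with only passive realizations retained.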

    A CONSISTENT SPECIFICATION TEST FOR MODELS DEFINED BY CONDITIONAL MOMENT RESTRICTIONS

    This article addresses statistical inference in models defined by conditional moment restrictions. Our motivation comes from two observations. First, the generalized method of moments, which is the most popular methodology for statistical inference in these models, provides a unified framework but yields inconsistent statistical procedures. Second, consistent specification testing for these models has abandoned a unified approach by treating parameter estimation and model checking as unrelated. In this article, we provide a consistent specification test, which allows us to propose a simple unified methodology that yields consistent statistical procedures. Although the test enjoys optimality properties, the asymptotic distribution of the test statistic depends on the specific data-generating process, so standard asymptotic inference procedures are not feasible. Nevertheless, we show that a simple, original wild bootstrap procedure properly estimates the asymptotic null distribution of the test statistic.
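    To make the bootstrap step concrete, here is a generic wild-bootstrap skeleton in Python, with a Cramer-von-Mises-type statistic on residual partial sums standing in for the paper's actual test statistic; the model, data and statistic below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: residuals from a correctly specified conditional-moment model.
x = rng.normal(size=200)
e = rng.normal(size=200)
order = np.argsort(x)

def cvm_statistic(r):
    """Cramer-von-Mises-type statistic on residual partial sums, ordered
    by the conditioning variable (a stand-in for the paper's statistic)."""
    s = np.cumsum(r[order]) / np.sqrt(len(r))
    return np.mean(s ** 2)

t_obs = cvm_statistic(e)

# Wild bootstrap: multiply residuals by Rademacher weights to mimic
# draws from the null distribution of the statistic.
n_boot = 999
t_boot = np.empty(n_boot)
for b in range(n_boot):
    w = rng.choice([-1.0, 1.0], size=e.shape)
    t_boot[b] = cvm_statistic(e * w)

p_value = (1 + np.sum(t_boot >= t_obs)) / (n_boot + 1)
print(f"wild-bootstrap p-value: {p_value:.3f}")
```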

    An evaluation of the quality of statistical design and analysis of published medical research: results from a systematic survey of general orthopaedic journals

    Background: The application of statistics in reported research in trauma and orthopaedic surgery has become ever more important and complex. Despite the extensive use of statistical analysis, it is still a subject that is often not conceptually well understood, resulting in clear methodological flaws and inadequate reporting in many papers. Methods: A detailed statistical survey sampled 100 representative orthopaedic papers using a validated questionnaire that assessed the quality of the trial design and statistical analysis methods. Results: The survey found evidence of failings in study design, statistical methodology and presentation of the results. Overall, in 17% (95% confidence interval: 10–26%) of the studies investigated the conclusions were not clearly justified by the results; in 39% (30–49%) of studies a different analysis should have been undertaken; and in 17% (10–26%) a different analysis could have made a difference to the overall conclusions. Conclusion: It is only through improved dialogue between statistician, clinician, reviewer and journal editor that the failings in design methodology and analysis highlighted by this survey can be addressed.
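    The quoted intervals are consistent with an exact (Clopper-Pearson) binomial confidence interval for a proportion of 17 out of 100; a quick check, assuming SciPy is available:

```python
from scipy.stats import beta

# Exact (Clopper-Pearson) 95% CI for 17 "events" in 100 studies.
k, n, alpha = 17, 100, 0.05
lower = beta.ppf(alpha / 2, k, n - k + 1)
upper = beta.ppf(1 - alpha / 2, k + 1, n - k)
print(f"{lower:.0%}-{upper:.0%}")   # ~10%-26%, matching the reported CI
```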

    Appropriate Methodology of Statistical Tests According to Prior Probability and Required Objectivity

    In contrast to its common definition and calculation, the interpretation of p-values diverges among statisticians. Since the p-value is the basis of various methodologies, this divergence has led to a variety of test methodologies and evaluations of test results. This chaotic situation has complicated the application of tests and decision processes. Here, the origin of the divergence is found in the prior probability of the test. Effects of differences in Pr(H0 = true) on the character of p-values are investigated by comparing real microarray data and its artificial imitations as subjects of Student's t-tests. The importance of the prior probability is also discussed in terms of the applicability of Bayesian approaches. A suitable methodology is found in accordance with the prior probability and the purpose of the test. (Comment: 16 pages, 3 figures, and 1 table)
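    The core idea, that the same p-value cutoff behaves very differently depending on Pr(H0 = true), can be illustrated with a toy simulation; the sampling scheme and effect size below are invented stand-ins for the microarray comparison described above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_discovery_share(prior_h0, n_tests=5000, n=10, effect=1.5):
    """Share of 'significant' two-sample t-tests coming from true nulls
    when a fraction `prior_h0` of hypotheses is actually null."""
    is_null = rng.random(n_tests) < prior_h0
    pvals = np.empty(n_tests)
    for i in range(n_tests):
        shift = 0.0 if is_null[i] else effect
        pvals[i] = stats.ttest_ind(rng.normal(0.0, 1.0, n),
                                   rng.normal(shift, 1.0, n)).pvalue
    significant = pvals < 0.05
    return is_null[significant].mean()

# The higher Pr(H0 = true), the less a p < 0.05 result means.
for prior in (0.5, 0.9, 0.99):
    print(f"Pr(H0=true)={prior:.2f} -> "
          f"{false_discovery_share(prior):.0%} of hits are false")
```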

    Bayesian statistical analysis of ground-clutter for the relative calibration of dual polarization weather radars

    A new data processing methodology, based on the statistical analysis of ground-clutter echoes and aimed at investigating the stability of the weather radar relative calibration, is presented. A Bayesian classification scheme has been used to identify meteorological and/or ground-clutter echoes. The outcome is evaluated on a training dataset using statistical score indexes through the comparison with a deterministic clutter map. After discriminating the ground clutter areas, we have focused on the spatial analysis of robust and stable returns by using an automated region-merging algorithm. The temporal series of the ground-clutter statistical parameters, extracted from the spatial analysis and expressed in terms of percentile and mean values, have been used to estimate the relative clutter calibration and its uncertainty for both co-polar and differential reflectivity. The proposed methodology has been applied to a dataset collected by a C-band weather radar in southern Italy.
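    As a rough illustration of the Bayesian classification step, the sketch below applies Bayes' rule to a single invented feature (temporal variability of reflectivity) with Gaussian class-conditional densities; the paper's actual features, densities and priors are not specified here and will differ.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative training feature: temporal standard deviation of
# reflectivity (dB). Ground clutter is stable; weather fluctuates.
clutter_sd = rng.gamma(2.0, 0.4, 500)
weather_sd = rng.gamma(6.0, 0.8, 500)

# One Gaussian class-conditional density per class, plus class priors
# (all values invented for the sake of the example).
params = {"clutter": (clutter_sd.mean(), clutter_sd.std()),
          "weather": (weather_sd.mean(), weather_sd.std())}
prior = {"clutter": 0.3, "weather": 0.7}

def posterior_clutter(sd):
    """Bayes' rule: P(clutter | observed feature value)."""
    post = {c: norm.pdf(sd, *params[c]) * prior[c] for c in params}
    return post["clutter"] / sum(post.values())

print(posterior_clutter(0.8))   # stable echo -> likely clutter
print(posterior_clutter(5.0))   # fluctuating echo -> likely weather
```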

    Accurate simulations of the interplay between process and statistical variability for nanoscale FinFET-based SRAM cell stability

    In this paper we illustrate how, using advanced atomistic TCAD tools, the interplay between long-range process variation and short-range statistical variability in FinFETs can be accurately modelled and simulated for the purposes of Design-Technology Co-Optimization (DTCO). The proposed statistical simulation and compact modelling methodology is demonstrated via a comprehensive evaluation of the impact of FinFET variability on SRAM cell stability.
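    The abstract gives no equations, but the statistical composition of the two variability sources can be sketched conceptually; the Monte Carlo below uses entirely invented numbers and is in no way a substitute for the atomistic TCAD simulation described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Conceptual Monte Carlo (not a TCAD simulation; all numbers invented):
# every die shares a long-range process shift, and each FinFET adds an
# independent short-range statistical component on top of it.
n_dice, devices_per_die = 200, 6            # six transistors per SRAM cell
process = rng.normal(0.0, 0.015, n_dice)    # 15 mV die-to-die sigma
local = rng.normal(0.0, 0.025, (n_dice, devices_per_die))  # 25 mV local

vth = 0.35 + process[:, None] + local       # per-device threshold voltage (V)

# Independent variances add: the combined spread exceeds either source.
print(f"total sigma = {vth.std() * 1e3:.1f} mV "
      f"(expect ~{np.hypot(15, 25):.1f} mV)")
```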

    APSS - Software support for decision making in statistical process control

    Purpose: SPC can be defined as a problem-solving process that incorporates many separate decisions, including selection of the control chart based on verification of the data presumptions. No professional statistical software supports making these decisions in a comprehensive way. Methodology/Approach: There are many excellent professional statistical programs, but none offers a complete methodology for selecting the best control chart. The proposed Excel program APSS (Analysis of the Process Statistical Stability) solves this problem and also offers additional learning functions. Findings: The created software links together the separate functions of selected professional statistical programs (verification of data presumptions, construction and interpretation of control charts) and supports active learning in this field. Research Limitation/Implication: The proposed software covers the control charts available in Statgraphics Centurion and Minitab, but it can readily be modified for other professional statistical software. Originality/Value of paper: The paper presents original software created within the research activities of the Department of Quality Management, FMT, VSB-TUO, Czech Republic. The software links together the separate functions of professional statistical packages needed for comprehensive statistical process control and is a strong tool for active learning of statistical process control tasks.
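    As a flavour of the decision logic such a tool automates, here is a deliberately simplified chart-selection function: it verifies two data presumptions (normality and independence) and suggests a chart type. The thresholds and chart recommendations are illustrative assumptions, not APSS's actual rules.

```python
import numpy as np
from scipy import stats

def select_control_chart(subgroups, alpha=0.05):
    """Toy version of the decision process described above: verify the
    data presumptions, then suggest a chart type (simplified)."""
    flat = np.concatenate(subgroups)
    normal = stats.shapiro(flat).pvalue > alpha
    # Rough independence check: lag-1 autocorrelation within +-2/sqrt(n).
    r1 = np.corrcoef(flat[:-1], flat[1:])[0, 1]
    independent = abs(r1) < 2.0 / np.sqrt(len(flat))
    if not independent:
        return "autocorrelated data: consider a chart on model residuals"
    if normal:
        return "Shewhart x-bar / R chart"
    return "non-normal data: transform first or use an alternative chart"

rng = np.random.default_rng(0)
subgroups = [rng.normal(10.0, 1.0, 5) for _ in range(25)]
print(select_control_chart(subgroups))
```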