
    Bayesian Restricted Likelihood Methods: Conditioning on Insufficient Statistics in Bayesian Regression

    Bayesian methods have proven themselves to be successful across a wide range of scientific problems and have many well-documented advantages over competing methods. However, these methods run into difficulties for two major and prevalent classes of problems: handling data sets with outliers and dealing with model misspecification. We outline the drawbacks of previous solutions to both of these problems and propose a new method as an alternative. When working with the new method, the data are summarized through a set of insufficient statistics, targeting inferential quantities of interest, and the prior distribution is updated with the summary statistics rather than the complete data. By careful choice of conditioning statistics, we retain the main benefits of Bayesian methods while reducing the sensitivity of the analysis to features of the data not captured by the conditioning statistics. For reducing sensitivity to outliers, classical robust estimators (e.g., M-estimators) are natural choices for conditioning statistics. A major contribution of this work is the development of a data-augmented Markov chain Monte Carlo (MCMC) algorithm for the linear model and a large class of summary statistics. We demonstrate the method on simulated and real data sets containing outliers and subject to model misspecification. Success is manifested in better predictive performance for data points of interest as compared to competing methods.
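The robustness idea behind conditioning on M-estimators can be illustrated with a minimal sketch: a Huber M-estimate of location is far less sensitive to gross outliers than the sample mean, which is what makes such estimators natural summary statistics to condition on. This is an illustrative sketch only; the tuning constant `c=1.345` and the MAD-based scale are common textbook defaults, not values taken from the paper.

```python
import numpy as np

def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted means.

    A robust summary statistic of the kind the abstract suggests
    conditioning on (defaults here are illustrative, not the paper's).
    """
    mu = np.median(x)
    scale = 1.4826 * np.median(np.abs(x - mu))  # MAD-based scale
    if scale == 0:
        return mu
    for _ in range(max_iter):
        r = np.abs((x - mu) / scale)
        # Huber weights: full weight inside |r| <= c, downweighted outside
        w = np.where(r <= c, 1.0, c / np.maximum(r, 1e-12))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 200)
contaminated = np.concatenate([clean, [50.0, 60.0, 70.0]])  # gross outliers

print(np.mean(contaminated))         # pulled far from 0 by the outliers
print(huber_location(contaminated))  # stays near the bulk of the data
```

Updating the prior with a statistic like this, rather than the full data, is what shields the posterior from observations the model does not describe well.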

    Robust Mean-Variance Portfolio Selection

    This paper investigates model risk issues in the context of mean-variance portfolio selection. We analytically and numerically show that, under model misspecification, the use of statistically robust estimates instead of the widely used classical sample mean and covariance is highly beneficial for the stability properties of the mean-variance optimal portfolios. Moreover, we perform simulations leading to the conclusion that, under classical estimation, model risk bias dominates estimation risk bias. Finally, we suggest a diagnostic tool to warn the analyst of the presence of extreme returns that have an abnormally large influence on the optimization results.

    Keywords: Mean-variance efficient frontier; Outliers; Model risk; Robust estimation
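The instability the abstract describes is easy to reproduce: a single extreme return inflates the sample covariance and drags the minimum-variance weights away from the values the bulk of the data supports. The sketch below uses MAD-based winsorization as one simple robust pre-treatment; it stands in for, and is not necessarily the same as, the robust estimators studied in the paper, and all data are simulated for illustration.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance weights: w proportional to inv(Sigma) @ 1."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

def winsorize(x, k=2.5):
    """Clip each column at median +/- k * MAD (a simple robust
    pre-treatment; illustrative, not the paper's estimator)."""
    med = np.median(x, axis=0)
    mad = 1.4826 * np.median(np.abs(x - med), axis=0)
    return np.clip(x, med - k * mad, med + k * mad)

rng = np.random.default_rng(1)
returns = rng.normal(0.001, 0.01, size=(500, 4))  # 4 assets, 500 days
returns[0, 0] = 0.8  # a single extreme return in asset 0

w_classical = min_variance_weights(np.cov(returns, rowvar=False))
w_robust = min_variance_weights(np.cov(winsorize(returns), rowvar=False))

# One outlier inflates asset 0's sample variance, so the classical
# weights nearly exclude it; the robust weights stay near 1/4 each.
print(w_classical)
print(w_robust)
```

The gap between the two weight vectors is exactly the kind of abnormal influence the paper's diagnostic tool is meant to flag.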

    ROBUST PARAMETER DESIGN IN COMPLEX ENGINEERING SYSTEMS:

    Many industrial firms seek the systematic reduction of variability as a primary means for reducing production cost and material waste without sacrificing product quality or process efficiency. Despite notable advancements in quality-based estimation and optimization approaches aimed at achieving this goal, various gaps remain between current methodologies and the conditions observed in modern industrial environments. In many cases, models rely on assumptions that either limit their usefulness or diminish the reliability of the estimated results. This includes instances where models are generalized to a specific set of assumed process conditions, which constrains their applicability to a wider array of industrial problems. However, such generalizations often do not hold in practice. If these realities are ignored, the derived estimates can be misleading and, once applied to optimization schemes, can result in suboptimal solutions and dubious recommendations to decision makers. The goal of this research is to develop improved quality models that more fully explore innate process conditions, rely less on theoretical assumptions, and have extensions to an array of more realistic industrial environments. Several key areas are addressed in which further research can reinforce foundations, extend existing knowledge and applications, and narrow the gap between academia and industry. These include the integration of a more comprehensive approach to data analysis, the development of conditions-based approaches to tier-one and tier-two estimation, achieving cost robustness in the face of dynamic process variability, the development of new strategies for eliminating variability at the source, and the integration of trade-off analyses that balance the need for enhanced precision against associated costs. Pursuant to a detailed literature review, various quality models are proposed, and numerical examples are used to validate their use.

    Testing Measurement Invariance with Ordinal Missing Data: A Comparison of Estimators and Missing Data Techniques

    Ordinal missing data are common in measurement equivalence/invariance (ME/I) testing studies. However, there is a lack of guidance on the appropriate method to deal with ordinal missing data in ME/I testing. Five methods may be used to deal with ordinal missing data in ME/I testing, including the continuous full information maximum likelihood estimation method (FIML), continuous robust FIML (rFIML), FIML with probit links (pFIML), FIML with logit links (lFIML), and the mean- and variance-adjusted weighted least squares estimation method combined with pairwise deletion (WLSMV_PD). The current study evaluates the relative performance of these methods in producing valid chi-square difference tests (Δχ2) and accurate parameter estimates. The results suggest that all methods except for WLSMV_PD can reasonably control the Type I error rates of Δχ2 tests and maintain sufficient power to detect noninvariance in most conditions. Only pFIML and lFIML yield accurate factor loading estimates and standard errors across all the conditions. Recommendations are provided to researchers based on the results.
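The Δχ2 test the abstract evaluates is a standard nested-model comparison: the fit of the constrained (invariance) model is compared against the freely estimated model, with the difference referred to a chi-square distribution. A minimal sketch, using hypothetical fit statistics chosen purely for illustration:

```python
from scipy.stats import chi2

def chisq_diff_test(chi2_constrained, df_constrained, chi2_free, df_free):
    """Nested-model chi-square difference test as used in ME/I testing.

    A significant result means the invariance constraints worsen fit,
    i.e., evidence of noninvariance.
    """
    delta = chi2_constrained - chi2_free      # difference in fit statistics
    ddf = df_constrained - df_free            # difference in degrees of freedom
    p = chi2.sf(delta, ddf)                   # upper-tail p-value
    return delta, ddf, p

# Hypothetical fit statistics (not from the study), for illustration only:
# constrained model chi2 = 112.4 on 48 df, free model chi2 = 95.1 on 42 df
delta, ddf, p = chisq_diff_test(112.4, 48, 95.1, 42)
print(delta, ddf, p)
```

Note that for rFIML and WLSMV the raw difference must additionally be scaled before being referred to the chi-square distribution; the plain subtraction above applies only to the unadjusted statistics.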