    Model Selection with the Loss Rank Principle

    A key issue in statistics and machine learning is to automatically select the "right" model complexity, e.g., the number of neighbors to be averaged over in k-nearest-neighbor (kNN) regression or the polynomial degree in polynomial regression. We suggest a novel principle, the Loss Rank Principle (LoRP), for model selection in regression and classification. It is based on the loss rank, which counts how many other (fictitious) data would be fitted better. LoRP selects the model with minimal loss rank. Unlike most penalized maximum likelihood variants (AIC, BIC, MDL), LoRP depends only on the regression functions and the loss function; it works without a stochastic noise model and is directly applicable to any non-parametric regressor, such as kNN. Comment: 31 LaTeX pages, 1 figure.
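
    As a rough illustration (not the paper's exact construction), the sketch below applies LoRP to choose k in kNN regression via the linear-smoother form: for y_hat = S_k y, the loss rank reduces, up to constants, to (n/2) log RSS(k) minus (1/2) log det((I - S_k)^T (I - S_k)). The toy data and the small ridge term that keeps the singular determinant finite are assumptions made for this sketch.

        import numpy as np

        def knn_hat_matrix(x, k):
            # Row i of S averages the k nearest neighbours of x[i] (self included),
            # so the kNN regressor is the linear smoother y_hat = S @ y.
            n = len(x)
            S = np.zeros((n, n))
            for i in range(n):
                neighbours = np.argsort(np.abs(x - x[i]))[:k]
                S[i, neighbours] = 1.0 / k
            return S

        def loss_rank(y, S, alpha=1e-3):
            # LR(k) ~ (n/2) log RSS - (1/2) log det((I-S)^T (I-S) + alpha*I).
            # The ridge alpha is an illustrative fix for directions the smoother
            # fits perfectly (I - S is singular for kNN).
            n = len(y)
            A = np.eye(n) - S
            rss = float(np.sum((A @ y) ** 2)) + alpha
            _, logdet = np.linalg.slogdet(A.T @ A + alpha * np.eye(n))
            return 0.5 * n * np.log(rss) - 0.5 * logdet

        # LoRP model selection on toy data: pick the k with minimal loss rank.
        rng = np.random.default_rng(0)
        x = np.sort(rng.uniform(0.0, 1.0, 60))
        y = np.sin(2.0 * np.pi * x) + 0.2 * rng.normal(size=60)
        best_k = min(range(2, 21), key=lambda k: loss_rank(y, knn_hat_matrix(x, k)))
        print("LoRP selects k =", best_k)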

    The influence of CEO characteristics on corporate environmental performance of SMEs: Evidence from Vietnamese SMEs

    Drawing on upper echelon theory, this study investigates the impact of chief executive officers' (CEOs') demographic characteristics on corporate environmental performance (CEP) in small and medium-sized enterprises (SMEs). We hypothesize that CEO characteristics, including gender, age, basic educational level, professional educational level, political connections, and ethnicity, affect SMEs' environmental performance. Using cross-sectional data on 810 Vietnamese SMEs, this study provides evidence that female CEOs and CEOs' educational level (both basic and professional) are positively related to the probability of good environmental performance. Consistent with the influence of the institutional environment on CEP, we also find that political connections have a negative effect on CEP in the context of Vietnam. Another finding is that SMEs led by chief executives from ethnic minority groups show a higher probability of good environmental performance than companies led by Kinh chief executives. Since CEP is an essential dimension of corporate social responsibility and a strategic decision for SMEs, it is crucial for companies to select appropriate CEOs based on their demographic characteristics.

    Variational Bayes with Intractable Likelihood

    Variational Bayes (VB) is rapidly becoming a popular tool for Bayesian inference in statistical modeling. However, existing VB algorithms are restricted to cases where the likelihood is tractable, which precludes the use of VB in many interesting settings, such as state space models and approximate Bayesian computation (ABC). This paper extends the scope of VB to cases where the likelihood is intractable but can be estimated unbiasedly. The proposed VB method therefore makes it possible to carry out Bayesian inference in many statistical applications, including state space models and ABC. The method is generic in the sense that it can be applied to almost any statistical model without requiring much model-specific derivation, a drawback of many existing VB algorithms. We also show how the proposed method can be used to obtain highly accurate VB approximations of marginal posterior distributions. Comment: 40 pages, 6 figures.
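
    The sketch below illustrates the core idea under simplifying assumptions (it is not the paper's algorithm): score-function stochastic-gradient VB for a toy Gaussian model, with the exact log-likelihood replaced by the log of an unbiased likelihood estimate. The synthetic log-normal noise stands in for, e.g., a particle-filter or importance-sampling likelihood estimator.

        import numpy as np

        rng = np.random.default_rng(1)
        y = rng.normal(2.0, 1.0, size=50)            # toy data: y_i ~ N(theta, 1)
        tau = 0.5                                    # noise level of the estimator

        def log_lik_hat(theta):
            # Log of an unbiased likelihood estimate: the exact Gaussian
            # log-likelihood times log-normal noise with mean one.
            exact = -0.5 * np.sum((y - theta) ** 2) - 0.5 * y.size * np.log(2.0 * np.pi)
            return exact + tau * rng.normal() - 0.5 * tau ** 2

        def log_prior(theta):
            return -0.5 * theta ** 2                 # prior: theta ~ N(0, 1)

        # Variational family q = N(mu, sigma^2); score-function ELBO gradients,
        # a within-batch baseline as control variate, and clipping for stability.
        mu, log_sig = 0.0, 0.0
        for t in range(3000):
            sig = np.exp(log_sig)
            thetas = rng.normal(mu, sig, size=10)
            # noisy ELBO integrand log p(y, theta) - log q(theta), constants dropped
            h = np.array([log_prior(th) + log_lik_hat(th)
                          + 0.5 * ((th - mu) / sig) ** 2 + log_sig for th in thetas])
            adv = h - h.mean()
            g_mu = np.clip(np.mean(adv * (thetas - mu) / sig ** 2), -10.0, 10.0)
            g_ls = np.clip(np.mean(adv * (((thetas - mu) / sig) ** 2 - 1.0)), -10.0, 10.0)
            lr = 1.0 / (50.0 + t)
            mu, log_sig = mu + lr * g_mu, log_sig + lr * g_ls
        print("VB approximation: N(%.2f, %.2f^2)" % (mu, np.exp(log_sig)))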

    Model Selection by Loss Rank for Classification and Unsupervised Learning

    Hutter (2007) recently introduced the loss rank principle (LoRP) as a general-purpose principle for model selection. The LoRP enjoys many attractive properties and deserves further investigation. It has been well studied in the regression framework by Hutter and Tran (2010). In this paper, we study the LoRP in the classification framework and develop it further for model selection problems in unsupervised learning, such as cluster analysis and graphical modelling, where the main interest is to describe the associations between input measurements. Theoretical properties and simulation studies are presented.

    Quantum Natural Gradient for Variational Bayes

    Variational Bayes (VB) is a critical method in machine learning and statistics, underpinning the recent success of Bayesian deep learning. The natural gradient is an essential component of efficient VB estimation, but it is prohibitively computationally expensive in high dimensions. We propose a hybrid quantum-classical algorithm to improve the scaling properties of natural gradient computation and make VB a truly computationally efficient method for Bayesian inference in high-dimensional settings. The algorithm leverages matrix inversion from the linear systems algorithm of Harrow, Hassidim, and Lloyd [Phys. Rev. Lett. 103, 150502 (2009)] (HHL). We demonstrate that the matrix to be inverted is sparse and that the classical-quantum-classical handoffs are sufficiently economical to preserve computational efficiency, making the natural gradient for VB an ideal application of HHL. We prove that, under standard conditions, the VB algorithm with quantum natural gradient is guaranteed to converge. Our regression-based natural gradient formulation is also highly useful for classical VB.
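
    To make concrete what the quantum subroutine is asked to do, here is a purely classical sketch of natural-gradient VB: each step solves the linear system F(lambda) x = grad ELBO, where F is the Fisher information of the variational family, and the HHL routine would replace the np.linalg.solve call. The Gaussian variational family and the toy target below are assumptions, not the paper's setup.

        import numpy as np

        rng = np.random.default_rng(2)

        def dlog_target(theta):
            # Gradient of a toy unnormalized log posterior: N(3, 0.5^2).
            return -(theta - 3.0) / 0.25

        # Variational family q = N(mu, sigma^2), parameters (mu, log sigma).
        mu, log_sig = 0.0, 0.0
        for t in range(500):
            sig = np.exp(log_sig)
            eps = rng.normal(size=32)
            theta = mu + sig * eps                    # reparameterization trick
            g = np.array([
                dlog_target(theta).mean(),                     # d ELBO / d mu
                (dlog_target(theta) * sig * eps).mean() + 1.0  # d ELBO / d log_sig
            ])                                        # (+1 is the entropy gradient)
            # Fisher information of q in (mu, log sigma) coordinates.
            F = np.diag([1.0 / sig ** 2, 2.0])
            nat_g = np.linalg.solve(F, g)             # <-- the solve HHL would replace
            mu += 0.1 * nat_g[0]
            log_sig += 0.1 * nat_g[1]
        print("q = N(%.2f, %.2f^2); target N(3.00, 0.50^2)" % (mu, np.exp(log_sig)))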

    Exact ABC using Importance Sampling

    Approximate Bayesian Computation (ABC) is a powerful method for carrying out Bayesian inference when the likelihood is computationally intractable. However, a drawback of ABC is that it is an approximate method that induces a systematic error, because a tolerance level must be set to make the computation tractable. How to optimally set this tolerance level has been the subject of extensive research. This paper proposes an ABC algorithm based on importance sampling that estimates expectations with respect to the exact posterior distribution given the observed summary statistics. This overcomes the need to select a tolerance level. By exact we mean that there is no systematic error and that the Monte Carlo error can be made arbitrarily small by increasing the number of importance samples. We provide a formal justification for the method and study its convergence properties. The method is illustrated in two applications, and the empirical results suggest that the proposed ABC-based estimators consistently converge to the true values as the number of importance samples increases. Our approach applies more generally to any importance sampling problem where an unbiased estimate of the likelihood is required.
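
    The sketch below shows the estimator class the paper builds on, with an illustrative toy model: self-normalized importance sampling whose weights use an unbiased, non-negative estimate of the summary-statistic likelihood, so no ABC tolerance level appears. The synthetic noisy estimator stands in for one built from model simulations and is an assumption of this sketch.

        import numpy as np

        rng = np.random.default_rng(3)
        n, s_obs = 30, 1.8                    # observed summary: mean of n draws
        tau = 0.3                             # noise level of the estimator

        def lik_hat(theta):
            # Unbiased, non-negative estimate of p(s_obs | theta): here the exact
            # N(theta, 1/n) density times log-normal noise with mean one.
            exact = np.sqrt(n / (2.0 * np.pi)) * np.exp(-0.5 * n * (s_obs - theta) ** 2)
            return exact * np.exp(tau * rng.normal() - 0.5 * tau ** 2)

        def prior_pdf(theta):                 # prior: theta ~ N(0, 1)
            return np.exp(-0.5 * theta ** 2) / np.sqrt(2.0 * np.pi)

        # Self-normalized importance sampling with proposal q = N(s_obs, 1);
        # no tolerance level enters the weights.
        M = 20000
        theta = rng.normal(s_obs, 1.0, size=M)
        q_pdf = np.exp(-0.5 * (theta - s_obs) ** 2) / np.sqrt(2.0 * np.pi)
        w = np.array([prior_pdf(th) * lik_hat(th) for th in theta]) / q_pdf
        w /= w.sum()
        post_mean = float(np.sum(w * theta))  # exact value: n*s_obs/(n+1) = 1.742
        print("estimated posterior mean: %.3f" % post_mean)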