
    Likelihood decision functions

    In both classical and Bayesian approaches, statistical inference is unified and generalized by the corresponding decision theory. This is not the case for the likelihood approach to statistical inference, in spite of the manifest success of likelihood methods in statistics. The goal of the present work is to fill this gap by extending the likelihood approach to cover decision making as well. The resulting decision functions, called likelihood decision functions, generalize the usual likelihood methods (such as maximum likelihood (ML) estimators and likelihood ratio (LR) tests), in the sense that these methods appear as the likelihood decision functions in particular decision problems. In general, the likelihood decision functions maintain some key properties of the usual likelihood methods, such as equivariance and asymptotic optimality. By unifying and generalizing the likelihood approach to statistical inference, the present work offers a new perspective on statistical methodology and on the connections among likelihood methods.

    Parameter Estimation in Semi-Linear Models Using a Maximal Invariant Likelihood Function

    In this paper, we consider the problem of estimating semi-linear regression models. Using invariance arguments, Bhowmik and King (2001) derived the probability density functions of the maximal invariant statistic for the nonlinear component of these models. Using these density functions as likelihood functions allows us to estimate these models in a two-step process. First, the nonlinear component parameters are estimated by maximising the maximal invariant likelihood function. Then the nonlinear component, with the parameter values replaced by estimates, is treated as a regressor and ordinary least squares is used to estimate the remaining parameters. We report the results of a simulation study conducted to compare the accuracy of this approach with full maximum likelihood estimation. We find that maximising the maximal invariant likelihood function typically results in less biased and lower-variance estimates than those from full maximum likelihood.

    Keywords: maximum likelihood estimation, nonlinear modelling, simulation experiment, two-step estimation.
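    As a rough illustration of the two-step scheme described above, the sketch below estimates the nonlinear parameter by minimising a profile (concentrated) sum of squares, used here only as a simple stand-in for the maximal invariant likelihood of Bhowmik and King (2001), and then runs ordinary least squares with the fitted nonlinear component as a regressor. The model y = b0 + b1*exp(theta*x) + noise and all parameter values are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Simulated semi-linear model (all values invented for illustration):
# y = b0 + b1 * exp(theta * x) + noise.
rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 2.0, n)
theta_true, b0, b1 = 0.5, 1.0, 2.0
y = b0 + b1 * np.exp(theta_true * x) + rng.normal(0.0, 0.1, n)

def profile_rss(theta):
    # For fixed theta the linear parameters enter linearly, so they are
    # profiled out by least squares; the residual sum of squares is the
    # (Gaussian) profile objective for theta.
    X = np.column_stack([np.ones(n), np.exp(theta * x)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

# Step 1: estimate the nonlinear component parameter.
theta_hat = minimize_scalar(profile_rss, bounds=(0.01, 2.0), method="bounded").x

# Step 2: treat the fitted nonlinear component as a regressor and run OLS.
X_hat = np.column_stack([np.ones(n), np.exp(theta_hat * x)])
beta_hat, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
```

    With moderate noise the two-step estimates land close to the true values; the paper's point is that replacing the profile objective with the maximal invariant likelihood typically reduces bias and variance further.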

    Maximum Smoothed Likelihood Component Density Estimation in Mixture Models with Known Mixing Proportions

    In this paper, we propose a maximum smoothed likelihood method to estimate the component density functions of mixture models in which the mixing proportions are known and may differ among observations. The proposed estimates maximize a smoothed log likelihood function and inherit all the important properties of probability density functions. A majorization-minimization algorithm is suggested to compute the proposed estimates numerically. We show that, starting from any initial value, this algorithm increases the smoothed likelihood function and converges to estimates that maximize it. Furthermore, we theoretically establish the asymptotic convergence rate of our proposed estimators. An adaptive procedure is suggested to choose the bandwidths in our estimation procedure. Simulation studies show that the proposed method is more efficient than the existing method in terms of integrated squared errors. A real data example is further analyzed.
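    The iteration in this setting can be caricatured as follows. This is a heavily simplified sketch, not the paper's algorithm: it uses plain weighted kernel density estimates in place of the smoothed-likelihood majorization step, a fixed bandwidth instead of the adaptive choice, and invented data; it only conveys the alternation between responsibilities (using the known, observation-specific proportions) and component density updates.

```python
import numpy as np

# Toy data (invented): two components, known observation-specific proportions.
rng = np.random.default_rng(1)
n = 300
p0 = rng.uniform(0.2, 0.8, n)
pi = np.column_stack([p0, 1.0 - p0])          # known mixing proportions
z = (rng.uniform(size=n) < p0).astype(int)    # latent component labels
x = np.where(z == 1, rng.normal(-2, 1, n), rng.normal(2, 1, n))

grid = np.linspace(-6, 6, 241)
h = 0.5  # fixed bandwidth (the paper chooses bandwidths adaptively)

def weighted_kde(points, weights, u):
    # Weighted Gaussian kernel density estimate evaluated on grid u.
    w = weights / weights.sum()
    d = (u[:, None] - points[None, :]) / h
    return (w * np.exp(-0.5 * d**2)).sum(axis=1) / (h * np.sqrt(2 * np.pi))

# Alternate: responsibilities given current densities, then weighted
# kernel updates of each component density.
f = [np.full(grid.size, 1.0 / 12.0), np.full(grid.size, 1.0 / 12.0)]
for _ in range(30):
    fx = [np.interp(x, grid, fj) for fj in f]
    denom = pi[:, 0] * fx[0] + pi[:, 1] * fx[1]
    w = [pi[:, j] * fx[j] / denom for j in range(2)]
    f = [weighted_kde(x, w[j], grid) for j in range(2)]
```

    Because each update is a weighted kernel density estimate, the iterates remain genuine probability densities throughout, which mirrors the property the abstract highlights.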

    Renormalization group computation of likelihood functions for cosmological data sets

    I show how a renormalization group (RG) method can be used to incrementally integrate the information in cosmological large-scale structure data sets (including CMB, galaxy redshift surveys, etc.). I show numerical tests for Gaussian fields, where the method allows arbitrarily close to exact computation of the likelihood function in order ~N time, even for problems with no symmetry, compared to N^3 for brute-force linear algebra (where N is the number of data points; to be fair, methods already exist to solve the Gaussian problem in at worst N log N time, and this method will not necessarily be faster in practice). The method requires no sampling or other Monte Carlo (random) element. Non-linearity/non-Gaussianity can be accounted for to the extent that terms generated by integrating out small-scale modes can be projected onto a sufficient basis, e.g., at least in the sufficiently perturbative regime. The formulas to evaluate are straightforward and require no understanding of quantum field theory, but this paper may also serve as a pedagogical introduction to Wilsonian RG for astronomers.

    Comment: 13 pg, 4 fi

    Maximum Likelihood for Matrices with Rank Constraints

    Maximum likelihood estimation is a fundamental optimization problem in statistics. We study this problem on manifolds of matrices with bounded rank. These represent mixtures of distributions of two independent discrete random variables. We determine the maximum likelihood degree for a range of determinantal varieties, and we apply numerical algebraic geometry to compute all critical points of their likelihood functions. This led to the discovery of maximum likelihood duality between matrices of complementary ranks, a result proved subsequently by Draisma and Rodriguez.

    Comment: 22 pages, 1 figur
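    The rank-1 case of this problem is the classical independence model, where the constrained MLE has a closed form: the outer product of the row and column marginals of the normalized table. A minimal sketch with an invented 3x3 table of counts:

```python
import numpy as np

# Invented 3x3 table of counts for two discrete random variables.
U = np.array([[4.0, 2.0, 2.0],
              [2.0, 4.0, 2.0],
              [2.0, 2.0, 4.0]])
P = U / U.sum()

# Rank-1 (independence model) MLE: outer product of the marginals.
p_hat = np.outer(P.sum(axis=1), P.sum(axis=0))
loglike = (U * np.log(p_hat)).sum()
```

    For intermediate rank constraints no such closed form exists, and the likelihood function can have many critical points; counting them (the ML degree) and computing them numerically is what the paper addresses.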