
    On Bayes estimators with uniform priors on spheres and their comparative performance with maximum likelihood estimators for estimating bounded multivariate normal means

    For independently distributed observables X_i ~ N(θ_i, σ²), i = 1, …, p, we consider estimating the vector θ = (θ_1, …, θ_p)′ with loss ‖d − θ‖² under the constraint ‖θ − τ‖ ≤ m, where τ = (τ_1, …, τ_p)′ and τ_1, …, τ_p, σ², m are known. In comparing the risk performance of the Bayes estimators δ_α associated with uniform priors on spheres of radius α centered at τ with that of the maximum likelihood estimator, we make use of Stein's unbiased estimate of risk technique, Karlin's sign change arguments, and a conditional risk analysis to obtain, for fixed (m, p), necessary and sufficient conditions on α for δ_α to dominate the maximum likelihood estimator. Large-sample determinations of these conditions are provided. Both cases where all such δ_α dominate and cases where no such δ_α dominates are elicited. We establish, as a particular case, that the boundary uniform Bayes estimator δ_m dominates the maximum likelihood estimator if and only if m does not exceed an explicit cutoff.
    Keywords: restricted parameters; point estimation; squared error loss; dominance; maximum likelihood; Bayes estimators; multivariate normal; unbiased estimate of risk; sign changes; modified Bessel functions
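
    As a rough illustration of the comparison described above, the following Monte Carlo sketch contrasts the quadratic risk of the ball-constrained MLE with that of the boundary uniform Bayes estimator δ_m in the centered case (τ = 0, σ = 1). The posterior mean under a uniform prior on a sphere is the standard von Mises-Fisher mean-direction expression, which involves the modified Bessel functions named in the keywords; the values of p, m, and the grid of θ points are illustrative choices, not values from the paper.

```python
# Monte Carlo comparison of the ball-constrained MLE and the uniform-on-sphere
# Bayes estimator for X ~ N_p(theta, I) under ||theta|| <= m (centered case:
# tau = 0, sigma = 1). Illustrative sketch only; p, m, and the theta grid are
# assumptions, not values from the paper.
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def mle_ball(x, m):
    """MLE of theta under ||theta|| <= m: project x onto the ball."""
    r = np.linalg.norm(x)
    return x if r <= m else (m / r) * x

def bayes_sphere(x, alpha, p):
    """Posterior mean under a uniform prior on the sphere of radius alpha.

    The posterior of theta/alpha given x is von Mises-Fisher with
    concentration alpha*||x||, whose mean resultant length is the Bessel
    ratio I_{p/2}/I_{p/2-1}.
    """
    r = np.linalg.norm(x)
    if r == 0.0:
        return np.zeros(p)  # by symmetry
    kappa = alpha * r
    ratio = iv(p / 2, kappa) / iv(p / 2 - 1, kappa)
    return alpha * ratio * x / r

def risk(estimator, theta, n_rep=20000, seed=0):
    """Monte Carlo estimate of E ||d(X) - theta||^2 at a fixed theta."""
    rng = np.random.default_rng(seed)
    x = theta + rng.standard_normal((n_rep, theta.size))
    err = np.array([estimator(xi) for xi in x]) - theta
    return np.mean(np.sum(err**2, axis=1))

p, m = 3, 1.5
for t in [0.0, 0.5 * m, m]:  # points along a ray inside the ball
    theta = np.r_[t, np.zeros(p - 1)]
    r_mle = risk(lambda x: mle_ball(x, m), theta)
    r_bu = risk(lambda x: bayes_sphere(x, m, p), theta)  # boundary uniform prior
    print(f"||theta|| = {t:4.2f}:  MLE risk = {r_mle:.3f},  delta_m risk = {r_bu:.3f}")
```

    Qualitatively, for small m the boundary uniform estimator undercuts the MLE throughout the ball, while for large m it loses badly near the center; the paper's necessary and sufficient conditions make this trade-off precise.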

    Topics at the interface of optimization and statistics

    Optimization has long been an important tool in statistics. For example, the problem of parameter estimation in a statistical model, whether by maximizing a likelihood function or by a least squares approach, reduces to solving an optimization problem. Not only has optimization been used to solve traditional statistical problems, it also plays a crucial role in more recent areas such as statistical learning. In particular, in most statistical learning models, one learns the best parameters for the model by minimizing some cost function under certain constraints. In the past decade or so, there has been an increasing trend in the reverse direction: using statistics as a powerful tool in optimization. As learning algorithms have become more efficient, researchers have focused on ways to apply learning models to improve the performance of existing optimization algorithms. Following in their footsteps, in this thesis we study a recent algorithm for generating cutting planes in mixed integer linear programming problems and show how learning algorithms can be applied to improve it. In addition, we use the decision theory framework to evaluate whether the solution given by the sample average approximation, a commonly used method for solving stochastic programming problems, is "good". In particular, we show that the sample average solution is admissible for an uncertain linear objective over a fixed compact set, and for a convex quadratic function with an uncertain linear term over box constraints when the dimension is less than 4. Finally, we combine tools from mixed integer programming and Bayesian statistics to solve the catalog matching problem in astronomy, which seeks to associate an object's detections coming from independent catalogs. This problem has been studied by many researchers; however, the most recent algorithm for tackling it has only been shown to work with 3 catalogs. In this thesis, we extend this algorithm to allow matching across a larger number of catalogs. In addition, we introduce a new algorithm that is more efficient and scales much better with a large number of catalogs.
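
    As a concrete reference point for the decision-theoretic part of the abstract, here is a minimal sketch of the sample average approximation in its simplest setting: an uncertain linear objective over a box. The cost distribution, box bounds, and sample size are illustrative assumptions, not the thesis's setup; the point is only that the SAA solution replaces the unknown expected objective with a sample average and optimizes that instead.

```python
# Sample average approximation (SAA) for min E[c^T x] over a box. The true
# cost vector c is unknown; we observe i.i.d. samples c_1, ..., c_N and
# minimize the sample average objective. For a linear objective over a box
# the SAA problem separates by coordinate. All numbers below are illustrative.
import numpy as np

def saa_box(samples, lo, hi):
    """Minimize (1/N) sum_i samples[i]^T x over the box [lo, hi]^d.

    The sample average objective cbar^T x is linear, so each coordinate
    sits at whichever bound the sign of cbar favors.
    """
    cbar = samples.mean(axis=0)
    return np.where(cbar >= 0, lo, hi)

rng = np.random.default_rng(1)
d, N = 3, 50
c_true = np.array([1.0, -2.0, 0.5])             # unknown in practice
samples = c_true + rng.standard_normal((N, d))  # noisy observations of c
x_saa = saa_box(samples, lo=-1.0, hi=1.0)
print("SAA solution:", x_saa)
print("true optimum:", np.where(c_true >= 0, -1.0, 1.0))
```

    The admissibility claim quoted in the abstract is about this kind of rule: no other decision rule based on the same samples has risk at least as good for every possible cost vector and strictly better for some.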