
    Computing all roots of the likelihood equations of seemingly unrelated regressions

    Seemingly unrelated regressions are statistical regression models based on the Gaussian distribution. They are popular in econometrics but also arise in graphical modeling of multivariate dependencies. In maximum likelihood estimation, the parameters of the model are estimated by maximizing the likelihood function, which maps the parameters to the likelihood of observing the given data. By transforming this optimization problem into a polynomial optimization problem, it was recently shown that the likelihood function of a simple bivariate seemingly unrelated regressions model may have several stationary points. Thus local maxima may complicate maximum likelihood estimation. In this paper, we study several more complicated seemingly unrelated regression models, and show how all stationary points of the likelihood function can be computed using algebraic geometry. Comment: To appear in the Journal of Symbolic Computation, special issue on Computational Algebraic Statistics. 11 pages.
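
    The core computational move, and the multimodality it copes with, can be seen in miniature on a classical example: the Cauchy location likelihood, whose score equation clears to a polynomial whose roots are exactly the stationary points. Below is a minimal sketch; the toy model and the sample are illustrative choices, not one of the paper's seemingly unrelated regressions, and sympy stands in for the computer-algebra tools used at research scale.

```python
# Toy illustration of the technique: clear denominators in the score
# equation, then compute all complex roots of the resulting polynomial;
# the real roots are the stationary points of the likelihood.
import sympy as sp

theta = sp.symbols('theta')
data = [-6, 0, 6]  # an arbitrary, well-separated sample

# Cauchy location log-likelihood, up to an additive constant.
loglik = sum(-sp.log(1 + (x - theta)**2) for x in data)

# Put the score over a common denominator; its numerator is a polynomial
# of degree 2n - 1 in theta (n = 3 observations, so degree 5).
score = sp.together(sp.diff(loglik, theta))
numer = sp.numer(score)

# All complex stationary points, then the real ones. For this sample the
# likelihood is multimodal: all five roots of the quintic come out real.
roots = sp.Poly(numer, theta).nroots()
real_roots = sorted(sp.re(r) for r in roots if abs(sp.im(r)) < 1e-9)
print("real stationary points:", real_roots)
```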

    Maximum Likelihood for Matrices with Rank Constraints

    Maximum likelihood estimation is a fundamental optimization problem in statistics. We study this problem on manifolds of matrices with bounded rank. These represent mixtures of distributions of two independent discrete random variables. We determine the maximum likelihood degree for a range of determinantal varieties, and we apply numerical algebraic geometry to compute all critical points of their likelihood functions. This led to the discovery of maximum likelihood duality between matrices of complementary ranks, a result proved subsequently by Draisma and Rodriguez.Comment: 22 pages, 1 figur
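
    For a concrete feel of the likelihood equations involved, here is a minimal sketch of the smallest determinantal case: 2×2 matrices of rank at most 1, the independence model, whose ML degree is 1. The counts are arbitrary illustrative data, and sympy's exact solver is a stand-in; the paper's larger rank-constrained problems call for numerical algebraic geometry.

```python
# Likelihood equations on a determinantal variety, smallest case: maximize
# sum u_ij * log(p_ij) subject to sum p_ij = 1 and det(P) = 0 (rank <= 1).
# The Lagrange conditions are multiplied through by p_ij so the system is
# polynomial; its complex solution count is the ML degree (here, 1).
import sympy as sp

p11, p12, p21, p22, lam, mu = sp.symbols('p11 p12 p21 p22 lam mu')
u = {(0, 0): 4, (0, 1): 2, (1, 0): 11, (1, 1): 3}  # arbitrary counts

P = sp.Matrix([[p11, p12], [p21, p22]])
det = P.det()

eqs = [sum(P) - 1, det]
for (i, j), uij in u.items():
    # u_ij - lambda * p_ij - mu * p_ij * d(det)/d(p_ij) = 0
    eqs.append(uij - lam * P[i, j] - mu * P[i, j] * sp.diff(det, P[i, j]))

sols = sp.solve(eqs, [p11, p12, p21, p22, lam, mu], dict=True)
print(len(sols), "critical point(s)")  # ML degree 1: a single solution
for s in sols:
    print([s[v] for v in (p11, p12, p21, p22)])  # the rational MLE
```

    Roughly, for larger matrices each vanishing (r+1)×(r+1) minor contributes one constraint and one multiplier, and the number of complex critical points grows into the ML degrees tabulated in the paper.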

    Maximum likelihood geometry in the presence of data zeros

    Given a statistical model, the maximum likelihood degree is the number of complex solutions to the likelihood equations for generic data. We consider discrete algebraic statistical models and study the solutions to the likelihood equations when the data contain zeros and are no longer generic. Focusing on sampling and model zeros, we show that, in these cases, the solutions to the likelihood equations are contained in a previously studied variety, the likelihood correspondence. The number of these solutions gives a lower bound on the ML degree, and the problem of finding critical points of the likelihood function can be partitioned into smaller and computationally easier problems involving sampling and model zeros. We use this technique to compute a lower bound on the ML degree for $2 \times 2 \times 2 \times 2$ tensors of border rank $\leq 2$ and for $3 \times n$ tables of rank $\leq 2$ for $n = 11, 12, 13, 14$, the first four values of $n$ for which the ML degree was previously unknown.
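
    For orientation, here is the standard Lagrange form of the likelihood equations for an implicit discrete model (a sketch for context; the paper's precise setup may differ in details). For a model variety cut out by $f_1 = \dots = f_k = 0$ inside the probability simplex and data $u = (u_0, \dots, u_n)$, the likelihood equations read
    $$u_i \;=\; \lambda\, p_i \;+\; p_i \sum_{j=1}^{k} \mu_j\, \frac{\partial f_j}{\partial p_i}, \qquad i = 0, \dots, n,$$
    together with $\sum_i p_i = 1$ and $f_1 = \dots = f_k = 0$. When a data coordinate $u_i$ vanishes, the $i$-th equation degenerates to $p_i \big(\lambda + \sum_j \mu_j\, \partial f_j / \partial p_i\big) = 0$ and admits solutions on the coordinate hyperplane $p_i = 0$, which generic data rules out; this is why solutions for data with zeros are tracked inside the likelihood correspondence and their count only bounds the ML degree from below.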

    Numerical algebraic geometry for model selection and its application to the life sciences

    Researchers working with mathematical models are often confronted by the related problems of parameter estimation, model validation, and model selection. These are all optimization problems, well-known to be challenging due to non-linearity, non-convexity and multiple local optima. Furthermore, the challenges are compounded when only partial data is available. Here, we consider polynomial models (e.g., mass-action chemical reaction networks at steady state) and describe a framework for their analysis based on optimization using numerical algebraic geometry. Specifically, we use probability-one polynomial homotopy continuation methods to compute all critical points of the objective function, then filter to recover the global optima. Our approach exploits the geometric structures relating models and data, and we demonstrate its utility on examples from cell signaling, synthetic biology, and epidemiology. Comment: References added, additional clarification.
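
    A minimal sketch of the pipeline on a toy problem: compute every critical point of the objective, filter to the real ones, and keep the global optimum. The model here, the parametrized curve θ ↦ (θ², θ³) with a squared-distance objective, is an illustrative stand-in; at research scale the critical equations are solved with probability-one homotopy continuation in systems such as Bertini or PHCpack rather than numpy.

```python
# Fit a one-parameter polynomial model to a data point by computing ALL
# critical points of the objective, not just one local optimum.
import numpy as np

y1, y2 = 1.0, -0.5  # hypothetical data point

# Objective: g(t) = (t^2 - y1)^2 + (t^3 - y2)^2.
# Its derivative is the quintic 6t^5 + 4t^3 - 6*y2*t^2 - 4*y1*t.
coeffs = [6.0, 0.0, 4.0, -6.0 * y2, -4.0 * y1, 0.0]
crit = np.roots(coeffs)  # all complex critical points

# Filter to the real critical points, then pick the global minimizer.
real_crit = crit[np.abs(crit.imag) < 1e-9].real
g = lambda t: (t**2 - y1) ** 2 + (t**3 - y2) ** 2
best = min(real_crit, key=g)
print("real critical points:", np.sort(real_crit))
print("global minimizer:", best, "objective value:", g(best))
```

    For this data point there are three real critical points; a local optimizer started near θ = 1 would typically settle into the inferior local minimum near θ ≈ 0.55, while the global minimizer lies near θ ≈ −0.9. Enumerating all critical points is what makes the filtering step sound.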

    Solving the 100 Swiss Francs Problem

    Sturmfels offered 100 Swiss Francs in 2005 for a proof of a conjecture concerning a special case of maximum likelihood estimation for a latent class model. This paper proves the conjecture.
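
    For readers who want to experiment: as usually stated, the problem asks for the global maximizer of the likelihood of a two-class latent class model over a particular 4×4 table of counts. The sketch below runs plain EM on that table; both the data matrix and the block-shaped maximizer mentioned in the comments are quoted from the commonly circulated statement of the problem and should be treated as assumptions here, not as content of this abstract.

```python
# EM for the two-class latent class model p_ij = sum_k pi_k * a_ki * b_kj.
# The data matrix U is the one usually quoted for Sturmfels' 100 Swiss
# Francs problem (an assumption here). EM converges to some local maximum;
# whether it is the global one depends on the starting point.
import numpy as np

U = np.array([[4., 2., 2., 2.],
              [2., 4., 2., 2.],
              [2., 2., 4., 2.],
              [2., 2., 2., 4.]])

rng = np.random.default_rng(7)
pi = np.full(2, 0.5)                   # mixture weights
A = rng.dirichlet(np.ones(4), size=2)  # A[k, i] = P(row i | class k)
B = rng.dirichlet(np.ones(4), size=2)  # B[k, j] = P(col j | class k)

for _ in range(5000):
    # E-step: responsibility of class k for cell (i, j).
    Pk = np.einsum('k,ki,kj->kij', pi, A, B)
    W = Pk / Pk.sum(axis=0, keepdims=True)
    # M-step: expected counts per class, renormalized.
    N = W * U
    pi = N.sum(axis=(1, 2)) / U.sum()
    A = N.sum(axis=2)
    A /= A.sum(axis=1, keepdims=True)
    B = N.sum(axis=1)
    B /= B.sum(axis=1, keepdims=True)

P = np.einsum('k,ki,kj->ij', pi, A, B)
print("log-likelihood:", (U * np.log(P)).sum())
# The reported maximizer (an assumption, as above) is the block matrix
# with entries 3/40 on the 2x2 diagonal blocks and 2/40 elsewhere.
print(np.round(40 * P, 3))
```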

    Data-Discriminants of Likelihood Equations

    Maximum likelihood estimation (MLE) is a fundamental computational problem in statistics. The problem is to maximize the likelihood function with respect to given data on a statistical model. An algebraic approach to this problem is to solve a very structured parameterized polynomial system called the likelihood equations. For general choices of data, the number of complex solutions to the likelihood equations is finite and is called the ML degree of the model. The only solutions to the likelihood equations that are statistically meaningful are the real/positive solutions. However, the number of real/positive solutions is not characterized by the ML degree. We use discriminants to classify data according to the number of real/positive solutions of the likelihood equations. We call these discriminants data-discriminants (DD). We develop a probabilistic algorithm for computing DDs. Experimental results show that, for the benchmarks we have tried, the probabilistic algorithm is more efficient than the standard elimination algorithm. Based on the computational results, we discuss the real root classification problem for the 3 by 3 symmetric matrix model. Comment: 2 tables.
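
    A data-discriminant can be written down in miniature for a one-parameter toy model, the Cauchy location likelihood used earlier; this is an illustrative analogue, not the paper's 3×3 symmetric matrix computation. With two symbolic observations, the score equation clears to a cubic in θ, and the discriminant of that cubic is a polynomial in the data whose sign separates samples with one real stationary point from samples with three; the sign change happens at the classical threshold |x1 − x2| = 2.

```python
# A miniature "data-discriminant": the discriminant of the likelihood
# equation, taken as a polynomial in the data, classifies data by the
# number of real solutions.
import sympy as sp

theta, x1, x2 = sp.symbols('theta x1 x2')
loglik = -sp.log(1 + (theta - x1)**2) - sp.log(1 + (theta - x2)**2)

# Clear denominators: the numerator of the score is a cubic in theta.
score_numer = sp.numer(sp.together(sp.diff(loglik, theta)))

# Discriminant with respect to theta = a polynomial in the data (x1, x2).
dd = sp.factor(sp.discriminant(score_numer, theta))
print("data-discriminant:", dd)

# Its sign marks the boundary between 1 and 3 real stationary points.
for gap in (1, 3):
    cubic = sp.Poly(score_numer.subs({x1: 0, x2: gap}), theta)
    print(f"|x1 - x2| = {gap}: disc sign = {sp.sign(dd.subs({x1: 0, x2: gap}))},"
          f" real stationary points = {cubic.count_roots()}")
```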