
    Estimating a bivariate linear relationship

    Solutions of the bivariate, linear errors-in-variables estimation problem with unspecified errors are expected to be invariant under interchange and scaling of the coordinates. The appealing model of normally distributed true values and errors is unidentified without additional information. I propose a prior density that incorporates the fact that the slope and variance parameters together determine the covariance matrix of the unobserved true values but is otherwise diffuse. The marginal posterior density of the slope is invariant to interchange and scaling of the coordinates and depends on the data only through the sample correlation coefficient and ratio of standard deviations. It covers the interval between the two ordinary least squares estimates but diminishes rapidly outside of it. I introduce the R package leiv for computing the posterior density, and I apply it to examples in astronomy and method comparison. (Comment: 27 pages, 7 figures)
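
    For orientation, here is a minimal Python sketch (not the leiv package, which is written in R) of the two ordinary least squares slope estimates that bracket the posterior described above: the y-on-x slope r·s_y/s_x and the reciprocal x-on-y slope (s_y/s_x)/r. The data-generating example is hypothetical.

import numpy as np

def ols_slope_bounds(x, y):
    # The posterior of the slope concentrates between the two OLS
    # estimates: the y-on-x slope and the reciprocal x-on-y slope.
    r = np.corrcoef(x, y)[0, 1]
    ratio = np.std(y, ddof=1) / np.std(x, ddof=1)
    return r * ratio, ratio / r

# Hypothetical data: noisy observations of a true slope of 2.
rng = np.random.default_rng(0)
t = rng.normal(size=200)
x = t + 0.3 * rng.normal(size=200)
y = 2.0 * t + 0.3 * rng.normal(size=200)
print(ols_slope_bounds(x, y))  # the true slope lies between the two values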

    The condition number of join decompositions

    The join set of a finite collection of smooth embedded submanifolds of a common vector space is defined as their Minkowski sum. Join decompositions generalize several ubiquitous decompositions in multilinear algebra, namely the tensor rank, Waring, partially symmetric rank, and block term decompositions. This paper examines the numerical sensitivity of join decompositions to perturbations; specifically, we consider the condition number for general join decompositions. It is characterized as a distance to a set of ill-posed points in a supplementary product of Grassmannians. We prove that this condition number can be computed efficiently as the smallest singular value of an auxiliary matrix. For some special join sets, we characterize the behavior of sequences in the join set that converge to its boundary points. Finally, we specialize our discussion to the tensor rank and Waring decompositions and provide several numerical experiments confirming the key results.
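
    As a hedged illustration of the "smallest singular value of an auxiliary matrix" characterization, the Python sketch below computes a condition number for the special case of a tensor rank (CPD) decomposition by stacking orthonormal tangent-space bases of the rank-1 terms into a Terracini-style matrix. The construction and function names are assumptions for illustration, not the paper's exact algorithm.

import numpy as np

def tangent_basis(a, b, c, tol=1e-12):
    # Orthonormal basis for the tangent space of the rank-1 (Segre)
    # variety at a (x) b (x) c, via the SVD of the Jacobian block.
    n1, n2, n3 = len(a), len(b), len(c)
    cols = []
    for j in range(n1):                        # derivatives w.r.t. a
        e = np.zeros(n1); e[j] = 1.0
        cols.append(np.kron(np.kron(e, b), c))
    for j in range(n2):                        # derivatives w.r.t. b
        e = np.zeros(n2); e[j] = 1.0
        cols.append(np.kron(np.kron(a, e), c))
    for j in range(n3):                        # derivatives w.r.t. c
        e = np.zeros(n3); e[j] = 1.0
        cols.append(np.kron(np.kron(a, b), e))
    U, s, _ = np.linalg.svd(np.column_stack(cols), full_matrices=False)
    return U[:, s > tol * s[0]]                # drop the 2-dim scaling kernel

def cpd_condition_number(A, B, C):
    # Stack the tangent bases of all r rank-1 terms (a Terracini-style
    # auxiliary matrix) and invert its smallest singular value.
    U = np.hstack([tangent_basis(A[:, i], B[:, i], C[:, i])
                   for i in range(A.shape[1])])
    return 1.0 / np.linalg.svd(U, compute_uv=False)[-1]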

    The average condition number of most tensor rank decomposition problems is infinite

    The tensor rank decomposition, or canonical polyadic decomposition (CPD), is the decomposition of a tensor into a sum of rank-1 tensors. The condition number of the tensor rank decomposition measures the sensitivity of the rank-1 summands with respect to structured perturbations: these are the perturbations that preserve the rank of the tensor being decomposed. The angular condition number, on the other hand, measures the perturbations of the rank-1 summands up to scaling. We show for random rank-2 tensors with Gaussian density that the expected value of the condition number is infinite. Under a mild additional assumption, we show that the same is true for most higher ranks r ≥ 3 as well. In fact, as the dimensions of the tensor tend to infinity, asymptotically all ranks are covered by our analysis. In contrast, we show that rank-2 Gaussian tensors have a finite expected angular condition number. Our results underline the high computational complexity of computing tensor rank decompositions. We discuss the consequences of our results for algorithm design and for testing algorithms that compute the CPD. Finally, we supply numerical experiments.
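
    A small Monte Carlo sketch of the heavy-tail phenomenon, reusing cpd_condition_number from the sketch above. Drawing the factor matrices with i.i.d. Gaussian entries is an assumption about the sampling model; an infinite expected value shows up empirically as a running mean that drifts upward with sample size rather than converging.

import numpy as np
# Reuses cpd_condition_number from the sketch above.

rng = np.random.default_rng(1)
n, r, trials = 3, 2, 2000
conds = np.array([
    cpd_condition_number(rng.normal(size=(n, r)),
                         rng.normal(size=(n, r)),
                         rng.normal(size=(n, r)))
    for _ in range(trials)
])
# An infinite-mean distribution shows up as a running mean that keeps
# growing with the sample size instead of settling down.
for k in (100, 500, 2000):
    print(k, conds[:k].mean())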

    A Sensitivity Matrix Methodology for Inverse Problem Formulation

    We propose an algorithm to select parameter subset combinations that can be estimated using an ordinary least-squares (OLS) inverse problem formulation with a given data set. First, the algorithm selects the parameter combinations that correspond to sensitivity matrices with full rank. Second, the algorithm quantifies uncertainty using the inverse of the Fisher Information Matrix. Nominal parameter values are used to construct synthetic data sets and to explore the effects of removing certain parameters from the set to be estimated by OLS. We quantify these effects with a score for a vector parameter, defined as the norm of the vector whose components are the standard errors of the estimates divided by the estimates themselves. In some cases the method reduces the standard error of a parameter to less than 1% of its estimate.
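
    A minimal Python sketch of the scoring step as described: check that the sensitivity matrix has full rank, form the Fisher Information Matrix, invert it for standard errors, and score the subset by the norm of the component-wise ratios of standard errors to estimates. The i.i.d. noise model with variance sigma2 and all names are assumptions for illustration.

import numpy as np

def subset_score(S, theta, sigma2):
    # Score a candidate parameter subset from its sensitivity matrix S
    # (n_observations x n_parameters) and the OLS estimates theta.
    # A rank-deficient S means the subset is not estimable from this
    # data set, so the score is infinite.
    if np.linalg.matrix_rank(S) < S.shape[1]:
        return np.inf
    fim = S.T @ S / sigma2              # Fisher Information Matrix (iid noise)
    cov = np.linalg.inv(fim)            # asymptotic covariance of the estimates
    se = np.sqrt(np.diag(cov))          # standard errors of the components
    return np.linalg.norm(se / theta)   # norm of standard-error-to-estimate ratios

# Hypothetical example: 50 observations, 3 parameters.
rng = np.random.default_rng(2)
S = rng.normal(size=(50, 3))
theta = np.array([1.0, 0.5, 2.0])
print(subset_score(S, theta, sigma2=0.01))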