
    Regression and Singular Value Decomposition in Dynamic Graphs

    Most real-world graphs are {\em dynamic}, i.e., they change over time. However, while problems such as regression and Singular Value Decomposition (SVD) have been studied for {\em static} graphs, they have not yet been investigated for {\em dynamic} graphs. In this paper, we introduce, motivate and study regression and SVD over dynamic graphs. First, we present the notion of {\em update-efficient matrix embedding}, which defines the conditions sufficient for a matrix embedding to be used for the dynamic graph regression problem (under the $l_2$ norm). We prove that given an $n \times m$ update-efficient matrix embedding (e.g., the adjacency matrix), after an update operation in the graph, the optimal solution of the graph regression problem for the revised graph can be computed in $O(nm)$ time. We also study dynamic graph regression under least absolute deviation. Then, we characterize a class of matrix embeddings that can be used to efficiently update the SVD of a dynamic graph. For the adjacency matrix and the Laplacian matrix, we study those graph update operations for which the SVD (and low-rank approximation) can be updated efficiently.
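    As a rough illustration of the problem setting only, the sketch below treats the adjacency matrix as the embedding, solves the static graph regression problem by least squares, and simply re-solves after an edge update; it does not implement the paper's $O(nm)$ incremental update, and the matrix size and the chosen edge are arbitrary assumptions.

```python
import numpy as np

# Sketch of graph regression with the adjacency matrix as the embedding.
# The solution is recomputed from scratch after an update; this is a baseline
# illustration, not the paper's O(nm) incremental update.

rng = np.random.default_rng(0)
n = 6                                                 # number of nodes
A = rng.integers(0, 2, size=(n, n)).astype(float)     # adjacency matrix (embedding)
y = rng.standard_normal(n)                            # target value on each node

# Static problem: minimize ||A x - y||_2 over x.
x_static, *_ = np.linalg.lstsq(A, y, rcond=None)

# Apply a graph update (insert the edge (0, 3)) and re-solve.
A[0, 3] = 1.0
x_updated, *_ = np.linalg.lstsq(A, y, rcond=None)

print("before update:", np.round(x_static, 3))
print("after  update:", np.round(x_updated, 3))
```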

    Towards an exact reconstruction of a time-invariant model from time series data

    Dynamic processes in biological systems may be profiled by measuring system properties over time. One way of representing such time series data is through weighted interaction networks, where the nodes in the network represent the measurables and the weighted edges represent interactions between any pair of nodes. Construction of these network models from time series data may involve seeking a robust, data-consistent and time-invariant model to approximate and describe the system dynamics. Many problems in mathematics, systems biology and physics can be recast into this form and may require finding the most consistent solution to a set of first-order differential equations. This is especially challenging in cases where the number of data points is less than or equal to the number of measurables. We present a novel computational method for network reconstruction with limited time series data. To test our method, we use artificial time series data generated from known network models. We then attempt to reconstruct the original network from the time series data alone. We find good agreement between the original and predicted networks.
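    The following sketch illustrates the general problem setup rather than the authors' method: it assumes linear first-order dynamics dx/dt = W x, builds finite-difference derivatives from a short time series, and fits W by minimum-norm least squares, which returns a data-consistent model even when there are fewer time points than measurables. The dimensions, step size and dynamics are illustrative assumptions.

```python
import numpy as np

# Sketch of reconstructing a weighted interaction matrix W from time series
# data, assuming linear first-order dynamics dx/dt = W x. This illustrates the
# problem setup only; it is not the reconstruction method of the paper.

rng = np.random.default_rng(1)
m, T, dt = 4, 4, 0.1                 # measurables, time points, step size
W_true = 0.5 * rng.standard_normal((m, m))

# Simulate a short time series with forward Euler steps.
X = np.zeros((T, m))
X[0] = rng.standard_normal(m)
for t in range(T - 1):
    X[t + 1] = X[t] + dt * W_true @ X[t]

# Finite-difference derivatives, then a minimum-norm least-squares fit of W.
# With T - 1 <= m samples the problem is underdetermined, and lstsq returns
# one data-consistent model (the minimum-norm solution).
dX = (X[1:] - X[:-1]) / dt           # (T-1) x m matrix of derivative estimates
W_est = np.linalg.lstsq(X[:-1], dX, rcond=None)[0].T

print("data-consistency residual:", np.abs(X[:-1] @ W_est.T - dX).max())
```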

    A Threshold Regularization Method for Inverse Problems

    A number of regularization methods for discrete inverse problems consist in considering weighted versions of the usual least-squares solution. However, these so-called filter methods are generally restricted to monotonic transformations, e.g. the Tikhonov regularization or the spectral cut-off. In this paper, we point out that in several cases, non-monotonic sequences of filters are more efficient. We study a regularization method that naturally extends the spectral cut-off procedure to non-monotonic sequences and provide several oracle inequalities, showing the method to be nearly optimal under mild assumptions. We then extend the method to inverse problems with a noisy operator and provide efficiency results in a newly introduced conditional framework.
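    A minimal sketch of the filter-method viewpoint, not the paper's procedure: given the SVD of the forward operator, a regularized solution reweights the SVD coefficients. The Tikhonov and spectral cut-off filters give monotonic weight sequences, while a data-driven threshold on the observed coefficients can give a non-monotonic one. The operator, noise level and tuning constants below are arbitrary assumptions.

```python
import numpy as np

# Sketch of SVD filter methods for a discrete inverse problem y = A x + noise.
# Tikhonov and spectral cut-off are monotonic filters; the threshold filter
# keeps components whose observed coefficients are large, which can yield a
# non-monotonic weight sequence. Illustration only, not the paper's procedure.

rng = np.random.default_rng(2)
n = 20
A = rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
y = A @ x_true + 0.05 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A)
obs = U.T @ y                        # observed coefficients in the SVD basis
coeffs = obs / s                     # unfiltered SVD coefficients of the solution

def filtered_solution(weights):
    return Vt.T @ (weights * coeffs)

lam = 0.1
tikhonov = s**2 / (s**2 + lam)                   # monotonic filter
cutoff = (s > 0.5).astype(float)                 # spectral cut-off
threshold = (np.abs(obs) > 0.3).astype(float)    # possibly non-monotonic filter

for name, w in [("tikhonov", tikhonov), ("cut-off", cutoff), ("threshold", threshold)]:
    print(f"{name:9s} error: {np.linalg.norm(filtered_solution(w) - x_true):.3f}")
```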

    Rank Test Based On Matrix Perturbation Theory

    In this paper, we propose methods for determining the rank of a matrix. We consider a rank test for an unobserved matrix for which there exists an estimate with an asymptotically normal distribution of order $N^{1/2}$, where $N$ is the sample size. The test statistic is based on the smallest estimated singular values. Using matrix perturbation theory, the smallest singular values of the random matrix converge asymptotically to zero at order $O(N^{-1})$ and the corresponding left and right singular vectors converge asymptotically at order $O(N^{-1/2})$. Moreover, the asymptotic distribution of the test statistic is seen to be chi-squared. The test has the advantage over standard tests of being easier to compute. Two approaches are considered: a sequential testing strategy and an information-theoretic criterion. We establish strong consistency of the rank determination under both approaches. Some economic applications are discussed and simulation evidence is given for this test. Its performance is compared to that of the LDU rank tests of Gill and Lewbel (1992) and Cragg and Donald (1996).
    Keywords: Rank Testing; Matrix Perturbation Theory; Rank Estimation; Singular Value Decomposition; Sequential Testing Procedure; Information Theoretic Criterion.
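    The sketch below illustrates the sequential testing strategy in a deliberately simplified form: the statistic (N times the sum of the squared smallest singular values) and its chi-squared degrees of freedom are illustrative placeholders, not the exact statistic derived in the paper, and the matrix dimensions and noise model are assumptions.

```python
import numpy as np
from scipy.stats import chi2

# Simplified sketch of a sequential rank test based on the smallest singular
# values of an estimated matrix. The statistic and its chi-squared degrees of
# freedom are illustrative placeholders, not the paper's exact test.

rng = np.random.default_rng(3)
N, p, q, true_rank = 500, 4, 4, 2

# A rank-2 matrix and a root-N-consistent "estimate" of it.
B = rng.standard_normal((p, true_rank)) @ rng.standard_normal((true_rank, q))
B_hat = B + rng.standard_normal((p, q)) / np.sqrt(N)

s = np.linalg.svd(B_hat, compute_uv=False)

# Sequential strategy: test H0: rank = r for r = 0, 1, ... and stop at the
# first non-rejection.
alpha = 0.05
for r in range(min(p, q)):
    stat = N * np.sum(s[r:] ** 2)            # uses the min(p,q) - r smallest values
    dof = (p - r) * (q - r)                  # illustrative degrees of freedom
    if stat < chi2.ppf(1 - alpha, dof):
        print("estimated rank:", r)
        break
else:
    print("estimated rank:", min(p, q))
```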

    Convergence analysis of a proximal Gauss-Newton method

    An extension of the Gauss-Newton algorithm is proposed to find local minimizers of penalized nonlinear least squares problems, under generalized Lipschitz assumptions. Convergence results of local type are obtained, as well as an estimate of the radius of the convergence ball. Some applications for solving constrained nonlinear equations are discussed and the numerical performance of the method is assessed on some significant test problems.
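    A minimal sketch of a proximal Gauss-Newton iteration for a penalized nonlinear least-squares problem with an l1 penalty, under assumptions not taken from the paper: each outer step linearizes the residual and solves the resulting lasso subproblem approximately with a few ISTA (proximal gradient) steps. The residual, penalty weight and iteration counts are illustrative.

```python
import numpy as np

# Sketch of a proximal Gauss-Newton iteration for the penalized problem
#   minimize 0.5 * ||r(x)||^2 + lam * ||x||_1
# with a nonlinear residual r. Each outer step linearizes r at the current
# iterate and solves the resulting lasso subproblem approximately with ISTA
# (proximal gradient) steps. Illustration only; the residual, penalty weight
# and iteration counts are assumptions, not taken from the paper.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def residual(x):                         # example nonlinear residual r(x)
    return np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]**2])

def jacobian(x):                         # Jacobian of r at x
    return np.array([[2.0 * x[0], 1.0], [1.0, -2.0 * x[1]]])

lam, x = 0.01, np.array([2.0, 2.0])
for _ in range(20):                      # outer (Gauss-Newton) iterations
    r, J = residual(x), jacobian(x)
    step = 1.0 / np.linalg.norm(J.T @ J, 2)      # 1 / Lipschitz constant
    z = x.copy()
    for _ in range(50):                  # inner proximal gradient (ISTA) steps
        grad = J.T @ (r + J @ (z - x))   # gradient of the linearized smooth part
        z = soft_threshold(z - step * grad, step * lam)
    x = z

print("solution:", np.round(x, 4), "residual:", np.round(residual(x), 4))
```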