On maximum volume submatrices and cross approximation for symmetric semidefinite and diagonally dominant matrices
The problem of finding a submatrix of maximum volume of a matrix A
is of interest in a variety of applications. For example, it yields a
quasi-best low-rank approximation constructed from the rows and columns of A.
We show that such a submatrix can always be chosen to be a principal submatrix
if A is symmetric semidefinite or diagonally dominant. Then we analyze the
low-rank approximation error returned by a greedy method for volume
maximization, cross approximation with complete pivoting. Our bound for general
matrices extends an existing result for symmetric semidefinite matrices and
yields new error estimates for diagonally dominant matrices. In particular, for
doubly diagonally dominant matrices the error is shown to remain within a
modest factor of the best approximation error. We also illustrate how the
application of our results to cross approximation for functions leads to new
and better convergence results.
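As a concrete illustration of the greedy method discussed in this abstract, the following sketch implements cross approximation with complete pivoting in NumPy: at each step the entry of largest absolute value in the residual is taken as the pivot and a rank-one cross is subtracted. The function name `cross_approx` and the fixed rank `r` are illustrative choices, not from the paper.

```python
# Minimal sketch of cross approximation with complete (full) pivoting.
import numpy as np

def cross_approx(A, r):
    """Greedy rank-r cross approximation; returns C, R with A ~= C @ R."""
    Res = A.astype(float).copy()           # residual matrix
    C = np.zeros((A.shape[0], r))          # scaled pivot columns
    R = np.zeros((r, A.shape[1]))          # pivot rows
    for k in range(r):
        i, j = np.unravel_index(np.argmax(np.abs(Res)), Res.shape)
        if Res[i, j] == 0:                 # residual is exactly zero: done early
            return C[:, :k], R[:k, :]
        C[:, k] = Res[:, j] / Res[i, j]    # pivot column, scaled by the pivot
        R[k, :] = Res[i, :]                # pivot row
        Res -= np.outer(C[:, k], R[k, :])  # rank-one update of the residual
    return C, R

# Example: a rank-5 symmetric positive semidefinite matrix is recovered exactly.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
A = X @ X.T
C, R = cross_approx(A, 5)
print(np.linalg.norm(A - C @ R))           # close to machine precision
```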
Tropical totally positive matrices
We investigate the tropical analogues of totally positive and totally
nonnegative matrices. These arise when considering the images by the
nonarchimedean valuation of the corresponding classes of matrices over a real
nonarchimedean valued field, like the field of real Puiseux series. We show
that the nonarchimedean valuation sends the totally positive matrices precisely
to the Monge matrices. This leads to explicit polyhedral representations of the
tropical analogues of totally positive and totally nonnegative matrices. We
also show that tropical totally nonnegative matrices with a finite permanent
can be factorized in terms of elementary matrices. We finally determine the
eigenvalues of tropical totally nonnegative matrices, and relate them with the
eigenvalues of totally nonnegative matrices over nonarchimedean fields.
Comment: The first author has been partially supported by the PGMO Program of
FMJH and EDF, and by the MALTHY Project of the ANR Program. The second author
is supported by the French Chateaubriand grant and an INRIA postdoctoral
fellowship.
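The Monge matrices mentioned in this abstract admit a simple pointwise characterization, which the short sketch below checks numerically. The submodularity form of the inequality used here, and the helper name `is_monge`, are assumptions about conventions; the paper's valuation setup may use the reverse (inverse Monge) inequality.

```python
# Check the Monge condition c[i,j] + c[i+1,j+1] <= c[i,j+1] + c[i+1,j] on all
# adjacent 2x2 blocks, which is equivalent to the condition for all i<r, j<s.
import numpy as np

def is_monge(C, tol=0.0):
    C = np.asarray(C, dtype=float)
    lhs = C[:-1, :-1] + C[1:, 1:]      # c[i,j] + c[i+1,j+1]
    rhs = C[:-1, 1:] + C[1:, :-1]      # c[i,j+1] + c[i+1,j]
    return bool(np.all(lhs <= rhs + tol))

i, j = np.meshgrid(np.arange(6), np.arange(6), indexing="ij")
print(is_monge((i - j) ** 2))   # True: squared-distance matrices are Monge
print(is_monge(i * j))          # False: i*j satisfies the reverse inequality
```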
Matrix Scaling and Balancing via Box Constrained Newton's Method and Interior Point Methods
In this paper, we study matrix scaling and balancing, which are fundamental
problems in scientific computing, with a long line of work on them that dates
back to the 1960s. We provide algorithms for both these problems that, ignoring
logarithmic factors involving the dimension of the input matrix and the size of
its entries, both run in time $\widetilde{O}(m \log\kappa \log^2(1/\epsilon))$,
where $m$ is the number of nonzero entries of the input matrix and $\epsilon$ is
the amount of error we are willing to tolerate. Here, $\kappa$ represents the
ratio between the largest and the smallest entries of the optimal scalings. This
implies that our algorithms run in nearly-linear time whenever $\kappa$ is
quasi-polynomial, which
includes, in particular, the case of strictly positive matrices. We complement
our results by providing a separate algorithm that uses an interior-point method
and runs in time $\widetilde{O}(m^{3/2})$, up to factors polylogarithmic in
$\kappa$ and $1/\epsilon$.
In order to establish these results, we develop a new second-order
optimization framework that enables us to treat both problems in a unified and
principled manner. This framework identifies a certain generalization of linear
system solving that we can use to efficiently minimize a broad class of
functions, which we call second-order robust. We then show that in the context
of the specific functions capturing matrix scaling and balancing, we can
leverage and generalize the work on Laplacian system solving to make the
algorithms obtained via this framework very efficient.
Comment: To appear in FOCS 2017.
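For readers unfamiliar with the matrix scaling problem itself, the sketch below finds diagonal scalings that make a positive matrix doubly stochastic using the classical Sinkhorn-Knopp iteration. It is only meant to illustrate the problem being solved; it is not the box-constrained Newton or interior-point method developed in the paper.

```python
# Classical Sinkhorn-Knopp iteration for matrix scaling (illustration only).
import numpy as np

def sinkhorn(A, iters=500):
    """Return r, c such that diag(r) @ A @ diag(c) is ~doubly stochastic."""
    A = np.asarray(A, dtype=float)
    r = np.ones(A.shape[0])
    c = np.ones(A.shape[1])
    for _ in range(iters):
        r = 1.0 / (A @ c)       # rescale rows so row sums become 1
        c = 1.0 / (A.T @ r)     # rescale columns so column sums become 1
    return r, c

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(4, 4))    # strictly positive matrix
r, c = sinkhorn(A)
S = np.diag(r) @ A @ np.diag(c)
print(S.sum(axis=0), S.sum(axis=1))        # both vectors are close to all ones
```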
The INTERNODES method for applications in contact mechanics and dedicated preconditioning techniques
The mortar finite element method is a well-established method for the numerical solution of partial differential equations on domains displaying non-conforming interfaces. The method is known for its application in computational contact mechanics. However, its implementation remains challenging as it relies on geometrical projections and unconventional quadrature rules. The INTERNODES (INTERpolation for NOn-conforming DEcompositionS) method, instead, could overcome the implementation difficulties thanks to flexible interpolation techniques. Moreover, it was shown to be at least as accurate as the mortar method, making it a very promising alternative for solving problems in contact mechanics. Unfortunately, in such situations the method requires solving a sequence of ill-conditioned linear systems. In this paper, preconditioning techniques are designed and implemented for the efficient solution of those linear systems.
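The core ingredient of INTERNODES is transferring interface data between non-matching grids by interpolation rather than by mortar projection. The 1D toy below builds such an interpolation operator with plain linear interpolation; the grids, the helper `interpolation_matrix`, and the choice of linear (rather than rescaled radial-basis) interpolation are simplifying assumptions, and the transfer of interface forces via mass matrices is not shown.

```python
# 1D toy: interpolation operator between two non-matching interface grids.
import numpy as np

def interpolation_matrix(x_from, x_to):
    """P such that P @ v_from linearly interpolates nodal values onto x_to."""
    P = np.zeros((len(x_to), len(x_from)))
    for j in range(len(x_from)):
        e = np.zeros(len(x_from))
        e[j] = 1.0                           # j-th hat (nodal basis) function
        P[:, j] = np.interp(x_to, x_from, e) # evaluate it at the target nodes
    return P

x1 = np.linspace(0.0, 1.0, 7)        # interface grid of subdomain 1
x2 = np.linspace(0.0, 1.0, 11)       # non-matching interface grid of subdomain 2
P21 = interpolation_matrix(x1, x2)   # maps interface data from grid 1 to grid 2
u1 = np.sin(np.pi * x1)              # trace coming from subdomain 1
print(np.max(np.abs(P21 @ u1 - np.sin(np.pi * x2))))  # small interpolation error
```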
Combinatorial problems in solving linear systems
42 pages, available as LIP research report RR-2009-15.
Numerical linear algebra and combinatorial optimization are vast subjects, as is their interaction. In virtually all cases there should be a notion of sparsity for a combinatorial problem to arise. Sparse matrices therefore form the basis of the interaction of these two seemingly disparate subjects. As the core of many of today's numerical linear algebra computations consists of the solution of sparse linear systems by direct or iterative methods, we survey some combinatorial problems, ideas, and algorithms relating to these computations. On the direct methods side, we discuss issues such as matrix ordering; bipartite matching and matrix scaling for better pivoting; task assignment and scheduling for parallel multifrontal solvers. On the iterative methods side, we discuss preconditioning techniques including incomplete factorization preconditioners, support graph preconditioners, and algebraic multigrid. In a separate part, we discuss the block triangular form of sparse matrices.
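As one concrete instance of the matrix-ordering theme surveyed here, the sketch below reorders a sparse symmetric matrix with SciPy's reverse Cuthill-McKee routine to reduce its bandwidth before factorization. The test matrix and the bandwidth measure are illustrative choices, not taken from the survey.

```python
# Reverse Cuthill-McKee ordering of a sparse symmetric matrix (illustration).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

A = sp.random(200, 200, density=0.02, random_state=0, format="csr")
A = sp.csr_matrix(A + A.T + 200 * sp.identity(200))  # symmetric, diag. dominant

perm = reverse_cuthill_mckee(A, symmetric_mode=True)  # new row/column order
B = A[perm, :][:, perm]                               # symmetrically permuted

def bandwidth(M):
    coo = M.tocoo()
    return int(np.max(np.abs(coo.row - coo.col)))

print("bandwidth before:", bandwidth(A), "after:", bandwidth(B))
```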
Model Assisted Variable Clustering: Minimax-optimal Recovery and Algorithms
Model-based clustering defines population level clusters relative to a model
that embeds notions of similarity. Algorithms tailored to such models yield
estimated clusters with a clear statistical interpretation. We take this view
here and introduce the class of G-block covariance models as a background model
for variable clustering. In such models, two variables in a cluster are deemed
similar if they have similar associations with all other variables. This can
arise, for instance, when groups of variables are noise corrupted versions of
the same latent factor. We quantify the difficulty of clustering data generated
from a G-block covariance model in terms of cluster proximity, measured with
respect to two related, but different, cluster separation metrics. We derive
minimax cluster separation thresholds, which are the metric values below which
no algorithm can recover the model-defined clusters exactly, and show that they
are different for the two metrics. We therefore develop two algorithms, COD and
PECOK, tailored to G-block covariance models, and study their
minimax-optimality with respect to each metric. Of independent interest is the
fact that the analysis of the PECOK algorithm, which is based on a corrected
convex relaxation of the popular K-means algorithm, provides the first
statistical analysis of such algorithms for variable clustering. Additionally,
we contrast our methods with another popular clustering method, spectral
clustering, specialized to variable clustering, and show that ensuring exact
cluster recovery via this method requires clusters to have a higher separation,
relative to the minimax threshold. Extensive simulation studies, as well as our
data analyses, confirm the applicability of our approach.
Comment: Main text: 38 pages; supplementary information: 37 pages.
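To make the G-block covariance model concrete, the toy sketch below generates variables as noisy copies of a few latent factors and then recovers the variable clusters with ordinary hierarchical clustering on a correlation distance. The generating parameters and the clustering step are illustrative stand-ins; they are not the COD or PECOK algorithms analyzed in the paper.

```python
# Toy G-block covariance data: groups of variables are noisy copies of latent
# factors; a generic hierarchical clustering recovers the variable partition.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
n, K, per = 500, 3, 4                          # samples, clusters, vars/cluster
Z = rng.standard_normal((n, K))                # latent factors
labels = np.repeat(np.arange(K), per)          # true variable-to-cluster map
X = Z[:, labels] + 0.3 * rng.standard_normal((n, K * per))  # noisy copies

D = 1.0 - np.abs(np.corrcoef(X, rowvar=False))       # correlation distance
iu = np.triu_indices_from(D, k=1)                    # condensed form for linkage
tree = linkage(D[iu], method="average")
est = fcluster(tree, t=K, criterion="maxclust")
print(labels)
print(est)   # same partition as `labels`, up to relabeling
```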