834 research outputs found

    Computing the Nearest Doubly Stochastic Matrix with A Prescribed Entry

    In this paper a nearest doubly stochastic matrix problem is studied: finding the doubly stochastic matrix closest to a given matrix subject to a prescribed (1,1) entry. By the well-established duality theory in optimization, the dual of the underlying problem is an unconstrained convex optimization problem that is differentiable but not twice differentiable. A Newton-type method is used to solve the dual problem, from which the desired nearest doubly stochastic matrix is recovered. Under some mild assumptions, quadratic convergence of the proposed Newton method is proved. The numerical performance of the method is demonstrated on numerical examples.
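
    For reference, a minimal statement of the underlying problem as the abstract describes it. The symbols $A$ (the given matrix), $a$ (the prescribed value), and $e$ (the all-ones vector) are generic, and the Frobenius norm is an assumption; the abstract does not specify which norm defines "closest":

    ```latex
    % Nearest doubly stochastic matrix with prescribed (1,1) entry
    \min_{X \in \mathbb{R}^{n \times n}} \tfrac{1}{2}\|X - A\|_F^2
    \quad \text{subject to} \quad
    Xe = e, \quad X^{\top}e = e, \quad X \ge 0, \quad X_{11} = a.
    ```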

    On the Geometry of the Birkhoff Polytope. II. The Schatten p-norms

    In the first of this series of two articles, we studied some geometrical aspects of the Birkhoff polytope, the compact convex set of all $n \times n$ doubly stochastic matrices, namely the Chebyshev center and the Chebyshev radius of the Birkhoff polytope associated with metrics induced by the operator norms from $\ell_n^p$ to $\ell_n^p$ for $1 \leq p \leq \infty$. In the present paper, we take another look at those very questions, but for a different family of matrix norms, namely the Schatten $p$-norms, for $1 \leq p < \infty$. While studying these properties, the intrinsic connection to the minimal trace, which naturally appears in the assignment problem, is also established.
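
    For reference, the standard definitions behind the abstract, in generic notation (the Chebyshev radius is stated in one common convention, with the center constrained to lie in the set):

    ```latex
    % Birkhoff polytope: n x n doubly stochastic matrices
    \mathcal{B}_n = \{\, X \in \mathbb{R}^{n \times n} : X \ge 0,\ Xe = e,\ X^{\top}e = e \,\}

    % Schatten p-norm, with \sigma_i(A) the singular values of A
    \|A\|_{S_p} = \Big( \sum_{i=1}^{n} \sigma_i(A)^{p} \Big)^{1/p}, \qquad 1 \le p < \infty

    % Chebyshev radius of B_n in the metric induced by a norm ||.||
    r(\mathcal{B}_n) = \min_{X \in \mathcal{B}_n} \ \max_{Y \in \mathcal{B}_n} \|X - Y\|
    ```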

    Lagrangian Numerical Methods for Ocean Biogeochemical Simulations

    We propose two closely related Lagrangian numerical methods for the simulation of physical processes involving advection, reaction, and diffusion. The methods are intended for settings where the flow is nearly incompressible and the Péclet numbers are so high that resolving all the scales of motion is unfeasible, as is commonplace in ocean flows. Our methods augment the method of characteristics, which is suitable for advection-reaction problems, with couplings among nearby particles, producing fluxes that mimic diffusion, or unresolved small-scale transport. The methods conserve mass, obey the maximum principle, and allow the strength of the diffusive terms to be tuned down to zero while avoiding unwanted numerical dissipation effects.
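
    A minimal sketch of the general idea the abstract describes: advect particles along characteristics, then exchange tracer mass through antisymmetric pairwise fluxes between nearby particles. The function, the forward-Euler integrator, and the brute-force O(n^2) neighbor search are illustrative assumptions, not the paper's implementation:

    ```python
    import numpy as np

    def lagrangian_step(x, c, velocity, dt, h, kappa):
        """One advection-diffusion step for Lagrangian particles.

        x        : (n, 2) particle positions
        c        : (n,)   tracer concentration carried by the particles
        velocity : callable mapping an (n, 2) array of positions to velocities
        h        : interaction radius for the diffusive couplings
        kappa    : strength of the pairwise exchanges (a diffusivity-like knob)
        """
        # Advection-reaction part: method of characteristics
        # (forward Euler along the flow; a higher-order integrator
        #  would be used in practice).
        x = x + dt * velocity(x)

        # Diffusion-like couplings: antisymmetric pairwise fluxes between
        # particles closer than h. Antisymmetry (flux[i, j] == -flux[j, i])
        # conserves the total mass sum(c) exactly, and for small enough
        # kappa * dt each update is a convex combination of neighboring
        # values, so the maximum principle holds. Setting kappa = 0
        # switches the couplings off entirely.
        d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=-1)
        w = (d2 < h * h).astype(float)
        np.fill_diagonal(w, 0.0)
        flux = kappa * dt * w * (c[None, :] - c[:, None])
        c = c + flux.sum(axis=1)
        return x, c
    ```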

    Robust Inference of Manifold Density and Geometry by Doubly Stochastic Scaling

    The Gaussian kernel and its traditional normalizations (e.g., row-stochastic) are popular approaches for assessing similarities between data points, commonly used for manifold learning and clustering, as well as supervised and semi-supervised learning on graphs. In many practical situations, the data can be corrupted by noise that prevents traditional affinity matrices from correctly assessing similarities, especially if the noise magnitudes vary considerably across the data, e.g., under heteroskedasticity or outliers. An alternative approach that provides more stable behavior under noise is the doubly stochastic normalization of the Gaussian kernel. In this work, we investigate this normalization in a setting where points are sampled from an unknown density on a low-dimensional manifold embedded in high-dimensional space and corrupted by possibly strong, non-identically distributed, sub-Gaussian noise. We establish the pointwise concentration of the doubly stochastic affinity matrix and its scaling factors around certain population forms. We then utilize these results to develop several tools for robust inference. First, we derive a robust density estimator that can substantially outperform the standard kernel density estimator under high-dimensional noise. Second, we provide estimators for the pointwise noise magnitudes, the pointwise signal magnitudes, and the pairwise Euclidean distances between clean data points. Lastly, we derive robust graph Laplacian normalizations that approximate popular manifold Laplacians, including the Laplace-Beltrami operator, showing that the local geometry of the manifold can be recovered under high-dimensional noise. We exemplify our results in simulations and on real single-cell RNA-sequencing data. In the latter, we show that our proposed normalizations are robust to technical variability associated with different cell types.
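
    A minimal sketch of the doubly stochastic normalization discussed above: symmetric Sinkhorn-type scaling of the Gaussian kernel so that all row (and, by symmetry, column) sums equal one. The bandwidth eps, the fixed-point iteration, and the tolerance are illustrative assumptions:

    ```python
    import numpy as np

    def doubly_stochastic_affinity(X, eps, n_iter=1000, tol=1e-10):
        """Scale the Gaussian kernel K_ij = exp(-||x_i - x_j||^2 / eps)
        into a doubly stochastic matrix W = diag(d) K diag(d)."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
        K = np.exp(-d2 / eps)

        # Symmetric Sinkhorn fixed point: seek d > 0 with d_i * (K d)_i = 1.
        # The damped update d <- sqrt(d / (K d)) avoids the oscillations
        # that the naive iteration d <- 1 / (K d) can exhibit.
        d = np.ones(len(X))
        for _ in range(n_iter):
            d_new = np.sqrt(d / (K @ d))
            if np.max(np.abs(d_new - d)) < tol:
                d = d_new
                break
            d = d_new

        W = d[:, None] * K * d[None, :]
        return W, d
    ```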

    Designing structured tight frames via an alternating projection method

    Tight frames, also known as general Welch-bound-equality sequences, generalize orthonormal systems. Numerous applications, including communications, coding, and sparse approximation, require finite-dimensional tight frames that possess additional structural properties. This paper proposes an alternating projection method that is versatile enough to solve a huge class of inverse eigenvalue problems (IEPs), which includes the frame design problem. To apply this method, one needs only to solve a matrix nearness problem that arises naturally from the design specifications. Therefore, it is fast and easy to develop versions of the algorithm that target new design problems. Alternating projection will often succeed even if algebraic constructions are unavailable. To demonstrate that alternating projection is an effective tool for frame design, the paper studies some important structural properties in detail. First, it addresses the most basic design problem: constructing tight frames with prescribed vector norms. Then, it discusses equiangular tight frames, which are natural dictionaries for sparse approximation. Finally, it examines tight frames whose individual vectors have low peak-to-average-power ratio (PAR), which is a valuable property for code-division multiple-access (CDMA) applications. Numerical experiments show that the proposed algorithm succeeds in each of these three cases. The appendices investigate the convergence properties of the algorithm.
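
    A minimal sketch of alternating projection for the most basic design problem mentioned above, specialized to unit-norm tight frames (the paper treats general prescribed norms and further structure; the random initialization and fixed iteration count are illustrative assumptions):

    ```python
    import numpy as np

    def nearest_tight_frame(F):
        """Project a d x N matrix onto the alpha-tight frames (F F^T = alpha I):
        replace every singular value by sqrt(alpha), where alpha = N / d
        matches the total energy of N unit-norm columns."""
        d, N = F.shape
        U, _, Vt = np.linalg.svd(F, full_matrices=False)
        return np.sqrt(N / d) * (U @ Vt)

    def normalize_columns(F):
        """Project onto the matrices whose columns all have unit norm."""
        return F / np.linalg.norm(F, axis=0, keepdims=True)

    def design_unit_norm_tight_frame(d, N, n_iter=200, seed=0):
        """Alternate between the two constraint sets from a random start."""
        rng = np.random.default_rng(seed)
        F = normalize_columns(rng.standard_normal((d, N)))
        for _ in range(n_iter):
            F = normalize_columns(nearest_tight_frame(F))
        return F
    ```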

    A Symmetry Preserving Algorithm for Matrix Scaling

    We present an iterative algorithm which asymptotically scales the $\infty$-norm of each row and each column of a matrix to one. This scaling algorithm preserves the symmetry of the original matrix and shows fast linear convergence with an asymptotic rate of 1/2. We discuss extensions of the algorithm to the 1-norm and, by inference, to other norms. For the 1-norm case, we show again that convergence is linear, with the rate dependent on the spectrum of the scaled matrix. We demonstrate experimentally that the scaling algorithm improves the conditioning of the matrix and that it helps direct solvers by reducing the need for pivoting. In particular, for symmetric matrices the theoretical and experimental results highlight the potential of the proposed algorithm over existing alternatives.
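
    A minimal sketch of a symmetry-preserving scaling iteration consistent with the description above: repeatedly apply the congruence $A \leftarrow D^{-1/2} A D^{-1/2}$, with $D$ the diagonal of row $\infty$-norms, so that symmetry is preserved at every step. The fixed iteration count and the lack of safeguards (e.g., against zero rows) are illustrative simplifications:

    ```python
    import numpy as np

    def symmetric_inf_scaling(A, n_iter=20):
        """Iteratively scale a symmetric matrix so that every row and
        column has infinity-norm (asymptotically) equal to one.

        Returns the scaled matrix and the accumulated diagonal factors s,
        so that the result equals diag(s) @ A @ diag(s)."""
        A = np.array(A, dtype=float)
        s_total = np.ones(A.shape[0])
        for _ in range(n_iter):
            r = np.abs(A).max(axis=1)          # row infinity-norms
            s = 1.0 / np.sqrt(r)               # symmetric congruence factors
            A = s[:, None] * A * s[None, :]    # D^{-1/2} A D^{-1/2}
            s_total *= s
        return A, s_total
    ```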