20,276 research outputs found

    An improved generalized inverse algorithm for linear inequalities and its applications

    Iterative, two-class algorithm for linear inequalities

    An improved multi-parametric programming algorithm for flux balance analysis of metabolic networks

    Flux balance analysis has proven an effective tool for analyzing metabolic networks. In flux balance analysis, reaction rates and optimal pathways are ascertained by solving a linear program, in which the growth rate is maximized subject to mass-balance constraints. A variety of cell functions in response to environmental stimuli can be quantified using flux balance analysis by parameterizing the linear program with respect to extracellular conditions. However, for most large, genome-scale metabolic networks of practical interest, the resulting parametric problem has multiple and highly degenerate optimal solutions, which are computationally challenging to handle. An improved multi-parametric programming algorithm based on active-set methods is introduced in this paper to overcome these computational difficulties. Degeneracy and multiplicity are handled, respectively, by introducing generalized inverses and auxiliary objective functions into the formulation of the optimality conditions. These improvements are especially effective for metabolic networks because their stoichiometry matrices are generally sparse; thus, fast and efficient algorithms from sparse linear algebra can be leveraged to compute generalized inverses and null-space bases. We illustrate the application of our algorithm to flux balance analysis of metabolic networks by studying a reduced metabolic model of Corynebacterium glutamicum and a genome-scale model of Escherichia coli. We then demonstrate how the critical regions resulting from these studies can be associated with optimal metabolic modes and discuss the physical relevance of optimal pathways arising from various auxiliary objective functions. Achieving more than five-fold improvement in computational speed over existing multi-parametric programming tools, the proposed algorithm proves promising in handling genome-scale metabolic models. Comment: Accepted in J. Optim. Theory Appl. First draft was submitted on August 4th, 201
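    The core of flux balance analysis is just the linear program described above: maximize a growth-proxy flux subject to steady-state mass balance Sv = 0 and flux bounds. The sketch below, in Python with SciPy, sets this up for a toy stoichiometric network; the network, bounds, and biomass-reaction index are illustrative assumptions, not the Corynebacterium glutamicum or Escherichia coli models studied in the paper, and no parametric or degeneracy handling is attempted.

```python
# Minimal flux balance analysis sketch: maximize a biomass-proxy flux subject
# to mass balance S v = 0 and flux bounds. Toy network, illustrative values only.
import numpy as np
from scipy.optimize import linprog

# Rows: metabolites A, B, C; columns: reactions
#   R1: -> A (uptake), R2: A -> B, R3: B -> C, R4: C -> (biomass proxy)
S = np.array([
    [1, -1,  0,  0],   # A
    [0,  1, -1,  0],   # B
    [0,  0,  1, -1],   # C
], dtype=float)

n_rxns = S.shape[1]
lb = np.zeros(n_rxns)            # irreversible reactions
ub = np.full(n_rxns, 10.0)       # generic capacity bounds
ub[0] = 5.0                      # extracellular uptake limit (the quantity a
                                 # parametric study would vary)

c = np.zeros(n_rxns)
c[3] = -1.0                      # linprog minimizes, so negate the biomass flux

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
              bounds=list(zip(lb, ub)), method="highs")
print("optimal biomass-proxy flux:", -res.fun)
print("flux distribution:", res.x)
```

    In a multi-parametric study the uptake bound (and other extracellular parameters) would be varied and the optimal solution tracked across critical regions, which is where the degeneracy handling via generalized inverses and auxiliary objectives described above becomes essential.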

    Super-resolution, Extremal Functions and the Condition Number of Vandermonde Matrices

    Super-resolution is a fundamental task in imaging, where the goal is to extract fine-grained structure from coarse-grained measurements. Here we are interested in a popular mathematical abstraction of this problem that has been widely studied in the statistics, signal processing and machine learning communities. We exactly resolve the threshold at which noisy super-resolution is possible. In particular, we establish a sharp phase transition for the relationship between the cutoff frequency ($m$) and the separation ($\Delta$). If $m > 1/\Delta + 1$, our estimator converges to the true values at an inverse polynomial rate in terms of the magnitude of the noise. And when $m < (1-\epsilon)/\Delta$, no estimator can distinguish between a particular pair of $\Delta$-separated signals even if the magnitude of the noise is exponentially small. Our results involve making novel connections between extremal functions and the spectral properties of Vandermonde matrices. We establish a sharp phase transition for their condition number, which in turn allows us to give the first noise tolerance bounds for the matrix pencil method. Moreover, we show that our methods can be interpreted as giving preconditioners for Vandermonde matrices, and we use this observation to design faster algorithms for super-resolution. We believe that these ideas may have other applications in designing faster algorithms for other basic tasks in signal processing. Comment: 19 pages
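    The dependence of the Vandermonde condition number on $m$ relative to $1/\Delta$ is easy to probe numerically. The snippet below is a self-contained illustration, not code from the paper: it builds the Vandermonde matrix with nodes $e^{2\pi i f_j}$ on the unit circle and prints its condition number for cutoff frequencies below and above $1/\Delta$; the frequencies and the particular values of $m$ are arbitrary choices for the experiment.

```python
# Illustrative experiment: condition number of a Vandermonde matrix with nodes
# on the unit circle, for cutoff frequencies m below and above 1/Delta.
import numpy as np

def vandermonde(freqs, m):
    """V[k, j] = exp(2*pi*i*k*f_j) for k = 0, ..., m-1."""
    k = np.arange(m)[:, None]
    return np.exp(2j * np.pi * k * np.asarray(freqs)[None, :])

delta = 0.05                    # minimum separation of the frequencies
freqs = np.arange(5) * delta    # five Delta-separated frequencies in [0, 1)

for m in (10, 22, 40):          # roughly 0.5/Delta, 1/Delta + 2, 2/Delta
    cond = np.linalg.cond(vandermonde(freqs, m))
    print(f"m = {m:3d} (1/Delta = {1/delta:.0f}), condition number = {cond:.3e}")
```

    The blow-up of the condition number below the threshold is the numerical face of the lower bound above, and the benign conditioning once $m$ exceeds $1/\Delta + 1$ is what noise-tolerance guarantees for methods such as the matrix pencil method can exploit.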

    First order algorithms in variational image processing

    Variational methods in imaging are nowadays developing towards a quite universal and flexible tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \rightarrow \min_u$, where the functional $\mathcal{D}$ is a data-fidelity term, depending also on some input data $f$ and measuring the deviation of $Ku$ from it, and $\mathcal{R}$ is a regularization functional. Moreover, $K$ is an (often linear) forward operator modeling the dependence of the data on an underlying image, and $\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth, convex functionals such as the total variation and generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field has revived interest in techniques like operator splittings and augmented Lagrangians. Here we provide an overview of currently developed methods and recent results, as well as some computational studies comparing different methods and illustrating their success in applications. Comment: 60 pages, 33 figures
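    As a concrete instance of the splitting idea, the sketch below (an illustration, not code from the survey) applies forward-backward splitting, i.e. the proximal gradient method, to the model above with $\mathcal{D}(Ku) = \frac{1}{2}\|Ku - f\|^2$ and $\mathcal{R}(u) = \|u\|_1$: a gradient step on the smooth data term followed by the proximal map of the nonsmooth regularizer, which for the $\ell_1$-norm is componentwise soft-thresholding. The operator $K$, data $f$, and parameter values are toy assumptions.

```python
# Forward-backward (proximal gradient) splitting for D(Ku) + alpha * R(u)
# with D(v) = 0.5 * ||v - f||^2 and R(u) = ||u||_1. Toy illustration only.
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(K, f, alpha, n_iter=500):
    u = np.zeros(K.shape[1])
    tau = 1.0 / np.linalg.norm(K, 2) ** 2       # step size <= 1/L, L = ||K||^2
    for _ in range(n_iter):
        grad = K.T @ (K @ u - f)                # gradient of the smooth term
        u = soft_threshold(u - tau * grad, tau * alpha)  # prox of the nonsmooth term
    return u

# Toy usage: recover a sparse signal through a random linear forward operator.
rng = np.random.default_rng(0)
K = rng.standard_normal((40, 100))
u_true = np.zeros(100)
u_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
f = K @ u_true + 0.01 * rng.standard_normal(40)
u_hat = forward_backward(K, f, alpha=0.1)
print("largest coefficients at indices:", np.argsort(-np.abs(u_hat))[:3])
```

    The same two-term structure underlies the operator-splitting and augmented-Lagrangian schemes surveyed in the paper, with the total variation (whose proximal map itself requires an inner solver) typically replacing the $\ell_1$ term in imaging applications.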