
    Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization

    In this paper, we present a new stochastic algorithm, namely the stochastic block mirror descent (SBMD) method, for solving large-scale nonsmooth and stochastic optimization problems. The basic idea of this algorithm is to incorporate block-coordinate decomposition and an incremental block averaging scheme into the classic (stochastic) mirror-descent method, in order to significantly reduce the cost per iteration of the latter algorithm. We establish the rate of convergence of the SBMD method, along with its associated large-deviation results, for solving general nonsmooth and stochastic optimization problems. We also introduce different variants of this method and establish their rates of convergence for solving strongly convex, smooth, and composite optimization problems, as well as certain nonconvex optimization problems. To the best of our knowledge, all these developments related to the SBMD method are new in the stochastic optimization literature. Moreover, some of our results also appear to be new for block coordinate descent methods for deterministic optimization.
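
    As a rough illustration of the block-coordinate idea behind SBMD, the sketch below updates one randomly sampled block per iteration with a stochastic subgradient step and averages the iterates incrementally. It assumes a Euclidean prox function and a hypothetical noisy-subgradient oracle (stoch_subgrad), so it is only a simplified rendering of the idea, not the paper's exact scheme.

        import numpy as np

        def sbmd_sketch(x0, stoch_subgrad, n_blocks, n_iters, step=0.01, seed=0):
            """Minimal stochastic block mirror-descent loop (Euclidean prox)."""
            rng = np.random.default_rng(seed)
            x = x0.copy()
            blocks = np.array_split(np.arange(x.size), n_blocks)
            x_avg = np.zeros_like(x)
            for k in range(1, n_iters + 1):
                b = blocks[rng.integers(n_blocks)]   # sample one block uniformly
                g = stoch_subgrad(x)                 # noisy subgradient of the objective
                x[b] -= step * g[b]                  # mirror step restricted to that block
                x_avg += (x - x_avg) / k             # incremental averaging of iterates
            return x_avg

        # Toy usage: minimize E||x - (a + noise)||_1 for a fixed target vector a.
        a = np.linspace(-1.0, 1.0, 20)
        noisy_subgrad = lambda x: np.sign(x - (a + 0.05 * np.random.randn(a.size)))
        x_hat = sbmd_sketch(np.zeros_like(a), noisy_subgrad, n_blocks=4, n_iters=5000)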

    Linearly Convergent First-Order Algorithms for Semi-definite Programming

    In this paper, we consider two formulations for Linear Matrix Inequalities (LMIs) under a Slater-type constraint qualification assumption, namely, a smooth and a non-smooth SDP formulation. We also propose two linearly convergent first-order algorithms for solving these formulations. Moreover, we introduce a bundle-level method that converges linearly, uniformly for both smooth and non-smooth problems, and does not require any smoothness information. The convergence properties of these algorithms are also discussed. Finally, we consider a special case of LMIs, a linear system of inequalities, and show that a linearly convergent algorithm can be obtained under a weaker assumption.
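
    For intuition only, the non-smooth formulation of an LMI feasibility problem A_0 + x_1 A_1 + ... + x_m A_m ⪰ 0 can be written as minimizing f(x) = -λ_min(A(x)) and attacked with a plain subgradient method, as sketched below. The matrices A0 and A_list are placeholders, and this slowly convergent sketch is not the linearly convergent first-order or bundle-level methods proposed in the paper.

        import numpy as np

        def lmi_subgradient_sketch(A0, A_list, n_iters=2000, step=0.1):
            """Subgradient method on f(x) = -lambda_min(A0 + sum_i x_i * A_i).

            A value f(x) <= 0 certifies feasibility of the LMI; a subgradient
            of f at x is -(v' A_i v)_i with v a unit eigenvector for lambda_min.
            """
            x = np.zeros(len(A_list))
            for k in range(1, n_iters + 1):
                A = A0 + sum(xi * Ai for xi, Ai in zip(x, A_list))
                w, V = np.linalg.eigh(A)            # eigenvalues in ascending order
                lam_min, v = w[0], V[:, 0]
                if lam_min >= 0:                    # feasible point found
                    break
                g = np.array([-(v @ Ai @ v) for Ai in A_list])
                x -= (step / np.sqrt(k)) * g        # diminishing step size
            return x, lam_min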

    A machine learning approach to explore the spectra intensity pattern of peptides using tandem mass spectrometry data

    Background: A better understanding of the mechanisms involved in gas-phase fragmentation of peptides is essential for the development of more reliable algorithms for high-throughput protein identification using mass spectrometry (MS). Current methodologies depend predominantly on the use of derived m/z values of fragment ions, and the knowledge provided by the intensity information present in MS/MS spectra has not been fully exploited. Indeed, spectrum intensity information is very rarely utilized in the algorithms currently in use for high-throughput protein identification. Results: In this work, a Bayesian neural network approach is employed to analyze ion intensity information present in 13,878 different MS/MS spectra. The influence of a library of 35 features on peptide fragmentation is examined under different proton mobility conditions. Useful rules involved in peptide fragmentation are found, and subsets of features that have a significant influence on the fragmentation pathways of peptides are characterised. An intensity model is built based on the selected features, and the model can make accurate predictions of the intensity patterns for given MS/MS spectra. The predictions include not only the mean values of spectrum intensity but also the variances, which can be used to tolerate noise and systematic biases within experimental MS/MS spectra. Conclusion: The intensity patterns of fragmentation spectra are informative and can be used to analyze the influence of various characteristics of fragmented peptides on their fragmentation pathways. The features with significant influence can in turn be used to predict spectrum intensities. Such information can help develop more reliable algorithms for peptide and protein identification.
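
    The paper's model is a Bayesian neural network over the 35-feature library; as a loose, non-Bayesian approximation of the same idea (predicting both an intensity mean and a variance that absorbs noise), one can train a small heteroscedastic network with a Gaussian negative log-likelihood, as in the sketch below. The tensors X and y are random placeholders with illustrative shapes; this is not the authors' model or data.

        import torch
        from torch import nn

        X = torch.randn(1024, 35)   # 35 fragmentation features per spectrum (placeholder data)
        y = torch.randn(1024, 1)    # normalized fragment-ion intensity (placeholder data)

        class IntensityNet(nn.Module):
            """Small network emitting a predictive mean and variance per spectrum."""
            def __init__(self, n_features=35, hidden=32):
                super().__init__()
                self.body = nn.Sequential(nn.Linear(n_features, hidden), nn.Tanh())
                self.mean_head = nn.Linear(hidden, 1)
                self.logvar_head = nn.Linear(hidden, 1)

            def forward(self, x):
                h = self.body(x)
                return self.mean_head(h), self.logvar_head(h).exp()  # mean, variance > 0

        model = IntensityNet()
        loss_fn = nn.GaussianNLLLoss()   # Gaussian negative log-likelihood
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for epoch in range(200):
            mean, var = model(X)
            loss = loss_fn(mean, y, var)             # fits the mean and the variance jointly
            opt.zero_grad(); loss.backward(); opt.step()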

    Linearized Asymptotic Stability for Fractional Differential Equations

    We prove the theorem of linearized asymptotic stability for fractional differential equations. More precisely, we show that an equilibrium of a nonlinear Caputo fractional differential equation is asymptotically stable if its linearization at the equilibrium is asymptotically stable. As a consequence, we extend Lyapunov's first method to fractional differential equations by proving that if the spectrum of the linearization is contained in the sector $\{\lambda \in \mathbb{C} : |\arg \lambda| > \frac{\alpha \pi}{2}\}$, where $\alpha > 0$ denotes the order of the fractional differential equation, then the equilibrium of the nonlinear fractional differential equation is asymptotically stable.
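
    As a quick numerical illustration of that sector condition (with a made-up Jacobian; both the matrix and the order below are only examples), one can check whether every eigenvalue λ of the linearization satisfies |arg λ| > απ/2:

        import numpy as np

        def satisfies_sector_condition(J, alpha):
            """Check |arg(lambda)| > alpha*pi/2 for every eigenvalue of the Jacobian J."""
            eigvals = np.linalg.eigvals(J)
            return bool(np.all(np.abs(np.angle(eigvals)) > alpha * np.pi / 2))

        # Hypothetical linearization at an equilibrium, fractional order alpha = 0.7
        J = np.array([[-1.0,  2.0],
                      [-2.0, -1.0]])
        print(satisfies_sector_condition(J, alpha=0.7))   # True: eigenvalues -1 ± 2i lie in the sector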