
    Exploring the common concepts of adaptive MCMC and Covariance Matrix Adaptation schemes


    Global parameter identification of stochastic reaction networks from single trajectories

    We consider the problem of inferring the unknown parameters of a stochastic biochemical network model from a single measured time-course of the concentration of some of the involved species. Such measurements are available, e.g., from live-cell fluorescence microscopy in image-based systems biology. In addition, fluctuation time-courses from, e.g., fluorescence correlation spectroscopy provide additional information about the system dynamics that can be used to infer parameters more robustly than when considering only mean concentrations. Estimating model parameters from a single experimental trajectory enables single-cell measurements and quantification of cell-cell variability. We propose a novel combination of an adaptive Monte Carlo sampler, called Gaussian Adaptation, and efficient exact stochastic simulation algorithms that allows parameter identification from single stochastic trajectories. We benchmark the proposed method on a linear and a non-linear reaction network at steady state and during transient phases. In addition, we demonstrate that the present method also provides an ellipsoidal volume estimate of the viable part of parameter space and is able to estimate the physical volume of the compartment in which the observed reactions take place. Comment: Article in print as a book chapter in Springer's "Advances in Systems Biology".
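As a rough illustration of the sampler, here is a minimal Gaussian Adaptation loop in its generic minimization form; the update constants and the greedy threshold rule are common defaults, not necessarily the paper's exact settings, and the objective stands in for the trajectory-matching criterion:

```python
import numpy as np

def gaussian_adaptation(f, x0, r0=1.0, n_iter=3000, seed=0):
    """Gaussian Adaptation (GaA) sketch for minimization.

    Draws candidates from an adaptive Gaussian N(m, r^2 C). An accepted
    candidate (one beating the current threshold) pulls the mean and the
    covariance toward it and expands the step size r; a rejection
    contracts r. Constants are common textbook choices (an assumption).
    """
    rng = np.random.default_rng(seed)
    n = len(x0)
    m = np.asarray(x0, dtype=float)
    C = np.eye(n)                       # normalized covariance shape
    r = r0                              # global step size
    P = 1.0 / np.e                      # target acceptance probability
    fe = 1.1                            # expansion factor on acceptance
    fc = fe ** (-P / (1.0 - P))         # contraction: fe^P * fc^(1-P) = 1
    c_T = f(m)                          # greedy acceptance threshold
    best = m.copy()
    for _ in range(n_iter):
        x = m + r * np.linalg.cholesky(C) @ rng.standard_normal(n)
        fx = f(x)
        if fx < c_T:                    # hit: adapt toward the sample
            c_T = fx
            d = (x - m) / r
            m = x
            C = 0.9 * C + 0.1 * np.outer(d, d)
            C /= np.linalg.det(C) ** (1.0 / n)  # keep unit volume
            r *= fe
            best = x.copy()
        else:                           # miss: shrink the search radius
            r *= fc
    return best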

    Poisson-Dirichlet statistics for the extremes of a log-correlated Gaussian field

    We study the statistics of the extremes of a discrete Gaussian field with logarithmic correlations at the level of the Gibbs measure. The model is defined on the periodic interval [0,1], and its correlation structure is nonhierarchical. It is based on a model introduced by Bacry and Muzy [Comm. Math. Phys. 236 (2003) 449-475] (see also Barral and Mandelbrot [Probab. Theory Related Fields 124 (2002) 409-430]), and is similar to the logarithmic Random Energy Model studied by Carpentier and Le Doussal [Phys. Rev. E (3) 63 (2001) 026110] and more recently by Fyodorov and Bouchaud [J. Phys. A 41 (2008) 372001]. At low temperature, it is shown that the normalized covariance of two points sampled from the Gibbs measure is either 0 or 1. This is used to prove that the joint distribution of the Gibbs weights converges in a suitable sense to that of a Poisson-Dirichlet variable. In particular, this proves a conjecture of Carpentier and Le Doussal that the statistics of the extremes of the log-correlated field behave as those of i.i.d. Gaussian variables and of branching Brownian motion at the level of the Gibbs measure. The method of proof is robust and is adaptable to other log-correlated Gaussian fields. Comment: Published at http://dx.doi.org/10.1214/13-AAP952 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
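Poisson-Dirichlet weights of the kind appearing in the limit can be sampled via the standard GEM stick-breaking construction; a minimal sketch (the parameter theta below is generic and not tied to the paper's temperature scaling):

```python
import numpy as np

def poisson_dirichlet_weights(theta, n=1000, seed=0):
    """Approximate sample from Poisson-Dirichlet PD(0, theta).

    GEM stick-breaking: break a unit stick with independent
    Beta(1, theta) fractions, then rank the pieces in decreasing
    order. Truncated at n sticks, so the weights sum to slightly
    less than 1 (the remainder is geometrically small).
    """
    rng = np.random.default_rng(seed)
    betas = rng.beta(1.0, theta, size=n)
    # stick i keeps fraction betas[i] of what is left after sticks < i
    remaining = np.cumprod(np.concatenate(([1.0], 1.0 - betas[:-1])))
    sticks = betas * remaining
    return np.sort(sticks)[::-1]        # ranked weights: PD(0, theta)
```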

    Variable Metric Random Pursuit

    We consider unconstrained randomized optimization of smooth convex objective functions in the gradient-free setting. We analyze Random Pursuit (RP) algorithms with fixed (F-RP) and variable metric (V-RP). The algorithms only use zeroth-order information about the objective function and compute an approximate solution by repeated optimization over randomly chosen one-dimensional subspaces. The distribution of search directions is dictated by the chosen metric. Variable Metric RP uses novel variants of a randomized zeroth-order Hessian approximation scheme recently introduced by Leventhal and Lewis (D. Leventhal and A. S. Lewis, Optimization 60(3), 329-345, 2011). We present (i) a refined analysis of the expected single-step progress of RP algorithms and their global convergence on (strictly) convex functions and (ii) novel convergence bounds for V-RP on strongly convex functions. We also quantify how well the employed metric needs to match the local geometry of the function in order for the RP algorithms to converge at the best possible rate. Our theoretical results are accompanied by numerical experiments comparing V-RP with the derivative-free schemes CMA-ES, Implicit Filtering, Nelder-Mead, NEWUOA, Pattern-Search and Nesterov's gradient-free algorithms. Comment: 42 pages, 6 figures, 15 tables, submitted to journal; Version 3: majorly revised second part, i.e. Section 5 and Appendix.
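A minimal fixed-metric Random Pursuit loop, with a three-point quadratic fit standing in for the (approximate) line-search oracle assumed in the paper:

```python
import numpy as np

def random_pursuit(f, x0, n_iter=2000, h=0.01, seed=0):
    """Fixed-metric Random Pursuit (F-RP) sketch.

    Each iteration draws a uniformly random direction u and moves to
    an approximate minimizer of the one-dimensional slice
    t -> f(x + t*u), using only zeroth-order evaluations. The paper
    assumes a line-search oracle; here a three-point quadratic fit
    stands in for it (an assumption, not the authors' implementation).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        u = rng.standard_normal(x.size)
        u /= np.linalg.norm(u)          # uniform direction on the sphere
        fm, f0, fp = f(x - h * u), f(x), f(x + h * u)
        curv = fp - 2.0 * f0 + fm       # curvature along u (times h^2)
        if curv > 0:                    # fitted parabola opens upward
            t = 0.5 * h * (fm - fp) / curv  # vertex of the parabola
            x = x + t * u
    return x
```

For a quadratic objective the three-point fit is exact, so each step lands on the true one-dimensional minimizer along u.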

    On the Geometry of Maximum Entropy Problems

    We show that a simple geometric result suffices to derive the form of the optimal solution in a large class of finite- and infinite-dimensional maximum entropy problems concerning probability distributions, spectral densities and covariance matrices. These include Burg's spectral estimation method and Dempster's covariance completion, as well as various recent generalizations of the above. We then apply this orthogonality principle to the new problem of completing a block-circulant covariance matrix when an a priori estimate is available. Comment: 22 pages.
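As a generic illustration of the kind of optimal form such arguments produce (in standard notation, not the paper's), maximizing entropy over probability densities subject to moment constraints yields an exponential family:

```latex
% Maximize H(p) = -\int p(x) \log p(x)\,dx
% subject to \mathbb{E}_p[f_i] = c_i, \quad i = 1, \dots, m.
% The maximizer has the exponential form
p^{*}(x) = \frac{1}{Z(\lambda)} \exp\!\Big( \sum_{i=1}^{m} \lambda_i f_i(x) \Big),
\qquad
Z(\lambda) = \int \exp\!\Big( \sum_{i=1}^{m} \lambda_i f_i(x) \Big)\, dx,
% with the multipliers \lambda_i chosen to satisfy the constraints.
```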

    Regularized Optimal Transport and the Rot Mover's Distance

    This paper presents a unified framework for smooth convex regularization of discrete optimal transport problems. In this context, the regularized optimal transport turns out to be equivalent to a matrix nearness problem with respect to Bregman divergences. Our framework thus naturally generalizes a previously proposed regularization based on the Boltzmann-Shannon entropy related to the Kullback-Leibler divergence, and solved with the Sinkhorn-Knopp algorithm. We call the regularized optimal transport distance the rot mover's distance in reference to the classical earth mover's distance. We develop two generic schemes, which we respectively call the alternate scaling algorithm and the non-negative alternate scaling algorithm, to efficiently compute the regularized optimal plans depending on whether the domain of the regularizer lies within the non-negative orthant or not. These schemes are based on Dykstra's algorithm with alternate Bregman projections, and further exploit the Newton-Raphson method when applied to separable divergences. We enhance the separable case with a sparse extension to deal with high-dimensional data. We also instantiate our proposed framework and discuss the inherent specificities for well-known regularizers and statistical divergences in the machine learning and information geometry communities. Finally, we demonstrate the merits of our methods with experiments using synthetic data to illustrate the effect of different regularizers and penalties on the solutions, as well as real-world data for a pattern recognition application to audio scene classification.
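For the Boltzmann-Shannon/KL special case recovered by the framework, the regularized plan can be computed with Sinkhorn-Knopp scaling; a minimal sketch (variable names are illustrative):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    """Entropy-regularized optimal transport via Sinkhorn-Knopp.

    a, b : source/target marginals (each summing to 1)
    C    : cost matrix
    eps  : regularization strength (Boltzmann-Shannon entropy)
    Returns the regularized transport plan P with marginals a and b.
    """
    K = np.exp(-C / eps)                # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)               # alternate KL (Bregman) projections
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # P = diag(u) K diag(v)
```

Each pair of updates is one sweep of alternating Bregman projections onto the two marginal constraint sets, the same structure the paper generalizes via Dykstra's algorithm.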

    Differential Evolution with Population and Strategy Parameter Adaptation

    Differential evolution (DE) is simple and effective in solving numerous real-world global optimization problems. However, its effectiveness critically depends on an appropriate setting of the population size and the strategy parameters, so obtaining optimal performance requires time-consuming preliminary parameter tuning. Recently, various strategy-parameter adaptation techniques have been proposed that automatically update the parameters to values suited to the characteristics of the optimization problem. However, most of these works do not control the adaptation of the population size. In addition, they adapt each strategy parameter individually and do not take into account the interactions between the parameters being adapted. In this paper, we introduce a DE algorithm in which both strategy parameters are self-adapted while accounting for parameter dependencies, by means of a multivariate probabilistic technique based on Gaussian Adaptation operating on the parameter space. In addition, the proposed algorithm starts by sampling a large number of candidate solutions in the search space, and in each generation a constant number of individuals is adaptively selected from this large sample set to form the evolving population. The proposed algorithm is evaluated on 14 benchmark problems of CEC 2005 with different dimensionalities.
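For reference, the classic DE/rand/1/bin baseline that such adaptive variants build on can be sketched as follows; the population and strategy-parameter adaptation described in the abstract is not implemented here (F and CR stay fixed):

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=40, F=0.8, CR=0.9,
                           n_gen=200, seed=0):
    """Classic DE/rand/1/bin with fixed strategy parameters.

    F  : differential weight (mutation scale)
    CR : crossover rate
    In adaptive variants, F, CR (and the population size) would be
    updated between generations instead of being held constant.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    d = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, d))
    fit = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            # mutation: three distinct individuals, none equal to i
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover with at least one mutant coordinate
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True
            trial = np.where(cross, mutant, pop[i])
            # greedy selection
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
        # strategy-parameter adaptation (e.g. via Gaussian Adaptation
        # on the parameter space) would update F and CR here
    return pop[fit.argmin()], fit.min()
```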

    Flexible methods for blind separation of complex signals

    One of the main issues in Blind Source Separation (BSS) performed with a neural network approach is the choice of the nonlinear activation function (AF). In fact, if the shape of the activation function is chosen as the cumulative distribution function (c.d.f.) of the original source, the problem is solved. To this end, a flexible approach is introduced in this thesis: the shape of the activation functions is changed during the learning process using so-called "spline functions". The problem is more complicated in the case of separation of complex sources, where there is a dichotomy between analyticity and boundedness of complex activation functions. This problem is solved by introducing the "splitting function" model as the activation function. The "splitting function" is a pair of "spline functions" that model the real and the imaginary part of the complex activation function, each depending on the real or the imaginary variable, respectively. A more realistic model is the "generalized splitting function", which is formed by a pair of bi-dimensional functions (surfaces), one for the real and one for the imaginary part of the complex function, each depending on both the real and imaginary parts of the complex variable. Unfortunately, the linear environment is unrealistic in many practical applications, which motivates extending the BSS problem to the nonlinear environment: in this case both the activation function and the nonlinear distorting function are realized by "splitting functions" made of "spline functions". Complex and instantaneous separation in linear and nonlinear environments allows us to perform a complex-valued extension of the well-known INFOMAX algorithm in several practical situations, such as convolutive mixtures, fMRI signal analysis and bandpass signal transmission. In addition, advanced characteristics of the proposed approach are introduced and described in depth.
    First of all, it is shown that splines are universal nonlinear functions for the BSS problem: they are able to perform separation in any case. Then it is analyzed how the "splitting solution" allows the algorithm to achieve phase recovery, where usually there is a phase ambiguity. Finally, a Cramér-Rao lower bound for ICA is discussed. Several experimental results, evaluated with different objective indexes, show the effectiveness of the proposed approaches.
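The splitting idea itself fits in a few lines; here a fixed tanh stands in for the adaptive spline nonlinearities of the thesis (an assumption for illustration):

```python
import numpy as np

def splitting_activation(z, g_re=np.tanh, g_im=np.tanh):
    """'Splitting function' sketch for complex-valued BSS.

    A complex activation cannot be both analytic and bounded, so the
    splitting model sidesteps the dichotomy by applying a bounded real
    nonlinearity separately to the real and the imaginary part of the
    input. In the thesis each part is an adaptive spline; here tanh is
    used as a stand-in.
    """
    z = np.asarray(z, dtype=complex)
    return g_re(z.real) + 1j * g_im(z.imag)
```

The generalized splitting function replaces g_re and g_im with two surfaces of both z.real and z.imag, following the same split of the output into real and imaginary parts.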