
    A comparative study of two stochastic mode reduction methods

    We present a comparative study of two methods for the reduction of the dimensionality of a system of ordinary differential equations that exhibits time-scale separation. Both methods lead to a reduced system of stochastic differential equations. The novel feature of these methods is that they allow the use, in the reduced system, of higher-order terms in the resolved variables. The first method, proposed by Majda, Timofeyev and Vanden-Eijnden, is based on an asymptotic strategy developed by Kurtz. The second method is a short-memory approximation of the Mori-Zwanzig projection formalism of irreversible statistical mechanics, as proposed by Chorin, Hald and Kupferman. We present conditions under which the reduced models arising from the two methods should have similar predictive ability. We apply the two methods to test cases that satisfy these conditions. The form of the reduced models and the numerical simulations show that the two methods have similar predictive ability, as expected. Comment: 35 pages, 6 figures. Under review in Physica
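
    To fix ideas, here is a minimal numerical sketch of the setting both methods address: a toy slow-fast system with a resolved variable x and a fast variable y, integrated alongside a reduced SDE for x alone whose drift retains a higher-order (cubic) term in the resolved variable. The toy system, its coefficients, and the reduced drift and noise level are illustrative assumptions, not the test cases or reduced models derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy slow-fast system (illustrative, not one of the paper's test cases):
#   dx = (-x^3 + y) dt                          (slow, resolved)
#   dy = ((x - y)/eps) dt + sqrt(2/eps) dW      (fast, unresolved)
eps = 1e-2

def full_step(x, y, dt):
    x_new = x + dt * (-x**3 + y)
    y_new = y + dt * (x - y) / eps + np.sqrt(2.0 * dt / eps) * rng.standard_normal()
    return x_new, y_new

# Candidate reduced SDE for the resolved variable alone; note the cubic
# (higher-order) term retained in the drift:
#   dX = (X - X^3) dt + sqrt(2*eps) dW
# The coefficients are plausible leading-order values, not ones produced by
# either reduction procedure.
sigma = np.sqrt(2.0 * eps)

def reduced_step(x, dt):
    return x + dt * (x - x**3) + sigma * np.sqrt(dt) * rng.standard_normal()

dt, n_steps = 1e-4, 100_000
x_f, y_f, x_r = 1.0, 1.0, 1.0
traj_full, traj_red = np.empty(n_steps), np.empty(n_steps)
for i in range(n_steps):
    x_f, y_f = full_step(x_f, y_f, dt)
    x_r = reduced_step(x_r, dt)
    traj_full[i] = x_f
    traj_red[i] = x_r

# Compare simple statistics of the resolved variable under both models.
print("mean/std of x, full model:   ", traj_full.mean(), traj_full.std())
print("mean/std of x, reduced model:", traj_red.mean(), traj_red.std())
```

    Either reduction procedure supplies the drift and diffusion coefficients systematically; the sketch only illustrates the kind of comparison (statistics of the resolved variable under full versus reduced dynamics) that such studies report.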

    The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity called "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
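
    A small simulation sketch of the equivalence described above, under assumed synthetic data: spikes are drawn from an LNP model with an exponential nonlinearity, and for a candidate stimulus direction the nonlinearity is re-estimated by a histogram ratio. The per-spike log-likelihood gain of the resulting LNP model over a homogeneous-Poisson model is then compared with the empirical single-spike information computed from the same histograms. All variable names, bin counts, and the simulated data are illustrative, not the authors' code or recordings.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Simulated LNP data (illustrative; not the paper's experiments) ---
n_samples, n_dims, dt = 50_000, 10, 0.01
X = rng.standard_normal((n_samples, n_dims))          # white-noise stimuli
k_true = rng.standard_normal(n_dims)
k_true /= np.linalg.norm(k_true)
rate = np.exp(1.0 + 1.5 * (X @ k_true))               # exponential nonlinearity
spikes = rng.poisson(rate * dt)                        # spike counts per time bin

def normalized_loglik_and_info(w, n_bins=25):
    """For a candidate direction w, estimate the nonlinearity by histogram
    ratio and return (per-spike log-likelihood gain over a constant-rate
    Poisson model, empirical single-spike information), both in nats."""
    z = X @ w
    edges = np.quantile(z, np.linspace(0, 1, n_bins + 1))
    edges[0] -= 1e-9
    b = np.clip(np.searchsorted(edges, z, side="right") - 1, 0, n_bins - 1)

    n_sp = spikes.sum()
    raw_counts = np.bincount(b, minlength=n_bins)                  # P(bin)
    spk_counts = np.bincount(b, weights=spikes, minlength=n_bins)  # P(bin | spike)
    p_raw = raw_counts / n_samples
    p_spk = spk_counts / n_sp

    # Single-spike information: sum_b P(b|spike) * log[ P(b|spike) / P(b) ]
    nz = spk_counts > 0
    info = np.sum(p_spk[nz] * np.log(p_spk[nz] / p_raw[nz]))

    # LNP log-likelihood with the histogram-ratio nonlinearity, minus the
    # log-likelihood of a homogeneous-Poisson model, normalized per spike.
    mean_rate = n_sp / (n_samples * dt)
    lam = mean_rate * (p_spk[b] / p_raw[b])            # rate assigned to each time bin
    obs = spikes > 0
    loglik = np.sum(spikes[obs] * np.log(lam[obs] * dt)) - np.sum(lam * dt)
    loglik0 = np.sum(spikes[obs] * np.log(mean_rate * dt)) - n_samples * mean_rate * dt
    return (loglik - loglik0) / n_sp, info

for label, w in [("true filter", k_true), ("random direction", rng.standard_normal(n_dims))]:
    ll_gain, info = normalized_loglik_and_info(w / np.linalg.norm(w))
    print(f"{label:17s}  per-spike log-lik gain = {ll_gain:.4f}   single-spike info = {info:.4f}")
```

    In this construction the two printed quantities agree, which is the sense in which the histogram-based single-spike-information objective behaves as a normalized Poisson log-likelihood.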

    Dimension reduction for systems with slow relaxation

    We develop reduced, stochastic models for high-dimensional, dissipative dynamical systems that relax very slowly to equilibrium and can encode long-term memory. We present a variety of empirical and first-principles approaches for model reduction, and build a mathematical framework for analyzing the reduced models. We introduce the notions of universal and asymptotic filters to characterize 'optimal' model reductions for sloppy linear models. We illustrate our methods by applying them to the practically important problem of modeling evaporation in oil spills. Comment: 48 pages, 13 figures. Paper dedicated to the memory of Leo Kadanoff
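
    The flavor of the "slow relaxation / sloppy linear model" setting can be seen in a short sketch (illustrative only; the universal and asymptotic filters introduced in the paper are principled constructions, whereas the reduction below simply refits a few decay modes by least squares): an observable built from hundreds of exponential decay rates spread over several decades relaxes with no single time scale, yet a handful of well-placed modes can reproduce it on a finite observation window.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Sloppy" linear relaxation (illustrative): an observable that is a sum of
# many decaying exponentials whose rates span several orders of magnitude,
#   y(t) = sum_k w_k * exp(-lambda_k * t),
# so relaxation to equilibrium is very slow and has no single time scale.
n_modes = 200
rates = 10.0 ** rng.uniform(-3, 1, n_modes)     # decay rates over four decades
weights = rng.uniform(0, 1, n_modes)
weights /= weights.sum()

t = np.linspace(0.0, 50.0, 2000)
y_full = (weights[None, :] * np.exp(-np.outer(t, rates))).sum(axis=1)

# Reduced model: keep only a handful of representative rates and refit their
# weights by least squares on the observation window.
kept_rates = 10.0 ** np.linspace(-3, 1, 5)
basis = np.exp(-np.outer(t, kept_rates))
coef, *_ = np.linalg.lstsq(basis, y_full, rcond=None)
y_reduced = basis @ coef

err = np.max(np.abs(y_full - y_reduced))
print(f"5-mode reduced model, max error on window: {err:.2e}")
```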

    On Weighted Multivariate Sign Functions

    Multivariate sign functions are often used for robust estimation and inference. We propose using data-dependent weights in association with such functions. The proposed weighted sign functions retain desirable robustness properties, while significantly improving efficiency in estimation and inference compared to unweighted multivariate sign-based methods. Using weighted signs, we demonstrate methods of robust location estimation and robust principal component analysis. We extend the scope of robust multivariate methods to include robust sufficient dimension reduction and functional outlier detection. Several numerical studies and real data applications demonstrate the efficacy of the proposed methodology. Comment: Keywords: Multivariate sign, Principal component analysis, Data depth, Sufficient dimension reduction
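
    As a concrete illustration of the weighted-sign idea, the sketch below eigen-decomposes a spatial-sign covariance matrix with and without data-dependent weights to obtain robust principal component directions. The componentwise-median centering and the simple distance-based downweighting used here are stand-ins chosen for brevity; the paper's specific weighting scheme (e.g. depth-based weights) and its efficiency guarantees are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def spatial_sign(z, eps=1e-12):
    """Multivariate (spatial) sign: z / ||z||, with the origin mapped to 0."""
    norms = np.linalg.norm(z, axis=1, keepdims=True)
    return np.where(norms > eps, z / np.maximum(norms, eps), 0.0)

def weighted_sign_pca(X, weights=None):
    """Eigen-decompose a (weighted) spatial-sign covariance matrix; the
    eigenvectors serve as robust principal component directions."""
    centered = X - np.median(X, axis=0)          # componentwise median as a simple robust center
    S = spatial_sign(centered)
    if weights is None:
        weights = np.ones(len(X))
    W = weights / weights.sum()
    cov = (S * W[:, None]).T @ S                 # weighted sign covariance matrix
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]

# Elliptical data with a dominant direction, plus gross outliers.
n, d = 500, 4
X = rng.standard_normal((n, d)) @ np.diag([4.0, 2.0, 1.0, 0.5])
X[:20] += 50.0 * rng.standard_normal((20, d))    # 4% contamination

# Illustrative data-dependent weights: downweight points far from the center
# in Euclidean distance (a stand-in for the weights proposed in the paper).
dist = np.maximum(np.linalg.norm(X - np.median(X, axis=0), axis=1), 1e-12)
w = np.minimum(1.0, np.median(dist) / dist)

_, vecs_unweighted = weighted_sign_pca(X)
_, vecs_weighted = weighted_sign_pca(X, weights=w)
print("leading robust PC (unweighted signs):", np.round(vecs_unweighted[:, 0], 3))
print("leading robust PC (weighted signs):  ", np.round(vecs_weighted[:, 0], 3))
```

    Downweighting distant points further limits the influence of the contaminated rows on the estimated directions, which is the efficiency/robustness trade-off the abstract alludes to.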

    High-Performance Passive Macromodeling Algorithms for Parallel Computing Platforms

    This paper presents a comprehensive strategy for fast generation of passive macromodels of linear devices and interconnects on parallel computing hardware. Starting from a raw characterization of the structure in terms of frequency-domain tabulated scattering responses, we perform a rational curve fitting and a postprocessing passivity enforcement. Both algorithms are parallelized and cast in a form that is suitable for deployment on shared-memory multicore platforms. Particular emphasis is placed on the passivity characterization step, which is performed using two complementary strategies. The first uses an iterative restarted and deflated rational Arnoldi process to extract the imaginary Hamiltonian eigenvalues associated with the model. The second is based on an accuracy-controlled adaptive sampling. Various parallelization strategies are discussed for both schemes, with particular care devoted to load balancing between different computing threads and to memory occupation. The resulting parallel macromodeling flow is demonstrated on a number of medium- and large-scale structures, showing good scalability up to 16 computational cores.
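
    A simplified sketch of sampling-based passivity characterization, with an assumed toy pole-residue scattering model: the maximum singular value of S(j2*pi*f) is evaluated over a frequency sweep partitioned across worker processes, and samples with sigma_max > 1 are flagged as passivity violations. The model data, the uniform (non-adaptive) grid, and the use of Python's concurrent.futures are placeholders standing in for the paper's accuracy-controlled adaptive sampling, Hamiltonian eigenvalue extraction, and shared-memory multithreaded implementation.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Toy 2-port pole-residue scattering model, S(s) = D + sum_k R_k / (s - p_k).
# Poles and residues below are illustrative placeholders, not a fitted macromodel.
poles = [-1.0 + 5.0j, -1.0 - 5.0j, -0.5 + 20.0j, -0.5 - 20.0j]
R1 = np.array([[0.30, 0.10], [0.10, 0.20]]) * (1 + 0.5j)
R2 = np.array([[0.20, 0.05], [0.05, 0.40]]) * (1 - 0.3j)
residues = [R1, R1.conj(), R2, R2.conj()]      # conjugate pairs -> real-coefficient model
D = 0.1 * np.eye(2)

def max_singular_value(freq):
    """sigma_max of S(j*2*pi*f); a scattering model is passive at f iff this is <= 1."""
    s = 2j * np.pi * freq
    S = D + sum(R / (s - p) for R, p in zip(residues, poles))
    return np.linalg.svd(S, compute_uv=False)[0]

if __name__ == "__main__":
    # Partition a dense frequency sweep across workers; an accuracy-controlled
    # adaptive scheme would instead refine the grid only where sigma_max varies
    # rapidly or approaches 1.
    freqs = np.linspace(0.0, 10.0, 4001)
    with ProcessPoolExecutor() as pool:
        sigma = np.fromiter(pool.map(max_singular_value, freqs, chunksize=200), dtype=float)
    print(f"max sigma_max over sweep: {sigma.max():.4f}")
    print(f"samples violating passivity (sigma_max > 1): {(sigma > 1.0).sum()}")
```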