
    Stochastic Testing Simulator for Integrated Circuits and MEMS: Hierarchical and Sparse Techniques

    Process variations are a major concern in today's chip design, since they can significantly degrade chip performance. To predict such degradation, existing circuit and MEMS simulators rely on Monte Carlo algorithms, which are typically too slow. Therefore, novel fast stochastic simulators are highly desired. This paper first reviews our recently developed stochastic testing simulator, which can achieve speedup factors of hundreds to thousands over Monte Carlo. Then, we develop a fast hierarchical stochastic spectral simulator to simulate a complex circuit or system consisting of several blocks. We further present a fast simulation approach based on anchored ANOVA (analysis of variance) for design problems with many process variations. This approach can reduce the simulation cost and can identify which variation sources have strong impacts on the circuit's performance. Simulation results for several circuit and MEMS examples are reported to show the effectiveness of our simulators.
    Comment: Accepted to IEEE Custom Integrated Circuits Conference in June 2014. arXiv admin note: text overlap with arXiv:1407.302
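
    For context, the anchored ANOVA approach mentioned above builds on the standard anchored (cut-HDMR) decomposition of a performance metric over its variation sources. The following is a sketch of that general formula under our own notation, not necessarily the paper's exact construction. Given an anchor point $\mathbf{c} = (c_1, \dots, c_d)$:

```latex
f(\mathbf{x}) = f_0 + \sum_{i} f_i(x_i) + \sum_{i<j} f_{ij}(x_i, x_j) + \cdots,
\qquad
f_0 = f(\mathbf{c}),
\qquad
f_i(x_i) = f(c_1, \dots, c_{i-1},\, x_i,\, c_{i+1}, \dots, c_d) - f_0 .
```

    Truncating after the low-order terms replaces one $d$-dimensional problem with a few one- and two-dimensional ones, which is what makes such methods attractive when $d$ (the number of process variations) is large; the magnitude of each $f_i$ also indicates how strongly variation source $i$ affects the performance metric.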

    The Topology of Probability Distributions on Manifolds

    Let $P$ be a set of $n$ random points in $\mathbb{R}^d$, generated from a probability measure on an $m$-dimensional manifold $M \subset \mathbb{R}^d$. In this paper we study the homology of $U(P,r)$ -- the union of $d$-dimensional balls of radius $r$ around $P$ -- as $n \to \infty$ and $r \to 0$. In addition, we study the critical points of $d_P$ -- the distance function from the set $P$. These two objects are known to be related via Morse theory. We present limit theorems for the Betti numbers of $U(P,r)$, as well as for the number of critical points of index $k$ for $d_P$. Depending on how fast $r$ decays to zero as $n$ grows, these two objects exhibit different types of limiting behavior. In one particular case ($n r^m > C \log n$), we show that the Betti numbers of $U(P,r)$ perfectly recover the Betti numbers of the original manifold $M$, a result of significant interest in topological manifold learning.
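
    A minimal numerical sketch of the supercritical regime, not the paper's code: sample $n$ points from the unit circle ($m = 1$ manifold in $\mathbb{R}^2$), take $r$ on the order of $\log n / n$, and read off Betti numbers from a Vietoris-Rips persistence diagram computed with the `ripser` package. Rips at scale $2r$ is used here as a stand-in for the Cech complex of $U(P,r)$; the two only interleave, so this is a heuristic illustration. The constant in $r$ is an arbitrary choice.

```python
# Sketch: Betti-number recovery for points sampled from the circle S^1.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
n = 500
theta = rng.uniform(0.0, 2.0 * np.pi, n)
P = np.column_stack([np.cos(theta), np.sin(theta)])  # n points on S^1

r = 6.0 * np.log(n) / n          # supercritical regime: n * r^m > C log n
dgms = ripser(P, maxdim=1)['dgms']

for k, dgm in enumerate(dgms):
    # features alive at Rips scale 2r approximate beta_k of U(P, r)
    betti_k = np.sum((dgm[:, 0] <= 2.0 * r) & (dgm[:, 1] > 2.0 * r))
    print(f"beta_{k} estimate: {betti_k}")
# for suitable n and constant, this should report beta_0 = 1, beta_1 = 1,
# i.e., the Betti numbers of the circle itself.
```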

    Topological exploration of artificial neuronal network dynamics

    One of the paramount challenges in neuroscience is to understand the dynamics of individual neurons and how they give rise to network dynamics when interconnected. Historically, researchers have resorted to graph theory, statistics, and statistical mechanics to describe the spatiotemporal structure of such network dynamics. Our novel approach employs tools from algebraic topology to characterize the global properties of network structure and dynamics. We propose a method based on persistent homology to automatically classify network dynamics using topological features of spaces built from various spike-train distances. We investigate the efficacy of our method by simulating activity in three small artificial neural networks with different sets of parameters, giving rise to dynamics that can be classified into four regimes. We then compute three measures of spike train similarity and use persistent homology to extract topological features that are fundamentally different from those used in traditional methods. Our results show that a machine learning classifier trained on these features can accurately predict the regime of the network it was trained on and also generalize to other networks that were not presented during training. Moreover, we demonstrate that using features extracted from multiple spike-train distances systematically improves the performance of our method.
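
    A minimal sketch of this kind of pipeline, under our own assumptions rather than the paper's exact choices: compute a van Rossum-style distance matrix between spike trains, run persistent homology on it with `ripser`, and summarize each diagram by its total persistence. Feature vectors of this sort can then be fed to any standard classifier.

```python
import numpy as np
from ripser import ripser

def van_rossum_distance(s1, s2, tau=0.05, t_max=1.0, dt=0.001):
    """L2 distance between spike trains filtered with an exponential kernel."""
    t = np.arange(0.0, t_max, dt)
    def filtered(spikes):
        out = np.zeros_like(t)
        for s in spikes:
            out += np.where(t >= s, np.exp(-(t - s) / tau), 0.0)
        return out
    diff = filtered(s1) - filtered(s2)
    return np.sqrt(np.sum(diff ** 2) * dt / tau)

def topological_features(trains, maxdim=1):
    """Persistence-based summary features from a set of spike trains."""
    n = len(trains)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = van_rossum_distance(trains[i], trains[j])
    dgms = ripser(D, distance_matrix=True, maxdim=maxdim)['dgms']
    # total persistence per homology dimension (ignoring infinite bars)
    feats = []
    for dgm in dgms:
        finite = dgm[np.isfinite(dgm[:, 1])]
        feats.append(np.sum(finite[:, 1] - finite[:, 0]))
    return np.array(feats)

# toy usage: 20 Poisson spike trains on [0, 1]
rng = np.random.default_rng(1)
trains = [np.sort(rng.uniform(0.0, 1.0, rng.poisson(30))) for _ in range(20)]
print(topological_features(trains))
```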

    A Factor Graph Approach to Automated Design of Bayesian Signal Processing Algorithms

    The benefits of automating design cycles for Bayesian inference-based algorithms are becoming increasingly recognized by the machine learning community. As a result, interest in probabilistic programming frameworks has increased considerably over the past few years. This paper explores a specific probabilistic programming paradigm, namely message passing in Forney-style factor graphs (FFGs), in the context of automated design of efficient Bayesian signal processing algorithms. To this end, we developed "ForneyLab" (https://github.com/biaslab/ForneyLab.jl) as a Julia toolbox for message passing-based inference in FFGs. We show by example how ForneyLab enables automatic derivation of Bayesian signal processing algorithms, including algorithms for parameter estimation and model comparison. Crucially, due to the modular makeup of the FFG framework, both the model specification and the inference methods are readily extensible in ForneyLab. To test this framework, we compared variational message passing as implemented by ForneyLab with automatic differentiation variational inference (ADVI) and Monte Carlo methods as implemented by the state-of-the-art tools "Edward" and "Stan". In terms of performance, extensibility, and stability, ForneyLab appears to enjoy an edge over its competitors for automated inference in state-space models.
    Comment: Accepted for publication in the International Journal of Approximate Reasoning
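
    ForneyLab itself is a Julia toolbox, so rather than guess at its API, here is a language-agnostic Python illustration of the sum-product message passing idea that factor-graph toolboxes of this kind automate: a prior factor p(x1) and a conditional factor p(x2 | x1) on a two-variable discrete chain, with the marginal of x2 obtained by propagating the prior message through the factor node. All names and numbers are ours, for illustration only.

```python
import numpy as np

p_x1 = np.array([0.7, 0.3])              # prior factor -> message toward x1
p_x2_given_x1 = np.array([[0.9, 0.1],    # rows: states of x1, cols: states of x2
                          [0.2, 0.8]])

# sum-product message from the factor node p(x2|x1) toward x2:
#   mu(x2) = sum_{x1} p(x2|x1) * mu_prior(x1)
mu_x2 = p_x2_given_x1.T @ p_x1
marginal_x2 = mu_x2 / mu_x2.sum()        # normalize to obtain the marginal
print(marginal_x2)                        # [0.69, 0.31]
```

    Automated frameworks generalize exactly this step: given a model specification, they derive which messages to compute, in which order, and with which (possibly variational) update rules.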

    Fixed Effects and Variance Components Estimation in Three-Level Meta-Analysis

    Meta-analytic methods have been widely applied in education, medicine, and the social sciences. Much meta-analytic data is hierarchically structured, since effect size estimates are nested within studies, and studies in turn can be nested within level-3 units such as laboratories or investigators. Thus, multilevel models are a natural framework for analyzing meta-analytic data. This paper discusses the application of a Fisher scoring method to two- and three-level meta-analysis that takes into account random variation at both the second and third levels. The usefulness of the model is demonstrated using data that provide information about school calendar types. SAS PROC MIXED and HLM can be used to compute the estimates of the fixed effects and variance components.
    Keywords: meta-analysis, multilevel models, random effects
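
    A common way to write the three-level meta-analytic model described here, in our notation (the paper's symbols may differ): effect estimate $j$ in level-3 unit $k$ is modeled as

```latex
T_{jk} = \theta_{jk} + e_{jk}, \qquad e_{jk} \sim N(0, v_{jk}) \ \ (\text{known sampling variance}),
\qquad
\theta_{jk} = \mu_k + u_{jk}, \qquad u_{jk} \sim N(0, \tau^2),
\qquad
\mu_k = \gamma + w_k, \qquad w_k \sim N(0, \omega^2),
```

    so that $\tau^2$ captures between-estimate (level-2) variation within a unit and $\omega^2$ captures between-unit (level-3) variation. Fisher scoring then iterates Newton-type updates on the likelihood score equations for $(\gamma, \tau^2, \omega^2)$, using the expected information matrix.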

    Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    The primary objective of this research is to extend current capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analysis of high speed reacting flows. Our efforts in the first two years of this research were concentrated on a priori investigations of single-point Probability Density Function (PDF) methods for providing subgrid closures in reacting turbulent flows. In the efforts initiated in the third year, our primary focus has been on performing actual LES by means of PDF methods. The approach is based on assumed PDF methods, and we have performed extensive analysis of turbulent reacting flows by means of LES. This includes simulations of both three-dimensional (3D) isotropic compressible flows and two-dimensional reacting planar mixing layers. In addition to these LES analyses, work is in progress to assess the extent of validity of our assumed PDF methods. This assessment is done by making detailed comparisons with recent laboratory data in predicting the rate of reactant conversion in parallel reacting shear flows. This report provides a summary of our achievements for the first six months of the third year of this program.
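
    For readers unfamiliar with assumed-PDF closures, a common textbook form (not necessarily the exact formulation used in this program): the filtered reaction rate of a scalar $\psi$ is obtained by integrating the instantaneous rate against an assumed subgrid PDF, frequently a beta distribution matched by moments to the resolved mean $\tilde{\psi}$ and subgrid variance $\sigma_\psi^2$:

```latex
\overline{\dot{\omega}} = \int_0^1 \dot{\omega}(\psi)\, P(\psi)\, d\psi,
\qquad
P(\psi) = \frac{\psi^{\,a-1} (1-\psi)^{\,b-1}}{B(a,b)},
\qquad
a = \tilde{\psi}\!\left(\frac{\tilde{\psi}(1-\tilde{\psi})}{\sigma_\psi^2} - 1\right),
\quad
b = a\, \frac{1-\tilde{\psi}}{\tilde{\psi}} .
```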

    Probabilistically Accurate Program Transformations

    18th International Symposium, SAS 2011, Venice, Italy, September 14-16, 2011. Proceedings.
    The standard approach to program transformation involves the use of discrete logical reasoning to prove that the transformation does not change the observable semantics of the program. We propose a new approach that, in contrast, uses probabilistic reasoning to justify the application of transformations that may change, within probabilistic accuracy bounds, the result that the program produces. Our new approach produces probabilistic guarantees of the form ℙ(|D| ≥ B) ≤ ε, ε ∈ (0, 1), where D is the difference between the results that the transformed and original programs produce, B is an acceptability bound on the absolute value of D, and ε is the maximum acceptable probability of observing large |D|. We show how to use our approach to justify the application of loop perforation (which transforms loops to execute fewer iterations) to a set of computational patterns.
    Funding: National Science Foundation (U.S.) (Grants CCF-0811397, CCF-0905244, CCF-1036241, IIS-0835652); United States Dept. of Energy (Grant DE-SC0005288)
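
    A minimal illustration of the idea, not the paper's system: perforate a summation loop by skipping every other iteration, then empirically check a probabilistic accuracy bound of the form ℙ(|D| ≥ B) ≤ ε over random inputs. The computation, bound B, and number of trials are arbitrary choices made for this sketch.

```python
import numpy as np

def exact_mean(xs):
    # the original loop: visit every element
    return sum(xs) / len(xs)

def perforated_mean(xs, stride=2):
    # the perforated loop: execute only every `stride`-th iteration;
    # the mean of the subsample stands in for the mean of the full loop
    kept = xs[::stride]
    return sum(kept) / len(kept)

rng = np.random.default_rng(2)
B, trials = 0.05, 10_000
exceed = 0
for _ in range(trials):
    xs = rng.uniform(0.0, 1.0, 1000)
    D = perforated_mean(xs) - exact_mean(xs)
    exceed += abs(D) >= B
print(f"empirical P(|D| >= B) = {exceed / trials:.4f}")  # small for this B
```

    The paper's contribution is to derive such guarantees analytically for classes of computational patterns, rather than estimating them by sampling as done here.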