
    Implementing Bayesian Networks with Embedded Stochastic MRAM

    Magnetic tunnel junctions (MTJs) with low-barrier magnets have been used to implement random number generators (RNGs), and it has recently been shown that such an MTJ connected to the drain of a conventional transistor provides a three-terminal tunable RNG, or p-bit. In this letter we show how this p-bit can be used to build a p-circuit that emulates a Bayesian network (BN), such that the correlations among real-world variables can be obtained from electrical measurements on the corresponding circuit nodes. The p-circuit design proceeds in two steps: the BN is first translated into a behavioral model, called Probabilistic Spin Logic (PSL), defined by dimensionless biasing (h) and interconnection (J) coefficients, which are then translated into electronic circuit elements. As a benchmark example, we mimic a family tree of three generations and show that the genetic relatedness calculated from a SPICE-compatible circuit simulator matches well-known results.
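
    As a rough illustration of how such a p-circuit samples a small Bayesian network, the sketch below uses the standard PSL p-bit update m = sgn(tanh(I) - r), with r drawn uniformly from [-1, 1], on a hypothetical two-node parent/child network; the h and J values are illustrative placeholders, not coefficients from the paper.

    ```python
    import numpy as np

    # Minimal sketch of a two-p-bit "p-circuit"; all parameter values are assumed.
    rng = np.random.default_rng(0)

    def p_bit(I):
        """Binary (+1/-1) output whose long-run average approaches tanh(I)."""
        return 1.0 if np.tanh(I) > rng.uniform(-1.0, 1.0) else -1.0

    h_parent, h_child, J = 0.5, -0.2, 1.0   # assumed dimensionless PSL coefficients

    parent_samples, child_samples = [], []
    for _ in range(100_000):
        m_parent = p_bit(h_parent)               # parent updated first (directed BN order)
        m_child = p_bit(h_child + J * m_parent)  # child biased by its parent through J
        parent_samples.append(m_parent)
        child_samples.append(m_child)

    # Correlations such as <m_parent * m_child>, read from the sampled states, play
    # the role of the relatedness values measured electrically in the paper.
    print(np.mean(np.multiply(parent_samples, child_samples)))
    ```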

    A bio-plausible design for visual attitude stabilization

    We consider the problem of attitude stabilization using exclusively visual sensory input, and we look for a solution that satisfies the constraints of a "bio-plausible" computation. We obtain a PD controller that is a bilinear form of the goal image and of the current and delayed visual input. Moreover, this controller can be learned using classic neural network algorithms. The structure of the resulting computation, derived from general principles by imposing a bilinear computation, bears a striking resemblance to existing models of visual information processing in insects (Reichardt correlators and lobula plate tangential cells). We validate the algorithms using faithful simulations of the fruit fly's visual input.
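
    The bilinear structure can be made concrete with a small sketch. Assuming the command takes the form u = y_goal^T A y(t) + y_goal^T B y(t - tau), with one matrix acting on the current image and one on a delayed copy, the snippet below evaluates such a controller; the matrices A and B are random placeholders standing in for weights that would be learned from data.

    ```python
    import numpy as np

    # Bilinear PD-style visual controller: u = y_goal^T A y(t) + y_goal^T B y(t - tau).
    rng = np.random.default_rng(1)

    n_pixels = 64
    A = rng.normal(scale=1.0 / n_pixels, size=(n_pixels, n_pixels))  # "proportional" weights (assumed)
    B = rng.normal(scale=1.0 / n_pixels, size=(n_pixels, n_pixels))  # "derivative" weights (assumed)

    def control(y_goal, y_now, y_delayed):
        """Scalar command computed as a bilinear form of goal, current, and delayed images."""
        return y_goal @ A @ y_now + y_goal @ B @ y_delayed

    y_goal = rng.random(n_pixels)      # stored goal image (flattened)
    y_now = rng.random(n_pixels)       # current visual input
    y_delayed = rng.random(n_pixels)   # visual input delayed by one sampling step
    print(control(y_goal, y_now, y_delayed))
    ```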

    Algorithmic Foundations of Inexact Computing

    Inexact computing, also referred to as approximate computing, is a style of designing algorithms and computing systems wherein the accuracy or correctness of the algorithms executing on them is deliberately traded for significant resource savings. Significant progress has been reported in this regard, both in hardware and in software or custom algorithms that exploit this approach, incurring some loss in solution quality (accuracy) while garnering disproportionately high savings. However, these approaches tended to be ad hoc and tied to specific algorithms and technologies. Consequently, a principled approach to designing and analyzing inexact algorithms was lacking. In this paper, we provide a novel model that allows us to characterize the behavior of algorithms designed to be inexact, as well as the opportunities and benefits that this approach offers. Our methods are therefore amenable to standard asymptotic analysis and provide a clean, unified abstraction through which an algorithm's design and analysis can be conducted. With this as a backdrop, we show that inexactness can be significantly beneficial for some fundamental problems, in that the quality of a solution can be exponentially better if one exploits inexactness than under approaches that are agnostic to it and unable to exploit it. We show that such gains are possible in the context of evaluating Boolean functions, rooted in the theory of Boolean functions and their spectra, as well as PAC learning and sorting. Formally, this is accomplished by introducing the twin concepts of inexactness-aware and inexactness-oblivious approaches to designing algorithms; the exponential gains are established by taking the ratio of the solution quality achieved by the "aware" approach to that achieved by the "oblivious" approach.
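
    A toy Monte Carlo comparison may help fix the aware/oblivious distinction, under assumptions of our own choosing rather than the paper's formal model: each bit read costs energy, fails with probability exp(-energy), and the Boolean function is the dictator f(x) = x_0, so only the first bit has any influence.

    ```python
    import numpy as np

    # Aware vs. oblivious allocation of a fixed energy budget across noisy bit reads.
    rng = np.random.default_rng(2)
    n_bits, budget, trials = 8, 8.0, 100_000

    def error_rate(energies):
        """Fraction of trials in which noisy reads change f(x) = x_0."""
        flip_prob = np.exp(-np.asarray(energies))          # assumed energy/error model
        x = rng.integers(0, 2, size=(trials, n_bits))
        flips = (rng.random((trials, n_bits)) < flip_prob).astype(x.dtype)
        noisy = x ^ flips
        return np.mean(noisy[:, 0] != x[:, 0])

    oblivious = np.full(n_bits, budget / n_bits)   # spread the budget uniformly
    aware = np.zeros(n_bits)
    aware[0] = budget                              # concentrate it on the influential bit
    print("oblivious error:", error_rate(oblivious))
    print("aware error:    ", error_rate(aware))
    ```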

    Random Neural Networks and Optimisation

    In this thesis we introduce new models and learning algorithms for the Random Neural Network (RNN), and we develop RNN-based and other approaches for the solution of emergency management optimisation problems. With respect to RNN developments, two novel supervised learning algorithms are proposed. The first is a gradient-descent algorithm for an RNN extension model that we have introduced, the RNN with synchronised interactions (RNNSI), which was inspired by the synchronised firing activity observed in brain neural circuits. The second algorithm is based on modelling the signal-flow equations of the RNN as a nonnegative least squares (NNLS) problem. NNLS is solved using a limited-memory quasi-Newton algorithm specifically designed for the RNN case. Regarding the investigation of emergency management optimisation problems, we examine combinatorial assignment problems that require fast, distributed, and close-to-optimal solutions under information uncertainty. We consider three different problems with the above characteristics, associated with the assignment of emergency units to incidents with injured civilians (AEUI), the assignment of assets to tasks under execution uncertainty (ATAU), and the deployment of a robotic network to establish communication with trapped civilians (DRNCTC). AEUI is solved by training an RNN tool with instances of the optimisation problem and then using the trained RNN for decision making; training is achieved using the developed learning algorithms. For the solution of the ATAU problem, we introduce two different approaches. The first is based on mapping parameters of the optimisation problem to RNN parameters, and the second on solving a sequence of minimum-cost flow problems on appropriately constructed networks with estimated arc costs. For the exact solution of the DRNCTC problem, we develop a mixed-integer linear programming formulation based on network flows. Finally, we design and implement distributed heuristic algorithms for the deployment of robots when the civilian locations are known or uncertain.
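
    For readers unfamiliar with the RNN, the signal-flow equations referred to above are the steady-state conditions q_i = lambda_i^+ / (r_i + lambda_i^-), where the excitatory and inhibitory arrival rates themselves depend on the other neurons' excitation probabilities q_j. The sketch below solves them by plain fixed-point iteration on randomly chosen illustrative weights; the thesis instead recasts them as an NNLS problem solved with a limited-memory quasi-Newton method.

    ```python
    import numpy as np

    # Gelenbe-style RNN steady-state equations solved by fixed-point iteration.
    # Weights and exogenous rates are illustrative placeholders.
    rng = np.random.default_rng(3)

    n = 5
    W_plus = rng.random((n, n)) * 0.3     # excitatory weights w_ij^+ (assumed)
    W_minus = rng.random((n, n)) * 0.3    # inhibitory weights w_ij^- (assumed)
    Lambda = rng.random(n)                # exogenous excitatory arrival rates
    lam = rng.random(n) * 0.1             # exogenous inhibitory arrival rates
    r = W_plus.sum(axis=1) + W_minus.sum(axis=1)   # firing rate of each neuron

    q = np.zeros(n)
    for _ in range(200):
        lam_plus = Lambda + W_plus.T @ q           # total excitatory signal into each neuron
        lam_minus = lam + W_minus.T @ q            # total inhibitory signal into each neuron
        q = np.minimum(lam_plus / (r + lam_minus), 1.0)

    print(q)   # steady-state excitation probabilities
    ```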

    Best-Effort Communication Improves Performance and Scales Robustly on Conventional Hardware

    Here, we test the performance and scalability of fully asynchronous, best-effort communication on existing, commercially available HPC hardware. A first set of experiments tested whether best-effort communication strategies can benefit performance compared to the traditional perfect communication model. At high CPU counts, best-effort communication improved both the number of computational steps executed per unit time and the solution quality achieved within a fixed-duration run window. Under the best-effort model, characterizing the distribution of quality of service across processing components and over time is critical to understanding the actual computation being performed. Additionally, a complete picture of scalability under the best-effort model requires analysis of how such quality of service fares at scale. To answer these questions, we designed and measured a suite of quality-of-service metrics: simulation update period, message latency, message delivery failure rate, and message delivery coagulation. Under a lower communication-intensity benchmark parameterization, we found that median values for all quality-of-service metrics were stable when scaling from 64 to 256 processes. Under maximal communication intensity, we found only minor degradation in median quality of service, and in most cases none at all. In an additional set of experiments, we tested the effect of an apparently faulty compute node on performance and quality of service. Despite extreme quality-of-service degradation on that node and its clique, median performance and quality of service remained stable.
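
    The best-effort model can be illustrated with a toy channel in which the receiver never waits for the current step's message but simply reuses the freshest one available, while simple quality-of-service measures (delivery failure rate and message latency in simulation steps) are logged. The drop probability and buffer size below are illustrative assumptions, not measured values from the study.

    ```python
    import collections
    import random

    # Best-effort channel: the receiver uses the freshest message available and
    # never waits. Drop probability and buffer size are illustrative assumptions.
    random.seed(4)
    DROP_PROB, STEPS = 0.3, 1_000
    channel = collections.deque(maxlen=1)   # only the newest in-flight message survives

    latencies, dropped, latest = [], 0, None
    for step in range(STEPS):
        # Sender side: this step's message may be lost in transit.
        if random.random() < DROP_PROB:
            dropped += 1                    # message lost; receiver will reuse a stale one
        else:
            channel.append(step)            # message tagged with the step that produced it
        # Receiver side: proceed immediately with whatever has arrived.
        if channel:
            latest = channel.popleft()
        if latest is not None:
            latencies.append(step - latest) # 0 = fresh, >0 = stale (reused) message

    print("delivery failure rate:", dropped / STEPS)
    print("mean message latency (steps):", sum(latencies) / len(latencies))
    ```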