44 research outputs found

    Simulation Intelligence: Towards a New Generation of Scientific Methods

    Full text link
    The original "Seven Motifs" set forth a roadmap of essential methods for the field of scientific computing, where a motif is an algorithmic method that captures a pattern of computation and data movement. We present the "Nine Motifs of Simulation Intelligence", a roadmap for the development and integration of the essential algorithms necessary for a merger of scientific computing, scientific simulation, and artificial intelligence. We call this merger simulation intelligence (SI), for short. We argue the motifs of simulation intelligence are interconnected and interdependent, much like the components within the layers of an operating system. Using this metaphor, we explore the nature of each layer of the simulation intelligence operating system stack (SI-stack) and the motifs therein: (1) Multi-physics and multi-scale modeling; (2) Surrogate modeling and emulation; (3) Simulation-based inference; (4) Causal modeling and inference; (5) Agent-based modeling; (6) Probabilistic programming; (7) Differentiable programming; (8) Open-ended optimization; (9) Machine programming. We believe coordinated efforts between motifs offers immense opportunity to accelerate scientific discovery, from solving inverse problems in synthetic biology and climate science, to directing nuclear energy experiments and predicting emergent behavior in socioeconomic settings. We elaborate on each layer of the SI-stack, detailing the state-of-art methods, presenting examples to highlight challenges and opportunities, and advocating for specific ways to advance the motifs and the synergies from their combinations. Advancing and integrating these technologies can enable a robust and efficient hypothesis-simulation-analysis type of scientific method, which we introduce with several use-cases for human-machine teaming and automated science

    Generative models, linguistic communication and active inference

    Get PDF
    This paper presents a biologically plausible generative model and inference scheme that is capable of simulating communication between synthetic subjects who talk to each other. Building on active inference formulations of dyadic interactions, we simulate linguistic exchange to explore generative models that support dialogues. These models employ high-order interactions among abstract (discrete) states in deep (hierarchical) models. The sequential nature of language processing mandates generative models with a particular factorial structure, necessary to accommodate the rich combinatorics of language. We illustrate linguistic communication by simulating a synthetic subject who can play the ‘Twenty Questions’ game. In this game, synthetic subjects take the role of the questioner or answerer, using the same generative model. This simulation setup is used to illustrate some key architectural points and demonstrate that many behavioural and neurophysiological correlates of linguistic communication emerge under variational (marginal) message passing, given the right kind of generative model. For example, we show that theta-gamma coupling is an emergent property of belief updating when listening to another.
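    A minimal sketch of the kind of discrete-state belief updating that underlies such models, in a toy ‘Twenty Questions’ setting; the candidate states, questions, and likelihood values below are invented for illustration and are not the paper's generative model:

    ```python
    import numpy as np

    # Candidate hidden states (the object the answerer has in mind).
    states = ["cat", "car", "tree", "phone"]

    # Likelihood of a "yes" answer to each question, per state (illustrative).
    p_yes = {
        "Is it alive?":    np.array([0.95, 0.02, 0.90, 0.02]),
        "Is it man-made?": np.array([0.02, 0.97, 0.03, 0.97]),
        "Can it move?":    np.array([0.95, 0.90, 0.05, 0.10]),
    }

    def update_belief(prior, question, answer_is_yes):
        """One step of Bayesian belief updating: posterior ∝ likelihood × prior."""
        likelihood = p_yes[question] if answer_is_yes else 1.0 - p_yes[question]
        posterior = likelihood * prior
        return posterior / posterior.sum()

    belief = np.full(len(states), 1.0 / len(states))  # flat prior over objects
    for question, answered_yes in [("Is it alive?", False), ("Can it move?", True)]:
        belief = update_belief(belief, question, answered_yes)
    print(dict(zip(states, np.round(belief, 3))))  # "car" now dominates
    ```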

    Neuromorphic Engineering Editors' Pick 2021

    Get PDF
    This collection showcases well-received spontaneous articles from the past couple of years, which have been specially handpicked by our Chief Editors, Profs. André van Schaik and Bernabé Linares-Barranco. The work presented here highlights the broad diversity of research performed across the section and aims to put a spotlight on the main areas of interest. All research presented here displays strong advances in theory, experiment, and methodology with applications to compelling problems. This collection aims to further support Frontiers’ strong community by recognizing highly deserving authors.

    High speed event-based visual processing in the presence of noise

    Get PDF
    Standard machine vision approaches are challenged in applications where large amounts of noisy temporal data must be processed in real time. This work aims to develop neuromorphic event-based processing systems for such challenging, high-noise environments. The novel, application-focused event-based algorithms developed are designed primarily for implementation in digital neuromorphic hardware, with a focus on noise robustness, ease of implementation, operationally useful ancillary signals, and processing speed in embedded systems.
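    As a concrete example of noise-robust event-based processing, here is a minimal sketch of a spatiotemporal correlation filter of the kind commonly used to suppress background-activity noise in event streams; the event format, sensor size, and time window are assumptions for illustration, not the thesis's specific algorithms:

    ```python
    import numpy as np

    def filter_events(events, width, height, dt_us=5000):
        """Keep an event only if a neighboring pixel fired within dt_us
        microseconds; isolated events are treated as noise.

        events: array of (t_us, x, y, polarity) rows, sorted by time.
        """
        last_time = np.full((height, width), -np.inf)  # last event time per pixel
        kept = []
        for t, x, y, p in events:
            x, y = int(x), int(y)
            # Look at the 3x3 neighborhood for recent activity.
            x0, x1 = max(x - 1, 0), min(x + 2, width)
            y0, y1 = max(y - 1, 0), min(y + 2, height)
            if (t - last_time[y0:y1, x0:x1] <= dt_us).any():
                kept.append((t, x, y, p))
            last_time[y, x] = t
        return np.array(kept)

    # Example: three spatiotemporally correlated events and one isolated
    # (noise) event; only the correlated ones survive the filter.
    evts = np.array([(100, 10, 10, 1), (150, 11, 10, 1),
                     (200, 10, 11, 1), (90000, 50, 50, 1)], dtype=float)
    print(filter_events(evts, width=64, height=64))
    ```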

    Nonparametric Bayesian methods in robotic vision

    Get PDF
    In this dissertation, non-parametric Bayesian methods are applied to robotic vision. Robots make use of depth sensors that represent their environment as point clouds. Non-parametric Bayesian methods can (1) determine how well an object is recognized, and (2) determine how many objects a particular scene contains. When a model is available for the object to be recognized and the nature of the perceptual error is known, a Bayesian method will act optimally.

    Bayesian models are developed to represent geometric objects such as lines and line segments (consisting of points). The infinite line model and the infinite line segment model use a non-parametric Bayesian model, specifically a Dirichlet process, to represent the number of objects. Each line or line segment is represented by a probability distribution. Lines can be represented by conjugate distributions, in which case Gibbs sampling can be used. Line segments are not represented by conjugate distributions, so a split-merge sampler is used instead.

    A split-merge sampler fits line segments by assigning points to a hypothetical line segment, then proposing splits of a single line segment or merges of two line segments. A new sampler, the triadic split-merge sampler, introduces steps that involve three line segments. In this dissertation, the new sampler is compared to a conventional split-merge sampler. The triadic sampler can also be applied to problems beyond robotic perception.

    The models for objects can also be learned. In the dissertation this is done for more complex objects, such as cubes, built up out of hundreds of points. An auto-encoder learns to generate a representative object given the data, using a newly defined reconstruction distance called the partitioning earth mover’s distance. The object learned by the auto-encoder is then used in a triadic sampler to (1) identify the point cloud objects and (2) establish multiple occurrences of those objects in the point cloud.
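    A minimal sketch of the Dirichlet-process idea behind the infinite line model, using a Chinese restaurant process prior to sample cluster assignments; the concentration parameter is illustrative, and this omits the line likelihoods and the (triadic) split-merge moves developed in the dissertation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def crp_assignments(n_points, alpha=1.0):
        """Sample cluster labels from a Chinese restaurant process:
        each point joins an existing cluster with probability proportional
        to its size, or opens a new cluster with probability prop. to alpha,
        so the number of clusters need not be fixed in advance."""
        labels = [0]
        counts = [1]
        for _ in range(1, n_points):
            probs = np.array(counts + [alpha], dtype=float)
            probs /= probs.sum()
            k = rng.choice(len(probs), p=probs)
            if k == len(counts):   # open a new cluster
                counts.append(1)
            else:                  # join existing cluster k
                counts[k] += 1
            labels.append(k)
        return labels

    labels = crp_assignments(20, alpha=1.5)
    print("clusters induced:", len(set(labels)))
    print(labels)
    ```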

    On the use of deep learning for phase recovery

    Full text link
    Phase recovery (PR) refers to calculating the phase of the light field from its intensity measurements. In applications ranging from quantitative phase imaging and coherent diffraction imaging to adaptive optics, PR is essential for reconstructing the refractive index distribution or topography of an object and correcting the aberration of an imaging system. In recent years, deep learning (DL), often implemented through deep neural networks, has provided unprecedented support for computational imaging, leading to more efficient solutions for various PR problems. In this review, we first briefly introduce conventional methods for PR. Then, we review how DL provides support for PR at the following three stages, namely, pre-processing, in-processing, and post-processing. We also review how DL is used in phase image processing. Finally, we summarize the work in DL for PR and outline how to better use DL to improve the reliability and efficiency of PR. Furthermore, we present a live-updating resource (https://github.com/kqwang/phase-recovery) for readers to learn more about PR. (Comment: 82 pages, 32 figures)
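    As an example of the conventional (non-learned) methods the review starts from, here is a minimal Gerchberg–Saxton-style sketch that recovers phase from object-plane and Fourier-plane amplitude measurements; the synthetic test object and iteration count are assumptions for illustration:

    ```python
    import numpy as np

    def gerchberg_saxton(amp_obj, amp_fourier, n_iter=200):
        """Alternate between object and Fourier planes, enforcing the
        measured amplitude in each plane while keeping the evolving phase."""
        field = amp_obj * np.exp(1j * np.zeros_like(amp_obj))  # flat initial phase
        for _ in range(n_iter):
            F = np.fft.fft2(field)
            F = amp_fourier * np.exp(1j * np.angle(F))      # Fourier-plane constraint
            field = np.fft.ifft2(F)
            field = amp_obj * np.exp(1j * np.angle(field))  # object-plane constraint
        return np.angle(field)

    # Synthetic ground truth: known amplitude, unknown smooth phase.
    n = 64
    y, x = np.mgrid[:n, :n]
    true_phase = 0.5 * np.sin(2 * np.pi * x / n) * np.cos(2 * np.pi * y / n)
    amp_obj = np.ones((n, n))
    amp_fourier = np.abs(np.fft.fft2(amp_obj * np.exp(1j * true_phase)))

    recovered = gerchberg_saxton(amp_obj, amp_fourier)
    diff = recovered - true_phase
    print("phase error (rad, up to a global offset):", np.std(diff - diff.mean()))
    ```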

    An Extreme Value Theory Model of Cross-Modal Sensory Information Integration in Modulation of Vertebrate Visual System Functions

    Get PDF
    We propose a computational model of vision that describes the integration of cross-modal sensory information between the olfactory and visual systems in zebrafish based on the principles of the statistical extreme value theory. The integration of olfacto-retinal information is mediated by the centrifugal pathway that originates from the olfactory bulb and terminates in the neural retina. Motivation for using extreme value theory stems from physiological evidence suggesting that extremes, and not the mean, of the cell responses direct cellular activity in the vertebrate brain. We argue that the visual system, as measured by retinal ganglion cell responses in spikes/sec, follows an extreme value process for sensory integration, and that the increase in visual sensitivity from the olfactory input can be better modeled using extreme value distributions. As zebrafish maintain high evolutionary proximity to mammals, our model can be extended to other vertebrates as well.
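    A minimal sketch of the extreme-value machinery such a model rests on, fitting a generalized extreme value (GEV) distribution to per-trial maxima of simulated firing rates (spikes/sec); the synthetic data and the use of scipy's GEV fit are my illustration, not the authors' code:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Synthetic stand-in for retinal ganglion cell responses (spikes/sec):
    # take the maximum response across a population per trial, since extreme
    # value theory describes the distribution of such block maxima.
    population = rng.gamma(shape=2.0, scale=10.0, size=(1000, 50))
    trial_maxima = population.max(axis=1)

    # Fit a generalized extreme value (GEV) distribution to the maxima.
    c, loc, scale = stats.genextreme.fit(trial_maxima)
    print(f"GEV fit: shape={c:.3f}, loc={loc:.2f}, scale={scale:.2f}")

    # A shift in loc/scale under olfactory input would model the increase
    # in visual sensitivity described in the paper.
    p_exceed = stats.genextreme.sf(100.0, c, loc=loc, scale=scale)
    print(f"P(max response > 100 spikes/sec) = {p_exceed:.4f}")
    ```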