
    The importance of input variables to a neural network fault-diagnostic system for nuclear power plants

    This thesis explores safety enhancement for nuclear power plants. Emergency response systems currently in use depend mainly on automatic systems that engage when certain parameters exceed a pre-specified safety limit. Often the operator has little or no opportunity to react, since a fast scram signal shuts down the reactor smoothly and efficiently. These accidents are of interest to technical support personnel, since examining the conditions that gave rise to them helps determine causality. In many other cases, an automated fault-diagnostic advisor would be a valuable tool in helping technicians and operators determine what just happened and why.

    Towards a continuous dynamic model of the Hopfield theory on neuronal interaction and memory storage

    The purpose of this work is to study the Hopfield model of neuronal interaction and memory storage, in particular its convergence to the stored patterns. Since the hypothesis of symmetric synapses does not hold for the brain, we study how the model can be extended to asymmetric synapses using a probabilistic approach. We then focus on another feature of the memory process and the brain: oscillations. Using the Kuramoto model we are able to describe them completely, capturing the synchronization between neurons. Our aim is therefore to understand how and why neurons can be seen as oscillators, and to establish a strong link between this model and the Hopfield approach.
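The convergence to stored patterns described in this abstract can be illustrated with the classical discrete Hopfield dynamics: patterns are stored via the Hebbian rule with symmetric synapses, and asynchronous sign updates descend the network's energy until a stored pattern is recovered. This is a minimal sketch with hypothetical toy sizes (64 neurons, 2 patterns), not the thesis's actual continuous or asymmetric formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store two random ±1 patterns with the Hebbian rule (toy sizes, chosen for illustration).
n = 64
patterns = rng.choice([-1, 1], size=(2, n))
W = (patterns.T @ patterns) / n        # symmetric synapses, Hopfield's assumption
np.fill_diagonal(W, 0)                 # no self-coupling

# Corrupt a stored pattern, then run asynchronous updates: each neuron
# aligns with its local field, which monotonically lowers the energy.
state = patterns[0].copy()
state[:8] *= -1                        # flip the first 8 bits
for _ in range(5):
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

# For this small load (2 patterns, 64 neurons) the dynamics settle
# back onto the stored pattern.
print(np.array_equal(state, patterns[0]))
```

With only two patterns the crosstalk term is negligible, so every update pushes the state toward the nearest stored attractor; with asymmetric synapses this energy-descent argument no longer applies, which is what motivates the probabilistic treatment mentioned above.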

    VLSI neural networks for computer vision


    Augmenting Quantum Mechanics with Artificial Intelligence

    The simulation of quantum matter with classical hardware plays a central role in the discovery and development of quantum many-body systems, with far-reaching implications in condensed matter physics and quantum technologies. In general, efficient and sophisticated algorithms are required to overcome the severe challenge posed by the exponential scaling of the Hilbert space of quantum systems. In contrast, hardware built with quantum bits of information is inherently capable of efficiently finding solutions to quantum many-body problems. While a universal and scalable quantum computer is still beyond the horizon, recent advances in qubit manufacturing and coherent control of synthetic quantum matter are leading to a new generation of intermediate-scale quantum hardware. The complexity underlying quantum many-body systems closely resembles the one encountered in many problems in the world of information and technology. In both contexts, the complexity stems from a large number of interacting degrees of freedom. A powerful strategy in the latter scenario is machine learning, a subfield of artificial intelligence where large amounts of data are used to extract relevant features and patterns. In particular, artificial neural networks have been demonstrated to be capable of discovering low-dimensional representations of complex objects from high-dimensional datasets, leading to the profound technological revolution we all witness in our daily lives. In this Thesis, we envision a new paradigm for scientific discovery in quantum physics. On the one hand, we have the essentially unlimited data generated with the increasing amount of highly controllable quantum hardware. On the other hand, we have a set of powerful algorithms that efficiently capture non-trivial correlations from high-dimensional data.
Therefore, we fully embrace this data-driven approach to quantum mechanics, and anticipate new exciting possibilities in the field of quantum many-body physics and quantum information science. We revive a powerful stochastic neural network called a restricted Boltzmann machine, which slowly fell out of fashion after playing a central role in the machine learning revolution of the early 2010s. We introduce a neural-network representation of quantum states based on this generative model. We propose a set of algorithms to reconstruct unknown quantum states from measurement data and numerically demonstrate their potential, with important implications for current experiments. These include the reconstruction of experimentally inaccessible properties, such as entanglement, and diagnostics to determine sources of noise. Furthermore, we introduce a machine learning framework for quantum error correction, where a neural network learns the best decoding strategy directly from data. We expect that the full integration between quantum hardware and artificial intelligence will become the gold standard, and will drive the world into the era of fault-tolerant quantum computing and large-scale quantum simulations.
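The neural-network representation of quantum states mentioned here assigns an amplitude to each spin configuration via a restricted Boltzmann machine; because the RBM has no hidden-hidden couplings, the hidden units can be traced out in closed form, giving the standard amplitude ψ(σ) = exp(a·σ) ∏ⱼ 2 cosh(bⱼ + (σW)ⱼ). The sketch below uses hypothetical tiny sizes (4 visible spins, 8 hidden units) and random parameters purely to show the shape of the ansatz, not the thesis's trained models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes (illustrative only): 4 visible spins, 8 hidden units.
n_vis, n_hid = 4, 8
a = 0.01 * rng.standard_normal(n_vis)           # visible biases
b = 0.01 * rng.standard_normal(n_hid)           # hidden biases
W = 0.01 * rng.standard_normal((n_vis, n_hid))  # visible-hidden couplings

def psi(sigma):
    """Unnormalised RBM amplitude: hidden units summed out analytically."""
    return np.exp(a @ sigma) * np.prod(2 * np.cosh(b + sigma @ W))

# Enumerate all 2^4 spin configurations and normalise the Born probabilities.
configs = np.array([[1 if (s >> i) & 1 else -1 for i in range(n_vis)]
                    for s in range(2 ** n_vis)])
amps = np.array([psi(s) for s in configs])
probs = amps ** 2 / np.sum(amps ** 2)
print(np.isclose(probs.sum(), 1.0))
```

In state reconstruction the parameters (a, b, W) would be fitted so that these Born probabilities match the measurement statistics of the experiment; exhaustive enumeration is only feasible for toy systems, and sampling replaces it at scale.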

    First Annual Workshop on Space Operations Automation and Robotics (SOAR 87)

    Several topics related to automation and robotics technology are discussed. Automation of checkout, ground support, and logistics; automated software development; man-machine interfaces; neural networks; systems engineering and distributed/parallel processing architectures; and artificial intelligence/expert systems are among the topics covered.

    The 1991 3rd NASA Symposium on VLSI Design

    Papers from the symposium are presented from the following sessions: (1) featured presentations 1; (2) very large scale integration (VLSI) circuit design; (3) VLSI architecture 1; (4) featured presentations 2; (5) neural networks; (6) VLSI architectures 2; (7) featured presentations 3; (8) verification 1; (9) analog design; (10) verification 2; (11) design innovations 1; (12) asynchronous design; and (13) design innovations 2.

    Parallel simulation of neural networks on SpiNNaker universal neuromorphic hardware

    Artificial neural networks have shown great potential and have attracted much research interest. One problem faced when simulating such networks is speed. As the number of neurons increases, the time to simulate and train a network increases dramatically. This makes it difficult to simulate and train a large-scale network system without the support of a high-performance computer system. The solution we present is a "real" parallel system: using a parallel machine to simulate neural networks, which are intrinsically parallel applications. SpiNNaker is a scalable massively-parallel computing system under development with the aim of building a general-purpose platform for the parallel simulation of large-scale neural systems. This research investigates how to model large-scale neural networks efficiently on such a parallel machine. While providing increased overall computational power, a parallel architecture introduces a new problem: the increased communication reduces the speedup gains. Modeling schemes, which take into account communication, processing, and storage requirements, are investigated to solve this problem. Since modeling schemes are application-dependent, two different types of neural network are examined: spiking neural networks with spike-timing-dependent plasticity, and the parallel distributed processing model with the backpropagation learning rule. Different modeling schemes are developed and evaluated for the two types of neural network. The research shows the feasibility of the approach as well as the performance of SpiNNaker as a general-purpose platform for the simulation of neural networks. The linear scalability shown in this architecture provides a path to the further development of parallel solutions for the simulation of extremely large-scale neural networks.
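Of the two learning rules examined in this abstract, spike-timing-dependent plasticity is the one most specific to spiking simulation. A minimal pair-based STDP window can be sketched as below; the amplitudes and time constant are illustrative placeholders, not the parameters used on SpiNNaker.

```python
import numpy as np

# Pair-based STDP rule with an exponential window (illustrative parameters,
# not SpiNNaker's actual configuration).
A_plus, A_minus = 0.01, 0.012   # potentiation / depression amplitudes
tau = 20.0                      # window time constant, ms

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair."""
    dt = t_post - t_pre
    if dt >= 0:                          # pre fires before post: potentiate
        return A_plus * np.exp(-dt / tau)
    return -A_minus * np.exp(dt / tau)   # post fires before pre: depress

print(stdp_dw(10.0, 15.0) > 0)   # causal pair strengthens the synapse
print(stdp_dw(15.0, 10.0) < 0)   # anti-causal pair weakens it
```

Because each weight update depends only on relative spike times, the rule maps naturally onto an event-driven, distributed machine: updates can be computed locally wherever the spike events are delivered, which is exactly the communication-versus-computation trade-off the modeling schemes above address.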

    Design space exploration of associative memories using spiking neurons with respect to neuromorphic hardware implementations

    Stöckel A. Design space exploration of associative memories using spiking neurons with respect to neuromorphic hardware implementations. Bielefeld: Universität Bielefeld; 2016.
    Artificial neural networks are well-established models for key functions of biological brains, such as low-level sensory processing and memory. In particular, networks of artificial spiking neurons emulate the time dynamics, high parallelisation and asynchronicity of their biological counterparts. Large-scale hardware simulators for such networks, neuromorphic computers, are developed as part of the Human Brain Project, with the ultimate goal of gaining insights regarding the neural foundations of cognitive processes. In this thesis, we focus on one key cognitive function of biological brains: associative memory. We implement the well-understood Willshaw model for artificial spiking neural networks, thoroughly explore the design space for the implementation, provide fast design space exploration software, and evaluate our implementation in software simulation as well as on neuromorphic hardware. Thereby we provide an approach to manually or automatically infer viable parameters for an associative memory on different hardware and software platforms. The performance of the associative memory was found to vary significantly between individual neuromorphic hardware platforms and numerical simulations. The network is thus a suitable benchmark for neuromorphic systems.
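The Willshaw model this abstract builds on stores sparse binary pattern pairs in a clipped (binary) Hebbian weight matrix and recalls by thresholding dendritic sums at the number of active cue bits. This is a non-spiking sketch of that storage/recall scheme with hypothetical toy dimensions, not the spiking implementation evaluated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dimensions (illustrative): patterns of length 32 with 4 active bits each.
n, k, m_pairs = 32, 4, 5

def sparse_pattern():
    p = np.zeros(n, dtype=int)
    p[rng.choice(n, size=k, replace=False)] = 1
    return p

inputs = [sparse_pattern() for _ in range(m_pairs)]
outputs = [sparse_pattern() for _ in range(m_pairs)]

# Willshaw storage: clipped Hebbian learning, i.e. the binary OR of the
# outer products of all stored pairs.
M = np.zeros((n, n), dtype=int)
for x, y in zip(inputs, outputs):
    M |= np.outer(y, x)

# Recall: threshold the dendritic sums at the number of active cue bits.
x = inputs[0]
recalled = (M @ x >= k).astype(int)

# Every bit of the stored output is guaranteed to be recovered; extra
# spurious bits may appear as the memory fills up.
print(np.all(recalled >= outputs[0]))
```

The spurious-bit rate as a function of load is precisely what makes this network a useful benchmark: the same stored matrix can be run on different neuromorphic platforms and the recall quality compared against the numerical ideal.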

    Engineering derivatives from biological systems for advanced aerospace applications

    The present study consisted of a literature survey, a survey of researchers, and a workshop on bionics. These tasks produced an extensive annotated bibliography of bionics research (282 citations), a directory of bionics researchers, and a workshop report on specific bionics research topics applicable to space technology. These deliverables are included as Appendix A, Appendix B, and Section 5.0, respectively. To provide organization to this highly interdisciplinary field and to serve as a guide for interested researchers, we have also prepared a taxonomy, or classification, of the various subelements of natural engineering systems. Finally, we have synthesized the results of the various components of this study into a discussion of the most promising opportunities for accelerated research, seeking solutions which apply engineering principles from natural systems to advanced aerospace problems. A discussion of opportunities within the areas of materials, structures, sensors, information processing, robotics, autonomous systems, life support systems, and aeronautics is given. Following the conclusions are six discipline summaries that highlight the potential benefits of research in these areas for NASA's space technology programs.