
    On microelectronic self-learning cognitive chip systems

    After a brief review of machine learning techniques and applications, this Ph.D. thesis examines several approaches for implementing machine learning architectures and algorithms in hardware within our laboratory. This interdisciplinary background motivates novel approaches that we pursue toward innovative hardware implementations of dynamically self-reconfigurable logic for self-adaptive, self-(re)organizing, and eventually self-assembling machine learning systems, while developing this new area of research. After reviewing relevant background on robotic control methods and the most recent advanced cognitive controllers, the thesis argues that, among the many well-known ways of designing operational technologies, the design methodologies for leading-edge high-tech devices such as cognitive chips, which may well lead to intelligent machines exhibiting conscious phenomena, should be restricted to extremely well-defined constraints. Roboticists also need these constraints as specifications to help decide upfront on otherwise infinitely free hardware/software design details. Most importantly, we propose these specifications as methodological guidelines tightly related to ethics and to the now well-identified workings of the human body and its psyche.

    Analytical Modeling of a Communication Channel Based on Subthreshold Stimulation of Neurobiological Networks

    The emergence of wearable and implantable machines, manufactured artificially or synthesized biologically, opens up a new horizon for patient-centered health services such as medical treatment, health monitoring, and rehabilitation, with minimized costs and maximized reach when provided remotely via the Internet. In particular, a swarm of machines at the scale of a single cell down to the nanoscale can be deployed in the body by non-invasive or minimally invasive operations (e.g., swallowing and injection, respectively) to perform various tasks. However, an individual machine is only able to perform basic tasks, so it needs to exchange data with the others and with the outside world through an efficient and reliable communication infrastructure to coordinate and aggregate their functionalities. We introduce in this thesis Neuronal Communication (NC) as a novel paradigm for utilizing the nervous system in vivo as a communication medium to transmit artificial data across the body. NC features body-wide communication coverage while demanding zero investment in infrastructure, relying on no external energy source, and exposing the body to zero electromagnetic radiation. In addition, unlike many conventional body area networking techniques, NC is able to provide communication among manufactured electronic machines and biologically engineered ones at the same time. We provide a detailed discussion of the theoretical and practical aspects of designing and implementing distinct paradigms of NC. We also discuss NC's future perspectives and open challenges. Adviser: Massimiliano Pierobo

    A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning

    Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which neurons are randomly connected. Once initialized, the connection strengths remain unchanged. Such a simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since it has been shown that its complex dynamics can be realized in various physical hardware implementations and biological devices. This yields greater flexibility and shorter computation time. Moreover, the neuronal responses triggered by the model's dynamics shed light on brain mechanisms that exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments from machine learning to physics, biology, and neuroscience. We first review the early RC models, and then survey the state-of-the-art models and their applications. We further introduce studies on modeling the brain's mechanisms with RC. Finally, we offer new perspectives on RC development, including reservoir design, unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution. (51 pages, 19 figures; IEEE Access)
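The fixed-random-reservoir-plus-trained-readout recipe summarized above can be sketched as a minimal echo state network. The sizes, scaling constants, and toy sine-prediction task below are illustrative assumptions, not details from the survey:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; all weights below are fixed after initialization.
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # random input weights, never trained
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # random recurrent weights, never trained
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # scale spectral radius below 1 for stability

def run_reservoir(u):
    """Drive the reservoir with input sequence u; collect high-dimensional states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)  # non-linear state update
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a sine wave.
t = np.linspace(0, 8 * np.pi, 400)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)

# Only the linear readout is trained (here via ridge regression).
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out
print(f"train MSE: {np.mean((pred - y) ** 2):.2e}")
```

Note that training touches only `W_out`; this is what makes RC amenable to physical substrates, where the reservoir is a fixed dynamical system and only the readout must be adjustable.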

    Fast and Accurate Deep Learning Framework for Secure Fault Diagnosis in the Industrial Internet of Things

    This paper introduces a new deep learning framework for fault diagnosis in electrical power systems. The framework integrates a convolutional neural network and different regression models to visually identify which faults have occurred in electric power systems. The approach includes three main steps: data preparation, object detection, and hyper-parameter optimization. Inspired by deep learning and evolutionary computation techniques, different strategies are proposed for each step of the process. In addition, we propose a new hyper-parameter optimization model based on evolutionary computation that can be used to tune the parameters of our deep learning framework. To validate the framework's usefulness, an experimental evaluation is carried out on the well-known and challenging VOC 2012 and COCO datasets and the large NESTA 162-bus system. The results show that our proposed approach significantly outperforms most of the existing solutions in terms of runtime and accuracy.
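The abstract does not spell out the evolutionary hyper-parameter search, but the general technique it names can be sketched as a simple population-based loop. The search space, mutation scheme, and surrogate fitness function below are hypothetical stand-ins for the paper's validation-accuracy objective:

```python
import random

random.seed(0)

# Hypothetical search space for two training hyper-parameters.
SPACE = {"learning_rate": (1e-4, 1e-1), "dropout": (0.0, 0.6)}

def sample():
    """Draw a random candidate uniformly from the search space."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in SPACE.items()}

def mutate(params, scale=0.2):
    """Gaussian perturbation of each hyper-parameter, clipped back into bounds."""
    child = {}
    for k, (lo, hi) in SPACE.items():
        v = params[k] + random.gauss(0, scale * (hi - lo))
        child[k] = min(max(v, lo), hi)
    return child

def fitness(params):
    # Stand-in for the validation accuracy of the trained detector;
    # this toy surrogate peaks at learning_rate=0.01, dropout=0.3.
    return -((params["learning_rate"] - 0.01) ** 2 + (params["dropout"] - 0.3) ** 2)

def evolve(pop_size=10, generations=30):
    """Truncation selection: keep the top half, refill with mutated survivors."""
    pop = [sample() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
print(best)
```

In a real pipeline the surrogate `fitness` would be replaced by training the detector with the candidate hyper-parameters and measuring held-out accuracy, which is what makes evolutionary tuning expensive but derivative-free.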

    2022 roadmap on neuromorphic computing and engineering

    Modern computation based on the von Neumann architecture is now a mature, cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously, and this data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale, with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann-type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic, physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to store and process large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives in which leading researchers in the neuromorphic community give their own view of the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those who are just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.
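The exascale figures quoted above imply a simple energy budget per operation, which can be checked with a back-of-envelope calculation (using the lower end of the quoted power range):

```python
# Energy per operation implied by the roadmap's exascale figures.
ops_per_second = 1e18   # exascale: 10^18 calculations each second
power_watts = 20e6      # lower end of the quoted 20-30 MW range

joules_per_op = power_watts / ops_per_second
print(f"{joules_per_op * 1e12:.0f} pJ per operation")  # prints "20 pJ per operation"
```

Tens of picojoules per operation is the budget neuromorphic hardware aims to undercut by collocating memory and processing, since the von Neumann data transfer identified above dominates that cost.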

    Building Machines That Learn and Think Like People

    Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end on tasks such as object recognition, video games, and board games, achieving performance that equals or even beats that of humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes toward these goals that can combine the strengths of recent neural network advances with more structured cognitive models. (In press at Behavioral and Brain Sciences; open call for commentary proposals until Nov. 22, 2016: https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar)