437 research outputs found

    Emergent Computations in Trained Artificial Neural Networks and Real Brains

    Synaptic plasticity allows cortical circuits to learn new tasks and to adapt to changing environments. How do cortical circuits use plasticity to acquire functions such as decision-making or working memory? Neurons are connected in complex ways, forming recurrent neural networks, and learning modifies the strength of their connections. Moreover, neurons communicate by emitting brief, discrete electrical signals. Here we describe how to train recurrent neural networks on tasks like those used to train animals in neuroscience laboratories, and how computations emerge in the trained networks. Surprisingly, artificial networks and real brains can use similar computational strategies.
    Comment: International Summer School on Intelligent Signal Processing for Frontier Research and Industry, INFIERI 2021, Universidad Autónoma de Madrid, Madrid, Spain, 23 August - 4 September 2021
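    As a rough, hedged illustration of the kind of training pipeline referred to above (not the authors' code), the sketch below trains a small rate-based recurrent network with PyTorch on a simple perceptual decision-making task: integrate a noisy stimulus over a trial and report its sign at the end. Network size, trial statistics and optimizer settings are illustrative assumptions; once trained, the recurrent activity can be analysed much like neural recordings.

```python
# Minimal sketch (illustrative, not the paper's code): train a rate RNN on a
# noisy evidence-integration task resembling laboratory decision-making tasks.
import torch
import torch.nn as nn

class RateRNN(nn.Module):
    def __init__(self, n_in=1, n_rec=64, n_out=2):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_rec, nonlinearity="tanh", batch_first=True)
        self.readout = nn.Linear(n_rec, n_out)

    def forward(self, x):
        h, _ = self.rnn(x)              # recurrent activity, (batch, time, n_rec)
        return self.readout(h[:, -1])   # decision read out at the end of the trial

def make_trials(batch=128, steps=50, coherence=0.2, noise=1.0):
    sign = torch.randint(0, 2, (batch,)) * 2 - 1                 # evidence direction, +1 or -1
    x = coherence * sign.float().view(batch, 1, 1) + noise * torch.randn(batch, steps, 1)
    y = (sign > 0).long()                                         # correct choice
    return x, y

model = RateRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    x, y = make_trials()
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 500 == 0:
        accuracy = (model(x).argmax(dim=1) == y).float().mean().item()
        print(f"step {step}: loss {loss.item():.3f}, accuracy {accuracy:.2f}")
```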

    Design and computational aspects of compliant tensegrity robots


    In-materio reservoir computing devices based on single-walled carbon nanotube/porphyrin-polyoxometalate random networks: a new approach to next-generation machine intelligence

    In layman's terms, computation is the execution of a given instruction through a programmable algorithm. From the simplest calculator to the sophisticated von Neumann machine, this definition has been followed without fail: logical operations that take a human a minute to solve are a matter of fractions of a second for these machines. In contrast, when it comes to the critical and analytical thinking that requires learning through observation, as in the human brain, these powerful machines falter and lag behind. Inspired by the brain's neural circuits, software models of neural networks (NNs) integrated with high-speed supercomputers were therefore developed as an alternative tool for machine-intelligence tasks such as function optimization, pattern recognition and voice recognition. However, as device downscaling and transistor performance approach the limits of Moore's law, owing to high CMOS fabrication costs and large tunneling energy losses, training these algorithms over many hidden layers is becoming a serious concern for future applications, and the balance between faster performance and the computational power demanded by complex tasks is becoming highly disproportionate. Alternatives to both conventional NN models and the von Neumann architecture are therefore needed for next-generation machine-intelligence systems. Fortunately, extensive research into unconventional computing with reservoir-based neural network platforms, called in-materio reservoir computing (RC), offers such an alternative. In-materio RC uses physical, biological, chemical, cellular-automata and other inanimate dynamical systems as a source of nonlinear, high-dimensional spatio-temporal information processing with which to construct a specific target task. RC not only has a simplified three-layer neural architecture, but also requires only a cheap, fast optimization of the readout weights, using a regression algorithm to construct the supervised target as a weighted linear combination of the readouts.
    Utilizing this idea, in this work we report such an in-materio RC with a dynamical random network of a single-walled carbon nanotube/porphyrin-polyoxometalate (SWNT/Por-POM) device. Chapter 1 introduces the literature on the evolution of ANNs and the shortcomings of the von Neumann architecture and of ANN training models, which lead us to adopt the in-materio RC architecture. We formulate the problem statement, focused on extending the previously suggested theoretical RC model of an SWNT/POM network to an experimental one, and present the objective of fabricating a random network based on nanomaterials, as such networks closely resemble the network structure of the brain. We conclude by stating the scope of this work: validating the nonlinear, high-dimensional reservoir properties of SWNT/Por-POM so that it can explicitly demonstrate the RC benchmark tasks of optimization and classification. Chapter 2 describes the methodology, including the chemicals required for the facile synthesis of the material; the synthesis is divided broadly into SWNT purification followed by its dispersion with Por-POM to form the desired complex.
    This is followed by microelectrode-array fabrication and wet-transfer thin-film deposition, giving the final reservoir architecture of input-output control read pads with the SWNT/Por-POM reservoir. Finally, we briefly describe the AFM, UV-Vis spectroscopy and FE-SEM characterization of the SWNT/Por-POM complex, along with the electrical set-up, interfaced with a software algorithm, used to demonstrate the in-materio RC approach to machine intelligence. In Chapter 3, we study the current dynamics as a function of voltage and time and validate the nonlinear information-processing ability intrinsic to the device. The study reveals that the negative differential resistance (NDR) arising from the redox nature of Por-POM results in oscillating random-noise outputs, giving rise to 1/f, brain-like spatio-temporal information. We compute the memory capacity (MC) and show that the device exhibits the echo-state property of fading memory but remembers very little past information. The low MC and high nonlinearity led us to choose mostly nonlinear tasks, namely waveform generation, Boolean logic optimization and one-hot-vector binary object classification, as the RC benchmarks.
    Chapter 4 covers the waveform-generation task: using the high-dimensional voltage readouts of varying amplitude, phase and higher harmonic frequency relative to the input sine wave, a regression optimization was performed to construct cosine, triangular, square and sawtooth waves, achieving accuracies of around 95%. The task complexity of function optimization is increased in Chapter 5, where two inputs are used to construct the Boolean logic functions OR, AND, XOR, NOR, NAND and XNOR; as with the waveforms, accuracy above 95% could be achieved thanks to the NDR nonlinearity. The device is also tested on a classification problem in Chapter 6. Here we show off-line binary classification of four toy objects (a hedgehog, a dog, a block and a bus), using as inputs the grasped tactile information obtained from the Toyota Human Support Robot. A ridge-regression analysis fitting a one-hot-vector supervised target was used to optimize the output weights and predict the correct outcome, and all objects were successfully classified owing to the device's 1/f information processing. Lastly, Chapter 7 concludes with the future scope of extending this idea to a 3-D device of the same material, which opens up opportunities for higher memory capacity, useful for future benchmark tasks such as time-series prediction. Overall, our research marks a stepping stone in utilizing SWNT/Por-POM for in-materio RC for the very first time, making it a promising candidate for next-generation machine intelligence.
    Doctoral dissertation, Kyushu Institute of Technology (degree no. 生工博甲第425号, conferred 27 December 2021). Contents: 1 Introduction and literature review | 2 Methodology | 3 Reservoir dynamics emerging from an incidental structure of single-walled carbon nanotube/porphyrin-polyoxometalate complex | 4 Fourier transform waveforms via in-materio reservoir computing from single-walled carbon nanotube/porphyrin-polyoxometalate complex | 5 Room temperature demonstration of in-materio reservoir computing for optimizing Boolean function with single-walled carbon nanotube/porphyrin-polyoxometalate composite | 6 Binary object classification with tactile sensory input information via single-walled carbon nanotube/porphyrin-polyoxometalate network as in-materio reservoir computing | 7 Future scope and conclusion. Kyushu Institute of Technology, 2021.
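    The common thread of these benchmark chapters is the RC readout step: only a linear readout is trained, here by ridge regression, on the measured high-dimensional responses. The sketch below illustrates that step on the waveform-generation benchmark; the "reservoir" is a random nonlinear surrogate standing in for the SWNT/Por-POM voltage readouts, and all parameters are illustrative assumptions rather than values from the thesis.

```python
# Sketch of reservoir-computing readout training by ridge regression
# (illustrative surrogate data, not the thesis measurements).
import numpy as np

rng = np.random.default_rng(0)
T, n_readouts, ridge = 2000, 50, 1e-3

# Input sine wave and a square-wave target (one of the benchmark waveforms).
t = np.linspace(0, 20 * np.pi, T)
u = np.sin(t)
target = np.sign(u)

# Surrogate reservoir: each "electrode" responds with a nonlinearly distorted
# harmonic of the input, mimicking readouts of varying amplitude, phase and
# harmonic content relative to the input sine wave.
harmonics = rng.integers(1, 6, n_readouts)
phases = rng.uniform(0, 2 * np.pi, n_readouts)
gains = rng.uniform(0.5, 2.0, n_readouts)
states = np.tanh(gains * np.sin(harmonics * t[:, None] + phases))
X = np.hstack([states, np.ones((T, 1))])        # readouts plus a bias column

# Only the readout weights are trained (the core simplification of RC).
W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ target)
prediction = X @ W

print(f"readout training MSE: {np.mean((prediction - target) ** 2):.4f}")
```

    The Boolean-logic and object-classification tasks replace the waveform target with logic outputs or one-hot class labels, but train the readout weights in the same way.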

    Optics for AI and AI for Optics

    Artificial intelligence is deeply involved in our daily lives, reinforcing the digital transformation of modern economies and infrastructure. It relies on powerful computing clusters, which face power-consumption bottlenecks for both data transmission and intensive computing. Meanwhile, optics (especially optical communications, which underpin today’s telecommunications) is penetrating short-reach connections down to the chip level, where it meets AI technology and creates numerous opportunities. This book is about the marriage of optics and AI and how each can benefit from the other. Optics facilitates on-chip neural networks based on fast optical computing and energy-efficient interconnects and communications. Conversely, AI provides efficient tools to address the challenges of today’s optical communication networks, which behave in an increasingly complex manner. The book collects contributions from pioneering researchers in both academia and industry to discuss the challenges and solutions in each of the respective fields.

    Robust learning algorithms for spiking and rate-based neural networks

    Inspired by the remarkable properties of the human brain, the fields of machine learning, computational neuroscience and neuromorphic engineering have achieved significant synergistic progress in the last decade. Powerful neural network models rooted in machine learning have been proposed as models for neuroscience and for applications in neuromorphic engineering. However, the aspect of robustness is often neglected in these models. Both biological and engineered substrates show diverse imperfections that deteriorate the performance of computational models or even prohibit their implementation. This thesis describes three projects aimed at implementing robust learning with local plasticity rules in neural networks. First, we demonstrate the advantages of neuromorphic computation in a pilot study on a prototype chip, quantifying the speed and energy consumption of the system compared to a software simulation and showing how on-chip learning contributes to the robustness of learning. Second, we present an implementation of spike-based Bayesian inference on accelerated neuromorphic hardware. The model copes, via learning, with the disruptive effects of the imperfect substrate and benefits from the acceleration. Finally, we present a robust model of deep reinforcement learning using local learning rules. It shows how backpropagation combined with neuromodulation could be implemented in a biologically plausible framework. The results contribute to the pursuit of robust and powerful learning networks for biological and neuromorphic substrates.
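    As a hedged illustration of the general idea behind combining backpropagation-like learning with neuromodulation through local rules (a generic three-factor rule, not the specific models of this thesis), the sketch below updates each synapse from a local eligibility term, presynaptic activity times the postsynaptic deviation from its expected rate, gated by a global reward-prediction error. The task and all parameters are invented for illustration.

```python
# Sketch of a generic three-factor (reward-modulated) local plasticity rule:
# local eligibility (pre x post deviation) gated by a global neuromodulatory
# reward-prediction error. Illustrative toy task, not from the thesis.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 20, 5
W = 0.1 * rng.standard_normal((n_out, n_in))        # plastic weights
W_teacher = rng.standard_normal((n_out, n_in))      # defines the target mapping
eta, noise_std, reward_avg = 0.2, 0.2, 0.0

for trial in range(5000):
    x = rng.standard_normal(n_in)                   # presynaptic rates
    noise = noise_std * rng.standard_normal(n_out)  # exploratory output fluctuation
    y = np.tanh(W @ x) + noise                      # noisy postsynaptic rates
    y_target = np.tanh(W_teacher @ x)

    # Local factor: eligibility from presynaptic activity and the postsynaptic
    # deviation from its expected (noise-free) rate.
    eligibility = np.outer(noise, x)

    # Global factor: one scalar reward broadcast to all synapses, compared with
    # a running baseline to give a reward-prediction error ("neuromodulator").
    reward = -np.mean((y - y_target) ** 2)
    modulator = reward - reward_avg
    reward_avg += 0.05 * (reward - reward_avg)

    # Three-factor update: pre x post-deviation x global modulator.
    W += eta * modulator * eligibility

    if trial % 1000 == 0:
        print(f"trial {trial:5d}: running-average reward {reward_avg:.4f}")
```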

    Signatures of criticality arise in simple neural population models with correlations

    Large-scale recordings of neuronal activity make it possible to gain insights into the collective activity of neural ensembles. It has been hypothesized that neural populations might be optimized to operate at a 'thermodynamic critical point', and that this property has implications for information processing. Support for this notion has come from a series of studies which identified statistical signatures of criticality in the ensemble activity of retinal ganglion cells. What are the underlying mechanisms that give rise to these observations? Here we show that signatures of criticality arise even in simple feed-forward models of retinal population activity. In particular, they occur whenever neural population data exhibit correlations and are randomly sub-sampled during data analysis. These results show that signatures of criticality are not necessarily indicative of an optimized coding strategy, and challenge the utility of analysis approaches based on equilibrium thermodynamics for understanding partially observed biological systems.
    Comment: 36 pages, LaTeX; added journal reference on page 1; added link to code repository
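    The paper's argument lends itself to a short simulation (a sketch under assumed parameters, not the authors' code): draw correlated binary activity from a simple feed-forward common-input model, randomly sub-sample neurons, and compute the "specific heat" curve often used as a criticality signature. The peak grows with the number of sampled neurons even though nothing in the model is tuned to a critical point; numpy and scipy are assumed.

```python
# Sketch: criticality-like signatures from correlated, sub-sampled activity
# generated by a simple feed-forward (common-input) model. Illustrative only.
import numpy as np
from scipy.special import gammaln
from scipy.stats import norm

rng = np.random.default_rng(0)
N_total, n_samples, rate, corr_strength = 200, 50_000, 0.1, 0.5

# Dichotomized-Gaussian-style model: shared input plus private noise,
# thresholded to give correlated binary spike words at the desired rate.
threshold = norm.ppf(1 - rate)
shared = rng.standard_normal(n_samples)[:, None]
private = rng.standard_normal((n_samples, N_total))
spikes = (np.sqrt(corr_strength) * shared + np.sqrt(1 - corr_strength) * private) > threshold

def specific_heat(counts, n, temperatures):
    """Heat capacity per neuron for an exchangeable population, estimated
    from the distribution of the population spike count k."""
    p_k = np.bincount(counts, minlength=n + 1) + 1.0           # pseudocount smoothing
    p_k /= p_k.sum()
    k = np.arange(n + 1)
    log_binom = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    energy = -(np.log(p_k) - log_binom)                        # E(x) = -log P(x) per pattern
    c = []
    for T in temperatures:
        logw = log_binom - energy / T                          # weight of each count-k shell
        w = np.exp(logw - logw.max())
        w /= w.sum()
        mean_E = np.sum(w * energy)
        c.append(np.sum(w * (energy - mean_E) ** 2) / (n * T ** 2))
    return np.array(c)

temperatures = np.linspace(0.8, 2.0, 30)
for n in (20, 50, 100, 200):
    subset = rng.choice(N_total, size=n, replace=False)        # random sub-sampling
    counts = spikes[:, subset].sum(axis=1)
    c = specific_heat(counts, n, temperatures)
    print(f"n={n:4d}: peak specific heat {c.max():.2f} at T={temperatures[c.argmax()]:.2f}")
```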