
    Toward bio-inspired information processing with networks of nano-scale switching elements

    Unconventional computing explores multi-scale platforms connecting molecular-scale devices into networks for the development of scalable neuromorphic architectures, often based on new materials and components with new functionalities. We review work investigating the functionalities of locally connected networks of different types of switching elements as computational substrates. In particular, we discuss reservoir computing with networks of nonlinear nanoscale components. In the usual neuromorphic paradigms, the network's synaptic weights are adjusted as the result of a training/learning process. In reservoir computing, the nonlinear network acts as a dynamical system that mixes and spreads the input signals over a large state space, and only a readout layer is trained. We illustrate the most important concepts with a few examples, featuring memristor networks with time-dependent and history-dependent resistances.
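
    As a concrete illustration of the reservoir paradigm described in this abstract, the sketch below implements a small echo state network in Python: a fixed random nonlinear network mixes the input into a high-dimensional state space, and only a linear readout is trained (here by ridge regression on a toy delay-recall task). The network sizes, parameters, and task are illustrative assumptions, not taken from the review.

        # Minimal echo-state-network sketch: the random reservoir is fixed,
        # only the linear readout is trained. All sizes and the toy task
        # (recalling the input 5 steps back) are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_res, n_steps = 1, 100, 1000

        # Fixed random input and reservoir weights; the reservoir is NOT trained.
        W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.normal(0, 1, (n_res, n_res))
        W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

        u = rng.uniform(-1, 1, (n_steps, n_in))     # input signal
        target = np.roll(u[:, 0], 5)                # toy task: input delayed by 5 steps

        # Drive the reservoir and collect its states.
        x = np.zeros(n_res)
        states = np.zeros((n_steps, n_res))
        for t in range(n_steps):
            x = np.tanh(W @ x + W_in @ u[t])
            states[t] = x

        # Train only the linear readout (ridge regression), discarding a warm-up.
        washout, ridge = 100, 1e-6
        S, y = states[washout:], target[washout:]
        W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)
        print("training NMSE:", np.mean((S @ W_out - y) ** 2) / np.var(y))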

    Memristors for the Curious Outsiders

    We present both an overview and a perspective of recent experimental advances and proposed new approaches to performing computation using memristors. A memristor is a 2-terminal passive component with a dynamic resistance that depends on an internal parameter. We provide a brief historical introduction, as well as an overview of the physical mechanisms that lead to memristive behavior. This review is meant to guide non-practitioners in the field of memristive circuits and their connection to machine learning and neural computation. (Comment: Perspective paper for MDPI Technologies; 43 pages)
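
    To make the definition concrete, the sketch below simulates a simple linear-drift memristor model (in the spirit of the HP/Strukov model): the resistance is set by an internal state variable that drifts with the charge flowing through the device. The parameter values and the sinusoidal drive are assumptions for illustration, not taken from the paper.

        # Linear-drift memristor sketch: resistance depends on an internal state
        # variable x in [0, 1] that integrates the current. Parameters are
        # illustrative assumptions.
        import numpy as np

        R_on, R_off = 100.0, 16e3     # limiting resistances (ohms), assumed values
        mu, D = 1e-14, 1e-8           # assumed ion mobility (m^2/Vs) and thickness (m)
        dt, steps = 1e-5, 20000

        x = 0.1                       # internal state in [0, 1]
        t = np.arange(steps) * dt
        v = 1.0 * np.sin(2 * np.pi * 50 * t)    # applied voltage

        i_hist = np.zeros(steps)
        for k in range(steps):
            R = R_on * x + R_off * (1.0 - x)    # resistance set by the internal state
            i = v[k] / R
            # The state drifts with the charge that has flowed through the device,
            # which is what makes the resistance history-dependent.
            x += mu * R_on / D**2 * i * dt
            x = min(max(x, 0.0), 1.0)           # hard window keeping x in [0, 1]
            i_hist[k] = i

        # Plotting i_hist against v traces out the pinched hysteresis loop
        # characteristic of memristive devices.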

    The role of structure and complexity on Reservoir Computing quality

    We explore the effect of structure and connection complexity on the dynamical behaviour of Reservoir Computers (RC). At present, considerable effort goes into designing and hand-crafting physical reservoir computers. Both structure and physical complexity are often pivotal to task performance; however, assessing their overall importance is challenging. Using a recently proposed framework, we evaluate and compare the dynamical freedom (referred to as quality) of neural network structures, as an analogy for physical systems. The results quantify how structure affects the range of behaviours exhibited by these networks. They highlight that the high quality reached by more complex structures is often also achievable in simpler structures of greater network size. Alternatively, quality is often improved in smaller networks by adding greater connection complexity. This work demonstrates the benefits of using an abstract behaviour representation, rather than evaluation through benchmark tasks, to assess the quality of computing substrates, as the latter typically carries biases and often provides little insight into the complete computing quality of physical systems.
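
    The sketch below illustrates the idea of assessing a reservoir by its behaviour rather than through a benchmark task. It measures two commonly used reservoir properties, kernel rank and linear memory capacity, on a small simulated network; the choice of exactly these measures, and all parameters, are assumptions made for illustration rather than the framework used in the paper.

        # Behaviour-based assessment sketch: two property measures (kernel rank
        # and linear memory capacity) computed for a small tanh reservoir.
        # The measures and parameters are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(1)

        def run_reservoir(W, W_in, u):
            """Drive a tanh reservoir with input u and return the state matrix."""
            x = np.zeros(W.shape[0])
            states = np.zeros((len(u), W.shape[0]))
            for t, ut in enumerate(u):
                x = np.tanh(W @ x + W_in * ut)
                states[t] = x
            return states

        def kernel_rank(W, W_in, n_streams=50, length=100):
            """Rank of the final states reached from distinct random input streams."""
            finals = [run_reservoir(W, W_in, rng.uniform(-1, 1, length))[-1]
                      for _ in range(n_streams)]
            return np.linalg.matrix_rank(np.array(finals), tol=1e-6)

        def memory_capacity(W, W_in, max_delay=20, length=2000, washout=100):
            """Sum over delays of how well a linear readout recalls past inputs."""
            u = rng.uniform(-1, 1, length)
            S = run_reservoir(W, W_in, u)[washout:]
            mc = 0.0
            for d in range(1, max_delay + 1):
                y = u[washout - d:length - d]            # input delayed by d steps
                w = np.linalg.lstsq(S, y, rcond=None)[0]
                mc += np.corrcoef(S @ w, y)[0, 1] ** 2
            return mc

        n = 50
        W_in = rng.uniform(-1, 1, n)
        W = rng.normal(0, 1, (n, n))
        W *= 0.9 / max(abs(np.linalg.eigvals(W)))
        print("kernel rank:", kernel_rank(W, W_in),
              "memory capacity:", memory_capacity(W, W_in))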

    Can biological quantum networks solve NP-hard problems?

    There is a widespread view that the human brain is so complex that it cannot be efficiently simulated by universal Turing machines. During the last decades the question has therefore been raised of whether we need to consider quantum effects to explain the imagined cognitive power of a conscious mind. This paper presents a personal view of several fields of philosophy and computational neurobiology in an attempt to suggest a realistic picture of how the brain might work as a basis for perception, consciousness and cognition. The purpose is to be able to identify and evaluate instances where quantum effects might play a significant role in cognitive processes. Not surprisingly, the conclusion is that quantum-enhanced cognition and intelligence are very unlikely to be found in biological brains. Quantum effects may certainly influence the functionality of various components and signalling pathways at the molecular level in the brain network, like ion ports, synapses, sensors, and enzymes. This might evidently influence the functionality of some nodes and perhaps even the overall intelligence of the brain network, but hardly give it any dramatically enhanced functionality. So, the conclusion is that biological quantum networks can only approximately solve small instances of NP-hard problems. On the other hand, artificial intelligence and machine learning implemented in complex dynamical systems based on genuine quantum networks can certainly be expected to show enhanced performance and quantum advantage compared with classical networks. Nevertheless, even quantum networks can only be expected to efficiently solve NP-hard problems approximately. In the end it is a question of precision: Nature is approximate. (Comment: 38 pages)

    Reservoir Computing in Materio

    Reservoir Computing first emerged as an efficient mechanism for training recurrent neural networks and later evolved into a general theoretical model for dynamical systems. By applying only a simple training mechanism, many physical systems have become exploitable as unconventional computers. However, at present, many of these systems require careful selection and tuning by hand to produce usable or optimal reservoir computers. In this thesis we show the first steps in applying the reservoir model as a simple computational layer to extract exploitable information from complex material substrates. We argue that many physical substrates, even systems that in their natural state might not form usable or "good" reservoirs, can be configured into working reservoirs given some stimulation. To achieve this we apply techniques from evolution in materio, whereby configuration is through evolved input-output signal mappings and targeted stimuli. In preliminary experiments the combined model and configuration method is applied to carbon nanotube/polymer composites. The results show that substrates can be configured and trained as reservoir computers of varying quality. It is shown that applying the reservoir model adds greater functionality and programmability to physical substrates, without sacrificing performance. Next, the weaknesses of the technique are addressed, with the creation of a new high input-output hardware system and an alternative multi-substrate framework. Lastly, a substantial effort is put into characterising the quality of a substrate for reservoir computing, i.e. its ability to realise many reservoirs. From this, a methodological framework is devised. Using the framework, radically different computing substrates are compared and assessed, something previously not possible. As a result, a new understanding of the relationships between substrates, tasks and properties is possible, paving the way for future exploration and optimisation of new computing substrates.
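
    The sketch below illustrates the configuration idea in this abstract: a simple (1+4) evolutionary loop evolves configuration stimuli and an input mapping for a black-box substrate, scoring each candidate by how well a trained linear readout performs a toy task. The "substrate" here is a random nonlinear stand-in, not a model of the carbon nanotube/polymer composites used in the thesis, and all parameters are illustrative assumptions.

        # Evolution-in-materio-style configuration sketch: evolve configuration
        # voltages and an input mapping for a black-box substrate, with fitness
        # given by the performance of a trained linear readout. The substrate is
        # a simulated stand-in; everything here is an illustrative assumption.
        import numpy as np

        rng = np.random.default_rng(2)
        n_electrodes, n_steps = 16, 500

        def substrate(config_voltages, input_signal, input_map):
            """Stand-in black box: electrode readings of a driven nonlinear system."""
            W = np.tanh(np.outer(config_voltages, config_voltages) + 0.1)
            x = np.zeros(n_electrodes)
            out = np.zeros((n_steps, n_electrodes))
            for t in range(n_steps):
                drive = input_map * input_signal[t] + config_voltages
                x = np.tanh(0.5 * W @ x + drive)
                out[t] = x
            return out

        def fitness(genome, u, target):
            """Train a linear readout on the substrate's response; return -NMSE."""
            config, input_map = genome[:n_electrodes], genome[n_electrodes:]
            S = substrate(config, u, input_map)
            w = np.linalg.lstsq(S, target, rcond=None)[0]
            return -np.mean((S @ w - target) ** 2) / np.var(target)

        u = rng.uniform(-1, 1, n_steps)
        target = np.roll(u, 2)                       # toy task: 2-step recall

        best = rng.uniform(-1, 1, 2 * n_electrodes)  # [config voltages | input mapping]
        best_fit = fitness(best, u, target)
        for gen in range(50):                        # (1+4) evolution strategy
            for _ in range(4):
                child = best + rng.normal(0, 0.1, best.shape)
                f = fitness(child, u, target)
                if f >= best_fit:
                    best, best_fit = child, f
        print("best configuration NMSE:", -best_fit)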

    Reservoir computing quality: connectivity and topology

    We explore the effect of connectivity and topology on the dynamical behaviour of Reservoir Computers. At present, considerable effort goes into designing and hand-crafting physical reservoir computers. Both structure and physical complexity are often pivotal to task performance; however, assessing their overall importance is challenging. Using a recently developed framework, we evaluate and compare the dynamical freedom (referred to as quality) of neural network structures, as an analogy for physical systems. The results quantify how structure affects the behavioural range of networks. They demonstrate how the high quality reached by more complex structures is often also achievable in simpler structures of greater network size. Alternatively, quality is often improved in smaller networks by adding greater connection complexity. This work demonstrates the benefits of using dynamical behaviour, rather than evaluation through benchmark tasks, to assess the quality of computing substrates, as the latter often provide only a narrow and biased insight into the computing quality of physical systems.
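
    As a rough illustration of the size-versus-connectivity trade-off mentioned above, the sketch below builds reservoirs with different topologies (a ring, a ring with random shortcuts, and a fully random network) at several sizes and compares a crude proxy for behavioural range, the rank of the state trajectory. The topologies, sizes, and proxy measure are assumptions for illustration, not the measures used in the paper.

        # Topology comparison sketch: construct reservoirs with different
        # connection structures and compare a simple behaviour proxy (rank of
        # the driven state trajectory). Topologies and measure are assumptions.
        import numpy as np

        rng = np.random.default_rng(3)

        def topology(kind, n):
            """Return a weight matrix with the requested connection structure."""
            A = np.zeros((n, n))
            if kind == "ring":
                for i in range(n):
                    A[i, (i + 1) % n] = rng.normal()
            elif kind == "ring+shortcuts":
                for i in range(n):
                    A[i, (i + 1) % n] = rng.normal()
                for _ in range(n // 4):              # a few random long-range links
                    A[rng.integers(n), rng.integers(n)] = rng.normal()
            else:                                    # fully random connectivity
                A = rng.normal(0, 1, (n, n))
            radius = max(abs(np.linalg.eigvals(A)))
            return 0.9 * A / radius if radius > 0 else A

        def effective_rank(W, steps=500):
            """Rank of the state trajectory under a random drive (behaviour proxy)."""
            n = W.shape[0]
            W_in = rng.uniform(-1, 1, n)
            x, states = np.zeros(n), []
            for u in rng.uniform(-1, 1, steps):
                x = np.tanh(W @ x + W_in * u)
                states.append(x)
            return np.linalg.matrix_rank(np.array(states), tol=1e-6)

        for kind in ["ring", "ring+shortcuts", "random"]:
            for n in [25, 50, 100]:
                print(kind, n, effective_rank(topology(kind, n)))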

    Designing Computational Substrates using Open-Ended Evolution

    Evolutionary algorithms are powerful tools to discover novel and diverse solutions to complex problems. Here, we discuss how open-ended algorithms, such as novelty search, can be used to design and evaluate new unconventional computing systems, from the design of materials to the creation of new computational models.
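
    A minimal sketch of the novelty-search idea mentioned above is given below: candidates are rewarded not for task performance but for reaching behaviours unlike anything in a growing archive. The evolved "design" (a plain parameter vector) and its behaviour descriptor are placeholders standing in for a real substrate or material model.

        # Minimal novelty-search sketch: selection favours candidates whose
        # behaviour is far from everything already archived, rather than those
        # that score best on a fixed task. Design and descriptor are placeholders.
        import numpy as np

        rng = np.random.default_rng(4)

        def behaviour(design):
            """Map a candidate design to a low-dimensional behaviour descriptor."""
            return np.array([np.tanh(design).mean(), np.abs(design).std()])

        def novelty(b, archive, k=5):
            """Mean distance to the k nearest archived behaviours."""
            if not archive:
                return np.inf
            d = np.sort([np.linalg.norm(b - a) for a in archive])
            return d[:k].mean()

        archive = []
        population = [rng.normal(0, 1, 10) for _ in range(20)]
        for gen in range(100):
            scored = [(novelty(behaviour(p), archive), p) for p in population]
            scored.sort(key=lambda s: s[0], reverse=True)
            # Archive the most novel candidates and breed the next generation from them.
            for _, p in scored[:5]:
                archive.append(behaviour(p))
            parents = [p for _, p in scored[:5]]
            population = [parents[i % 5] + rng.normal(0, 0.2, 10) for i in range(20)]
        print("behaviours discovered:", len(archive))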