90,313 research outputs found

    Brain-inspired conscious computing architecture

    Get PDF
    What type of artificial systems will claim to be conscious and will claim to experience qualia? The ability to comment upon physical states of a brain-like dynamical system coupled with its environment seems to be sufficient to make such claims. The flow of internal states in such a system, guided and limited by associative memory, is similar to the stream of consciousness. Minimal requirements for an artificial system that will claim to be conscious were given in the form of a specific architecture named articon. Nonverbal discrimination of the working memory states of the articon gives it the ability to experience different qualities of internal states. Analysis of the inner state flows of such a system during a typical behavioral process shows that qualia are inseparable from perception and action. The role of consciousness in the learning of skills, when conscious information processing is replaced by subconscious processing, is elucidated. Arguments confirming that phenomenal experience is a result of cognitive processes are presented. Possible philosophical objections based on the Chinese room and other arguments are discussed, but they are insufficient to refute the articon's claims. Conditions for genuine understanding that go beyond the Turing test are presented. Articons may fulfill such conditions, and in principle the structure of their experiences may be arbitrarily close to human.
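    The abstract describes the articon only at a conceptual level. As a rough illustration of the kind of loop it implies, the hedged Python sketch below couples a Hebbian associative memory to a working-memory state and adds a separate discrimination step that "comments" on the current internal state by categorizing it. All names, sizes, and update rules here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hedged sketch of an articon-like loop (illustrative assumptions, not the
# paper's architecture): an associative memory constrains the flow of working
# memory states, and a separate discrimination step "comments" on the current
# state by reporting which stored prototype it most resembles.

rng = np.random.default_rng(0)

# Hypothetical associative memory: four stored prototype states of 32 units.
prototypes = rng.choice([-1.0, 1.0], size=(4, 32))
weights = prototypes.T @ prototypes / prototypes.shape[1]   # Hebbian storage
np.fill_diagonal(weights, 0.0)

def step(state, sensory_input, drive=0.3):
    """One working-memory update: associative recall plus external drive."""
    field = weights @ state + drive * sensory_input
    return np.where(field >= 0, 1.0, -1.0)

def discriminate(state):
    """Nonverbal 'comment': the index of the stored prototype most similar to
    the current working-memory state (a stand-in for labeling its quality)."""
    return int(np.argmax(prototypes @ state))

state = rng.choice([-1.0, 1.0], size=32)
for t in range(5):
    sensory = rng.choice([-1.0, 1.0], size=32)   # stand-in for the environment
    state = step(state, sensory)
    print(f"t={t}: system reports internal-state category {discriminate(state)}")
```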

    Brain-Inspired Computing

    Get PDF
    This open access book constitutes revised selected papers from the 4th International Workshop on Brain-Inspired Computing, BrainComp 2019, held in Cetraro, Italy, in July 2019. The 11 papers presented in this volume were carefully reviewed and selected for inclusion in this book. They deal with research on brain atlasing, multi-scale models and simulation, HPC and data infrastructures for neuroscience, as well as artificial and natural neural architectures.

    Towards brain-inspired computing

    Get PDF
    We present introductory considerations and analysis toward computing applications based on the recently introduced deterministic logic scheme with random spike (pulse) trains [Phys. Lett. A 373 (2009) 2338-2342]. In considering the questions "Why random?" and "Why pulses?", we also show that the random-pulse-based scheme provides the advantage of realizing multivalued deterministic logic. Pulse trains are realized by an element called an orthogonator. We discuss two different types of orthogonators, parallel (intersection-based) and serial (demultiplexer-based). The latter can be slower, but it makes sequential logic design straightforward. We propose generating a multidimensional logic hyperspace [Phys. Lett. A 373 (2009) 1928-1934] by using the zero-crossing events of uncorrelated Gaussian electrical noises available in the chips. The spike trains in the hyperspace are non-overlapping and are referred to as neuro-bits. To demonstrate this idea, we generate 3-dimensional hyperspace bases using two Gaussian noises as sources for the neuro-bits. In such a scenario, the detection of different hyperspace basis elements may have vastly differing delays. We show that it is possible to provide an identical speed for all the hyperspace basis elements by using correlated noise sources, and demonstrate this for the two-neuro-bit situation. The key impact of this paper is to demonstrate that a logic design approach using such neuro-bits can yield a fast, low-power, and environmental-variation-tolerant means of designing computer circuitry. It also enables the realization of multi-valued logic, significantly increasing the complexity of computer circuits by allowing several neuro-bits to be transmitted on a single wire. Comment: 10 pages.
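    To make the neuro-bit idea more concrete, here is a hedged Python sketch of the general construction the abstract describes: two independent Gaussian noise sources are thresholded at zero, and the joint sign pattern selects one of several mutually exclusive spike trains, so the trains never overlap. The specific combination rule, names, and parameters are illustrative assumptions, not necessarily the paper's exact orthogonator construction.

```python
import numpy as np

# Simplified illustration (not the paper's exact construction): two
# independent Gaussian noise sources are thresholded at zero, and the joint
# sign pattern at each time step activates at most one of three "neuro-bit"
# indicator trains, making the trains mutually non-overlapping.

rng = np.random.default_rng(1)
n_steps = 10_000

noise_a = rng.standard_normal(n_steps)
noise_b = rng.standard_normal(n_steps)

# Binary sign streams derived from the zero crossings of each noise source.
bit_a = noise_a > 0
bit_b = noise_b > 0

# Three non-overlapping indicator trains built from the joint sign pattern
# (a stand-in for a 3-element hyperspace basis from two noise sources).
basis = {
    "H1": bit_a & ~bit_b,
    "H2": ~bit_a & bit_b,
    "H3": bit_a & bit_b,
}

# Non-overlap check: at most one basis train is active at any time step.
overlap = sum(train.astype(int) for train in basis.values())
assert overlap.max() <= 1

for name, train in basis.items():
    print(f"{name}: active fraction = {train.mean():.3f}")
```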

    Brain-Inspired Conscious Computing Architecture

    Get PDF
    What type of artificial systems will claim to be conscious and will claim to experience qualia? The ability to comment upon physical states of a brain-like dynamical system coupled with its environment seems to be sufficient to make such claims. The flow of internal states in such systems, guided and limited by associative memory, is similar to the stream of consciousness. A specific architecture of an artificial system, termed articon, is introduced that by its very design has to claim to be conscious. Non-verbal discrimination of the working memory states of the articon gives it the ability to experience different qualities of internal states. Analysis of the flow of inner states of such a system during a typical behavioral process shows that qualia are inseparable from perception and action. The role of consciousness in the learning of skills, when conscious information processing is replaced by subconscious processing, is elucidated. Arguments confirming that phenomenal experience is a result of cognitive processes are presented. Possible philosophical objections based on the Chinese room and other arguments are discussed, but they are insufficient to refute the articon's claim that it is conscious. Conditions for genuine understanding that go beyond the Turing test are presented. Articons may fulfill such conditions, and in principle the structure of their experiences may be arbitrarily close to human.

    Sequence learning in Associative Neuronal-Astrocytic Network

    Full text link
    The neuronal paradigm of studying the brain has left us with limitations in both our understanding of how neurons process information to achieve biological intelligence and how such knowledge may be translated into artificial intelligence and its most brain-derived branch, neuromorphic computing. Overturning our fundamental assumptions of how the brain works, the recent exploration of astrocytes is revealing that these long-neglected brain cells dynamically regulate learning by interacting with neuronal activity at the synaptic level. Following recent experimental evidence, we designed an associative, Hopfield-type, neuronal-astrocytic network and analyzed the dynamics of the interaction between neurons and astrocytes. We show that astrocytes were sufficient to trigger transitions between learned memories in the neuronal component of the network. Further, we mathematically derived the timing of the transitions, which is governed by the dynamics of the calcium-dependent slow currents in the astrocytic processes. Overall, we provide a brain-morphic mechanism for sequence learning that is inspired by, and aligns with, recent experimental findings. To evaluate our model, we emulated astrocytic atrophy and showed that memory recall becomes significantly impaired after a critical fraction of astrocytes is affected. This brain-inspired and brain-validated approach supports our ongoing efforts to incorporate non-neuronal computing elements in neuromorphic information processing. Comment: 8 pages, 5 figures.
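    As a rough, hedged illustration of how a slow variable can pace transitions between Hopfield memories, the Python sketch below uses a low-pass-filtered copy of the network state as a stand-in for the calcium-dependent slow astrocytic current, together with an asymmetric weight term that routes each memory toward the next. This is a generic latching-dynamics sketch under assumed parameters, not the authors' model.

```python
import numpy as np

# Hedged sketch (assumed parameters, not the authors' model): a Hopfield-type
# network in which a slowly integrated variable, standing in for the
# calcium-dependent slow astrocytic current, triggers transitions between
# stored memories; the transition timing is set by the slow time constant.

rng = np.random.default_rng(3)
n_units, n_patterns = 200, 3

patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n_units))

# Symmetric Hebbian weights hold each memory as an attractor.
W_sym = patterns.T @ patterns / n_units
np.fill_diagonal(W_sym, 0.0)

# Asymmetric weights map each memory onto the next one in the sequence.
W_seq = sum(np.outer(patterns[(k + 1) % n_patterns], patterns[k])
            for k in range(n_patterns)) / n_units

state = patterns[0].copy()        # start recall in the first memory
slow = np.zeros(n_units)          # slow, astrocyte-like current
tau, coupling = 30.0, 2.0         # assumed time constant and coupling strength

for t in range(150):
    slow += (state - slow) / tau                  # slow current tracks activity
    field = W_sym @ state + coupling * (W_seq @ slow)
    state = np.where(field >= 0, 1.0, -1.0)
    if t % 15 == 0:
        overlaps = patterns @ state / n_units
        print(f"t={t:3d}  memory overlaps: {np.round(overlaps, 2)}")
```

Running the sketch shows the overlap with each stored pattern rising and falling in turn, with the dwell time in each memory set by the slow variable's time constant tau.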

    Brain-Inspired Computing

    Full text link