195 research outputs found

    Cognitive Robots and the Conscious Mind: A Review of the Global Workspace Theory

    Purpose of Review: The theory of consciousness has challenged scholars and researchers for centuries, and even today it is not possible to define what consciousness is; this has led to the theorization of different models of consciousness. Starting from Baars' Global Workspace Theory, this paper examines the cognitive architectures inspired by it, which can serve as reference points in the field of robot consciousness. Recent Findings: Global Workspace Theory has recently been ranked as the most promising theory in its field. However, this standing is not reflected in the mathematical models of the cognitive architectures it has inspired: they are few, and most are a decade old, which is too long given the speed at which artificial intelligence techniques are improving. That said, recent publications propose simple mathematical models that are well suited to computer implementation. Summary: In this paper we give an overview of consciousness and robot consciousness, with some interesting insights from the literature. We then briefly present Baars' Global Workspace Theory. Finally, we report on the most interesting and promising cognitive-architecture models that implement it, describing their peculiarities.
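The core Global Workspace mechanism the reviewed architectures implement can be sketched in a few lines: specialist processes bid for access, one wins the competition, and its content is broadcast to all the others. This is a minimal illustration, not any of the reviewed models; the class names, salience weights, and inputs are illustrative assumptions.

```python
# Toy Global Workspace sketch: specialists compete for access,
# and the winner's content is broadcast globally.
class Specialist:
    def __init__(self, name, weight):
        self.name, self.weight, self.heard = name, weight, None

    def propose(self, inputs):
        # Bid with a salience score for this specialist's input channel.
        return (self.weight * inputs.get(self.name, 0.0), self.name)

    def receive(self, content):
        self.heard = content  # broadcast content reaches every specialist

class Workspace:
    def __init__(self, processes):
        self.processes = processes

    def cycle(self, inputs):
        bids = [p.propose(inputs) for p in self.processes]
        salience, content = max(bids)        # winner-takes-all competition
        for p in self.processes:
            p.receive(content)               # global broadcast
        return content

specialists = [Specialist("vision", 1.0), Specialist("audio", 0.5)]
ws = Workspace(specialists)
winner = ws.cycle({"vision": 0.4, "audio": 1.0})
print(winner)  # audio wins: 0.5 * 1.0 > 1.0 * 0.4
```

The single winner per cycle captures the serial "conscious" bottleneck that the theory places on top of massively parallel unconscious processing.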

    Spatiotemporal dynamics in spiking recurrent neural networks using modified-full-FORCE on EEG signals

    Methods for modelling the human brain as a complex system have increased remarkably in the literature as researchers seek to understand the underlying foundations of cognition, behaviour, and perception. Computational methods, especially those based on graph theory, have recently contributed significantly to understanding the wiring connectivity of the brain, modelling it as a set of nodes connected by edges. The brain's spatiotemporal dynamics can therefore be studied holistically by considering a network of many neurons, each represented by a node. Various models have been proposed for such neurons. A recently proposed method for training such networks, called full-FORCE, produces networks that perform tasks with fewer neurons and greater noise robustness than previous least-squares approaches (i.e. the FORCE method). In this paper, the first direct application of a variant of the full-FORCE method to biologically motivated spiking RNNs (SRNNs) is demonstrated. The SRNN is a graph consisting of modules, each modelled as a Small-World Network (SWN), a specific type of biologically plausible graph. Thus the first direct application of a variant of the full-FORCE method to modular SWNs is demonstrated, evaluated through regression and information-theoretic metrics. For the first time, this method is applied to spiking neuron models and trained on various real-life electroencephalography (EEG) signals. To the best of the authors' knowledge, all the contributions of this paper are novel. Results show that trained SRNNs match EEG signals almost perfectly, while the network dynamics can mimic the target dynamics. This demonstrates that the holistic setup of the network model and the neuron model, both more biologically plausible than in previous work, can be tuned to real biological signal dynamics.
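A Small-World Network of the kind used for each module is typically generated with the Watts-Strogatz procedure: a regular ring lattice whose edges are rewired to random targets with some small probability, yielding high clustering with short path lengths. The sketch below uses illustrative sizes and rewiring probability, not the paper's actual parameters.

```python
import random

def watts_strogatz(n, k, p, rng):
    """Ring lattice of n nodes, each linked to its k nearest neighbours,
    with every edge rewired to a random target with probability p."""
    ring = {(i, (i + j) % n) for i in range(n) for j in range(1, k // 2 + 1)}
    edges = set()
    for (u, v) in ring:
        if rng.random() < p:
            w = rng.randrange(n)
            # avoid self-loops and duplicate edges
            while (w == u or (u, w) in edges or (w, u) in edges
                    or (u, w) in ring or (w, u) in ring):
                w = rng.randrange(n)
            edges.add((u, w))   # long-range shortcut
        else:
            edges.add((u, v))   # keep the local lattice edge
    return edges

g = watts_strogatz(n=100, k=4, p=0.1, rng=random.Random(0))
print(len(g))  # rewiring preserves the edge count: n*k/2 = 200
```

With p near 0 the graph stays a clustered lattice; with p near 1 it approaches a random graph; intermediate p gives the small-world regime the module structure relies on.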

    A Computational Investigation of Neural Dynamics and Network Structure

    With the overall goal of illuminating the relationship between neural dynamics and neural network structure, this thesis presents a) a computer model of a network infrastructure capable of global broadcast and competition, and b) a study of various convergence properties of spike-timing-dependent plasticity (STDP) in a recurrent neural network. The first part of the thesis explores the parameter space of a possible Global Neuronal Workspace (GNW) realised in a novel computational network model using stochastic connectivity. The structure of this model is analysed in light of the characteristic dynamics of a GNW: broadcast, reverberation, and competition. It is found that even with careful consideration of the balance between excitation and inhibition, the structural choices do not allow agreement with the GNW dynamics, and the implications of this are addressed. An additional level of competition – access competition – is added, discussed, and found to be more conducive to winner-takes-all competition. The second part of the thesis investigates the formation of synaptic structure due to neural and synaptic dynamics. From previous theoretical and modelling work, it is predicted that homogeneous stimulation in a recurrent neural network with STDP will create a self-stabilising equilibrium amongst synaptic weights, while heterogeneous stimulation will induce structured synaptic changes. A new factor in modulating the synaptic weight equilibrium is suggested from the experimental evidence presented: anti-correlation due to inhibitory neurons. It is observed that the synaptic equilibrium creates competition amongst synapses, and those specifically stimulated during heterogeneous stimulation win out. Further investigation is carried out to assess the effect that more complex STDP rules would have on synaptic dynamics, varying the parameters of a trace STDP model. There is little qualitative effect on synaptic dynamics under low-frequency (< 25 Hz) conditions, justifying the use of simple STDP until further experimental or theoretical evidence suggests otherwise.
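The pair-based STDP rule underlying such models maps the relative timing of pre- and postsynaptic spikes to a weight change: pre-before-post potentiates, post-before-pre depresses, with exponentially decaying influence. The amplitudes and time constants below are common textbook values, not the thesis's parameters.

```python
import math

# Minimal pair-based STDP sketch (parameter values are illustrative).
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU_PLUS = TAU_MINUS = 20.0     # trace time constants (ms)

def stdp_dw(delta_t):
    """Weight change for one pre/post spike pair.

    delta_t = t_post - t_pre (ms): positive (pre before post) potentiates,
    negative (post before pre) depresses.
    """
    if delta_t >= 0:
        return A_PLUS * math.exp(-delta_t / TAU_PLUS)
    return -A_MINUS * math.exp(delta_t / TAU_MINUS)

print(stdp_dw(10.0) > 0)    # causal pairing strengthens the synapse
print(stdp_dw(-10.0) < 0)   # anti-causal pairing weakens it
```

Trace formulations implement the same window online, with each spike sampling a decaying eligibility trace left by spikes on the other side of the synapse.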

    AI of Brain and Cognitive Sciences: From the Perspective of First Principles

    In recent years, we have witnessed the great success of AI in various applications, including image classification, game playing, protein structure analysis, language translation, and content generation. Despite these powerful applications, there are still many tasks in our daily life that are rather simple for humans but pose great challenges to AI, including image and language understanding, few-shot learning, abstract concepts, and low-energy-cost computing. Learning from the brain thus remains a promising way to shed light on the development of next-generation AI. The brain is arguably the only known intelligent machine in the universe, the product of evolution for animals surviving in the natural environment. At the behavioural level, psychology and the cognitive sciences have demonstrated that human and animal brains can execute very intelligent high-level cognitive functions. At the structural level, cognitive and computational neurosciences have revealed that the brain has extremely complicated but elegant network forms to support its functions. Over the years, researchers have been gathering knowledge about the structure and functions of the brain, a process that has recently accelerated with the initiation of giant brain projects worldwide. Here, we argue that the general principles of brain function are the most valuable things to inspire the development of AI. These general principles are the standard rules by which the brain extracts, represents, manipulates, and retrieves information, and here we call them the first principles of the brain. This paper collects six such first principles: attractor network, criticality, random network, sparse coding, relational memory, and perceptual learning. On each topic, we review its biological background, fundamental properties, potential applications to AI, and future development. Comment: 59 pages, 5 figures, review article.
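The first of the listed principles, the attractor network, can be illustrated with a minimal Hopfield-style model: Hebbian outer-product learning stores a pattern as a fixed point, and dynamics starting from a corrupted state fall back into it. The network size and patterns below are illustrative assumptions.

```python
import random

N = 16  # number of +/-1 units (illustrative size)

def train(patterns):
    """Hebbian outer-product weights for a set of +/-1 patterns."""
    w = [[0.0] * N for _ in range(N)]
    for p in patterns:
        for i in range(N):
            for j in range(N):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    """Synchronous sign updates; the state falls into a stored attractor."""
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(N)) >= 0 else -1
                 for i in range(N)]
    return state

rng = random.Random(1)
pattern = [rng.choice([-1, 1]) for _ in range(N)]
w = train([pattern])
noisy = pattern[:]
noisy[0], noisy[1] = -noisy[0], -noisy[1]   # corrupt two bits
print(recall(w, noisy) == pattern)           # attractor restores the pattern
```

This error-correcting pull toward stored states is what makes attractor dynamics attractive as a memory primitive for AI systems.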

    Modeling the Synchronization of Multimodal Perceptions as a Basis for the Emergence of Deterministic Behaviors.

    Living organisms have either innate or acquired mechanisms for reacting to percepts with an appropriate behavior, e.g., by escaping from the source of a perception detected as a threat, or conversely by approaching a target perceived as potential food. In the case of artifacts, such capabilities must be built in through either wired connections or software. The problem addressed here is to define a neural basis for such behaviors so that they can be learned by bio-inspired artifacts. Toward this end, a thought experiment involving an autonomous vehicle is first simulated as a random search. The stochastic decision tree that drives this behavior is then transformed into a plastic neuronal circuit. This leads the vehicle to adopt a deterministic behavior by learning and applying a causality rule, just as a conscious human driver would do. From there, a principle of using synchronized multimodal perceptions in association with the Hebb principle of wiring together neuronal cells is induced. This overall framework is implemented as a virtual machine, i.e., a concept widely used in software engineering. It is argued that such an interface, situated at a meso-scale level between abstracted micro-circuits representing synaptic plasticity, on one hand, and the emergence of behaviors, on the other, allows for a strict delineation of successive levels of complexity. More specifically, isolating levels allows for simulating as-yet-unknown processes of cognition independently of their underlying neurological grounding.
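The Hebbian principle invoked above — cells that fire together wire together — can be sketched for two synchronized percepts: only co-active pre- and postsynaptic events strengthen the connection, until the association is strong enough to drive behavior. The learning rate and threshold are illustrative assumptions, not values from the paper.

```python
# Hebbian "wire together" sketch for synchronized multimodal percepts
# (learning rate and threshold are illustrative assumptions).
ETA = 0.2          # learning rate
THRESHOLD = 0.5    # weight above which the association drives behavior

def hebb_update(w, pre_active, post_active):
    """Strengthen the synapse only when both percepts are active together."""
    if pre_active and post_active:
        w += ETA * (1.0 - w)   # saturating Hebbian growth
    return w

w = 0.0
# Repeated synchronized visual + auditory percepts of the same event:
for _ in range(5):
    w = hebb_update(w, pre_active=True, post_active=True)
print(w > THRESHOLD)  # the learned association can now trigger behavior
```

The saturating form keeps the weight bounded in [0, 1], a common stabilization of the plain Hebb rule, whose weights would otherwise grow without limit.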

    Computational animal welfare: Towards cognitive architecture models of animal sentience, emotion and wellbeing

    To understand animal wellbeing, we need to consider subjective phenomena and sentience. This is challenging, since these properties are private and cannot be observed directly. Certain motivations, emotions and related internal states can be inferred in animals through experiments that involve choice, learning, generalization and decision-making. Yet, even though there is significant progress in elucidating the neurobiology of human consciousness, animal consciousness is still a mystery. We propose that computational animal welfare science emerges at the intersection of animal behaviour, welfare and computational cognition. Using ideas from cognitive science, we develop a functional and generic definition of subjective phenomena as any process or state of the organism that exists from the first-person perspective and cannot be isolated from the animal subject. We then outline a general cognitive architecture to model simple forms of subjective processes and sentience. This includes evolutionary adaptation, which encompasses top-down attention modulation, predictive processing and subjective simulation by re-entrant (recursive) computations. Thereafter, we show how this approach captures major characteristics of subjective experience: elementary self-awareness, a global workspace, and qualia with unity and continuity. This provides a formal framework for process-based modelling of animal needs, subjective states, sentience and wellbeing.

    Module hierarchy and centralisation in the anatomy and dynamics of human cortex

    Systems neuroscience has recently unveiled numerous fundamental features of the macroscopic architecture of the human brain, the connectome, and we are beginning to understand how characteristics of brain dynamics emerge from the underlying anatomical connectivity. The current work applies complex network analysis to a high-resolution structural connectivity of the human cortex to identify generic organisational principles, such as centralised, modular and hierarchical properties, as well as specific areas that are pivotal in shaping cortical dynamics and function. After confirming its small-world and modular architecture, we characterise the cortex's multilevel modular hierarchy, which appears to be reasonably centralised towards the brain's strong global structural core. The potential functional importance of the core and hub regions is assessed by various complex network metrics, such as integration measures, network vulnerability and motif spectrum analysis. Dynamics facilitated by the large-scale cortical topology are explored by simulating coupled oscillators on the anatomical connectivity. The results indicate that cortical connectivity appears to favour high dynamical complexity over high synchronizability. Taking the ability to entrain other brain regions as a proxy for the threat posed by a potential epileptic focus in a given region, we also show that epileptic foci in topologically more central areas should pose a higher epileptic threat than foci in more peripheral areas. To assess the influence of macroscopic brain anatomy in shaping global resting-state dynamics on slower time scales, we compare empirically obtained functional connectivity data with data from simulating dynamics on the structural connectivity. Despite considerable micro-scale variability between the two functional connectivities, our simulations are able to approximate the profile of the empirical functional connectivity. Our results outline the combined characteristics of a hierarchically modular and reasonably centralised macroscopic architecture of the human cerebral cortex which, through these topological attributes, appears to facilitate highly complex dynamics and fundamentally shape brain function.
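Simulating coupled oscillators on an anatomical coupling matrix, as described above, is commonly done with Kuramoto phase oscillators. The sketch below is a generic illustration on a toy all-to-all graph, not the paper's model or parameters; synchrony is measured by the standard order parameter r.

```python
import math
import random

def simulate(adj, omega, k=1.0, dt=0.01, steps=2000, seed=0):
    """Euler-integrate Kuramoto phase oscillators coupled through adj."""
    rng = random.Random(seed)
    n = len(adj)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        dtheta = [omega[i] + k * sum(adj[i][j] * math.sin(theta[j] - theta[i])
                                     for j in range(n))
                  for i in range(n)]
        theta = [(t + dt * d) % (2 * math.pi) for t, d in zip(theta, dtheta)]
    return theta

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]: 1 = full synchrony."""
    n = len(theta)
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

n = 8
adj = [[0 if i == j else 1 for j in range(n)] for i in range(n)]  # all-to-all
omega = [1.0] * n   # identical natural frequencies
r = order_parameter(simulate(adj, omega))
print(round(r, 2))  # strong coupling, identical oscillators -> r near 1.0
```

Replacing adj with an empirical structural connectivity matrix and spreading the natural frequencies is what lets such simulations probe the synchronizability-versus-complexity trade-off the abstract reports.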

    Fractals in the Nervous System: conceptual Implications for Theoretical Neuroscience

    This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the as yet unresolved issues of the detailed relationships among power-law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data, and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.

    The brain's connective core and its role in animal cognition

    This paper addresses the question of how the brain of an animal achieves cognitive integration—that is to say, how it manages to bring its fullest resources to bear on an ongoing situation. To fully exploit its cognitive resources, whether inherited or acquired through experience, it must be possible for unanticipated coalitions of brain processes to form. This facilitates the novel recombination of the elements of an existing behavioural repertoire, and thereby enables innovation. But in a system comprising massively many anatomically distributed assemblies of neurons, it is far from clear how such open-ended coalition formation is possible. The present paper draws on contemporary findings in brain connectivity and neurodynamics, as well as the literature of artificial intelligence, to outline a possible answer in terms of the brain's most richly connected and topologically central structures, its so-called connective core.