    Insects have the capacity for subjective experience

    To what degree are non-human animals conscious? We propose that the most meaningful way to approach this question is from the perspective of functional neurobiology. Here we focus on subjective experience, which is a basic awareness of the world without further reflection on that awareness. This is considered the most basic form of consciousness. Tellingly, this capacity is supported by the integrated midbrain and basal ganglia structures, which are among the oldest and most highly conserved brain systems in vertebrates. A reasonable inference is that the capacity for subjective experience is both widespread and evolutionarily old within the vertebrate lineage. We argue that the insect brain supports functions analogous to those of the vertebrate midbrain and hence that insects may also have a capacity for subjective experience. We discuss the features of neural systems that can and cannot be expected to support this capacity, as well as the relationship between our arguments based on neurobiological mechanism and our approach to the “hard problem” of conscious experience.

    Multiscale computation and dynamic attention in biological and artificial intelligence

    Biological and artificial intelligence (AI) are often defined by their capacity to achieve a hierarchy of short-term and long-term goals that require incorporating information over time and space at both local and global scales. More advanced forms of this capacity involve the adaptive modulation of integration across scales, which resolves computational inefficiency and explore-exploit dilemmas at the same time. Research in both neuroscience and AI has made progress towards understanding architectures that achieve this. Insight into biological computation comes from phenomena such as decision inertia, habit formation, information search, risky choices and foraging. Across these domains, the brain is equipped with mechanisms (such as the dorsal anterior cingulate and dorsolateral prefrontal cortices) that can represent and modulate across scales, both through top-down control processes and through local-to-global consolidation as information progresses from sensory to prefrontal areas. Paralleling these biological architectures, progress in AI is marked by innovations in dynamic multiscale modulation, moving from recurrent and convolutional neural networks, which have fixed scalings, to attention, transformers, dynamic convolutions, and consciousness priors, which modulate scale according to the input and increase scale breadth. The use and development of these multiscale innovations in robotic agents, game AI, and natural language processing (NLP) are pushing the boundaries of AI achievements. By juxtaposing biological and artificial intelligence, the present work underscores the critical importance of multiscale processing to general intelligence, as well as highlighting innovations and differences between the future of biological and artificial intelligence.
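
    As a rough, assumption-laden illustration of the contrast the abstract draws between fixed scalings and input-dependent modulation, the Python sketch below compares a fixed-kernel 1-D convolution with scaled dot-product attention, whose mixing weights are recomputed for every input; the shapes, function names, and toy data are illustrative choices, not code from the paper.

```python
# Illustrative sketch (not from the paper): a fixed-scale convolution vs.
# attention weights that are recomputed per input. Uses only NumPy.
import numpy as np

def fixed_conv1d(x, kernel):
    """Fixed receptive field: the same kernel (scale) is applied everywhere."""
    k = len(kernel)
    return np.array([np.dot(kernel, x[i:i + k]) for i in range(len(x) - k + 1)])

def dot_product_attention(queries, keys, values):
    """Scaled dot-product attention: mixing weights depend on the input itself."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)            # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ values                           # input-dependent, potentially global mixing

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))                           # 8 tokens, 4 features (toy data)
print(fixed_conv1d(x[:, 0], np.ones(3) / 3))          # local, fixed-scale smoothing
print(dot_product_attention(x, x, x).shape)           # (8, 4): weights recomputed per input
```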

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al, 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Perceptual reality monitoring: Neural mechanisms dissociating imagination from reality

    There is increasing evidence that imagination relies on similar neural mechanisms as externally triggered perception. This overlap presents a challenge for perceptual reality monitoring: deciding what is real and what is imagined. Here, we explore how perceptual reality monitoring might be implemented in the brain. We first describe sensory and cognitive factors that could dissociate imagery and perception and conclude that no single factor unambiguously signals whether an experience is internally or externally generated. We suggest that reality monitoring is implemented by higher-level cortical circuits that evaluate first-order sensory and cognitive factors to determine the source of sensory signals. According to this interpretation, perceptual reality monitoring shares core computations with metacognition. This multi-level architecture might explain several types of source confusion as well as dissociations between simply knowing whether something is real and actually experiencing it as real. We discuss avenues for future research to further our understanding of perceptual reality monitoring, an endeavour that has important implications for our understanding of clinical symptoms as well as general cognitive function.

    An integrative, multiscale view on neural theories of consciousness.

    How is conscious experience related to material brain processes? A variety of theories aiming to answer this age-old question have emerged from the recent surge in consciousness research, and some are now hotly debated. Although most researchers have so far focused on the development and validation of their preferred theory in relative isolation, this article, written by a group of scientists representing different theories, takes an alternative approach. Noting that various theories often try to explain different aspects or mechanistic levels of consciousness, we argue that the theories do not necessarily contradict each other. Instead, several of them may converge on fundamental neuronal mechanisms and be partly compatible and complementary, so that multiple theories can simultaneously contribute to our understanding. Here, we consider unifying, integration-oriented approaches that have so far been largely neglected, seeking to combine valuable elements from various theories.

    A probabilistic approach to the construction of a multimodal affect space

    Understanding affective signals from others is crucial for both human-human and human-agent interaction. The automatic analysis of emotion is by and large addressed as a pattern recognition problem grounded in early psychological theories of emotion. Suitable features are first extracted and then used as input to classification (discrete emotion recognition) or regression (continuous affect detection). In this thesis, unlike many computational models in the literature, we draw on a simulationist approach to the analysis of facially displayed emotions (e.g., in the course of a face-to-face interaction between an expresser and an observer). At the heart of this perspective lies the enactment of the perceived emotion in the observer. We propose a probabilistic framework based on a deep latent representation of a continuous affect space, which can be exploited for both the estimation and the enactment of affective states in a multimodal space. Namely, we consider the observed facial expression together with physiological activations driven by internal autonomic activity. The rationale behind the approach lies in the large body of evidence from affective neuroscience showing that when we observe emotional facial expressions, we react with congruent facial mimicry. Further, in more complex situations, affect understanding is likely to rely on a comprehensive representation that grounds the reconstruction of the bodily state associated with the displayed emotion. We show that our approach can address such problems in a unified and principled way, thus avoiding ad hoc heuristics while minimising learning effort. Moreover, our model improves the inferred belief through an inner loop of measurements and predictions within the central affect state space that realises the dynamics of affect enactment. The results achieved so far have been obtained on two publicly available multimodal corpora.
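
    To make the inner loop of measurements and predictions more concrete, here is a minimal, hedged sketch of a predict-update cycle over a two-dimensional latent affect state (e.g. valence and arousal) observed through two stand-in channels for facial and physiological measurements; the linear-Gaussian form, the matrices, and the variable names are illustrative assumptions and not the thesis's actual deep latent model.

```python
# Minimal, assumption-laden sketch: a linear-Gaussian predict/update loop over a
# 2-D latent affect state; the thesis itself uses a deep latent model, not this filter.
import numpy as np

A = np.eye(2)                      # latent affect dynamics (assumed: slow drift)
Q = 0.01 * np.eye(2)               # process noise on the latent state
H = np.vstack([np.eye(2),          # "facial" channel observes both latent dimensions
               np.array([[0.0, 1.0]])])  # "physiological" channel mostly tracks arousal
R = 0.1 * np.eye(3)                # measurement noise for the 3 observed features

def predict(mean, cov):
    """Prediction step: propagate the latent affect belief forward in time."""
    return A @ mean, A @ cov @ A.T + Q

def update(mean, cov, z):
    """Measurement step: correct the belief with the multimodal observation z."""
    S = H @ cov @ H.T + R
    K = cov @ H.T @ np.linalg.inv(S)              # gain weighting the correction
    return mean + K @ (z - H @ mean), (np.eye(2) - K @ H) @ cov

mean, cov = np.zeros(2), np.eye(2)                # initial belief over (valence, arousal)
for z in [np.array([0.4, 0.7, 0.6]), np.array([0.5, 0.8, 0.9])]:  # toy observations
    mean, cov = predict(mean, cov)
    mean, cov = update(mean, cov, z)
print(mean)                                       # refined affect estimate after the loop
```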

    Annotated Bibliography: Anticipation

    Consciousness in Artificial Intelligence: Insights from the Science of Consciousness

    Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive "indicator properties" of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems that satisfy these indicators.
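
    Purely as a hypothetical rendering of the report's indicator-based method, the sketch below tallies which indicator properties a system under review satisfies; the indicator descriptions, the example judgements, and the simple counting rule are invented placeholders rather than the report's actual rubric.

```python
# Hypothetical sketch of an indicator-property assessment; the indicators listed and
# the counting rule are placeholders, not the report's actual methodology.
from dataclasses import dataclass

@dataclass
class Indicator:
    theory: str        # theory the indicator property is derived from
    description: str   # computational property to look for in the system
    satisfied: bool    # assessor's judgement for the system under review

def assess(indicators):
    """Summarise which indicator properties a system satisfies."""
    met = [i for i in indicators if i.satisfied]
    return len(met), len(indicators), [i.description for i in met]

# Toy assessment of a hypothetical system (judgements invented for illustration).
example = [
    Indicator("Recurrent processing theory", "algorithmic recurrence in perception", True),
    Indicator("Global workspace theory", "limited-capacity workspace with global broadcast", False),
    Indicator("Higher-order theories", "metacognitive monitoring of first-order states", False),
    Indicator("Attention schema theory", "predictive model of its own attention", False),
]
met, total, details = assess(example)
print(f"{met}/{total} indicator properties satisfied: {details}")
```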