
    Formulating Consciousness: A Comparative Analysis of Searle’s and Dennett’s Theory of Consciousness

    This research examines which theory of mind, Searle’s or Dennett’s, better explains human consciousness. First, the distinctions between dualism and materialism are discussed, covering substance dualism, property dualism, physicalism, and functionalism. This part identifies the central issue tackled by the various theories of mind: the missing connection between input stimulus (neuronal reactions) and behavioral disposition, namely consciousness. The discussion then turns to Searle’s biological naturalism and Dennett’s multiple drafts model as two attempts to resolve this issue. Their differences are highlighted and analyzed in relation to their respective roots in dualism and materialism, and the two theories are examined on how each answers the questions of consciousness.

    A Tale of Two Animats: What does it take to have goals?

    What does it take for a system, biological or not, to have goals? Here, this question is approached in the context of in silico artificial evolution. By examining the informational and causal properties of artificial organisms ('animats') controlled by small, adaptive neural networks (Markov Brains), this essay discusses necessary requirements for intrinsic information, autonomy, and meaning. The focus lies on comparing two types of Markov Brains that evolved in the same simple environment: one with purely feedforward connections between its elements, the other with an integrated set of elements that causally constrain each other. While both types of brains 'process' information about their environment and are equally fit, only the integrated one forms a causally autonomous entity above a background of external influences. This suggests that to assess whether goals are meaningful for a system itself, it is important to understand what the system is, rather than what it does.
    Comment: This article is a contribution to the FQXi 2016-2017 essay contest "Wandering Towards a Goal".
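
    To make the setup concrete, here is a minimal Python sketch of a Markov Brain in the spirit described above: binary nodes updated by small logic gates with lookup tables, with a purely feedforward wiring and an 'integrated' wiring whose hidden elements constrain each other. The node layout, gate tables, and wiring are illustrative assumptions, not the paper's actual implementation.

        import itertools
        import random

        class Gate:
            """Deterministic logic gate: reads some nodes at t, writes one node at t+1."""
            def __init__(self, inputs, output, table):
                self.inputs = inputs   # indices of nodes read
                self.output = output   # index of node written
                self.table = table     # maps input bit-tuple -> 0 or 1

            def fire(self, state):
                return self.output, self.table[tuple(state[i] for i in self.inputs)]

        def step(state, gates, n_sensors=2):
            """One update; sensors keep externally set values, gate outputs are OR-ed."""
            new_state = [0] * len(state)
            new_state[:n_sensors] = state[:n_sensors]
            for gate in gates:
                out, value = gate.fire(state)
                new_state[out] |= value   # multiple gates writing one node are OR-ed
            return new_state

        def random_table(n_inputs):
            return {bits: random.randint(0, 1)
                    for bits in itertools.product((0, 1), repeat=n_inputs)}

        # Nodes: 0-1 sensors, 2-3 hidden, 4-5 motors.
        feedforward = [Gate((0, 1), 2, random_table(2)),   # sensors -> hidden
                       Gate((0,), 3, random_table(1)),
                       Gate((2, 3), 4, random_table(2)),   # hidden -> motors
                       Gate((2,), 5, random_table(1))]
        # Integrated variant: hidden elements additionally constrain each other.
        integrated = feedforward + [Gate((3,), 2, random_table(1)),
                                    Gate((2,), 3, random_table(1))]

        state = [1, 0, 0, 0, 0, 0]
        print(step(state, integrated))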

    Motivations, Values and Emotions: 3 sides of the same coin

    This position paper speaks to the interrelationships between the three concepts of motivations, values, and emotions. Motivations prime actions, values serve to choose between motivations, emotions provide a common currency for values, and emotions implement motivations. While conceptually distinct, the three are so pragmatically intertwined that they differ primarily in the point of view taken. To make these points more transparent, we briefly describe the three in the context of a cognitive architecture, the LIDA model, for software agents and robots that models human cognition, including a developmental period. We also compare the LIDA model with other models of cognition, some involving learning and emotions. Finally, we conclude that artificial emotions will prove most valuable as implementers of motivations in situations requiring learning and development.
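
    The division of labor sketched in the abstract (motivations prime actions, values choose between motivations, emotions supply a common currency) can be illustrated with a toy Python snippet. This is an assumed illustration of the conceptual relationship, not the LIDA model's actual code; the scalar valence scale is an assumption.

        from dataclasses import dataclass

        @dataclass
        class Motivation:
            name: str
            action: str       # the action this motivation primes
            valence: float    # emotional "common currency", assumed in [-1, 1]

        def choose_action(motivations):
            """Values select among motivations by comparing emotional valence."""
            best = max(motivations, key=lambda m: m.valence)
            return best.action

        drives = [Motivation("curiosity", "explore", 0.4),
                  Motivation("hunger", "forage", 0.7),
                  Motivation("fatigue", "rest", 0.2)]
        print(choose_action(drives))  # -> "forage": highest-valence motivation wins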

    Embodied Artificial Intelligence through Distributed Adaptive Control: An Integrated Framework

    In this paper, we argue that the future of Artificial Intelligence research resides in two keywords: integration and embodiment. We support this claim by analyzing the recent advances of the field. Regarding integration, we note that the most impactful recent contributions have been made possible through the integration of recent Machine Learning methods (based in particular on Deep Learning and Recurrent Neural Networks) with more traditional ones (e.g. Monte-Carlo tree search, goal babbling exploration, or addressable memory systems). Regarding embodiment, we note that the traditional benchmark tasks (e.g. visual classification or board games) are becoming obsolete as state-of-the-art learning algorithms approach or even surpass human performance in most of them, a trend that has recently encouraged the development of first-person 3D game platforms embedding realistic physics. Building upon this analysis, we first propose an embodied cognitive architecture integrating heterogeneous sub-fields of Artificial Intelligence into a unified framework. We demonstrate the utility of our approach by showing how major contributions of the field can be expressed within the proposed framework. We then claim that benchmarking environments need to reproduce ecologically valid conditions for bootstrapping the acquisition of increasingly complex cognitive skills through the concept of a cognitive arms race between embodied agents.
    Comment: Updated version of the paper accepted to the ICDL-Epirob 2017 conference (Lisbon, Portugal).
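
    As one concrete instance of the integration the paper points to (learned models combined with Monte-Carlo tree search), here is a hedged sketch of AlphaGo-style PUCT selection, where a learned policy prior biases classical tree search. The priors and statistics below are made-up stand-ins for the outputs of any learned policy network.

        import math

        def puct_score(child, parent_visits, c_puct=1.5):
            # mean value so far (exploitation) ...
            q = child["value_sum"] / child["visits"] if child["visits"] else 0.0
            # ... plus a prior-weighted exploration bonus from the learned policy
            u = c_puct * child["prior"] * math.sqrt(parent_visits) / (1 + child["visits"])
            return q + u

        def select_child(children, parent_visits):
            """MCTS selection step guided by learned priors (PUCT rule)."""
            return max(children, key=lambda ch: puct_score(ch, parent_visits))

        # Two candidate moves with priors from a hypothetical policy network.
        children = [{"prior": 0.8, "visits": 10, "value_sum": 4.0},
                    {"prior": 0.2, "visits": 1, "value_sum": 0.9}]
        print(select_child(children, parent_visits=11))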

    When is an action caused from within? Quantifying the causal chain leading to actions in simulated agents

    An agent's actions can be influenced by external factors through the inputs it receives from the environment, as well as by internal factors, such as memories or intrinsic preferences. The extent to which an agent's actions are "caused from within", as opposed to being externally driven, should depend on its sensor capacity as well as on environmental demands for memory and context-dependent behavior. Here, we test this hypothesis using simulated agents ("animats"), equipped with small adaptive Markov Brains (MB) that evolve to solve a perceptual-categorization task under conditions that vary with regard to the agents' sensor capacity and task difficulty. Using a novel formalism developed to identify and quantify the actual causes of occurrences ("what caused what?") in complex networks, we evaluate the direct causes of the animats' actions. In addition, we extend this framework to trace the causal chain ("causes of causes") leading to an animat's actions back in time, and compare the obtained spatio-temporal causal history across task conditions. We found that measures quantifying the extent to which an animat's actions are caused by internal factors (as opposed to being driven by the environment through its sensors) varied consistently with the defining aspects of the task conditions the animats evolved to thrive in.
    Comment: Submitted to and accepted at the Alife 2019 conference. Revised version: edits include adding more references to relevant work and clarifying minor points in response to reviewers.
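
    A drastically simplified sketch of the backward-tracing idea: walk a small network's wiring back in time from a motor node and record how much of its causal history lies in internal nodes versus sensors. The paper uses a formal actual-causation measure over system states; this toy only follows raw connectivity, and the wiring is an invented example.

        SENSORS = {0, 1}
        WIRING = {2: [0, 1],   # node -> nodes it reads at the previous time step
                  3: [2, 3],   # node 3 reads itself: an internal, memory-like cause
                  4: [2, 3]}   # motor node

        def causal_history(node, depth):
            """Nodes appearing in `node`'s causal chain up to `depth` steps back."""
            frontier, history = {node}, []
            for _ in range(depth):
                frontier = {src for n in frontier if n in WIRING
                            for src in WIRING[n]}
                history.append(frontier)
            return history

        for t, nodes in enumerate(causal_history(4, depth=3), start=1):
            internal = nodes - SENSORS
            print(f"t-{t}: causes={sorted(nodes)}, "
                  f"internal fraction={len(internal) / len(nodes):.2f}")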

    Can biological quantum networks solve NP-hard problems?

    There is a widespread view that the human brain is so complex that it cannot be efficiently simulated by universal Turing machines. During the last decades the question has therefore been raised whether we need to consider quantum effects to explain the imagined cognitive power of a conscious mind. This paper presents a personal view of several fields of philosophy and computational neurobiology in an attempt to suggest a realistic picture of how the brain might work as a basis for perception, consciousness and cognition. The purpose is to identify and evaluate instances where quantum effects might play a significant role in cognitive processes. Not surprisingly, the conclusion is that quantum-enhanced cognition and intelligence are very unlikely to be found in biological brains. Quantum effects may certainly influence the functionality of various components and signalling pathways at the molecular level in the brain network, like ion ports, synapses, sensors, and enzymes. This might evidently influence the functionality of some nodes and perhaps even the overall intelligence of the brain network, but hardly give it any dramatically enhanced functionality. So, the conclusion is that biological quantum networks can only approximately solve small instances of NP-hard problems. On the other hand, artificial intelligence and machine learning implemented in complex dynamical systems based on genuine quantum networks can certainly be expected to show enhanced performance and quantum advantage compared with classical networks. Nevertheless, even quantum networks can only be expected to efficiently solve NP-hard problems approximately. In the end it is a question of precision: Nature is approximate.
    Comment: 38 pages.
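
    To make "approximately solve NP-hard problems" concrete, here is a small sketch of greedy local search on MAX-CUT, an NP-hard problem: simple, physically plausible update rules find a good cut quickly but offer no optimality guarantee. The graph is an arbitrary example, unrelated to the paper.

        def cut_size(edges, side):
            """Number of edges crossing the partition."""
            return sum(1 for u, v in edges if side[u] != side[v])

        def local_search_maxcut(n, edges):
            side = [i % 2 for i in range(n)]          # arbitrary starting partition
            best = cut_size(edges, side)
            improved = True
            while improved:                           # terminates: cut size only grows
                improved = False
                for v in range(n):
                    side[v] ^= 1                      # tentatively flip node v
                    new = cut_size(edges, side)
                    if new > best:
                        best, improved = new, True    # keep the improving flip
                    else:
                        side[v] ^= 1                  # revert
            return side, best                         # a local optimum, not certified

        edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
        print(local_search_maxcut(4, edges))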

    Social Situatedness: Vygotsky and Beyond

    The concept of ‘social situatedness’, i.e. the idea that the development of individual intelligence requires a social (and cultural) embedding, has recently received much attention in cognitive science and artificial intelligence research. The work of Lev Vygotsky, who put forward this view as early as the 1920s, has influenced the discussion to some degree, but it still remains far from well known. This paper therefore aims to give an overview of his cognitive development theory and to discuss its relation to more recent work in primatology and socially situated artificial intelligence, in particular humanoid robotics.