
    Motivations, Values and Emotions: 3 sides of the same coin

This position paper speaks to the interrelationships among three concepts: motivations, values, and emotions. Motivations prime actions, values serve to choose between motivations, emotions provide a common currency for values, and emotions implement motivations. While conceptually distinct, the three are so pragmatically intertwined that they differ primarily according to the point of view taken. To make these points more transparent, we briefly describe the three in the context of a cognitive architecture, the LIDA model, for software agents and robots that models human cognition, including a developmental period. We also compare the LIDA model with other models of cognition, some involving learning and emotions. Finally, we conclude that artificial emotions will prove most valuable as implementers of motivations in situations requiring learning and development.

    The Turing Test and the Zombie Argument

In this paper I shall try to place some implications of the Turing test and the so-called zombie arguments in the context of the philosophy of mind. My intention is not to compose a review of the relevant concepts, but to confront the central problems that originate from the Turing test, as a paradigm of the computational theory of mind, with the arguments that contest the tenability of this thesis. In the first section (Section I), I set out the basic computationalist presuppositions; by examining the premises of the Turing Test (TT) I argue that the TT, as a functionalist paradigm concept, underlies the computational theory of mind. I treat computationalism as the thesis that defines the human cognitive system as a physical, symbolic and semantic system, in such a manner that the description of its physical states is isomorphic with the description of its symbolic states, and this isomorphism is semantically interpretable. In the second section (Section II), I discuss the zombie arguments and the epistemological-modal problems connected with them, which challenge the tenability of computationalism. The proponents of the zombie arguments build their attack on computationalism on thought experiments involving creatures that are behaviorally, functionally and physically indistinguishable from human beings, yet have no phenomenal experiences. According to these thought experiments, if zombies are possible, then computationalism does not offer a satisfying explanation of consciousness. I compare my thesis from Section I with recent versions of the zombie arguments, which claim that computationalism fails to explain qualitative phenomenal experience. I conclude that, despite the weaknesses of computationalism that the zombie arguments make obvious, these arguments are not the last word on the explanatory force of computationalism.

    Artificial Brains and Hybrid Minds

The paper develops two related thought experiments exploring variations on an ‘animat’ theme. Animats are hybrid devices with both artificial and biological components. Traditionally, ‘components’ have been construed in concrete terms, as physical parts or constituent material structures. Many fascinating issues arise within this context of hybrid physical organization. However, within the context of functional/computational theories of mentality, demarcations based purely on material structure are unduly narrow. It is abstract functional structure which does the key work in characterizing the respective ‘components’ of thinking systems, while the ‘stuff’ of material implementation is of secondary importance. Thus the paper extends the received animat paradigm and investigates some intriguing consequences of expanding the conception of bio-machine hybrids to include abstract functional and semantic structure. In particular, the thought experiments consider cases of mind-machine merger where there is no physical Brain-Machine Interface: indeed, the material human body and brain have been removed from the picture altogether. The first experiment illustrates some intrinsic theoretical difficulties in attempting to replicate the human mind in an alternative material medium, while the second reveals some deep conceptual problems in attempting to create a form of truly Artificial General Intelligence.

    Artificial consciousness and the consciousness-attention dissociation

Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also to reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems; these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness.

    An interoceptive predictive coding model of conscious presence

We describe a theoretical model of the neurocognitive mechanisms underlying conscious presence and its disturbances. The model is based on interoceptive prediction error and is informed by predictive models of agency, general models of hierarchical predictive coding and dopaminergic signaling in cortex, the role of the anterior insular cortex (AIC) in interoception and emotion, and cognitive neuroscience evidence from studies of virtual reality and of psychiatric disorders of presence, specifically depersonalization/derealization disorder. The model associates presence with successful suppression by top-down predictions of informative interoceptive signals evoked by autonomic control signals and, indirectly, by visceral responses to afferent sensory signals. The model connects presence to agency by allowing that predicted interoceptive signals will depend on whether afferent sensory signals are determined, by a parallel predictive-coding mechanism, to be self-generated or externally caused. Anatomically, we identify the AIC as the likely locus of key neural comparator mechanisms. Our model integrates a broad range of previously disparate evidence, makes predictions for conjoint manipulations of agency and presence, offers a new view of emotion as interoceptive inference, and represents a step toward a mechanistic account of a fundamental phenomenological property of consciousness.
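The mechanism summarized above is a form of hierarchical predictive coding: a top-down prediction is compared against an incoming interoceptive signal, and the residual prediction error is progressively suppressed. As a minimal, hypothetical sketch (not the authors' model), the Python loop below shows a single-level version of this idea; the signal value, learning rate and step count are illustrative assumptions only.

```python
# Minimal single-level predictive-coding sketch (illustrative, not the
# authors' model). A top-down prediction of an interoceptive signal is
# updated to minimise prediction error; small residual error corresponds,
# in the model described above, to successful suppression ("presence"),
# while persistent error would correspond to disturbances of presence.

def suppress_interoceptive_error(observed, prediction=0.0,
                                 learning_rate=0.2, steps=50):
    """Iteratively refine a top-down prediction of an interoceptive signal."""
    errors = []
    for _ in range(steps):
        error = observed - prediction        # bottom-up prediction error
        prediction += learning_rate * error  # top-down prediction update
        errors.append(abs(error))
    return prediction, errors

if __name__ == "__main__":
    # A hypothetical interoceptive signal (e.g. a visceral response amplitude).
    final_prediction, errors = suppress_interoceptive_error(observed=1.0)
    print(f"final prediction: {final_prediction:.3f}")
    print(f"residual error:   {errors[-1]:.5f}")  # near zero => suppressed
```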

    The Mental Database

This article uses database, evolution and physics considerations to suggest how the mind stores and processes its data. Its approach is novel in two respects. (A) The comparison of the capabilities of the mind with those of a modern relational database while conserving phenomenality: the strong functional similarity of the two systems leads to the conclusion that the mind may be profitably described as a mental database, and the need for material/mental bridging and addressing indexes is discussed. (B) The consideration of which neural correlates of consciousness (NCC) between sensorimotor data and instrumented observation one can hope to obtain using current biophysics: it is deduced that what is seen using the various brain-scanning methods reflects only that part of the current activity transactions (e.g. visualizing) which update and interrogate the mind, but not the contents of the integrated mental database which constitutes the mind itself. This approach yields reasons why there is much neural activity in an area to which a conscious function is ascribed (e.g. the amygdala is associated with fear), yet no visible part of that activity can be clearly identified as phenomenal. The concept is then situated in a Penrosian expanded physical environment, requiring evolutionary continuity, modularity and phenomenality. Several novel Darwinian advantages arising from the approach are described.
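Read literally, the relational-database analogy can be illustrated with a toy sketch. The table, column and index names below (mental_content, neural_activity, bridge_idx) are hypothetical and not taken from the article; the point is only to show a bridging/addressing index linking activity records to stored content, where the brain-scanning analogue is that an observer sees the activity transaction (the query), not the integrated store it interrogates.

```python
import sqlite3

# Toy illustration, not the article's model: the "mental database" analogy
# taken literally as a relational store. All names here are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE mental_content (id INTEGER PRIMARY KEY, concept TEXT)")
cur.execute("CREATE TABLE neural_activity (region TEXT, content_id INTEGER, "
            "FOREIGN KEY(content_id) REFERENCES mental_content(id))")
# The 'bridging' index: addresses mental contents from material activity records.
cur.execute("CREATE INDEX bridge_idx ON neural_activity(content_id)")

cur.execute("INSERT INTO mental_content VALUES (1, 'fear')")
cur.execute("INSERT INTO neural_activity VALUES ('amygdala', 1)")

# A scan-like observation captures only this activity transaction (the query),
# not the integrated store behind it.
cur.execute("SELECT region, concept FROM neural_activity "
            "JOIN mental_content ON content_id = mental_content.id")
print(cur.fetchall())  # [('amygdala', 'fear')]
conn.close()
```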

    Robot Models of Mental Disorders

Alongside technological tools to support wellbeing and the treatment of mental disorders, models of these disorders can also be invaluable tools to understand, support and improve these conditions. Robots can provide ecologically valid models that take into account embodiment-, interaction- and context-related elements. Focusing on obsessive-compulsive spectrum disorders, in this paper we discuss some of the potential contributions of robot models and relate them to other models used in psychology and psychiatry, particularly animal models. We also present some initial recommendations for their meaningful design and rigorous use.