
    Motivations, Values and Emotions: 3 sides of the same coin

    This position paper speaks to the interrelationships between the three concepts of motivations, values, and emotions. Motivations prime actions, values serve to choose between motivations, emotions provide a common currency for values, and emotions implement motivations. While conceptually distinct, the three are so pragmatically intertwined that they differ primarily according to the point of view one takes. To make these points more transparent, we briefly describe the three in the context of a cognitive architecture, the LIDA model, for software agents and robots that models human cognition, including a developmental period. We also compare the LIDA model with other models of cognition, some involving learning and emotions. Finally, we conclude that artificial emotions will prove most valuable as implementers of motivations in situations requiring learning and development.

    AI Extenders and the Ethics of Mental Health

    The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this chapter we suggest that using AI extenders, i.e., tightly coupled cognitive extenders that are imbued with machine learning and other ‘artificially intelligent’ tools, presents both new ethical challenges and opportunities for mental health. We focus on several mental health conditions that can develop differently through the use of AI extenders by people with cognitive disorders, and then discuss some of the related opportunities and challenges.

    Artificial Intelligence in Education

    Artificial Intelligence (AI) technologies have been researched in educational contexts for more than 30 years (Woolf 1988; Cumming and McDougall 2000; du Boulay 2016). More recently, commercial AI products have also entered the classroom. However, while many assume that Artificial Intelligence in Education (AIED) means students taught by robot teachers, the reality is more prosaic yet still has the potential to be transformative (Holmes et al. 2019). This chapter introduces AIED, an approach that has so far received little mainstream attention, both as a set of technologies and as a field of inquiry. It discusses AIED’s AI foundations, its use of models, its possible future, and the human context. It begins with some brief examples of AIED technologies.

    A crucial psycholinguistic prerequisite to reading: Children's metalinguistic awareness


    Being a Teacher in an Era of Uncertainty and Perplexity

    The ambitious challenges of the contemporary digital age require the development in each citizen of higher-order cognitive and affective capacities, which allow expert thinking and effective communication, decision making in situations of uncertainty, problem-solving, and innovative proposals in economic, cultural, and political contexts that are increasingly confusing, fleeting, and complex. The text discusses the nature and meaning of a new school, a new pedagogical culture, and a new professional teacher to face the magnitude of these challenges: to provoke, guide, and stimulate the passage of each learner from information to knowledge, and from knowledge to wisdom. More specifically, the formation of the “practical thinking” of contemporary teachers is analyzed and discussed as one of the key axes of their satisfactory professional development. What does this “practical thinking” mean in initial and ongoing teacher training? Is it possible to develop “practical thinking” in the current Spanish institutions of teacher training?

    A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents

    Recently there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of Artificial General Intelligence, or AGI. Moral decision making is arguably one of the most challenging tasks for computational approaches to higher order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics or Friendly AI. In this paper we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model we will demonstrate how moral decisions can be made in many domains using the same mechanisms that enable general decision making. Comprehensive models of human cognition typically aim for compatibility with recent research in the cognitive and neural sciences. Global Workspace Theory (GWT), proposed by the neuropsychologist Bernard Baars (1988), is a highly regarded model of human cognition that is currently being computationally instantiated in several software implementations. LIDA (Franklin et al. 2005) is one such computational implementation. LIDA is both a set of computational tools and an underlying model of human cognition, which provides mechanisms that are capable of explaining how an agent’s selection of its next action arises from bottom-up collection of sensory data and top-down processes for making sense of its current situation. 
We will describe how the LIDA model helps integrate emotions into the human decision-making process, and elucidate a process whereby an agent can work through an ethical problem to reach a solution that takes account of ethically relevant factors.

    2018 Annual Research Symposium Abstract Book

    2018 annual volume of abstracts for science research projects conducted by students at Trinity College.