163 research outputs found

    How women think robots perceive them – as if robots were men

    Get PDF

    CernoCAMAL : a probabilistic computational cognitive architecture

    Get PDF
    This thesis presents one possible way to develop a computational cognitive architecture, dubbed CernoCAMAL, that can be used to govern artificial minds probabilistically. The primary aim of the CernoCAMAL research project is to investigate how its predecessor architecture CAMAL can be extended to reason probabilistically about domain model objects through perception, and how the probability formalism can be integrated into its BDI (Belief-Desire-Intention) model to coalesce a number of mechanisms and processes. The motivation and impetus for extending CAMAL and developing CernoCAMAL is the considerable evidence that probabilistic thinking and reasoning is linked to cognitive development and plays a role in cognitive functions such as decision making and learning. This leads us to believe that a probabilistic reasoning capability is an essential part of human intelligence. Thus, it should be a vital part of any system that attempts to emulate human intelligence computationally. The extensions and augmentations to CAMAL, which are the main contributions of the CernoCAMAL research project, are as follows:
    - The integration of the EBS (Extended Belief Structure), which associates a probability value with every belief statement in order to represent degrees of belief numerically.
    - The inclusion of the CPR (CernoCAMAL Probabilistic Reasoner), which reasons probabilistically over the goal- and task-oriented perceptual feedback generated by reactive sub-systems.
    - The compatibility of the probabilistic BDI model with the affect and motivational models and the affective and motivational valences used throughout CernoCAMAL.
    A succession of experiments in simulation and robotic testbeds is carried out to demonstrate improvements and increased efficacy in CernoCAMAL’s overall cognitive performance. A discussion and critical appraisal of the experimental results, together with a summary, a number of potential future research directions, and some closing remarks, conclude the thesis.
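
    As a rough illustration of the data structure the EBS describes (a belief statement paired with a numeric degree of belief that is revised by perceptual evidence), the Python sketch below uses hypothetical names and a generic Bayesian update; it is an assumption made for illustration, not code from the thesis.

        from dataclasses import dataclass

        @dataclass
        class ExtendedBelief:
            statement: str        # e.g. "obstacle(ahead)"
            probability: float    # degree of belief in [0, 1]

            def update(self, likelihood_true: float, likelihood_false: float) -> None:
                # Generic Bayesian revision of the degree of belief given new evidence.
                prior = self.probability
                evidence = likelihood_true * prior + likelihood_false * (1.0 - prior)
                if evidence > 0.0:
                    self.probability = likelihood_true * prior / evidence

        belief = ExtendedBelief("obstacle(ahead)", 0.5)
        belief.update(likelihood_true=0.9, likelihood_false=0.2)   # sensor reports an obstacle
        print(round(belief.probability, 2))                        # -> 0.82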

    A Personalized Support Agent for Depressed Patients

    Get PDF

    A society of mind approach to cognition and metacognition in a cognitive architecture

    Get PDF
    This thesis investigates the concept of mind as a control system using the "Society of Agents" metaphor. "Society of Agents" describes collective behaviours of simple and intelligent agents. "Society of Mind" is more than a collection of task-oriented and deliberative agents; it is a powerful concept for mind research and can benefit from the use of metacognition. The aim is to develop a self-configurable computational model using the concept of metacognition. A six-tiered SMCA (Society of Mind Cognitive Architecture) control model is designed that relies on a society of agents operating using metrics associated with the principles of artificial economics in animal cognition. This research investigates the concept of metacognition as a powerful catalyst for control, unification and self-reflection. Metacognition is applied to BDI models with respect to planning, reasoning, decision making, self-reflection, problem solving, learning and the general process of cognition to improve performance. One perspective on how to develop metacognition in an SMCA model is based on the differentiation between metacognitive strategies and metacomponents, or metacognitive aids. Metacognitive strategies denote activities such as metacomprehension (remedial action), metamanagement (self-management) and schema training (meaningful learning over cognitive structures). Metacomponents are aids for the representation of thoughts. Developing an efficient, intelligent and optimal agent through the use of metacognition requires the design of a multi-layered control model that spans simple to complex levels of agent action and behaviour. The SMCA model has been designed and implemented with six layers: reflexive, reactive, deliberative (BDI), learning (Q-learner), metacontrol and metacognition.
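
    The six-layer organisation above lends itself to a simple dispatch picture. The Python sketch below is an illustrative assumption of how such layering could be wired up: the layer names are taken from the abstract, but the interfaces and the dispatch rule are invented for the example and are not the SMCA implementation.

        from typing import Callable, Optional

        Percept = dict
        Action = str
        Layer = Callable[[Percept], Optional[Action]]

        def reflexive(p: Percept) -> Optional[Action]:
            return "retract" if p.get("collision") else None          # hard-wired reflex

        def reactive(p: Percept) -> Optional[Action]:
            return "avoid" if p.get("obstacle") else None             # stimulus-response rule

        def deliberative(p: Percept) -> Optional[Action]:
            return "pursue_goal" if p.get("goal_visible") else None   # BDI-style choice

        def learning(p: Percept) -> Optional[Action]:
            return "explore"                                          # e.g. a Q-learner's pick

        LAYERS: list[Layer] = [reflexive, reactive, deliberative, learning]

        def metacontrol(percept: Percept) -> Action:
            # Metacontrol here simply dispatches to the first layer that proposes an action.
            for layer in LAYERS:
                action = layer(percept)
                if action is not None:
                    return action
            return "idle"

        def metacognition(recent_actions: list[Action]) -> None:
            # Self-reflection stub: a fuller model might reorder LAYERS when actions keep failing.
            pass

        print(metacontrol({"obstacle": True}))   # -> "avoid"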

    Human–agent team dynamics: a review and future research opportunities

    Get PDF
    Humans teaming with intelligent autonomous agents is becoming indispensable in work environments. However, human–agent teams pose significant challenges, as team dynamics are complex, arising from both the task and the social aspects of human–agent interactions. To improve our understanding of human–agent team dynamics, in this article we conduct a systematic literature review. Drawing on Mathieu et al.’s (2019) teamwork model developed for all-human teams, we map the landscape of research on human–agent team dynamics, including structural features, compositional features, mediating mechanisms, and the interplay of these features and mechanisms. We reveal that research on human–agent team dynamics is still nascent, with a particular focus on information sharing, trust development, agents’ human-likeness behaviors, shared cognitions, situation awareness, and function allocation. Gaps remain in many areas of team dynamics, such as team processes, adaptability, shared leadership, and team diversity. We offer various interdisciplinary pathways to advance research on human–agent teams.

    Affective Motivational Collaboration Theory

    Get PDF
    Existing computational theories of collaboration explain some of the important concepts underlying collaboration, e.g., the collaborators' commitments and communication. However, the underlying processes required to dynamically maintain the elements of the collaboration structure are largely unexplained. Our main insight is that in many collaborative situations acknowledging or ignoring a collaborator's affective state can facilitate or impede the progress of the collaboration. This implies that collaborative agents need to employ affect-related processes that (1) use the collaboration structure to evaluate the status of the collaboration, and (2) influence the collaboration structure when required. This thesis develops a new affect-driven computational framework to achieve these objectives and thus empower agents to be better collaborators. Contributions of this thesis are: (1) Affective Motivational Collaboration (AMC) theory, which incorporates appraisal processes into SharedPlans theory. (2) New computational appraisal algorithms based on collaboration structure. (3) Algorithms, such as goal management, that use the output of appraisal to maintain collaboration structures. (4) Implementation of a computational system based on AMC theory. (5) Evaluation of AMC theory via two user studies to a) validate our appraisal algorithms, and b) investigate the overall functionality of our framework within an end-to-end system with a human and a robot.
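
    To make the appraisal-over-collaboration-structure idea concrete, the Python sketch below shows a toy appraisal of an observed event against a single shared goal, whose output then drives a trivial goal-management step. All names and rules here are assumptions made for illustration, not the algorithms developed in the thesis.

        from dataclasses import dataclass

        @dataclass
        class SharedGoal:
            name: str

        @dataclass
        class Appraisal:
            relevance: bool
            desirability: float    # negative values mark events that hurt the collaboration

        def appraise(goal: SharedGoal, event: str) -> Appraisal:
            # Appraise an event against the collaboration structure (here, one shared goal).
            relevant = goal.name in event
            desirability = 0.0 if not relevant else (1.0 if "succeeded" in event else -1.0)
            return Appraisal(relevance=relevant, desirability=desirability)

        def manage_goal(goal: SharedGoal, appraisal: Appraisal) -> str:
            # Goal management uses the appraisal output to maintain the collaboration structure.
            if appraisal.relevance and appraisal.desirability < 0:
                return f"replan({goal.name})"
            return "continue"

        goal = SharedGoal("fetch_tool")
        print(manage_goal(goal, appraise(goal, "fetch_tool failed")))   # -> replan(fetch_tool)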

    Good People Don't Need Medication: How Moral Character Beliefs Affect Medical Decision-Making

    Get PDF
    How do people make decisions? Prior research focuses on how people's cost-benefit assessments affect which medical treatments they choose. We propose that people also worry about what these health decisions signal about who they are. Across four studies, we find that medication is thought to be "the easy way out," signaling a lack of willpower and character. These moral beliefs lower the appeal of medications. Manipulating these beliefs--by framing medication as a signal of superior willpower or by highlighting the idea that treatment choice is just a preference--increases preferences for medication.

    The Social Consequences of Absolute Moral Proclamations

    Get PDF
    Across six studies (N = 3348), we find that people prefer targets who make absolute proclamations (i.e., "It is never okay for people to lie") over targets who make ambiguous proclamations ("It is sometimes okay for people to lie"), even when both targets tell equivalent lies. Preferences for absolutism stem from the belief that moral proclamations send a true signal about moral character--they are not cheap talk. Therefore, absolute proclamations signal moral character, despite also signaling hypocrisy. This research sheds light on the consequences of absolute proclamations and identifies circumstances in which hypocrisy is preferred over consistency.