
    Deciding the different robot roles for patient cognitive training

    Alzheimer’s Disease (AD) and Mild Cognitive Impairment (MCI) represent a major challenge for health systems within the aging population. New and better instruments will be crucial to assess disease severity and progression, as well as to improve treatment, stimulation, and rehabilitation. Clinical experts employ several methods to detect, assess, and quantify cognitive impairments such as MCI or AD. The Syndrom Kurztest (SKT) neuropsychological battery is a simple and short test that measures cognitive decline by assessing memory, attention, and related cognitive functions, taking into account the speed of information processing. In this paper, we present a decision system, embedded in a robot, that can set up a productive interaction with a patient and can be employed by the caregiver to motivate and support the patient while performing cognitive exercises such as the SKT. We propose two different interaction loops. First, the robot interacts with the caregiver to specify the patient’s mental and physical impairments and to indicate the goal of the exercise. This information is used to determine the desired robot behaviour (human-centric or robot-centric, and preferred interaction modalities). Second, the robot interacts with the patient and adapts its actions to engage and assist them in completing the exercise. Two batches of experiments were conducted, and the results indicate that the robot can take advantage of the initial interaction with the caregiver to provide quicker personalization, and that it can adapt to different user responses and provide support and assistance at different levels of interaction.

    Learning robot policies using a high-level abstraction persona-behaviour simulator

    © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    Collecting data in Human-Robot Interaction (HRI) for training learning agents can be a hard task to accomplish. This is especially true when the target users are older adults with dementia, since data collection usually requires hours of interactions and puts a considerable workload on the user. This paper addresses the problem of importing the Personas technique into HRI to create fictional patient profiles. We propose a Persona-Behaviour Simulator tool that provides, at a high level of abstraction, the user’s actions during an HRI task, and we apply it to cognitive training exercises for older adults with dementia. It consists of a Persona Definition that characterizes a patient along four dimensions and a Task Engine that provides information regarding the task complexity. We build a simulated environment in which the high-level user actions are provided by the simulator, and the robot’s initial policy is learned using a Q-learning algorithm. The results show that the current simulator provides a reasonable initial policy for a defined Persona profile. Moreover, the learned robot assistance has proved robust to potential changes in the user’s behaviour. In this way, we can speed up the fine-tuning of the rough policy during real interactions to tailor the assistance to the given user. We believe the presented approach can be easily extended to account for other types of HRI tasks, for example when input data is required to train a learning algorithm but data collection is very expensive or unfeasible. We advocate that simulation is a convenient tool in these cases.
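    The abstract above describes learning an initial robot policy with Q-learning against a simulated persona. The following is a minimal sketch of that idea, assuming a toy persona model and illustrative state/action names; the real Persona Definition, Task Engine, and reward design in the paper are not reproduced here.

```python
import random

random.seed(0)  # deterministic run for the sketch

# Assumed assistance actions; the paper's action set may differ.
ACTIONS = ["wait", "encourage", "hint", "demonstrate"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def persona_response(state, action):
    """Toy stand-in for the Persona-Behaviour Simulator: returns
    (next_state, reward, done). A real simulator would derive the
    user's reaction from the Persona Definition and Task Engine."""
    progress = state + (1 if action != "wait" else 0)
    done = progress >= 5
    reward = 10.0 if done else -1.0  # penalise time spent on the exercise
    return min(progress, 5), reward, done

# Tabular Q-values over exercise-progress states 0..5.
Q = {(s, a): 0.0 for s in range(6) for a in ACTIONS}

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = persona_response(state, action)
        # Standard Q-learning update toward the bootstrapped target.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state
```

    After training, the greedy policy derived from `Q` prefers assistive actions over waiting, which is the kind of rough initial policy the authors propose to fine-tune during real interactions.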

    Artificial morality: Making of the artificial moral agents

    Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This demand for moral machines comes from changes in everyday practice, where artificial systems are frequently used in a variety of situations, from home help and elderly care to banking and court algorithms. It is therefore important to create reliable and responsible machines based on the same ethical principles that society demands from people. New challenges in creating such agents appear. There are philosophical questions about a machine’s potential to be an agent, or a moral agent, in the first place. Then comes the problem of social acceptance of such machines, regardless of their theoretical agency status. As a result of efforts to resolve this problem, there are suggestions that additional psychological (emotional and cognitive) competence is needed in otherwise cold moral machines. What makes the endeavour of developing AMAs even harder is the complexity of the technical, engineering aspect of their creation. Implementation approaches such as the top-down, bottom-up, and hybrid approach aim to find the best way of developing fully moral agents, but they encounter their own problems throughout this effort.

    Can We Agree on What Robots Should be Allowed to Do? An Exercise in Rule Selection for Ethical Care Robots

    Future Care Robots (CRs) should be able to balance a patient’s often conflicting rights without ongoing supervision. Many of the trade-offs faced by such a robot will require a degree of moral judgment. Some progress has been made on methods to guarantee robots comply with a predefined set of ethical rules. In contrast, methods for selecting these rules are lacking. Approaches departing from existing philosophical frameworks often do not result in implementable robotic control rules. Machine learning approaches are sensitive to biases in the training data and suffer from opacity. Here, we propose an alternative, empirical, survey-based approach to rule selection. We suggest this approach has several advantages, including transparency and legitimacy. The major challenge for this approach, however, is that a workable solution, or social compromise, has to be found: it must be possible to obtain a consistent and agreed-upon set of rules to govern robotic behavior. In this article, we present an exercise in rule selection for a hypothetical CR to assess the feasibility of our approach. We assume the role of robot developers using a survey to evaluate which robot behavior potential users deem appropriate in a practically relevant setting, i.e., patient non-compliance. We evaluate whether it is possible to find such behaviors through a consensus. Assessing a set of potential robot behaviors, we surveyed the acceptability of robot actions that potentially violate a patient’s autonomy or privacy. Our data support the empirical approach as a promising and cost-effective way to query ethical intuitions, allowing us to select behavior for the hypothetical CR.

    Robot interaction adaptation for healthcare assistance

    Assistive robotics is one of the big players in the technological revolution we are living through. Expectations are extremely high, but the reality is a bit more modest. We present here two realistic initiatives towards the introduction of assistive robots in real care facilities and homes. First, a cognitive training robot for mild dementia patients, able to play board games following caregiver instructions and adapting to the patient’s needs. Second, we present the Robotic MOVit, a novel exercise-enabling control interface for powered wheelchair users. Instead of using a joystick, the user controls the direction and speed of the powered wheelchair by cyclically moving their arms. Both robotic devices can adapt the interaction to the needs of the user and provide insightful information to researchers and clinicians.

    Autonomous Weapons and the Nature of Law and Morality: How Rule-of-Law-Values Require Automation of the Rule of Law

    While Autonomous Weapons Systems have obvious military advantages, there are prima facie moral objections to using them. By way of general reply to these objections, I point out similarities between the structure of law and morality on the one hand and of automata on the other. I argue that these, plus the fact that automata can be designed to lack the biases and other failings of humans, require us to automate the formulation, administration, and enforcement of law as much as possible, including the elements of law and morality that are operated by combatants in war. I suggest that, ethically speaking, deploying a legally competent robot in some legally regulated realm is not much different from deploying a more or less well-armed, vulnerable, obedient, or morally discerning soldier or general into battle, a police officer onto patrol, or a lawyer or judge into a trial. All feature automaticity in the sense of deputation to an agent we do not then directly control. Such relations are well understood and well regulated in morality and law; so there is not much that is philosophically challenging in having robots serve as some of these agents — excepting the implications of the limits of robot technology at a given time for responsible deputation. I then consider this proposal in light of the differences between two conceptions of law, distinguished by whether each sees law as unambiguous rules whose application is inherently uncontroversial, and I consider the prospects for robotizing law on each. Likewise for the prospects of robotizing moral theorizing and moral decision-making. Finally, I identify certain elements of law and morality, noted by the philosopher Immanuel Kant, in which robots can participate only upon being able to set ends and emotionally invest in their attainment. One conclusion is that while affectless autonomous devices might be fit to rule us, they would not be fit to vote with us. For voting is a process for summing felt preferences, and affectless devices would have none to weigh into the sum. Since they don’t care which outcomes obtain, they don’t get to vote on which ones to bring about.

    Cognitive system framework for brain-training exercise based on human-robot interaction

    This is a post-peer-review, pre-copyedit version of an article published in Cognitive Computation. The final authenticated version is available online at: http://dx.doi.org/10.1007/s12559-019-09696-2
    Every 3 seconds, someone worldwide develops dementia. Brain-training exercises, preferably involving physical activity as well, have shown their potential to monitor and improve the brain function of people affected by Alzheimer’s disease (AD) or mild cognitive impairment (MCI). This paper presents a cognitive robotic system designed to assist mild dementia patients during brain-training sessions of sorting tokens, an exercise inspired by the Syndrom KurzTest (SKT) neuropsychological test. The system is able to perceive, learn, and adapt to the user’s behaviour, and is composed of two main modules. The adaptive module is based on representing the human-robot interaction as a planning problem; it adapts to the user’s performance, offering different encouragement and recommendation actions through both verbal and gesture communication in order to minimize the time spent solving the exercise. As safety is a very important issue, the cognitive system is enriched with a safety module that monitors the possibility of physical contact and reacts accordingly. The cognitive system is presented together with its embodiment in a real robot. Simulated experiments are performed to (i) evaluate the adaptability of the system to different patient use-cases and (ii) validate the coherence of the proposed safety module. A real experiment in the lab, with able users, serves as a preliminary evaluation to validate the overall approach. Results in laboratory conditions show that the two presented modules effectively provide additional and essential functionalities to the system, although further work is necessary to guarantee robustness and timely response of the robot before testing it with patients.
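    The abstract above casts the interaction as a planning problem with a safety module vetoing risky actions. The following is a minimal sketch of that coupling, assuming illustrative action names, a hand-picked time-saving model, and a distance-based contact check; none of these are the paper's actual planner or safety monitor.

```python
# Predicted seconds saved per exercise step by each assistive action
# (assumed values for illustration only).
ACTIONS = {
    "verbal_encourage": 2.0,
    "gesture_point": 6.0,
    "verbal_hint": 5.0,
}

def contact_risk(action, user_distance_m):
    """Safety-module stand-in: physical gestures are flagged as risky
    when the user is within reach of the robot arm."""
    return action == "gesture_point" and user_distance_m < 0.5

def plan_action(remaining_steps, user_distance_m):
    """Greedy one-step planner: among actions the safety monitor allows,
    pick the one with the largest predicted time saving; do nothing if
    the exercise is finished or no action is safe."""
    safe = [a for a in ACTIONS if not contact_risk(a, user_distance_m)]
    if not safe or remaining_steps == 0:
        return "no_op"
    return max(safe, key=lambda a: ACTIONS[a])
```

    With the user out of reach, the planner selects the gesture (largest predicted saving); when the user is close, the safety check removes it and the planner falls back to the best verbal action.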

    The Mechanics of Embodiment: A Dialogue on Embodiment and Computational Modeling

    Embodied theories are increasingly challenging traditional views of cognition by arguing that the conceptual representations constituting our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation, and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamouring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensory-motor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part takes the form of a dialogue between two fictional characters: Ernest, the ‘experimenter’, and Mary, the ‘computational modeller’. The dialogue consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches on the most important aspects of grounded theories that should inform computational modeling and, conversely, the impact that computational modeling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modelling.