2,731 research outputs found

    Methodology and themes of human-robot interaction: a growing research field

    Original article can be found at: http://www.intechweb.org/journal.php?id=3. Distributed under the Creative Commons Attribution License: users are free to read, print, download and use the content or part of it, so long as the original author(s) and source are correctly credited. This article discusses the challenges of Human-Robot Interaction, a highly inter- and multidisciplinary area. Themes that are important in current research in this lively and growing field are identified, and selected work relevant to these themes is discussed. Peer reviewed.

    What is a robot companion - friend, assistant or butler?

    The study presented in this paper explored people's perceptions of and attitudes towards the idea of a future robot companion for the home. A human-centred approach was adopted, using questionnaires and human-robot interaction trials to derive data from 28 adults. Results indicated that a large proportion of participants were in favour of a robot companion and saw its potential role as that of an assistant, machine or servant. Few wanted a robot companion to be a friend. Household tasks were preferred to child/animal care tasks. Humanlike communication was desirable for a robot companion, whereas humanlike behaviour and appearance were less essential. Results are discussed in relation to future research directions for the development of robot companions.

    Studies on user control in ambient intelligent systems

    People have a deeply rooted need to experience control and be effective in interactions with their environments. Today, we are surrounded by intelligent systems that make decisions and perform actions for us. This should make life easier, but there is a risk that users experience less control and reject the system. The central question in this thesis is whether we can design intelligent systems that have a degree of autonomy while users maintain a sense of control. We try to achieve this by giving the intelligent system an 'expressive interface': the part that provides information to the user about the internal state, intentions and actions of the system. We examine this question in both the home and the work environment. We find the notion of a 'system personality' useful as a guiding principle for designing interactions with intelligent systems, for domestic robots as well as in building automation. Although the desired system personality varies per application, in both domains a recognizable system personality can be designed through expressive interfaces using motion, light, sound, and social cues. The various studies show that the level of automation and the expressive interface can influence the perceived system personality, the perceived level of control, and users' satisfaction with the system. This thesis shows the potential of the expressive interface as an instrument to help users understand what is going on inside the system and to experience control, which might be essential for the successful adoption of the intelligent systems of the future.

    Human-centred design methods : developing scenarios for robot assisted play informed by user panels and field trials

    Original article can be found at: http://www.sciencedirect.com/ Copyright Elsevier. This article describes the user-centred development of play scenarios for robot assisted play, as part of the multidisciplinary IROMEC project that develops a novel robotic toy for children with special needs. The project investigates how robotic toys can become social mediators, encouraging children with special needs to discover a range of play styles, from solitary to collaborative play (with peers, carers/teachers, parents, etc.). This article explains the developmental process of constructing relevant play scenarios for children with different special needs. Results are presented from consultation with a panel of experts (therapists, teachers, parents) who advised on the play needs of the various target user groups and who helped investigate how robotic toys could be used as a play tool to assist in the children's development. Examples from experimental investigations are provided which have informed the development of scenarios throughout the design process. We conclude by pointing out the potential benefit of this work to a variety of research projects and applications involving human–robot interaction. Peer reviewed.

    The role of trust in proactive conversational assistants

    Humans and machines harmoniously collaborating and benefiting from each other is a long-standing dream for researchers in robotics and artificial intelligence. An important feature of efficient and rewarding cooperation is the ability to anticipate possible problematic situations and act in advance to prevent negative outcomes. This concept of assistance is known under the term proactivity. In this article, we investigate the development and implementation of proactive dialogues for fostering a trustworthy human-computer relationship and providing adequate and timely assistance. Here, we make several contributions. A formalisation of proactive dialogue in conversational assistants is provided. The formalisation forms a framework for integrating proactive dialogue in conversational applications. Additionally, we present a study showing the relations between proactive dialogue actions and several aspects of the perceived trustworthiness of a system, as well as effects on the user experience. The results of the experiments provide significant contributions to the line of proactive dialogue research. In particular, we provide insights into the effects of proactive dialogue on the human-computer trust relationship and dependencies between proactive dialogue and user-specific and situational characteristics.

    Assistive technology design and development for acceptable robotics companions for ageing years

    © 2013 Farshid Amirabdollahian et al., licensee Versita Sp. z o. o. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs license, which means that the text may be used for non-commercial purposes, provided credit is given to the author. A new stream of research and development responds to changes in life expectancy across the world. It includes technologies which enhance the well-being of individuals, specifically older people. The ACCOMPANY project focuses on home companion technologies and issues surrounding technology development for assistive purposes. The project responds to some overlooked aspects of technology design, divided into multiple areas such as empathic and social human-robot interaction, robot learning and memory visualisation, and monitoring persons' activities at home. To bring these aspects together, a dedicated task is identified to ensure technological integration of these multiple approaches on an existing robotic platform, Care-O-Bot®3, in the context of a smart-home environment utilising a multitude of sensor arrays. Formative and summative evaluation cycles are then used to assess the emerging prototype towards identifying acceptable behaviours and roles for the robot, for example the role of a butler or a trainer, while also comparing user requirements to achieved progress. In a novel approach, the project considers ethical concerns and, by highlighting principles such as autonomy, independence, enablement, safety and privacy, it embarks on providing a discussion medium where user views on these principles, and the existing tension between some of them, for example between privacy and autonomy on the one hand and safety on the other, can be captured and considered in design cycles and throughout project developments. Peer reviewed.

    Investigating Human Perceptions of Trust and Social Cues in Robots for Safe Human-Robot Interaction in Human-oriented Environments

    As robots increasingly take part in daily living activities, humans will have to interact with them in domestic and other human-oriented environments. This thesis envisages a future where autonomous robots could be used as home companions to assist and collaborate with their human partners in unstructured environments without the support of any roboticist or expert. To realise such a vision, it is important to identify which factors (e.g. trust, participants' personalities, background, etc.) influence people to accept robots as companions and to trust the robots to look after their well-being. I am particularly interested in the possibility of robots using social behaviours and natural communication as a repair mechanism to positively influence humans' sense of trust and companionship towards the robots, the main reason being that trust can change over time due to different factors (e.g. perceived erroneous robot behaviours). In this thesis, I provide guidelines for a robot to regain human trust by adopting certain human-like behaviours. We can expect that domestic robots will exhibit occasional mechanical, programming or functional errors, as occurs with any other electrical consumer device. For example, these might include software errors, dropping objects due to gripper malfunctions, picking up the wrong object, or showing faulty navigational skills due to unclear camera images or noisy laser scanner data. It is therefore important for a domestic robot to have acceptable interactive behaviour when exhibiting and recovering from an error situation. In this context, several open questions need to be addressed regarding both individuals' perceptions of the errors and robots, and the effects of these on people's trust in robots.
As a first step, I investigated how the severity of the consequences and the timing of a robot's different types of erroneous behaviours during an interaction may have different impacts on users' attitudes towards a domestic robot. I concluded that there is a correlation between the magnitude of an error performed by the robot and the corresponding loss of the human's trust in the robot. In particular, people's trust was strongly affected by robot errors that had severe consequences. This led me to investigate whether people's awareness of robots' functionalities may affect their trust in a robot. I found that people's acceptance of and trust in the robot may be affected by their knowledge of the robot's capabilities and its limitations differently according to the participants' age and the robot's embodiment. In order to deploy robots in the wild, strategies for mitigating and regaining people's trust in robots in case of errors need to be implemented. In the following three studies, I assessed whether a robot with awareness of human social conventions would increase people's trust in the robot. My findings showed that people almost blindly trusted both a social and a non-social robot in scenarios with non-severe error consequences. In contrast, people who interacted with a social robot did not trust its suggestions in a scenario with a higher-risk outcome. Finally, I investigated the effects of robots' errors on people's trust in a robot over time. The findings showed that participants' judgement of a robot is formed during the first stage of their interaction. Therefore, people are more inclined to lose trust in a robot if it makes big errors at the beginning of the interaction. The findings from the Human-Robot Interaction experiments presented in this thesis will contribute to an advanced understanding of the trust dynamics between humans and robots for a long-lasting and successful collaboration.
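The two effects reported in this abstract (trust loss scales with error severity, and early errors cost more trust than late ones) can be illustrated with a toy trust-update model. This is purely an illustrative sketch, not a model taken from the thesis; the update rule, the `primacy` and `decay` parameters, and all function names are assumptions chosen only to make the qualitative findings concrete.

```python
# Toy trust-dynamics sketch (illustrative only; not the thesis's model).
# Trust in [0, 1] is updated after each robot action: errors reduce trust
# in proportion to their severity, and errors early in the interaction
# are weighted more heavily (a simple primacy effect).

def update_trust(trust, severity, step, primacy=2.0, decay=0.1):
    """Return new trust after an error of given severity
    (0 = no error, 1 = severe) at 0-based interaction step `step`."""
    primacy_weight = 1.0 + primacy / (1.0 + step)  # early errors weigh more
    loss = decay * severity * primacy_weight
    return max(0.0, trust - loss)

def run_interaction(error_severities):
    """Simulate a sequence of error severities, starting from full trust."""
    trust = 1.0
    history = []
    for step, severity in enumerate(error_severities):
        trust = update_trust(trust, severity, step)
        history.append(round(trust, 3))
    return history

# One severe error, either at the start or at the end of the interaction:
early_big = run_interaction([1.0, 0.0, 0.0, 0.0])
late_big = run_interaction([0.0, 0.0, 0.0, 1.0])
print(early_big[-1] < late_big[-1])  # an early error leaves less final trust
```

Under these assumed parameters, the identical severe error leaves lower final trust when it occurs at step 0 than at step 3, mirroring the primacy finding; how severity and timing actually combine is exactly what the thesis's experiments measure.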

    Future bathroom: A study of user-centred design principles affecting usability, safety and satisfaction in bathrooms for people living with disabilities

    Research and development work relating to assistive technology 2010-11 (Department of Health). Presented to Parliament pursuant to Section 22 of the Chronically Sick and Disabled Persons Act 1970.

    A matter of consequences: Understanding the effects of robot errors on people's trust in HRI

    On reviewing the literature regarding acceptance and trust in human-robot interaction (HRI), there are a number of open questions that need to be addressed in order to establish effective collaborations between humans and robots in real-world applications. In particular, we identified four principal open areas that should be investigated to create guidelines for the successful deployment of robots in the wild. These areas are focused on: 1) the robot's abilities and limitations, in particular when it makes errors with different severities of consequences; 2) individual differences; 3) the dynamics of human-robot trust; and 4) the interaction between humans and robots over time. In this paper, we present two very similar studies, one with a virtual robot with human-like abilities, and one with a Care-O-bot 4 robot. In the first study, we created an immersive narrative using an interactive storyboard to collect responses from 154 participants. In the second study, 6 participants had repeated interactions over three weeks with a physical robot. We summarise and discuss the findings of our investigations of the effects of robots' errors on people's trust in robots, for designing mechanisms that allow robots to recover from a breach of trust. In particular, we observed that robots' errors had a greater impact on people's trust in the robot when the errors were made at the beginning of the interaction and had severe consequences. Our results also provided insights into how these effects vary according to individuals' personalities, expectations and previous experiences.