15 research outputs found

    Telerobotic Pointing Gestures Shape Human Spatial Cognition

    Full text link
    This paper explored whether human beings can understand gestures produced by telepresence robots and, if so, whether they can derive the meaning conveyed by telerobotic gestures when processing spatial information. We conducted two experiments over Skype. Participants were presented with a robotic interface that had arms, which were teleoperated by an experimenter. The robot could point to virtual locations that represented certain entities. In Experiment 1, the experimenter described the spatial locations of fictitious objects sequentially in two conditions: a speech-only condition (SO, verbal descriptions clearly indicated the spatial layout) and a speech-and-gesture condition (SR, verbal descriptions were ambiguous but accompanied by robotic pointing gestures). Participants were then asked to recall the objects' spatial locations. We found that the number of spatial locations recalled in the SR condition was on par with that in the SO condition, suggesting that telerobotic pointing gestures compensated for ambiguous speech during the processing of spatial information. In Experiment 2, the experimenter described spatial locations non-sequentially in the SR and SO conditions. Surprisingly, the number of spatial locations recalled in the SR condition was even higher than that in the SO condition, suggesting that telerobotic pointing gestures were more powerful than speech in conveying spatial information when the information was presented in an unpredictable order. The findings provide evidence that human beings are able to comprehend telerobotic gestures and, importantly, to integrate these gestures with co-occurring speech. This work promotes engaging remote collaboration among humans through a robot intermediary.

    Does the personality of consumers influence the assessment of the experience of interaction with social robots?

    Get PDF
    In recent years, in response to the effects of Covid-19, there has been an increase in the use of social robots in service organisations, as well as in the number of interactions between consumers and robots. However, it is not clear how consumers value these experiences or what the main drivers that shape them are. Furthermore, it is an open research question whether these experiences can be affected by the consumers' own personality. This study attempts to shed some light on these questions; to do so, an experiment is proposed in which a sample of 378 participants evaluates a simulated front-office service experience delivered by a social robot. The authors investigate the underlying process that explains the experience and find that cognitive-functional factors, emphasising efficiency, have practically the same relevance as emotional factors, emphasising stimulation. In addition, this research identifies the personality traits of the participants and explores their moderating role in the evaluation of the experience. The results reveal that each personality trait, assessed between high and low poles, generates different responses in the evaluation of the experience.

    A theoretical and practical approach to a persuasive agent model for change behaviour in oral care and hygiene

    Get PDF
    There is an increased use of persuasive agents in behaviour change interventions due to the agents' sociable, reactive, autonomous, and proactive features. However, many interventions have been unsuccessful, particularly in the domain of oral care. Psychological reactance has been identified as one of the major reasons for these unsuccessful behaviour change interventions. This study proposes a formal persuasive agent model that reduces psychological reactance in order to achieve an improved behaviour change intervention in oral care and hygiene. An agent-based simulation methodology is adopted for the development of the proposed model. Evaluation of the model was conducted in two phases: verification and validation. The verification process involves simulation traces and stability analysis, while the validation was carried out using a user-centred approach by developing an agent-based application based on the belief-desire-intention architecture. This study contributes an agent model made up of interrelated cognitive and behavioural factors. Furthermore, the simulation traces provide some insight into the interactions among the identified factors and their roles in behaviour change intervention. The simulation results showed that, as time increases, psychological reactance decreases towards zero. Similarly, the model validation showed that the percentage of respondents who experienced psychological reactance towards behaviour change in oral care and hygiene was reduced from 100 percent to 3 percent. The contribution made in this thesis would enable designers of agent applications and behaviour change interventions to make scientific reasoning and predictions. Likewise, it provides a guideline for software designers on the development of agent-based applications that do not provoke psychological reactance.
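    The abstract does not give the model's equations. Purely as an illustration of the kind of simulation trace it describes (reactance decaying towards zero over time), the Python sketch below runs a toy difference-equation agent; all factor names, coefficients, and the update rule are hypothetical assumptions, not the thesis's actual model.

# Illustrative sketch only: a toy agent-based trace in which psychological reactance
# decays towards zero over simulated time. Parameters and the update rule are assumed.

def simulate_reactance(steps=100, reactance=1.0, persuasion=0.8,
                       autonomy_support=0.6, decay=0.05):
    """Return a list with the reactance level at each simulation step."""
    trace = []
    for _ in range(steps):
        # Reactance is reduced in proportion to how persuasive and
        # autonomy-supportive the agent's message is; it never goes below zero.
        reduction = decay * persuasion * autonomy_support * reactance
        reactance = max(0.0, reactance - reduction)
        trace.append(reactance)
    return trace

if __name__ == "__main__":
    trace = simulate_reactance()
    print(f"initial reactance: {trace[0]:.3f}, final reactance: {trace[-1]:.3f}")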

    A Review of Personality in Human Robot Interactions

    Full text link
    Personality has been identified as a vital factor in understanding the quality of human robot interactions. Despite this, the research in this area remains fragmented and lacks a coherent framework. This makes it difficult to understand what we know and to identify what we do not. As a result, our knowledge of personality in human robot interactions has not kept pace with the deployment of robots in organizations or in our broader society. To address this shortcoming, this paper reviews 83 articles and 84 separate studies to assess the current state of human robot personality research. This review: (1) highlights major thematic research areas, (2) identifies gaps in the literature, (3) derives and presents major conclusions from the literature, and (4) offers guidance for future research.

    A Meta-Analysis of Human Personality and Robot Acceptance in Human-Robot Interaction

    Full text link
    Human personality has been identified as a predictor of robot acceptance in the human-robot interaction (HRI) literature. Despite this, the HRI literature has provided mixed support for this assertion. To better understand the relationship between human personality and robot acceptance, this paper conducts a meta-analysis of 26 studies. Results found a positive relationship between human personality and robot acceptance. However, this relationship varied greatly by the specific personality trait along with the study sample's age, gender diversity, task, and global region. This meta-analysis also identified gaps in the literature. Namely, additional studies are needed that investigate both the big five personality traits and other personality traits, examine a more diverse age range, and utilize samples from previously unexamined regions of the globe.
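    As a rough illustration of what pooling effect sizes in such a meta-analysis involves, the Python sketch below applies a standard DerSimonian-Laird random-effects model to made-up correlation and sample-size pairs. It is not the paper's data or analysis code; the numbers and the choice of estimator are assumptions for demonstration only.

import math

# Illustrative sketch only: random-effects pooling of correlation effect sizes.
# The (r, n) pairs below are invented, not the 26 studies analysed in the paper.
studies = [(0.21, 120), (0.05, 80), (0.34, 200), (0.15, 60)]

# Fisher z-transform of each correlation; sampling variance of z is 1/(n - 3).
z = [0.5 * math.log((1 + r) / (1 - r)) for r, n in studies]
v = [1.0 / (n - 3) for r, n in studies]
w = [1.0 / vi for vi in v]

# Fixed-effect estimate and Q statistic for heterogeneity.
z_fixed = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, z))

# DerSimonian-Laird between-study variance tau^2.
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights, pooled estimate, and back-transformation to r.
w_re = [1.0 / (vi + tau2) for vi in v]
z_re = sum(wi * zi for wi, zi in zip(w_re, z)) / sum(w_re)
print(f"pooled correlation (random effects): {math.tanh(z_re):.3f}")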

    Creepy, but Persuasive: In a Virtual Consultation, Physician Bedside Manner, Rather than the Uncanny Valley, Predicts Adherence

    Get PDF
    Care for chronic disease requires patient adherence to treatment advice. Nonadherence worsens health outcomes and increases healthcare costs. When healthcare professionals are in short supply, a virtual physician could serve as a persuasive technology to promote adherence. However, acceptance of advice may be hampered by the uncanny valley effect—a feeling of eeriness elicited by human simulations. In a hypothetical virtual doctor consultation, 441 participants assumed the patient’s role. Variables from the stereotype content model and the heuristic–systematic model were used to predict adherence intention and behavior change. This 2 × 5 between-groups experiment manipulated the doctor’s bedside manner—either good or poor—and virtual depiction at five levels of realism. These independent variables were designed to manipulate the doctor’s level of warmth and eeriness. In hypothesis testing, depiction had a nonsignificant effect on adherence intention and diet and exercise change, even though the 3-D computer-animated versions of the doctor (i.e., animation, swapped, and bigeye) were perceived as eerier than the others (i.e., real and cartoon). The low-warmth, high-eeriness doctor prompted heuristic processing of information, while the high-warmth doctor prompted systematic processing. This pattern contradicts evidence reported in the persuasion literature. For the stereotype content model, a path analysis found that good bedside manner increased the doctor’s perceived warmth significantly, which indirectly increased physical activity. For the heuristic–systematic model, the doctor’s eeriness, measured in a pretest, had no significant effect on adherence intention and physical activity, while good bedside manner increased both significantly. Surprisingly, cognitive perspective-taking was a stronger predictor of change in physical activity than adherence intention. Although virtual characters can elicit the uncanny valley effect, their effect on adherence intention and physical activity was comparable to a video of a real person. This finding supports the development of virtual consultations

    Personality Traits and Trust in Robots and Artificial Intelligence

    Get PDF
    The use of robots and artificial intelligence has increased significantly across different fields over the past decades. They are increasingly used in situations that are dangerous or otherwise beyond human reach. Rescue missions, medical operations, defence applications and space exploration are only a small part of what robots and AI make possible. In situations where humans have to collaborate with robots, trust is central, and both too much and too little trust can be fatal. One of the factors affecting trust is people's personality traits. Although their relevance to trust appears indisputable, previous studies have reported contradictory results on the effects of individual traits. The aim of this study was to examine the relationship between personality traits and trust in robots and artificial intelligence. The associations of age, gender, work and education with trust in robots and AI were also examined, as well as how trust in robots differs from trust in AI. The study was carried out as part of the multidisciplinary Tampere University project Robotit ja me: vuorovaikutuksen fysiologinen, psykologinen ja sosiaalinen ulottuvuus (Robots and Us: the physiological, psychological and social dimensions of interaction). Trust was measured with a trust game, and the data, consisting of US participants (n = 969), were collected through an online survey. Descriptive analyses and linear regression analysis were used. In the AI group, openness and age were found to predict trust positively and conscientiousness negatively. In the robot group, only openness was statistically significantly associated with trust. The observed associations were not strong but were statistically significant. Participants' trust did not differ between robots and AI, and roughly equal amounts of money were given to both. The results partly correspond to previous research. In the literature, however, findings are at times highly contradictory, and more research is needed, especially on the relationship between personality traits and trust, in order to develop more trustworthy robots and AI that are easier to accept.
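    For readers unfamiliar with the trust-game paradigm mentioned above, the Python sketch below shows the payoff structure of a standard investment ("trust") game, in which the amount a participant sends is taken as a behavioural measure of trust. The endowment, multiplier, and return rate are illustrative assumptions, not the parameters used in this study.

# Illustrative sketch only: one round of a standard trust game with assumed parameters.

def trust_game(endowment=10.0, amount_sent=6.0, multiplier=3.0, return_rate=0.5):
    """Return (investor_payoff, trustee_payoff) for one round of the trust game."""
    transferred = amount_sent * multiplier          # the sent amount is multiplied in transfer
    returned = return_rate * transferred            # the trustee sends a share back
    investor = endowment - amount_sent + returned   # amount_sent is the behavioural trust measure
    trustee = transferred - returned
    return investor, trustee

# Example: a participant who entrusts the robot/AI counterpart with 6 of 10 units.
print(trust_game(amount_sent=6.0))  # -> (13.0, 9.0)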

    Bringing Human Robot Interaction towards Trust and Social Engineering

    Get PDF
    Robots started their journey in books and movies; nowadays, they are becoming an important part of our daily lives, from industrial robots, through entertainment robots, to social robots in fields like healthcare and education. An important aspect of social robotics is the human counterpart, and therefore the interaction between humans and robots. Interactions among humans are often taken for granted because, from childhood, we learn how to interact with each other. In robotics, this interaction is still very immature, yet it is critical for a successful incorporation of robots into society. Human robot interaction (HRI) is the domain that works on improving these interactions. HRI encloses many aspects, and a significant one is trust. Trust is the assumption that somebody or something is good and reliable, and it is critical for a developed society. Therefore, in a society in which robots take part, the trust they generate will be essential for cohabitation. A downside of trust is overtrusting an entity; in other words, an insufficient alignment between the trust projected onto an agent and the expectation of morally correct behaviour. This effect can negatively influence and damage the interactions between agents. In the case of humans, it is usually exploited by scammers, conmen or social engineers, who take advantage of people's overtrust in order to manipulate them into performing actions that may not be beneficial for the victims. This thesis tries to shed light on how trust towards robots develops, how this trust can become overtrust, and how it can be exploited by social engineering techniques. More precisely, the following experiments were carried out: (i) Treasure Hunt, in which the robot followed a social engineering framework: it gathered personal information from the participants, improved trust and rapport with them, and, at the end, exploited that trust by manipulating participants into performing a risky action. (ii) Wicked Professor, in which a very human-like robot tried to enforce its authority to make participants obey socially inappropriate requests. Most of the participants realized that the requests were morally wrong but eventually succumbed to the robot's authority, while still holding the robot morally responsible. (iii) Detective iCub, in which it was evaluated whether the robot could be endowed with the ability to detect when the human partner was lying. Deception detection is an essential skill for social engineers and for professionals in education, healthcare and security. The robot achieved 75% accuracy in lie detection, and slight differences were found in the behaviour participants exhibited when interacting with a human rather than a robot interrogator. Lastly, this thesis approaches the topic of privacy, a fundamental human value. With the integration of robotics and technology into our society, privacy will be affected in ways we are not used to. Robots have sensors able to record and gather all kinds of data, and it is possible that this information is transmitted via the internet without the knowledge of the user. This is an important aspect to consider, since a violation of privacy can heavily impact trust. In summary, this thesis shows that robots are able to establish and improve trust during an interaction, to take advantage of overtrust, and to misuse it by applying different types of social engineering techniques, such as manipulation and authority. Moreover, robots can be enabled to pick up different human cues to detect deception, which can help both social engineers and professionals working with people. Nevertheless, it is of the utmost importance to make roboticists, programmers, entrepreneurs, lawyers, psychologists, and other sectors involved aware that social robots can be highly beneficial for humans, but that they can also be exploited for malicious purposes.

    An Introduction to Ethics in Robotics and AI

    Get PDF
    This open access book introduces the reader to the foundations of AI and ethics. It discusses issues of trust, responsibility, liability, privacy and risk. It focuses on the interaction between people and the AI systems and Robotics they use. Designed to be accessible for a broad audience, reading this book does not require prerequisite technical, legal or philosophical expertise. Throughout, the authors use examples to illustrate the issues at hand and conclude the book with a discussion on the application areas of AI and Robotics, in particular autonomous vehicles, automatic weapon systems and biased algorithms. A list of questions and further readings is also included for students willing to explore the topic further