
    SOCIAL ROBOTS / SOCIAL COGNITION : Robots' Gaze Effects in Older and Younger Adults

    This dissertation presents advances in social human-robot interaction (HRI) and human social cognition through a series of experiments in which humans face a robot. A predominant approach to studying the human factor in HRI consists of placing the human in the role of a user to explore potential factors affecting the acceptance or usability of a robot. This work takes a broader perspective and investigates whether social robots are perceived as social agents, irrespective of their final role or usefulness in a particular interaction. To do so, it adopts methodologies and theories from cognitive and experimental psychology, such as behavioral paradigms involving gaze following and a framework built on more than twenty years of research employing gaze to explore social cognition. The communicative role of gaze in robots is used both to explore its basic effectiveness and as a tool to learn how humans perceive robots. Studying how certain social robots are perceived through the lens of research in social cognition is the central contribution of this dissertation. The thesis presents empirical research and reviews the multidisciplinary literature on (robotic) gaze following, aging, and their relation to social cognition. Papers I and II investigate the decline in gaze following associated with aging, linked with a broader decline in social cognition, in scenarios with robots as gazing agents. In addition to the participants' self-reported perception of the robots, their reaction times were measured to reflect their internal cognitive processes. Overall, this decline seems to persist when the gazing agent is a robot, supporting the depiction of robots as social agents. Paper IV explores the theories behind this decline using a robot, emphasizing how these theories extend to non-human agents. This work also investigates motion as a competing cue to gaze in social robots (Paper III) and mentalizing in robotic gaze following (Paper V). Through experiments with participants and within the scope of HRI and social cognition studies, this thesis presents a joint framework highlighting that robots are depicted as social agents. This finding emphasizes the importance of fundamental insights from social cognition when designing robot behaviors. Additionally, it promotes and supports the use of robots as valuable tools to probe the robustness of current theories in cognitive psychology and to expand the field in parallel.

    Augmented Reality as an Advanced Driver-Assistance System : A Cognitive Approach

    Augmented reality (AR) is progressively being implemented in the automotive domain as an advanced driver-assistance system (ADAS). This increasingly popular technology has the potential to reduce road fatalities that involve human factors (HF); however, the cognitive components of AR are still being studied. This review provides a brief overview of the studies to date on the cognitive mechanisms involved in AR use while driving. Because the related research is varied, a taxonomy of the outcomes is provided. AR systems should follow certain criteria to avoid undesirable outcomes such as cognitive capture. Only information related to the main driving task should be shown to the driver, both to avoid occluding the real road with non-driving-related content and to limit mental workload. However, information should not be shown at all times, so that it does not degrade the driving skills of users and they do not develop overreliance on the system, which may lead to risky behaviours. Popular uses of AR in the car include navigation and safety systems such as blind-spot detection (BSD) or forward collision warning systems (FCWS). The cognitive outcomes of AR should be studied in these particular contexts in the future. This article is intended as a mini-guide for manufacturers and designers to improve the quality and efficiency of the systems currently being developed.

    Gaze cueing in older and younger adults is elicited by a social robot seen from the back

    The ability to follow the gaze of others deteriorates with age. This decline is typically tested with gaze cueing tasks, in which responses to targets on a screen are faster when the targets are preceded by a facial cue looking in their direction (i.e., the gaze cueing effect). It is unclear whether age-related differences in this effect occur with gaze cues other than the eyes, such as head orientation, and how they vary as a function of cue-target timing. Based on the perceived usefulness of social robots to assist older adults, we asked older and younger adults to perform a gaze cueing task with the head of a NAO robot as the central cue. Crucially, the head was viewed from the back, so its eye gaze was not directly visible. In a control condition, the head was static and faced away from the participant. The stimulus onset asynchrony (SOA) between cue and target was 340 ms or 1000 ms. Both age groups showed a gaze cueing effect at both SOAs. Older participants showed a reduced facilitation effect (i.e., faster responses on congruent gazing trials than on neutral trials) at the 340-ms SOA compared to the 1000-ms SOA, and no difference between incongruent and neutral trials at the 340-ms SOA. Our results show that a robot with non-visible eyes can elicit gaze cueing effects. Age-related differences in the other effects are discussed in terms of differences in processing time. Funding agency: Spanish Ministerio de Ciencia, RobWell project RTI2018-095599-A-C22
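
    A minimal sketch (not code from the paper) of how such effects are typically quantified from trial-level reaction times: facilitation as neutral minus congruent RT, inhibition as incongruent minus neutral, and the overall cueing effect as incongruent minus congruent. The column names and data layout below are assumptions for illustration only.

        # Sketch: standard gaze cueing arithmetic on hypothetical trial-level data.
        # Assumed columns: 'group', 'soa_ms', 'condition', 'rt_ms'.
        import pandas as pd

        def cueing_effects(trials: pd.DataFrame) -> pd.DataFrame:
            """Mean RT per condition plus derived effects, split by age group and SOA."""
            mean_rt = (trials
                       .groupby(['group', 'soa_ms', 'condition'])['rt_ms']
                       .mean()
                       .unstack('condition'))
            mean_rt['facilitation'] = mean_rt['neutral'] - mean_rt['congruent']
            mean_rt['inhibition'] = mean_rt['incongruent'] - mean_rt['neutral']
            mean_rt['cueing_effect'] = mean_rt['incongruent'] - mean_rt['congruent']
            return mean_rt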

    Robotic Gaze Drives Attention, Even with No Visible Eyes

    Robots can direct human attention using their eyes. However, it remains unclear whether it is the gaze or the low-level motion of the head rotation that drives attention. We isolated these components in a non-predictive gaze cueing task with a robot to explore how limited robotic signals orient attention. In each trial, the head of a NAO robot turned towards the left or right. To dissociate the direction of rotation from the direction of gaze, NAO was presented either frontally or from the back in separate blocks. Participants responded faster to targets on the gazed-at side, even when the robot's eyes were not visible and the direction of rotation was opposite to that in the frontal condition. Our results show that the low-level motion did not orient attention, but the gaze direction of the robot did. These findings suggest that robotic gaze is perceived as a social signal, similar to human gaze. Funding agency: Spanish Ministerio de Ciencia, Innovación y Universidades, RobWell project (No. RTI2018-095599-A-C22)
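
    To make the logic of the manipulation concrete, the sketch below spells out one reading of the design (an illustration under our assumptions, not the authors' analysis code): in the frontal view the visible head surface sweeps towards the gazed-at side, whereas in the back view it sweeps away from it, so a gaze account and a low-level motion account predict opposite sides.

        # Sketch of the cue dissociation, under the assumption stated above.
        def predicted_side(view: str, gazed_side: str, cue: str) -> str:
            """Side ('left'/'right') a given account predicts attention will shift to."""
            opposite = {'left': 'right', 'right': 'left'}
            if cue == 'gaze':
                return gazed_side  # gaze always predicts the gazed-at side
            # 'motion': apparent sweep of the visible head surface
            return gazed_side if view == 'frontal' else opposite[gazed_side]

        # Only the back view separates the two accounts:
        assert predicted_side('back', 'left', 'gaze') == 'left'
        assert predicted_side('back', 'left', 'motion') == 'right'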

    The Effect of Anthropomorphism on Trust in an Industrial Human-Robot Interaction

    Robots are increasingly deployed in spaces shared with humans, including home settings and industrial environments. In these environments, the interaction between humans and robots (HRI) is crucial for safety, legibility, and efficiency. A key factor in HRI is trust, which modulates the acceptance of the system. Anthropomorphism has been shown to modulate trust development in a robot, but robots in industrial environments are usually not anthropomorphic. We designed a simple interaction in an industrial environment in which an anthropomorphic mock driver (ARMoD) robot simulated driving an autonomous guided vehicle (AGV). The task consisted of a human crossing paths with the AGV, with or without the ARMoD mounted on top, in a narrow corridor. The human and the system needed to negotiate trajectories when crossing paths, meaning that the human had to attend to the trajectory of the robot to avoid a collision. There was a significant increase in the reported trust scores in the condition where the ARMoD was present, showing that the presence of an anthropomorphic robot is enough to modulate trust, even in limited interactions such as the one presented here. CC BY-SA 4.0. This work was supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 101017274 (DARKO) and grant agreement No. 754285.

    The Magni Human Motion Dataset : Accurate, Complex, Multi-Modal, Natural, Semantically-Rich and Contextualized

    Rapid development of social robots stimulates active research in human motion modeling, interpretation and prediction, proactive collision avoidance, human-robot interaction, and co-habitation in shared spaces. Modern approaches to these ends require high-quality datasets for training and evaluation. However, the majority of available datasets suffer from either inaccurate tracking data or unnatural, scripted behavior of the tracked people. This paper attempts to fill this gap by providing high-quality tracking information from motion capture, eye-gaze trackers, and on-board robot sensors in a semantically rich environment. To induce natural behavior in the recorded participants, we utilise loosely scripted task assignments, which lead the participants to navigate through the dynamic laboratory environment in a natural and purposeful way. The motion dataset presented in this paper sets a high quality standard, as the realistic and accurate data are enhanced with semantic information, enabling the development of new algorithms that rely not only on the tracking information but also on contextual cues about the moving agents and the static and dynamic environment.
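
    As an illustration of how such tracking data could be consumed, the sketch below loads a hypothetical trajectory table and derives per-person velocities by finite differences. The file name, column layout, and units are assumptions for illustration only and do not describe the actual Magni data format.

        # Hypothetical sketch; assumed columns: 'person_id', 't' (s), 'x', 'y' (m).
        import pandas as pd

        def load_trajectories(path: str) -> pd.DataFrame:
            """Load tracked positions and add finite-difference velocity estimates."""
            df = pd.read_csv(path).sort_values(['person_id', 't'])
            dpos = df.groupby('person_id')[['x', 'y']].diff()
            dt = df.groupby('person_id')['t'].diff()
            df['vx'] = dpos['x'] / dt
            df['vy'] = dpos['y'] / dt
            return df

        trajectories = load_trajectories('magni_trajectories.csv')  # hypothetical file name
        print(trajectories.head())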