    HREyes: Design, Development, and Evaluation of a Novel Method for AUVs to Communicate Information and Gaze Direction

    We present the design, development, and evaluation of HREyes: biomimetic communication devices that use light to communicate information and, for the first time, gaze direction from AUVs to humans. First, we introduce two types of information displays using the HREye devices: active lucemes and ocular lucemes. Active lucemes communicate information explicitly through animations, while ocular lucemes communicate gaze direction implicitly by mimicking human eyes. We present a human study in which our system is compared to an embedded digital display that explicitly communicates information to a diver by displaying text. Our results demonstrate accurate recognition of active lucemes by trained interactants, limited intuitive understanding of these lucemes by untrained interactants, and relatively accurate perception of gaze direction by all interactants. The results on active luceme recognition demonstrate more accurate recognition than previous light-based communication systems for AUVs (albeit with different phrase sets). Additionally, the ocular lucemes we introduce in this work represent the first method for communicating gaze direction from an AUV, a critical aspect of nonverbal communication used in collaborative work. With readily available hardware as well as open-source and easily re-configurable programming, HREyes can be easily integrated into any AUV with the physical space for the devices and used to communicate effectively with divers in any underwater environment with appropriate visibility. Comment: Under submission at ICRA2
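    The authors describe the programming as open-source and easily re-configurable, but no implementation details appear in this abstract. As a purely illustrative sketch, the following shows one way an ocular luceme could map a gaze direction onto a ring of LEDs; the LED count, function name, and intensity scheme are assumptions, not the authors' API.

    ```python
    import math

    NUM_LEDS = 24  # assumed LED count for one HREye ring

    def ocular_luceme(gaze_angle_rad, pupil_width=3):
        """Return per-LED intensities (0.0-1.0) that place a bright
        'pupil' cluster at the LED nearest the gaze direction, over a
        dim 'sclera' baseline, mimicking a human eye."""
        center = round(gaze_angle_rad / (2 * math.pi) * NUM_LEDS) % NUM_LEDS
        frame = [0.1] * NUM_LEDS
        for offset in range(-(pupil_width // 2), pupil_width // 2 + 1):
            frame[(center + offset) % NUM_LEDS] = 1.0
        return frame

    # Example: signal a gaze 45 degrees to the right of straight ahead.
    frame = ocular_luceme(math.radians(45))
    ```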

    Design, Control, and Evaluation of a Human-Inspired Robotic Eye

    Schulz S. Design, Control, and Evaluation of a Human-Inspired Robotic Eye. Bielefeld: Universität Bielefeld; 2020. The field of human-robot interaction deals with robotic systems that involve humans and robots closely interacting with each other. As these systems grow more complex, users can easily be overburdened by their operation and fail to infer the internal state of the system or its "intentions". A social robot that replicates the human eye region, with its familiar features and movement patterns shaped by years of evolution, can counter this. However, replicating these patterns requires hardware and software able to match human characteristics and performance. Comparing previous systems in the literature with human capabilities reveals a mismatch in this regard. Even though individual systems solve single aspects, their successful combination into a complete system remains an open challenge. In contrast to previous work, this thesis aims to close this gap by viewing the system as a whole, optimizing the hardware and software while focusing on the replication of the human model from the beginning. This work ultimately provides a set of interlocking building blocks that, taken together, form a complete end-to-end solution for the design, control, and evaluation of a human-inspired robotic eye. Based on a study of the human eye, the key driving factors are identified as the successful combination of aesthetic appeal, sensory capabilities, performance, and functionality. Two hardware prototypes, each based on a different actuation scheme, have been developed in this context. Both prototypes are evaluated against each other, a previous prototype, and the human eye by comparing objective figures obtained from real-world measurements of the actual hardware. In addition, a human-inspired, model-driven control framework is developed, again following the predefined criteria and requirements. The quality and human-likeness of the motion generated by this model are evaluated by means of a user study. The framework not only allows the replication of human-like motion on the specific eye prototype presented in this thesis, but also supports porting and adaptation to less well-equipped humanoid robotic heads. Unlike previous systems in the literature, the presented approach provides a scaling and limiting function that allows intuitive adjustment of the control model and can be used to reduce the requirements placed on the target platform. Even though reducing the overall velocities and accelerations results in slower motion execution, the human characteristics and the overall composition of the interlocked motion patterns remain unchanged.
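    The scaling and limiting function mentioned above is described only at a high level. A minimal sketch of the idea, with assumed parameter names and limits rather than the thesis's actual interface, might look like this:

    ```python
    def scale_and_limit(velocity_profile, scale=0.6, v_max=300.0):
        """Uniformly scale a human eye-velocity profile (deg/s), then
        clamp it to the platform's velocity limit. Scaling slows the
        motion while preserving its shape; the clamp protects less
        capable hardware."""
        return [min(v * scale, v_max) for v in velocity_profile]

    # Example: a human saccade peaking near 500 deg/s, adapted for a
    # platform assumed to be limited to 300 deg/s.
    human_profile = [0.0, 250.0, 500.0, 250.0, 0.0]
    robot_profile = scale_and_limit(human_profile)  # peak becomes 300 deg/s
    ```

    As the abstract notes, a uniform slowdown of this kind leaves the overall composition of the interlocked motion patterns unchanged.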

    Autonomous behaviour in tangible user interfaces as a design factor

    PhD thesis. This thesis critically explores the design space of autonomous and actuated artefacts, considering how autonomous behaviours in interactive technologies might shape and influence users' interactions and behaviours. Since the invention of gearing and clockwork, mechanical devices have been built that both fascinate and intrigue people through their mechanical actuation. There seems to be something magical about moving devices, which draws our attention and piques our interest. Progress in the development of computational hardware is allowing increasingly complex commercial products to reach broad consumer markets. New technologies emerge very fast, ranging from personal devices with strong computational power to diverse user interfaces such as multi-touch surfaces or gestural input devices. Electronic systems are becoming smaller and smarter as they combine sensing, control and actuation. From this, new opportunities arise for integrating more sensors and technology into physical objects. These trends raise specific questions about the impact smarter systems might have on people and interaction: how do people perceive smart systems that are tangible, and what implications does this perception have for user interface design? Which design opportunities are opened up by smart systems? Humans have a tendency to attribute life-like qualities to inanimate objects, which evokes social behaviour towards technology. It might be possible to build user interfaces that utilise such behaviours to motivate people towards frequent use, or even to build relationships in which the users care for their devices. The aim of such interfaces is not to increase efficiency, but to be more engaging to interact with and to excite people to bond with these tangible objects. This thesis sets out to explore autonomous behaviours in physical interfaces. More specifically, I am interested in the factors that make a user interpret an interface as autonomous. Through a review of literature concerned with animated objects, autonomous technology and robots, I have mapped out a design space exploring the factors that are important in developing autonomous interfaces. Building on this, and utilising workshops conducted with other researchers, I have developed a framework that identifies key elements for the design of Tangible Autonomous Interfaces (TAIs). To validate the dimensions of this framework and to further unpack the impacts on users of interacting with autonomous interfaces, I have adopted a ‘research through design’ approach. I have iteratively designed and realised a series of autonomous, interactive prototypes which demonstrate the potential of such interfaces to establish themselves as social entities. Through two deeper case studies, consisting of an actuated helium balloon and a desktop lamp, I provide insights into how autonomy could be implemented in Tangible User Interfaces. My studies revealed that, through their autonomous behaviour (guided by the framework), these devices established themselves in interaction as social entities. They furthermore turned out to be acceptable, especially when people were able to find a purpose for them in their lives. This thesis closes with a discussion of findings and provides specific implications for the design of autonomous behaviour in interfaces.

    Machine Performers: Agents in a Multiple Ontological State

    In this thesis, the author explores and develops new attributes for machine performers, merging the trans-disciplinary fields of the performing arts and artificial intelligence. The main aim is to redefine the term “embodiment” for robots on the stage and to demonstrate that this term requires broadening in various fields of research. This redefinition has required a multifaceted theoretical analysis of embodiment in the field of artificial intelligence (e.g. the uncanny valley), as well as the construction of new robots for the stage by the author. It is hoped that these practical experimental examples will generate more research by others in similar fields. Even though the historical lineage of robotics is engraved with theatrical strategies and dramaturgy, further application of constructive principles from the performing arts, together with evidence from psychology and neurology, can shift the perception of robotic agents both on stage and in other cultural environments. In this light, the relation between representation, movement and behaviour of bodies has been further explored to establish links between constructed bodies (as in artificial intelligence) and perceived bodies (as performers on the theatrical stage). In the course of this research, several practical works have been designed and built, and subsequently presented to live audiences and research communities. Audience reactions have been analysed with surveys and discussions, and interviews have been conducted with choreographers, curators and scientists about the value of machine performers. The main conclusions from this study are that fakery and mystification can be used as persuasive elements to enhance agency, and that morphologies that tightly couple brain and sensorimotor actions lead to a stronger stage presence. Conversely, when such presence is missing from human replicants, it causes an “uncanny” lack of agency. Furthermore, the addition of stage presence leads to stronger identification from audiences, even for bodies dissimilar to their own. The author demonstrates that audience reactions are enhanced by building these effects into machine body structures: rather than inviting identification through mimicry, this gives them more unambiguously biological associations. Alongside these traits, atmospheres such as those created by a cast of machine performers tend to cause even more intensely visceral responses. In this thesis, “embodiment” has emerged both as a paradigm shift and as a concept transformed within that shift, and morphological computing has been explored as a method to deepen this visceral immersion. This dissertation therefore considers and builds machine performers as “true” performers for the stage, rather than mere objects with an aura. Their singular and customized embodiment can enable the development of non-anthropocentric performances that encompass abstract and conceptual patterns in motion and generate, as human performers do, empathy, identification and experiential reactions in live audiences.

    Mapping Beyond the Uncanny Valley: A Delphi Study on Aiding Adoption of Realistic Digital Faces

    Developers and HCI researchers have long strived to create digital agents that are more realistic. Voice-only versions are now common, but there has been a lack of visually realistic agents. A key barrier is the “Uncanny Valley”: aversion is triggered when agents are almost, but not quite, realistic. To better understand the challenges the Uncanny Valley poses for creating realistic agents, we conducted a Delphi study. For the Delphi panel, we recruited 13 leading international experts in the area of digital humans, who participated in three rounds of qualitative interviews. We aimed to transfer their knowledge from the entertainment industry to HCI researchers. Our findings include the unexpected conclusion that the panel did not consider final rendering a key problem. Instead, modeling and rigging were highlighted, and a new dimension, interactivity, was revealed as important. Our results provide a set of research directions for those engaged in HCI-oriented information systems using realistic digital humans.

    Nonverbal Communication During Human-Robot Object Handover. Improving Predictability of Humanoid Robots by Gaze and Gestures in Close Interaction

    Meyer zu Borgsen S. Nonverbal Communication During Human-Robot Object Handover. Improving Predictability of Humanoid Robots by Gaze and Gestures in Close Interaction. Bielefeld: Universität Bielefeld; 2020. This doctoral thesis investigates the influence of nonverbal communication on human-robot object handover. Handing objects to one another is an everyday activity in which two individuals cooperatively interact. Such close interactions incorporate a great deal of nonverbal communication in order to create alignment in space and time. Understanding these communication cues and transferring them to robots is becoming ever more important as service robots, for example, are expected to interact closely with humans in the near future. Their tasks often include delivering and taking objects, so handover scenarios play an important role in human-robot interaction. Much work in this field focuses on the speed, accuracy, and predictability of the robot's movement during object handover. Still, robots need to be enabled to interact closely with naive users, not only experts. In this work I present how nonverbal communication can be implemented in robots to facilitate smooth handovers. I conducted a study in which people with different levels of experience exchanged objects with a humanoid robot. It became clear that users with little experience of interacting with robots, in particular, rely heavily on the communication cues they know from interactions with humans. I added different gestures with the second arm, not directly involved in the transfer, to analyze their influence on synchronization, predictability, and human acceptance. Handing over an object follows a distinctive movement trajectory that serves not only to bring the object or hand to the position of exchange but also to socially signal the intention to exchange an object. Another common type of nonverbal communication is gaze. It allows the focus of attention of an interaction partner to be inferred and thus helps to predict the next action. In order to evaluate handover performance between human and robot, I applied the developed concepts to the humanoid robot Meka M1. By adding the humanoid robot head named Floka Head to the system, I created the Floka humanoid and implemented gaze strategies that aim to increase predictability and user comfort. This thesis contributes to the field of human-robot object handover by presenting study outcomes and concepts, along with an implementation of improved software modules, resulting in a fully functional object-handing humanoid robot whose capabilities range from perception and prediction to behaviors enhanced by nonverbal communication.
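    The gaze strategies themselves are not specified in this abstract. As an illustration of how such a strategy could be phased over a handover, here is a minimal sketch; the phases, targets, and names are assumptions for illustration, not the thesis's implementation.

    ```python
    from enum import Enum

    class Phase(Enum):
        APPROACH = 1  # robot moves the object toward the exchange point
        OFFER = 2     # object is held out at the exchange point
        TRANSFER = 3  # partner grasps; robot prepares to release

    def gaze_target(phase):
        """Choose a gaze target that signals the robot's next action,
        making the handover more predictable for the human partner."""
        if phase is Phase.APPROACH:
            return "handover_location"  # announce where the exchange will happen
        if phase is Phase.OFFER:
            return "partner_face"       # invite the partner to reach for the object
        return "object"                 # monitor the grasp during the transfer
    ```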

    Implications of the uncanny valley of avatars and virtual characters for human-computer interaction

    Technological innovations have made it possible to create increasingly realistic figures. Such figures are often modeled on human appearance and behavior, allowing interaction with artificial systems in a natural and familiar way. In 1970, however, the Japanese roboticist Masahiro Mori observed that robots and prostheses with a very, but not perfectly, human-like appearance can elicit eerie, uncomfortable, and even repulsive feelings. While real people or stylized figures do not seem to evoke such negative feelings, human depictions with only minor imperfections fall into the "uncanny valley," as Mori put it. Today, further innovations in computer graphics have led virtual characters into the uncanny valley, and they have become the subject of a number of disciplines. For research, virtual characters created by computer graphics are particularly interesting because they are easy to manipulate and can thus contribute significantly to a better understanding of the uncanny valley and human perception. For designers and developers of virtual characters, such as those in animated movies or games, it is important to understand how appearance, human-likeness, and virtual realism influence the experience and interaction of the user, and how believable and acceptable avatars and virtual characters can be created despite the uncanny valley. This work investigates these aspects and is the next step in the exploration of the uncanny valley. This dissertation presents the results of nine studies examining the effects of the uncanny valley on human perception, how it affects interaction with computing systems, which cognitive processes are involved, and which causes may be responsible for the phenomenon. Furthermore, we examine not only methods for avoiding uncanny or unpleasant effects but also the preferred characteristics of virtual faces, and we place the uncanny valley in the context of related phenomena that cause similar effects. By exploring the eeriness of virtual animals, we found evidence that the uncanny valley is not related solely to the dimension of human-likeness, which significantly changes our view of the phenomenon. Using advanced hand tracking and virtual reality technologies, we also discovered that avatar realism is connected to further factors related to the uncanny valley: affinity with the virtual ego and the feeling of presence in the virtual world were affected by gender and by deviating body structures such as a reduced number of fingers. Considering typing performance on keyboards in virtual reality, we also found that the perception of one's own avatar depends on the user's individual task proficiency. This thesis concludes with implications that not only extend existing knowledge about virtual characters, avatars, and the uncanny valley but also provide new design guidelines for human-computer interaction and virtual reality.
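    Mori's curve is qualitative, but its shape is easy to visualize. The following function is a purely schematic reproduction of that shape for illustration; it is not a model from, or fitted to, the dissertation's studies.

    ```python
    import math

    def schematic_affinity(human_likeness):
        """Schematic uncanny-valley shape: affinity rises with
        human-likeness (0..1), dips sharply near, but not at, full
        realism, and recovers for near-perfect depictions."""
        rise = human_likeness
        dip = 1.4 * math.exp(-((human_likeness - 0.85) ** 2) / 0.004)
        return rise - dip

    # Affinity is lowest around human_likeness ~ 0.85: the 'valley'.
    ```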