24 research outputs found

    Design, Control, and Evaluation of a Human-Inspired Robotic Eye

    Schulz S. Design, Control, and Evaluation of a Human-Inspired Robotic Eye. Bielefeld: Universität Bielefeld; 2020. The field of human-robot interaction deals with robotic systems in which humans and robots closely interact with each other. As these systems become more complex, users can easily be overburdened by their operation and can fail to infer the internal state of the system or its "intentions". A social robot that replicates the human eye region, with its familiar features and movement patterns that are the result of years of evolution, can counter this. However, the replication of these patterns requires hardware and software that can compete with human characteristics and performance. Comparing previous systems found in the literature with human capabilities reveals a mismatch in this regard. Even though individual systems solve single aspects, the successful combination into a complete system remains an open challenge. In contrast to previous work, this thesis aims to close this gap by viewing the system as a whole, optimizing the hardware and software while focusing on the replication of the human model right from the beginning. This work ultimately provides a set of interlocking building blocks that, taken together, form a complete end-to-end solution for the design, control, and evaluation of a human-inspired robotic eye. Based on the study of the human eye, the key driving factors are identified as the successful combination of aesthetic appeal, sensory capabilities, performance, and functionality. Two hardware prototypes, each based on a different actuation scheme, have been developed in this context. Both hardware prototypes are evaluated against each other, a previous prototype, and the human by comparing objective numbers obtained from real-world measurements of the real hardware. In addition, a human-inspired and model-driven control framework is developed, again following the predefined criteria and requirements. The quality and human-likeness of the motion generated by this model is evaluated by means of a user study. This framework not only allows the replication of human-like motion on the specific eye prototype presented in this thesis, but also promotes porting and adaptation to less equipped humanoid robotic heads. Unlike previous systems found in the literature, the presented approach provides a scaling and limiting function that allows intuitive adjustments of the control model, which can be used to reduce the requirements set on the target platform. Even though a reduction of the overall velocities and accelerations results in slower motion execution, the human characteristics and the overall composition of the interlocked motion patterns remain unchanged.
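
    The scaling and limiting function is described only qualitatively above; the following Python sketch shows one plausible reading, in which a human-like saccade velocity profile is uniformly scaled down and then clamped to platform limits. All function names, parameters, and limit values are assumptions for illustration, not taken from the thesis.

import numpy as np

def scale_and_limit(velocity_profile, scale=0.5, v_max=300.0, a_max=20000.0, dt=0.001):
    """Scale a saccade velocity profile (deg/s) and clamp it to platform limits.

    The profile keeps its human-like shape; it is only slowed down and bounded,
    so a less capable robotic head can still reproduce the same motion pattern.
    """
    v = np.asarray(velocity_profile, dtype=float) * scale  # uniform slow-down
    v = np.clip(v, -v_max, v_max)                          # velocity limit
    for i in range(1, len(v)):                             # acceleration limit
        dv = np.clip(v[i] - v[i - 1], -a_max * dt, a_max * dt)
        v[i] = v[i - 1] + dv
    return v

# usage: a bell-shaped saccade profile executed at half speed
t = np.linspace(0.0, 0.1, 101)
profile = 400.0 * np.exp(-((t - 0.05) / 0.02) ** 2)
slowed = scale_and_limit(profile, scale=0.5)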

    Integrating an Emotion Recognition Model for the Flobi System

    This thesis investigates whether the emotional states of users interacting with a virtual robot can be recognized reliably, and whether a specific interaction strategy can change the users' emotional state and affect their risk decisions. For this investigation, the OpenFace [1] emotion recognition model was intended to be integrated into the Flobi [2] system, to allow the agent to be aware of the current emotional state of the user and to react appropriately. An open-source ROS [3] bridge was available online to integrate OpenFace with the Flobi simulation, but it was not consistent with some other projects in the Flobi distribution. For technical reasons, DeepFace was therefore selected instead. In a human-agent interaction, the system is compared to a system without emotion recognition. Evaluation takes place at different levels: evaluation of the emotion recognition model, evaluation of the interaction strategy, and evaluation of the effect of the interaction on user decisions. The results showed that the happy emotion induction was 58% successful and the fear emotion induction 77% successful. The risk decision results show that, after the happy induction, 16.6% of participants switched to a lower-risk decision, 75% did not change their decision, and the remaining participants switched to a higher-risk decision. Among fear-induced participants, 33.3% decreased their risk and 66.6% did not change their decision. The emotion recognition accuracy was also measured and revealed a bias: the model classified happy emotions as neutral most of the time. Sensitivity and specificity were calculated for each emotion class.
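
    The per-class sensitivity and specificity mentioned above can be computed from a confusion matrix; the Python sketch below shows one standard way to do this. The class labels and the example counts are hypothetical and only illustrate the reported tendency of "happy" being classified as "neutral".

import numpy as np

def per_class_sensitivity_specificity(confusion, labels):
    """Compute sensitivity (recall) and specificity for each emotion class.

    confusion[i, j] counts samples of true class i predicted as class j.
    """
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    results = {}
    for i, label in enumerate(labels):
        tp = confusion[i, i]
        fn = confusion[i, :].sum() - tp
        fp = confusion[:, i].sum() - tp
        tn = total - tp - fn - fp
        sensitivity = tp / (tp + fn) if tp + fn else float("nan")
        specificity = tn / (tn + fp) if tn + fp else float("nan")
        results[label] = (sensitivity, specificity)
    return results

# hypothetical counts that mimic the reported tendency: "happy" is often
# predicted as "neutral"
labels = ["happy", "fear", "neutral"]
confusion = [[10,  1,  9],
             [ 2, 15,  3],
             [ 1,  1, 18]]
print(per_class_sensitivity_specificity(confusion, labels))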

    Can('t) wait to have a robot at home? – Japanese and German users' attitudes toward service robots in smart homes

    Bernotat J, Eyssel FA. Can('t) wait to have a robot at home? – Japanese and German users' attitudes toward service robots in smart homes. In: Proceedings of the 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). 2018: 15-22

    Modeling Human-Robot-Interaction based on generic Interaction Patterns

    Peltason J. Modeling Human-Robot-Interaction based on generic Interaction Patterns. Bielefeld: Bielefeld University; 2014

    On the profoundness and preconditions of social responses towards social robots: experimental investigations using indirect measurement techniques

    Riether N. On the profoundness and preconditions of social responses towards social robots: experimental investigations using indirect measurement techniques. Bielefeld: Universität Bielefeld; 2013

    Interactive Robot for Playing Russian Checkers

    Human–robot interaction in board games is a rapidly developing field of robotics. This paper presents a robot capable of playing Russian checkers, designed for entertainment, training, and research purposes. Its control program is based on a novel unsupervised self-learning algorithm inspired by AlphaZero and represents the first successful attempt to use this approach in the game of checkers. The main engineering challenge in mechanics is to develop a board-state acquisition system that is insensitive to lighting conditions, which is achieved by rejecting computer vision and utilizing magnetic sensors instead. An original robot face is designed to endow the robot with the ability to express its attributed emotional state. Testing the robot at multi-day open-air exhibitions shows the robustness of the design under difficult operating conditions and the high interest of visitors in the robot.
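
    The magnetic board-state acquisition is hardware-specific and not detailed in the abstract; the Python sketch below only illustrates the general idea of polling an 8x8 grid of magnetic (e.g. Hall-effect) sensors into an occupancy matrix. The read_sensor callback and the threshold are hypothetical placeholders for the real sensor driver.

def read_board_state(read_sensor, threshold=0.5):
    """Poll an 8x8 grid of magnetic sensors into an occupancy matrix.

    read_sensor(row, col) is a hypothetical driver callback returning a raw
    field strength; cells above `threshold` count as occupied (1), the rest
    as empty (0). Lighting conditions play no role in this scheme.
    """
    return [[1 if read_sensor(r, c) > threshold else 0 for c in range(8)]
            for r in range(8)]

# usage with a stubbed sensor driver: two occupied squares
fake_field = {(0, 1): 0.9, (2, 3): 0.8}
board = read_board_state(lambda r, c: fake_field.get((r, c), 0.1))
print(board[0][1], board[2][3], board[4][4])  # 1 1 0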

    Shybo. Design of a research artifact for human-robot interaction studies.

    This article discusses the role of Design Research in the field of Human-Robot Interaction (HRI). Notably, the Research through Design (RtD) approach is proposed as a valuable method for developing HRI research artefacts, due to the importance of having a physical artefact, a robot, that enables direct interaction. Moreover, there is growing interest within HRI in design methodologies as methods of investigation. The article presents an example of a design process focused on hands-on activities, namely sketching, 3D modelling, prototyping, and documenting. These making practices were applied to the development of Shybo, a small sound-reactive robot for children. Particular attention has been given to the five prototypes that led to the definition of the current solution. Morphological, behavioral, and interaction aspects were investigated throughout the whole process. Each phase of the design process was then documented with the intent of sharing potentially replicable practices and contributing to the understanding of the role that RtD can play in HRI.

    Robot-aren itxura estetikoak eta erabiltzaileen preferentziak

    In the coming years, the coexistence of robots and humans is expected to increase, and it will therefore be necessary to optimize human-robot interaction. A robot's aesthetic appearance is the most understandable way to convey information about its capabilities. Many robots intended to interact with people have humanoid aesthetics, because this increases people's empathy. Within humanoid aesthetics, however, there are two trends: robots with a technological appearance and robots with a realistic human appearance. In order to determine people's implicit preference, an Implicit Association Test (IAT) was carried out, which made it possible to measure preferences both implicitly and explicitly. The implicit measure showed a preference for the realistic human appearance, whereas the explicit measure showed a preference for the technological appearance. This contradiction in the results points to interesting lines for future research.
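
    The abstract does not specify how the IAT was scored; a common choice is a D-score in the spirit of Greenwald et al., sketched below in Python for a single participant with one compatible and one incompatible block. The latencies and variable names are illustrative assumptions, not data from the study.

from statistics import mean, stdev

def iat_d_score(compatible_rts, incompatible_rts):
    """D-score: mean latency difference between the incompatible and the
    compatible block, divided by the standard deviation of all latencies."""
    all_rts = list(compatible_rts) + list(incompatible_rts)
    return (mean(incompatible_rts) - mean(compatible_rts)) / stdev(all_rts)

# hypothetical latencies in milliseconds for one participant
compatible = [620, 580, 650, 600, 640]
incompatible = [720, 760, 700, 780, 740]
print(round(iat_d_score(compatible, incompatible), 2))  # > 0: faster in the compatible block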

    Accessibility requirements for human-robot interaction for socially assistive robots

    International Mention in the doctoral degree. Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. Chair: María Ángeles Malfaz Vázquez. Secretary: Diego Martín de Andrés. Committee member: Mike Wal

    Ein Computermodell für die Simulation von emotionalen Angleichungsprozessen in der Mensch-Roboter Interaktion

    Damm O. Ein Computermodell für die Simulation von emotionalen Angleichungsprozessen in der Mensch-Roboter Interaktion. Bielefeld: Universität Bielefeld; 2014. For more than 20 years there have been various approaches to making virtual agents and humanoid robots appear more social and more human. To get closer to this goal, research is being conducted in different directions. The artificial interaction partners have learned to hear and to speak in order to make the interaction more pleasant and easier. Models have been developed that make it possible to hold complex dialogues with them, and, not least, various emotion models have been implemented. Many models of artificial emotions try to compute an internal emotional state from different inputs and to display it through the robot. These models range from the discrete OCC model, in which emotions are an evaluative reaction to the consequences of events, the actions of agents, or aspects of objects, to multidimensional models that attempt to simulate natural emotions. For the OCC model this means, for example, that an action leads, to a greater or lesser degree, to an emotion. In the dimensional models, the emotional state is modelled as a point in a three-dimensional space. This point is moved through the space by perceived actions or events, and the emotions are assigned to different regions of this space. The model presented here is based on findings previously obtained in several empirical studies. It simulates emotional alignment processes that can be observed in human-human interaction. Thus, no attempt is made to "give" the robot an emotional state; rather, the focus lies on the interaction and on the effect that a displayed emotion has on it. For this purpose, a layered model was implemented that enables the robot to react in an emotionally appropriate way in different situations. It uses (facial) mimicry so that the robot is perceived more positively and in order to establish social bonding. Furthermore, emotions are used to influence the interaction in a targeted way and to react to events that do not belong directly to the interaction.
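
    As a rough illustration of the dimensional emotion models described above (not of the thesis' own alignment model), the Python sketch below treats the emotional state as a point in a three-dimensional space that is shifted by appraised events and mapped to the nearest labelled region. All prototype coordinates, labels, and the decay rule are assumptions.

import math

# hypothetical prototype points for discrete emotions in a 3D (PAD-like) space
PROTOTYPES = {
    "joy":     ( 0.8,  0.5,  0.4),
    "fear":    (-0.6,  0.7, -0.5),
    "sadness": (-0.6, -0.4, -0.3),
    "neutral": ( 0.0,  0.0,  0.0),
}

class DimensionalEmotionState:
    """Emotional state as a point that is shifted by appraised events."""

    def __init__(self):
        self.point = [0.0, 0.0, 0.0]          # start at the neutral origin

    def apply_event(self, shift, decay=0.9):
        """Decay the current state toward neutral, then add the event's shift."""
        self.point = [decay * p + s for p, s in zip(self.point, shift)]

    def label(self):
        """Return the discrete emotion whose prototype is closest."""
        return min(PROTOTYPES, key=lambda name: math.dist(self.point, PROTOTYPES[name]))

state = DimensionalEmotionState()
state.apply_event((0.5, 0.3, 0.2))            # e.g. a praising utterance
print(state.label())                          # closest prototype is "joy"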