
    Crowd of oz : A crowd-powered social robotics system for stress management

    Coping with stress is crucial for a healthy lifestyle. A great deal of past research has used socially assistive robots as a therapy to alleviate stress- and anxiety-related problems. However, building a fully autonomous social robot that can deliver psycho-therapeutic solutions remains a challenging endeavor due to limitations in artificial intelligence (AI). To overcome AI's limitations, researchers have previously introduced crowdsourcing-based teleoperation methods, which summon the crowd's input to control a robot's functions. In the context of robotics, however, such methods have only been used to support object manipulation, navigation, and training tasks. It is not yet known how to leverage real-time crowdsourcing (RTC) to process complex therapeutic conversational tasks for social robotics. To fill this gap, we developed Crowd of Oz (CoZ), an open-source system that allows Softbank's Pepper robot to support such conversational tasks. To demonstrate the potential implications of this crowd-powered approach, we investigated how effectively crowd workers recruited in real time can teleoperate the robot's speech in situations where the robot needs to act as a life coach. We systematically varied the number of workers who simultaneously handle the robot's speech (N = 1, 2, 4, 8) and investigated the concomitant effects of enabling RTC for social robotics. Additionally, we present Pavilion, a novel open-source algorithm for managing the worker queue so that the required number of workers is always engaged or waiting. Based on our findings, we discuss salient parameters that such crowd-powered systems must adhere to in order to improve response latency and dialogue quality. © 2020 by the authors. Licensee MDPI, Basel, Switzerland.
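As a concrete illustration of the retainer-queue idea behind Pavilion, the sketch below keeps a fixed number of workers engaged while surplus recruits wait, and backfills a slot as soon as an engaged worker drops out. The class and method names are invented here for illustration and are not taken from the CoZ source.

```python
from collections import deque

class WorkerPool:
    """Keep `n_active` workers engaged on the dialogue task while surplus
    recruits wait in FIFO order (a retainer queue)."""

    def __init__(self, n_active):
        self.n_active = n_active   # workers simultaneously handling speech
        self.active = set()
        self.waiting = deque()     # recruited workers kept on retainer

    def recruit(self, worker_id):
        # A newly arrived crowd worker: engage immediately if a slot is
        # free, otherwise hold them in the waiting queue.
        if len(self.active) < self.n_active:
            self.active.add(worker_id)
        else:
            self.waiting.append(worker_id)

    def drop(self, worker_id):
        # A worker disconnects; backfill the freed slot from the queue.
        self.active.discard(worker_id)
        while self.waiting and len(self.active) < self.n_active:
            self.active.add(self.waiting.popleft())

    def shortfall(self):
        # Extra recruits to request to stay at full strength.
        return max(0, self.n_active - len(self.active) - len(self.waiting))
```

Varying `n_active` corresponds to the N = 1, 2, 4, 8 conditions studied in the paper.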

    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and a motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.

    Turing-Test Evaluation of a Mobile Haptic Virtual Reality Kissing Machine

    Various communication systems have been developed to integrate the haptic channel into digital communication. Future directions of such haptic technologies are moving towards realistic virtual reality applications and human-robot social interaction. With the digitisation of touch, robots equipped with touch sensors and actuators can communicate with humans on a more emotional and intimate level, such as sharing a hug or kiss just as humans do. This paper presents the design guidelines, implementation, and evaluation of a novel haptic kissing machine for smartphones: the Kissenger machine. The key novelties and contributions of the paper are: (i) a novel haptic kissing device for mobile phones, which uses dynamic perpendicular force stimulation to transmit realistic sensations of kissing in order to enhance the intimacy and emotional connection of digital communication; (ii) extensive evaluations of the Kissenger machine, including a lab experiment that compares mediated kissing with Kissenger to real kissing, a unique haptic Turing test comprising the first academic study of a human-machine kiss, and a field study of the effects of Kissenger on long-distance relationships.

    Simulated Persona Design and User Experience of Conversational AI for Healthcare

    Doctoral dissertation (Ph.D.) -- Seoul National University Graduate School: Interdisciplinary Program in Cognitive Science, College of Humanities, August 2021. Advisor: Joonhwan Lee. Advances in digital healthcare technologies have been leading a revolution in healthcare. They have shown enormous potential to improve medical professionals' ability to diagnose accurately and treat disease, and to support users' daily self-care. Since the recent transformation of digital healthcare aims to provide effective personalized health services, Conversational AI (CA) is being highlighted as an easy-to-use and cost-effective means of delivering personalized services.
In particular, CA is gaining attention as a means of personalized care that ingrains positive self-care behavior into daily life, whereas previous methods for personalized care have focused on the medical context. CA expands the boundary of personalized care by enabling one-to-one tailored conversations that deliver health education and healthcare therapies. Owing to these opportunities, CA has been implemented in various roles, including CAs for diagnosis, prevention, and therapy. However, studies on personalizing a healthcare CA to match users' preferences for the CA's persona are scarce. Even though the CASA paradigm has been applied in previous studies designing and evaluating the human-likeness of CAs, few healthcare CAs personalize their human-like persona, except for some CAs for mental health therapy. Moreover, there is a need to improve user experience by increasing social and emotional interaction between the user and the CA. Designing an acceptable, personalized persona should therefore also be considered so that users stay engaged in the healthcare task with the CA. In this vein, the thesis proposes applying the persona of a person who is in a close relationship with the user to a CA for daily healthcare as a persona-personalization strategy. The main hypothesis is that applying a close person's persona improves user engagement. To investigate this hypothesis, the thesis explores whether dynamics derived from the real-world social relationship carry over to the relationship between the user and a CA bearing the persona of a close person.
To explore the opportunities and challenges of this idea, a series of studies was conducted to (1) identify an appropriate host whose persona would be implemented in a healthcare CA, (2) define the linguistic characteristics to consider when applying the persona of a close person to a CA, and (3) implement a CA with the persona of a close person in major lifestyle domains. Based on the findings, the thesis provides design guidelines for a healthcare CA carrying the persona of a real person who is in a close relationship with the user.
Contents:
Abstract
1 Introduction
2 Literature Review
    2.1 Roles of CA in Healthcare
    2.2 Personalization in Healthcare CA
    2.3 Persona Design for CA
    2.4 Methods for Designing Chatbot's Dialogue Style
        2.4.1 Wizard of Oz Method
        2.4.2 Analyzing Dialogue Data with NLP
        2.4.3 Participatory Design
        2.4.4 Crowdsourcing
3 Goal of the Study
4 Study 1. Exploring Candidate Personas for CA
    4.1 Related Work
        4.1.1 Need for Support in Daily Healthcare
        4.1.2 Applying Personas to Text-based CA
    4.2 Research Questions
    4.3 Method
        4.3.1 Wizard of Oz Study
        4.3.2 Survey Measurement
        4.3.3 Post Interview
        4.3.4 Analysis
    4.4 Results
        4.4.1 System Acceptance
        4.4.2 Perceived Trustfulness and Perceived Intimacy
        4.4.3 Predictive Power of Corresponding Variables
        4.4.4 Linguistic Factors Affecting User Perception
    4.5 Implications
5 Study 2. Linguistic Characteristics to Consider When Applying a Close Person's Persona to a Text-based Agent
    5.1 Related Work
        5.1.1 Linguistic Characteristics and Persona Perception
        5.1.2 Language Components
    5.2 Research Questions
    5.3 Method
        5.3.1 Modified Wizard of Oz Study
        5.3.2 Survey
    5.4 Results
        5.4.1 Linguistic Characteristics
        5.4.2 Priority of Linguistic Characteristics
        5.4.3 Differences between Language Components
    5.5 Implications
6 Study 3. Implementation on Lifestyle Domains
    6.1 Related Work
        6.1.1 Family as Effective Healthcare Provider
        6.1.2 Chatbots Promoting a Healthy Lifestyle
    6.2 Research Questions
    6.3 Implementing the Persona of a Family Member
        6.3.1 Domains of Implementation
        6.3.2 Measurements Used in the Study
    6.4 Experiment 1: Food Journaling Chatbot (Method; Results)
    6.5 Experiment 2: Physical Activity Intervention (Method; Results)
    6.6 Experiment 3: Chatbot for Coping with Stress (Method; Results)
    6.7 Implications from Domain Experiments
        6.7.1 Comparing User Experience
        6.7.2 Comparing User Perception
        6.7.3 Implications from Study 3
7 Discussion
    7.1 Design Guidelines
    7.2 Ethical Considerations
    7.3 Limitations
8 Conclusion
References
Appendix
Abstract in Korean (국문초록)

    Affective reactions towards socially interactive agents and their computational modeling

    Over the past 30 years, researchers have studied human reactions towards machines by applying the Computers Are Social Actors paradigm, which contrasts reactions towards computers with reactions towards humans. The last 30 years have also seen improvements in technology that have led to tremendous changes in computer interfaces and to the development of Socially Interactive Agents. This raises the question of how humans react to Socially Interactive Agents. Answering this question requires knowledge from several disciplines, which is why this interdisciplinary dissertation is positioned between psychology and computer science. It aims to investigate affective reactions to Socially Interactive Agents and how these can be modeled computationally. After a general introduction and background, this thesis first provides an overview of the Socially Interactive Agent system used in this work. Second, it presents a study comparing a human and a virtual job interviewer, which shows that both interviewers induce shame in participants to the same extent. Third, it reports on a study investigating obedience towards Socially Interactive Agents. The results indicate that participants obey human and virtual instructors in similar ways, and that both types of instructors evoke feelings of stress and shame to the same extent. Fourth, it presents a biofeedback-based stress-management training with a Socially Interactive Agent; the study shows that a virtual trainer can teach coping techniques for emotionally challenging social situations. Fifth, it introduces MARSSI, a computational model of user affect. The evaluation of the model shows that it is possible to relate sequences of social signals to affective reactions while taking emotion-regulation processes into account. Finally, the Deep method is proposed as a starting point for deeper computational modeling of internal emotions.
The method combines social signals, verbalized introspection information, context information, and theory-driven knowledge. An exemplary application to the emotion of shame, together with a schematic dynamic Bayesian network for its modeling, is illustrated. Overall, this thesis provides evidence that human reactions towards Socially Interactive Agents are very similar to those towards humans, and that it is possible to model these reactions computationally. Funding: Deutsche Forschungsgemeinschaft (DFG).
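The kind of dynamic Bayesian model sketched for shame can be illustrated with a minimal forward filter over a hidden emotion state. All states, observed signals, and probabilities below are invented for illustration and do not come from the dissertation:

```python
# A two-state hidden emotion model updated from one observed social
# signal (gaze aversion). Each step predicts with the transition model,
# then reweights by how likely the observation is under each state.
states = ("neutral", "shame")
trans = {"neutral": {"neutral": 0.9, "shame": 0.1},
         "shame":   {"neutral": 0.3, "shame": 0.7}}
# P(gaze averted | emotion state) -- illustrative numbers only
emit = {"neutral": {True: 0.2, False: 0.8},
        "shame":   {True: 0.8, False: 0.2}}

def filter_step(belief, gaze_averted):
    # predict: propagate the current belief through the transition model
    pred = {s2: sum(belief[s1] * trans[s1][s2] for s1 in states)
            for s2 in states}
    # update: weight by the observation likelihood and renormalize
    post = {s: pred[s] * emit[s][gaze_averted] for s in states}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

belief = {"neutral": 0.5, "shame": 0.5}
for averted in (True, True, False):   # a short stream of observations
    belief = filter_step(belief, averted)
```

Repeated gaze-aversion observations shift the belief towards "shame"; an open-gaze observation pulls it back, mirroring how a dynamic Bayesian network accumulates evidence over a sequence of social signals.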

    An interactive interface for nursing robots.

    Physical Human-Robot Interaction (pHRI) is inevitable for a human user working with assistive robots. pHRI has various aspects, such as the choice of interface, the type of control scheme implemented, and the modes of interaction. The research presented in this thesis concentrates on a health-care assistive robot called the Adaptive Robot Nursing Assistant (ARNA). An assistive robot in a health-care environment has to perform routine tasks while remaining aware of its surroundings. Teleoperating the robot would be tedious for some patients, as it requires a high level of concentration, can cause cognitive fatigue, and imposes a learning curve before the robot can be operated efficiently. This work therefore develops a Human-Machine Interface (HMI) framework that integrates a decision-making module, an interaction module, and a tablet interface module. The framework employs traded-control interaction, in which the robot plans and executes a task while the user only has to specify the task through a tablet interface. According to the preliminary experiments conducted as part of this thesis, the traded-control approach allows a novice user to operate the robot as efficiently as an expert user. Past research has shown that, during a conversation with a speech interface, users feel disengaged if the answers they receive fall outside the context of the conversation. This thesis explores different ways of implementing a speech interface that can reply to arbitrary conversational queries from the user. A speech interface was developed by creating a semantic space from the Wikipedia database using Latent Semantic Analysis (LSA). This gives the speech interface a wide knowledge base and lets it maintain a conversation in the context intended by the user. The interface was developed as a web service and deployed on two different robots to demonstrate its portability and ease of integration with other robots. A tablet application was also developed that combines the speech interface with an on-screen button interface to execute tasks through the ARNA robot. This tablet application can access video feeds and sensor data from the robots, assist the user with decision making during pick-and-place operations, monitor the user's health over time, and provide conversational dialogue during sitting sessions. In this thesis, we present the software and hardware framework that enables a patient-sitter HMI, together with experimental results from a small number of users demonstrating that the concept is sound and scalable.
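The LSA pipeline described here can be sketched at toy scale: build a document-term matrix, take a truncated SVD to get a low-rank "semantic space", and answer a query from the nearest document in that space. The corpus and the number of latent dimensions below are invented for illustration (the actual interface built its semantic space from a Wikipedia-scale corpus):

```python
import numpy as np

docs = [
    "the nurse checks the patient vital signs",
    "the robot arm grasps and moves the object",
    "patients ask the nurse about medication",
]
vocab = sorted({w for d in docs for w in d.split()})

def to_vec(text):
    # raw term counts over the toy vocabulary (no tf-idf, for brevity)
    return np.array([text.split().count(w) for w in vocab], dtype=float)

# document-term matrix and its truncated SVD: the "semantic space"
A = np.stack([to_vec(d) for d in docs])
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                # retained latent dimensions
doc_latent = U[:, :k] * s[:k]        # document coordinates in that space

def nearest_doc(query):
    # fold the query into the latent space, return the closest document
    q = to_vec(query) @ Vt[:k].T
    sims = doc_latent @ q / (
        np.linalg.norm(doc_latent, axis=1) * (np.linalg.norm(q) + 1e-9))
    return int(np.argmax(sims))
```

In a real system the retrieved document (or a response associated with it) seeds the reply, which is what lets the interface stay in the context the user intended.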

    Designing online social interaction for and with older people

    This thesis describes my explorations and reflections regarding the design of online social interaction for and with older people. In 2008, when I started my doctoral investigation, only a third of people over 65 in the UK were using the Internet; by now, half of 65-75-year-olds are connected. From 2000 onwards, EU-wide directives increasingly encouraged research into online technologies for managing the needs of an ageing population in the EU. Alongside health-related risks, the issue of social isolation is of particular interest, given the rapid development of new forms of online communication and interaction media that could help people maintain contact. A beneficial design strategy is to involve older people in the design process to ensure that technological developments are welcomed and actually used. However, engaging older people, who are not necessarily familiar with digital technologies, is not without challenges for the design researcher. My research focuses both on design practice (the development of artefacts) and on the design process for online social interaction involving older people. The thesis describes practice-led research, for which I built the Teletalker (TT) and Telewalker (TW) systems as prototypes for experimentation and design research interventions. The TT can be described as a simple, TV-like online audio-video presence system connecting two locations. The TW is based on the same concept but was built specifically for vulnerable older people living in a care home. The work described involves embodied real-world interventions with contemporary approaches to designing with people. In particular, I explore the delicate nature of the researcher/participant relationship. The research is reported as four sequential journeys.
The first design journey started from a user-centred iterative design perspective and resulted in the construction of a wireframe for a website for older users. The second journey focused on building the TT and investigated its use in the real world by people with varied computer experience. The third journey involved designing the TW system specifically for older people in a care home. The fourth journey employed a co-design approach with invited stakeholders to reflect on the physical artefacts, discuss narratives of the previous design journeys, and co-create new online social technologies for the future. In summary, my PhD thesis contributes to design theory by providing a reasoned rationale for the choice of design approaches, documented examples of design research for social interaction, and a novel approach to research with older people (the extended showroom). It further offers insights into people's online social interaction and proposes guidelines for conducting empirical research with older and vulnerable older people.

    Sensing, interpreting, and anticipating human social behaviour in the real world

    Low-level nonverbal social signals like glances, utterances, facial expressions and body language are central to human communicative situations and have been shown to be connected to important high-level constructs, such as emotions, turn-taking, rapport, or leadership. A prerequisite for the creation of social machines that are able to support humans in e.g. education, psychotherapy, or human resources is the ability to automatically sense, interpret, and anticipate human nonverbal behaviour. While promising results have been shown in controlled settings, automatically analysing unconstrained situations, e.g. in daily-life settings, remains challenging. Furthermore, anticipation of nonverbal behaviour in social situations is still largely unexplored. The goal of this thesis is to move closer to the vision of social machines in the real world. It makes fundamental contributions along the three dimensions of sensing, interpreting and anticipating nonverbal behaviour in social interactions. First, robust recognition of low-level nonverbal behaviour lays the groundwork for all further analysis steps. Advancing human visual behaviour sensing is especially relevant as the current state of the art is still not satisfactory in many daily-life situations. While many social interactions take place in groups, current methods for unsupervised eye contact detection can only handle dyadic interactions. We propose a novel unsupervised method for multi-person eye contact detection by exploiting the connection between gaze and speaking turns. Furthermore, we make use of mobile device engagement to address the problem of calibration drift that occurs in daily-life usage of mobile eye trackers. Second, we improve the interpretation of social signals in terms of higher level social behaviours. In particular, we propose the first dataset and method for emotion recognition from bodily expressions of freely moving, unaugmented dyads. 
Furthermore, we are the first to study low-rapport detection in group interactions and to investigate a cross-dataset evaluation setting for the emergent leadership detection task. Third, human visual behaviour is special because it functions as a social signal and also determines what a person is seeing at a given moment in time. Being able to anticipate human gaze opens up the possibility for machines to share attention with humans more seamlessly, or to intervene in a timely manner if humans are about to overlook important aspects of the environment. We are the first to propose methods for the anticipation of eye contact in dyadic conversations, as well as in the context of mobile device interactions during daily life, thereby paving the way for interfaces that are able to proactively intervene and support interacting humans.
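The connection between gaze and speaking turns that the unsupervised multi-person eye-contact method exploits can be illustrated with a toy sketch (the data layout and function names are invented here, not taken from the thesis): since people mostly look at whoever is speaking, each gaze-direction cluster can be labelled with the interlocutor who speaks most often while the observer's gaze falls into that cluster, without any manual annotation.

```python
from collections import Counter, defaultdict

def label_gaze_clusters(gaze_cluster, speaker):
    """Assign each gaze-direction cluster to a person by co-occurrence
    with speaking turns. `gaze_cluster[t]` is the cluster id of the
    observer's gaze in frame t; `speaker[t]` is who is speaking then
    (None if nobody is)."""
    counts = defaultdict(Counter)
    for c, s in zip(gaze_cluster, speaker):
        if s is not None:
            counts[c][s] += 1
    # Weak supervision: label each cluster with its most frequent
    # co-occurring speaker.
    return {c: ctr.most_common(1)[0][0] for c, ctr in counts.items()}

def eye_contact_frames(gaze_cluster, assignment, person):
    # Per-frame flags: gaze lands in the cluster assigned to `person`.
    return [assignment.get(c) == person for c in gaze_cluster]
```

A cluster that never co-occurs with speech (e.g. gaze at a laptop) simply receives no person label, so it is never counted as eye contact.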