    Persuasiveness of social robot ‘Nao’ based on gaze and proximity

    Social robots have widely infiltrated retail and public spaces. Under the umbrella of persuasive robots, or persuasive technology, they are being used across a wide range of scenarios to influence decision making, disseminate information, and act as signage. While there have been several studies in this area, the effect of non-verbal behaviour on persuasive ability remains largely unexplored. In this research, we therefore examine whether two key non-verbal attributes, proximity and gaze, can elicit persuasion, compliance, and specific personality appeals. We conducted a 2 (eye gaze) x 2 (proximity) between-subjects experiment in which participants viewed a video-based scenario of the Nao robot. Our initial analysis did not reveal any significant effects of the non-verbal attributes. However, perceived compliance and persuasion were significantly correlated with perceived knowledge, responsiveness, and trustworthiness. In conclusion, we discuss how the design of a robot could make it more convincing, since marketing and brand promotion companies could use robots to enhance their advertising operations.
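The reported analysis lends itself to a straightforward correlation test. Below is a minimal sketch of such an analysis in Python, assuming per-participant questionnaire scores; the column names and data values are illustrative placeholders, not the study's actual data or schema.

```python
# Minimal sketch of the reported correlation analysis, assuming a pandas
# DataFrame of per-participant questionnaire ratings. All column names and
# values are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

ratings = pd.DataFrame({
    "persuasion":      [4.1, 3.2, 4.8, 2.9, 3.7],
    "compliance":      [3.8, 3.0, 4.5, 2.7, 3.9],
    "knowledge":       [4.0, 3.1, 4.6, 2.8, 3.5],
    "responsiveness":  [4.2, 3.3, 4.7, 3.0, 3.6],
    "trustworthiness": [3.9, 2.9, 4.4, 2.6, 3.8],
})

# Correlate each outcome with each perceived-attribute predictor.
for outcome in ("persuasion", "compliance"):
    for predictor in ("knowledge", "responsiveness", "trustworthiness"):
        r, p = pearsonr(ratings[outcome], ratings[predictor])
        print(f"{outcome} ~ {predictor}: r={r:.2f}, p={p:.3f}")
```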

    A survey of technologies supporting design of a multimodal interactive robot for military communication

    Purpose – This paper presents a survey of research into interactive robotic systems, with the aim of identifying state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal: multimodality is the representation of many modes, chosen from rhetorical aspects for their communication potential. The author seeks to define the available automation capabilities in multimodal communication that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform intended to advance the speed and quality of military operational and tactical decision making.

    Design/methodology/approach – The review begins by presenting key developments in the robotic interaction field, with the objective of identifying the essential technological developments that set the conditions for robotic platforms to function autonomously. After surveying key aspects of Human-Robot Interaction (HRI), Unmanned Autonomous Systems (UAS), visualization, Virtual Environments (VE) and prediction, the paper describes the gaps in the application areas that will require extension and integration to enable prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set the conditions for future success.

    Findings – Using insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A Multimodal IRS (MIRS) in military communication has yet to be deployed.

    Research limitations/implications – A multimodal robotic interface for the MIRS is an interdisciplinary endeavour; it is not realistic for one person to command all the expert knowledge and skills needed to design and develop such an interface. In this brief preliminary survey, the author discusses extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research.

    Practical implications – A multimodal autonomous robot for military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human-machine teaming (HMT). Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible communications capability that readily adapts to virtual training will greatly enhance planning and mission rehearsal.

    Social implications – A multimodal communication system based on interaction, perception, cognition and visualization is still missing. Options to communicate, express and convey information in an HMT setting, with multiple options, suggestions and recommendations, will enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.

    Originality/value – The objective is to develop a multimodal autonomous interactive robot for military communication. This survey reports the state of the art: what exists and what is missing, what can be done, and the possibilities for extension that would support the military in maintaining effective communication using multiple modalities. There is separate ongoing progress in areas such as machine-enabled speech, image recognition, tracking, visualization for situational awareness, and virtual environments, but as yet no integrated approach to multimodal human-robot interaction that provides flexible and agile communication. The paper concludes by briefly introducing the research proposal for a multimodal interactive robot in military communication.

    What do Collaborations with the Arts Have to Say About Human-Robot Interaction?

    This is a collection of papers presented at the workshop "What Do Collaborations with the Arts Have to Say About HRI?", held at the 2010 Human-Robot Interaction Conference in Osaka, Japan.

    Development of the huggable social robot Probo: on the conceptual design and software architecture

    This dissertation presents the development of a huggable social robot named Probo. Probo embodies a stuffed imaginary animal, providing a soft touch and a huggable appearance. Probo's purpose is to serve as a multidisciplinary research platform for human-robot interaction focused on children. As a social robot, Probo is classified as a social interface supporting non-verbal communication, so its social skills are limited to a reactive level. To close the gap with higher levels of interaction, an innovative system for shared control with a human operator is introduced. The software architecture defines a modular structure that incorporates all systems into a single control center. This control center is accompanied by a 3D virtual model of Probo that simulates all motions of the robot and provides visual feedback to the operator. Additionally, the model allows us to advance user testing and evaluation of newly designed systems. The robot reacts to basic input stimuli that it perceives during interaction. These input stimuli, which can be referred to as low-level perceptions, are derived from vision analysis, audio analysis, touch analysis and object identification. The stimuli influence the attention and homeostatic systems, which define the robot's point of attention, current emotional state and corresponding facial expression. The recognition of these facial expressions has been evaluated in various user studies. To evaluate the collaboration of the software components, a social interactive game for children, Probogotchi, has been developed. To facilitate interaction with children, Probo has an identity and a corresponding history. Safety is ensured through Probo's soft embodiment and intrinsically safe actuation systems. To convey the illusion of life in a robotic creature, tools for the creation and management of motion sequences are put into the hands of the operator. All motions generated by operator-triggered systems are combined with the motions originating from the autonomous reactive systems; the resulting motion is subsequently smoothed and transmitted to the actuation systems. With future applications to come, Probo is an ideal platform for creating a friendly companion for hospitalised children.
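The motion pipeline just described, blending operator-triggered sequences with autonomous reactive motions and smoothing the result before actuation, could be sketched as follows. This is an illustrative sketch, not Probo's actual code; the blend weight, smoothing factor, and data structures are assumptions.

```python
# Illustrative sketch (not Probo's actual implementation) of combining
# operator-triggered and autonomous reactive motion targets, then smoothing
# the result before it is sent to the actuators.
from dataclasses import dataclass

@dataclass
class MotionFrame:
    joint_targets: dict[str, float]  # joint name -> target angle in radians

def blend(operator: MotionFrame, reactive: MotionFrame, w: float = 0.7) -> MotionFrame:
    """Weighted combination; the operator-triggered motion dominating is an assumption."""
    joints = set(operator.joint_targets) | set(reactive.joint_targets)
    return MotionFrame({
        j: w * operator.joint_targets.get(j, 0.0)
           + (1.0 - w) * reactive.joint_targets.get(j, 0.0)
        for j in joints
    })

def smooth(previous: MotionFrame, target: MotionFrame, alpha: float = 0.2) -> MotionFrame:
    """Exponential smoothing so the transmitted motion stays gradual and lifelike."""
    return MotionFrame({
        j: previous.joint_targets.get(j, 0.0)
           + alpha * (t - previous.joint_targets.get(j, 0.0))
        for j, t in target.joint_targets.items()
    })

# One control tick: combine both motion sources, then smooth before actuation.
operator_frame = MotionFrame({"neck_pan": 0.4, "left_ear": 0.1})
reactive_frame = MotionFrame({"neck_pan": -0.2, "eyelids": 0.8})
command = smooth(MotionFrame({}), blend(operator_frame, reactive_frame))
print(command.joint_targets)
```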

    Teaching Unknown Objects by Leveraging Human Gaze and Augmented Reality in Human-Robot Interaction

    Robots are becoming increasingly popular in a wide range of environments due to their exceptional work capacity, precision, efficiency, and scalability. This development has been further encouraged by advances in Artificial Intelligence (AI), particularly Machine Learning (ML). By employing sophisticated neural networks, robots are given the ability to detect and interact with objects in their vicinity. However, a significant drawback arises from these object detection models' underlying dependency on extensive datasets and the availability of substantial amounts of training data.
This issue becomes particularly problematic when the specific deployment location of the robot and its surroundings, including the objects within them, are not known in advance. The vast and ever-expanding array of objects makes it virtually impossible to comprehensively cover the entire spectrum of existing objects using preexisting datasets alone. The goal of this dissertation was to teach a robot unknown objects through Human-Robot Interaction (HRI), in order to liberate it from its data dependency and from predefined scenarios. In this context, the combination of eye tracking and Augmented Reality (AR) created a powerful synergy that empowered the human teacher to seamlessly communicate with the robot and point out objects by means of human gaze. This holistic approach led to the development of a multimodal HRI system that enabled the robot to identify and visually segment Objects of Interest (OOIs) in three-dimensional space, even though they were initially unknown to it, and then examine them autonomously from different angles. Through the class information provided by the human, the robot was able to learn the objects and redetect them at a later stage. With the knowledge gained from this HRI-based teaching process, the robot's object detection capabilities exhibited performance comparable to state-of-the-art object detectors trained on extensive datasets, without being restricted to predefined classes, showcasing its versatility and adaptability. The research conducted within the scope of this dissertation made significant contributions at the intersection of ML, AR, eye tracking, and robotics. These findings not only enhance the understanding of these fields, but also pave the way for further interdisciplinary research. The scientific articles included in this dissertation have been published at high-impact conferences in the fields of robotics, eye tracking, and HRI.
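The teaching procedure described above can be summarized as a loop. The sketch below is a hypothetical outline of that loop; every function and method name is a placeholder standing in for the dissertation's actual components (gaze estimation, 3D segmentation, autonomous inspection, detector update), not a published API.

```python
# Hypothetical outline of the gaze- and AR-based teaching loop; all names
# are placeholders, not the dissertation's actual interfaces.
def teach_unknown_object(robot, gaze_tracker, ar_display, detector):
    # 1. The teacher fixates the object; eye tracking yields a 3D point of regard.
    point_of_regard = gaze_tracker.current_fixation_3d()

    # 2. The robot segments the still-unknown Object of Interest around that point.
    ooi = robot.segment_object_at(point_of_regard)

    # 3. AR feedback lets the teacher confirm the selection and provide a class label.
    ar_display.highlight(ooi)
    label = ar_display.ask_for_label()

    # 4. The robot inspects the object autonomously from multiple viewpoints...
    views = [robot.capture_view(pose) for pose in robot.plan_inspection_poses(ooi)]

    # 5. ...and the self-collected, human-labeled views update the object detector.
    detector.update(views, label)
```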

    Configuration of skilled tasks for execution in multipurpose and collaborative service robots

    Several highly versatile mobile robots have been introduced during the last ten years. Some of these robots work among people in exhibitions and other public places, such as museums and shopping centers. Unlike industrial robots, which are typically found only in manufacturing environments, service robots can be found in a variety of places, from homes and offices to hospitals and restaurants. Developing mobile robots that work co-operatively with humans raises not only interaction problems but also problems in getting tasks accomplished. In an unstructured and dynamic environment this is not readily achievable because of the high complexity of robot perception and motion. Such tasks require high-level perception and locomotion systems, as well as control systems for all levels of task control: the lowest levels control the motors and sensors of the robot, while the highest are sophisticated task planners for complex and useful tasks. Human-friendly communication is an important factor in getting robots into our homes. In this work a new task configuration concept is proposed for multipurpose service robots. The concept gives guidelines for a software architecture and task managing system. The task configuration process introduces a new method that makes it easier to configure a new task for a robot; the idea is the same as when one person tells another how a task should be performed. A novel method for executing tasks with service robots is also presented: interpretive execution, which keeps the focus on only one micro task at a time and thereby makes it possible to modify plans during their execution. Multimodal interaction is an important feature for enabling collaboration between humans and robots, and it reduces the workload of the user during task configuration and execution. A novel solution for using multimodal human-robot interaction (HRI) as part of the task description is presented. This thesis is a case study reporting the results of developing a task managing platform (from configuration to execution) for multipurpose service robots and studying its performance and use in several test cases. The platform has been implemented on the WorkPartner multipurpose service robot. The structure and operation of the platform have proved useful, and several tasks have been carried out successfully.
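Interpretive execution, as described, commits to only one micro task at a time so that the remaining plan stays open to modification. The following is a minimal sketch of such an executor, with hypothetical names and plain callables standing in for real micro tasks; it is an illustration of the idea, not the thesis's actual platform.

```python
# Minimal sketch of interpretive execution: the executor focuses on exactly
# one micro task at a time, so the rest of the plan can still be revised
# while execution is underway. Names are assumptions.
from collections import deque

class InterpretiveExecutor:
    def __init__(self, micro_tasks):
        self.plan = deque(micro_tasks)  # the mutable plan: a queue of micro tasks

    def insert_next(self, micro_task):
        """An operator or planner may revise the plan mid-execution."""
        self.plan.appendleft(micro_task)

    def run(self):
        while self.plan:
            micro_task = self.plan.popleft()  # commit to one micro task only
            micro_task()                      # execute it; the plan may change meanwhile

# Usage: micro tasks are plain callables here; on a real robot they would
# command the perception and locomotion subsystems.
executor = InterpretiveExecutor([lambda: print("go to shelf"),
                                 lambda: print("grasp object")])
executor.insert_next(lambda: print("yield to passing person"))  # plan revised
executor.run()
```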

    Advances in Human-Robot Interaction

    Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.

    Human-Machine Interfaces for Service Robotics

    The abstract is in the attachment.

    Multimodal Computational Attention for Scene Understanding

    Robotic systems have limited computational capacities. Hence, computational attention models are important: they focus limited resources on specific stimuli and thereby allow for complex cognitive processing. For this purpose, we developed auditory and visual attention models that enable robotic platforms to efficiently explore and analyze natural scenes. To allow for attention guidance in human-robot interaction, we use machine learning to integrate the influence of verbal and non-verbal social signals into our models.
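One common way to realize such a model is to fuse per-modality saliency maps into a single attention map. The sketch below illustrates this with fixed fusion weights; in the authors' approach the influence of social signals is learned, so the weights, map shapes, and names here are assumptions rather than their published model.

```python
# Illustrative fusion of visual and auditory saliency maps into one attention
# map. Fixed weights stand in for influences that would in practice be learned.
import numpy as np

def fuse_saliency(visual: np.ndarray, auditory: np.ndarray,
                  w_visual: float = 0.6, w_audio: float = 0.4) -> np.ndarray:
    """Combine per-location saliency maps into a single normalized attention map."""
    fused = w_visual * visual + w_audio * auditory
    return fused / (fused.max() + 1e-9)  # normalize to [0, 1]

visual = np.random.rand(64, 64)     # e.g., intensity/color/motion conspicuity
auditory = np.zeros((64, 64))
auditory[30:34, 10:14] = 1.0        # a sound source localized in image coordinates

attention = fuse_saliency(visual, auditory)
focus = np.unravel_index(attention.argmax(), attention.shape)
print("next point of attention:", focus)
```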