
    Explorations in engagement for humans and robots

    This paper explores the concept of engagement: the process by which individuals in an interaction start, maintain, and end their perceived connection to one another. The paper reports on one aspect of engagement among human interactors, the effect of tracking faces during an interaction. It also describes the architecture of a robot that can participate in conversational, collaborative interactions using engagement gestures. Finally, the paper reports findings from experiments with human participants who interacted with a robot that either did or did not perform engagement gestures. Results of the human-robot studies indicate that people become engaged with robots: they direct their attention to the robot more often when engagement gestures are present, and they find interactions more appropriate with engagement gestures than without them.
    Comment: 31 pages, 5 figures, 3 tables

    Assistive technology design and development for acceptable robotics companions for ageing years

    © 2013 Farshid Amirabdollahian et al., licensee Versita Sp. z o. o. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs license, which means that the text may be used for non-commercial purposes, provided credit is given to the author. A new stream of research and development responds to changes in life expectancy across the world. It includes technologies that enhance the well-being of individuals, specifically older people. The ACCOMPANY project focuses on home companion technologies and the issues surrounding technology development for assistive purposes. The project addresses some overlooked aspects of technology design, divided into areas such as empathic and social human-robot interaction, robot learning and memory visualisation, and monitoring of a person's activities at home. To bring these aspects together, a dedicated task ensures the technological integration of these multiple approaches on an existing robotic platform, Care-O-Bot® 3, in the context of a smart-home environment utilising a multitude of sensor arrays. Formative and summative evaluation cycles are then used to assess the emerging prototype, identifying acceptable behaviours and roles for the robot (for example, as a butler or a trainer) while also comparing user requirements against achieved progress. In a novel approach, the project considers ethical concerns: by highlighting principles such as autonomy, independence, enablement, safety, and privacy, it provides a discussion medium in which user views on these principles, and the tensions between some of them (for example, between privacy or autonomy and safety), can be captured and considered in design cycles throughout project development. Peer reviewed.

    Immersive Teleoperation of the Eye Gaze of Social Robots Assessing Gaze-Contingent Control of Vergence, Yaw and Pitch of Robotic Eyes

    This paper presents a new teleoperation system, called stereo gaze-contingent steering (SGCS), able to seamlessly control the vergence, yaw and pitch of the eyes of a humanoid robot (here an iCub robot) from the actual gaze direction of a remote pilot. The video stream captured by the cameras embedded in the mobile eyes of the iCub is fed into an HTC Vive® head-mounted display equipped with an SMI® binocular eye-tracker. SGCS achieves effective coupling between the eye-tracked gaze of the pilot and the robot's eye movements. It both ensures a faithful reproduction of the pilot's eye movements, a prerequisite for the readability of the robot's gaze patterns by its interlocutor, and maintains the pilot's oculomotor visual cues, which avoids the fatigue and sickness caused by sensorimotor conflicts. We assess the precision of this servo-control by asking several pilots to gaze at known objects positioned in the remote environment. We demonstrate that vergence can be controlled with precision similar to that of the eyes' azimuth and elevation. This system opens the way for robot-mediated human interactions in the personal space, notably when objects in the shared working space are involved.
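The coupling described above maps the pilot's binocular fixation to three eye angles. A minimal sketch of that geometry, assuming a head-centred frame (x right, y up, z forward), symmetric vergence, and a default 65 mm interocular baseline (all assumptions, not the paper's actual implementation):

```python
import math

def gaze_to_eye_angles(x, y, z, baseline=0.065):
    """Map a 3D fixation point (metres, head frame) to conjugate
    yaw/pitch and a symmetric vergence angle, all in degrees."""
    distance = math.sqrt(x * x + y * y + z * z)
    yaw = math.degrees(math.atan2(x, z))                    # horizontal version
    pitch = math.degrees(math.atan2(y, math.hypot(x, z)))   # elevation
    # vergence: angle between the two lines of sight converging on the point
    vergence = math.degrees(2.0 * math.atan2(baseline / 2.0, distance))
    return yaw, pitch, vergence
```

A fixation straight ahead at 1 m gives zero yaw and pitch and a vergence of roughly 3.7 degrees; the three angles would then be sent as position targets to the robot's eye joints.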

    Analysis and Observations from the First Amazon Picking Challenge

    This paper presents an overview of the inaugural Amazon Picking Challenge, along with a summary of a survey conducted among the 26 participating teams. The challenge goal was to design an autonomous robot to pick items from a warehouse shelf. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team's background, mechanism design, perception apparatus, and planning and control approach. We identify trends in these data, correlate them with each team's success in the competition, and discuss observations and lessons learned based on the survey results and the authors' personal experiences during the challenge.
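The analysis pairs categorical survey answers with competition outcomes. As an illustration (not the authors' actual code), a point-biserial correlation is one standard way to relate a yes/no design choice, e.g. a hypothetical "used a suction gripper" question, to a team's score:

```python
import math
import statistics

def point_biserial(feature, scores):
    """Point-biserial correlation between a boolean survey answer
    (`feature`) and a continuous competition score (`scores`)."""
    yes = [s for f, s in zip(feature, scores) if f]
    no = [s for f, s in zip(feature, scores) if not f]
    p = len(yes) / len(scores)            # fraction answering "yes"
    sd = statistics.pstdev(scores)        # population standard deviation
    return (statistics.mean(yes) - statistics.mean(no)) / sd * math.sqrt(p * (1 - p))
```

A value near +1 would mean teams with that feature scored uniformly higher; with only 26 teams, any such trend is suggestive rather than conclusive.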

    Proactive behavior of an autonomous mobile robot for human-assisted learning

    Presented at the 22nd IEEE International Symposium on Robot and Human Interactive Communication, held in Gyeongju (Korea), 26-29 August 2013. During the last decade, there has been growing interest in enabling autonomous social robots to interact with people. However, many issues remain open regarding the social capabilities robots should have in order to make these interactions more natural. In this paper we present the results of several experiments conducted at the Barcelona Robot Lab on the campus of the Universitat Politècnica de Catalunya, in which we analyzed several important aspects of the interaction between a mobile robot and untrained human volunteers. First, we proposed different robot behaviours for approaching a person and establishing engagement with him or her. To perform this task, we provided the robot with several perception and action capabilities, such as detecting people, planning an approach, and verbally communicating its intention to initiate a conversation. Once the initial engagement has been established, further communication skills let people assist the robot and improve its face recognition system. After this assisted, online learning stage, the robot becomes able to detect people under severely changing conditions, which in turn improves both the frequency and the quality of subsequent human-robot interactions. Work supported by the Spanish Ministry of Science and Innovation under projects RobTaskCoop (DPI2010-17112) and the EU ARCAS project FP7-ICT-2011-287617. Peer reviewed.
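The human-assisted learning stage lets people correct the robot's face recogniser while it runs. An illustrative sketch of that idea, assuming a nearest-centroid gallery over face embeddings (the class and parameter names are hypothetical, not the paper's system):

```python
import math

class OnlineFaceGallery:
    """Nearest-centroid identity gallery that a human can correct online."""

    def __init__(self, threshold=0.8):
        self.centroids = {}         # name -> (centroid vector, sample count)
        self.threshold = threshold  # maximum distance to accept a match

    def identify(self, emb):
        """Return the closest known identity, or None if nothing is near enough."""
        best, best_d = None, float("inf")
        for name, (mu, _) in self.centroids.items():
            d = math.dist(emb, mu)
            if d < best_d:
                best, best_d = name, d
        return best if best_d <= self.threshold else None

    def teach(self, name, emb):
        """Human-assisted correction: fold the labelled sample into the centroid."""
        if name in self.centroids:
            mu, n = self.centroids[name]
            self.centroids[name] = (
                [(m * n + e) / (n + 1) for m, e in zip(mu, emb)], n + 1)
        else:
            self.centroids[name] = (list(emb), 1)
```

Each volunteer correction (`teach`) shifts the stored centroid, so recognition keeps adapting to lighting and appearance changes without retraining from scratch.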

    A novel educational tool for teaching ocular ultrasound

    Ocular ultrasound is in increasing demand in routine ophthalmic clinical practice, not only because it is noninvasive but also because ever-advancing technology provides higher-resolution imaging. It is, however, a difficult branch of ophthalmic investigation to grasp, as it requires a high skill level to interface with the technology and to interpret images accurately for ophthalmic diagnosis and management. It is even more labor-intensive to teach ocular ultrasound to a fellow clinician. One of the fundamental skills that has proved difficult to learn and teach is the examiner's need to mentally convert 2-dimensional B-scan images into 3-dimensional (3D) interpretations; an additional challenge is the requirement to carry out this task in real time. We have developed a novel approach to teaching ocular ultrasound using a novel 3D ocular model. The 3D virtual model is built using widely available, open-source software. The model is then used to generate movie clips simulating different movements and orientations of the scanner head. Using Blender, QuickTime motion clips are choreographed and collated into interactive quizzes and other pertinent pedagogical media. The process involves scripting the motion vectors, rotation, and tracking of both the virtual stereo camera and the model. The resulting sequence is then rendered for twinned right- and left-eye views. Finally, the twinned views are synchronized and combined in a format compatible with the stereo projection apparatus. This new model will help students develop spatial awareness and assimilate it into clinical practice. It will also help with grasping the nomenclature used in ocular ultrasound, with localization of lesions, and with obtaining the best possible images for echographic diagnosis, accurate measurement, and reporting.
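The choreographed clips are driven by scripted camera motion around the eye model. A minimal sketch of the underlying geometry, assuming the model sits at the origin and the virtual camera orbits it at a fixed elevation (function and parameter names are hypothetical; in Blender the resulting positions would be keyframed via its Python API):

```python
import math

def orbit_keyframes(n_frames, radius, elev_deg=20.0):
    """Per-frame (x, y, z) positions for a virtual camera orbiting the
    origin at `radius`, sweeping a full circle at a fixed elevation."""
    frames = []
    elev = math.radians(elev_deg)
    for i in range(n_frames):
        az = 2.0 * math.pi * i / n_frames   # azimuth advances each frame
        x = radius * math.cos(elev) * math.cos(az)
        y = radius * math.cos(elev) * math.sin(az)
        z = radius * math.sin(elev)
        frames.append((x, y, z))
    return frames
```

For the twinned stereo views, the same path would be rendered twice with the camera offset by half the interocular distance, then synchronized for the projection apparatus.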

    Cybersecurity in educational networks

    The paper discusses the possible impact of digital space on a human, as well as the human-related directions in cybersecurity analysis in education: levels of cybersecurity, the role of social engineering in the cybersecurity of education, and "cognitive vaccination". "A human" is considered in the general sense, mainly as a learner. The analysis builds on the experience of the hybrid war in Ukraine, which has demonstrated a shift in the target of military operations from military personnel and critical infrastructure to the human in general. Young people are the vulnerable group that can be the main target of cognitive operations in the long term, and they are the weakest link of the system.