90 research outputs found

    Gemini Telepresence Robot System Design: A Low-Cost Solution for Manipulation and Enhanced Perception of Telepresence Robots

    Current telepresence robots are costly, show the operator the environment only on a 2D screen, and move on a wheeled base. These devices are therefore severely limited: the barrier to entry is high, and the operator can neither manipulate objects nor easily perceive the world in 3D. To address these gaps, Gemini, an open-source telepresence humanoid robot and interface station, was designed to let the operator manipulate objects, to expand the human interface by placing the user in the 3D world through a virtual reality (VR) headset, and to remain low-cost. Gemini's simple, low-cost, and intuitive controls encourage early adoption by businesses and medical personnel with growing telepresence needs, and the platform can also serve robotics enthusiasts and university researchers studying humanoid robotics or human-robot interaction. This paper presents an overview of the Gemini robot's mechanical, electrical, and programmatic systems. The study found that Gemini enables object manipulation and improves user perception with intuitive controls, while costing approximately 30% less than commercial telepresence robots. The paper concludes with remarks on future iterations of the project.
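
    The abstract does not spell out the VR coupling, so here is a minimal, hypothetical sketch of the kind of loop that puts the operator's head "in" the robot: headset yaw and pitch streamed to pan/tilt neck servos. read_headset_yaw_pitch and set_servo_deg are stand-ins for whatever VR SDK and servo driver a Gemini-like build uses; the limits and update rate are assumptions, not the paper's values.

```python
# Hypothetical sketch: forward VR headset orientation to robot neck servos.
# Both callables are stand-ins for a real VR SDK and servo driver.
import time

PAN_LIMIT_DEG = 90.0    # assumed mechanical limit of the neck pan servo
TILT_LIMIT_DEG = 45.0   # assumed mechanical limit of the neck tilt servo

def clamp(value, limit):
    """Keep a commanded angle within the servo's mechanical range."""
    return max(-limit, min(limit, value))

def stream_head_pose(read_headset_yaw_pitch, set_servo_deg, rate_hz=50):
    """Forward headset yaw/pitch to the robot head at a fixed rate."""
    period = 1.0 / rate_hz
    while True:
        yaw_deg, pitch_deg = read_headset_yaw_pitch()
        set_servo_deg("pan", clamp(yaw_deg, PAN_LIMIT_DEG))
        set_servo_deg("tilt", clamp(pitch_deg, TILT_LIMIT_DEG))
        time.sleep(period)
```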

    A Generative Human-Robot Motion Retargeting Approach Using a Single RGBD Sensor

    The goal of human-robot motion retargeting is to let a robot follow the movements performed by a human subject. In previous approaches, the human poses are typically precomputed by a human pose tracking system, and explicit joint mapping strategies are then specified to apply the estimated poses to a target robot. However, there is no generic mapping strategy for mapping human joints to robots with different kinds of configurations. In this paper, we present a novel motion retargeting approach that combines human pose estimation and motion retargeting in a unified generative framework, without relying on any explicit mapping. First, a 3D parametric human-robot (HUMROB) model is proposed that has the same joint and stability configurations as the target robot while its shape conforms to the source human subject. The robot configurations, including skeleton proportions, joint limits, and degrees of freedom (DoFs), are enforced in the HUMROB model and preserved during tracking. A single RGBD camera monitors the human subject, and the raw RGB and depth sequence is used as input. The HUMROB model is deformed to fit the input point cloud, and the model's joint angles are computed and applied to the target robots for retargeting. In this way, instead of fitting each joint individually, the robot's joint angles are fitted globally so that the surface of the deformed model is as consistent as possible with the input point cloud; no explicit or predefined joint mapping strategies are needed. To demonstrate its effectiveness for human-robot motion retargeting, the approach is tested both in simulation and on real robots whose skeleton configurations and joint DoFs differ substantially from those of the source human subjects.
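
    A toy illustration of the global-fitting idea, under stated assumptions: a 2-link planar chain stands in for the HUMROB model, its joint limits are enforced as optimizer bounds, and all angles are fitted at once so the sampled "surface" matches the observed points. This is a sketch of the technique, not the authors' implementation; link lengths and limits are made up.

```python
# Toy sketch of the core idea: fit all joint angles globally so the model
# surface matches observed points, with joint limits enforced as bounds.
import numpy as np
from scipy.optimize import minimize

LINK_LENGTHS = np.array([0.4, 0.3])           # assumed link lengths (m)
JOINT_LIMITS = [(-np.pi / 2, np.pi / 2)] * 2  # assumed joint limits (rad)

def forward_points(angles):
    """Sample points along the articulated chain (a stand-in 'surface')."""
    pts, origin, heading = [], np.zeros(2), 0.0
    for length, angle in zip(LINK_LENGTHS, angles):
        heading += angle
        tip = origin + length * np.array([np.cos(heading), np.sin(heading)])
        for t in np.linspace(0.25, 1.0, 4):   # coarse samples along the link
            pts.append(origin + t * (tip - origin))
        origin = tip
    return np.array(pts)

def fit_angles(observed_cloud, init=np.zeros(2)):
    """Globally fit all joint angles at once to the observed point cloud."""
    def cost(angles):
        model = forward_points(angles)
        # distance from each model point to its nearest observed point
        d = np.linalg.norm(model[:, None, :] - observed_cloud[None, :, :], axis=2)
        return d.min(axis=1).sum()
    return minimize(cost, init, bounds=JOINT_LIMITS).x
```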

    Accessibility requirements for human-robot interaction for socially assistive robots

    International Doctorate mention. Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. Chair: María Ángeles Malfaz Vázquez. Secretary: Diego Martín de Andrés. Committee member: Mike Wal

    Control architecture for a telepresence and home-care assistance robot

    The aging population is driving up the cost of hospital care. To keep these costs in check, telepresence robots that assist with care and daily activities are a possible way to help older adults remain autonomous at home. Current robots individually offer interesting capabilities, but it would be beneficial to combine them. Such an integration is possible through a decision-making architecture that couples navigation, voice following, and information acquisition to assist the remote operator, or even substitute for them. For this project, the HBBA (Hybrid Behavior-Based Architecture) control architecture serves as the backbone unifying the required libraries, RTAB-Map (Real-Time Appearance-Based Mapping) and ODAS (Open embeddeD Audition System). RTAB-Map is a library for simultaneous localization and mapping with various sensor configurations while meeting online processing constraints. ODAS is a library for localizing, tracking, and separating sound sources in real environments. The objectives are to evaluate these capabilities in real settings by deploying the robotic platform in different homes, and to assess the potential of such an integration through an autonomous scenario assisting with vital-sign measurements. The Beam+ robotic platform is used for this integration, augmented with an RGB-D camera, an eight-microphone array, a computer, and additional batteries. The resulting implementation, named SAM, was evaluated in 10 homes to characterize navigation and conversation following. The navigation results suggest that the navigation capabilities work under constraints tied to sensor placement and environmental conditions, requiring operator intervention to compensate. The voice-following modality works well in quiet environments, but improvements are needed in noisy ones. Consequently, carrying out a fully autonomous assistance scenario depends on the combined performance of these capabilities, which makes it difficult to envision completely removing the operator from the decision loop. Integrating these modalities with HBBA proved feasible and conclusive, and it opens the door to reusing the implementation on other robotic platforms that could compensate for the shortcomings observed with the Beam+ platform.
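
    As an illustration of the voice-following modality, here is a hedged sketch of turning a tracked sound-source direction (the kind of unit vector ODAS-style tracking emits; the exact frame convention is configuration dependent and assumed here) into a bounded rotation command. The gain, deadband, and rate limit are illustrative, not values from the thesis.

```python
# Assumed sketch of voice following: rotate toward a tracked sound source.
import math

K_TURN = 1.2          # proportional gain on heading error (assumed)
DEADBAND_RAD = 0.05   # ignore tiny errors to avoid oscillation (assumed)
MAX_RATE = 0.8        # angular velocity limit in rad/s (assumed)

def voice_follow_command(source_xyz):
    """Angular velocity steering the robot toward a tracked sound source.

    source_xyz: unit direction vector of the source in the robot frame,
    here assumed x forward and y left.
    """
    x, y, _ = source_xyz
    heading_error = math.atan2(y, x)   # zero when the source is dead ahead
    if abs(heading_error) < DEADBAND_RAD:
        return 0.0
    rate = K_TURN * heading_error
    return max(-MAX_RATE, min(MAX_RATE, rate))
```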

    An Augmented Reality Based Human-Robot Interaction Interface Using Kalman Filter Sensor Fusion

    In this paper, an Augmented Reality (AR) application for the control and adjustment of robots has been developed, with the aim of making interaction with robots easier and more accurate from a remote location. A LeapMotion sensor based controller has been investigated to track the movement of the operator's hands; its data allows gestures and the position of the palm's central point to be detected and tracked. A Kinect V2 camera measures the corresponding motion velocities in the x, y, and z directions once our post-processing algorithm has been applied. Unreal Engine 4 is used to create an AR environment in which the user can monitor the control process immersively. A Kalman filtering (KF) algorithm is employed to fuse the position signals from the LeapMotion sensor with the velocity signals from the Kinect camera. The fused, optimal data are sent via the User Datagram Protocol (UDP) to teleoperate a Baxter robot in real time. Several experiments have been conducted to validate the proposed method.
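
    The paper's tuned filter is not reproduced here, but the fusion step can be sketched as a constant-velocity Kalman filter per axis whose measurement vector stacks a position reading (LeapMotion-style) and a velocity reading (Kinect-style). All noise covariances below are illustrative assumptions.

```python
# Hedged sketch of the fusion step: one constant-velocity Kalman filter per
# axis, fusing a position measurement from one sensor with a velocity
# measurement from another. Noise values are illustrative, not the paper's.
import numpy as np

DT = 0.02                                # assumed 50 Hz update period
F = np.array([[1.0, DT], [0.0, 1.0]])    # constant-velocity state model
H = np.eye(2)                            # we measure position AND velocity
Q = np.diag([1e-4, 1e-3])                # assumed process noise
R = np.diag([2e-3, 5e-2])                # assumed sensor noise (pos, vel)

def kalman_step(x, P, z):
    """One predict/update cycle; x = [position, velocity], z = [pos, vel]."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the stacked two-sensor measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

    One filter instance runs per axis (x, y, z); initialize with x = np.zeros(2) and P = np.eye(2), then call kalman_step once per sample.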

    Iconic gestures for robot avatars, recognition and integration with speech

    © 2016 Bremner and Leonards. Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through a tele-operated humanoid robot avatar; such avatars have previously been shown to enhance social presence and operator salience. We present a motion-tracking-based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess this system's ability to transmit multi-modal communication, we conducted a user study investigating whether robot-produced iconic gestures are comprehensible and whether they are integrated with speech. Outcomes for robot-performed gestures were compared directly with those for gestures produced by a human actor, using a within-participant experimental design. We show that iconic gestures produced by a tele-operated robot, when presented alone, are understood by participants almost as well as when produced by a human. More importantly, we show that gestures presented as part of multi-modal communication are integrated with speech equally well for human and robot performances.

    ASCCbot: An Open Mobile Robot Platform

    ASCCbot, an open mobile platform built in the ASCC lab, is presented in this thesis. The hardware and software design of the ASCCbot makes it a robust, extendable, and duplicable robot platform suitable for most mobile robotics research, including navigation, mapping, and localization. ROS is adopted as the major software framework, which not only makes ASCCbot an open-source project but also extends its networking so that multi-robot applications can easily be implemented with multiple ASCCbots; collaborative localization is designed to test these network features. A telepresence robot is built on top of the ASCCbot, with a Kinect-based human gesture recognition method implemented for intuitive human-robot interaction. For the telepresence robot, a GUI is also created that presents basic control commands, video streaming, and 2D metric map rendering. Last but not least, semantic mapping through human activity recognition is proposed as a novel approach to semantic mapping. For the activity recognition part, a power-aware wireless motion sensor is designed and evaluated. The overall semantic mapping system is explained and tested in a mock apartment. The experimental results show that the activity recognition is reliable and that the semantic map updating process creates an accurate semantic map matching the real furniture layout. To sum up, the ASCCbot is a versatile mobile robot platform with both basic and distinctive functions implemented, and complex high-level functions can be built on top of them. With its duplicability, extendability, and open-source nature, the ASCCbot will be very useful for mobile robotics research.
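
    The thesis does not list the GUI's command plumbing, but a minimal rospy publisher of the sort a ROS teleoperation front end uses can be sketched as follows. The /cmd_vel topic name is the common ROS convention, assumed here rather than taken from the thesis.

```python
#!/usr/bin/env python
# Minimal ROS (rospy) sketch of a velocity-command publisher of the kind a
# teleoperation GUI like the ASCCbot's would use. /cmd_vel is the usual ROS
# convention for a mobile base, assumed here.
import rospy
from geometry_msgs.msg import Twist

def drive(linear_mps, angular_rps, duration_s):
    """Publish a constant velocity command for a fixed duration."""
    rospy.init_node('asccbot_teleop_sketch')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    cmd = Twist()
    cmd.linear.x = linear_mps     # forward speed, m/s
    cmd.angular.z = angular_rps   # turn rate, rad/s
    rate = rospy.Rate(10)         # republish at 10 Hz
    end = rospy.Time.now() + rospy.Duration(duration_s)
    while not rospy.is_shutdown() and rospy.Time.now() < end:
        pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    drive(0.2, 0.0, 3.0)  # creep forward for three seconds
```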

    Towards a framework for socially interactive robots

    In recent decades, research in the field of social robotics has grown considerably. Different types of robots are being developed, and their roles within society are gradually expanding. Robots endowed with social skills are intended for a range of applications: for example, as interactive teachers and educational assistants, to support diabetes management in children, to help older people with special needs, as interactive actors in theater, or even as assistants in hotels and shopping centers.

    The RSAIT research team has been working in several areas of robotics, in particular control architectures, robot exploration and navigation, machine learning, and computer vision. The work presented here aims to add a new layer to those earlier developments: the human-robot interaction layer, which focuses on the social capabilities a robot should display when interacting with people, such as expressing and perceiving emotions, sustaining high-level dialogue, learning models of other agents, establishing and maintaining social relationships, using natural means of communication (gaze, gestures, etc.), displaying a distinctive personality and character, and learning social competencies.

    In this doctoral thesis, we try to contribute our grain of sand to the basic questions that arise when we think about social robots: (1) How do humans communicate with (or operate) social robots? (2) How should social robots act with us? Along those lines, the work has unfolded in two phases: in the first, we focused on exploring, from a practical point of view, various ways humans use to communicate naturally with robots; in the second, we investigated how social robots should act with the user.

    In the first phase, we developed three natural user interfaces intended to make interaction with social robots more natural. To test these interfaces, two applications with different uses were developed: guide robots and a humanoid robot control system for entertainment purposes. Working on these applications allowed us to endow our robots with some basic skills, such as navigation, robot-to-robot communication, and speech recognition and understanding.

    In the second phase, we focused on identifying and developing the basic behavioral modules that this type of robot needs to be socially believable and trustworthy while acting as a social agent. A framework for socially interactive robots was developed that allows robots to express different types of emotions and display natural, human-like body language according to the task at hand and the environmental conditions.

    The different development stages of our social robots have been validated through public performances. Exposing our robots to the public in these performances has become an essential tool for qualitatively measuring the social acceptance of the prototypes we are developing. Just as robots need a physical body to interact with the environment and become intelligent, social robots need to participate socially in the real tasks for which they were developed in order to improve their sociability.

    Robot mediated communication: Enhancing tele-presence using an avatar

    In the past few years there has been substantial development in the field of tele-presence. These developments have made tele-presence technologies easily accessible and have enhanced the experience. Since tele-presence is used not only for tele-presence-assisted group meetings but also in some forms of Computer Supported Cooperative Work (CSCW), these activities have also been facilitated. One lingering issue is how to properly convey the presence of non-co-located members to the rest of the group. Current commercially available tele-presence technology can exhibit a limited level of social presence but no physical presence. To address this lack of presence, a system using tele-operated robots as avatars for remote team members is implemented here and its efficacy tested. The testing covers both the level of presence that robot avatars can exhibit and how their efficacy for this task changes with the robot's morphology. Using two types of robot as tele-presence avatars, a humanoid robot and an industrial robot arm, it is found that the humanoid robot with an appropriate control system is better at exhibiting social presence. Further, compared with a voice-only scenario, both robots proved significantly better in terms of both cooperative task solving and social presence. These results indicate that, with an appropriate control system, a humanoid robot can outperform an industrial robot in these types of tasks, and they support aiming for a humanoid design behaving in a human-like way in order to emulate social interactions closer to human norms. This has implications for the design of autonomous socially interactive robot systems.