
    Design and Development of a Robot Guided Rehabilitation Scheme for Upper Extremity Rehabilitation

    To rehabilitate individuals with impaired upper-limb function, we have designed and developed a robot-guided rehabilitation scheme. A humanoid robot, NAO, was used for this purpose. NAO has 25 degrees of freedom. With its sensors and actuators, it can walk forward and backward, sit down and stand up, wave its hand, speak to an audience, sense touch, and recognize the person it is meeting. These qualities make NAO well suited as a coach to guide subjects through rehabilitation exercises. To demonstrate rehabilitation exercises with NAO, a library of recommended rehabilitation exercises involving shoulder (i.e., abduction/adduction, vertical flexion/extension, and internal/external rotation) and elbow (i.e., flexion/extension) joint movements was created in Choregraphe (a graphical programming interface). In experiments, NAO was maneuvered to instruct and demonstrate the exercises from the NRL. A more complex 'touch and play' game, representing a multi-joint movement exercise, was also developed in which NAO plays with the subject. To develop the proposed tele-rehabilitation scheme, a kinematic model of the human upper extremity was derived using modified Denavit-Hartenberg notation. A complete geometric solution was developed to find a unique inverse kinematic solution for the human upper extremity from Kinect data. In the tele-rehabilitation scheme, a therapist can remotely tele-operate NAO in real time to instruct and demonstrate different arm movement exercises to subjects. A Kinect sensor was used in this scheme to capture the tele-operator's kinematic data. Experimental results reveal that NAO can be tele-operated successfully to instruct and demonstrate arm movement exercises. A control algorithm was developed in MATLAB for the proposed robot-guided supervised rehabilitation scheme. Experimental results show that NAO and the Kinect sensor can effectively be used to supervise and guide subjects in performing active rehabilitation exercises for shoulder and elbow joint movements.
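    The abstract above builds its upper-extremity model on modified Denavit-Hartenberg notation. As an illustration only (the thesis's MATLAB implementation and DH parameter table are not given here, and the function names below are assumptions), a minimal sketch of the modified (Craig) DH link transform and a forward-kinematics chain might look like:

```python
import numpy as np

def modified_dh_transform(alpha, a, d, theta):
    """Homogeneous transform between consecutive link frames using the
    modified (Craig) Denavit-Hartenberg convention:
    Rot_x(alpha) * Trans_x(a) * Rot_z(theta) * Trans_z(d)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,   a],
        [st * ca,  ct * ca, -sa,  -sa * d],
        [st * sa,  ct * sa,  ca,   ca * d],
        [0.0,      0.0,      0.0,  1.0],
    ])

def forward_kinematics(dh_rows, joint_angles):
    """Chain the per-joint transforms.
    dh_rows holds the fixed (alpha, a, d) parameters for each joint;
    joint_angles holds the variable theta for each joint."""
    T = np.eye(4)
    for (alpha, a, d), theta in zip(dh_rows, joint_angles):
        T = T @ modified_dh_transform(alpha, a, d, theta)
    return T  # end-effector pose; position is T[:3, 3]
```

    For example, chaining three joints whose DH rows place two unit-length links along x yields an end-effector 2 m from the base at zero joint angles; rotating the first joint by 90 degrees moves it onto the y axis.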

    Robot mediated communication: Enhancing tele-presence using an avatar

    In the past few years there has been substantial development in the field of tele-presence. These developments have made tele-presence technologies easily accessible and have enhanced the experience. Since tele-presence is used not only for tele-presence-assisted group meetings but also in some forms of Computer Supported Cooperative Work (CSCW), these activities have been facilitated as well. One lingering issue is how to properly transmit the presence of non-co-located members to the rest of the group. Current commercially available tele-presence technology can exhibit a limited level of social presence, but no physical presence. To address this lack of presence, a system is implemented here that uses tele-operated robots as avatars for remote team members, and its efficacy is tested. The testing covers both the level of presence that robot avatars can exhibit and how the efficacy of these robots for this task changes with the morphology of the robot. Using two different types of robots, a humanoid robot and an industrial robot arm, as tele-presence avatars, it is found that the humanoid robot with an appropriate control system is better at exhibiting a social presence. Further, both robots proved significantly better than a voice-only scenario in terms of both cooperative task solving and social presence. These results indicate that, with an appropriate control system, a humanoid robot can be better than an industrial robot in these types of tasks, and they support the validity of aiming for a humanoid design that behaves in a human-like way in order to emulate social interactions closer to human norms. This has implications for the design of autonomous socially interactive robot systems.

    Towards a framework for socially interactive robots

    In recent decades, research in the field of social robotics has grown considerably. The development of different types of robots and their roles within society are gradually expanding. Robots endowed with social skills are intended for different applications; for example, as interactive teachers and educational assistants, to support diabetes management in children, to help elderly people with special needs, as interactive actors in the theatre, or even as assistants in hotels and shopping centres. The RSAIT research team has been working in several areas of robotics, in particular control architectures, robot exploration and navigation, machine learning, and computer vision. The work presented in this research aims to add a new layer to that previous development: the human-robot interaction layer, which focuses on the social capabilities a robot should display when interacting with people, such as expressing and perceiving emotions, sustaining high-level dialogue, learning models of other agents, establishing and maintaining social relationships, using natural means of communication (gaze, gestures, etc.), displaying a distinctive personality and character, and learning social competencies. In this doctoral thesis, we try to contribute our grain of sand to the basic questions that arise when we think about social robots: (1) How do we humans communicate with (or operate) social robots? and (2) How do social robots act with us? Along those lines, the work has been carried out in two phases: in the first, we focused on exploring, from a practical point of view, several ways in which humans communicate with robots in a natural manner. 
    In the second, we further investigated how social robots should act with the user. Regarding the first phase, we developed three natural user interfaces intended to make interaction with social robots more natural. To test these interfaces, two applications with different uses were developed: guide robots and a humanoid-robot control system for entertainment purposes. Working on these applications allowed us to endow our robots with some basic abilities, such as navigation, robot-to-robot communication, and speech recognition and understanding. In the second phase, on the other hand, we focused on identifying and developing the basic behavioural modules that this kind of robot needs in order to be socially believable and trustworthy while acting as a social agent. A framework for socially interactive robots has been developed that allows robots to express different types of emotions and to display natural, human-like body language according to the task at hand and the environmental conditions. The different development stages of our social robots have been validated through public performances. Exposing our robots to the public in these performances has become an essential tool for qualitatively measuring the social acceptance of the prototypes we are developing. Just as robots need a physical body to interact with the environment and become intelligent, social robots need to engage socially in the real tasks for which they have been developed in order to improve their sociability.

    Homecare Robotic Systems for Healthcare 4.0: Visions and Enabling Technologies

    Powered by technologies that originated in manufacturing, the fourth revolution of healthcare technologies (Healthcare 4.0) is happening. As an example of this revolution, a new generation of homecare robotic systems (HRS), based on cyber-physical systems (CPS) with higher speed and more intelligent execution, is emerging. In this article, new visions and features of the CPS-based HRS are proposed. The latest progress in related enabling technologies is reviewed, including artificial intelligence, sensing fundamentals, materials and machines, cloud computing and communication, as well as motion capture and mapping. Finally, the future perspectives of the CPS-based HRS and the technical challenges faced in each technical area are discussed.

    Implementation of NAO robot maze navigation based on computer vision and collaborative learning.

    Maze navigation using one or more robots has become a recurring challenge in the scientific literature and in real-life practice, with fleets having to find faster and better ways to navigate environments such as travel hubs and airports, or to evacuate disaster zones. Many methodologies have been explored to solve this problem, including the implementation of a variety of sensors and other signal-receiving systems. Most interestingly, camera-based techniques have become more popular in these kinds of scenarios, given their robustness and scalability. In this paper, we implement an end-to-end strategy to address this scenario, allowing a robot to solve a maze autonomously by using computer vision and path planning. In addition, this robot shares the generated knowledge with another robot, which has to adapt its mechanical characteristics to be capable of solving the same challenge. The paper presents experimental validation of the four components of this solution, namely camera calibration, maze mapping, path planning, and robot communication. Finally, we showcase some initial experimentation with a pair of robots with different mechanical characteristics. Further applications of this work include communicating robots for other tasks, such as teaching assistance, remote classes, and other innovations in higher education.
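    The path-planning component mentioned above is not specified in detail in the abstract. A common baseline for maze solving on a grid map is breadth-first search, which can be sketched as follows (the grid encoding and function name are illustrative assumptions, not the paper's actual code):

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over a 4-connected occupancy grid.
    grid: list of strings, '#' = wall, '.' = free cell.
    Returns the shortest list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set doubling as a backpointer map
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:        # reconstruct the path by walking backpointers
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # goal not reachable
```

    Because BFS explores cells in order of distance from the start, the first time the goal is dequeued the reconstructed path is guaranteed to be a shortest one on the grid.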

    A Survey of Applications and Human Motion Recognition with Microsoft Kinect

    Microsoft Kinect, a low-cost motion-sensing device, enables users to interact with computers or game consoles naturally through gestures and spoken commands, without any other peripheral equipment. As such, it has attracted intense interest in research and development on Kinect technology. In this paper, we present a comprehensive survey of Kinect applications and the latest research and development on motion recognition using data captured by the Kinect sensor. On the applications front, we review the applications of Kinect technology in a variety of areas, including healthcare, education and performing arts, robotics, sign language recognition, retail services, workplace safety training, as well as 3D reconstruction. On the technology front, we provide an overview of the main features of both versions of the Kinect sensor together with the depth-sensing technologies used, and review the literature on human motion recognition techniques used in Kinect applications. We provide a classification of motion recognition techniques to highlight the different approaches used in human motion recognition. Furthermore, we compile a list of publicly available Kinect datasets. These datasets are valuable resources for researchers investigating better methods for human motion recognition and lower-level computer vision tasks such as segmentation, object detection and human pose estimation.
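    Many of the motion-recognition techniques surveyed above start from joint angles computed from the Kinect's skeleton stream. A minimal sketch, assuming the 3-D joint positions have already been extracted (the Kinect SDK calls themselves are omitted, and the function name is an assumption):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, formed by segments b->a and b->c.
    E.g. the elbow flexion angle from (shoulder, elbow, wrist) positions."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # clamp to [-1, 1] to guard against floating-point drift before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
```

    With this convention, `joint_angle(shoulder, elbow, wrist)` returns 180 degrees for a fully extended arm and smaller values as the elbow flexes; sequences of such angles per frame are a typical feature input for gesture classifiers.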

    Robotic Embodiment Developing a System for and Applications with Full Body Ownership of a Humanoid Robot

    It has been shown that, with appropriate multisensory stimulation, people can experience an illusion of owning an artificial object as part of their own body. Such body ownership illusions have been shown to occur with artificial limbs, such as rubber hands, and even with entire artificial or virtual bodies. Although extensive research has been carried out regarding full body ownership illusions with mannequins and virtual bodies, few studies apply this concept to humanoid robots. On the other hand, extensive research has been carried out with robots in terms of telepresence and remote manipulation of the robot, known as teleoperation. Combining these concepts gives rise to a highly immersive, embodied experience in a humanoid robot located at a remote physical location, which holds great potential in terms of real-world applications. In this thesis, we aim to apply this phenomenon of full body ownership illusions in the context of humanoid robots, and to develop real-world applications where this technology could be beneficial. More specifically, relying on knowledge gained from previous studies of body ownership illusions, we investigated whether it is possible to elicit this illusion with a humanoid robot. In addition, we developed a system in the context of telepresence robots, where the participant is embodied in a humanoid robot present in a different physical location and can use this robotic body to interact with the remote environment. To test the functionality of the system and to gain an understanding of body ownership illusions with robots, we carried out two experimental studies and one case study of a demonstration of the system as a real-world application. In the Brain-Computer Interface versus Eye Tracker study, we used our system to investigate whether it was possible to induce a full body ownership illusion over a humanoid robot with a highly 'robotic' appearance. 
    In addition, we compared two different abstract methods of control, a Steady-State Visually Evoked Potential (SSVEP) based Brain-Computer Interface (BCI) and eye tracking, in an immersive environment to drive the robot. This was done mainly as a motivation for developing a prototype of a system that could be used by disabled patients. Our results showed that a feeling of body ownership illusion and agency can be induced, even though the postures of the participant and the embodied robot were incongruent (the participant was sitting, while the robot was standing). Additionally, both BCI and eye tracking were reported to be suitable methods of control, although the degree of body ownership illusion was influenced by the control method, with higher ownership scores reported for the BCI condition. In the Tele-Immersive Journalism case study, we used the same system as above, but with the added capability of letting the participant control the robot body by moving their own body. Since in this case we provided synchronous visuomotor correlations with the robotic body, we expected this to result in an even higher level of body ownership illusion. By making the robot body the source of the participant's associated sensations, we simulate a type of virtual teleportation. We applied this system successfully to the context of journalism, where a journalist could be embodied in a humanoid robot located at a remote destination and carry out interviews through their robotic body. We provide a case study where the system was used by several journalists to report news about the system itself as well as other stories. In the Multi-Destination Beaming study, we extended the functionality of the system to include three destinations. The aim of the study was to investigate whether participants could cope with being in three places at the same time, embodied in three different surrogate bodies. 
    We had two physical destinations with one robot in each, and a third virtual destination where the participant would be embodied in a virtual body. The results indicate that the system was physically and psychologically comfortable, and was rated highly by participants in terms of usability in the real world. Additionally, high feelings of body ownership illusion and agency were reported, which were not influenced by the robot type. This provides us with clues regarding the body ownership illusion with humanoid robots of different dimensions, along with insight into self-localisation and multilocation. Overall, our results show that it is possible to elicit a full body ownership illusion over humanoid robotic bodies. The studies presented here advance the current theoretical framework of body representation, agency and self-perception by providing information about various factors that may affect the illusion of body ownership, such as a highly robotic appearance of the artificial body, having indirect methods of control, or even being simultaneously embodied in three different bodies. Additionally, the setup described can also be used to great effect for highly immersive remote robotic embodiment applications, such as the one demonstrated here in the field of journalism.

    Dynamic virtual reality user interface for teleoperation of heterogeneous robot teams

    This research investigates the possibility of improving current teleoperation control for heterogeneous robot teams using modern Human-Computer Interaction (HCI) techniques such as Virtual Reality. It proposes a dynamic teleoperation Virtual Reality User Interface (VRUI) framework to improve the current approach to teleoperating heterogeneous robot teams.

    Development of new intelligent autonomous robotic assistant for hospitals

    Continuous technological development in modern societies has increased the quality of life and average life-span of people. This imposes an extra burden on the current healthcare infrastructure, which also creates the opportunity to develop new, autonomous, assistive robots to help alleviate the extra workload. The research question explored the extent to which a prototypical robotic platform can be created and how it may be implemented in a hospital environment, with the aim of assisting hospital staff with daily tasks such as guiding patients and visitors, following patients to ensure safety, and making deliveries to and from rooms and workstations. In terms of major contributions, this thesis outlines five domains of the development of an actual robotic assistant prototype. Firstly, a comprehensive schematic design is presented in which mechanical, electrical, motor control and kinematics solutions are examined in detail. Next, a new method is proposed for assessing the intrinsic properties of different flooring types, using machine learning to classify mechanical vibrations. Thirdly, the technical challenge of enabling the robot to simultaneously map and localise itself in a dynamic environment is addressed, whereby leg detection is introduced to ensure that, whilst mapping, the robot is able to distinguish between people and the background. The fourth contribution is the integration of geometric collision prediction into stabilised dynamic navigation methods, optimising the robot's ability to update real-time path plans in a dynamic environment. Lastly, the problem of detecting gaze at long distances is addressed by means of a new eye-tracking hardware solution that combines infra-red eye tracking and depth sensing. 
    The research serves both to provide a template for the development of comprehensive mobile assistive-robot solutions, and to address some of the inherent challenges currently present in introducing autonomous assistive robots into hospital environments.
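    The flooring-type classification from vibrations described above could, in its simplest form, combine hand-crafted signal features with a distance-based classifier. The features, function names, and centroid values below are illustrative assumptions for exposition, not the thesis's actual method:

```python
import math

def vibration_features(window):
    """Two simple features of one accelerometer window:
    RMS energy (amplitude) and zero-crossing rate (dominant frequency proxy)."""
    rms = math.sqrt(sum(x * x for x in window) / len(window))
    zc = sum(1 for a, b in zip(window, window[1:]) if a * b < 0) / (len(window) - 1)
    return (rms, zc)

def nearest_centroid(features, centroids):
    """Return the floor label whose feature centroid is closest (Euclidean).
    centroids: dict mapping label -> feature tuple, learned from training data."""
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))
```

    In practice the centroids (or a richer classifier) would be trained from labelled vibration recordings per floor type; softer floors damp the vibration, giving lower RMS and fewer zero crossings than hard floors.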