14 research outputs found

    A real-time human-robot interaction system based on gestures for assistive scenarios

    Natural and intuitive human interaction with robotic systems is key to developing robots that assist people easily and effectively. In this paper, a Human-Robot Interaction (HRI) system able to recognize gestures commonly employed in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system handles dynamic gestures such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture consisting of pointing at an object is also recognized. The pointed location is then estimated in order to detect candidate objects the user may be referring to. When the pointed object is unclear to the robot, a disambiguation procedure is performed by means of either a verbal or gestural dialogue. This skill allows the robot to pick up an object on behalf of a user who may have difficulty doing so themselves. The overall system, composed of NAO and Wifibot robots, a Kinect v2 sensor, and two laptops, is first evaluated in a structured lab setup. Then, a broad set of user tests was completed, allowing correct performance to be assessed in terms of recognition rates, ease of use, and response times.
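    The dynamic-gesture recognition relies on Dynamic Time Warping (DTW) over sequences of per-frame features. Below is a minimal sketch of a DTW distance between two such feature sequences; the feature extraction from depth maps is assumed to happen elsewhere, and all names here are illustrative rather than taken from the paper.

        # Minimal DTW sketch: compare two gesture recordings given as
        # [frames, features] arrays; a query gesture can then be classified
        # by its nearest template under this distance.
        import numpy as np

        def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
            n, m = len(seq_a), len(seq_b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # frame distance
                    cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                         cost[i, j - 1],      # deletion
                                         cost[i - 1, j - 1])  # match
            return cost[n, m]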

    Finger Cue for Mobile Robot Motion Control

    Current technology enables automation in which robots help or substitute for humans in industrial and domestic applications. This growing presence of robots in human life creates a new requirement: a method of communication between human and robot. Finger gestures are one of the oldest languages, and they are an easily applied method when image detection is connected to the robot's actuators so it can respond to human orders. This paper presents a method to navigate robots based on human finger cues, including "Forward," "Backward," "Turn right," "Turn left," and "Stop" to generate the corresponding forward, backward, turn-right, turn-left, and stop motions. Finger detection is facilitated by a camera module (NFR2401L) with a 640 x 480 image plane at 30 fps. When the detection falls at coordinates x < 43 and y < 100, the robot moves forward; at x < 29 and y < 100, it turns left; and at x < 19 and y < 100, it turns right. An experiment was conducted to show the effectiveness of the proposed method, and to some extent the robot can follow human cues to navigate to its assigned location.
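    The reported thresholds overlap (a point with x < 19 also satisfies x < 43), so a natural reading is that they are checked from most restrictive to least. The sketch below illustrates that mapping under this assumption; the function and command names are hypothetical, not from the paper.

        # Map a detected fingertip position (pixels in a 640 x 480 frame)
        # to a motion command, testing the tightest threshold first.
        def finger_to_command(x: int, y: int) -> str:
            if y < 100:
                if x < 19:
                    return "TURN_RIGHT"
                if x < 29:
                    return "TURN_LEFT"
                if x < 43:
                    return "FORWARD"
            return "STOP"

        print(finger_to_command(10, 50))   # TURN_RIGHT
        print(finger_to_command(35, 80))   # FORWARD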

    Combination of Flex Sensor and Electromyography for Hybrid Control Robot

    Alternative robot control methods are very important for solving problems for people with special needs. In this research, a robot arm spanning the elbow to the hand is designed based on the human right arm. The robot is controlled by the human left arm. The positions of the flex sensors are studied to recognize elbow flexion-extension, forearm supination-pronation, wrist flexion-extension, and wrist radial-ulnar deviation. The robot hand has two functions: grasping and releasing objects. The robot has four joints, and six flex sensors are attached to the human left arm. Electromyography signals from facial muscle contractions are used to classify grasping and releasing of the hand. The results show that the flex sensor accuracy is 3.54°, with a standard error of approximately 0.040 V. Seven operators completed tasks of taking and releasing objects at three different locations: perpendicular to the robot, front-left, and front-right of the robot. The average times to finish each task are 15.7, 17.6, and 17.1 seconds, respectively. The robot control system works in real time. This control method can substitute for right-hand function in tasks of taking and releasing objects.
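    As a rough illustration of the hybrid control idea, the sketch below converts a flex-sensor voltage into a joint angle with a linear model and thresholds a rectified EMG window to switch between grasping and releasing. The calibration constants and threshold are hypothetical placeholders, not values from the paper.

        import numpy as np

        V_STRAIGHT = 2.0   # sensor voltage at zero flexion (hypothetical)
        V_PER_DEG = 0.01   # voltage change per degree of flexion (hypothetical)

        def voltage_to_angle(v: float) -> float:
            # Linear flex-sensor model: joint angle in degrees from voltage.
            return (v - V_STRAIGHT) / V_PER_DEG

        def emg_hand_command(emg_window: np.ndarray, threshold: float = 0.3) -> str:
            # Mean rectified amplitude of a facial-EMG window decides
            # between the grasp and release commands.
            return "GRASP" if np.mean(np.abs(emg_window)) > threshold else "RELEASE"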

    Context change and triggers for human intention recognition

    In human-robot interaction, understanding human intention is important for smooth interaction between humans and robots. Proactive human-robot interaction is the trend, and it relies on recognising human intentions to complete tasks. The reasoning is accomplished based on the current human state, the environment and context, and human intention recognition and prediction. Many factors may affect human intention, including cues that are difficult to recognise directly from an action but can be perceived from changes in the environment or context. The changes that affect human intention are triggers and serve as strong evidence for identifying it. Therefore, detecting such changes and identifying such triggers is a promising approach to assist human intention recognition. This paper discusses the current state of the art in human intention recognition in human-computer interaction and illustrates the importance of context change and triggers for human intention recognition through a variety of examples.
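    A minimal sketch of the trigger idea, with purely illustrative context keys and values: watch a stream of context observations and flag any change as a candidate trigger for intention recognition.

        def detect_triggers(prev: dict, curr: dict) -> list:
            # Return (key, old_value, new_value) for every context change.
            return [(k, prev.get(k), v) for k, v in curr.items() if prev.get(k) != v]

        # Example: the kettle finishing boiling suggests the person intends
        # to make a drink, before any hand movement is observed.
        before = {"kettle": "boiling", "cup": "empty", "tv": "off"}
        after  = {"kettle": "done",    "cup": "empty", "tv": "off"}
        print(detect_triggers(before, after))  # [('kettle', 'boiling', 'done')]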

    Automatic Learning Improves Human-Robot Interaction in Productive Environments: A Review

    In the creation of new industries, products, and services, all of which are advances of the Fourth Industrial Revolution, human-robot interaction that includes machine learning and computer vision is an element to consider, since it promotes collaborative environments between people and robots. The use of machine learning and computer vision provides the tools needed to increase productivity and minimize delivery reaction times by assisting in the optimization of complex production planning processes. This review of the state of the art presents the main trends that seek to improve human-robot interaction in productive environments, and identifies challenges in research as well as in industrial and technological development on this topic. In addition, this review offers a proposal on the need to use artificial intelligence in all processes of Industry 4.0 as a crucial linking element among humans, robots, and intelligent and traditional machines, as well as a mechanism for quality control and occupational safety. This work has been funded by the Spanish Government grant [TIN2016-76515-R] for the COMBAHO project, supported with FEDER funds.

    Artificial Vision Algorithms for Socially Assistive Robot Applications: A Review of the Literature

    Today, computer vision algorithms are very important for many fields and applications, such as closed-circuit television security, health status monitoring, recognition of specific people or objects, and robotics. On this topic, the present paper provides a recent review of the literature on computer vision algorithms (recognition and tracking of faces, bodies, and objects) oriented towards socially assistive robot applications. The performance, frames-per-second (FPS) processing speed, and hardware used to run the algorithms are highlighted by comparing the available solutions. Moreover, this paper provides general information for researchers interested in knowing which vision algorithms are available, enabling them to select the one most suitable for inclusion in their robotic system applications. Conacyt Doctoral Scholarship, CVU No. 64683.

    Learning from human-robot interaction

    In recent years it has become increasingly common to see robots in homes. Robotics is ever more present in many aspects of our daily lives, in domestic assistance devices, autonomous cars, and personal assistants. The interaction between these assistive robots and their users is one of the key aspects of service robotics, and it needs to be comfortable and intuitive for its use to be effective. These interactions with users are necessary for the robot to learn and naturally update both its model of the world and its capabilities. Within service robotic systems, many components are needed for proper operation; this thesis focuses on their visual perception system. For humans, visual perception is one of the most essential components, enabling tasks such as recognizing objects or other people, or estimating 3D information. The great advances achieved in recent years in automatic recognition tasks use approaches based on machine learning, in particular deep learning techniques. Most current work focuses on models trained a priori on very large datasets. However, these models, although trained on large amounts of data, cannot in general cope with the challenges that arise when dealing with real data in domestic environments. For example, it is common to encounter new objects that did not exist when the models were trained. Another challenge comes from the sparsity of objects: some objects appear very rarely, and therefore there were very few, or no, examples in the training data available when the model was created. This thesis was developed within the context of the IGLU (Interactive Grounded Language Understanding) project. Within the project and its objectives, the main goal of this doctoral thesis is to investigate novel methods for a robot to learn incrementally through multimodal interaction with the user. In pursuit of this main goal, the principal lines of work carried out during this thesis have been:
    - Creating a benchmark better suited to the task of learning through natural user-robot interaction. For example, most datasets for object recognition focus on photos of different scenes with multiple classes per photo; a dataset that combines user-robot interaction with object learning is needed.
    - Improving existing object-learning systems and adapting them to learning from multimodal human interaction. Object detection work focuses on detecting all learned objects in an image; our goal is to use the interaction to find the referenced object and learn it incrementally.
    - Developing incremental learning methods that can be used in incremental scenarios, e.g., the appearance of a new object class or changes within an object class over time. Our goal is to design a system that can learn classes from scratch and update them as new data appears.
    - Building a complete prototype for incremental and multimodal learning using human-robot interaction, integrating the methods developed for the other objectives and evaluating it.
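    A minimal sketch of the incremental learning setting described above, assuming a nearest-class-mean classifier over feature embeddings; this illustrates the general idea (new classes can appear at any time) rather than the method developed in the thesis.

        import numpy as np

        class IncrementalClassifier:
            def __init__(self):
                self.means = {}   # class name -> running mean embedding
                self.counts = {}  # class name -> number of examples seen

            def update(self, label: str, embedding: np.ndarray) -> None:
                # Add one labelled example; unseen labels create new classes.
                n = self.counts.get(label, 0)
                mean = self.means.get(label, np.zeros_like(embedding))
                self.means[label] = (mean * n + embedding) / (n + 1)
                self.counts[label] = n + 1

            def predict(self, embedding: np.ndarray) -> str:
                # Nearest class mean by Euclidean distance.
                return min(self.means,
                           key=lambda c: np.linalg.norm(self.means[c] - embedding))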