
    Multimodal human hand motion sensing and analysis - a review


    Human-aware Collaborative Manipulation with Reaching Motion Prediction

    This dissertation presents an approach to improving human-robot interaction in an industrial collaborative setting, where a human operator and a collaborative industrial robot work within a shared workspace. The approach focuses on a situation where part of the assembly process must be carried out by a human operator, whose assembly station is located on a workbench, and a robot is used to pick and place products at specific locations on the operator's workstation. Because those locations can be accessed by either the robot or the human operator at any time, collisions can occur; they should be avoided both to make the process more natural for the human operator and to prevent the emergency stop of the collaborative robot, which must then be restarted manually and thus decreases productivity. To prevent these collisions, the proposed system defines key-areas at each of those locations and at other positions relevant to the collaborative task. The system uses a Kinect sensor and a neural network to track the operator's hand over time, and Gaussian Mixture Models to predict the likely destination key-area given the trajectory observed so far. If a collision is predicted, the robot pauses its current task; once the conflict has been resolved, it resumes from the point where it stopped.
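The GMM-based destination prediction described in this abstract could be sketched roughly as follows: fit one Gaussian Mixture Model per key-area on hand trajectories recorded during reaches toward that area, then score the observed partial trajectory under each model and pick the most likely destination. This is a minimal illustration with synthetic 2-D data, not the dissertation's actual pipeline; all names and parameters here are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def make_reach(target, n=30):
    """Synthetic hand path from the origin toward `target`, with noise."""
    t = np.linspace(0, 1, n)[:, None]
    return t * np.asarray(target) + rng.normal(0, 0.02, (n, 2))

# Hypothetical key-areas on the workbench (names are illustrative).
key_areas = {"bin_A": (0.5, 0.4), "bin_B": (-0.3, 0.6)}

# One GMM per key-area, fit on positions from many demonstration reaches.
models = {}
for name, target in key_areas.items():
    paths = np.vstack([make_reach(target) for _ in range(20)])
    models[name] = GaussianMixture(n_components=3, random_state=0).fit(paths)

def predict_destination(partial_path):
    # Average log-likelihood of the observed points under each area's GMM;
    # the highest-scoring area is the predicted destination.
    scores = {name: m.score(partial_path) for name, m in models.items()}
    return max(scores, key=scores.get)

observed = make_reach(key_areas["bin_A"])[:12]  # first 12 samples of a reach
print(predict_destination(observed))
```

A real system would score the trajectory incrementally as new hand positions arrive, pausing the robot as soon as the predicted key-area conflicts with its planned motion.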

    Social Activity Recognition on Continuous RGB-D Video Sequences

    Modern service robots are provided with one or more sensors, often including RGB-D cameras, to perceive objects and humans in the environment. This paper proposes a new system for recognising human social activities from a continuous stream of RGB-D data. Many previous works have succeeded in recognising activities from clipped videos in datasets, but robotic applications need to move to more realistic scenarios in which such activities are not manually selected. It is therefore useful to detect the time intervals in which humans are performing social activities, the recognition of which can help trigger human-robot interactions or detect situations of potential danger. The main contributions of this work are a novel system for recognising social activities from continuous RGB-D data, combining temporal segmentation and classification, and a model for learning proximity-based priors of the social activities. A new public dataset with RGB-D videos of social and individual activities is also provided and used to evaluate the proposed solutions. The results show the good performance of the system in recognising social activities from continuous RGB-D data.
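One simple way to combine temporal segmentation with classification on a continuous stream, as the abstract describes, is to classify each frame and then merge runs of identical labels into time intervals. This is only a hedged sketch of the general idea; the labels and the merging rule are illustrative, not the paper's actual method.

```python
def segment(frame_labels):
    """Merge consecutive identical per-frame labels into
    (start_frame, end_frame, label) intervals."""
    intervals = []
    start = 0
    for i in range(1, len(frame_labels) + 1):
        # Close the current interval at a label change or at the end.
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            intervals.append((start, i - 1, frame_labels[start]))
            start = i
    return intervals

# Per-frame outputs from a hypothetical activity classifier.
labels = ["none", "none", "handshake", "handshake", "handshake", "none"]
print(segment(labels))  # [(0, 1, 'none'), (2, 4, 'handshake'), (5, 5, 'none')]
```

In practice the per-frame labels would come from a classifier over skeleton and proximity features, and short spurious intervals would be filtered or smoothed before triggering a human-robot interaction.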

    Robot skill learning through human demonstration and interaction

    Nowadays robots are increasingly involved in more complex and less structured tasks, so new approaches to fast robot skill acquisition are highly desirable. This research aims to develop an overall framework for robot skill learning through human demonstration and interaction. Through low-level demonstration and interaction with humans, the robot can learn basic skills, which are treated as primitive actions. In high-level learning, complex skills demonstrated by the human are automatically translated into skill scripts that the robot executes. This dissertation summarizes my major research activities in robot skill learning. First, a framework for Programming by Demonstration (PbD) with reinforcement learning for human-robot collaborative manipulation tasks is described; with it, the robot can learn low-level skills such as collaborating with a human to lift a table successfully and efficiently. Second, to develop a high-level skill acquisition system, we explore the use of a 3D sensor to recognize human actions, implementing a Kinect-based action recognition system that considers both object/action dependencies and sequential constraints. Third, we extend the action recognition framework by fusing information from multimodal sensors, which allows fine assembly actions to be recognized. Fourth, a Portable Assembly Demonstration (PAD) system is built that automatically generates skill scripts from human demonstration; each skill script includes the object type, the tool, the action used, and the assembly state. Finally, the generated skill scripts are executed by a dual-arm robot. The proposed framework was experimentally evaluated.
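The abstract says each skill script records the object type, the tool, the action, and the assembly state. A minimal sketch of such a data structure, with field names that are assumptions rather than the dissertation's actual schema, might look like this:

```python
from dataclasses import dataclass

@dataclass
class SkillStep:
    obj: str     # object type being manipulated
    tool: str    # tool used ("hand" if none)
    action: str  # primitive action name learned from demonstration
    state: str   # assembly state reached after the step

# A hypothetical script generated from one demonstrated assembly.
script = [
    SkillStep("bracket", "hand", "pick", "bracket_held"),
    SkillStep("bracket", "hand", "place", "bracket_on_base"),
    SkillStep("screw", "screwdriver", "fasten", "bracket_fixed"),
]

def execute(script):
    for step in script:
        # A real executor would dispatch each step to a robot motion
        # primitive; here we only trace the sequence.
        print(f"{step.action} {step.obj} with {step.tool} -> {step.state}")

execute(script)
```

Representing steps as plain records like this keeps the script independent of any particular robot: the same sequence could be replayed by the dual-arm executor the abstract mentions or by a simulator.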