    Continual Learning of Hand Gestures for Human Robot Interaction

    Human communication is multimodal. For years, natural language processing has been studied as a form of human-machine or human-robot interaction. In recent years, computer vision techniques have been applied to the recognition of static and dynamic gestures, and progress is also being made in sign language recognition. The typical way to train a machine learning algorithm to perform a classification task is to provide training examples for all the classes the model needs to identify. In a real-world scenario, such as the use of assistive robots, it is useful to learn new concepts from interaction. However, unlike biological brains, artificial neural networks suffer from catastrophic forgetting and, as a result, are not good at incrementally learning new classes. In this thesis, the HAnd Gesture Incremental Learning (HAGIL) framework is proposed as a method to incrementally learn to classify static hand gestures. We show that HAGIL is able to incrementally learn up to 36 new symbols using only 5 samples for each old symbol, achieving a final average accuracy of over 90%. In addition, the incremental training time is reduced to 10% of the time required when using all available data.
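    The abstract does not detail HAGIL's architecture, but the behaviour it describes, keeping only a handful of samples per previously learned symbol and replaying them while training on new symbols, matches exemplar rehearsal, a standard remedy for catastrophic forgetting. Below is a minimal Python/PyTorch sketch of that general idea; the model, feature sizes and training loop are illustrative assumptions, not HAGIL's actual implementation.

    import torch
    import torch.nn as nn

    class GestureClassifier(nn.Module):
        """Small MLP over hand-pose features; all sizes are assumptions."""
        def __init__(self, n_features=63, max_classes=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, 128), nn.ReLU(),
                nn.Linear(128, max_classes),
            )

        def forward(self, x):
            return self.net(x)

    def incremental_step(model, new_x, new_y, exemplars, per_class=5, epochs=50):
        """Train on new-class data mixed with a few stored old-class samples."""
        if exemplars:
            old_x = torch.stack([s for xs in exemplars.values() for s in xs])
            old_y = torch.tensor([c for c, xs in exemplars.items() for _ in xs])
            xs, ys = torch.cat([new_x, old_x]), torch.cat([new_y, old_y])
        else:
            xs, ys = new_x, new_y
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(epochs):
            opt.zero_grad()
            nn.functional.cross_entropy(model(xs), ys).backward()
            opt.step()
        for c in new_y.unique().tolist():  # keep a few exemplars per new class
            exemplars[c] = [s for s in new_x[new_y == c][:per_class]]

    Because only a few samples per old symbol are replayed, each incremental step trains on a small dataset, which is consistent with the large reduction in incremental training time the abstract reports.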

    A real-time human-robot interaction system based on gestures for assistive scenarios

    Natural and intuitive human interaction with robotic systems is a key point in developing robots that assist people in an easy and effective way. In this paper, a Human Robot Interaction (HRI) system able to recognize gestures usually employed in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system deals with dynamic gestures such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture consisting of pointing at an object is also recognized. The pointed location is then estimated in order to detect candidate objects the user may be referring to. When the pointed object is unclear to the robot, a disambiguation procedure by means of either a verbal or gestural dialogue is performed. This skill allows the robot to pick up an object on behalf of a user who may have difficulty doing so themselves. The overall system, composed of a NAO and a Wifibot robot, a Kinect™ v2 sensor and two laptops, is first evaluated in a structured lab setup. Then, a broad set of user tests was completed, allowing correct performance to be assessed in terms of recognition rates, ease of use and response times.
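    The abstract names Dynamic Time Warping (DTW) over gesture-specific features computed from depth maps. As a rough illustration only, a minimal DTW distance with nearest-template classification could look like the sketch below; the frame features and the template set are assumptions, and the paper's actual features are not shown here.

    import numpy as np

    def dtw_distance(a, b):
        """DTW cost between two gestures, each an (n_frames, n_features) array."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame distance
                cost[i, j] = d + min(cost[i - 1, j],     # skip a frame in a
                                     cost[i, j - 1],     # skip a frame in b
                                     cost[i - 1, j - 1]) # align the two frames
        return cost[n, m]

    def classify(sequence, templates):
        """Pick the gesture label whose reference sequence is closest under DTW."""
        return min(templates, key=lambda g: dtw_distance(sequence, templates[g]))

    DTW's appeal for gestures such as waving or nodding is that it tolerates differences in speed: two recordings of the same gesture performed at different paces still align with low cost.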

    Assessing the effectiveness of multi-touch interfaces for DP operation

    Navigating a vessel close to offshore installations using dynamic positioning (DP) systems is a challenge. The operator's only means of manipulating the system is through its interface, which can be categorized as the physical appearance of the equipment and the visualization of the system. Are there forms of interaction between the operator and the system that can reduce strain and cognitive load during DP operations? Can parts of the system (e.g. displays) be physically brought closer to the user to enhance the feeling of control when operating the system? Can these changes make DP operations more efficient and safe? These questions inspired this research project, which investigates the use of multi-touch and hand gestures known from consumer products to directly manipulate the visualization of a vessel in the 3D scene of a DP system. Usability methodologies and evaluation techniques that are widely used in consumer market research were used to investigate how these interaction techniques, which are new to the maritime domain, could make interaction with the DP system more efficient and transparent during both standard and safety-critical operations. After investigating which gestures felt natural to use by running user tests with a paper prototype, the gestures were implemented in a Rolls-Royce DP system and tested in a static environment. The results showed that test participants performed significantly faster using direct gesture manipulation than using traditional button/menu interaction. To support these results, further tests were carried out to investigate how gestures are performed in a moving environment, using a motion platform to simulate rough sea conditions. The key results and lessons learned from this collection of four user experiments are discussed in this paper, together with the choice of evaluation techniques.
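    The paper does not spell out how gestures map onto the 3D scene, but the direct manipulation it tests (for example, dragging to orbit the vessel view and pinching to zoom) can be sketched as follows; the camera model and handler names are illustrative assumptions, not the Rolls-Royce DP system's API.

    from dataclasses import dataclass

    @dataclass
    class OrbitCamera:
        """Orbiting view of the vessel in the 3D scene; units are assumptions."""
        yaw: float = 0.0     # degrees around the vessel
        pitch: float = 30.0  # degrees above the horizon
        dist: float = 100.0  # metres from the vessel

    def on_drag(cam, dx, dy, sensitivity=0.2):
        """One-finger drag orbits the camera; pitch is clamped to stay usable."""
        cam.yaw = (cam.yaw + dx * sensitivity) % 360.0
        cam.pitch = min(85.0, max(5.0, cam.pitch + dy * sensitivity))

    def on_pinch(cam, scale):
        """Pinch zooms: scale > 1 (fingers moving apart) brings the view closer."""
        cam.dist = min(500.0, max(10.0, cam.dist / scale))

    Mapping each gesture to a single continuous camera parameter is one plausible reading of direct manipulation here: the view responds immediately and proportionally to finger movement, with no menu traversal in between.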