
    Model and implementation of body movement recognition using Support Vector Machines and Finite State Machines with cartesian coordinates input for gesture-based interaction

    The growth of gesture-based interaction in video games has highlighted the potential of this interaction method for a wide range of applications. This paper presents the implementation of an enhanced model for gesture recognition as an input method for software applications. The model uses Support Vector Machines (SVM) and Finite State Machines (FSM), and the implementation was based on a Kinect® device. The model takes its data input as Cartesian coordinates, which makes it more flexible to generalise to different applications than related work in the literature based on accelerometer devices for data input. The results showed that the use of SVM and FSM with Cartesian coordinates as input for gesture-based interaction is very promising: the success rate in gesture recognition was 98%, from a training corpus of 9 sets obtained by recording real users' gestures. A proof-of-concept implementation of the gesture recognition interaction was performed using the application Google Earth®. A preliminary acceptance evaluation indicated that users found interaction with the system via the reported implementation satisfactory.
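    A minimal sketch of such a pipeline, assuming a per-frame SVM pose classifier feeding a gesture FSM (the joint count, pose labels, and transition table below are illustrative assumptions, not the paper's exact design):

    import numpy as np
    from sklearn import svm

    # Each sample: flattened Cartesian (x, y, z) coordinates of tracked
    # joints (e.g. a 20-joint Kinect skeleton), labelled with a pose id.
    X_train = np.random.rand(90, 3 * 20)   # placeholder for recorded frames
    y_train = np.random.randint(0, 3, 90)  # 0=rest, 1=hand_up, 2=hand_out

    pose_classifier = svm.SVC(kernel="rbf")
    pose_classifier.fit(X_train, y_train)

    # The FSM accepts a gesture only when poses occur in the expected order.
    TRANSITIONS = {("start", 1): "raised", ("raised", 2): "swipe_done"}

    def recognize(frames):
        """Run per-frame pose labels through the FSM; return the end state."""
        state = "start"
        for frame in frames:
            pose = int(pose_classifier.predict(frame.reshape(1, -1))[0])
            state = TRANSITIONS.get((state, pose), state)  # hold on no match
        return state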

    Mobile Robot Navigation for Person Following in Indoor Environments

    Service robotics is a rapidly growing area of interest in robotics research. Service robots inhabit human-populated environments and carry out specific tasks. The goal of this dissertation is to develop a service robot capable of following a human leader around populated indoor environments. A classification system for person followers is proposed that clearly defines the expected interaction between the leader and the robotic follower. In populated environments, the robot needs to be able to detect and identify its leader and to track the leader through occlusions, a common characteristic of populated spaces. An appearance-based person descriptor, which augments the Kinect skeletal tracker, is developed, and its performance in detecting and overcoming short- and long-term leader occlusions is demonstrated. While following its leader, the robot has to ensure that it does not collide with stationary and moving obstacles, including other humans, in the environment. This requirement necessitates a systematic navigation algorithm. A modified version of navigation function path planning, called the predictive fields path planner, is developed. This path planner models the motion of obstacles, uses a simplified representation of practical workspaces, and generates bounded, stable control inputs which guide the robot to its desired position without collisions with obstacles. The predictive fields path planner is experimentally verified on a non-person-follower system and then integrated into the robot navigation module of the person follower system. To navigate the robot, it is necessary to localize it within its environment. A mapping approach based on depth data from the Kinect RGB-D sensor is used to generate a local map of the environment. The map is generated by combining inter-frame rotation and translation estimates based on scan generation and dead reckoning, respectively. Thus, a complete mobile robot navigation system for person following in indoor environments is presented.
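    As an illustration of the appearance-descriptor idea, here is a sketch under assumed details (a colour histogram over the torso region reported by the skeletal tracker, matched by histogram intersection; the region layout and threshold are not the dissertation's actual values):

    import numpy as np

    def torso_histogram(rgb_frame, torso_box, bins=16):
        """Histogram one colour channel inside the tracked torso box."""
        x0, y0, x1, y1 = torso_box
        patch = rgb_frame[y0:y1, x0:x1]
        hist, _ = np.histogram(patch[..., 0], bins=bins, range=(0, 255))
        return hist / max(hist.sum(), 1)  # normalise away patch size

    def is_same_leader(stored, candidate, threshold=0.7):
        """Histogram intersection: re-acquire the leader after occlusion."""
        return float(np.minimum(stored, candidate).sum()) >= threshold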

    Secure indoor navigation and operation of mobile robots

    In future work environments, robots will navigate and work side by side with humans. This raises significant challenges for the safety of these robots. In this dissertation, three tasks were realized: 1) implementing a localization and navigation system based on the StarGazer sensor and a Kalman filter; 2) realizing a human-robot interaction system that uses a Kinect sensor with BPNN and SVM models to recognize gestures; and 3) realizing a new collision avoidance system, which generates collision-free paths based on the interaction between the human and the robot.
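    A minimal sketch of the predict/update cycle such a localization system rests on (illustrative only: a linear Kalman filter fusing dead-reckoning displacement with absolute position fixes of the kind a StarGazer landmark sensor provides; the noise values are assumptions):

    import numpy as np

    F = np.eye(2)          # state transition: position carried over
    H = np.eye(2)          # the sensor measures position directly
    Q = np.eye(2) * 0.01   # process (odometry drift) noise, assumed
    R = np.eye(2) * 0.05   # measurement noise, assumed

    def kalman_step(x, P, u, z):
        """One cycle: u = odometry displacement, z = landmark position fix."""
        x_pred = F @ x + u                 # predict with dead reckoning
        P_pred = F @ P @ F.T + Q
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
        x_new = x_pred + K @ (z - H @ x_pred)  # correct with the fix
        P_new = (np.eye(2) - K @ H) @ P_pred
        return x_new, P_new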

    DESIGN AND EVALUATION OF A NONVERBAL COMMUNICATION PLATFORM BETWEEN ASSISTIVE ROBOTS AND THEIR USERS

    Assistive robotics will become integral to the everyday lives of a human population that is increasingly mobile, older, urban-centric, and networked. The overwhelming demands on healthcare delivery alone will compel the adoption of assistive robotics. How will we communicate with such robots, and how will they communicate with us? This research makes the case for a relatively 'artificial' mode of nonverbal human-robot communication that is non-disruptive, non-competitive, and non-invasive, and that we envision will be willingly invited into our private and working lives over time. This research proposes a non-verbal communication (NVC) platform conveyed by familiar lights and sounds, and elaborated here are experiments with our NVC platform in a rehabilitation hospital. The NVC is embedded into the Assistive Robotic Table (ART), developed within our lab, which supports the well-being of an expanding population of older adults and those with limited mobility. The broader aim of this research is to afford people robot assistants that exist and interact with them in the recesses, rather than in the foreground, of their intimate and social lives. With support from our larger research team, I designed and evaluated several alternative modes of nonverbal robot communication, with the objective of establishing a nonverbal human-robot communication loop that evolves with users and can be modified by them. The study was conducted with 10-13 clinicians (doctors and occupational, physical, and speech therapists) at a local rehabilitation hospital through three iterative design and evaluation phases and a final usability study session. For our test case at a rehabilitation hospital, medical staff iteratively refined our NVC platform, stated a willingness to use it, and declared NVC a desirable research path. In addition, these clinicians provided requirements for human-robot interaction (HRI) in clinical settings, suggesting great promise for our mode of human-robot communication in this and other applications and environments involving intimate HRI.
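    The core of such a platform can be pictured as a mapping from robot states to light and sound cues; the sketch below is a hypothetical illustration (the states, cue names, and the light_driver/speaker interfaces are invented for this example, not ART's actual vocabulary):

    ROBOT_STATE_CUES = {
        "idle":      {"light": "soft_white_pulse", "sound": None},
        "listening": {"light": "blue_steady",      "sound": "chime_up"},
        "moving":    {"light": "amber_sweep",      "sound": "low_hum"},
        "error":     {"light": "red_blink",        "sound": "chime_down"},
    }

    def render_cue(state, light_driver, speaker):
        """Map a robot state to a nonverbal cue; callers pass the hardware."""
        cue = ROBOT_STATE_CUES.get(state, ROBOT_STATE_CUES["idle"])
        light_driver.set_pattern(cue["light"])   # hypothetical driver API
        if cue["sound"]:
            speaker.play(cue["sound"])           # hypothetical speaker API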

    The development of a human-robot interface for industrial collaborative system

    Industrial robots have been identified as one of the most effective solutions for optimising output and quality within many industries. However, a number of manufacturing applications involve complex tasks and inconsistent components, which prohibits fully automated solutions for the foreseeable future. Breakthroughs in robotic technologies and changes in safety legislation have supported the creation of robots that coexist with and assist humans in industrial applications. It has been broadly recognised that human-robot collaborative systems would be a realistic solution as an advanced production system with a wide range of applications and high economic impact. This type of system can utilise the best of both worlds: the robot performs simple tasks that require high repeatability, while the human performs tasks that require judgement and the dexterity of the human hands. Robots in such a system operate as "intelligent assistants". In a collaborative working environment, robot and human share the same working area and interact with each other. This level of interface requires effective ways of communicating and collaborating to avoid unwanted conflicts. This project aims to create a user interface for an industrial collaborative robot system through the integration of current robotic technologies. The robotic system is designed for seamless collaboration with a human in close proximity. The system is capable of communicating with the human via the exchange of gestures, as well as visual signals which operators can observe and comprehend at a glance. The main objective of this PhD is to develop a Human-Robot Interface (HRI) for communication with an industrial collaborative robot during collaboration in close proximity. The system is developed in conjunction with a small-scale collaborative robot system integrated from off-the-shelf components. The system should be capable of receiving input from the human user via an intuitive method as well as indicating its status to the user effectively. The HRI is developed using a combination of hardware integration and software development. The software and control framework were developed in a way that is applicable to other industrial robots in the future. The developed gesture command system is demonstrated on a heavy-duty industrial robot.
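    A sketch of the gesture-to-command dispatch such an interface implies (the gesture names, commands, and the robot/status_light objects are assumptions for illustration, not the thesis's actual interface):

    GESTURE_COMMANDS = {
        "palm_open":  "pause_motion",
        "swipe_left": "previous_step",
        "thumbs_up":  "confirm_and_resume",
    }

    def dispatch(gesture, robot, status_light):
        """Turn a recognised gesture into a robot command, with feedback."""
        command = GESTURE_COMMANDS.get(gesture)
        if command is None:
            status_light.show("unknown_gesture")  # glanceable visual signal
            return
        robot.execute(command)                    # hypothetical robot API
        status_light.show(command)                # mirror accepted command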

    Gesture-Based Robot Path Shaping

    For many individuals, aging is frequently associated with diminished mobility and dexterity. Such decreases may be accompanied by a loss of independence, increased burden to caregivers, or institutionalization. It is foreseen that the ability to retain independence and quality of life as one ages will increasingly depend on environmental sensing and robotics which facilitate aging in place. The development of ubiquitous sensing strategies in the home underpins the promise of adaptive services, assistive robotics, and architectural design which would support a person's ability to live independently as they age. Instrumentation (sensors and processing) capable of recognizing the actions and behavioral patterns of an individual is key to effective component design in these areas. Recognition of user activity and the inference of user intention may be used to inform the action plans of support systems and service robotics within the environment. Automated activity recognition involves detection of events in a sensor data stream, conversion to a compact format, and classification as one of a known set of actions. Once classified, an action may be used to elicit a specific response from those systems designed to support the user. It is this response that is the ultimate use of recognized activity; hence, the activity may be considered a command to the system. Extending this concept, a set of distinct activities in the form of hand and arm gestures may form the basis of a command interface for human-robot interaction. A gesture-based interface of this type promises an intuitive method for accessing computing and other assistive resources, and so promotes rapid adoption by elderly, impaired, or otherwise unskilled users. This thesis includes a thorough survey of relevant work in the area of machine learning for activity and gesture recognition, comparing previous approaches for their relative benefits and limitations. A novel approach is presented which utilizes user-generated feedback to rate the desirability of a robotic response to gesture. Poorly rated responses are altered so as to elicit improved ratings on subsequent observations; in this way, responses are honed toward increasing effectiveness. A clustering method based on the Growing Neural Gas (GNG) algorithm is used to create a topological map of reference nodes representing input gesture types. It is shown that learning of desired responses to gesture may be accelerated by exploiting well-rewarded actions associated with reference nodes in a local neighborhood of the GNG topology. Significant variation in the user's performance of a gesture is interpreted as a new gesture for which the system must learn a desired response. A method for allowing the system to learn new gestures while retaining past training is also proposed and shown to be effective.
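    A simplified sketch of the feedback-driven learning loop described above (gestures map to their nearest reference node, each node keeps per-response reward estimates, and a node pools ratings from its GNG neighbours to bootstrap its choice; the distance metric, learning rate, and response set are illustrative assumptions):

    import numpy as np

    class GestureResponseLearner:
        def __init__(self, nodes, neighbors, n_responses, lr=0.2):
            self.nodes = nodes          # (k, d) reference gesture vectors
            self.neighbors = neighbors  # adjacency lists from the GNG graph
            self.rewards = np.zeros((len(nodes), n_responses))
            self.lr = lr

        def respond(self, gesture):
            """Pick the best-rated response, pooling the local neighbourhood."""
            node = int(np.argmin(np.linalg.norm(self.nodes - gesture, axis=1)))
            pool = [node] + list(self.neighbors[node])
            return node, int(np.argmax(self.rewards[pool].mean(axis=0)))

        def rate(self, node, response, user_rating):
            """Nudge the reward estimate toward the user's rating in [-1, 1]."""
            r = self.rewards[node, response]
            self.rewards[node, response] = r + self.lr * (user_rating - r)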