126 research outputs found

    An Original Approach for a Better Remote Control of an Assistive Robot

    Much research has been done in the field of assistive robotics in recent years. The first application field was assistance for disabled people. Different works have been carried out on robotic arms in three kinds of situations. In the first case, a static arm, the arm was principally dedicated to office tasks such as using the telephone or fax... Several autonomous modes exist, which require knowing the precise position of objects. In the second configuration, the arm is mounted on a wheelchair. It follows the person, who can employ it in more use cases. But if the person must stay in bed, the arm is no longer useful. In a third configuration, the arm is mounted on a separate platform. This configuration allows the largest number of use cases but also poses more difficulties for piloting the robot. The second application field of assistive robotics deals with assistance at home for people losing their autonomy, for example a person with cognitive impairment. In this case, the assistance covers two main points: security and cognitive stimulation. To ensure the safety of the person at home, different kinds of sensors can be used to detect alarming situations (falls, low cardiac pulse rate...). To assist a distant operator in alarm detection, the idea is to give him the possibility of obtaining complementary information from a mobile robot about the person's activity at home and of being in contact with the person. Cognitive stimulation is one of the therapeutic means used to maintain, for as long as possible, the maximum of the person's cognitive capacities. In this case, the robot can be used to bring cognitive stimulation exercises to the person and encourage the person to perform them. For these tasks, it is very difficult to build a totally autonomous robot. In the case of assistance for disabled people, full autonomy is not even what users want, as they wish to act by themselves.
The idea is to develop a semi-autonomous robot that a remote operator can manually pilot with some driving assistance. This is a realistic and, in some respects, desired solution. To achieve it, several scientific problems have to be studied. The first one is human-machine cooperation: how can a remote human operator control a robot to perform a desired task? One of the key points is to allow the user to understand clearly the way the robot works. Our original approach is to analyse this understanding through the concept of appropriation introduced by Piaget in 1936. As the robot must have capacities of perception
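The shared-control idea described above, where the operator pilots manually but driving assistance modulates the command, can be sketched minimally as follows. This is an illustrative assumption of how such assistance might work, not the authors' actual method; the function name, the linear scaling rule, and the `safe_distance` parameter are all hypothetical.

```python
# Minimal sketch of shared control for a teleoperated assistive robot:
# the operator's speed command is scaled down as the nearest obstacle
# gets closer, so the operator keeps authority while the assistance
# enforces safety. The linear scaling rule is an illustrative choice.

def assisted_command(operator_speed: float, obstacle_distance: float,
                     safe_distance: float = 1.0) -> float:
    """Scale the operator's speed command linearly toward zero as the
    nearest obstacle gets closer than `safe_distance` (metres)."""
    scale = min(1.0, max(0.0, obstacle_distance / safe_distance))
    return operator_speed * scale
```

With this scheme the robot moves at full commanded speed in open space, slows proportionally inside the safety margin, and stops at contact distance, while the operator's intent is never overridden in direction.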

    A Novel Predictor Based Framework to Improve Mobility of High Speed Teleoperated Unmanned Ground Vehicles

    Teleoperated Unmanned Ground Vehicles (UGVs) have been widely used in applications where driver safety, mission efficiency or mission cost is a major concern. One major challenge with teleoperating a UGV is that communication delays can significantly affect the mobility performance of the vehicle and make teleoperated driving tasks very challenging, especially at high speeds. In this dissertation, a predictor-based framework with predictors in a new form and a blended architecture are developed to compensate for the effects of delays through signal prediction, thereby improving vehicle mobility performance. The novelty of the framework is that minimal information about the governing equations of the system is required to compensate for delays and, thus, the prediction is robust to modeling errors. This dissertation first investigates a model-free solution and develops a predictor that does not require information about the vehicle dynamics or the human operator's motion for prediction. Compared to existing model-free methods, neither assumptions about the particular way the vehicle moves, nor knowledge of the noise characteristics that drive existing predictive filters, are needed. Its stability and performance are studied and a predictor design procedure is presented. Secondly, a blended architecture is developed to blend the outputs of the model-free predictor with those of a steering feedforward loop that relies on minimal information about the vehicle's lateral response. Better prediction accuracy is observed in open-loop virtual testing with the blended architecture than with either the model-free predictors or the model-based feedforward loop alone. The mobility performance of teleoperated vehicles with delays and the predictor-based framework is evaluated in this dissertation with human-in-the-loop experiments using both simulated and physical vehicles in teleoperation mode.
The predictor-based framework is shown to provide a statistically significant improvement in vehicle mobility and drivability in the experiments performed.
Ph.D. dissertation, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/146026/1/zhengys_1.pd
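The blended architecture described above can be illustrated with a minimal sketch: a model-free extrapolator that needs no vehicle model, a feedforward term that uses minimal model information, and a weighted blend of the two. The function names, the finite-difference extrapolation, the first-order steering model, and the blend weight are all illustrative assumptions, not the dissertation's actual predictors.

```python
# Hedged sketch of blended delay compensation for teleoperation:
# predict the delayed signal forward in time, once without a model
# and once with a minimal model, then blend the two predictions.

def model_free_predict(history: list, dt: float, delay: float) -> float:
    """Extrapolate the delayed signal forward by `delay` seconds using
    a finite-difference velocity estimate (no vehicle model needed)."""
    if len(history) < 2:
        return history[-1]
    velocity = (history[-1] - history[-2]) / dt
    return history[-1] + velocity * delay

def model_based_predict(state: float, steering_cmd: float,
                        gain: float, delay: float) -> float:
    """First-order lateral-response model: the state responds to the
    steering command with a known gain (minimal model information)."""
    return state + gain * steering_cmd * delay

def blended_predict(history: list, state: float, steering_cmd: float,
                    dt: float, delay: float,
                    gain: float = 1.0, weight: float = 0.5) -> float:
    """Blend the two predictions; `weight` trades robustness to
    modeling error (model-free) against accuracy (model-based)."""
    mf = model_free_predict(history, dt, delay)
    mb = model_based_predict(state, steering_cmd, gain, delay)
    return weight * mf + (1.0 - weight) * mb
```

The blend weight captures the trade-off the abstract describes: leaning on the model-free term keeps the prediction robust to modeling errors, while the feedforward term improves accuracy when the minimal lateral-response model is reasonable.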

    Visual Servoing

    The goal of this book is to introduce current visual servoing applications developed by leading researchers worldwide and to offer knowledge that can also be applied widely to other fields. The book collects the main current studies on machine vision and makes a strong case for the applications in which machine vision is employed. The contents demonstrate how machine vision theory is realized in different fields. For beginners, it is easy to understand the developments in visual servoing; engineers, professors and researchers can study the chapters and then apply the methods in other applications.

    Gestures in Machine Interaction

    Unencumbered gesture interaction (VGI) describes the use of unrestricted gestures in machine interaction. The development of such technology will enable users to interact with machines and virtual environments by performing actions like grasping, pinching or waving without the need for peripherals. Advances in image processing and pattern recognition make such interaction viable and, in some applications, more practical than the current modes of keyboard, mouse and touch-screen interaction. VGI is emerging as a popular topic in Human-Computer Interaction (HCI), computer-vision and gesture research, and is developing into a topic with the potential to significantly impact the future of computer interaction, robot control and gaming. This thesis investigates whether an ergonomic model of VGI can be developed and implemented on consumer devices by considering some of the barriers currently preventing such a model of VGI from being widely adopted. This research aims to address the development of freehand gesture interfaces and accompanying syntax. Without detailed consideration of the evolution of this field, the development of un-ergonomic, inefficient interfaces capable of placing undue strain on users becomes more likely. In the course of this thesis, some novel design and methodological assertions are made. The Gesture in Machine Interaction (GiMI) syntax model and the Gesture-Face Layer (GFL), developed in the course of this research, have been designed to facilitate ergonomic gesture interaction. The GiMI is an interface syntax model designed to enable cursor control, browser navigation commands and steering control for remote robots or vehicles. By applying state-of-the-art image processing that facilitates three-dimensional (3D) recognition of human action, this research investigates how interface syntax can incorporate the broadest range of human actions.
By advancing our understanding of ergonomic gesture syntax, this research aims to help future developers evaluate the efficiency of gesture interfaces, lexicons and syntax.