    Comparing Alternate Modes of Teleoperation for Constrained Tasks

    Teleoperation of heavy machinery in industry often requires operators to be in close proximity to the plant and to issue commands at the per-actuator level using joystick input devices. However, this is non-intuitive and makes achieving desired job properties a challenging task, requiring operators to complete extensive and costly training. Despite this training, operator fatigue is common, with implications for personal safety, project timeliness, cost, and quality. While full automation is not yet achievable due to the unpredictability and dynamic nature of the environment and task, shared control paradigms allow operators to issue high-level commands in an intuitive, task-informed control space while the robot optimizes for the desired job properties. In this paper, we compare a number of modes of teleoperation, exploring both the number of dimensions of the control input and the most intuitive control spaces. Our experimental evaluation quantified task difficulty using the well-known Fitts' law, together with a measure of how well the constraints affecting task performance were met. Our experiments show that higher performance is achieved when humans submit commands in low-dimensional task spaces rather than through joint-space manipulations.
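
    The abstract does not state which formulation of Fitts' law was used. As a point of reference only, the sketch below computes the widely used Shannon formulation of the index of difficulty and the resulting movement-time prediction; the coefficients a and b are illustrative placeholders, not values from the paper.

        import math

        def index_of_difficulty(distance, width):
            """Shannon formulation of Fitts' index of difficulty, in bits."""
            return math.log2(distance / width + 1.0)

        def predicted_movement_time(distance, width, a=0.1, b=0.2):
            """Fitts' law: MT = a + b * ID. The coefficients a and b are placeholders
            here; in practice they are fit per operator and interface."""
            return a + b * index_of_difficulty(distance, width)

        # Example: a 0.30 m reach to a 0.05 m wide target.
        print(index_of_difficulty(0.30, 0.05))      # ~2.81 bits
        print(predicted_movement_time(0.30, 0.05))  # ~0.66 s with the placeholder fit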

    Advancing automation and robotics technology for the Space Station Freedom and for the US economy

    The progress made by Levels 1, 2, and 3 of the Office of Space Station in developing and applying advanced automation and robotics technology is described. Emphasis is placed upon the Space Station Freedom Program responses to specific recommendations made in the Advanced Technology Advisory Committee (ATAC) progress report 10, the flight telerobotic servicer, and the Advanced Development Program. Assessments are presented for these and other areas as they apply to the advancement of automation and robotics technology for the Space Station Freedom.

    Skill-based Shared Control


    An optimization-based formalism for shared autonomy in dynamic environments

    Teleoperation is an integral component of various industrial processes, for example concrete spraying, assisted welding, plastering, inspection, and maintenance. Often these systems implement direct control that maps interface signals onto robot motions. Successful completion of tasks typically demands high levels of manual dexterity and imposes a high cognitive load. In addition, the operator is often located near dangerous machinery. Consequently, safety is of critical importance, and training is expensive and prolonged -- in some cases taking several months or even years. An autonomous robot replacement would be an ideal solution, since the human could be removed from danger and training costs significantly reduced. However, this is currently not possible due to the complexity and unpredictability of the environments and the levels of situational and contextual awareness required to successfully complete these tasks. In this thesis, the limitations of direct control are addressed by developing methods for shared autonomy. A shared autonomy approach combines human input with autonomy to generate optimal robot motions. The approach taken in this thesis is to formulate shared autonomy within an optimization framework that finds optimized states and controls by minimizing a cost function modeling task objectives, given a set of (changing) physical and operational constraints. Online shared autonomy requires the human to interact continuously with the system via an interface (akin to direct control). The key challenges addressed in this thesis are: 1) ensuring computational feasibility (the method should find solutions fast enough to sustain a sampling frequency bounded below by 40 Hz), 2) being reactive to changes in the environment and operator intention, 3) knowing how to appropriately blend operator input and autonomy, and 4) allowing the operator to supply input in an intuitive manner that is conducive to high task performance. Various operator interfaces are investigated with regard to the control space, called a mode of teleoperation. Extensive evaluations were carried out to determine which modes are most intuitive and lead to the highest performance in target acquisition tasks (e.g. spraying and welding). Our performance metrics quantified task difficulty based on Fitts' law, as well as a measure of how well constraints affecting the task performance were met. The experimental evaluations indicate that higher performance is achieved when humans submit commands in low-dimensional task spaces as opposed to joint-space manipulations. In addition, our multivariate analysis indicated that participants with regular exposure to computer games achieved higher performance. Shared autonomy aims to relieve human operators of the burden of precise motor control, tracking, and localization. An optimization-based representation for shared autonomy in dynamic environments was developed. Real-time tractability is ensured by modulating the human input with information about the changing environment within the same task space, instead of adding it to the optimization cost or constraints. The method was illustrated with two real-world applications: grasping objects in cluttered environments and spraying tasks that require sprayed linings of greater homogeneity.
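
    To make the optimization-based formulation above concrete, the following minimal sketch solves, at each control cycle, for joint velocities whose task-space motion tracks an operator command that has first been modulated by environment information, subject to velocity limits. The two-link arm, the obstacle-damping rule, and all numerical values are illustrative assumptions for the sketch, not the models or constraints used in the thesis.

        import numpy as np
        from scipy.optimize import minimize

        # Toy 2-link planar arm, used only to keep the sketch self-contained.
        L1, L2 = 0.5, 0.4

        def fk(q):
            """End-effector position of the planar arm."""
            return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                             L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

        def jacobian(q):
            s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
            c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
            return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                             [ L1 * c1 + L2 * c12,  L2 * c12]])

        def modulate(v_cmd, p_ee, p_obs, influence=0.15):
            """Modulate the operator's task-space command with environment information:
            cancel part of the component that points into a nearby obstacle."""
            d = p_ee - p_obs
            dist = np.linalg.norm(d)
            if dist >= influence:
                return v_cmd
            n = d / dist                            # unit vector away from the obstacle
            toward = max(0.0, -np.dot(v_cmd, n))    # commanded speed toward the obstacle
            return v_cmd + (1.0 - dist / influence) * toward * n

        def shared_autonomy_step(q, v_cmd, p_obs, dt=0.025):    # 40 Hz control cycle
            """One cycle: joint velocities that track the modulated command."""
            v_ref = modulate(v_cmd, fk(q), p_obs)
            J = jacobian(q)

            def cost(dq):
                track = J @ dq - v_ref                    # task-space tracking error
                return track @ track + 1e-3 * (dq @ dq)   # plus a small effort term

            res = minimize(cost, np.zeros(2),
                           bounds=[(-1.5, 1.5)] * 2)      # joint velocity limits (rad/s)
            return q + res.x * dt

        q = np.array([0.4, 0.8])
        q = shared_autonomy_step(q, v_cmd=np.array([0.1, 0.0]),
                                 p_obs=np.array([0.9, 0.3]))
        print(q)
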
    Maintaining motion patterns -- referred to as skills -- is often an integral part of teleoperation for various industrial processes (e.g. spraying, welding, plastering). We develop a novel model-based shared autonomy framework for incorporating the notion of skill assistance, helping operators sustain these motion patterns whilst adhering to environment constraints. In order to achieve computational feasibility, we introduce a novel parameterization for state and control that combines skill and underlying trajectory models, leveraging a special type of curve known as the clothoid. This new parameterization allows for efficient computation of skill-based short-term-horizon plans, enabling the use of a model predictive control loop. Our hardware realization validates the effectiveness of our method in recognizing a change of intended skill, and shows an improved quality of output motion, even under dynamically changing obstacles. In addition, extensions of the work to supervisory control are described. An exploratory study presents an approach that improves computational feasibility for complex tasks with minimal interactive effort on the part of the human. Adaptations are theorized which might allow such a method to be applicable and beneficial to high-degree-of-freedom systems. Finally, a system developed in our lab that implements sliding autonomy is described and shown to complete multi-objective tasks in complex environments with minimal interaction from the human.
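
    As background for the parameterization mentioned above: a clothoid (Euler spiral) is a curve whose curvature grows linearly with arc length, so a segment is fixed by its start pose, a single parameter, and its length, which is what makes short-horizon plans cheap to represent. The sketch below only evaluates points on such a curve via the Fresnel integrals; the coupling with skill and trajectory models inside the MPC loop is not reproduced here.

        import numpy as np
        from scipy.special import fresnel

        def clothoid_points(A, s):
            """Points of a clothoid with curvature k(s) = s / A**2, starting at the
            origin with zero heading, evaluated at arc lengths s via Fresnel integrals."""
            scale = A * np.sqrt(np.pi)
            S, C = fresnel(s / scale)      # scipy returns (S, C) in this order
            return scale * C, scale * S

        s = np.linspace(0.0, 2.0, 50)      # arc-length samples along the segment
        x, y = clothoid_points(A=1.0, s=s)
        print(x[-1], y[-1])                # endpoint of the 2 m segment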

    Modulating Human Input for Shared Autonomy in Dynamic Environments


    NASA space station automation: AI-based technology review

    Research and Development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through reduced need for EVA, increase crew productivity through the reduction of routine operations, increase space station autonomy, and augment space station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    Cutaneous Force Feedback as a Sensory Subtraction Technique in Haptics

    A novel sensory substitution technique is presented. Kinesthetic and cutaneous force feedback are substituted by cutaneous feedback (CF) only, provided by two wearable devices able to apply forces to the index finger and the thumb while holding a handle during a teleoperation task. The force pattern fed back to the user by the cutaneous devices is similar, in terms of intensity and area of application, to the cutaneous force pattern applied to the finger pad while interacting with a haptic device providing both cutaneous and kinesthetic force feedback. The pattern generated using the cutaneous devices can be thought of as a subtraction of the kinesthetic part from the complete haptic feedback (HF). For this reason, we refer to this approach as sensory subtraction rather than sensory substitution. A needle insertion scenario is considered to validate the approach. The haptic device is connected to a virtual environment simulating a needle insertion task. Experiments show that the perception of inserting a needle using cutaneous-only force feedback is nearly indistinguishable from the one felt by the user while using both cutaneous and kinesthetic feedback. Like most sensory substitution approaches, the proposed sensory subtraction technique also has the advantage of not suffering from the stability issues of teleoperation systems due, for instance, to communication delays. Moreover, experiments show that the sensory subtraction technique outperforms sensory substitution with more conventional visual feedback (VF).
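
    A minimal sketch of the subtraction idea described above, under illustrative assumptions: the contact force from the simulated needle insertion is modeled as a simple spring-damper, and instead of being rendered kinesthetically by a grounded device it is split between the two fingertip wearable devices. The contact model, the split, and all values are placeholders, not the paper's controller.

        def needle_contact_force(depth, velocity=0.0, stiffness=600.0, damping=5.0):
            """Toy penetration model for the virtual needle-insertion task:
            a spring-damper along the insertion axis (illustrative values only)."""
            if depth <= 0.0:
                return 0.0
            return stiffness * depth + damping * velocity

        def sensory_subtraction(f_total, grip_ratio=0.5):
            """Render only the cutaneous share of the interaction force: the grounded
            (kinesthetic) channel is dropped and the force pattern is split between
            the index-finger and thumb wearable devices."""
            f_kinesthetic = 0.0                      # deliberately not rendered
            f_index = grip_ratio * f_total
            f_thumb = (1.0 - grip_ratio) * f_total
            return f_kinesthetic, f_index, f_thumb

        f = needle_contact_force(depth=0.004, velocity=0.01)   # 4 mm penetration
        print(sensory_subtraction(f))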