
    Learn and Transfer Knowledge of Preferred Assistance Strategies in Semi-autonomous Telemanipulation

    Enabling robots to provide effective assistance while still accommodating the operator's commands during telemanipulation of an object is very challenging: the robot's assistive actions are not always intuitive to human operators, and human behaviors and preferences are sometimes ambiguous for the robot to interpret. Although various assistance approaches have been developed to improve control quality from different optimization perspectives, determining an approach that satisfies both the fine motion constraints of the telemanipulation task and the preferences of the operator remains an open problem. To address these problems, we developed a novel preference-aware assistance knowledge learning approach. An assistance preference model learns what assistance a human prefers, and a stagewise model updating method ensures learning stability while dealing with the ambiguity of human preference data. Such preference-aware assistance knowledge enables a teleoperated robot hand to provide more active yet preferred assistance toward manipulation success. We also developed knowledge transfer methods to transfer the preference knowledge across different robot hand structures, avoiding extensive robot-specific training. Experiments were conducted in which a 3-finger hand and a 2-finger hand were each telemanipulated to use, move, and hand over a cup. Results demonstrated that the methods enabled the robots to effectively learn the preference knowledge and allowed knowledge transfer between robots with less training effort.
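
    The abstract does not specify the form of the assistance preference model or the stagewise update rule. One common way to learn preferences from ambiguous pairwise human judgments is a Bradley-Terry-style logistic model over assistance features; the sketch below is purely illustrative of that general idea, not the authors' method, and every name, feature, and threshold in it is a hypothetical assumption.

```python
import numpy as np

def preference_update(w, feat_a, feat_b, pref_a, lr=0.05):
    # Bradley-Terry-style logistic update; pref_a = 1.0 means the human
    # preferred assistance A over assistance B in this comparison.
    p_a = 1.0 / (1.0 + np.exp(-w @ (feat_a - feat_b)))   # model's P(A preferred)
    return w + lr * (pref_a - p_a) * (feat_a - feat_b)   # gradient step

def holdout_agreement(w, holdout):
    # Fraction of held-out comparisons ranked the same way as the human.
    return np.mean([(w @ (fa - fb) > 0) == (pa > 0.5) for fa, fb, pa in holdout])

def stagewise_fit(w, comparisons, holdout, stage_size=20):
    # Hypothetical stagewise schedule: apply one stage of updates, then keep
    # them only if held-out agreement does not drop -- a simple guard against
    # ambiguous or inconsistent preference labels.
    for i in range(0, len(comparisons), stage_size):
        w_new = w.copy()
        for feat_a, feat_b, pref_a in comparisons[i:i + stage_size]:
            w_new = preference_update(w_new, feat_a, feat_b, pref_a)
        if holdout_agreement(w_new, holdout) >= holdout_agreement(w, holdout):
            w = w_new  # accept the stage; otherwise discard it
    return w
```

    A stage-acceptance test of this kind is one plausible reading of "stagewise model updating ensures learning stability"; the paper itself may define both the features and the acceptance criterion differently.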

    An investigation into the cognitive effects of delayed visual feedback

    Abstract unavailable; please refer to the PDF.

    Mitigation Of Motion Sickness Symptoms In 360 Degree Indirect Vision Systems

    The present research attempted to use display design as a means to mitigate the occurrence and severity of motion sickness symptoms and to increase performance through reduced "general effects" in an uncoupled motion environment. Specifically, several visual display manipulations of a 360° indirect vision system were implemented during a target detection task while participants were concurrently immersed in a motion simulator that mimicked off-road terrain completely separate from the target detection route. A multiple regression analysis determined that the Dual Banners display incorporating an artificial horizon (i.e., AH Dual Banners) and perceived attentional control significantly contributed to the outcome of total severity of motion sickness, as measured by the Simulator Sickness Questionnaire (SSQ); altogether, 33.6% (adjusted) of the variability in Total Severity was predicted by the variables in the model. Objective measures were assessed prior to, during, and after uncoupled motion. These included performance while immersed in the environment (i.e., target detection and situation awareness), as well as postural stability and cognitive and visual assessment tests (i.e., Grammatical Reasoning and Manikin) both before and after immersion. Response time on Grammatical Reasoning actually decreased after uncoupled motion; however, this was the only significant difference among the performance measures. Assessment of subjective workload (as measured by NASA-TLX) determined that participants in Dual Banners display conditions reported significantly lower perceived physical demand than those with Completely Separated display designs. Further, perceived temporal demand was lower for participants exposed to conditions incorporating an artificial horizon. Subjective sickness (SSQ Total Severity, Nausea, Oculomotor, and Disorientation) was evaluated using non-parametric tests, which confirmed that the AH Dual Banners display had significantly lower Total Severity scores than the Completely Separated display with no artificial horizon (i.e., NoAH Completely Separated). Oculomotor scores were also significantly different for these two conditions, with lower scores associated with AH Dual Banners. The NoAH Completely Separated condition also had marginally higher Oculomotor scores than the Completely Separated display incorporating the artificial horizon (AH Completely Separated). There were no significant differences in sickness symptoms or severity (measured by self-assessment, postural stability, and cognitive and visual tests) between display designs 30 and 60 minutes post-exposure. Further, the 30- and 60-minute post measures were not significantly different from baseline scores, suggesting that aftereffects were not present up to 60 minutes post-exposure. It was concluded that incorporating an artificial horizon onto the Dual Banners display is beneficial in mitigating motion sickness symptoms in manned ground vehicles using 360° indirect vision systems. Screening for perceived attentional control will also be advantageous in situations where selection is possible. However, caution must be taken in generalizing these results to missions with terrain or vehicle speeds different from those used in this study, as well as to longer immersion times.
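
    For readers unfamiliar with the SSQ subscales cited above, the standard published scoring (Kennedy et al., 1993) sums 0-3 symptom ratings within three clusters and scales each sum by a fixed weight. The minimal sketch below applies those standard weights; it omits the 16-item checklist and the item-to-cluster mapping, so the inputs here are assumed to be precomputed raw cluster sums.

```python
def ssq_scores(nausea_raw, oculomotor_raw, disorientation_raw):
    # Standard SSQ weighting: each symptom-cluster raw score is the
    # unweighted sum of its 0-3 item ratings; which of the 16 items feed
    # each cluster is omitted here for brevity.
    return {
        "Nausea": nausea_raw * 9.54,
        "Oculomotor": oculomotor_raw * 7.58,
        "Disorientation": disorientation_raw * 13.92,
        "Total Severity": (nausea_raw + oculomotor_raw + disorientation_raw) * 3.74,
    }

# Example: a participant with modest cluster sums.
print(ssq_scores(3, 4, 2))  # {'Nausea': 28.62, 'Oculomotor': 30.32, ...}
```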

    Trust in Robots

    Robots are becoming increasingly prevalent in our daily lives, within our living and working spaces. We hope that robots will take up tedious, mundane, or dirty chores and make our lives more comfortable, easy, and enjoyable by providing companionship and care. However, robots may pose a threat to human privacy, safety, and autonomy; it is therefore necessary to maintain constant oversight of the developing technology to ensure the benevolent intent and safety of autonomous systems. Building trust in (autonomous) robotic systems is thus necessary. The title of this book highlights this challenge: "Trust in robots—Trusting robots". Herein, various notions and research areas associated with robots are unified. The theme "Trust in robots" addresses the development of technology that is trustworthy for users; "Trusting robots" focuses on building a trusting relationship with robots, furthering previous research. These themes and topics are at the core of the PhD program "Trust Robots" at TU Wien, Austria.

    Proceedings of the NASA Conference on Space Telerobotics, volume 1

    The theme of the Conference was man-machine collaboration in space. Topics addressed include: redundant manipulators; man-machine systems; telerobot architecture; remote sensing and planning; navigation; neural networks; fundamental AI research; and reasoning under uncertainty.

    Novel Methods For Human-robot Shared Control In Collaborative Robotics

    Blended shared control is a method for continuously combining control inputs from traditional automatic control systems and human operators to control machines. An automatic control system generates control input based on feedback of measured signals, whereas a human operator generates control input based on experience, task knowledge, and awareness and sensing of the environment in which the machine is operating. Such active blending of inputs from the automatic control agent and the human agent to jointly control machines is expected to provide benefits by exploiting the unique strengths of both agents: the better task execution performance of automatic control systems based on sensed signals, and the situation awareness of a human in the loop to handle safety concerns and environmental uncertainties. Shared control in this sense provides an alternative to full autonomy. Existing and future applications of such an approach include automobiles, underwater vehicles, ships, airplanes, construction machines, space manipulators, surgical robots, and power wheelchairs, where machines are still mostly operated by humans for safety reasons. Developing machines for full autonomy requires not only advances in the machines themselves but also the ability to sense the environment by placing sensors in it; the latter can be very difficult in many such applications due to perceived uncertainties and changing conditions. The notion of blended shared control, as a more practical alternative to full autonomy, keeps the human operator in the loop to initiate machine actions while real-time intelligent assistance is provided by automatic control. The problem of how to blend the two inputs, and the development of associated scientific tools to formalize and achieve blended shared control, is the focus of this work. Specifically, the following essential aspects are investigated. Task learning: modeling a human-operated robotic task from demonstration as a set of subgoals, so that execution patterns are captured in a simple manner and provide a reference for human intent prediction and automatic control generation. Intent prediction: predicting the human operator's intent within the subgoal-model framework, encoding the probability that the operator is seeking a particular subgoal. Input blending: generating the automatic control input and dynamically combining it with the human operator's input based on the prediction probability, while also accounting for situations where the operator takes unexpected actions to avoid danger, in which case full control authority yields to the operator (see the sketch after this abstract). Subgoal adjustment: dynamically adjusting the learned nominal task model to adapt to task changes, such as a change of target object, which would otherwise cause the nominal model learned from demonstration to lose its effectiveness. This dissertation formalizes these notions and develops novel tools and algorithms for enabling blended shared control. To evaluate the developed scientific tools and algorithms, a scaled hydraulic excavator performing a typical trenching and truck-loading task is employed as a specific example, and experimental results are provided to corroborate the tools and methods. To extend the developed methods and further explore shared control in different applications, this dissertation also studies the collaborative operation of robot manipulators. Specifically, various operational interfaces are systematically designed; a hybrid force-motion controller is integrated with shared control in a mixed world-robot frame to facilitate human-robot collaboration; and a method that uses vision-based feedback to predict the human operator's intent and provide shared-control assistance is proposed. These methods allow human operators to remotely control robotic manipulators effectively while receiving assistance from intelligent shared control in different applications. Several robotic manipulation experiments were conducted to corroborate the extended shared control methods using different industrial robots.
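
    The abstract describes input blending but does not give the blending law itself. A common formulation, shown below purely as an illustrative sketch and not as the dissertation's method, scales the automatic input by the confidence of the intent prediction and yields full authority to the human when their input strongly opposes the automatic one; all names and thresholds here are hypothetical.

```python
import numpy as np

def blend_inputs(u_human, u_auto, subgoal_probs, disagree_thresh=0.8):
    # Confidence of the intent prediction: probability of the most
    # likely subgoal under the learned subgoal model.
    confidence = float(np.max(subgoal_probs))

    # If the human's input points away from the automatic input
    # (e.g., an evasive maneuver), yield full authority to the human.
    cos_sim = u_human @ u_auto / (
        np.linalg.norm(u_human) * np.linalg.norm(u_auto) + 1e-9
    )
    if cos_sim < -disagree_thresh:
        return u_human

    # Otherwise blend: the more confident the intent prediction,
    # the more weight the automatic assistance receives.
    return confidence * u_auto + (1.0 - confidence) * u_human

# Example: two subgoals, prediction strongly favoring the first, so the
# output lies close to the automatic input.
u = blend_inputs(np.array([1.0, 0.0]), np.array([0.8, 0.2]),
                 subgoal_probs=np.array([0.9, 0.1]))
```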

    Semi-Autonomous Control of an Exoskeleton using Computer Vision
