
    Neural control for constrained human-robot interaction with human motion intention estimation and impedance learning

    In this paper, an impedance control strategy is proposed for a rigid robot collaborating with a human, combining impedance learning and human motion intention estimation. The least-squares method is used for human impedance identification, and the robot adjusts its impedance parameters according to the identified human impedance model to guarantee compliant collaboration. Neural networks (NNs) are employed for human motion intention estimation, so that the robot follows the human actively and the human partner expends less control effort. In addition, full-state constraints are considered for operational safety during human-robot interaction. Neural control is included in the control strategy to handle dynamic uncertainties and improve system robustness. Simulation results demonstrate the effectiveness of the proposed control design.
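
    As a concrete illustration of the least-squares impedance identification step described above, the following sketch fits a mass-damper-spring model to recorded interaction data. It is a minimal example under assumed names and synthetic data (identify_impedance and all numbers are hypothetical), not the paper's implementation.

```python
# Minimal sketch: least-squares identification of a 1-DOF human impedance model
#   f = m*x_dd + d*x_d + k*x
# from sampled position, velocity, acceleration, and interaction force.
import numpy as np

def identify_impedance(x, x_d, x_dd, f):
    """Estimate [m, d, k] by ordinary least squares."""
    A = np.column_stack([x_dd, x_d, x])      # regressor matrix, one row per sample
    theta, *_ = np.linalg.lstsq(A, f, rcond=None)
    return theta                              # [m_hat, d_hat, k_hat]

# Synthetic demo data (assumed values, for illustration only)
t = np.linspace(0.0, 2.0, 400)
x = 0.05 * np.sin(2 * np.pi * t)
x_d = np.gradient(x, t)
x_dd = np.gradient(x_d, t)
f = 1.5 * x_dd + 8.0 * x_d + 120.0 * x       # "true" impedance: m=1.5, d=8, k=120

print(identify_impedance(x, x_d, x_dd, f))   # ≈ [1.5, 8.0, 120.0]
```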

    Intelligent Systems Approach for Automated Identification of Individual Control Behavior of a Human Operator

    Results have been obtained using conventional techniques to model the generic human operator's control behavior; however, little research has been done to identify an individual based on control behavior. The hypothesis investigated is that different operators exhibit different control behavior when performing a given control task. Two enhancements to existing human operator models, which allow personalization of the modeled control behavior, are presented. One enhancement accounts for the testing control signals, which are introduced by an operator for more accurate control of the system and/or to adjust the control strategy; it uses an artificial neural network that can be fine-tuned to model the testing control. The other enhancement takes the form of an equiripple filter that conditions the control system power spectrum. A novel automated parameter identification technique was developed to facilitate identification of the parameters of the selected models; it utilizes a genetic-algorithm-based optimization engine called the Bit-Climbing Algorithm. The enhancements were validated using experimental data obtained from three different sources: Manual Control Laboratory software experiments, an unmanned aerial vehicle simulation, and NASA Langley Research Center Visual Motion Simulator studies. This manuscript also addresses applying human operator models to evaluate the effectiveness of motion feedback when simulating actual pilot control behavior in a flight simulator.
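
    The Bit-Climbing Algorithm mentioned above is, in essence, a hill climb over binary-encoded model parameters. The sketch below illustrates the idea for a single scalar parameter fitted to recorded data; the encoding range, bit width, and the toy gain model are assumptions, not the authors' identification engine.

```python
# Minimal sketch of bit-climbing parameter identification: encode a parameter as
# a bit string, flip one bit at a time, and keep any flip that improves the fit.
import numpy as np

def decode(bits, lo, hi):
    """Map a bit string to a real parameter in [lo, hi]."""
    value = int("".join(map(str, bits)), 2)
    return lo + (hi - lo) * value / (2 ** len(bits) - 1)

def bit_climb(cost, n_bits=16, lo=0.0, hi=10.0, rng=None):
    """Hill-climb over a binary-encoded scalar parameter (illustrative only)."""
    rng = rng or np.random.default_rng(0)
    bits = rng.integers(0, 2, n_bits).tolist()
    current = cost(decode(bits, lo, hi))
    improved = True
    while improved:
        improved = False
        for i in range(n_bits):
            bits[i] ^= 1                       # flip one bit
            trial = cost(decode(bits, lo, hi))
            if trial < current:                # keep the flip if the fit improves
                current, improved = trial, True
            else:
                bits[i] ^= 1                   # otherwise undo it
    return decode(bits, lo, hi), current

# Example: recover the gain of a hypothetical operator model y = K * u
u = np.linspace(0, 1, 50)
y_measured = 3.2 * u
param, err = bit_climb(lambda K: np.sum((K * u - y_measured) ** 2))
print(param, err)   # typically converges near K = 3.2 (single-bit flips can stall at Hamming cliffs)
```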

    Predictive Control Model to Simulate Humanoid Gait

    This article presents a new approach to modeling humanoid gait with 5- and 7-link models and studies the influence of the feet on the overall gait dynamics. Limb trajectories are planned systematically from the equations of motion and their interpretation in terms of human joint and muscle movements. Human motion is controlled by the central nervous system (CNS), which is modeled here with model predictive control (MPC). In the proposed representation, the MPC controller computes the required moments at the joints, and these moments are applied to the muscles; the MPC controller thus plays the role of the spinal cord in the humanoid CNS. The simulation outcomes are compared with several examples of real humanoid gait obtained from motion capture systems. This comparison indicates that the model can also be used for individual identification and for recognizing gait eccentricities.
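
    A minimal sketch of the moment-computation idea: an unconstrained linear MPC for one joint modeled as a double integrator, which computes a sequence of joint moments tracking a reference angle. The horizon, weights, and reference are assumed values; the article's 5- and 7-link models are not reproduced here.

```python
# Sketch of receding-horizon moment computation for a single joint (double integrator).
import numpy as np

dt, N = 0.01, 20                                   # sample time, horizon length
A = np.array([[1.0, dt], [0.0, 1.0]])              # joint angle / velocity dynamics
B = np.array([[0.0], [dt]])                        # torque (moment) input
C = np.array([[1.0, 0.0]])                         # measured output: joint angle

def mpc_moments(x0, theta_ref, q=100.0, r=0.01):
    """Minimize sum q*(theta_k - ref_k)^2 + r*u_k^2 over the horizon."""
    Phi = np.zeros((N, 1))                         # free response of the angle
    Gamma = np.zeros((N, N))                       # forced response from each moment
    Ak = np.eye(2)
    for k in range(N):
        Ak = A @ Ak                                # Ak = A^(k+1)
        Phi[k] = C @ Ak @ x0
        for j in range(k + 1):
            Gamma[k, j] = (C @ np.linalg.matrix_power(A, k - j) @ B)[0, 0]
    # Weighted least squares for the moment sequence u_0 .. u_{N-1}
    W = np.sqrt(q) * np.eye(N)
    Als = np.vstack([W @ Gamma, np.sqrt(r) * np.eye(N)])
    bls = np.concatenate([W @ (theta_ref - Phi.ravel()), np.zeros(N)])
    u, *_ = np.linalg.lstsq(Als, bls, rcond=None)
    return u                                       # apply u[0], then re-plan (receding horizon)

x0 = np.array([0.0, 0.0])
ref = 0.3 * np.ones(N)                             # step to 0.3 rad
print(mpc_moments(x0, ref)[:3])
```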

    Kinematic synthesis for smart hand prosthetics

    The dream of a bionic replacement appendage is becoming reality through the use of mechatronic prostheses that utilize the body’s myoelectric signals. This paper presents a process to accurately capture the motion of the human hand joints; the obtained information is to be used in conjunction with myoelectric signal identification for motion control. In this work, the human hand is modeled as a set of links connected by joints, which are approximated as standard revolute joints. Using the methods of robotics, the motion of each finger is described as that of a serial robot and expressed using Clifford algebra exponentials. This representation allows us to use the model to perform kinematic synthesis, that is, to adapt the model to the dimensions of real hands and to obtain the angles at each joint, using visual data from real motion captured with several cameras. The goal is to obtain an adaptable motion tracking system that can follow as many different motions as possible with sufficient accuracy, in order to relate the individual motions to myoelectric signals in future work.
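
    The finger model described above is a product of joint exponentials. The sketch below uses the equivalent homogeneous-matrix (se(3)) exponential in place of the paper's Clifford algebra formulation, with placeholder joint axes and link lengths rather than measured hand dimensions.

```python
# Product-of-exponentials forward kinematics for one finger modeled as a serial
# chain of revolute joints (illustrative geometry only).
import numpy as np
from scipy.linalg import expm

def twist_hat(omega, point):
    """4x4 se(3) matrix of a revolute joint with axis `omega` through `point`."""
    w = np.asarray(omega, float)
    q = np.asarray(point, float)
    W = np.array([[0, -w[2], w[1]],
                  [w[2], 0, -w[0]],
                  [-w[1], w[0], 0]])
    xi = np.zeros((4, 4))
    xi[:3, :3] = W
    xi[:3, 3] = -W @ q            # v = -omega x q for a pure rotation axis
    return xi

def finger_fk(thetas, joints, fingertip_home):
    """Fingertip pose: product of joint exponentials times the home configuration."""
    g = np.eye(4)
    for theta, (omega, point) in zip(thetas, joints):
        g = g @ expm(twist_hat(omega, point) * theta)
    return g @ fingertip_home

# Toy index finger: MCP, PIP, DIP flexion about parallel axes (assumed geometry, metres)
joints = [((0, 1, 0), (0.00, 0, 0)),
          ((0, 1, 0), (0.04, 0, 0)),
          ((0, 1, 0), (0.065, 0, 0))]
home = np.eye(4)
home[0, 3] = 0.085                 # straight finger, tip 8.5 cm from the MCP joint
print(finger_fk(np.radians([30, 45, 20]), joints, home)[:3, 3])
```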

    Twelfth Annual Conference on Manual Control

    Main topics discussed cover multi-task decision making, attention allocation and workload measurement, displays and controls, nonvisual displays, tracking and other psychomotor tasks, automobile driving, handling qualities and pilot ratings, remote manipulation, system identification, control models, and motion and visual cues. Sixty-five papers are included, presenting results of analytical studies to develop and evaluate human operator models for a range of control tasks, vehicle dynamics, and display situations; results of tests of physiological control systems and their applications to medical problems; and results of simulator and flight tests to determine display, control, and dynamics effects on operator performance and workload for aircraft, automobile, and remote control systems.

    Evaluating Effectiveness of Modeling Motion System Feedback in the Enhanced Hess Structural Model of the Human Operator

    In order to use the Hess Structural Model to predict the need for certain cueing systems, George and Cardullo significantly expanded it by adding motion feedback to the model and incorporating models of the motion system dynamics, the motion cueing algorithm, and the vestibular system. This paper proposes a methodology to evaluate the effectiveness of these innovations by comparing the model's performance with and without the expanded motion feedback. The proposed methodology consists of two stages. The first stage involves fine-tuning the parameters of the original Hess structural model to match the actual control behavior recorded during experiments at the NASA Visual Motion Simulator (VMS) facility. The parameter tuning procedure utilizes a new automated parameter identification technique developed at the Man-Machine Systems Lab at SUNY Binghamton. In the second stage, the expanded motion feedback is added to the structural model, and the resulting performance is compared to that of the original model. As proposed by Hess, the evaluation metrics include comparison against the crossover model standards imposed on the crossover frequency and phase margin of the overall man-machine system. Preliminary results indicate the advantage of incorporating the models of the motion system and motion cueing into the model of the human operator. It is also demonstrated that the crossover frequency and phase margin of the expanded model are well within the limits imposed by the crossover model.
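
    The crossover-model metrics mentioned above can be computed directly from the open-loop man-machine frequency response. The sketch below does so for McRuer's crossover form with assumed gain and delay; the evaluated structural model would supply its own L(jω) instead.

```python
# Crossover frequency and phase margin of an open-loop operator-vehicle transfer function.
import numpy as np

def crossover_metrics(L, w=np.logspace(-1, 2, 20000)):
    """Return (crossover frequency [rad/s], phase margin [deg]) of the open loop L(jw)."""
    mag = np.abs(L(1j * w))
    idx = np.argmin(np.abs(mag - 1.0))           # gain crossover: |L| = 1 (0 dB)
    wc = w[idx]
    pm = 180.0 + np.degrees(np.angle(L(1j * wc)))
    return wc, pm

omega_c, tau = 4.0, 0.2                           # assumed crossover gain and operator delay
L = lambda s: omega_c * np.exp(-tau * s) / s      # McRuer crossover model
print(crossover_metrics(L))                       # ≈ (4.0 rad/s, 44 deg phase margin)
```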

    Contact aware robust semi-autonomous teleoperation of mobile manipulators

    In the context of human-robot collaboration, cooperation, and teaming, mobile manipulators are widely used in applications involving environments that are unpredictable or hazardous for human operators, such as space operations, waste management, and search and rescue in disaster scenarios. In these applications the manipulator's motion is controlled remotely by specialized operators. Teleoperation of manipulators is not a straightforward task and in many practical cases is a common source of failures. Common issues during remote control of manipulators are: control complexity that grows with the mechanical degrees of freedom; inadequate or incomplete feedback to the user (i.e. limited visualization or knowledge of the environment); and predefined motion directives that may be incompatible with constraints or obstacles imposed by the environment. In the latter case, part of the manipulator may get trapped or blocked by some obstacle in the environment, a failure that cannot be easily detected, isolated, or counteracted remotely. While control complexity can be reduced by the introduction of motion directives or by abstraction of the robot motion, the real-time constraint of the teleoperation task requires transferring the least possible amount of data over the system's network, thus limiting the number of physical sensors that can be used to model the environment. It is therefore fundamental to define alternative perceptive strategies that accurately characterize different interactions with the environment without relying on specific sensory technologies. In this work, we present a novel approach for safe teleoperation that takes advantage of model-based proprioceptive measurement of the robot dynamics to robustly identify unexpected collisions or contact events with the environment. Each identified collision is translated on the fly into a set of local motion constraints, allowing the system redundancies to be exploited to compute intelligent control laws for automatic reaction, without requiring human intervention and while minimizing the disturbance to task execution (or, equivalently, the operator's effort). More precisely, the described system consists of two building blocks: a perceptive block for detecting unexpected interactions with the environment, and a control block for intelligent and autonomous reaction after the stimulus. The perceptive block is responsible for contact event identification. In short, the approach is based on the claim that a sensorless collision detection method for robot manipulators can be extended to the field of mobile manipulators by embedding it within a statistical learning framework. The control block handles the intelligent and autonomous reaction after the contact or impact with the environment occurs; it consists of a motion abstraction controller with a prioritized set of constraints, where the highest priority corresponds to the robot reconfiguration after a collision is detected. Once all related dynamic effects have been compensated, the controller switches back to the basic control mode.
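
    One standard way to realize the sensorless collision detection described above is a generalized-momentum residual observer combined with a data-driven threshold; the sketch below follows that pattern under assumed dynamics terms and gains, and is not the authors' implementation.

```python
# Generalized-momentum residual observer for sensorless collision detection,
# plus a simple statistical threshold standing in for the learning framework.
import numpy as np

class MomentumObserver:
    """Residual r(t) converges toward the external (collision) joint torque."""

    def __init__(self, n_joints, gain=50.0, dt=0.001):
        self.K = gain
        self.dt = dt
        self.integral = np.zeros(n_joints)   # assumes the robot starts at rest (p(0) = 0)
        self.r = np.zeros(n_joints)

    def update(self, p, tau_cmd, coriolis_T_qd, gravity):
        """p = M(q) @ q_dot; tau_cmd = commanded joint torques; terms from the robot model."""
        self.integral += (tau_cmd + coriolis_T_qd - gravity + self.r) * self.dt
        self.r = self.K * (p - self.integral)
        return self.r

def learn_threshold(residuals_collision_free, k=4.0):
    """Per-joint threshold from collision-free runs (simple stand-in for the
    statistical learning layer described in the abstract)."""
    r = np.abs(np.asarray(residuals_collision_free))
    return r.mean(axis=0) + k * r.std(axis=0)

def collision_detected(r, threshold):
    return bool(np.any(np.abs(r) > threshold))
```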

    Modeling and control of a bedside cable-driven lower-limb rehabilitation robot for bedridden individuals

    Individuals with acute neurological or limb-related disorders may be temporarily bedridden and unable to visit physical therapy departments. The rehabilitation training of these patients in the ward can only be performed manually by therapists because the space in inpatient wards is limited. This paper proposes a bedside cable-driven lower-limb rehabilitation robot based on sling exercise therapy theory. The robot can actively drive the hip and knee motions at the bedside using flexible cables linking the knee and ankle joints. A human–cable coupling controller was designed to improve the stability of the human–machine coupling system. The controller dynamically adjusts the impedance coefficient of the cable driving force based on impedance identification of the human lower-limb joints, thus realizing stable motion of the human body. Experiments with five participants showed that the cable-driven rehabilitation robot effectively improved the maximum flexion of the hip and knee joints, reaching 85° and 90°, respectively. The mean annulus width of the knee joint trajectory was reduced by 63.84%, and the mean oscillation of the ankle joint was decreased by 56.47%, demonstrating that human joint impedance identification for cable-driven control can effectively stabilize the motion of the human–cable coupling system.
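
    A minimal sketch of the kind of variable-impedance cable force law suggested by the abstract: the cable stiffness and damping are re-scaled from the identified joint impedance before computing the commanded tension. The scaling rule and all numerical values are assumptions for illustration, not the authors' controller.

```python
# Cable tension command derived from tracking error and identified joint impedance.
import numpy as np

def cable_force(x_ref, x, dx_ref, dx, k_joint, d_joint, alpha=0.6, f_min=5.0, f_max=120.0):
    """Commanded cable tension [N] from position/velocity error and identified impedance."""
    k_c = alpha * k_joint                    # soften relative to the identified joint stiffness
    d_c = alpha * d_joint
    f = k_c * (x_ref - x) + d_c * (dx_ref - dx)
    return float(np.clip(f, f_min, f_max))   # keep the cable taut and below its rating

# Example: knee cable during flexion with identified k = 180 N/m, d = 12 N*s/m (assumed)
print(cable_force(x_ref=0.25, x=0.17, dx_ref=0.4, dx=0.35, k_joint=180.0, d_joint=12.0))
```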