A spatial impedance controller for robotic manipulation
Mechanical impedance is the dynamic generalization of stiffness and, by definition, determines interactive behavior. Although the argument for explicitly controlling impedance is strong, impedance control has had only a modest impact on robotic manipulator control practice, in part because it is difficult to select impedances suitable for a given task. A spatial impedance controller is presented that simplifies impedance selection. Impedance is characterized using "spatially affine" families of compliance and damping, which are described by nonspatial and spatial parameters. Nonspatial parameters are selected independently of the configuration of the object with which the robot must interact. Spatial parameters depend on object configurations, but transform in an intuitive, well-defined way. Control laws corresponding to these compliance and damping families are derived assuming a commonly used robot model. While the compliance control law was implemented in simulation and on a real robot, this paper emphasizes the underlying theory.
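The idea of a compliance/damping law, and of spatial parameters that "transform in a well-defined way" under a change of object configuration, can be illustrated with a generic Cartesian impedance sketch. This is not the paper's "spatially affine" formulation; the gains, poses, and the rotation-of-stiffness rule below are standard textbook forms used only for illustration.

```python
import numpy as np

def impedance_force(K, B, x_des, x, v_des, v):
    """Generic compliance/damping law (illustrative, not the paper's exact
    formulation): the commanded force pulls the end effector toward the
    desired pose with stiffness K and damping B."""
    return K @ (x_des - x) + B @ (v_des - v)

# Example: planar translation with anisotropic stiffness (values made up).
K = np.diag([100.0, 400.0])   # N/m
B = np.diag([20.0, 20.0])     # N*s/m
f = impedance_force(K, B,
                    x_des=np.array([0.5, 0.0]), x=np.array([0.4, 0.0]),
                    v_des=np.zeros(2),          v=np.array([0.1, 0.0]))
# f = K*(x_des - x) + B*(v_des - v) = [100*0.1 - 20*0.1, 0] = [8.0, 0.0]

# When the object frame rotates by R, the spatial (configuration-dependent)
# part of the stiffness transforms as K' = R K R^T.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
K_rot = R @ K @ R.T   # a 90 degree rotation swaps the two stiffness axes
```

The point of the transformation rule is that the nonspatial parameters (the eigenvalues of K) stay fixed while only the spatial orientation changes with the object.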
The astronaut and the banana peel: An EVA retriever scenario
To prepare for the problem of accidents in Space Station activities, the Extravehicular Activity Retriever (EVAR) robot is being constructed, whose purpose is to retrieve astronauts and tools that float free of the Space Station. Advanced Decision Systems is at the beginning of a project to develop research software capable of guiding EVAR through the retrieval process. This involves addressing problems in machine vision, dexterous manipulation, real-time construction of programs via speech input, and reactive execution of plans despite the mishaps and unexpected conditions that arise in uncontrolled domains. The problem analysis phase of this work is presented. An EVAR scenario is used to elucidate major domain and technical problems. An overview of the technical approach to prototyping an EVAR system is also presented.
Force-based control for human-robot cooperative object manipulation
In Physical Human-Robot Interaction (PHRI), humans and robots share the workspace and physically interact and collaborate to perform a common task. However, robots do not have human levels of intelligence or the capacity to adapt in performing collaborative tasks. Moreover, the presence of humans in the vicinity of the robot requires ensuring their safety, in terms of both software and hardware. One aspect of safety is the stability of the human-robot control system, which can be placed in jeopardy by several factors, such as internal time delays. Another aspect is the mutual understanding between humans and robots needed to prevent conflicts in performing a task. The kinesthetic transmission of human intention is, in general, ambiguous when an object is involved, and the robot cannot distinguish the human intention to rotate from the intention to translate (the translation/rotation problem). This thesis examines the aforementioned issues related to PHRI. First, the instability arising from a time delay is addressed. For this purpose, the time delay in the system is modeled with an exponential function, and the effect of system parameters on the stability of the interaction is examined analytically. The proposed method is compared with the state-of-the-art criteria used to study the stability of PHRI systems with similar setups and high human stiffness. Second, the unknown human grasp position is estimated by exploiting the interaction forces measured by a force/torque sensor at the robot end effector. To address cases where the human interaction torque is nonzero, the unknown parameter vector is augmented to include the human-applied torque. The proposed method is also compared via experimental studies with the conventional method, which assumes a contact point (i.e., that the human torque is equal to zero).
Finally, the translation/rotation problem in shared object manipulation is tackled by proposing and developing a new control scheme based on identifying the ongoing task and adapting the robot's role, i.e., whether it is a passive follower or an active assistant. This scheme allows the human to transport the object independently in all degrees of freedom and also reduces human effort, which is an important factor in PHRI, especially for repetitive tasks. Simulation and experimental results clearly demonstrate that the force the human must apply is significantly reduced once the task is identified.
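The "conventional method" the thesis compares against, i.e., estimating the grasp point under the assumption of zero human torque, can be sketched from the wrench relation tau = r x F. This is a minimal least-squares version under that assumption (function names and numbers are illustrative, not from the thesis):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_grasp_point(F, tau):
    """Contact-point estimate assuming zero human-applied torque:
    solve tau = r x F = -skew(F) @ r in the least-squares sense.
    skew(F) is rank 2, so the component of r along F is unobservable
    from a single sample; lstsq returns the minimum-norm solution."""
    A = -skew(F)
    r, *_ = np.linalg.lstsq(A, tau, rcond=None)
    return r

# Sanity check: grasp 0.2 m along y, human pushes with 5 N along x.
F = np.array([5.0, 0.0, 0.0])
tau = np.cross(np.array([0.0, 0.2, 0.0]), F)   # torque sensed at the wrist
r_est = estimate_grasp_point(F, tau)           # recovers [0, 0.2, 0]
```

The thesis's contribution is precisely that this model breaks when the human also applies a torque; augmenting the unknown vector with that torque (not shown here) removes the zero-torque assumption.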
A Haptic Surface Robot Interface for Large-Format Touchscreen Displays
This thesis presents the design for a novel haptic interface for large-format touchscreens. Techniques such as electrovibration, ultrasonic vibration, and external braked devices have been developed by other researchers to deliver haptic feedback to touchscreen users. However, these methods do not address the need for spatial constraints that only restrict user motion in the direction of the constraint. This technology gap contributes to the lack of haptic technology available for touchscreen-based upper-limb rehabilitation, despite the prevalent use of haptics in other forms of robotic rehabilitation. The goal of this thesis is to display kinesthetic haptic constraints to the touchscreen user in the form of boundaries and paths, which assist or challenge the user in interacting with the touchscreen. The presented prototype accomplishes this by steering a single wheel in contact with the display while remaining driven by the user. It employs a novel embedded force sensor, which it uses to measure the interaction force between the user and the touchscreen. The haptic response of the device is controlled using this force data to characterize user intent. The prototype can operate in a simulated free mode as well as simulate rigid and compliant obstacles and path constraints. A data architecture has been created to allow the prototype to be used as a peripheral add-on device which reacts to haptic environments created and modified on the touchscreen. The long-term goal of this work is to create a haptic system that enables a touchscreen-based rehabilitation platform for people with upper limb impairments
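The core mechanism, constraining user motion by steering a single user-driven wheel, can be summarized in one line of geometry: if the wheel's rolling direction is kept tangent to the desired path or boundary, user-applied force can only move the contact point along that direction. A trivial sketch (the helper name and interface are hypothetical; the thesis's actual force-based controller is far more involved):

```python
import math

def steering_angle(path_tangent):
    """Steer the wheel along the local path tangent so that the
    user-driven contact point is free along the path but constrained
    across it (hypothetical helper, illustrative only)."""
    tx, ty = path_tangent
    return math.atan2(ty, tx)

# A horizontal path segment needs zero steering; a vertical one needs 90 deg.
a0 = steering_angle((1.0, 0.0))
a1 = steering_angle((0.0, 1.0))
```

In the actual device, the measured interaction force is what decides when to apply such a constraint (rigid or compliant) versus simulating free motion.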
Brain computer interface based robotic rehabilitation with online modification of task speed
We present a systematic approach that enables online modification/adaptation of robot-assisted rehabilitation exercises by continuously monitoring the intention levels of patients using an electroencephalogram (EEG) based Brain-Computer Interface (BCI). In particular, we use Linear Discriminant Analysis (LDA) to classify event-related synchronization (ERS) and desynchronization (ERD) patterns associated with motor imagery; however, instead of providing a binary classification output, we utilize the posterior probabilities extracted from the LDA classifier as continuous-valued outputs to control a rehabilitation robot. Passive velocity field control (PVFC) is used as the underlying robot controller to map instantaneous levels of motor imagery during the movement to the speed of contour-following tasks. In other words, PVFC changes the speed of contour-following tasks with respect to the intention levels of motor imagery. PVFC also decouples the task from the speed of the task, and ensures coupled stability of the overall robot-patient system. The proposed framework is implemented on AssistOn-Mobile, a series-elastic-actuated holonomic mobile platform, and feasibility studies with healthy volunteers have been conducted to test the effectiveness of the proposed approach. By giving patients online control over the speed of the task, the proposed approach ensures active involvement of patients throughout exercise routines and has the potential to increase the efficacy of robot-assisted therapies.
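The key design choice, using the LDA posterior as a continuous control signal rather than a hard label, can be sketched with a hand-rolled two-class LDA on synthetic features. Everything below (the synthetic "rest" vs. "motor imagery" features, the speed range, the equal-priors assumption) is illustrative, not the paper's pipeline:

```python
import numpy as np

# Synthetic stand-ins for ERD/ERS band-power features (not real EEG).
rng = np.random.default_rng(0)
X0 = rng.normal(-1.0, 1.0, (200, 2))   # class 0: rest
X1 = rng.normal(+1.0, 1.0, (200, 2))   # class 1: motor imagery

# Two-class LDA with shared covariance and equal priors.
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
cov = np.cov(np.vstack([X0 - mu0, X1 - mu1]).T)
w = np.linalg.solve(cov, mu1 - mu0)
b = -0.5 * (mu0 + mu1) @ w

def mi_posterior(x):
    """Posterior probability of motor imagery under the LDA model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def task_speed(x, v_min=0.05, v_max=0.25):
    """Map the continuous posterior (not a 0/1 label) to the speed of
    the contour-following task, as the abstract describes."""
    return v_min + mi_posterior(x) * (v_max - v_min)
```

A feature vector deep in the motor-imagery region yields a speed near `v_max`, while a rest-like vector yields one near `v_min`; intermediate intention levels modulate the speed smoothly, which is what makes the interface graded rather than switch-like.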
Haptic Transparency and Interaction Force Control for a Lower-Limb Exoskeleton
Controlling the interaction forces between a human and an exoskeleton is crucial for providing transparency or for adjusting assistance or resistance levels. However, controlling the interaction forces of lower-limb exoskeletons designed for unrestricted overground walking remains an open problem. For these types of exoskeletons, it is challenging to implement force/torque sensors at every contact between the user and the exoskeleton for direct force measurement. Moreover, it is important to compensate for the exoskeleton's whole-body gravitational and dynamical forces, especially for heavy lower-limb exoskeletons. Previous works either simplified the dynamic model by treating the legs as independent double pendulums, or did not close the loop with interaction force feedback.

The proposed whole-exoskeleton closed-loop compensation (WECC) method calculates the interaction torques during the complete gait cycle by using whole-body dynamics and joint torque measurements on a hip-knee exoskeleton. Furthermore, it uses a constrained optimization scheme to track desired interaction torques in a closed loop while considering physical and safety constraints. We evaluated the haptic transparency and dynamic interaction torque tracking of WECC control on three subjects. We also compared the performance of WECC with a controller based on a simplified dynamic model and with a passive version of the exoskeleton. The WECC controller yields a consistently low absolute interaction torque error during the whole gait cycle for both zero and nonzero desired interaction torques. In contrast, the simplified controller tracks desired interaction torques poorly during the stance phase.

Comment: 17 pages, 12 figures
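The control structure described above, compensating whole-body dynamics, closing the loop on interaction torque, and respecting safety constraints, can be caricatured in a few lines. This is not the paper's actual constrained optimization; it uses proportional feedback and exploits the fact that a least-squares objective under a simple box constraint reduces to clipping the unconstrained solution:

```python
import numpy as np

def wecc_like_command(tau_int_des, tau_int_meas, tau_dyn, kp, tau_limit):
    """Sketch in the spirit of WECC (illustrative, not the paper's
    optimization scheme): feed forward the whole-body gravity/dynamics
    compensation tau_dyn, add proportional feedback on the interaction
    torque error, and enforce joint-torque safety limits by projection
    onto the box [-tau_limit, tau_limit]."""
    tau_cmd = tau_dyn + kp * (tau_int_des - tau_int_meas)
    return np.clip(tau_cmd, -tau_limit, tau_limit)

# Hypothetical hip/knee numbers: transparency mode (zero desired
# interaction torque) with nonzero measured interaction torque.
tau_cmd = wecc_like_command(tau_int_des=np.array([0.0, 0.0]),
                            tau_int_meas=np.array([1.0, -1.0]),
                            tau_dyn=np.array([10.0, -5.0]),
                            kp=2.0, tau_limit=20.0)
# tau_cmd = [10 - 2, -5 + 2] = [8.0, -3.0]
```

In the paper the projection step is replaced by a genuine constrained optimization over the whole-body model, which is what allows consistent tracking across both stance and swing.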