96 research outputs found

    Impedance Modulation for Negotiating Control Authority in a Haptic Shared Control Paradigm

    Communication and cooperation among team members can be enhanced significantly through physical interaction. Successful collaboration requires integrating the individual partners' intentions into a shared action plan, which may involve a continuous negotiation of intentions and roles. This paper presents an adaptive haptic shared control framework in which a human driver and an automation system are physically coupled through a motorized steering wheel. Through haptic feedback, the driver and the automation system can monitor each other's actions and still intuitively express their control intentions. The objective of this paper is to develop a systematic model for an automation system that can vary its impedance so that control authority transitions between the two agents intuitively and smoothly. To this end, we defined a cost function that not only ensures the safety of the collaborative task but also accounts for the assistive behavior of the automation system. We employed a predictive controller based on modified least squares to modulate the automation system's impedance such that the cost function is optimized. The results demonstrate the significance of the proposed approach for negotiating control authority, specifically when the human and the automation are in a non-cooperative mode. Furthermore, the performance of the adaptive haptic shared control is compared with the traditional haptic shared control paradigm with fixed automation impedance.
    Comment: final manuscript accepted at the 2020 American Control Conference (ACC).
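    The cost-based impedance modulation this abstract describes can be pictured with a toy one-step sketch. Everything below — the quadratic cost weights, the virtual-spring torque model, the crude one-step error prediction, and all names — is an illustrative assumption, not the paper's actual predictive controller:

    ```python
    import numpy as np

    def select_stiffness(theta_err, theta_err_dot, tau_driver,
                         candidates=np.linspace(0.5, 5.0, 10),
                         w_safe=1.0, w_assist=0.3, horizon=0.5):
        """Pick the automation stiffness minimising a one-step cost:
        a safety term on the predicted steering error plus an assist
        term penalising conflict with the driver's torque (toy model)."""
        best_k, best_cost = None, np.inf
        for k in candidates:
            tau_auto = k * theta_err                     # virtual-spring torque
            # Crude one-step prediction of the steering error (dimensional
            # shortcuts on purpose; a real controller integrates dynamics).
            e_pred = theta_err + horizon * (theta_err_dot - tau_auto)
            cost = w_safe * e_pred**2 + w_assist * (tau_auto - tau_driver)**2
            if cost < best_cost:
                best_k, best_cost = k, cost
        return best_k
    ```

    With a driver torque opposing the automation, the conflict term drives the selected stiffness down, i.e. authority yields to the human — the qualitative behaviour the paper's negotiation scheme targets.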

    Optimal Torque and Stiffness Control in Compliantly Actuated Robots

    Abstract — Anthropomorphic robots that aim to approach human performance in agility and efficiency are typically highly redundant not only in their kinematics but also in their actuation. Variable-impedance actuators, used to drive many of these devices, can modulate torque and passive impedance (stiffness and/or damping) simultaneously and independently. Here, we propose a framework for the simultaneous optimisation of torque and impedance (stiffness) profiles in order to optimise task performance, tuned to the complex hardware and incorporating real-world constraints. Simulation and hardware experiments validate the viability of this approach under complex, state-dependent constraints and demonstrate the task-performance benefits of optimal temporal impedance modulation.
    Index Terms — Variable-stiffness actuation, physical constraints, optimal control
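    One way to picture simultaneous torque/stiffness optimisation under actuator limits is a brute-force search over candidate pairs on a toy series-elastic model. The dynamics, the torque constraint, and the task (maximising link velocity at release, as in a throwing motion) are all illustrative assumptions, not the paper's formulation:

    ```python
    import numpy as np

    def link_velocity(k, tau, m=1.0, t_end=0.5, dt=1e-3):
        """Toy series-elastic actuator: a motor drives a unit-inertia
        link through a spring of stiffness k with constant torque tau.
        The first-order motor side is a deliberate simplification."""
        q_m = q = dq = 0.0                    # motor angle, link angle, link velocity
        for _ in range(int(t_end / dt)):
            spring = k * (q_m - q)            # torque transmitted by the spring
            dq += (spring / m) * dt           # link acceleration
            q += dq * dt
            q_m += (tau - spring) * dt        # crude first-order motor dynamics
        return dq                             # link velocity at "release"

    def best_torque_stiffness(taus, ks, tau_max=2.0):
        """Grid search over (torque, stiffness) pairs, discarding torques
        beyond the actuator limit -- a stand-in for the constrained
        optimisation the abstract describes."""
        feasible = [(t, k) for t in taus for k in ks if abs(t) <= tau_max]
        return max(feasible, key=lambda p: link_velocity(p[1], p[0]))
    ```

    A real variable-stiffness optimiser would search over time-varying profiles with state-dependent constraints rather than constant pairs; the sketch only shows the structure of the trade-off.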

    Reach and grasp by people with tetraplegia using a neurally controlled robotic arm

    Paralysis following spinal cord injury (SCI), brainstem stroke, amyotrophic lateral sclerosis (ALS) and other disorders can disconnect the brain from the body, eliminating the ability to carry out volitional movements. A neural interface system (NIS) [1–5] could restore mobility and independence for people with paralysis by translating neuronal activity directly into control signals for assistive devices. We have previously shown that people with longstanding tetraplegia can use an NIS to move and click a computer cursor and to control physical devices [6–8]. Able-bodied monkeys have used an NIS to control a robotic arm [9], but it is unknown whether people with profound upper-extremity paralysis or limb loss could use cortical neuronal ensemble signals to direct useful arm actions. Here, we demonstrate the ability of two people with long-standing tetraplegia to use NIS-based control of a robotic arm to perform three-dimensional reach and grasp movements. Participants controlled the arm over a broad space without explicit training, using signals decoded from a small, local population of motor cortex (MI) neurons recorded from a 96-channel microelectrode array. One of the study participants, implanted with the sensor five years earlier, also used a robotic arm to drink coffee from a bottle. While robotic reach and grasp actions were not as fast or accurate as those of an able-bodied person, our results demonstrate the feasibility for people with tetraplegia, years after CNS injury, to recreate useful multidimensional control of complex devices directly from a small sample of neural signals.
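    The decoding step — mapping a small population's binned firing rates to continuous arm commands — can be sketched as a linear velocity decoder. Studies of this kind typically use Kalman-filter decoding; the ridge-regression fit below, and every name in it, is a simplified stand-in chosen for brevity:

    ```python
    import numpy as np

    def fit_velocity_decoder(rates, velocities, ridge=1e-2):
        """Fit a linear map from binned firing rates (T x N channels)
        to 3-D endpoint velocity (T x 3) by ridge regression --
        a toy stand-in for Kalman-filter decoders used in NIS work."""
        X = np.hstack([rates, np.ones((rates.shape[0], 1))])   # append bias column
        W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]),
                            X.T @ velocities)
        return W

    def decode(rates, W):
        """Apply the fitted map to new firing rates."""
        X = np.hstack([rates, np.ones((rates.shape[0], 1))])
        return X @ W
    ```

    On synthetic linear data the fit recovers the mapping almost exactly; real neural data are noisy and nonstationary, which is why recursive filters are preferred in practice.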

    Design, fabrication and control of soft robots

    Conventionally, engineers have employed rigid materials to fabricate precise, predictable robotic systems, which are easily modelled as rigid members connected at discrete joints. Natural systems, however, often match or exceed the performance of robotic systems with deformable bodies. Cephalopods, for example, achieve amazing feats of manipulation and locomotion without a skeleton; even vertebrates such as humans achieve dynamic gaits by storing elastic energy in their compliant bones and soft tissues. Inspired by nature, engineers have begun to explore the design and control of soft-bodied robots composed of compliant materials. This Review discusses recent developments in the emerging field of soft robotics.
    National Science Foundation (U.S.) (Grant IIS-1226883)

    Peripersonal Space and Margin of Safety around the Body: Learning Visuo-Tactile Associations in a Humanoid Robot with Artificial Skin

    This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation i) can be learned efficiently and in real time via simple interaction with the robot, ii) can lead to the generation of behaviors such as avoidance and reaching, and iii) can contribute to the understanding of the biological principle of motor equivalence. More specifically, with respect to i), the present model contributes to hypothesizing a learning mechanism for peripersonal space. In relation to point ii), we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance or reaching of an incoming stimulus, and for iii) we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement.
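    The learning in point i) — associating prior visual stimuli with subsequent skin contact — can be caricatured as a per-taxel contact-probability estimator over distance bins, updated online. The single distance feature (the paper's representation also considers quantities such as time to contact), the binning, and the class below are illustrative assumptions:

    ```python
    import numpy as np

    class TaxelRF:
        """Toy receptive-field model for one skin taxel: estimate the
        probability of contact as a function of an approaching object's
        distance, from accumulated visuo-tactile experience."""
        def __init__(self, d_max=0.4, n_bins=8):
            self.edges = np.linspace(0.0, d_max, n_bins + 1)  # distance bins (m)
            self.hits = np.zeros(n_bins)                      # contacts per bin
            self.seen = np.zeros(n_bins)                      # samples per bin

        def _bin(self, distance):
            return int(np.clip(np.searchsorted(self.edges, distance) - 1,
                               0, len(self.hits) - 1))

        def update(self, distance, contact):
            """Record one visual sample and whether contact followed."""
            b = self._bin(distance)
            self.seen[b] += 1
            self.hits[b] += bool(contact)

        def p_contact(self, distance):
            """Empirical contact probability for that distance (0 if unseen)."""
            b = self._bin(distance)
            return self.hits[b] / self.seen[b] if self.seen[b] else 0.0
    ```

    A controller like the one in point ii) could then threshold `p_contact` across taxels to trigger avoidance (move the high-probability body part away) or reaching (move it toward the stimulus).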

    Symbiotic human-robot collaborative assembly

