47 research outputs found

    Optimal Torque and Stiffness Control in Compliantly Actuated Robots

    Abstract — Anthropomorphic robots that aim to approach human agility and efficiency are typically highly redundant not only in their kinematics but also in actuation. Variable-impedance actuators, used to drive many of these devices, can modulate torque and passive impedance (stiffness and/or damping) simultaneously and independently. Here, we propose a framework for the simultaneous optimisation of torque and impedance (stiffness) profiles to optimise task performance, tuned to the complex hardware and incorporating real-world constraints. Simulation and hardware experiments validate the viability of this approach under complex, state-dependent constraints and demonstrate the task-performance benefits of optimal temporal impedance modulation.
    Index Terms — Variable-stiffness actuation, physical constraints, optimal control
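The trade-off the abstract describes can be pictured with a toy example. The sketch below is an illustrative assumption, not the paper's optimiser: it searches over a constant spring stiffness for a 1-DOF variable-stiffness link (inertia I, spring k between motor angle and link) and picks the stiffness giving the highest link speed at a fixed release time, subject to a stand-in hardware bound on k.

```python
# Hedged sketch, not the paper's method: coarse search over constant
# stiffness k for a 1-DOF variable-stiffness "throwing" task.
# Illustrative model: I * q_ddot = k * (theta_m - q), with the motor
# angle theta_m ramping to a target; reward is link speed at release.

def release_speed(k, I=0.05, theta_target=1.0, ramp_rate=4.0,
                  t_release=0.3, dt=1e-3):
    """Euler-integrate the spring-loaded link; return |dq| at release."""
    q = dq = t = 0.0
    while t < t_release:
        theta_m = min(theta_target, ramp_rate * t)  # motor ramp
        ddq = k * (theta_m - q) / I                 # spring torque / inertia
        dq += ddq * dt
        q += dq * dt
        t += dt
    return abs(dq)

# Candidate stiffnesses stand in for the hardware-feasible range.
candidates = [5.0, 10.0, 20.0, 40.0]
best_k = max(candidates, key=release_speed)
```

A real variable-stiffness optimiser would additionally vary k over time and enforce the actuator's state-dependent constraints, which is the harder problem the paper addresses.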

    Reach and grasp by people with tetraplegia using a neurally controlled robotic arm

    Paralysis following spinal cord injury (SCI), brainstem stroke, amyotrophic lateral sclerosis (ALS) and other disorders can disconnect the brain from the body, eliminating the ability to carry out volitional movements. A neural interface system (NIS) [1–5] could restore mobility and independence for people with paralysis by translating neuronal activity directly into control signals for assistive devices. We have previously shown that people with long-standing tetraplegia can use an NIS to move and click a computer cursor and to control physical devices [6–8]. Able-bodied monkeys have used an NIS to control a robotic arm [9], but it is unknown whether people with profound upper-extremity paralysis or limb loss could use cortical neuronal ensemble signals to direct useful arm actions. Here, we demonstrate the ability of two people with long-standing tetraplegia to use NIS-based control of a robotic arm to perform three-dimensional reach and grasp movements. Participants controlled the arm over a broad space without explicit training, using signals decoded from a small, local population of motor cortex (MI) neurons recorded from a 96-channel microelectrode array. One of the study participants, implanted with the sensor five years earlier, also used the robotic arm to drink coffee from a bottle. While robotic reach and grasp actions were not as fast or accurate as those of an able-bodied person, our results demonstrate the feasibility for people with tetraplegia, years after CNS injury, to recreate useful multidimensional control of complex devices directly from a small sample of neural signals.
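The decoding step described above maps unit firing rates to continuous velocity commands. A minimal sketch of that idea, assuming a hand-written linear readout with illustrative weights and baseline rates (the study itself used a Kalman-filter-type decoder fitted to recorded neural data):

```python
# Illustrative linear velocity decoder: 2-D velocity read out from a
# vector of unit firing rates. Weights and baselines are made-up
# numbers, not fitted to real recordings.

def decode_velocity(rates, weights, baseline):
    """Map firing rates (spikes/s) to a (vx, vy) velocity command."""
    vx = sum(w[0] * (r - b) for w, r, b in zip(weights, rates, baseline))
    vy = sum(w[1] * (r - b) for w, r, b in zip(weights, rates, baseline))
    return vx, vy

# Three hypothetical units; each weight pair encodes a preferred direction.
weights  = [(0.02, 0.00), (0.00, 0.02), (-0.01, 0.01)]
baseline = [10.0, 10.0, 10.0]

# Exciting only unit 0 above baseline drives motion along +x.
v = decode_velocity([30.0, 10.0, 10.0], weights, baseline)
```

In practice the weights are calibrated from observed firing while the participant imagines movements, and the decoder is smoothed over time; this sketch shows only the instantaneous readout.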

    Design, fabrication and control of soft robots

    Conventionally, engineers have employed rigid materials to fabricate precise, predictable robotic systems, which are easily modelled as rigid members connected at discrete joints. Natural systems, however, often match or exceed the performance of robotic systems with deformable bodies. Cephalopods, for example, achieve amazing feats of manipulation and locomotion without a skeleton; even vertebrates such as humans achieve dynamic gaits by storing elastic energy in their compliant bones and soft tissues. Inspired by nature, engineers have begun to explore the design and control of soft-bodied robots composed of compliant materials. This Review discusses recent developments in the emerging field of soft robotics.
    National Science Foundation (U.S.) (Grant IIS-1226883)

    Peripersonal Space and Margin of Safety around the Body: Learning Visuo-Tactile Associations in a Humanoid Robot with Artificial Skin

    This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation i) can be learned efficiently and in real time via simple interaction with the robot, ii) can lead to the generation of behaviors like avoidance and reaching, and iii) can contribute to the understanding of the biological principle of motor equivalence. More specifically, with respect to i), the present model contributes to hypothesizing a learning mechanism for peripersonal space. In relation to point ii), we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance or reaching of an incoming stimulus, and for iii) we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement.
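One way to picture the learned visuo-tactile association is as a per-distance-bin estimate of the probability that an approaching stimulus will contact the skin, which a controller can threshold to trigger avoidance. The sketch below makes that simplifying assumption; bin width, samples and threshold are illustrative, not values from the paper.

```python
# Hedged sketch of a visuo-tactile receptive field: estimate, per
# distance bin, the probability that an approaching stimulus touches
# the skin, from (distance, touched) samples gathered during interaction.

def learn_receptive_field(samples, bin_width=0.05, n_bins=4):
    """samples: list of (distance_m, touched_bool) -> P(touch) per bin."""
    hits = [0] * n_bins
    counts = [0] * n_bins
    for d, touched in samples:
        i = min(int(d / bin_width), n_bins - 1)
        counts[i] += 1
        hits[i] += 1 if touched else 0
    return [h / c if c else 0.0 for h, c in zip(hits, counts)]

# Toy interaction data: close approaches tend to end in contact.
samples = [(0.02, True), (0.03, True), (0.08, True), (0.09, False),
           (0.14, False), (0.18, False)]
field = learn_receptive_field(samples)

def should_avoid(distance, field, bin_width=0.05, threshold=0.5):
    """Trigger avoidance when the learned touch probability is high."""
    i = min(int(distance / bin_width), len(field) - 1)
    return field[i] > threshold
```

The same learned field can drive either avoidance or reaching by flipping the sign of the commanded motion, mirroring point ii) above.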

    Progress and prospects of the human–robot collaboration

    Recent technological advances in the hardware design of robotic platforms have enabled the implementation of various control modalities for improved interaction with humans and unstructured environments. An important application area for the integration of robots with such advanced interaction capabilities is human–robot collaboration. This area carries high socio-economic impact and maintains the sense of purpose of the people involved, as the robots do not completely replace humans in the work process. The research community's recent surge of interest in this area has been devoted to the implementation of various methodologies to achieve intuitive and seamless human–robot–environment interaction by combining the collaborating partners' superior capabilities, e.g. the human's cognitive abilities and the robot's physical power-generation capacity. The main purpose of this paper is to review the state of the art in intermediate (bi-directional) human–robot interfaces, robot control modalities, system stability, benchmarking and relevant use cases, and to extend views on the required future developments in the realm of human–robot collaboration.
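Among the control modalities commonly reviewed for physical collaboration, admittance control is a representative example: the measured human interaction force drives a virtual mass-damper, and the robot tracks the resulting velocity. A minimal sketch with illustrative parameters (this particular model is an assumption for illustration, not a scheme attributed to the paper):

```python
# Admittance control sketch: virtual dynamics m * v_dot = f - d * v,
# where f is the sensed human force and v the commanded end-effector
# velocity. m and d are illustrative tuning parameters.

def admittance_step(v, f_human, m=2.0, d=8.0, dt=0.01):
    """One Euler step of the virtual mass-damper dynamics."""
    v_dot = (f_human - d * v) / m
    return v + v_dot * dt

# A constant 4 N push drives the end-effector toward the steady-state
# velocity f/d = 0.5 m/s, so the robot feels like a light, damped mass.
v = 0.0
for _ in range(1000):
    v = admittance_step(v, 4.0)
```

Choosing m and d sets how "heavy" the robot feels to the human partner, which is one of the stability-versus-transparency trade-offs such reviews discuss.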

    Two Mode Impedance Control of Velma Service Robot Redundant Arm


    Comparison of object-level grasp controllers for dynamic dexterous manipulation

    Object-level control of a dexterous robot hand provides an intuitive high-level interface for solving fine manipulation tasks. In the past, many algorithms were proposed based on a weighted pseudoinverse of the grasp map. In a different approach, Stramigioli introduced a virtual-object-based controller, called an Intrinsically Passive Controller (IPC). These controllers are reviewed and compared. A new controller, similar to the IPC but using a virtual frame rather than a virtual object, is subsequently proposed. The controllers are compared with respect to their object force distribution, their extensibility to N fingers, the ease of specifying the object-level impedance and grasp forces, the dimensionality of the coupling springs, the internal controller dynamics, and the computational effort. Controllers for robotic hands usually implement only stiffness control and do not program the damping. We address how to choose and implement damping as a function of the desired object-level stiffness and the effective hand–object inertia. The evaluation reveals the dynamic effects of fast motions, which should not be neglected in the design of grasp controllers in practice. The controllers are applied to the torque-controlled DLR Hand II to compare their effectiveness in experiments.
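Two of the ideas above can be sketched in a deliberately tiny 1-D setting, assumed here for illustration and not taken from the paper: distributing a desired net object force over two opposing fingertips via the grasp-map pseudoinverse plus a nullspace internal (squeezing) force, and choosing damping from the object-level stiffness and the effective inertia.

```python
# Illustrative 1-D grasp with two opposing fingertips, grasp map
# G = [1, -1] (finger forces f1, f2 produce net object force f1 - f2).

import math

def fingertip_forces(f_net, f_internal):
    """Pseudoinverse distribution: G^+ f_net = (f_net/2, -f_net/2);
    the nullspace direction (1, 1) carries the internal squeezing force."""
    return (0.5 * f_net + f_internal, -0.5 * f_net + f_internal)

def damping_from_stiffness(stiffness, inertia, zeta=1.0):
    """Damping for a desired ratio zeta: d = 2 * zeta * sqrt(k * m),
    the standard second-order rule the abstract's damping design follows."""
    return 2.0 * zeta * math.sqrt(stiffness * inertia)
```

For example, a net force of 2 N with 5 N of internal squeeze yields fingertip forces (6, 4), whose difference is the commanded net force; the full N-finger, 6-D case replaces the scalars with the grasp matrix and the hand–object inertia matrix.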