
    Human-Like Impedance and Minimum Effort Control for Natural and Efficient Manipulation

    Humans incorporate and switch between learnt neuromotor strategies while performing complex tasks, exploiting kinematic redundancy to achieve optimized performance. Inspired by the superior motor skills of humans, in this paper we investigate a combined free-motion and interaction controller for a certain class of robotic manipulation. In this bimodal controller, kinematic degrees of redundancy are adapted according to task-suitable dynamic costs. The proposed algorithm gives high priority to a minimum-effort controller while performing point-to-point free-space movements. Once the robot comes into contact with the environment, the Tele-Impedance, common-mode and configuration-dependent stiffness (CMS-CDS) controller replicates the human's estimated endpoint stiffness and measured equilibrium-position profiles on the slave robotic arm in real time. Results of the proposed controller in contact with the environment are compared with those derived from Tele-Impedance implemented using torque-based classical Cartesian stiffness control. The minimum-effort and interaction performance achieved highlights the possibility of adopting human-like, sophisticated strategies in humanoid robots, or in robots with adequate degrees of redundancy, in order to accomplish tasks in a certain class of robotic manipulation
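The contact-phase behavior described in this abstract rests on a Cartesian stiffness (impedance) law mapped to joint torques. As a minimal sketch of that idea only — the Jacobian, gain values, and 2-DOF setup below are illustrative assumptions, not the paper's actual CMS-CDS implementation:

```python
import numpy as np

def cartesian_impedance_torque(J, x, x_d, xdot, xdot_d, K, D):
    """Map a desired Cartesian stiffness/damping behavior to joint torques.

    J       : task Jacobian at the current configuration
    x, x_d  : current and desired end-effector positions
    K, D    : Cartesian stiffness and damping matrices, e.g. the
              human-estimated endpoint stiffness replicated on the slave arm
    """
    f = K @ (x_d - x) + D @ (xdot_d - xdot)  # desired Cartesian wrench
    return J.T @ f                            # joint torques via the Jacobian

# toy planar 2-DOF example with hypothetical values
J = np.array([[1.0, 0.5],
              [0.0, 1.0]])
K = np.diag([300.0, 300.0])   # N/m
D = np.diag([30.0, 30.0])     # Ns/m
tau = cartesian_impedance_torque(J, np.zeros(2), np.array([0.01, 0.0]),
                                 np.zeros(2), np.zeros(2), K, D)
```

In the Tele-Impedance setting, K would be updated online from the estimated human endpoint stiffness rather than held constant.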

    User Experience Enhanced Interface and Controller Design for Human-Robot Interaction

    Robotic technologies have developed rapidly in recent years across various fields, such as medical services, industrial manufacturing and aerospace. Despite this rapid development, how to deal effectively with uncertain environments during human-robot interaction remains unresolved. Current artificial intelligence (AI) technology does not enable robots to fulfil complex tasks without human guidance. Thus teleoperation, which means remotely controlling a robot by a human operator, is indispensable in many scenarios and is an important and useful research tool. This thesis focuses on the design of a user experience (UX) enhanced robot controller, and of human-robot interaction interfaces that aim to give human operators an immersive perception of teleoperation. Several works have been done to achieve this goal. First, to control a telerobot smoothly, a customised variable-gain control method is proposed in which the stiffness of the telerobot varies with the muscle activation level extracted from signals collected by surface electromyography (sEMG) devices. Second, two main works are conducted to improve the user-friendliness of the interaction interfaces. One incorporates force feedback into the framework, providing operators with haptic feedback for remotely manipulating target objects; given the high cost of force sensors, a haptic force estimation algorithm is proposed in this part of the work so that a force sensor is no longer needed. The other develops a visual servo control system in which a stereo camera mounted on the head of a dual-arm robot offers operators a real-time view of the working situation.
    To compensate for internal and external uncertainties and accurately track the stereo camera's view angles along planned trajectories, a deterministic learning technique is utilised, which enables reuse of the knowledge learnt before the current dynamics change and thus increases learning efficiency. Third, instead of sending commands to the telerobots via joysticks, keyboards or demonstrations, the telerobots are controlled directly by the upper-limb motion of the human operator. An algorithm is designed that utilises motion signals from inertial measurement unit (IMU) sensors to capture the operator's upper-limb motion. The skeleton of the operator is detected by Kinect V2 and then transformed and mapped into the joint positions of the controlled robot arm. In this way, the upper-limb motion signals of the operator act as reference trajectories for the telerobots. A neural network (NN) based trajectory controller is also designed to track the generated reference trajectory. Fourth, to further enhance the operator's immersive perception of teleoperation, virtual reality (VR) technology is incorporated so that the operator can interact with and adjust the robots more easily and accurately from the robot's perspective. Comparative experiments have been performed to demonstrate the effectiveness of the proposed design scheme, and tests with human subjects were carried out to evaluate the interface design
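The variable-gain idea in the first contribution — telerobot stiffness driven by the operator's muscle activation — can be sketched as follows. The linear mapping and the bounds are illustrative assumptions, not the thesis's actual calibration:

```python
def activation_to_stiffness(activation, k_min=100.0, k_max=800.0):
    """Map a normalized sEMG activation level in [0, 1] to a stiffness gain.

    A linear interpolation between hypothetical bounds is assumed here
    purely for illustration; relaxed muscles yield a compliant telerobot,
    tensed muscles a stiff one.
    """
    a = min(max(activation, 0.0), 1.0)   # clamp to the valid range
    return k_min + a * (k_max - k_min)
```

In practice the activation level would come from rectified, low-pass-filtered sEMG normalized against a maximum voluntary contraction, and the resulting gain would feed the telerobot's impedance loop each control cycle.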

    Strategies for control of neuroprostheses through Brain-Machine Interfaces

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2005. Includes bibliographical references (p. 145-153). The concept of brain-controlled machines sparks our imagination with many exciting possibilities. One potential application is in neuroprostheses for paralyzed patients or amputees. The quality of life of those who have extremely limited motor abilities can potentially be improved if we have a means of inferring their motor intent from neural signals and commanding a robotic device that can be controlled to perform as a smart prosthesis. In our recent demonstration of such Brain Machine Interfaces (BMIs), monkeys were able to control a robot arm in 3-D motion directly, thanks to advances in accessing, recording, and decoding the electrical activity of populations of single neurons in the brain, together with algorithms for driving robotic devices with the decoded neural signals in real time. However, such demonstrations of BMIs have thus far been limited to simple position control of graphical cursors or robots in free space with non-human primates, and many challenges remain in reducing this technology to practice in a neuroprosthesis for humans. The research in this thesis introduces strategies for optimizing the information extracted from the recorded neural signals so that a practically viable and ultimately useful neuroprosthesis can be achieved. A framework for incorporating robot sensors and reflex-like behavior has been introduced in the form of Continuous Shared Control. The strategy provides means for steadier and more natural movement by compensating for the natural reflexes that are absent in direct brain control. The Muscle Activation Method, an alternative decoding algorithm for extracting motor parameters from neural activity, has been presented. The method allows the prosthesis to be controlled under impedance control, which is similar to how our natural limbs are controlled.
    Using this method, the prosthesis can perform a much wider range of tasks in partially known and unknown environments. Finally, preparations have been made for clinical trials with humans, which would signify a major step toward the ultimate goal of human brain operated machines. By Hyun K. Kim. Ph.D

    Exploring Teleimpedance and Tactile Feedback for Intuitive Control of the Pisa/IIT SoftHand

    This paper proposes a teleimpedance controller with tactile feedback for more intuitive control of the Pisa/IIT SoftHand. With the aim of realizing a robust, efficient and low-cost hand prosthesis design, the SoftHand is developed based on the motor-control principle of synergies, through which the immense complexity of the hand is simplified into distinct motor patterns. Due to the built-in flexibility of the hand joints, the SoftHand follows a synergistic path as it grasps, allowing objects of various shapes to be grasped using only a single motor. The DC motor of the hand incorporates a novel teleimpedance control in which the user's postural and stiffness synergy references are tracked in real time. In addition, two tactile interfaces are developed for intuitive control of the hand. The first (mechanotactile) interface exploits a disturbance observer that estimates the interaction forces with the grasped object; the estimated forces are then converted and applied to the upper arm of the user via a custom-made pressure cuff. The second interface employs vibrotactile feedback based on surface irregularities and acceleration signals, providing the user with information about the surface properties of the object as well as detection of object slippage while grasping. Grasp robustness and intuitiveness of hand control were evaluated in two sets of experiments. Results suggest that incorporating the aforementioned haptic feedback strategies, together with user-driven compliance of the hand, facilitates the execution of safe and stable grasps, and that a low-cost, robust hand employing hardware-based synergies might be a good alternative to traditional myoelectric prostheses
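The sensor-less force estimation behind the mechanotactile interface can be sketched as a simple disturbance observer on the hand's single motor: subtract what the nominal motor model accounts for from the applied torque, and low-pass filter the residual. The model structure and every parameter below are illustrative assumptions, not the paper's actual observer:

```python
class MotorDisturbanceObserver:
    """First-order disturbance observer for a single-motor hand (sketch).

    tau_ext ~ low-pass( tau_applied - J_n * accel - b_n * vel )
    J_n, b_n : nominal motor inertia and viscous friction (assumed values)
    alpha    : coefficient in (0, 1] of a discrete first-order low-pass filter
    """
    def __init__(self, J_n=1e-4, b_n=1e-3, alpha=0.1):
        self.J_n, self.b_n, self.alpha = J_n, b_n, alpha
        self.tau_ext = 0.0

    def update(self, tau_applied, vel, accel):
        # torque not explained by the nominal model is attributed to contact
        residual = tau_applied - self.J_n * accel - self.b_n * vel
        self.tau_ext += self.alpha * (residual - self.tau_ext)
        return self.tau_ext

obs = MotorDisturbanceObserver()
# motor stalled against an object: applied torque with no motion
est = obs.update(0.1, 0.0, 0.0)
```

The filtered estimate would then be scaled into a cuff pressure command for the user's upper arm.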

    Dyadic behavior in co-manipulation: from humans to robots

    To both decrease the physical toll on a human worker and increase a robot's perception of its environment, a human-robot dyad may be used to co-manipulate a shared object. From the premise that humans are efficient when working together, this work investigates human-human dyads co-manipulating an object. The co-manipulation is evaluated from motion capture data, surface electromyography (EMG) sensors, and custom contact sensors for qualitative performance analysis. A human-human dyadic co-manipulation experiment is designed in which each human is instructed to behave as a leader, as a follower, or neither, acting as naturally as possible. The experiment's data analysis revealed that humans modulate their arm mechanical impedance depending on their role during the co-manipulation. To emulate this human behavior during a co-manipulation task, an admittance controller with varying stiffness is presented. The desired stiffness is continuously varied based on a smooth scalar function that assigns a degree of leadership to the robot. Furthermore, the controller is analyzed through simulations, and its stability is analyzed via Lyapunov theory. The resulting object trajectories closely resemble the patterns seen in the human-human dyad experiment
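The varying-stiffness admittance law described here can be sketched as a mass-damper-spring whose stiffness is blended by a leadership scalar. The gains and the linear blend below are illustrative assumptions standing in for the smooth function and values used in the work:

```python
def admittance_step(x, xdot, f_ext, x_d, leadership, dt,
                    m=5.0, d=20.0, k_min=50.0, k_max=500.0):
    """One Euler step of m*xddot + d*xdot + k(a)*(x - x_d) = f_ext.

    leadership : scalar in [0, 1]; 0 -> compliant follower (low stiffness),
                 1 -> assertive leader (high stiffness). A linear blend is
                 assumed here in place of the work's smooth function.
    """
    k = k_min + leadership * (k_max - k_min)
    xddot = (f_ext - d * xdot - k * (x - x_d)) / m   # admittance dynamics
    xdot_new = xdot + dt * xddot                      # semi-implicit Euler
    x_new = x + dt * xdot_new
    return x_new, xdot_new

# starting displaced from the target with no external force: a leader
# (high stiffness) pulls back toward x_d faster than a follower
x_f, v_f = admittance_step(0.1, 0.0, 0.0, 0.0, leadership=0.0, dt=0.01)
x_l, v_l = admittance_step(0.1, 0.0, 0.0, 0.0, leadership=1.0, dt=0.01)
```

Conversely, under a partner's external force the follower yields more readily, which is the role-dependent impedance modulation observed in the human dyads.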

    Biosignal‐based human–machine interfaces for assistance and rehabilitation : a survey

    As a definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring has paved the way for a new class of HMIs that take such biosignals as inputs to control various applications. The current survey reviews the large literature of the last two decades on biosignal-based HMIs for assistance and rehabilitation, to outline the state of the art and identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were further screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macrocategories were considered to classify the different biosignals used for HMI control: biopotential, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application using six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over the last years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition in the last decade, whereas studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance. However, they also increase HMIs' complexity, so their usefulness should be carefully evaluated for the specific application

    Human Machine Interfaces for Teleoperators and Virtual Environments

    In March 1990, a meeting was held around the general theme of teleoperation research into virtual environment display technology. This is a collection of conference-related fragments that gives a glimpse of the potential of the following fields and how they interplay: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models

    An Augmented Discrete-Time Approach for Human-Robot Collaboration

    Human-robot collaboration (HRC) is a key feature distinguishing the new generation of robots from conventional robots. Relevant HRC topics have been extensively investigated recently in academic institutes and companies to improve interactive performance between humans and robots. Generally, human motor control regulates human motion adaptively to the external environment with safety, compliance, stability, and efficiency. Inspired by this, we propose an augmented approach that enables a robot to understand human motion behaviors based on human kinematics and human postural impedance adaptation. Human kinematics is identified by a geometric kinematics approach that maps the human arm configuration, together with a stiffness index controlled by hand gesture, to an anthropomorphic arm. While human arm postural stiffness is estimated and calibrated within the robot's empirical stability region, human motion is captured using a geometry-vector approach based on Kinect. A discrete-time biomimetic controller is employed to make a Baxter robot arm imitate human arm behaviors based on the Baxter robot dynamics. An object-moving task is implemented on the Baxter robot simulator to validate the performance of the proposed methods. Results show that the proposed approach to HRC is intuitive, stable, efficient, and compliant, and may find various applications in human-robot collaboration scenarios

    Electromyography Based Human-Robot Interfaces for the Control of Artificial Hands and Wearable Devices

    The design of robotic systems is currently turning to human-inspired solutions as a road to replicating the human ability and flexibility in performing motor tasks. Especially for control and teleoperation purposes, the human-in-the-loop approach is a key element within the framework known as the Human-Robot Interface. This thesis reports the research activity carried out in designing Human-Robot Interfaces based on the detection of human motion intention from surface electromyography. The main goal was to investigate intuitive and natural control solutions for the teleoperation of both robotic hands during grasping tasks and wearable devices during elbow-assistance applications. The design solutions are based on human motor-control principles and the interpretation of surface electromyography, which are reviewed with emphasis on the concept of synergies. The electromyography-based control strategies for robotic hand grasping and wearable device assistance are also reviewed. The contribution of this research to the control of artificial hands relies on the integration of different levels of the synergistic organization of motor control, and on the combination of proportional control and machine learning approaches, under the guideline of user-centred intuitiveness in the Human-Robot Interface design specifications. On the side of wearable devices, the control of a novel upper-limb assistive device based on the Twisted String Actuation concept is addressed. The contribution regards assistance of the elbow during load-lifting tasks, exploring a simplified use of surface electromyography within the design of the Human-Robot Interface, with the aim of working around the complex subject-dependent algorithm calibrations required by joint-torque estimation methods

    Force-Sensor-Less Bilateral Teleoperation Control of Dissimilar Master-Slave System With Arbitrary Scaling

    This study designs a high-precision bilateral teleoperation control for a dissimilar master-slave system. The proposed nonlinear control design takes advantage of a novel subsystem-dynamics-based control method that allows individual (decentralized) model-based controllers to be designed for the manipulators locally, at the subsystem level. Very importantly, a dynamic model of the human operator is incorporated into the control of the master manipulator. The individual controllers for the dissimilar master and slave manipulators are connected through a specific communication channel for the bilateral teleoperation to function. Stability of the overall control design is rigorously guaranteed under arbitrary time delays. Novel features of this study include the completely force-sensor-less design for the teleoperation system, with a solution for a uniquely introduced computational algebraic loop, a method for estimating the exogenous operating force of the operator, and the use of a commercial haptic manipulator. Most importantly, we conduct experiments on a dissimilar system in two degrees of freedom (DOFs). As an illustration of the performance of the proposed system, a force scaling factor of up to 800 and a position scaling factor of up to 4 were used in the experiments. The experimental results show exceptional tracking performance, verifying the real-world performance of the proposed concept
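The arbitrary position and force scaling at the heart of this design can be sketched in its simplest kinematic form, ignoring the dynamics, delays, and model-based subsystem controllers that the paper actually contributes. The function name and the use of the paper's reported maximum factors as defaults are assumptions for illustration:

```python
def scale_master_to_slave(x_m, f_s, pos_scale=4.0, force_scale=800.0):
    """Kinematic scaling in a bilateral teleoperation loop (sketch).

    x_m : master position command; the slave tracks pos_scale * x_m
    f_s : force measured or estimated at the slave; the master reflects
          f_s / force_scale so that large slave-side forces stay hand-safe
    """
    x_s_ref = pos_scale * x_m      # position reference sent to the slave
    f_m_ref = f_s / force_scale    # force reflected back to the operator
    return x_s_ref, f_m_ref

# a 10 mm master motion commands 40 mm at the slave, while an 80 N
# contact force at the slave is felt as a small fraction at the master
x_s, f_m = scale_master_to_slave(0.01, 80.0)
```

In the force-sensor-less design, f_s would itself be an estimate produced by the model-based observers rather than a sensor reading.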