
    iMOVE: Development of a hybrid control interface based on sEMG and movement signals for an assistive robotic manipulator

    For many people with upper limb disabilities, simple activities of daily living such as drinking, opening a door, or pushing an elevator button require the assistance of a caregiver, which reduces the individual's independence. Assistive robotic systems controlled via a human-robot interface could enable these people to perform such tasks autonomously again and thereby increase their independence and quality of life. Moreover, such an interface could encourage rehabilitation of motor functions, because the individual would need to use their remaining body movements and muscle activity to provide control signals. This project aims to develop a novel hybrid control interface that combines remaining movements and muscle activity of the upper body to control the position and impedance of a robotic manipulator. This thesis presents a Cartesian position control system for the KINOVA Gen3 robotic arm, which implements a proportional-derivative control law based on the Jacobian transpose method and therefore does not require inverse kinematics. A second controller is proposed to change the robot's rigidity in real time based on measurements of muscle activity (sEMG), allowing the user to modulate the robot's impedance while performing a task. Finally, the thesis presents a body-machine interface that maps motions of the upper body (head and shoulders) to the space of robot control signals, using principal component analysis for dimensionality reduction. The results demonstrate that, by combining the three methods above, the user can control the robot's position with head and shoulder movements while also adapting the robot's impedance through muscle activation. In future work, the performance of this system will be tested with patients with severe movement impairments.
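
    A minimal sketch of a Jacobian-transpose Cartesian PD law of the kind described above, assuming hypothetical kinematics helpers (forward_kinematics, jacobian) and illustrative gain values; it is not the thesis's actual implementation:

```python
import numpy as np

KP = np.diag([120.0, 120.0, 120.0])   # proportional gains, illustrative values
KD = np.diag([15.0, 15.0, 15.0])      # derivative gains, illustrative values

def cartesian_pd_torques(q, dq, x_des, forward_kinematics, jacobian):
    """Joint torques driving the end effector toward x_des.

    tau = J(q)^T (Kp (x_des - x) - Kd x_dot): the Jacobian transpose maps
    the Cartesian PD wrench back to joint torques, so no inverse
    kinematics is required.
    """
    x = forward_kinematics(q)           # current end-effector position (3,)
    J = jacobian(q)                     # 3 x n positional Jacobian
    x_dot = J @ dq                      # end-effector velocity
    f = KP @ (x_des - x) - KD @ x_dot   # Cartesian PD "virtual force"
    return J.T @ f                      # joint torques (n,)
```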

    Development of an EMG-controlled mobile robot

    This paper presents the development of a Robot Operating System (ROS)-based mobile robot controlled using electromyography (EMG) signals. The proposed robot's structure is specifically designed for modularity and is controlled by a Raspberry Pi 3 running a ROS application together with a Teensy microcontroller. EMG muscle commands are sent to the robot through hand gestures, which are captured with a Thalmic Myo Armband and recognized by a k-Nearest Neighbour (k-NN) classifier. The robot's performance is evaluated by navigating it along specific paths while controlling it solely through the EMG signals, combined with a collision avoidance approach. This paper thus aims to expand research on the topic by introducing a more accurate classification system with a wider set of gestures, coming closer to a usable real-life application.
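
    A hedged sketch of the gesture recognition step: mean-absolute-value features per EMG channel fed to a k-NN classifier. The window shape, feature choice, and value of k are assumptions; the Myo armband streams 8 EMG channels.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def mav_features(window):
    """Mean absolute value of each EMG channel; window shape (samples, 8)."""
    return np.mean(np.abs(window), axis=0)

def train_gesture_classifier(windows, labels, k=5):
    """Fit a k-NN classifier on labeled EMG windows (gesture name strings)."""
    X = np.array([mav_features(w) for w in windows])
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(X, labels)
    return clf

# At run time, each recognized gesture would be mapped to a velocity
# command published to the robot's ROS base controller, e.g. "fist" -> forward.
```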

    Teleoperation control of Baxter robot using Kalman filter-based sensor fusion

    A Kalman filter has been successfully applied to fuse motion capture data collected from a Kinect sensor and a pair of MYO armbands to teleoperate a robot. A new strategy based on a vector approach has been developed to accomplish a specific motion capture task. The arm motion of the operator is captured by a Kinect sensor and processed with the Processing software. Two MYO armbands with embedded inertial measurement units are worn on the operator's arm to detect the motion of the upper arm; this is used to recognize and precisely calculate the speed of the operator's physical arm motion. The User Datagram Protocol is employed to send the human movement to a simulated Baxter robot arm for teleoperation. To obtain joint angles for the human limb using the vector approach, RosPy and Python scripting are used. A series of experiments has been conducted to test the performance of the proposed technique, which provides the basis for teleoperation of the simulated Baxter robot.
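
    As an illustration of the fusion idea, here is a generic constant-velocity Kalman filter per joint coordinate that accepts position measurements from either sensor; the motion model and all noise covariances are guesses, not values from the paper.

```python
import numpy as np

class FusionKF:
    """Fuses Kinect and MYO IMU position estimates for one coordinate."""

    def __init__(self, dt, q=1e-3, r_kinect=1e-2, r_imu=5e-3):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
        self.H = np.array([[1.0, 0.0]])              # both sensors observe position
        self.Q = q * np.eye(2)                       # process noise (assumed)
        self.R = {"kinect": r_kinect, "imu": r_imu}  # per-sensor noise (assumed)
        self.x = np.zeros(2)                         # state: [position, velocity]
        self.P = np.eye(2)                           # state covariance

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, sensor):
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R[sensor]
        K = self.P @ self.H.T / S                    # Kalman gain, shape (2, 1)
        self.x = self.x + (K * y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
```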

    Multi-Operator Gesture Control of Robotic Swarms Using Wearable Devices

    The theory and design of effective interfaces for human interaction with multi-robot systems has recently gained significant interest. Robotic swarms are multi-robot systems in which local interactions between robots and the neighbors within their spatial neighborhood generate emergent collective behaviors. Most prior work has studied interfaces for human interaction with remote swarms, but swarms also have great potential in applications working alongside humans, motivating the need for interfaces for local interaction. Given the collective nature of swarms, human interaction may occur at many levels of abstraction, ranging from swarm behavior selection to teleoperation. Wearable gesture control is an intuitive interaction modality that can meet this requirement while largely keeping the operator's hands unencumbered. In this paper, we present an interaction method that uses a gesture-based wearable device with a limited number of gestures for robust control of a complex system: a robotic swarm. Experiments conducted with a real robot swarm compare performance in single-operator and two-operator conditions. Human operators using our interaction method successfully completed the task in all trials, illustrating the effectiveness of the method, with better performance in the two-operator condition, indicating that separation of function is beneficial for our method. The primary contribution of our work is the development and demonstration of interaction methods that allow robust control of a difficult-to-understand multi-robot system using only the noisy inputs typical of smartphones and other on-body sensor-driven devices.
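
    For illustration only, a hypothetical mapping from a small Myo-style gesture set to swarm-level behavior commands; the gesture names, behaviors, and broadcast call are assumptions, not the paper's actual command set.

```python
# Hypothetical gesture -> swarm behavior table; names are illustrative.
GESTURE_TO_BEHAVIOR = {
    "fingers_spread": "disperse",      # spread out spatially
    "fist":           "rendezvous",    # gather toward the centroid
    "wave_in":        "rotate_left",   # collective rotation
    "wave_out":       "rotate_right",
    "double_tap":     "stop",          # safety stop / mode switch
}

def dispatch(gesture, swarm):
    """Forward a recognized gesture to the swarm as a behavior command."""
    behavior = GESTURE_TO_BEHAVIOR.get(gesture)
    if behavior is not None:
        swarm.broadcast(behavior)      # assumed swarm API, for illustration
```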

    Body swarm interface (BOSI): controlling robotic swarms using human bio-signals

    Traditionally, robots are controlled using joysticks, keyboards, mice, and other similar human-computer interface (HCI) devices. Although this approach is effective and practical in some cases, it is restricted to healthy individuals without disabilities, and it requires the user to master the device before using it. Control also becomes complicated and non-intuitive when multiple robots must be controlled simultaneously with these traditional devices, as in the case of Human Swarm Interfaces (HSI). This work presents a novel concept: using human bio-signals to control swarms of robots. This concept has two major advantages. First, it gives amputees and people with certain disabilities the ability to control robotic swarms, which has previously not been possible. Second, it gives the user a more intuitive interface for controlling swarms of robots through gestures, thoughts, and eye movement. We measure different bio-signals from the human body, including electroencephalography (EEG), electromyography (EMG), and electrooculography (EOG), using off-the-shelf products. After minimal signal processing, we decode the intended control action using machine learning techniques such as Hidden Markov Models (HMM) and k-Nearest Neighbors (k-NN). We employ formation controllers based on distance and displacement to control the shape and motion of the robotic swarm. The thought and gesture classifications are compared against ground truth, and the resulting pipelines are evaluated in both simulations and hardware experiments with swarms of ground robots and aerial vehicles.
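
    A sketch of one standard instance of the displacement-based formation control the abstract mentions: each robot steers so that the displacements to its neighbors approach the desired formation offsets. The neighbor graph, offsets, and gains here are illustrative assumptions.

```python
import numpy as np

def formation_step(positions, delta, neighbors, gain=0.5, dt=0.1):
    """One integration step of displacement-based formation control.

    positions, delta: (N, 2) arrays of current positions and desired
    formation offsets; neighbors: dict mapping robot i to its neighbor list.
    """
    new_positions = positions.copy()
    for i, nbrs in neighbors.items():
        u = np.zeros(2)
        for j in nbrs:
            # drive the inter-robot displacement toward the desired one
            u += (positions[j] - positions[i]) - (delta[j] - delta[i])
        new_positions[i] = positions[i] + gain * dt * u
    return new_positions
```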

    A robot learning method with physiological interface for teleoperation systems

    When a robot is teleoperated in a remote place, the human operator relies largely on the perception of remote environmental conditions to make timely and correct decisions in a prescribed task. However, due to unknown and dynamic working environments, the manipulator's performance and the efficiency of the human-robot interaction may degrade significantly. In this study, a novel human-centric interaction method based on a physiological interface was presented to capture detailed information about the remote operation environment. At the same time, to relieve the workload of the human operator and improve the efficiency of the teleoperation system, an updated regression method was proposed to build a nonlinear model of the demonstrations for the prescribed task. Because the demonstration data were of various lengths, a dynamic time warping algorithm was first employed to synchronize the data over time before proceeding with the other steps. The novelty of this method lies in the fact that both the task-specific information and the muscle parameters of the human operator are taken into account within a single task, so a more natural and safer interaction between the human and the robot can be achieved. The feasibility of the proposed method was demonstrated by experimental results.
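
    A minimal dynamic time warping sketch of the alignment step used to synchronize demonstrations of different lengths; the quadratic dynamic-programming recurrence is standard, while the Euclidean local cost and this particular implementation are assumptions.

```python
import numpy as np

def dtw_path(a, b):
    """Return the DTW alignment path between trajectories a (n, d) and b (m, d)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # accumulated cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # Euclidean local cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack from (n, m) to recover the warping path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Each demonstration can be warped onto a reference trajectory via this path
# before fitting the regression model to the time-aligned data.
```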