
    Brain-Computer Interface meets ROS: A robotic approach to mentally drive telepresence robots

    This paper presents and evaluates a novel approach to integrating a non-invasive Brain-Computer Interface (BCI) with the Robot Operating System (ROS) to mentally drive a telepresence robot. Controlling a mobile device with human brain signals could improve the quality of life of people suffering from severe physical disabilities or of elderly people who can no longer move on their own: the BCI user is able to actively interact with relatives and friends located in different rooms thanks to a video-streaming connection to the robot. To facilitate control of the robot via BCI, we explore new ROS-based algorithms for navigation and obstacle avoidance, making the system safer and more reliable. To this end, the robot exploits two maps of the environment, one for localization and one for navigation, and the BCI user can also use both to monitor the robot's position while it is moving. As the experimental results demonstrate, the user's cognitive workload is reduced, decreasing the number of commands necessary to complete the task and helping the user stay attentive for longer periods of time. Comment: Accepted in the Proceedings of the 2018 IEEE International Conference on Robotics and Automation.
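
    The paper gives the architecture rather than implementation details, but the integration pattern it describes, a BCI decoder emitting discrete commands that a ROS navigation layer then refines, can be sketched as a minimal ROS node. The topic names, the String-based command encoding, and the velocity values below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the BCI-to-ROS integration pattern: a node that maps
# discrete BCI commands onto velocity commands for the robot base.
# Topic names, the String command encoding, and the speeds are assumptions.
import rospy
from std_msgs.msg import String
from geometry_msgs.msg import Twist

class BciTeleopNode:
    def __init__(self):
        rospy.init_node('bci_teleop')
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        # Hypothetical topic on which a BCI decoder publishes discrete
        # commands such as 'forward', 'left', 'right'.
        rospy.Subscriber('/bci/command', String, self.on_command)

    def on_command(self, msg):
        twist = Twist()
        if msg.data == 'forward':
            twist.linear.x = 0.2       # m/s, deliberately conservative
        elif msg.data == 'left':
            twist.angular.z = 0.5      # rad/s
        elif msg.data == 'right':
            twist.angular.z = -0.5
        # In the paper's architecture a navigation layer would further
        # refine this motion for obstacle avoidance before execution.
        self.cmd_pub.publish(twist)

if __name__ == '__main__':
    BciTeleopNode()
    rospy.spin()
```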

    Brain–Machine Interface and Visual Compressive Sensing-Based Teleoperation Control of an Exoskeleton Robot

    This paper presents a teleoperation control scheme for an exoskeleton robotic system based on a brain-machine interface and vision feedback. Visual compressive sensing, brain-machine reference commands, and adaptive fuzzy controllers in joint space are integrated to enable the robot to perform manipulation tasks guided by the human operator's mind. First, a visual-feedback link is implemented via video captured by a camera, allowing the operator to visualize the manipulator's workspace and the movements being executed. The compressed images are then used as feedback errors in a non-vector space for producing steady-state visual evoked potential (SSVEP) electroencephalography (EEG) signals; in contrast to traditional visual servoing, this requires no prior information about image features. The proposed EEG decoding algorithm generates control signals for the exoskeleton robot using features extracted from neural activity. To account for coupled dynamics and actuator input constraints during manipulation, a local adaptive fuzzy controller is designed to drive the exoskeleton to track the trajectories intended by the human operator, providing a convenient means of dynamics compensation with minimal knowledge of the exoskeleton's dynamic parameters. Extensive experimental studies with three subjects were performed to verify the validity of the proposed method.
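
    The SSVEP decoding the abstract refers to is typically built on frequency-domain classification: each command is associated with a flicker frequency, and the decoder picks the frequency with the most power in the EEG. The sketch below shows that generic approach, assuming fixed stimulus frequencies, a single occipital channel, and a 250 Hz sampling rate; the paper's actual decoder additionally operates on compressed-image feedback errors, which is not reproduced here.

```python
# Generic frequency-domain SSVEP decoding sketch. Stimulus frequencies,
# sampling rate, and single-channel handling are assumptions.
import numpy as np
from scipy.signal import welch

FS = 250.0                       # EEG sampling rate in Hz (assumed)
STIM_FREQS = [8.0, 10.0, 12.0]   # flicker frequencies, one per command

def decode_ssvep(eeg_window):
    """Return the index of the stimulus frequency with the most power.

    eeg_window: 1-D array of samples from an occipital channel.
    """
    freqs, psd = welch(eeg_window, fs=FS, nperseg=len(eeg_window))
    scores = []
    for f in STIM_FREQS:
        # Sum power in a narrow band around each stimulus frequency.
        band = (freqs > f - 0.5) & (freqs < f + 0.5)
        scores.append(psd[band].sum())
    return int(np.argmax(scores))

# Example: a synthetic 10 Hz response should decode as command index 1.
t = np.arange(0, 4, 1 / FS)
window = np.sin(2 * np.pi * 10.0 * t) + 0.5 * np.random.randn(len(t))
print(decode_ssvep(window))      # should print 1
```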

    Electroencephalography (EEG), electromyography (EMG) and eye-tracking for astronaut training and space exploration

    The ongoing push to send humans back to the Moon and on to Mars is giving rise to a wide range of novel technical solutions in support of prospective astronaut expeditions. Against this backdrop, the European Space Agency (ESA) has recently launched an investigation into unobtrusive interface technologies as a potential answer to such challenges. Three technologies have shown particular promise: EEG-based brain-computer interfaces (BCIs) provide a non-invasive way to use the recorded electrical activity of a user's brain; electromyography (EMG) enables monitoring of the electrical signals generated by the user's muscle contractions; and eye tracking enables, for instance, tracking the user's gaze direction via camera recordings to convey commands. Beyond simply improving the usability of prospective technical solutions, our findings indicate that EMG, EEG, and eye tracking could also serve to monitor and assess a variety of cognitive states, including the user's attention, cognitive load, and mental fatigue, while EMG could furthermore be used to monitor the astronaut's physical state. In this paper, we elaborate on the key strengths and challenges of these three enabling technologies and, in light of ESA's latest findings, reflect on their applicability in the context of human space flight. Furthermore, a timeline of technological readiness is provided. In so doing, this paper feeds into the growing discourse on emerging technology and its role in paving the way for a human return to the Moon and expeditions beyond Earth's orbit.
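
    As one concrete illustration of the cognitive-state monitoring mentioned above, the theta/alpha band-power ratio is a commonly used EEG proxy for mental workload. The sampling rate, frequency bands, and single-channel treatment below are assumptions; the paper surveys the enabling technologies and does not prescribe this particular index.

```python
# Sketch of a simple EEG workload index: theta (4-8 Hz) power over
# alpha (8-13 Hz) power. Bands and sampling rate are assumed.
import numpy as np
from scipy.signal import welch

FS = 256.0  # sampling rate in Hz (assumed)

def band_power(x, lo, hi):
    """Integrate the power spectral density between lo and hi Hz."""
    freqs, psd = welch(x, fs=FS, nperseg=int(2 * FS))
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

def workload_index(eeg_channel):
    """Theta/alpha ratio; higher values suggest higher mental workload."""
    theta = band_power(eeg_channel, 4.0, 8.0)
    alpha = band_power(eeg_channel, 8.0, 13.0)
    return theta / alpha

# Usage on 10 s of (here synthetic) single-channel EEG.
print(workload_index(np.random.randn(int(10 * FS))))
```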

    Generative Models for Learning Robot Manipulation Skills from Humans

    A long-standing goal in artificial intelligence is to make robots interact seamlessly with humans in performing everyday manipulation skills. Learning from demonstrations, or imitation learning, provides a promising route to bridge this gap. In contrast to direct trajectory learning from demonstrations, many interactive robotic applications pose problems that require a higher-level contextual understanding of the environment. This requires learning invariant mappings in the demonstrations that can generalize across different environmental situations, such as the size, position, and orientation of objects, or the viewpoint of the observer. In this thesis, we address this challenge by encapsulating invariant patterns in the demonstrations using probabilistic learning models for acquiring dexterous manipulation skills. We learn the joint probability density function of the demonstrations with a hidden semi-Markov model and smoothly follow the generated sequence of states with a linear quadratic tracking controller. The model exploits the invariant segments (also termed sub-goals, options, or actions) in the demonstrations and adapts the movement to external environmental situations, such as the size, position, and orientation of objects in the environment, using a task-parameterized formulation. We incorporate high-dimensional sensory data for skill acquisition by parsimoniously representing the demonstrations with statistical subspace clustering methods and exploiting the coordination patterns in the latent space. To adapt the models on the fly and/or teach new manipulation skills online from streaming data, we formulate a non-parametric, scalable online sequence clustering algorithm with Bayesian non-parametric mixture models, avoiding the model selection problem while ensuring tractability under small-variance asymptotics. We exploit the developed generative models to perform manipulation skills with remotely operated vehicles over satellite communication in the presence of communication delays and limited bandwidth. A set of task-parameterized generative models is learned from demonstrations of different manipulation skills provided by the teleoperator. The model captures the intention of the teleoperator on one hand and, on the other, provides assistance in performing remote manipulation tasks under varying environmental situations. The assistance is formulated as time-independent shared control, where the model continuously corrects the remote arm movement based on the current state of the teleoperator, and/or time-dependent autonomous control, where the model synthesizes the movement of the remote arm for autonomous skill execution. Using the proposed methodology with the two-armed Baxter robot as a mock-up for semi-autonomous teleoperation, we are able to learn manipulation skills such as opening a valve, pick-and-place of an object with obstacle avoidance, hot-stabbing (a specialized underwater task akin to peg-in-hole), screwdriver target snapping, and tracking a carabiner in as few as 4-8 demonstrations. Our study shows that the proposed manipulation assistance formulations improve the performance of the teleoperator by reducing task errors and execution time, while catering for environmental differences in performing remote manipulation tasks with limited bandwidth and communication delays.
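
    The control layer described above, following a model-generated state sequence with a linear quadratic tracking (LQT) controller, reduces to a standard backward Riccati recursion over a reference trajectory. The sketch below uses illustrative double-integrator dynamics and cost weights; in the thesis the reference sequence would come from the hidden semi-Markov model rather than the sine wave used here.

```python
# Finite-horizon linear quadratic tracking (LQT) via backward Riccati
# recursion. Dynamics, weights, and the reference are illustrative.
import numpy as np

def lqt_gains(A, B, Q, R, ref):
    """Compute feedback gains K_t and feedforward terms k_t to track ref."""
    T = len(ref)
    P, p = Q.copy(), Q @ ref[-1]          # terminal cost-to-go terms
    K_seq, k_seq = [], []
    for t in range(T - 2, -1, -1):
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)
        k = np.linalg.solve(S, B.T @ p)
        P = Q + A.T @ P @ (A - B @ K)
        p = Q @ ref[t] + (A - B @ K).T @ p
        K_seq.append(K)
        k_seq.append(k)
    return K_seq[::-1], k_seq[::-1]

# 1-D double integrator: state = [position, velocity], control = acceleration.
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([100.0, 1.0])      # penalize position error most
R = np.array([[0.01]])
ref = [np.array([np.sin(0.01 * t), 0.0]) for t in range(200)]

K_seq, k_seq = lqt_gains(A, B, Q, R, ref)
x = np.array([0.0, 0.0])
for K, k in zip(K_seq, k_seq):
    u = -K @ x + k                 # feedback on state, feedforward on ref
    x = A @ x + B @ u
```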

    Haptic Bimanual System for Teleoperation of Time-Delayed Tasks

    This paper presents a novel teleoperation system designed to address challenges in the remote control of spaceborne bimanual robotic tasks. The primary aim of the system is to assess and increase the efficacy of users performing bimanual tasks, while ensuring the safety of the system and minimising the user's mental load. The system consists of two seven-axis robots that are remotely controlled through two haptic control interfaces. The mental load of the user is monitored using a head-mounted interface, which collects eye-gaze data and provides components for the holographic user interface. The development of this system enables the safe remote execution of tasks, a critical building block for developing and deploying future space missions as well as other high-risk tasks.
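
    The abstract does not specify how the system compensates for time delay, but a classical, passivity-preserving option for time-delayed bilateral haptic teleoperation is the wave-variable transform, sketched below. The wave impedance and the delay length are assumed tuning parameters, and this is a generic scheme rather than the paper's own method.

```python
# Wave-variable transform for time-delayed bilateral teleoperation
# (Niemeyer/Slotine style). Impedance B and DELAY are assumptions.
import numpy as np
from collections import deque

B = 1.0        # wave impedance, an assumed tuning parameter
DELAY = 30     # one-way communication delay in control ticks (assumed)
channel = deque([0.0] * DELAY)   # delay line for the forward wave

def master_encode(v_master, f_feedback):
    """Master side: fold commanded velocity and reflected force into a wave."""
    return (B * v_master + f_feedback) / np.sqrt(2.0 * B)

def remote_decode(u_wave, f_contact):
    """Remote side: recover the velocity command from the delayed wave."""
    return (np.sqrt(2.0 * B) * u_wave - f_contact) / B

# One control tick: push the new wave in, pop the DELAY-old one out.
channel.append(master_encode(v_master=0.1, f_feedback=0.0))
v_cmd = remote_decode(channel.popleft(), f_contact=0.0)
print(v_cmd)
```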

    Human-Machine Interfaces for Service Robotics

    The abstract is in the attachment.