Neural Dynamics of Delayed Feedback in Robot Teleoperation: Insights from fNIRS Analysis
As robot teleoperation increasingly becomes integral in executing tasks in
distant, hazardous, or inaccessible environments, the challenge of operational
delays remains a significant obstacle. These delays are inherent in signal
transmission and processing and can adversely affect the operator's performance,
particularly in tasks requiring precision and timeliness. While current
research has made strides in mitigating these delays through advanced control
strategies and training methods, a crucial gap persists in understanding the
neurofunctional impacts of these delays and the efficacy of countermeasures
from a cognitive perspective. Our study narrows this gap by leveraging
functional Near-Infrared Spectroscopy (fNIRS) to examine the neurofunctional
implications of simulated haptic feedback on cognitive activity and motor
coordination under delayed conditions. In a human-subject experiment (N=41), we
manipulated sensory feedback to observe its influence on the responses of various
brain regions of interest (ROIs) during teleoperation tasks. The fNIRS data provided
a detailed assessment of cerebral activity, particularly in ROIs implicated in
time perception and the execution of precise movements. Our results reveal that
certain conditions, which provided immediate simulated haptic feedback,
significantly optimized neural functions related to time perception and motor
coordination, and improved motor performance. These findings provide empirical
evidence about the neurofunctional basis of the enhanced motor performance with
simulated synthetic force feedback in the presence of teleoperation delays.
Comment: Submitted to Frontiers in Human Neuroscience
Aerospace medicine and biology: A continuing bibliography with indexes (supplement 341)
This bibliography lists 133 reports, articles and other documents introduced into the NASA Scientific and Technical Information System during September 1990. Subject coverage includes: aerospace medicine and psychology, life support systems and controlled environments, safety equipment, exobiology and extraterrestrial life, and flight crew behavior and performance
Electroencephalography (EEG), electromyography (EMG) and eye-tracking for astronaut training and space exploration
The ongoing push to send humans back to the Moon and to Mars is giving rise
to a wide range of novel technical solutions in support of prospective
astronaut expeditions. Against this backdrop, the European Space Agency (ESA)
has recently launched an investigation into unobtrusive interface technologies
as a potential answer to such challenges. Three particular technologies have
shown promise in this regard: EEG-based brain-computer interfaces (BCI) provide
a non-invasive method of utilizing recorded electrical activity of a user's
brain, electromyography (EMG) enables monitoring of electrical signals
generated by the user's muscle contractions, and finally, eye tracking enables,
for instance, the tracking of user's gaze direction via camera recordings to
convey commands. Beyond simply improving the usability of prospective technical
solutions, our findings indicate that EMG, EEG, and eye-tracking could also
serve to monitor and assess a variety of cognitive states, including attention,
cognitive load, and mental fatigue of the user, while EMG could furthermore
also be utilized to monitor the physical state of the astronaut. In this paper,
we elaborate on the key strengths and challenges of these three enabling
technologies, and in light of ESA's latest findings, we reflect on their
applicability in the context of human space flight. Furthermore, a timeline of
technological readiness is provided. In so doing, this paper feeds into the
growing discourse on emerging technology and its role in paving the way for a
human return to the Moon and expeditions beyond the Earth's orbit
Aerospace Medicine and Biology: A continuing bibliography with indexes (supplement 290)
This bibliography lists 125 reports, articles and other documents introduced into the NASA scientific and technical information system in October 1986
Teleoperation control of Baxter robot using Kalman filter-based sensor fusion
A Kalman filter has been successfully applied to fuse motion capture data collected from a Kinect sensor and a pair of MYO armbands in order to teleoperate a robot. A new strategy utilizing a vector approach has been developed to accomplish a specific motion capture task. The arm motion of the operator is captured by the Kinect sensor and processed with Processing software. Two MYO armbands with embedded inertial measurement units are worn on the operator's arm to detect the motion of the upper arm; this information is used to recognize and calculate the precise speed of the operator's arm movement. The User Datagram Protocol is employed to send the human movement to a simulated Baxter robot arm for teleoperation. To obtain the joint angles of the human limb with the vector approach, RosPy and Python scripting are utilized. A series of experiments has been conducted to test the performance of the proposed technique, which provides the basis for teleoperation of the simulated Baxter robot
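As an illustration of the kind of fusion described above, the following is a minimal sketch of a Kalman filter that combines two noisy 1-D joint-angle streams (e.g., one Kinect-derived and one IMU-derived). The constant-velocity model and all noise parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def kalman_fuse(z_kinect, z_imu, dt=0.02, q=1e-3, r_kinect=5e-2, r_imu=1e-2):
    """Fuse two noisy 1-D joint-angle streams with a constant-velocity Kalman filter.

    z_kinect, z_imu: equal-length arrays of angle measurements (rad).
    q: process noise scale; r_*: per-sensor measurement variances (assumed values).
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition (angle, rate)
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])             # process noise covariance
    H = np.array([[1.0, 0.0], [1.0, 0.0]])         # both sensors observe the angle
    R = np.diag([r_kinect, r_imu])                 # stacked measurement noise
    x = np.array([z_kinect[0], 0.0])               # initial state estimate
    P = np.eye(2)
    out = []
    for zk, zi in zip(z_kinect, z_imu):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the stacked two-sensor measurement
        z = np.array([zk, zi])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

Stacking both sensors into one measurement vector lets the filter weight each stream by its assumed noise level, which is the essential idea behind fusing a camera-based and an IMU-based estimate of the same joint.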
The Utility of Measures of Attention and Situation Awareness for Quantifying Telepresence
Telepresence is defined as the sensation of being present at a remote robot task site while physically present at a local control station. This concept has received substantial attention in the recent past as a result of hypothesized benefits of presence experiences on human task performance with teleoperation systems. Human factors research, however, has made little progress in establishing a relationship between the concept of telepresence and teleoperator performance. This has been attributed to the multidimensional nature of telepresence, the lack of appropriate studies to elucidate this relationship, and the lack of a valid, reliable, and objective measure of telepresence. Subjective measures (e.g., questionnaires, rating scales) are most commonly used to measure telepresence. Objective measures have been proposed, including behavioral responses to stimuli presented in virtual worlds (e.g., ducking virtual objects). Other research has suggested the use of physiological measures, such as cardiovascular responses, to indicate the extent of telepresence experiences in teleoperation tasks. The objective of the present study was to assess the utility of using measures of attention allocation and situation awareness (SA) to objectively describe telepresence. Attention and SA have been identified as cognitive constructs potentially underlying telepresence experiences. Participants in this study performed a virtual mine neutralization task involving remote control of a simulated robotic rover and integrated tools to locate, uncover, and dispose of mines. Subjects simultaneously completed two secondary tasks that required them to monitor for low battery signals associated with operation of the vehicle and controls. Subjects were divided into three groups of eight according to task difficulty, which was manipulated by varying the number and spacing of mines in the task environment. Performance was measured as the average time to neutralize four mines.
Telepresence was assessed using a Presence questionnaire. Situation awareness was measured using the Situation Awareness Global Assessment Technique. Attention was measured as the ratio of the number of 'low battery' signal detections to the total number of signals presented through the secondary task displays. Analysis of variance results revealed that level of difficulty significantly affected performance time and telepresence. Regression analysis revealed that level of difficulty, immersive tendencies, and attention explained significant portions of the variance in telepresence
Enhanced teleoperation performance using hybrid control and virtual fixture
To develop secure, natural and effective teleoperation, the perception of the slave plays a key role in the interaction of a human operator with the environment. By sensing slave information, the human operator can choose the correct operation during human–robot interaction. This paper develops an integrated scheme based on a hybrid control and virtual fixture approach for the telerobot. The human operator can sense the slave's interaction condition and adjust the master device via the surface electromyographic signal. The hybrid control method integrates proportional-derivative control and variable stiffness control while incorporating muscle activation, and is proposed to quantitatively analyse the human operator's control demand so as to enhance the control performance of the teleoperation system. In addition, because of unskilful operation and physiological muscle tremor of the human operator, a virtual fixture method is developed to ensure accuracy of operation and to reduce the operating pressure on the human operator. Experimental results demonstrated the effectiveness of the proposed method for the teleoperated robot
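The core idea of the hybrid scheme, a PD law whose stiffness is modulated by the operator's normalized muscle activation, can be sketched as follows. The gain values and the linear activation-to-stiffness mapping are hypothetical, chosen only to illustrate the concept:

```python
def hybrid_pd_torque(q, qd, q_ref, qd_ref, activation,
                     kp_min=5.0, kp_max=50.0, kd=2.0):
    """Illustrative hybrid control law: a PD controller whose proportional
    (stiffness) gain is scaled by normalized muscle activation in [0, 1].

    q, qd:        measured joint position and velocity
    q_ref, qd_ref: reference position and velocity
    activation:   normalized sEMG-derived muscle activation (0 = relaxed)

    Higher operator muscle activation yields a stiffer response; all gains
    here are assumed demonstration values, not the paper's tuned controller.
    """
    a = min(max(activation, 0.0), 1.0)         # clamp activation to [0, 1]
    kp = kp_min + (kp_max - kp_min) * a        # variable stiffness gain
    return kp * (q_ref - q) + kd * (qd_ref - qd)
```

With the operator relaxed the master device stays compliant (low `kp`), and as muscle activation rises the same tracking error produces a proportionally larger corrective torque.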
Learning Algorithm Design for Human-Robot Skill Transfer
In this research, we develop an intelligent learning scheme for performing human-robot skill transfer. Techniques adopted in the scheme include the Dynamic Movement Primitive (DMP) method with Dynamic Time Warping (DTW), the Gaussian Mixture Model (GMM) with Gaussian Mixture Regression (GMR), and Radial Basis Function Neural Networks (RBFNNs). A series of experiments is conducted on a Baxter robot, a NAO robot and a KUKA iiwa robot to verify the effectiveness of the proposed design. During the design of the intelligent learning scheme, an online tracking system is developed to control the arm and head movement of the NAO robot using a Kinect sensor. The NAO robot is a humanoid robot with 5 degrees of freedom (DOF) for each arm. The joint motions of the operator's head and arm are captured by a Kinect V2 sensor, and this information is then transferred into the workspace via the forward and inverse kinematics. In addition, to improve the tracking performance, a Kalman filter is further employed to fuse motion signals from the operator sensed by the Kinect V2 sensor and a pair of MYO armbands, so as to teleoperate the Baxter robot. In this regard, a new strategy is developed using the vector approach to accomplish a specific motion capture task. For instance, the arm motion of the operator is captured by a Kinect sensor and programmed through a processing software. Two MYO armbands with embedded inertial measurement units are worn by the operator to aid the robots in detecting and replicating the operator's arm movements. For this purpose, the armbands help to recognize and calculate the precise velocity of motion of the operator's arm. Additionally, a neural network based adaptive controller is designed and implemented on the Baxter robot to validate the teleoperation of the Baxter robot. Subsequently, an enhanced teaching interface has been developed for the robot using DMP and GMR.
Motion signals are collected from a human demonstrator via the Kinect v2 sensor, and the data is sent to a remote PC for teleoperating the Baxter robot. At this stage, the DMP is utilized to model and generalize the movements. In order to learn from multiple demonstrations, DTW is used for the preprocessing of the data recorded on the robot platform, and GMM is employed for the evaluation of DMP to generate multiple patterns after the completion of the teaching process. Next, we apply the GMR algorithm to generate a synthesized trajectory to minimize position errors in the three-dimensional (3D) space. This approach has been tested by performing tasks on a KUKA iiwa and a Baxter robot, respectively. Finally, an optimized DMP is added to the teaching interface. A character recombination technology based on DMP segmentation that uses verbal commands has also been developed and incorporated in a Baxter robot platform. To imitate the recorded motion signals produced by the demonstrator, the operator trains the Baxter robot by physically guiding it to complete the given task. This is repeated five times, and the generated training data set is utilized via the playback system. Subsequently, the DTW is employed to preprocess the experimental data. For modelling and overall movement control, DMP is chosen. The GMM is used to generate multiple patterns after implementing the teaching process. Next, we employ the GMR algorithm to reduce position errors in the 3D space after a synthesized trajectory has been generated. The Baxter robot, remotely controlled via the user datagram protocol (UDP) from a PC, records and reproduces every trajectory. Additionally, Dragon NaturallySpeaking software is adopted to transcribe the voice data. This proposed approach has been verified by a character-writing task in which the Baxter robot has been taught to write a single character
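To illustrate the DTW preprocessing step used to align multiple demonstrations of different durations, here is a minimal sketch of classic dynamic time warping for 1-D trajectories. This is the textbook dynamic-programming formulation, not the thesis code:

```python
import numpy as np

def dtw_align(a, b):
    """Dynamic Time Warping: align two 1-D motion trajectories of possibly
    different lengths. Returns the cumulative alignment cost and the
    optimal warping path as (index_in_a, index_in_b) pairs.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    # fill the cumulative cost matrix
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack the optimal warping path from the end
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]
```

Once every demonstration is warped onto a common time base in this way, pointwise statistics across demonstrations become meaningful, which is what allows the GMM/GMR stage to model variability and regress a synthesized trajectory.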
Formulation of a new gradient descent MARG orientation algorithm: case study on robot teleoperation
We introduce a novel magnetic angular rate gravity (MARG) sensor fusion algorithm for inertial measurement. The new algorithm improves the popular gradient descent ('Madgwick') algorithm, increasing accuracy and robustness while preserving computational efficiency. Analytic and experimental results demonstrate faster convergence for multiple variations of the algorithm through changing magnetic inclination. Furthermore, decoupling of magnetic field variance from roll and pitch estimation is proven for enhanced robustness. The algorithm is validated in a human-machine interface (HMI) case study. The case study involves hardware implementation for wearable robot teleoperation both in Virtual Reality (VR) and in real-time on a 14 degree-of-freedom (DoF) humanoid robot. The experiment fuses inertial (movement) and mechanomyography (MMG) muscle sensing to control robot arm movement and grasp simultaneously, demonstrating algorithm efficacy and the capacity to interface with other physiological sensors. To our knowledge, this is the first such formulation and the first fusion of inertial measurement and MMG in HMI. We believe the new algorithm holds the potential to impact a very wide range of inertial measurement applications where full orientation estimation is necessary. Physiological sensor synthesis and the hardware interface further provide a foundation for robotic teleoperation systems with the robustness necessary for use in the field
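For context, a single IMU-only update step of the baseline gradient descent ('Madgwick') orientation filter that this work improves upon can be sketched as follows. Only the accelerometer and gyroscope terms are shown; the magnetometer correction and the paper's enhancements are omitted, and `beta` is an assumed tuning value:

```python
import numpy as np

def madgwick_imu_step(q, gyro, acc, dt, beta=0.1):
    """One IMU-only step of a Madgwick-style gradient descent orientation filter.

    q:    unit quaternion [w, x, y, z] (sensor frame relative to earth frame)
    gyro: angular rate (rad/s); acc: accelerometer reading (any consistent unit)
    beta: gradient step size trading convergence speed against gyro noise
    """
    w, x, y, z = q
    ax, ay, az = acc / np.linalg.norm(acc)       # normalize gravity direction
    # objective: predicted gravity direction in sensor frame minus measured one
    f = np.array([2 * (x * z - w * y) - ax,
                  2 * (w * x + y * z) - ay,
                  2 * (0.5 - x * x - y * y) - az])
    J = np.array([[-2 * y,  2 * z, -2 * w, 2 * x],
                  [ 2 * x,  2 * w,  2 * z, 2 * y],
                  [ 0.0,   -4 * x, -4 * y, 0.0]])
    grad = J.T @ f                               # gradient of the objective
    n = np.linalg.norm(grad)
    if n > 0.0:
        grad = grad / n                          # normalized descent direction
    gx, gy, gz = gyro
    # quaternion rate from gyro, corrected by the gradient descent term
    q_dot = 0.5 * np.array([-x * gx - y * gy - z * gz,
                             w * gx + y * gz - z * gy,
                             w * gy - x * gz + z * gx,
                             w * gz + x * gy - y * gx]) - beta * grad
    q = q + q_dot * dt
    return q / np.linalg.norm(q)                 # re-normalize the quaternion
```

The single normalized gradient step per sample is what keeps the filter cheap enough for wearable hardware, and it is exactly this descent step that the paper's reformulation targets.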