
    The Effects Of Video Frame Delay And Spatial Ability On The Operation Of Multiple Semiautonomous And Tele-operated Robots

    The United States Army has moved into the 21st century with the intent of redesigning not only its force structure but also the methods by which we will fight and win our nation's wars. Fundamental to this restructuring is the development of the Future Combat Systems (FCS). In an effort to minimize the exposure of front-line soldiers, the future Army will utilize unmanned assets for both information gathering and, when necessary, engagement. Yet this must be done judiciously, as the bandwidth for net-centric warfare is limited. The implication is that the FCS must be designed to leverage bandwidth in a manner that does not overtax computational resources. In this study, alternatives for improving human performance during operation of teleoperated and semi-autonomous robots were examined. It was predicted that when operating both types of robots, frame delay of the semi-autonomous robot would improve performance because it would allow operators to concentrate on the constant workload imposed by the teleoperated robot while allocating resources to the semi-autonomous robot only during critical tasks. An additional prediction was that operators with high spatial ability would perform better than those with low spatial ability, especially when operating an aerial vehicle. The results cannot confirm that frame delay has a positive effect on operator performance, though statistical power may have been an issue, but they clearly show that spatial ability is a strong predictor of performance in robotic asset control, particularly with aerial vehicles. In operating the UAV, the high spatial group was, on average, 30% faster, lazed 12% more targets, and made 43% more location reports than the low spatial group. The implications of this study, namely that system design should judiciously manage workload and capitalize on individual ability to improve performance, are relevant to system designers, especially in the military community.

    Neural Dynamics of Delayed Feedback in Robot Teleoperation: Insights from fNIRS Analysis

    As robot teleoperation increasingly becomes integral to executing tasks in distant, hazardous, or inaccessible environments, the challenge of operational delays remains a significant obstacle. These delays are inherent in signal transmission and processing and can adversely affect the operator's performance, particularly in tasks requiring precision and timeliness. While current research has made strides in mitigating these delays through advanced control strategies and training methods, a crucial gap persists in understanding the neurofunctional impacts of these delays and the efficacy of countermeasures from a cognitive perspective. Our study narrows this gap by leveraging functional Near-Infrared Spectroscopy (fNIRS) to examine the neurofunctional implications of simulated haptic feedback on cognitive activity and motor coordination under delayed conditions. In a human-subject experiment (N=41), we manipulated sensory feedback to observe its influence on the response of various brain regions of interest (ROIs) during teleoperation tasks. The fNIRS data provided a detailed assessment of cerebral activity, particularly in ROIs implicated in time perception and the execution of precise movements. Our results reveal that certain conditions, which provided immediate simulated haptic feedback, significantly optimized neural functions related to time perception and motor coordination, and improved motor performance. These findings provide empirical evidence about the neurofunctional basis of the enhanced motor performance with simulated synthetic force feedback in the presence of teleoperation delays. (Submitted to Frontiers in Human Neuroscience)

    Towards Semi-Autonomous Robotic Arm Manipulation: Operator Intention Detection from Force Feedback

    In harsh environments such as those found in nuclear facilities, the use of robotic systems is crucial for performing tasks that would otherwise require human intervention. This is done to minimize the risk of human exposure to dangerous levels of radiation, which can have severe consequences for health and even be fatal. However, the telemanipulation systems employed in these environments are becoming increasingly intricate, relying heavily on sophisticated control methods and local master devices. Consequently, the cognitive burden on operators during labor-intensive tasks is growing. To tackle this challenge, operator intention detection based on task learning can greatly enhance the performance of robotic tasks while reducing the reliance on human effort in teleoperation, particularly in a glovebox environment. By accurately predicting the operator's intentions, the robot can carry out tasks more efficiently and effectively, with minimal input from the operator. In this regard, we propose the utilization of Convolutional Neural Networks, a machine learning approach, to learn and forecast the operator's intentions using raw force-feedback spatiotemporal data. Through our experimental study on glovebox tasks for nuclear applications, such as radiation survey and object grasping, we have achieved promising outcomes. Our approach holds the potential to enhance the safety and efficiency of robotic systems in harsh environments, thus diminishing the risk of human exposure to radiation while simultaneously improving the precision and speed of robotic operations.
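
    The abstract proposes learning operator intentions from raw force-feedback spatiotemporal data with a Convolutional Neural Network. As a rough illustration of that idea (not the authors' architecture), the sketch below classifies fixed-length windows of a 6-axis force/torque signal with a small 1D CNN in PyTorch; the window length, layer sizes, and the four intention classes are assumptions.

```python
# Minimal sketch of intention classification from force-feedback windows.
# Channel count, window length, layer sizes, and class count are illustrative.
import torch
import torch.nn as nn

class IntentionCNN(nn.Module):
    def __init__(self, n_channels=6, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis to one value per channel
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = IntentionCNN()
windows = torch.randn(8, 6, 256)       # a batch of 6-axis force/torque windows
logits = model(windows)                # per-window intention scores, shape (8, 4)
```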

    Autonomous robotic intracardiac catheter navigation using haptic vision

    While all minimally invasive procedures involve navigating from a small incision in the skin to the site of the intervention, it has not been previously demonstrated how this can be done autonomously. To show that autonomous navigation is possible, we investigated it in the hardest place to do it: inside the beating heart. We created a robotic catheter that can navigate through the blood-filled heart using wall-following algorithms inspired by positively thigmotactic animals. The catheter employs haptic vision, a hybrid sense using imaging for both touch-based surface identification and force sensing, to accomplish wall following inside the blood-filled heart. Through in vivo animal experiments, we demonstrate that the performance of an autonomously controlled robotic catheter rivals that of an experienced clinician. Autonomous navigation is a fundamental capability on which more sophisticated levels of autonomy can be built, e.g., to perform a procedure. Similar to the role of automation in fighter aircraft, such capabilities can free the clinician to focus on the most critical aspects of the procedure while providing precise and repeatable tool motions independent of operator experience and fatigue.
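
    To give a flavour of what contact-based wall following involves (the paper's controller, driven by haptic vision, is considerably more sophisticated), the following sketch regulates a light contact force against the wall while advancing along it. The sensor and actuator interfaces, thresholds, and gains are all hypothetical.

```python
# Minimal contact-based wall-following sketch. The haptic-vision sensor is
# modelled as a function returning an estimated contact force and a surface
# label; gains and the sense/move interfaces are illustrative assumptions.
import random

TARGET_FORCE = 0.5   # N, light wall contact to maintain
GAIN = 0.8           # proportional gain on force error (mm per N)
STEP = 1.0           # advance along the wall per control cycle (mm)

def wall_follow_step(sense, move):
    """One control cycle: regain or regulate wall contact, then advance."""
    force, surface = sense()
    if surface == "blood":                      # no wall in contact: move toward it
        move(toward_wall=STEP, along_wall=0.0)
    else:                                       # in contact: hold a light force
        error = TARGET_FORCE - force
        move(toward_wall=GAIN * error, along_wall=STEP)

if __name__ == "__main__":
    # Stub sensor and actuator so the sketch runs standalone.
    def sense():
        return random.uniform(0.0, 1.0), random.choice(["blood", "tissue"])
    def move(toward_wall, along_wall):
        print(f"move toward={toward_wall:+.2f} mm, along={along_wall:.2f} mm")
    for _ in range(5):
        wall_follow_step(sense, move)
```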

    Plan recognition for space telerobotics

    Current research on space telerobots has largely focused on two problem areas: executing remotely controlled actions (the tele part of telerobotics) or planning to execute them (the robot part). This work has mostly ignored one of the key aspects of telerobots: the interaction between the machine and its operator. For this interaction to be felicitous, the machine must successfully understand what the operator is trying to accomplish with particular remote-controlled actions. Only with an understanding of the operator's purpose for performing these actions can the robot intelligently assist the operator, perhaps by warning of possible errors or taking over part of the task. There is a need for such an understanding in the telerobotics domain, and an intelligent interface being developed in the chemical process design domain addresses the same issues.
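
    One simple way to picture plan recognition of the kind the abstract calls for is to score a library of candidate operator plans by how many of the observed remote-controlled actions each plan explains. The sketch below does exactly that; the plan library and action names are illustrative and not taken from the paper.

```python
# Minimal plan-recognition sketch: rank candidate plans by the fraction of
# observed operator actions each plan's expected steps explain.
PLAN_LIBRARY = {
    "replace_module": ["grasp_tool", "unbolt_panel", "extract_module"],
    "inspect_panel":  ["position_camera", "unbolt_panel", "scan_surface"],
}

def recognize(observed):
    """Return candidate plans ranked by how much of the observation they explain."""
    scores = {}
    for plan, steps in PLAN_LIBRARY.items():
        matched = sum(1 for action in observed if action in steps)
        scores[plan] = matched / len(observed) if observed else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(recognize(["grasp_tool", "unbolt_panel"]))
# [('replace_module', 1.0), ('inspect_panel', 0.5)]
```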

    A Framework of Hybrid Force/Motion Skills Learning for Robots

    Human factors and human-centred design philosophy are highly desired in today’s robotics applications such as human-robot interaction (HRI). Several studies showed that endowing robots with human-like interaction skills can not only make them more likeable but also improve their performance. In particular, skill transfer by imitation learning can increase the usability and acceptability of robots by users without computer programming skills. In fact, besides positional information, the muscle stiffness of the human arm and the contact force with the environment also play important roles in understanding and generating human-like manipulation behaviours for robots, e.g., in physical HRI and tele-operation. To this end, we present a novel robot learning framework based on Dynamic Movement Primitives (DMPs), taking into consideration both the positional and the contact force profiles for human-robot skill transfer. Distinguished from conventional methods involving only motion information, the proposed framework combines two sets of DMPs, which are built to model the motion trajectory and the force variation of the robot manipulator, respectively. Thus, a hybrid force/motion control approach is taken to ensure accurate tracking and reproduction of the desired positional and force motor skills. Meanwhile, in order to simplify the control system, a momentum-based force observer is applied to estimate the contact force instead of employing force sensors. To deploy the learned motion-force robot manipulation skills to a broader variety of tasks, the generalization of these DMP models to actual situations is also considered. Comparative experiments have been conducted using a Baxter robot to verify the effectiveness of the proposed learning framework in real-world scenarios like cleaning a table.
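
    For readers unfamiliar with DMPs, the sketch below rolls out a single discrete DMP transformation system, which converges from a start value to a goal; in a framework like the one described above, one set of such DMPs would encode the motion trajectory and a second set the contact-force profile. The gains and the zero forcing term (no learned shape) are simplifications, not the paper's implementation.

```python
# Minimal rollout of one discrete DMP transformation system (standard form).
# A learned forcing term would normally shape the trajectory; here it is zero.
import numpy as np

def dmp_rollout(y0, g, tau=1.0, dt=0.01, alpha_z=25.0, beta_z=6.25, alpha_x=3.0):
    y, v, x = y0, 0.0, 1.0                    # position, scaled velocity, phase
    traj = []
    for _ in range(int(tau / dt)):
        f = 0.0                               # learned forcing term goes here
        v += dt / tau * (alpha_z * (beta_z * (g - y) - v) + f)
        y += dt / tau * v
        x += dt / tau * (-alpha_x * x)        # canonical system (phase decay)
        traj.append(y)
    return np.array(traj)

# One DMP tracks a position coordinate; a second could track a desired force:
position_profile = dmp_rollout(y0=0.0, g=0.3)   # reach 0.3 m
force_profile    = dmp_rollout(y0=0.0, g=5.0)   # ramp to 5 N contact force
```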

    Cooperative robot and user friendly robot- new challenge in robotics

    In the near future, many aspects of our lives will involve tasks performed in cooperation with robots. The application of robots in home automation, agricultural production, medical operations, and similar domains will be indispensable. As a result, robots need to be made human-friendly and able to execute tasks in cooperation with humans. Researchers have proposed many new fields of research in robotics; cooperative robotics and user-friendly robotics are two of these new areas. Some researchers are trying to make human-like robots that imitate human characteristics in movement, learning, and so on. Others are trying to develop robots that entertain humans. Another group is trying to develop robots, and/or control systems for robots, that work cooperatively. This paper briefly gathers information regarding these two fields.

    Knowledge representation to enable high-level planning in cloth manipulation tasks

    Cloth manipulation is very relevant for domestic robotic tasks, but it presents many challenges due to the complexity of representing, recognizing and predicting the behaviour of cloth under manipulation. In this work, we propose a generic, compact and simplified representation of the states of cloth manipulation that allows tasks to be represented semantically as sequences of states and transitions. We also define a Cloth Manipulation Graph that encodes all the strategies to accomplish a task. Our novel representation is used to encode two different cloth manipulation tasks, learned from video data of an experiment with human subjects manipulating clothes. We show how our simplified representation allows us to obtain a map of meaningful steps that can serve to describe cloth manipulation tasks as domain models in PDDL, enabling high-level planning. Finally, we discuss the existing skills that could enable the sensorimotor grounding and the low-level execution of the plan.
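
    As an informal illustration of the idea of a Cloth Manipulation Graph (the paper's actual representation and PDDL encoding differ), the sketch below stores simplified cloth states as graph nodes, manipulation primitives as edges, and recovers a task as a path between states.

```python
# Minimal cloth-manipulation-graph sketch: states are simplified cloth
# configurations, edges are manipulation primitives, and a plan is a path.
# State and primitive names are illustrative, not the paper's encoding.
from collections import deque

GRAPH = {
    "crumpled":    [("flatten", "flat")],
    "flat":        [("fold_half", "half_folded"), ("grasp_corner", "hanging")],
    "half_folded": [("fold_half", "folded")],
    "hanging":     [("lay_down", "flat")],
}

def plan(start, goal):
    """Breadth-first search for a sequence of primitives from start to goal."""
    queue, visited = deque([(start, [])]), {start}
    while queue:
        state, steps = queue.popleft()
        if state == goal:
            return steps
        for action, nxt in GRAPH.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, steps + [action]))
    return None

print(plan("crumpled", "folded"))   # ['flatten', 'fold_half', 'fold_half']
```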