718 research outputs found
Interactive and cooperative sensing and control for advanced teleoperation
This paper presents the paradigm of interactive and cooperative sensing and control as a fundamental mechanism for integrating and fusing the strengths of man and machine in advanced teleoperation. Interactive and cooperative sensing and control is considered an extended and generalized form of traded and shared control. Emphasis is placed on the distribution of mutually nonexclusive subtasks to man and machine, the interactive invocation of subtasks under a man/machine symbiotic relationship, and the fusion of information and decision-making between man and machine according to their confidence measures. The proposed interactive and cooperative sensing and control system is composed of major functional blocks such as the logical sensor system, sensor-based local autonomy, virtual environment formation, and cooperative decision-making between man and machine. The Sensing-Knowledge-Command (SKC) fusion network is proposed as a fundamental architecture for implementing cooperative and interactive sensing and control. Simulation results are shown
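The abstract above does not give the fusion formula, but a minimal sketch of one plausible reading of "fusion of information and decision-making according to their confidence measures" is a confidence-weighted average of the human and machine estimates. The function name, the scalar setting, and the linear weighting scheme are all assumptions for illustration, not details taken from the paper:

```python
# Hypothetical sketch of confidence-weighted fusion between a human
# operator's estimate and a machine (sensor-based) estimate. The linear
# weighting law and all names here are illustrative assumptions.

def fuse(human_estimate: float, human_conf: float,
         machine_estimate: float, machine_conf: float) -> float:
    """Combine two scalar estimates, weighting each by its confidence."""
    total = human_conf + machine_conf
    if total <= 0.0:
        raise ValueError("at least one confidence must be positive")
    return (human_conf * human_estimate
            + machine_conf * machine_estimate) / total

# Example: the machine is twice as confident as the human, so the
# fused value lies closer to the machine's estimate.
fused = fuse(human_estimate=1.0, human_conf=0.3,
             machine_estimate=2.0, machine_conf=0.6)
```

Under this scheme, whichever side reports higher confidence dominates the fused decision, which matches the abstract's notion of interactive invocation driven by confidence measures.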
NASA space station automation: AI-based technology review
Research and Development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through reduced need for EVA, increase crew productivity through the reduction of routine operations, increase space station autonomy, and augment space station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures
Automation and robotics for the Space Exploration Initiative: Results from Project Outreach
A total of 52 submissions were received in the Automation and Robotics (A&R) area during Project Outreach. About half of the submissions (24) contained concepts that were judged to have high utility for the Space Exploration Initiative (SEI) and were analyzed further by the robotics panel. These 24 submissions are analyzed here. Three types of robots were proposed in the high-scoring submissions: structured task robots (STRs), teleoperated robots (TORs), and surface exploration robots. Several advanced TOR control-interface technologies were proposed in the submissions. Many A&R concepts or potential standards were presented or alluded to by the submitters, but few specific technologies or systems were suggested
Recent Advancements in Augmented Reality for Robotic Applications: A Survey
Robots are expanding from industrial applications to daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotic systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interaction and collaboration; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotics research, offering insights into the recent state of the art and prospects for improvement
Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery
State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in controlling and collaborating on manipulation task behaviors. This remains a significant challenge, given that many WIMP-style tools require superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has gathered momentum in recent years: robots now plan in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents that exploit this knowledge. However, as autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning towards recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques that allow non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six-degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries; (2) task specification with an unordered list of goal predicates; and (3) guiding task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction.
Empirical results from four user studies show that our interface was much preferred over the control condition, demonstrating high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control
Haptic-Guided Teleoperation of a 7-DoF Collaborative Robot Arm With an Identical Twin Master
In this article, we describe two techniques to enable haptic-guided teleoperation using 7-DoF cobot arms as master and slave devices. A shortcoming of using cobots as master-slave systems is the lack of force feedback at the master side. However, recent developments in cobot technologies have brought affordable, flexible, and safe torque-controlled robot arms that can be programmed to generate force feedback mimicking the operation of a haptic device. In this article, we use two Franka Emika Panda robot arms as a twin master-slave system to enable haptic-guided teleoperation. We propose a two-layer mechanism to implement force feedback due to 1) object interactions in the slave workspace, and 2) virtual forces, e.g., forces that repel from static obstacles in the remote environment or provide task-related guidance. We present two different approaches for force rendering and conduct an experimental study to evaluate their performance and usability in comparison to teleoperation without haptic guidance. Our results indicate that the proposed joint-torque coupling method for rendering task forces improves energy requirements during haptic-guided telemanipulation, providing realistic force feedback by accurately matching the slave torque readings at the master side
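The abstract's second force-feedback layer (virtual forces repelling from static obstacles) is commonly realized with an artificial-potential-field style law. The sketch below is one such law under assumed gains and an assumed influence radius; it is an illustration of the general technique, not the rendering method actually evaluated in the paper:

```python
import numpy as np

# Hypothetical sketch of a virtual repulsive "guidance" force from a
# static obstacle, in the style of artificial potential fields. The
# gain and influence radius are illustrative assumptions.

def repulsive_force(tool_pos: np.ndarray, obstacle_pos: np.ndarray,
                    influence_radius: float = 0.2,
                    gain: float = 5.0) -> np.ndarray:
    """Return a force (N) pushing the tool away from the obstacle.

    The force is zero outside the influence radius and grows linearly
    as the tool approaches the obstacle center.
    """
    offset = tool_pos - obstacle_pos
    dist = float(np.linalg.norm(offset))
    if dist >= influence_radius or dist == 0.0:
        return np.zeros(3)
    direction = offset / dist                    # unit vector away from obstacle
    magnitude = gain * (influence_radius - dist) # linear ramp toward the obstacle
    return magnitude * direction
```

In a master-slave setup like the one described, such a virtual force would be mapped to master-side joint torques (e.g., via the manipulator Jacobian transpose) so the operator feels the obstacle before the slave contacts it.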
Redesigning the human-robot interface : intuitive teleoperation of anthropomorphic robots
A novel interface for robotic teleoperation was developed to enable accurate and highly efficient teleoperation of the Industrial Reconfigurable Anthropomorphic Dual-arm (IRAD) system and other robotic systems. To achieve a revolutionary increase in operator productivity, the bilateral/master-slave approach must give way to shared autonomy and unilateral control; autonomy must be employed where possible, with appropriate sensory feedback only where autonomy is impossible; and today's low-information/high-feedback model must be replaced by one that emphasizes feedforward precision and minimal corrective feedback. This is emphasized for task spaces outside of the traditional anthropomorphic scale, such as mobile manipulation (i.e., large task spaces) and high-precision tasks (i.e., very small task spaces). The system is demonstrated using an anthropomorphically dimensioned industrial manipulator working in task spaces from one meter to less than one millimeter, in both simulation and hardware. This thesis discusses the design requirements and philosophy of this interface, and provides a summary of prototype teleoperation hardware, the simulation environment, test-bed hardware, and experimental results
Overview of some Command Modes for Human-Robot Interaction Systems
Interaction and command modes, as well as their combinations, are essential features of modern and future robotic systems interacting with human beings in various dynamic environments. This paper presents a synthetic overview of the main command modes used in Human-Robot Interaction Systems (HRIS). It covers the earliest command modes, namely tele-manipulation, off-line robot programming, and traditional elementary teaching by demonstration. It then introduces more recent command modes that have been fostered by the use of artificial intelligence techniques implemented on more powerful computers. In this context, we consider specifically the following modes: interactive programming based on graphical user interfaces, voice-based, pointing-on-image-based, gesture-based, and finally brain-based commands