2,249 research outputs found

    Robot Autonomy for Surgery

    Autonomous surgery involves surgical tasks being performed by a robot operating under its own control, with partial or no human involvement. Automation in surgery offers several important advantages, including increased precision of care through sub-millimeter robot control, real-time use of biosignals for interventional care, improved surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also enable interventions that are too difficult for, or beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

    Facilitating Programming of Vision-Equipped Robots through Robotic Skills and Projection Mapping


    On Sensor-Controlled Robotized One-off Manufacturing

    A semi-automatic, task-oriented system structure has been developed and tested on an arc-welding application. In conventional industrial robot programming, the path is created first and the process is then built around the chosen path. Here, a process-oriented method is proposed instead. It is natural to focus on the process, since the path is in reality a result of process needs. Another benefit of a process focus is that it naturally leads to task-oriented thinking; the task can in turn be split into sub-tasks, one for each part of the process with similar process characteristics. By carefully choosing and encapsulating the information needed to execute a sub-task, the resulting component can be re-used whenever that sub-task occurs. Because virtual sensors and generic interfaces to robots and sensors are used, applications built upon the system design do not change between simulation and actual shop-floor runs. The system allows a mix of real and simulated components during both simulation and run-time.
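
    A minimal sketch (not taken from the paper) of the generic-interface idea above: a sub-task is written against an abstract sensor interface, so identical code runs with a virtual (simulated) sensor during simulation and with a real sensor on the shop floor. All class, method and parameter names here are hypothetical.

```python
from abc import ABC, abstractmethod


class SeamSensor(ABC):
    """Generic interface to a seam-tracking sensor (real or virtual)."""

    @abstractmethod
    def read_offset(self) -> float:
        """Return the lateral offset (in mm) of the weld seam from the torch."""


class VirtualSeamSensor(SeamSensor):
    """Virtual sensor backed by a simple simulation model."""

    def __init__(self, simulated_offsets):
        self._offsets = iter(simulated_offsets)

    def read_offset(self) -> float:
        # Return the next simulated reading, or 0.0 when the model is exhausted.
        return next(self._offsets, 0.0)


class RealSeamSensor(SeamSensor):
    """Placeholder for a driver talking to the physical sensor."""

    def read_offset(self) -> float:
        raise NotImplementedError("connect the actual sensor driver here")


def weld_subtask(sensor: SeamSensor, nominal_path):
    """One encapsulated sub-task: correct a nominal path with sensor feedback."""
    corrected = []
    for x, y in nominal_path:
        corrected.append((x, y + sensor.read_offset()))
    return corrected


if __name__ == "__main__":
    # The same sub-task runs unchanged in simulation and on the shop floor;
    # only the injected sensor implementation differs.
    path = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)]
    print(weld_subtask(VirtualSeamSensor([0.1, -0.2, 0.05]), path))
```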

    Trust in Robots

    Robots are becoming increasingly prevalent in our daily lives, within our living and working spaces. We hope that robots will take over tedious, mundane or dirty chores and make our lives more comfortable, easy and enjoyable by providing companionship and care. However, robots may pose a threat to human privacy, safety and autonomy; it is therefore necessary to maintain constant control over the developing technology to ensure the benevolent intentions and safety of autonomous systems. Building trust in (autonomous) robotic systems is thus necessary. The title of this book highlights this challenge: “Trust in robots—Trusting robots”. Herein, various notions and research areas associated with robots are unified. The theme “Trust in robots” addresses the development of technology that is trustworthy for users; “Trusting robots” focuses on building a trusting relationship with robots, furthering previous research. These themes and topics are at the core of the PhD program “Trust Robots” at TU Wien, Austria.

    A survey of technologies supporting design of a multimodal interactive robot for military communication

    Purpose – This paper presents a survey of research into interactive robotic systems with the aim of identifying state-of-the-art capabilities as well as the remaining gaps in this emerging field. Communication is multimodal: multimodality is a representation of many modes, chosen for their rhetorical and communicative potential. The author seeks to define the available automation capabilities in multimodal communication that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform intended to advance the speed and quality of military operational and tactical decision making.

    Design/methodology/approach – The review begins by presenting key developments in the robotic interaction field, with the objective of identifying the essential technological developments that set the conditions for robotic platforms to function autonomously. After surveying the key aspects of Human Robot Interaction (HRI), Unmanned Autonomous Systems (UAS), visualization, Virtual Environments (VE) and prediction, the paper describes the gaps in these application areas that will require extension and integration to enable prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set the conditions for future success.

    Findings – Using insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A Multimodal IRS (MIRS) for military communication has yet to be deployed.

    Research limitations/implications – A multimodal robotic interface for the MIRS is an interdisciplinary endeavour; it is not realistic for one person to command all of the expert knowledge and skills needed to design and develop such an interface. In this brief preliminary survey, the author discusses extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research.

    Practical implications – A multimodal autonomous robot for military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human–machine teaming (HMT) exposure. Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible communications capability that readily adapts to virtual training will greatly enhance planning and mission rehearsals.

    Social implications – A multimodal communication system based on interaction, perception, cognition and visualization is still missing. Options to communicate, express and convey information in an HMT setting, with multiple options, suggestions and recommendations, will enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.

    Originality/value – The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done, and the possibilities for extension that would support the military in maintaining effective communication using multiple modalities. There is separate ongoing progress in areas such as machine-enabled speech, image recognition, tracking, visualization for situational awareness, and virtual environments, but there is currently no integrated approach to multimodal human–robot interaction that offers flexible and agile communication. The report briefly introduces the research proposal for a multimodal interactive robot for military communication.

    The Sixth Annual Workshop on Space Operations Applications and Research (SOAR 1992)

    This document contains papers presented at the Space Operations, Applications, and Research Symposium (SOAR) hosted by the U.S. Air Force (USAF) on 4-6 Aug. 1992 and held at the JSC Gilruth Recreation Center. The symposium was cosponsored by the Air Force Materiel Command and by NASA/JSC. Key technical areas covered during the symposium were robotics and telepresence, automation and intelligent systems, human factors, life sciences, and space maintenance and servicing. SOAR differed from most other conferences in that it was concerned with government-sponsored research and development relevant to aerospace operations. The symposium's proceedings include papers covering various disciplines, presented by experts from NASA, the USAF, universities, and industry.

    Integrated Task and Motion Planning of Multi-Robot Manipulators in Industrial and Service Automation

    Efficient coordination of several robot arms carrying out independent or cooperative tasks in a common workspace, while avoiding collisions, is an appealing research problem that has been studied in different robotic fields, with industrial and service applications. Coordinating several robot arms in a shared environment is challenging because the complexity of collision-free path planning increases with the number of robots sharing the same workspace. Although research into different aspects of this problem, such as task planning, motion planning and robot control, has made great progress, the integration of these components is not well studied in the literature. This thesis focuses on integrating task and motion planning for multi-robot-arm systems by introducing a practical and optimal interface layer for such systems. For a given set of specifications and a sequence of tasks for a multi-arm system, the studied system design aims to automatically construct the necessary waypoints, the sequence of arms to be operated, and the algorithms required for the robots to reliably execute manipulation tasks. The contributions of the thesis are three-fold. First, an algorithm is introduced to integrate the task and motion planning layers in order to achieve optimal, collision-free task execution. A representation via a shared space graph (SSG) is introduced to check whether two arms share certain parts of the workspace and to quantify the cooperation of such arm pairs, which is essential in selecting the arm sequence and scheduling each arm in the sequence to perform a task or sub-task. The introduced algorithm allows robots to autonomously reason about a structured environment, performs the sequence planning of the robots to be operated, and provides robot and object paths for each task so that a set of goals can be achieved. Secondly, an integrated motion and task planning methodology is introduced for systems of multiple mobile and fixed-base robot arms performing different tasks simultaneously in a shared workspace. We introduce the concept of a dynamic shared space graph (D-SSG) to continuously check whether two arms share certain parts of the workspace at different time steps and to quantify the cooperation of such arm pairs, which is essential to selecting arm sequences and scheduling each arm in the sequence to perform a task or sub-task. The introduced algorithm allows robots to autonomously reason about complex environments involving humans, plans the high-level decisions (sequence planning) of the robots to be operated, and calculates robot and object paths for each task so that a set of goals can be achieved. The third contribution is the design of an integration algorithm between the low-level motion planning and high-level symbolic task planning layers that produces alternative plans in case of kinematic and geometric changes in the environment, preventing failure of the high-level task plan. To verify the methodological contributions of the thesis on a solid implementation basis, implementations and tests are presented in the open-source robotics planning environments ROS, MoveIt and Gazebo. A detailed analysis of these implementations and the test results is provided as well.
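
    As an illustration of the shared-space-graph idea, the following is a minimal sketch under simplifying assumptions, not the thesis implementation: arms are nodes, and an edge weight estimates how much workspace two arms share, here approximated by the overlap of spherical reach volumes. The function names, the overlap metric and the scheduling interpretation are illustrative assumptions.

```python
import itertools
import math


def reach_overlap(base_a, radius_a, base_b, radius_b):
    """Crude shared-workspace measure: overlap depth of two spherical reach volumes."""
    distance = math.dist(base_a, base_b)
    return max(0.0, (radius_a + radius_b) - distance)


def build_ssg(arms):
    """Build a shared space graph: arms are nodes, edges carry an overlap weight.

    arms: dict mapping arm name -> (base_position, reach_radius).
    Returns a dict mapping (name_a, name_b) -> overlap weight; a positive weight
    means the two arms may occupy common workspace and need coordinated scheduling.
    """
    edges = {}
    for (a, (pos_a, r_a)), (b, (pos_b, r_b)) in itertools.combinations(arms.items(), 2):
        weight = reach_overlap(pos_a, r_a, pos_b, r_b)
        if weight > 0.0:
            edges[(a, b)] = weight
    return edges


if __name__ == "__main__":
    arms = {
        "arm1": ((0.0, 0.0, 0.0), 0.9),
        "arm2": ((1.2, 0.0, 0.0), 0.9),
        "arm3": ((5.0, 0.0, 0.0), 0.9),  # far away: shares no workspace
    }
    # Only the arm1/arm2 pair gets an edge, so only that pair needs
    # coordinated sequencing and scheduling.
    print(build_ssg(arms))
```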

    Proceedings of the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    This book is a collection of 15 reviewed technical reports summarizing the presentations at the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory. The covered topics include image processing, optical signal processing, visual inspection, pattern recognition and classification, human-machine interaction, world and situation modeling, autonomous system localization and mapping, information fusion, and trust propagation in sensor networks.