
    Assistive Planning in Complex, Dynamic Environments: a Probabilistic Approach

    We explore the probabilistic foundations of shared control in complex, dynamic environments. To do this, we formulate shared control as a random process and describe the joint distribution that governs its behavior. For tractability, we model the relationships between the operator, the autonomy, and the crowd as an undirected graphical model. Further, we introduce an interaction function between the operator and the robot, which we call "agreeability"; in combination with the methods developed in~\cite{trautman-ijrr-2015}, we extend a cooperative collision avoidance autonomy to shared control. We thereby quantify the notion of simultaneously optimizing over agreeability (between the operator and the autonomy), safety, and efficiency in crowded environments. We show that, for a particular form of interaction function between the autonomy and the operator, linear blending is recovered exactly. To recover linear blending, however, unimodal restrictions must be placed on the models describing the operator and the autonomy; these restrictions raise questions about the flexibility and applicability of the linear blending framework. Additionally, we present an extension of linear blending called "operator biased linear trajectory blending" (which formalizes some recent approaches in linear blending such as~\cite{dragan-ijrr-2013}) and show that not only is it a restrictive special case of our probabilistic approach but, more importantly, that it is statistically unsound and thus mathematically unsuitable for implementation. Instead, we suggest a statistically principled approach that guarantees data is used in a consistent manner, and show how this alternative approach converges to the full probabilistic framework. We conclude by proving that, in general, linear blending is suboptimal with respect to the joint metric of agreeability, safety, and efficiency.
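    As a rough illustration of the linear blending baseline discussed in this abstract, the sketch below (a hypothetical Python example, not the authors' implementation) blends an operator command and an autonomy command with a fixed arbitration weight, and contrasts it with a simple product-of-Gaussians fusion whose mean reduces to linear blending only when both command models are unimodal Gaussians; the variable names and weighting scheme are illustrative assumptions.

```python
import numpy as np

def linear_blend(u_operator, u_autonomy, alpha=0.5):
    """Classic linear blending: a fixed convex combination of the two commands."""
    return alpha * np.asarray(u_operator) + (1.0 - alpha) * np.asarray(u_autonomy)

def gaussian_fusion(mu_op, var_op, mu_aut, var_aut):
    """Product of two unimodal Gaussian command models.

    The fused mean is a precision-weighted average, i.e. linear blending with
    alpha = var_aut / (var_op + var_aut); multimodal models admit no such reduction.
    """
    w = var_aut / (var_op + var_aut)
    mu = w * mu_op + (1.0 - w) * mu_aut
    var = (var_op * var_aut) / (var_op + var_aut)
    return mu, var

if __name__ == "__main__":
    print(linear_blend([1.0, 0.0], [0.0, 1.0], alpha=0.7))
    print(gaussian_fusion(1.0, 0.2, 0.0, 0.4))
```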

    Comparison of Semi-autonomous Mobile Robot Control Strategies in Presence of Large Delay Fluctuation

    We propose semi-autonomous control strategies to assist in the teleoperation of mobile robots under unstable communication conditions. A short-term autonomous control system provides the assistance in these strategies when the teleoperation is compromised. The short-term autonomous control comprises lateral and longitudinal functions. The lateral control is based on an artificial potential field method in which obstacles are repulsive and a route is attractive. LiDAR-based artificial potential field methods are well studied; here we present a novel artificial potential field method based on color and depth images. The benefits of a camera system compared to a LiDAR are that a camera detects color, is cheaper, and has no moving parts. Moreover, the use of active sensors is not desired in the particle accelerator environment. A set of experiments with a robot prototype is carried out to validate the system in an environment that mimics the accelerator tunnel, with the difficulty of the teleoperation varied through obstacles. Fully manual and autonomous control are compared with the proposed semi-autonomous control strategies. The results show that teleoperation is improved by autonomous, delay-dependent, and control-dependent assist compared to fully manual control. Based on operation time, control-dependent assist performed best, reducing the time by 12% on the tunnel section with the most obstacles. The presented system can be easily applied to common industrial robots operating, e.g., in warehouses or factories, thanks to its hardware simplicity and light computational demand.
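    A minimal sketch of how a repulsive steering command could be derived from a depth image, assuming a simple mapping from pixel columns to bearing angles; this is an illustrative reconstruction, not the authors' implementation, and all parameter names and gains are hypothetical.

```python
import numpy as np

def repulsive_command(depth_row, fov_rad=1.5, d_influence=2.0, gain=0.5):
    """Compute a lateral repulsive velocity from one row of a depth image.

    Each pixel column is treated as a range measurement along a bearing angle.
    Obstacles closer than d_influence push the robot away, in the spirit of an
    artificial potential field.
    """
    depth_row = np.asarray(depth_row, dtype=float)
    n = depth_row.size
    bearings = np.linspace(-fov_rad / 2, fov_rad / 2, n)
    # APF-style magnitude: grows as the obstacle gets closer than d_influence.
    close = depth_row < d_influence
    magnitude = np.zeros(n)
    magnitude[close] = gain * (1.0 / depth_row[close] - 1.0 / d_influence)
    # Push laterally away from the bearing of each near obstacle.
    lateral = -np.sum(magnitude * np.sin(bearings))
    return lateral

if __name__ == "__main__":
    row = np.full(64, 5.0)
    row[40:48] = 0.8  # obstacle slightly to the right of center
    print(repulsive_command(row))  # negative -> steer left, away from it
```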

    Skill-based Shared Control


    Gesture Recognition and Control for Semi-Autonomous Robotic Assistant Surgeons

    The next stage of robotics development is to introduce autonomy and cooperation with human agents in tasks that require high levels of precision and/or impose considerable physical strain. To guarantee the highest possible safety standards, the best approach is to devise a deterministic automaton that performs identically for each operation. Clearly, such an approach inevitably fails to adapt to changing environments or different human companions. In a surgical scenario, the greatest variability lies in the timing of the different actions performed within the same phase. This thesis explores the solutions adopted in pursuing automation in robotic minimally-invasive surgery (R-MIS) and presents a novel cognitive control architecture. The architecture uses a multi-modal neural network, trained on a cooperative task performed by human surgeons, to produce an action segmentation that provides the required timing for actions, while full phase-execution control is maintained by a deterministic Supervisory Controller and execution safety by a velocity-constrained Model Predictive Controller.
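    To make the division of labor concrete, here is a toy Python sketch (an assumption-laden illustration, not the thesis architecture) in which a deterministic supervisory state machine advances through fixed phases only when a hypothetical segmentation module signals that the current action has finished, and every commanded velocity is clamped to a safety limit before execution.

```python
from dataclasses import dataclass

@dataclass
class Supervisor:
    """Deterministic phase supervisor: the phase order never changes,
    only the timing, which is supplied by an action-segmentation signal."""
    phases: tuple = ("approach", "grasp", "retract")
    index: int = 0

    def on_segmentation(self, action_done: bool) -> str:
        # Advance to the next phase only when the current action is reported done.
        if action_done and self.index < len(self.phases) - 1:
            self.index += 1
        return self.phases[self.index]

def clamp_velocity(v_cmd: float, v_max: float = 0.05) -> float:
    """Velocity constraint applied to every command (stand-in for the MPC bound)."""
    return max(-v_max, min(v_max, v_cmd))

if __name__ == "__main__":
    sup = Supervisor()
    for done in (False, True, False, True):
        print(sup.on_segmentation(done), clamp_velocity(0.2))
```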

    Modeling and Improving Teleoperation Performance of Semi-Autonomous Wheeled Robots

    Robotics and unmanned vehicles have allowed us to interact with environments in ways that were impossible decades ago. As perception, decision making, and control improve, it becomes possible to automate more parts of robot operation. However, humans will remain a critical part of robot control for preference, ethical, and technical reasons. An ongoing question is when and how to pair humans and automation to create semi-autonomous systems. The answer depends on numerous factors such as the robot's task, platform, environmental conditions, and the user. The work in this dissertation focuses on modeling the impact of these factors on performance and on developing improved semi-autonomous control schemes, so that robot systems can be better designed. Experiments and analysis focus on wheeled robots; however, the approach taken and many of the trends could be applied to a variety of platforms. Wheeled robots are often teleoperated over wireless communication networks. While this arrangement may be convenient, it introduces many challenges, including time-varying delays and poor perception of the robot's environment, that can lead to the robot colliding with objects or rolling over. With regard to semi-autonomous control, rollover prevention and obstacle avoidance behaviors are considered. In this area, two contributions are presented. The first is a rollover prevention method that uses an existing manipulator arm on board a wheeled robot. The second is a method of approximating convex obstacle-free regions for use in optimal control path planning problems. Teleoperation conditions, including communication delays, automation, and environment layout, are considered in modeling robot operation performance. From these considerations stem three contributions. The first is a method of relating driving performance among different communication delay distributions. The second parameterizes how driving through different arrangements of obstacles relates to performance. Lastly, based on user studies, teleoperation performance is related to different conditions of communication delay, automation level, and environment arrangement. The contributions of this dissertation will assist roboticists in implementing better automation and in understanding when to use automation.
    PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/136951/1/jgstorms_1.pd
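    As a loose illustration of approximating a convex obstacle-free region (a generic separating-hyperplane construction under assumed 2D point obstacles, not necessarily the method developed in the dissertation), the sketch below builds one half-space per obstacle that keeps a seed point on the free side; the intersection of those half-spaces is a convex region containing no obstacle points and can feed linear constraints to an optimal-control path planner.

```python
import numpy as np

def convex_free_region(seed, obstacles):
    """Approximate a convex obstacle-free region around `seed`.

    For each obstacle point p, place a hyperplane through the midpoint of
    (seed, p) with normal pointing from seed to p. Every obstacle lies outside
    its own hyperplane, so the intersection of the half-spaces {x : a.x <= b}
    is convex, contains the seed, and excludes all obstacle points.
    Returns (A, b) with one row per obstacle.
    """
    seed = np.asarray(seed, dtype=float)
    A, b = [], []
    for p in np.asarray(obstacles, dtype=float):
        normal = p - seed
        midpoint = seed + 0.5 * normal
        A.append(normal)
        b.append(normal @ midpoint)
    return np.array(A), np.array(b)

def inside(x, A, b):
    return bool(np.all(A @ np.asarray(x, dtype=float) <= b))

if __name__ == "__main__":
    A, b = convex_free_region([0.0, 0.0], [[2.0, 0.0], [0.0, 3.0], [-1.5, -1.5]])
    print(inside([0.0, 0.0], A, b))   # True: the seed is in the region
    print(inside([2.0, 0.0], A, b))   # False: the obstacle itself is excluded
```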

    Virtual and Mixed Reality in Telerobotics: A Survey


    Using haptic feedback in human swarm interaction

    A swarm of robots is a large group of individual agents that coordinate autonomously via local control laws. Their emergent behavior allows simple robots to accomplish complex tasks. Since missions may have complex objectives that change dynamically due to environmental and mission changes, human control of and influence over the swarm are needed. The field of Human Swarm Interaction (HSI) is young, with few user studies and even fewer papers focusing on giving non-visual feedback to the operator. The authors herein present a background on haptics in robotics and swarms, and two studies that explore the conditions under which haptic feedback may be useful in HSI. The overall goal of the studies is to evaluate the effectiveness of haptic feedback in the presence of other visual stimuli about the swarm system. The findings show that giving feedback about nearby obstacles through a haptic device can improve performance, and that a combination of obstacle-force feedback via the visual and haptic channels provides the best performance.
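    To give a concrete flavor of obstacle-based haptic feedback (a hedged, generic sketch with invented parameters, not the studies' actual rendering code), the snippet below sums spring-like repulsive forces from obstacles within a sensing radius of the swarm centroid and caps the result at an assumed maximum force for the haptic device.

```python
import numpy as np

def haptic_force(swarm_centroid, obstacles, influence=1.5, stiffness=2.0, f_max=3.3):
    """Render a repulsive force on the operator's haptic device.

    Obstacles within `influence` of the swarm centroid contribute a force that
    grows linearly as they get closer; the total is saturated at f_max (N),
    an assumed device limit.
    """
    c = np.asarray(swarm_centroid, dtype=float)
    force = np.zeros_like(c)
    for p in np.asarray(obstacles, dtype=float):
        offset = c - p
        dist = np.linalg.norm(offset)
        if 1e-9 < dist < influence:
            force += stiffness * (influence - dist) * (offset / dist)
    norm = np.linalg.norm(force)
    if norm > f_max:
        force *= f_max / norm  # saturate at the device's force limit
    return force

if __name__ == "__main__":
    print(haptic_force([0.0, 0.0], [[1.0, 0.0], [0.0, -0.5]]))
```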

    Human Management of the Hierarchical System for the Control of Multiple Mobile Robots

    To take advantage of autonomous robotic systems and yet ensure successful completion of all feasible tasks, we propose a mediation hierarchy in which an operator can interact at all system levels. Robotic systems are not robust in handling unmodeled events. Reactive behaviors may be able to guide the robot back into a modeled state so that it can continue, but reasoning systems may simply fail. Once a system has failed, it is difficult to restart the task from the failed state; rather, the rule base is revised, programs are altered, and the task is retried from the beginning.

    Application of Simultaneous Localization and Mapping Algorithms for Haptic Teleoperation of Aerial Vehicles

    In this thesis, a new type of haptic teleoperator system for the remote control of Unmanned Aerial Vehicles (UAVs) has been developed, in which Simultaneous Localization and Mapping (SLAM) algorithms are used to generate the haptic feedback. Specifically, the haptic feedback is provided to the human operator through interaction with an artificial potential field built around the obstacles in a virtual environment located at the master site of the teleoperator system. The obstacles in the virtual environment replicate essential features of the actual remote environment in which the UAV executes its tasks. The state of the virtual environment is generated and updated in real time using Extended Kalman Filter (EKF) SLAM algorithms based on measurements performed by the UAV in the actual remote environment. Two methods for building haptic feedback from SLAM algorithms have been developed. The basic SLAM-based haptic feedback algorithm uses a fixed-size potential field around the obstacles, while the robust SLAM-based haptic feedback algorithm changes the size of the potential field around an obstacle depending on the amount of uncertainty in the obstacle's location, as represented by the covariance estimate provided by the EKF. Simulation and experimental results are presented that evaluate the performance of the proposed teleoperator system.
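    A rough Python sketch of the covariance-scaled idea (hypothetical names and gains; an illustration of inflating a potential field by an EKF covariance, not the thesis code): the repulsive radius around an obstacle grows with the standard deviation of its estimated position, so more uncertain obstacles push back from farther away.

```python
import numpy as np

def repulsive_force(uav_pos, obstacle_mean, obstacle_cov,
                    base_radius=1.0, k_sigma=2.0, gain=1.0):
    """Repulsive force from one mapped obstacle.

    The influence radius is the fixed base_radius inflated by k_sigma times the
    largest standard deviation of the EKF position estimate, so poorly localized
    obstacles generate feedback at a greater distance.
    """
    uav_pos = np.asarray(uav_pos, dtype=float)
    mean = np.asarray(obstacle_mean, dtype=float)
    sigma = float(np.sqrt(np.max(np.linalg.eigvalsh(np.asarray(obstacle_cov)))))
    radius = base_radius + k_sigma * sigma
    offset = uav_pos - mean
    dist = np.linalg.norm(offset)
    if dist >= radius or dist < 1e-9:
        return np.zeros_like(uav_pos)
    return gain * (radius - dist) / radius * (offset / dist)

if __name__ == "__main__":
    cov_tight = np.diag([0.01, 0.01])
    cov_loose = np.diag([0.5, 0.5])
    print(repulsive_force([1.5, 0.0], [0.0, 0.0], cov_tight))  # outside inflated radius -> zero
    print(repulsive_force([1.5, 0.0], [0.0, 0.0], cov_loose))  # inside -> nonzero force
```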

    On-board Obstacle Avoidance in the Teleoperation of Unmanned Aerial Vehicles

    The teleoperation of unmanned aerial vehicles (UAVs), especially in cramped, GPS-restricted environments, poses many challenges. The presence of obstacles in an unfamiliar environment requires reliable state estimation and active collision avoidance algorithms. In this dissertation, we present a collision-free indoor navigation system for a teleoperated quadrotor UAV. The platform is equipped with an on-board miniature computer and a minimal set of sensors, and is self-sufficient with respect to external tracking systems and off-board computation. It is capable of highly accurate state estimation, tracking of the velocity commanded by the user, and collision-free navigation. The robot estimates its state in a cascade architecture: the attitude of the platform is calculated with a complementary filter and its linear velocity through a Kalman filter fusion of inertial and optical flow measurements. An RGB-D camera provides visual feedback to the operator as well as depth measurements used to build a probabilistic, robot-centric obstacle state with a bin-occupancy filter. The algorithm keeps tracking obstacles after they leave the field of view of the sensor by updating their positions with the estimate of the robot's motion. The avoidance part of our navigation system is based on the Model Predictive Control approach: by predicting possible future obstacle states, the system filters the operator's commands, altering them to prevent collisions. Experiments in obstacle-rich indoor and outdoor environments validate the efficiency of the proposed setup.

    Flying robots are highly prone to damage in the case of control errors, as these will most likely cause them to crash into the ground or into obstacles. The development of algorithms for UAVs therefore entails a considerable amount of time and resources. In this dissertation, we present two simulation methods, software-in-the-loop and hardware-in-the-loop simulation, to facilitate this process. Software-in-the-loop testing was used to develop and tune the state estimator for our robot using both simulated sensors and pre-recorded datasets of sensor measurements, e.g., from real robot experiments. With hardware-in-the-loop simulation, we are able to command the robot simulated in Gazebo, a popular open-source ROS-enabled physics simulator, using the computational units that are embedded on our quadrotor UAVs. Hence, we can test in simulation not only the correct execution of the algorithms, but also their computational feasibility directly on the robot's hardware.

    Lastly, we analyze the influence of the robot's motion on the visual feedback provided to the operator. While some UAVs can carry mechanically stabilized camera equipment, weight limits or other constraints may make mechanical stabilization impractical. With a fixed camera, the video stream is often unsteady due to the multirotor's movement and can impair the operator's situation awareness. There has been significant research on stabilizing video by using feature tracking to estimate the camera motion, which is then used to manipulate the frames and stabilize the stream. We show that this process can be greatly simplified by using data from the UAV's on-board inertial measurement unit to stabilize the camera feed. Our results show that our algorithm successfully stabilizes the camera stream with the added benefit of requiring less computational power. We also propose a novel quadrotor design concept that decouples the orientation of the platform from its lateral motion. In our design, the tilt angles of the propellers with respect to the quadrotor body are controlled simultaneously by two additional actuators employing the parallelogram principle. After deriving the dynamic model of this design, we propose a controller for the platform based on feedback linearization. Simulation results confirm our theoretical findings, highlighting the improved motion capabilities of this novel design with respect to standard quadrotors.
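    As a simplified illustration of IMU-based stabilization (a hypothetical sketch with small-angle assumptions and an assumed pinhole focal length, making no claim to match the thesis pipeline), the snippet below builds a 2x3 affine transform that counter-rotates the image by the measured roll and shifts it by a pitch-proportional amount; the matrix could then be applied to each frame with any image-warping routine.

```python
import numpy as np

def stabilization_transform(roll_rad, pitch_rad, frame_w=640, frame_h=480, focal_px=500.0):
    """2x3 affine transform compensating roll and (small) pitch motion.

    Roll tilts the horizon, so we rotate the frame by -roll about its center.
    Small pitch changes mostly shift the image vertically, here modeled as a
    translation of roughly focal_px * pitch pixels (sign convention assumed).
    """
    cx, cy = frame_w / 2.0, frame_h / 2.0
    c, s = np.cos(-roll_rad), np.sin(-roll_rad)
    # Rotation about the image center, plus a pitch-proportional vertical shift.
    tx = cx - c * cx + s * cy
    ty = cy - s * cx - c * cy + focal_px * pitch_rad
    return np.array([[c, -s, tx],
                     [s,  c, ty]])

def warp_point(M, x, y):
    """Apply the affine transform to a single pixel coordinate."""
    return M @ np.array([x, y, 1.0])

if __name__ == "__main__":
    M = stabilization_transform(roll_rad=0.05, pitch_rad=-0.02)
    print(M)
    print(warp_point(M, 320, 240))  # the image center moves only by the pitch shift
```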