On-board Obstacle Avoidance in the Teleoperation of Unmanned Aerial Vehicles
The teleoperation of unmanned aerial vehicles (UAVs), especially in cramped, GPS-restricted environments, poses many challenges. The presence of obstacles in an unfamiliar environment requires reliable state estimation and active algorithms to prevent collisions.
In this dissertation, we present a collision-free indoor navigation system for a teleoperated quadrotor UAV. The platform is equipped with an on-board miniature computer and a minimal set of sensors for this task, and is self-sufficient with respect to external tracking systems and computation. The platform is capable of highly accurate state estimation, tracking of the velocity commanded by the user, and collision-free navigation. The robot estimates its state in a cascade architecture: the attitude of the platform is calculated with a complementary filter, and its linear velocity through a Kalman filter that fuses inertial and optical flow measurements.
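The cascade described above, a complementary filter for attitude and a Kalman filter fusing IMU accelerations with optical-flow velocities, can be sketched in simplified scalar form. The gains and noise variances below are illustrative values, not taken from the dissertation:

```python
import math

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend fast-but-drifting gyro integration with noisy-but-unbiased
    accelerometer tilt to estimate one attitude angle (radians)."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

class VelocityKF:
    """Scalar Kalman filter: predict with IMU acceleration, correct with
    an optical-flow velocity measurement."""
    def __init__(self, q=0.05, r=0.2):
        self.v, self.p = 0.0, 1.0   # state (m/s) and its variance
        self.q, self.r = q, r       # process / measurement noise variances

    def predict(self, accel, dt):
        self.v += accel * dt
        self.p += self.q

    def correct(self, flow_velocity):
        k = self.p / (self.p + self.r)      # Kalman gain
        self.v += k * (flow_velocity - self.v)
        self.p *= (1.0 - k)
        return self.v
```

A full implementation would run the filters per axis and feed the attitude estimate into the velocity prediction, which is what the cascade structure refers to.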
An RGB-D camera serves the dual purpose of providing visual feedback to the operator and depth measurements to build a probabilistic, robot-centric obstacle state with a bin-occupancy filter. The algorithm tracks obstacles even after they leave the sensor's field of view by updating their positions with the estimate of the robot's motion. The avoidance part of our navigation system is based on the Model Predictive Control approach. By predicting possible future obstacle states, the UAV filters the operator's commands, altering them to prevent collisions. Experiments in obstacle-rich indoor and outdoor environments validate the efficiency of the proposed setup.
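The command-filtering idea can be illustrated with a minimal rollout-based check: predict the trajectory under the commanded velocity and scale the command down until no predicted state violates an obstacle's safety radius. This is a simplified sketch of the MPC-style filtering, not the dissertation's controller, and all thresholds are assumed values:

```python
def filter_command(pos, cmd_vel, obstacles, dt=0.1, horizon=10, safety=0.5):
    """Scale the operator's commanded velocity so that the predicted
    trajectory never enters an obstacle's safety radius.
    pos, cmd_vel: (x, y) tuples; obstacles: list of (x, y) centers."""
    scale = 1.0
    while scale > 0.0:
        x, y = pos
        vx, vy = cmd_vel[0] * scale, cmd_vel[1] * scale
        safe = True
        for _ in range(horizon):          # constant-velocity rollout
            x, y = x + vx * dt, y + vy * dt
            for ox, oy in obstacles:
                if (x - ox) ** 2 + (y - oy) ** 2 < safety ** 2:
                    safe = False
                    break
            if not safe:
                break
        if safe:
            return (vx, vy)
        scale -= 0.1                      # back off and retry
    return (0.0, 0.0)                     # stop if no safe scaling exists
```

A real MPC formulation would optimize over the whole input sequence and use the bin-occupancy filter's probabilistic obstacle predictions rather than fixed centers.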
Flying robots are highly prone to damage in the case of control errors, as these will most likely cause them to fall to the ground. Therefore, the development of algorithms for UAVs entails a considerable amount of time and resources. In this dissertation, we present two simulation methods, i.e., software- and hardware-in-the-loop simulations, to facilitate this process. Software-in-the-loop testing was used for the development and tuning of the state estimator for our robot, using both simulated sensors and pre-recorded datasets of sensor measurements, e.g., from real robotic experiments. With hardware-in-the-loop simulations, we are able to command the robot simulated in Gazebo, a popular open-source ROS-enabled physical simulator, using the computational units that are embedded on our quadrotor UAVs. Hence, we can test in simulation not only the correct execution of algorithms, but also their computational feasibility directly on the robot's hardware.
Lastly, we analyze the influence of the robot's motion on the visual feedback provided to the operator. While some UAVs have the capacity to carry mechanically stabilized camera equipment, weight limits or other constraints may make mechanical stabilization impractical. With a fixed camera, the video stream is often unsteady due to the multirotor's movement and can impair the operator's situation awareness. There has been significant research on stabilizing videos using feature tracking to determine the camera movement, which in turn is used to manipulate frames and stabilize the camera stream. However, we believe that this process can be greatly simplified by using data from the UAV's on-board inertial measurement unit to stabilize the camera feed. Our results show that our algorithm successfully stabilizes the camera stream, with the added benefit of requiring less computational power. We also propose a novel quadrotor design concept that decouples the quadrotor's orientation from its lateral motion. In our design, the tilt angles of the propellers with respect to the quadrotor body are simultaneously controlled with two additional actuators by employing the parallelogram principle. After deriving the dynamic model of this design, we propose a controller for this platform based on feedback linearization. Simulation results confirm our theoretical findings, highlighting the improved motion capabilities of this novel design with respect to standard quadrotors.
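A minimal sketch of IMU-based stabilization: build a 2-D warp that counter-rotates the frame by the camera roll and shifts it by the small-angle pixel offsets induced by pitch and yaw. The pinhole intrinsics here are hypothetical example values, not the dissertation's calibration:

```python
import math

def stabilizing_warp(roll, pitch, yaw, fx=320.0, fy=320.0, cx=320.0, cy=240.0):
    """Build a 2x3 affine warp that counter-rotates the image by the
    camera roll and translates it by the pixel offsets induced by
    pitch and yaw (small-angle pinhole approximation)."""
    c, s = math.cos(-roll), math.sin(-roll)
    tx = -fx * math.tan(yaw)    # yaw shifts the image horizontally
    ty = fy * math.tan(pitch)   # pitch shifts it vertically
    # rotate about the principal point (cx, cy), then translate
    return [
        [c, -s, (1 - c) * cx + s * cy + tx],
        [s,  c, (1 - c) * cy - s * cx + ty],
    ]

def warp_point(m, x, y):
    """Apply the 2x3 affine warp to one pixel coordinate."""
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])
```

In a real pipeline the warp would be applied to the whole frame (e.g., by a GPU or an image library), which is where the computational saving over feature tracking comes from: the IMU provides the motion directly.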
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
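The de-facto standard formulation mentioned above poses SLAM as maximum a posteriori estimation over a factor graph of poses and measurements. A toy 1-D pose-graph example, with assumed measurements and solved by plain gradient descent rather than the Gauss-Newton solvers used in practice, illustrates the idea:

```python
def optimize_pose_graph(n, edges, iters=2000, lr=0.05):
    """Toy 1-D pose graph: poses x[0..n-1], edges (i, j, z) meaning the
    measured offset x[j] - x[i] should equal z. x[0] is fixed as the
    anchor; the rest are found by gradient descent on the squared error."""
    x = [0.0] * n
    for _ in range(iters):
        grad = [0.0] * n
        for i, j, z in edges:
            r = (x[j] - x[i]) - z       # residual of this constraint
            grad[j] += 2 * r
            grad[i] -= 2 * r
        for k in range(1, n):           # keep x[0] anchored at 0
            x[k] -= lr * grad[k]
    return x

# odometry says each step moves +1.0, but a loop closure from pose 2
# back to pose 0 measures -2.2: the optimizer spreads the inconsistency
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 0, -2.2)]
```

Real systems solve the same least-squares problem over SE(2)/SE(3) poses with measurement covariances, exploiting the sparsity of the factor graph.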
Electronic Image Stabilization for Mobile Robotic Vision Systems
When a camera is affixed to a dynamic mobile robot, image stabilization is the first step towards more complex analysis of the video feed. This thesis presents a novel electronic image stabilization (EIS) algorithm for small, inexpensive, highly dynamic mobile robotic platforms with on-board camera systems. The algorithm combines optical-flow motion parameter estimation with angular rate data provided by a strapdown inertial measurement unit (IMU). A discrete Kalman filter in feedforward configuration is used for optimal fusion of the two data sources. Performance is evaluated using a simulated video truth model (capturing the effects of image translation, rotation, blurring, and moving objects) and live test data. Live data was collected from a camera and IMU affixed to the DAGSI Whegs™ mobile robotic platform as it navigated through a hallway. Template matching, feature detection, optical flow, and inertial measurement techniques are compared and analyzed to determine the most suitable algorithm for this specific type of image stabilization. Pyramidal Lucas-Kanade optical flow using Shi-Tomasi good features, in combination with inertial measurement, is found to be the superior EIS algorithm. In the presence of moving objects, fusing inertial measurements reduces the root-mean-squared (RMS) error of the optical-flow motion parameter estimates by 40%. No previous image stabilization algorithm directly fuses optical flow estimation with inertial measurement by way of Kalman filtering.
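The feedforward fusion can be sketched as a scalar Kalman filter in which the gyro rate drives the prediction and the optical-flow motion estimate supplies the correction. The noise variances below are assumed values, not the thesis's tuning:

```python
class RateFusionKF:
    """Feedforward Kalman filter for one rotation axis: the gyro angular
    rate propagates the camera-rotation estimate, and the optical-flow
    motion estimate corrects it."""
    def __init__(self, q=0.01, r=0.5):
        self.theta, self.p = 0.0, 1.0   # camera rotation estimate, variance
        self.q, self.r = q, r           # process / measurement noise

    def step(self, gyro_rate, flow_angle, dt):
        self.theta += gyro_rate * dt    # feedforward gyro prediction
        self.p += self.q
        k = self.p / (self.p + self.r)  # fuse the optical-flow angle
        self.theta += k * (flow_angle - self.theta)
        self.p *= 1.0 - k
        return self.theta
```

The thesis's filter estimates full inter-frame motion parameters; this scalar version only shows the prediction/correction split between the two sensors.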
Suspended Load Path Tracking Control Using a Tilt-rotor UAV Based on Zonotopic State Estimation
This work addresses the problem of path tracking control of a suspended load
using a tilt-rotor UAV. The main challenge in controlling this kind of system
arises from the dynamic behavior imposed by the load, which is usually coupled
to the UAV by means of a rope, adding unactuated degrees of freedom to the
whole system. Furthermore, knowledge of the load position is often needed to
perform the transportation task. Since
available sensors are commonly embedded in the mobile platform, information on
the load position may not be directly available. To solve this problem in this
work, initially, the kinematics of the multi-body mechanical system are
formulated from the load's perspective, from which a detailed dynamic model is
derived using the Euler-Lagrange approach, yielding a highly coupled, nonlinear
state-space representation of the system, affine in the inputs, with the load's
position and orientation directly represented by state variables. A zonotopic
state estimator is proposed to solve the problem of estimating the load
position and orientation, which is formulated based on sensors located at the
aircraft, with different sampling times, and unknown-but-bounded measurement
noise. To solve the path tracking problem, a discrete-time mixed
controller with pole-placement constraints
is designed with guaranteed time-response properties and robust to unmodeled
dynamics, parametric uncertainties, and external disturbances. Results from
numerical experiments, performed in a platform based on the Gazebo simulator
and on a Computer Aided Design (CAD) model of the system, are presented to
corroborate the performance of the zonotopic state estimator along with the
designed controller.
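Zonotopic estimation represents the uncertain state as a center vector plus generator vectors, which makes set propagation through linear dynamics a matrix product and a concatenation. A minimal sketch of that propagation step, with illustrative matrices rather than the tilt-rotor model:

```python
def propagate_zonotope(center, gens, A, w_gen):
    """Propagate a zonotope Z = {c + G*b : |b|_inf <= 1} through the
    linear update x+ = A*x + w, where w lies in a zonotope centered at
    the origin with generators w_gen. All quantities are plain lists."""
    n = len(center)
    def matvec(M, v):
        return [sum(M[i][k] * v[k] for k in range(n)) for i in range(n)]
    new_center = matvec(A, center)
    # map each generator through A, then append the noise generators
    new_gens = [matvec(A, g) for g in gens] + list(w_gen)
    return new_center, new_gens

def interval_hull(center, gens):
    """Axis-aligned bounds of the zonotope: c_i +/- sum_j |g_j[i]|."""
    rad = [sum(abs(g[i]) for g in gens) for i in range(len(center))]
    return [(c - r, c + r) for c, r in zip(center, rad)]
```

A complete estimator would add a measurement-update step that intersects the predicted set with the measurement strip, and a reduction step to bound the growing number of generators.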
Modeling and Control for Vision Based Rear Wheel Drive Robot and Solving Indoor SLAM Problem Using LIDAR
To achieve the ambitious long-term goal of a fleet of cooperating Flexible Autonomous
Machines operating in an uncertain Environment (FAME), this thesis addresses several
critical modeling, design, and control objectives for rear-wheel drive ground vehicles.
One central objective of the thesis was to show how to build a low-cost, multi-capability
robot platform that can be used for conducting FAME research.
A TFC-KIT car chassis was augmented to provide a suite of substantive capabilities.
The augmented vehicle (FreeSLAM Robot) costs less than $2,000.
All demonstrations presented involve rear-wheel drive FreeSLAM robot. The following
summarizes the key hardware demonstrations presented and analyzed:
(1) Cruise (v, ) control along a line,
(2) Cruise (v, ) control along a curve,
(3) Planar (x, y) Cartesian Stabilization for rear wheel drive vehicle,
(4) Finish the track with camera pan tilt structure in minimum time,
(5) Finish the track without camera pan tilt structure in minimum time,
(6) Vision based tracking performance with different cruise speed vx,
(7) Vision based tracking performance with different camera fixed look-ahead distance L,
(8) Vision based tracking performance with different delay Td from vision subsystem,
(9) Manually remote controlled robot to perform indoor SLAM,
(10) Autonomously line guided robot to perform indoor SLAM.
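Demonstrations (6) through (8) revolve around a camera look-ahead distance L. A hypothetical pure-pursuit-style steering law of the kind such vision-guided platforms often use (not the thesis's controller, and with example dimensions) can be sketched as:

```python
import math

def lookahead_steering(lateral_error, heading_error, L=0.3, wheelbase=0.18):
    """Convert the camera's view of the guide line into a steering angle.
    lateral_error: line offset at the robot (m); heading_error: angle
    between robot heading and the line (rad); L: look-ahead distance (m)."""
    # offset of the line, projected L meters ahead of the camera
    y_ahead = lateral_error + L * math.sin(heading_error)
    # pure pursuit: curvature = 2*y/L^2, steering angle from bicycle model
    curvature = 2.0 * y_ahead / (L * L)
    return math.atan(wheelbase * curvature)
```

The dependence on L and on the processing delay Td of the vision subsystem is exactly what demonstrations (7) and (8) vary experimentally.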
For most cases, hardware data is compared with, and corroborated by, model based
simulation data. In short, the thesis uses low-cost self-designed rear-wheel
drive robot to demonstrate many capabilities that are critical in order to reach the
longer-term FAME goal.
Masters Thesis, Electrical Engineering, 201
- …