
    Development of Non Expensive Technologies for Precise Maneuvering of Completely Autonomous Unmanned Aerial Vehicles

    In this paper, solutions for the precise maneuvering of a small (e.g., 350-class) autonomous Unmanned Aerial Vehicle (UAV) are designed and implemented through smart modifications of inexpensive, mass-market technologies. This class of vehicle has a limited payload, so only a small set of sensors and computing devices can be installed on board. To make the prototype capable of moving autonomously along a fixed trajectory, a “cyber-pilot”, able to replace the human operator on demand, has been implemented on an embedded control board. The cyber-pilot overrides the operator's commands through a custom hardware signal mixer. The drone localizes itself in the environment without ground assistance by using a camera, optionally mounted on a 3-Degrees-Of-Freedom (DOF) gimbal suspension. A computer vision system processes the video stream to detect land markers with known absolute position and orientation. This information is fused with accelerations from a 6-DOF Inertial Measurement Unit (IMU) to form a “virtual sensor” that provides refined estimates of the drone's pose, absolute position, speed, and angular velocities. Given the importance of this sensor, several fusion strategies have been investigated. The resulting data are finally fed to a control algorithm built from a number of uncoupled digital PID controllers, which work to drive the displacement from the desired trajectory to zero.
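    The abstract does not detail the fusion strategies behind the “virtual sensor”. A minimal sketch of one common choice, a per-axis Kalman filter that propagates high-rate IMU accelerations and corrects with low-rate marker-based position fixes (all parameter values below are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

class AxisFusion:
    """Per-axis fusion of IMU acceleration with marker-based position fixes."""

    def __init__(self, dt, accel_var=0.5, vision_var=0.02):
        self.x = np.zeros(2)                            # state: [position, velocity]
        self.P = np.eye(2)                              # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity model
        self.B = np.array([0.5 * dt**2, dt])            # acceleration input mapping
        self.Q = accel_var * np.outer(self.B, self.B)   # process noise from accel noise
        self.H = np.array([[1.0, 0.0]])                 # vision observes position only
        self.R = np.array([[vision_var]])               # vision measurement noise

    def predict(self, accel):
        """High-rate step driven by one IMU acceleration sample."""
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def correct(self, marker_pos):
        """Low-rate step whenever the camera sees a land marker."""
        y = marker_pos - self.H @ self.x                # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P

# Usage: one filter per axis, e.g. a 200 Hz IMU with a vision fix every 10th sample.
f = AxisFusion(dt=0.005)
for k in range(200):
    f.predict(accel=0.1)
    if k % 10 == 0:
        f.correct(marker_pos=np.array([0.0]))
print(f.x)   # refined [position, velocity] estimate
```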

    GyroFlow+: Gyroscope-Guided Unsupervised Deep Homography and Optical Flow Learning

    Existing homography and optical flow methods are erroneous in challenging scenes, such as fog, rain, night, and snow, because basic assumptions such as brightness and gradient constancy are broken. To address this issue, we present an unsupervised learning approach that fuses gyroscope data into homography and optical flow learning. Specifically, we first convert gyroscope readings into a motion field, named the gyro field. Second, we design a self-guided fusion module (SGF) to fuse the background motion extracted from the gyro field with the optical flow and guide the network to focus on motion details. Meanwhile, we propose a homography decoder module (HD) that combines the gyro field with intermediate results of the SGF to produce the homography. To the best of our knowledge, this is the first deep learning framework that fuses gyroscope data and image content for both deep homography and optical flow learning. To validate our method, we propose a new dataset that covers regular and challenging scenes. Experiments show that our method outperforms state-of-the-art methods in both regular and challenging scenes.
    Comment: 12 pages. arXiv admin note: substantial text overlap with arXiv:2103.1372
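    The exact construction of the gyro field is not given in the abstract; a standard way to derive a rotation-only motion field from gyro data uses the infinite homography H = K R K^-1 (the intrinsics K and the single-sample integration below are assumptions for illustration):

```python
import numpy as np

def rotation_from_gyro(omega, dt):
    """Integrate one angular-rate sample (rad/s) via Rodrigues' formula."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return np.eye(3)
    k = omega / np.linalg.norm(omega)                  # rotation axis
    K_ = np.array([[0, -k[2], k[1]],
                   [k[2], 0, -k[0]],
                   [-k[1], k[0], 0]])                  # skew-symmetric matrix
    return np.eye(3) + np.sin(theta) * K_ + (1 - np.cos(theta)) * K_ @ K_

def gyro_field(R, K, h, w):
    """Per-pixel image motion induced by a pure camera rotation R."""
    H = K @ R @ np.linalg.inv(K)                       # infinite homography
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    warped = H @ pts
    warped = warped[:2] / warped[2]                    # dehomogenize
    return (warped - pts[:2]).T.reshape(h, w, 2)       # displacement field

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])   # assumed intrinsics
R = rotation_from_gyro(np.array([0.0, 0.1, 0.0]), dt=1 / 30)  # one frame interval
print(gyro_field(R, K, 480, 640).shape)                       # (480, 640, 2)
```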

    GyroFlow: Gyroscope-Guided Unsupervised Optical Flow Learning

    Existing optical flow methods are erroneous in challenging scenes, such as fog, rain, and night, because basic optical flow assumptions such as brightness and gradient constancy are broken. To address this problem, we present an unsupervised learning approach that fuses gyroscope data into optical flow learning. Specifically, we first convert gyroscope readings into a motion field, named the gyro field. Then, we design a self-guided fusion module to fuse the background motion extracted from the gyro field with the optical flow and guide the network to focus on motion details. To the best of our knowledge, this is the first deep-learning-based framework that fuses gyroscope data and image content for optical flow learning. To validate our method, we propose a new dataset that covers regular and challenging scenes. Experiments show that our method outperforms state-of-the-art methods in both regular and challenging scenes.
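    For context on why these scenes break unsupervised methods: training typically minimizes a photometric (brightness-constancy) residual between the first frame and the flow-warped second frame, which becomes unreliable when brightness constancy fails. A minimal PyTorch sketch of such a loss (shapes and normalization follow common practice, not necessarily this paper's code):

```python
import torch
import torch.nn.functional as F

def warp(img2, flow):
    """Backward-warp img2 into frame 1 using flow (B, 2, H, W) in pixels."""
    b, _, h, w = img2.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack([xs, ys]).float().unsqueeze(0)    # (1, 2, H, W) pixel grid
    coords = base + flow                                 # follow the flow
    gx = 2 * coords[:, 0] / (w - 1) - 1                  # normalize to [-1, 1]
    gy = 2 * coords[:, 1] / (h - 1) - 1
    grid = torch.stack([gx, gy], dim=-1)                 # (B, H, W, 2)
    return F.grid_sample(img2, grid, align_corners=True)

def photometric_loss(img1, img2, flow):
    """L1 brightness-constancy residual: small only if the flow is right."""
    return (img1 - warp(img2, flow)).abs().mean()

img1 = torch.rand(1, 3, 64, 64)
img2 = torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64, requires_grad=True)
photometric_loss(img1, img2, flow).backward()            # differentiable end to end
```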

    Relative Pose Estimation Algorithm with Gyroscope Sensor

    This paper proposes S2fM (Simplified Structure from Motion), a novel vision and inertial fusion algorithm for camera relative pose estimation. Unlike existing algorithms, our algorithm estimates the rotation and translation parameters separately. S2fM employs gyroscopes to estimate the camera rotation, which is then fused with image data to estimate the camera translation. Our contributions are twofold. (1) Since no inertial sensor can estimate the translation parameter accurately enough, we propose a translation estimation algorithm that fuses gyroscope and image data. (2) Our S2fM algorithm is efficient and suitable for smart devices. Experimental results validate the efficiency of the proposed S2fM algorithm.
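    The core idea, estimating translation once rotation is known from the gyroscope, can be illustrated with the epipolar constraint x2^T [t]x R x1 = 0, which is linear in t. A sketch under assumed conventions (synthetic data; not the authors' code) that recovers t up to scale:

```python
import numpy as np

def estimate_translation(R, x1, x2):
    """Translation up to scale from known rotation and (N, 3) homogeneous points."""
    y = (R @ x1.T).T                       # first-view rays rotated into view 2
    # x2 . (t x y) = 0  <=>  (y x x2) . t = 0: one linear equation per match
    A = np.cross(y, x2)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]                          # null vector of A (sign is ambiguous)

# Synthetic sanity check with a known motion (illustrative only).
rng = np.random.default_rng(0)
P = np.hstack([rng.uniform(-1, 1, (20, 2)), np.ones((20, 1))])
P *= rng.uniform(2, 5, (20, 1))            # 3-D points in front of camera 1
R_true = np.eye(3)                         # rotation "from the gyroscope"
t_true = np.array([1.0, 0.0, 0.2])
x1 = P / P[:, 2:3]                         # normalized image points, view 1
P2 = (R_true @ P.T).T + t_true
x2 = P2 / P2[:, 2:3]                       # normalized image points, view 2
t = estimate_translation(R_true, x1, x2)
t *= np.sign(t @ t_true)                   # resolve the sign for comparison
print(t / np.linalg.norm(t), t_true / np.linalg.norm(t_true))
```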

    Trends in Sighting Systems for Combat Vehicles

    Search and tracking under dynamic conditions, rapid re-targeting, precision pointing, and long-range engagement in day and night conditions are the core requirements of stabilised sighting systems used in combat vehicles. The complex battlefield requires an integrated fire control system with a stabilised sighting system as its main constituent; this enables quick reaction by the fire control system and provides a vital edge in the battlefield scenario. Precision gimbal design, optics design, embedded engineering, control systems, electro-optical sensors, target detection and tracking, panorama generation, auto-alerting, digital image stabilisation, image fusion, and integration are important aspects of sighting system development. In this paper, design considerations for a state-of-the-art stabilised sighting system are presented, including laboratory and field evaluation methods for such systems.

    Control System in Open-Source FPGA for a Self-Balancing Robot

    Computing in technological applications is typically performed with software running on general-purpose microprocessors, such as the Central Processing Unit (CPU), or on specialized ones, like the Graphical Processing Unit (GPU). Application-Specific Integrated Circuits (ASICs) are an interesting option when speed and reliability are required, but development costs are usually high. Field-Programmable Gate Arrays (FPGAs) combine the flexibility of software with the high-speed operation of hardware, and can keep costs low. The dominant FPGA infrastructure is proprietary, but open tools have greatly improved and are a growing trend from which robotics can benefit. This paper presents a robotics application that was fully developed using open FPGA tools. An inverted pendulum robot was designed, built, and programmed using open FPGA tools, such as IceStudio and the IceZum Alhambra board, which integrates the iCE40HX4K-TQ144 from Lattice. The perception from an inertial sensor is used in a PD control algorithm that commands two DC motors. All the modules were synthesized on an FPGA as a proof of concept. The experimental validation shows good behavior and performance.
    This work was partially funded by the Community of Madrid through the RoboCity2030-III project (S2013/MIT-2748) and by the Spanish Ministry of Economy and Competitiveness through the RETOGAR project (TIN2016-76515-R).
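    As a software sketch of the control loop described above (the actual design runs as synthesized logic on the iCE40 FPGA), a PD controller maps the IMU tilt estimate to a common PWM command for the two DC motors; gains, loop rate, and saturation below are illustrative assumptions:

```python
KP, KD = 600.0, 30.0        # PD gains (illustrative, not the paper's values)
DT = 0.005                  # 200 Hz control loop (assumed)
MAX_CMD = 255               # 8-bit PWM saturation

def pd_step(tilt, prev_tilt):
    """One control step: IMU tilt (rad) in, clamped motor PWM command out."""
    error = -tilt                              # setpoint is upright (0 rad)
    d_error = -(tilt - prev_tilt) / DT         # derivative of the error
    cmd = KP * error + KD * d_error
    return max(-MAX_CMD, min(MAX_CMD, cmd))    # clamp to the PWM range

# Both wheels receive the same balancing command; steering would add a
# differential term on top of it.
prev = 0.0
for tilt in (0.020, 0.035, 0.030):             # example IMU tilt readings (rad)
    print(f"tilt={tilt:+.3f} rad -> pwm={pd_step(tilt, prev):+.1f}")
    prev = tilt
```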

    On-board Obstacle Avoidance in the Teleoperation of Unmanned Aerial Vehicles

    The teleoperation of unmanned aerial vehicles (UAVs), especially in cramped, GPS-restricted environments, poses many challenges. The presence of obstacles in an unfamiliar environment requires reliable state estimation and active algorithms to prevent collisions. In this dissertation, we present a collision-free indoor navigation system for a teleoperated quadrotor UAV. The platform is equipped with an on-board miniature computer and a minimal set of sensors for this task, and is self-sufficient with respect to external tracking systems and computation. It is capable of highly accurate state estimation, tracking of the velocity commanded by the user, and collision-free navigation. The robot estimates its state in a cascade architecture: the attitude of the platform is calculated with a complementary filter, and its linear velocity through a Kalman filter that integrates inertial and optical flow measurements. An RGB-D camera provides visual feedback to the operator as well as depth measurements used to build a probabilistic, robot-centric obstacle state with a bin-occupancy filter. The algorithm tracks obstacles even after they leave the field of view of the sensor by updating their positions with the estimate of the robot's motion. The avoidance part of our navigation system is based on the Model Predictive Control approach: by predicting possible future obstacle states, the UAV filters the operator commands, altering them to prevent collisions. Experiments in obstacle-rich indoor and outdoor environments validate the efficiency of the proposed setup. Flying robots are highly prone to damage in the case of control errors, as these will most likely cause them to fall to the ground; the development of algorithms for UAVs therefore entails a considerable amount of time and resources. In this dissertation we present two simulation methods, software- and hardware-in-the-loop simulation, to facilitate this process. Software-in-the-loop testing was used for the development and tuning of the state estimator for our robot, using both simulated sensors and pre-recorded datasets of sensor measurements, e.g., from real robotic experiments. With hardware-in-the-loop simulation, we are able to command the robot simulated in Gazebo, a popular open-source ROS-enabled physical simulator, using the computational units embedded on our quadrotor UAVs. Hence, we can test in simulation not only the correct execution of algorithms, but also their computational feasibility directly on the robot's hardware. Lastly, we analyze the influence of the robot's motion on the visual feedback provided to the operator.
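    A minimal sketch of the attitude stage of this cascade: a complementary filter blends gyro integration (accurate at high frequency) with the accelerometer's gravity direction (drift-free at low frequency). The blend factor and rate below are assumptions, not the thesis' values:

```python
import numpy as np

ALPHA = 0.98     # weight of the integrated gyro per step (assumed)
DT = 0.005       # 200 Hz IMU rate (assumed)

def complementary_step(angle, gyro_rate, accel):
    """Update one tilt angle (rad) from a gyro rate (rad/s) and an accel sample."""
    angle_gyro = angle + gyro_rate * DT           # smooth short-term, but drifts
    angle_acc = np.arctan2(accel[0], accel[2])    # noisy, but drift-free gravity cue
    return ALPHA * angle_gyro + (1 - ALPHA) * angle_acc

angle = 0.2                                       # deliberately wrong initial guess
for _ in range(500):
    # stationary IMU: zero rate, gravity along z, so the estimate decays to 0
    angle = complementary_step(angle, gyro_rate=0.0, accel=(0.0, 0.0, 9.81))
print(angle)                                      # ~0 after convergence
```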
While some UAVs have the capacity to carry mechanically stabilized camera equipment, weight limits or other constraints may make mechanical stabilization impractical. With a fixed camera, the video stream is often unsteady due to the multirotor's movement and can impair the operator's situation awareness. There has been significant research on stabilizing videos by feature tracking, which reconstructs the camera movement and then manipulates the frames to stabilize the stream. However, we believe that this process can be greatly simplified by using data from the UAV's on-board inertial measurement unit to stabilize the camera feed. Our results show that our algorithm successfully stabilizes the camera stream, with the added benefit of requiring less computational power. We also propose a novel quadrotor design concept that decouples the quadrotor's orientation from its lateral motion. In our design, the tilt angles of the propellers with respect to the quadrotor body are simultaneously controlled with two additional actuators by employing the parallelogram principle. After deriving the dynamic model of this design, we propose a controller for the platform based on feedback linearization. Simulation results confirm our theoretical findings, highlighting the improved motion capabilities of this novel design with respect to standard quadrotors.
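    The gyro-based stabilization idea can be sketched as follows: instead of recovering camera motion by feature tracking, build the compensating warp directly from the IMU-derived rotation R and apply it as the homography K R^T K^-1. Intrinsics and the example rotation below are assumptions for illustration:

```python
import numpy as np
import cv2

def stabilize_frame(frame, R, K):
    """Undo a pure camera rotation R with the homography K R^T K^-1."""
    H = K @ R.T @ np.linalg.inv(K)     # inverse rotation; no depth or features needed
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])   # assumed intrinsics
a = np.deg2rad(2.0)                    # e.g. a small roll integrated from gyro rates
R = np.array([[np.cos(a), -np.sin(a), 0],
              [np.sin(a),  np.cos(a), 0],
              [0.0,        0.0,       1]])
frame = np.zeros((480, 640, 3), dtype=np.uint8)               # stand-in camera frame
print(stabilize_frame(frame, R, K).shape)                     # (480, 640, 3)
```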