
    Vision-based Safe Autonomous UAV Docking with Panoramic Sensors

    The remarkable growth of unmanned aerial vehicles (UAVs) has also sparked concerns about safety measures during their missions. To advance towards safer autonomous aerial robots, this work presents a vision-based solution for ensuring safe autonomous UAV landings with minimal infrastructure. During docking maneuvers, UAVs pose a hazard to people in the vicinity. In this paper, we propose the use of a single omnidirectional panoramic camera pointing upwards from a landing pad to detect and estimate the position of people around the landing area. The images are processed in real time on an embedded computer, which communicates with the onboard computer of approaching UAVs to transition between landing, hovering, or emergency-landing states. While landing, the ground camera also aids in finding an optimal landing position, which can be required in case of low battery or when hovering is no longer possible. We use a YOLOv7-based object detection model and an XGBoost model for localizing nearby people, and the open-source ROS and PX4 frameworks for communication, interfacing, and control of the UAV. We present both simulation and real-world indoor experimental results to show the efficiency of our methods.
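    The abstract describes a ground station that commands approaching UAVs to land, hover, or perform an emergency landing depending on whether people are detected near the pad. A minimal sketch of such a decision rule, with names and conditions that are our own illustration rather than the paper's actual transition logic:

```python
from enum import Enum, auto

class UavState(Enum):
    LANDING = auto()
    HOVERING = auto()
    EMERGENCY_LANDING = auto()

def next_state(people_in_zone: int, battery_low: bool) -> UavState:
    """Choose the commanded UAV state from the ground camera's person count.

    Hypothetical decision rule for illustration only; the paper's actual
    transition logic is not spelled out in the abstract.
    """
    if people_in_zone == 0:
        return UavState.LANDING            # zone clear: proceed to land
    if battery_low:
        return UavState.EMERGENCY_LANDING  # cannot keep hovering safely
    return UavState.HOVERING               # wait for the zone to clear
```

    In the real system this decision would be sent over ROS to the PX4-controlled UAV; here it is reduced to a pure function so the rule itself is easy to test.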

    Image Simulation in Remote Sensing

    Remote sensing is being actively researched in the fields of environment, military, and urban planning through technologies such as monitoring of natural climate phenomena on the Earth, land cover classification, and object detection. Recently, satellites equipped with observation cameras of various resolutions have been launched, and remote sensing images are acquired by various observation methods, including cluster satellites. However, the atmospheric and environmental conditions present in the observed scene degrade the quality of images or interrupt the capture of the Earth's surface information. One way to overcome this is to generate synthetic images through image simulation. Synthetic images can be generated using statistical or knowledge-based models, or using spectral and optics-based models, to create a simulated image in place of an unobtained image at a required time. The proposed methodologies provide economical utility in the generation of image learning materials and time-series data through image simulation. The six published articles cover various topics and applications central to remote sensing image simulation. Although submission to this Special Issue is now closed, the need remains for further in-depth research and development related to image simulation at high spatial and spectral resolution, sensor fusion, and colorization. I would like to take this opportunity to express my most profound appreciation to the MDPI Book staff, the editorial team of the Applied Sciences journal, especially Ms. Nimo Lang, the assistant editor of this Special Issue, the talented authors, and the professional reviewers.

    Monocular visual odometry in agricultural robots with fisheye-lens camera(s)

    One of the main challenges in robotics is to develop accurate localization methods that achieve acceptable runtime performance. One of the most common approaches is to use a Global Navigation Satellite System (GNSS), such as GPS, to localize robots. However, satellite signals are not available at all times in some kinds of environments. The purpose of this dissertation is to develop a localization system for a ground robot. This robot is part of a project called RoMoVi and is intended to perform tasks such as crop monitoring and harvesting in steep-slope vineyards. These vineyards are located in the Douro region, which is characterized by the presence of high hills. Thus, the context of RoMoVi is not favourable to GPS-based localization systems. Therefore, the main goal of this work is to create a reliable localization system based on vision techniques and low-cost sensors. To do so, a visual odometry system will be used. The concept of visual odometry is equivalent to wheel odometry, but it has the advantage of not suffering from wheel slip, which is present in this kind of environment due to the harsh terrain conditions. Here, motion is tracked by incrementally computing the homogeneous transformation between camera frames. However, this approach also presents some open issues. Most state-of-the-art methods, especially those based on a monocular camera system, do not estimate motion well under pure rotation. In some of them, motion estimation even degenerates in these situations. Also, computing the motion scale is a difficult task that is widely investigated in this field. This work is intended to solve these issues. To do so, fisheye-lens cameras will be used in order to achieve wide fields of view.
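    Visual odometry as described here accumulates relative camera motion frame by frame. A minimal sketch of that chaining step, assuming the relative rotation and (scale-ambiguous) translation have already been recovered from matched features; the function name is ours, not the dissertation's:

```python
import numpy as np

def compose(T_total: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Chain one frame-to-frame motion into the accumulated camera pose.

    T_total : 4x4 homogeneous transform accumulated so far.
    R, t    : 3x3 rotation and 3-vector translation between consecutive
              frames, e.g. recovered from the essential matrix of matched
              features. With a single camera, t is only known up to scale,
              which is the scale problem the dissertation discusses.
    """
    T_rel = np.eye(4)
    T_rel[:3, :3] = R
    T_rel[:3, 3] = t
    return T_total @ T_rel
```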

    Vision-based safe autonomous UAV landing with panoramic sensors

    The remarkable growth of unmanned aerial vehicles (UAVs) has also raised concerns about safety measures during their missions. To advance towards safer autonomous aerial robots, this thesis strives to develop a safe autonomous UAV landing solution, a vital part of every UAV operation. The project proposes a vision-based framework for monitoring the landing area by leveraging the omnidirectional view of a single panoramic camera pointing upwards to detect and localize any person within the landing zone. It then sends this information to approaching UAVs so they can either hover and wait or adaptively search for a more optimal position at which to land. We utilize and fine-tune the YOLOv7 object detection model, an XGBoost model for localizing nearby people, and the open-source ROS and PX4 frameworks for communications and drone control. We present both simulation and real-world indoor experimental results to demonstrate the capability of our methods.

    Heading Estimation via Sun Sensing for Autonomous Navigation

    In preparation for the Mars 2020 mission, NASA JPL and Caltech have been exploring the potential of sending a scout robot to accompany the new rover. One of the leading candidates for this scout robot is a lightweight helicopter that can fly every day for roughly 1 to 3 minutes. Its findings would be critical for the rover's path planning because of its ability to see over and around local terrain elements. The inconsistent Martian magnetic field and GPS-denied environment would require the navigation system of such a vehicle to be completely overhauled. In this thesis, we present a novel technique for heading estimation for autonomous vehicles using sun sensing via a fisheye camera. The approach yields heading estimates accurate to within 2.4° when relying on the camera alone. If the information from the camera is fused with the other sensors, the heading estimates are even more accurate. While this does not yet meet the desired error bound, it is a start, and the critical flaws in the algorithm have already been identified so that performance can be improved significantly. This lightweight solution shows promise and does meet the weight constraints for the 1 kg Mars 2020 Helicopter Scout.
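    The heading-from-sun idea reduces to a simple relation: the sun's azimuth in the body frame (measured in the upward fisheye image) differs from its azimuth in the world frame (known from an ephemeris and a clock) by exactly the vehicle heading. A toy sketch under that assumption; the function name and angle conventions are ours, not the thesis's:

```python
def heading_from_sun(image_azimuth_deg: float, ephemeris_azimuth_deg: float) -> float:
    """Vehicle heading from a single sun observation.

    image_azimuth_deg     : sun azimuth in the body frame, measured from
                            the upward-pointing fisheye image.
    ephemeris_azimuth_deg : sun azimuth in the world frame (from true
                            north), computed from an ephemeris and the time.
    The difference between the two is the heading, wrapped to [0, 360).
    """
    return (ephemeris_azimuth_deg - image_azimuth_deg) % 360.0
```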

    Augmented Reality in Forest Machine Cabin

    An augmented reality human-machine interface is demonstrated in the cabin of a forest machine outdoors, in real time, for the first time. In this work, we propose a system setup and a real-time-capable algorithm to augment the operator’s visual field with measurements from the forest machine and its environment. In the demonstration, an instrumented forestry crane and a lidar are used to model the pose of the crane and its surroundings. In our approach, a camera and an inertial measurement unit are used to estimate the pose of the operator’s head in difficult lighting conditions with the help of planar markers placed on the cabin structures. Using this estimate, a point cloud and a crane model are superimposed on the video feed to form an augmented reality view. Our system was tested outdoors on a forest machine research platform in real time, with encouraging initial results. Peer reviewed.
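    Superimposing the lidar point cloud on the video requires projecting each 3-D point through the estimated head pose into the camera image. A minimal pinhole-projection sketch, with the pose and intrinsics as illustrative placeholders rather than the demonstrator's actual calibration:

```python
import numpy as np

def project_point(p_world, R, t, fx, fy, cx, cy):
    """Project one lidar point into the operator-view camera image.

    R, t           : world-to-camera rotation and translation, here standing
                     in for the marker/IMU head-pose estimate.
    fx, fy, cx, cy : pinhole intrinsics (illustrative placeholders).
    Returns pixel coordinates (u, v), or None if the point is behind the
    camera and cannot be drawn.
    """
    p_cam = R @ np.asarray(p_world, dtype=float) + t
    if p_cam[2] <= 0.0:        # behind the image plane
        return None
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v
```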

    Outdoor navigation of mobile robots

    AGVs in the manufacturing industry currently constitute the largest application area for mobile robots. Other applications have been gradually emerging, including various transport tasks in demanding environments, such as mines or harbours. Most of the new potential applications require a free-ranging navigation system, which means that the path of a robot is no longer bound to follow a buried inductive cable. Moreover, changing the route of a robot or taking a new working area into use must be as easy as possible. These requirements set new challenges for the navigation systems of mobile robots. One of the basic methods of building a free-ranging navigation system is to combine dead-reckoning navigation with the detection of beacons at known locations. This approach is the backbone of the navigation systems in this study. The study describes research and development work in the area of mobile robotics, including applications in forestry, agriculture, mining, and transportation in a factory yard. The focus is on describing navigation sensors and methods for position and heading estimation by fusing dead-reckoning and beacon-detection information. A Kalman filter is typically used here for sensor fusion. Both artificial and natural beacons are covered. Artificial beacons used in the research and development projects include specially designed flat objects detected using a camera, the GPS satellite positioning system, and passive transponders buried in the ground along the route of a robot. The walls of a mine tunnel have been used as natural beacons. In this case, special attention has been paid to building a map and using it for positioning. The main contribution of the study is in describing the structure of a working navigation system, including positioning and position control. The navigation system for the mining application, in particular, contains some unique features that provide an easy-to-use procedure for taking new production areas into use and make it possible to drive a heavy mining machine autonomously at speeds comparable to those of an experienced human driver.
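    The fusion scheme described, dead reckoning corrected by beacon detections through a Kalman filter, can be illustrated in one dimension. A toy sketch of one predict/update cycle, not the study's actual multi-state estimator:

```python
def kf_step(x, P, u, q, z=None, r=None):
    """One predict(+update) cycle of a scalar Kalman filter.

    x, P : position estimate and its variance
    u, q : dead-reckoned displacement and its noise variance
    z, r : optional beacon position fix and its noise variance
    One-dimensional toy version of the fusion scheme; the systems in the
    study estimate full position and heading.
    """
    # Predict: integrate the odometry and grow the uncertainty.
    x, P = x + u, P + q
    # Update: blend in a beacon measurement whenever one is available.
    if z is not None:
        K = P / (P + r)          # Kalman gain
        x = x + K * (z - x)
        P = (1.0 - K) * P
    return x, P
```

    Between beacon sightings only the predict step runs and the variance grows; each detection pulls the estimate toward the beacon fix and shrinks the variance, which is what keeps pure dead reckoning from drifting unbounded.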

    Augmented reality user interface for semi-autonomous work machines

    Forest machines are being automated today. However, the challenging environment and the complexity of the work make the task difficult. A forest machine operator needs easily interpretable input from the machine in order to supervise and control it. Hence, a device that shows digital information as part of the real environment is desired. The goal of this thesis is to implement a real-time augmented reality display for forest machines. The main task is to estimate the pose of the user’s head, because the virtual data should be aligned with real objects. The digital content, and how it is visualized, also has to be considered. A machine vision camera and inertial measurements are used in the pose estimation. Visual markers are utilized to obtain a pose estimate of the camera, and orientation from the inertial measurements is estimated using an extended Kalman filter. To get the final estimate, the orientations from the two devices are fused. Furthermore, the virtual data comes mainly from an onboard lidar. A 3D point cloud and a wireframe model of a forestry crane are augmented onto a live video on a PC. The implemented system proved to work outdoors with actual hardware in real time. Although there are some identifiable errors in the pose estimate, the initial results are encouraging. Further improvements should target the accuracy of marker detection and the development of a comprehensive sensor fusion algorithm.

    The challenging environment and complex work tasks make automating forest machine functions difficult. It would therefore be desirable for the forest machine operator to be able to interpret information coming from the machine easily and quickly. As a solution, a system is proposed that merges digital information into the operating environment. This would enable smoother supervision and control of a semi-autonomous work machine. The goal of this work is to implement an augmented reality display for forest machines. The most important task is to estimate the position and orientation of the user’s head, since the digital data should overlap with reality. In addition, the content of the virtual information, and how it is presented to the user, must be considered. A head-mounted machine vision camera and an inertial measurement unit are used to measure position and orientation. The camera detects markers placed in the machine cabin, from which both the position and the orientation of the camera can be estimated. The orientation estimate is further corrected with inertial measurements using sensor fusion. The virtual information on the display comes mainly from a laser scanner and is added to the video shown on the computer screen as a three-dimensional point cloud. The crane and tool of the forest machine are also shown as a virtual model. The implemented system proved to work with real hardware in an outdoor experiment. The initial results are encouraging, but position and orientation errors were also observed and identified. Future development targets include more accurate measurement of marker positions and the development of a more comprehensive sensor fusion algorithm.