
    Vision-based self-guided Quadcopter landing on moving platform during fault detection

    Fault occurrence in quadcopters is common during flight. This paper presents a real-time implementation that detects faults and then guarantees a safe landing, even on a moving platform. A PixHawk autopilot was used for real-time verification, with platform detection under various environmental conditions. The method keeps the quadcopter within the designated landing zone with the help of GPS. Precise landing on the unstable platform is then calibrated automatically using a vision-based learning feedback technique. The proposed system is implemented on a reconfigurable Raspberry Pi 3 with a Pi camera, and the complete landing algorithm is deployed on the quadcopter. The system is self-guided and automatically returns to its home base whenever a fault is detected. The study also covers low-battery operation, where the autopilot is triggered to land the vehicle safely before any malfunction occurs. Predetermined speed and altitude while navigating to the home base further improve the detection process. Finally, the experimental study demonstrated successful trials in tracking a usable platform, landing in a restricted area, and disarming the motors autonomously.
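    The vision-based feedback landing described above can be illustrated with a minimal sketch: a proportional controller converts the pixel offset of the detected platform into lateral velocity commands and descends only once the vehicle is roughly centred. The frame size, gains, tolerances, and detection interface below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a vision-guided landing loop: a proportional
# controller turns the pixel offset of a detected platform into lateral
# velocity commands, and descends only when the platform is near the
# image centre. All numbers and sign conventions are assumptions.

def landing_command(platform_px, frame_w=640, frame_h=480,
                    k_xy=0.002, descend_rate=0.3, centre_tol=30):
    """Map platform pixel position to (vx, vy, vz) velocity commands."""
    cx, cy = frame_w / 2, frame_h / 2
    ex, ey = platform_px[0] - cx, platform_px[1] - cy
    vx, vy = -k_xy * ex, -k_xy * ey          # drive the pixel error to zero
    centred = abs(ex) < centre_tol and abs(ey) < centre_tol
    vz = -descend_rate if centred else 0.0   # descend only when centred
    return vx, vy, vz

# Platform detected right of the image centre: correct laterally,
# hold altitude; once centred, the vertical command becomes negative.
vx, vy, vz = landing_command((500, 240))
```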

    The Detection System of Helipad for Unmanned Aerial Vehicle Landing Using YOLO Algorithm

    The main challenge in using an Unmanned Aerial Vehicle (UAV) is the landing phase. This problem can be addressed by developing landing vision through helipad detection, which allows a UAV to land accurately and precisely by detecting the helipad with a camera. Image processing is then applied to the images produced by the camera. You Only Look Once (YOLO) is an image processing algorithm developed to detect objects in real time, derived from Convolutional Neural Network (CNN) methods. In this study, the YOLO method was therefore used to detect a helipad in real time. The approaches compared were Mean-Shift and the Tiny YOLO VOC model, with Tiny YOLO VOC performing better at detecting helipads. The test results achieved a confidence value of 91.1%, and the system processing speed reached 35 frames per second (fps) in bright conditions and 37 fps in dark conditions at altitudes of up to 20 meters.
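    The detection step of a YOLO-style pipeline ends with post-processing of the raw candidate boxes. As a rough, self-contained sketch of that stage (not the study's code), the following applies a confidence threshold and greedy non-maximum suppression to hypothetical helipad candidates; the box coordinates and thresholds are made up for illustration.

```python
# Illustrative post-processing for YOLO-style detections: keep boxes
# above a confidence threshold, then apply greedy non-maximum
# suppression (NMS). Boxes and thresholds are example values only.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, conf_thresh=0.5, iou_thresh=0.4):
    """detections: list of (box, confidence); returns kept detections."""
    dets = sorted((d for d in detections if d[1] >= conf_thresh),
                  key=lambda d: d[1], reverse=True)
    kept = []
    for box, conf in dets:
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, conf))
    return kept

# Two overlapping helipad candidates and one low-confidence box:
# only the strongest (0.911) detection survives.
dets = [((100, 100, 200, 200), 0.911),
        ((110, 105, 205, 210), 0.60),
        ((400, 400, 450, 450), 0.30)]
result = nms(dets)
```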

    Carnegie Mellon Team Tartan: Mission-level Robustness with Rapidly Deployed Autonomous Aerial Vehicles in the MBZIRC 2020

    For robotic systems to be used in high-risk, real-world situations, they have to be quickly deployable and robust to environmental changes, under-performing hardware, and mission subtask failures. Robots are often designed to consider a single sequence of mission events, with complex algorithms lowering individual subtask failure rates under some critical constraints. Our approach is to leverage common techniques in vision and control and encode robustness into the mission structure through outcome monitoring and recovery strategies, aided by a system infrastructure that allows for quick mission deployments under tight time constraints and no central communication. We also detail lessons in rapid field robotics development and testing. Systems were developed and evaluated through real-robot experiments at an outdoor test site in Pittsburgh, Pennsylvania, USA, as well as in the 2020 Mohamed Bin Zayed International Robotics Challenge. All competition trials were completed in fully autonomous mode without RTK-GPS. Our system led to 4th place in Challenge 2 and 7th place in the Grand Challenge, with achievements including popping five balloons (Challenge 1), successfully picking and placing a block (Challenge 2), and autonomously dispensing the most water of all teams onto an outdoor, real fire with a UAV (Challenge 3). Comment: 28 pages, 26 figures. To appear in Field Robotics, Special Issue on MBZIRC 2020.
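    The outcome monitoring and recovery idea can be sketched as a mission executor that checks each subtask's result and applies a bounded retry before aborting. The subtask interface, retry policy, and log format below are illustrative assumptions, not the team's actual infrastructure.

```python
# A minimal sketch of mission-level outcome monitoring: each subtask
# reports success or failure, and a recovery strategy (here, a bounded
# retry) is applied before the mission either proceeds or aborts.

def run_mission(subtasks, max_retries=2):
    """subtasks: list of (name, fn) where fn() -> bool (success)."""
    log = []
    for name, fn in subtasks:
        for attempt in range(max_retries + 1):
            if fn():
                log.append((name, "ok", attempt))
                break
            log.append((name, "fail", attempt))
        else:  # all attempts failed: abort the mission
            log.append((name, "aborted", max_retries))
            return False, log
    return True, log

# A flaky subtask that succeeds on its second attempt is recovered
# by the retry strategy instead of failing the whole mission.
state = {"tries": 0}
def flaky():
    state["tries"] += 1
    return state["tries"] >= 2

ok, log = run_mission([("takeoff", lambda: True), ("pick_block", flaky)])
```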

    Aggressive landing maneuvers for UAVs

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2006. Includes bibliographical references (p. 69-70). VTOL (Vertical Take-Off and Landing) vehicle landing is considered a critically difficult task for land, marine, and urban operations. This thesis describes one possible control approach to enable landing of unmanned aircraft systems at all attitudes, including against walls and ceilings, as a way to considerably enhance the operational capability of these vehicles. The features of the research include a novel approach to trajectory tracking, whereby the primary system outputs to be tracked are smoothly scheduled according to the state of the vehicle relative to its landing area. The proposed approach is illustrated with several experiments using a low-cost three-degree-of-freedom helicopter. We also include the design details of a testbed for demonstrating the application of our research: a model helicopter UAV platform with indoor and outdoor aggressive flight capability. by Selcuk Bayraktar. S.M.
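    The smooth scheduling of tracked outputs can be illustrated with a toy schedule: far from the landing surface the reference attitude is level, and it blends toward the surface attitude (e.g. a vertical wall) as the distance shrinks. The sigmoid form and all of its parameters are assumptions for illustration, not the thesis's actual control law.

```python
import math

# Toy output schedule for an all-attitude landing: the reference pitch
# is ~0 degrees far from a vertical wall and blends smoothly toward
# ~90 degrees as the vehicle approaches it. The sigmoid shape, the
# 1 m blend midpoint, and the sharpness are illustrative assumptions.

def attitude_reference(dist_to_wall, d0=1.0, sharpness=5.0):
    """Reference pitch in degrees as a smooth function of distance."""
    # w -> 1 far from the wall (track level flight), w -> 0 at the wall
    w = 1.0 / (1.0 + math.exp(-sharpness * (dist_to_wall - d0)))
    return 90.0 * (1.0 - w)

# At 5 m the reference is essentially level; at the wall it is near
# vertical, with a smooth transition in between (no reference jumps).
far = attitude_reference(5.0)    # close to 0 degrees
near = attitude_reference(0.0)   # close to 90 degrees
```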

    The GRIFFIN perception dataset: Bridging the gap between flapping-wing flight and robotic perception

    The development of automatic perception systems and techniques for bio-inspired flapping-wing robots is severely hampered by the high technical complexity of these platforms and the installation of onboard sensors and electronics. Besides, flapping-wing robot perception suffers from high vibration levels and abrupt movements during flight, which cause motion blur and strong changes in lighting conditions. This letter presents a perception dataset for bird-scale flapping-wing robots as a tool to help alleviate the aforementioned problems. The presented data include measurements from onboard sensors widely used in aerial robotics and suitable to deal with the perception challenges of flapping-wing robots, such as an event camera, a conventional camera, and two Inertial Measurement Units (IMUs), as well as ground truth measurements from a laser tracker or a motion capture system. A total of 21 datasets of different types of flights were collected in three different scenarios (one indoor and two outdoor). To the best of the authors' knowledge, this is the first dataset for flapping-wing robot perception. Consejo Europeo de Investigación 788247. ARM-EXTEND DPI2017-8979-

    A Manipulator-Assisted Multiple UAV Landing System for USV Subject to Disturbance

    Marine waves significantly disturb unmanned surface vehicle (USV) motion, and an unmanned aerial vehicle (UAV) can hardly land on a USV that undergoes irregular motion. An oversized landing platform is usually necessary to guarantee landing safety, which limits the number of UAVs that can be carried. We propose a landing system assisted by a tether and robotic manipulation that can land multiple UAVs without increasing the USV's size. An MPC controller stabilizes the end-effector and tracks the UAVs, and an adaptive estimator addresses the disturbance caused by the base motion. A working strategy is designed to plan the motion of each device. We validated the manipulator controller through simulations and well-controlled indoor experiments. During the field tests, the proposed system caught and placed the UAVs while the disturbed USV roll range was approximately 12 degrees.
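    The adaptive estimator idea can be sketched on a first-order toy plant: a proportional tracking law is augmented with a disturbance estimate adapted from the tracking error, so a constant base-motion disturbance is cancelled in steady state. The plant, gains, and adaptation law below are illustrative assumptions, not the paper's MPC formulation.

```python
# Sketch of an adaptive disturbance estimator for end-effector control:
# a first-order plant x' = u + d is driven by a proportional tracking
# law plus an estimate d_hat of the unknown constant disturbance d,
# adapted from the tracking error. All gains are illustrative.

def simulate(d_true=0.5, k_p=2.0, gamma=5.0, dt=0.01, steps=2000):
    x, x_ref, d_hat = 0.0, 1.0, 0.0
    for _ in range(steps):
        e = x_ref - x
        u = k_p * e - d_hat        # feedback plus disturbance cancellation
        d_hat -= gamma * e * dt    # adaptation law (Lyapunov-motivated)
        x += (u + d_true) * dt     # forward-Euler step of the plant
    return x, d_hat

# The position converges to the reference and the estimate converges
# to the true disturbance: x -> 1.0, d_hat -> 0.5.
x, d_hat = simulate()
```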

    Robots in Agriculture: State of Art and Practical Experiences

    The presence of robots in agriculture has grown significantly in recent years, overcoming some of the challenges and complications of this field. This chapter aims to provide a complete and recent state of the art on the application of robots in agriculture. The work addresses this topic from two perspectives. On the one hand, it covers the disciplines that lead the automation of agriculture, such as precision agriculture and greenhouse farming, and collects proposals for automating tasks like planting and harvesting, environmental monitoring, and crop inspection and treatment. On the other hand, it compiles and analyses the robots proposed to accomplish these tasks, e.g. manipulators, ground vehicles and aerial robots. Additionally, the chapter reports in more detail some practical experiences with the application of robot teams to crop inspection and treatment in outdoor agriculture, as well as to environmental monitoring in greenhouse farming.

    Enabling technologies for precise aerial manufacturing with unmanned aerial vehicles

    The construction industry is currently experiencing a revolution with automation techniques such as additive manufacturing and robot-enabled construction. Additive Manufacturing (AM) is a key technology that can offer productivity improvements in the construction industry by means of off-site prefabrication and on-site construction with automated systems. The key benefit is that building elements can be fabricated with less material and higher design freedom compared to traditional manual methods. Off-site prefabrication with AM has been investigated for some time already, but it has limitations in terms of the logistics of component transportation and its lack of on-site design flexibility. On-site construction with automated systems, such as static gantry systems and mobile ground robots performing AM tasks, can offer additional benefits over off-site prefabrication, but it needs further research before it becomes practical and economical. Ground-based automated construction systems also have the limitation that they cannot extend the construction envelope beyond their physical size. Using aerial robots to liberate the process from the constrained construction envelope has been suggested, albeit with technological challenges including precision of operation, uncertainty in environmental interaction and energy efficiency. This thesis investigates methods of precise manufacturing with aerial robots. In particular, this work focuses on stabilisation mechanisms and origami-based structural elements that allow aerial robots to operate in challenging environments. An integrated aerial self-aligning delta manipulator has been utilised to increase the positioning accuracy of the aerial robots, and a Material Extrusion (ME) process has been developed for Aerial Additive Manufacturing (AAM). A 28-layer tower has been additively manufactured by aerial robots to demonstrate the feasibility of AAM. Rotorigami and a bioinspired landing mechanism demonstrate the ability to overcome uncertainty in environmental interaction, with impact protection capabilities and improved robustness for UAVs. Design principles using tensile anchoring methods have been explored, enabling low-power operation and the possibility of low-power aerial stabilisation. The results demonstrate that precise aerial manufacturing needs to consider not only the robotic aspects, such as flight control algorithms and mechatronics, but also material behaviour and environmental interaction as factors for its success. Open Access

    Enabling Multi-LiDAR Sensing in GNSS-Denied Environments: SLAM Dataset, Benchmark, and UAV Tracking with LiDAR-as-a-camera

    The rise of Light Detection and Ranging (LiDAR) sensors has profoundly impacted industries ranging from automotive to urban planning. As these sensors become increasingly affordable and compact, their applications are diversifying, driving precision and innovation. This thesis delves into LiDAR's advancements in autonomous robotic systems, with a focus on its role in simultaneous localization and mapping (SLAM) methodologies and LiDAR-as-a-camera tracking for Unmanned Aerial Vehicles (UAVs). Our contributions span two primary domains: the Multi-Modal LiDAR SLAM Benchmark and LiDAR-as-a-camera UAV tracking. In the former, we have expanded our previous multi-modal LiDAR dataset by adding more data sequences from various scenarios. In contrast to the previous dataset, we employ different ground-truth-generating approaches: we propose a new multi-modal, multi-LiDAR, SLAM-assisted and ICP-based sensor fusion method for generating ground truth maps, and we supplement our data with new open-road sequences with GNSS-RTK. This enriched dataset, supported by high-resolution LiDAR, provides detailed insights through an evaluation of ten configurations, pairing diverse LiDAR sensors with state-of-the-art SLAM algorithms. In the latter contribution, we leverage a custom YOLOv5 model trained on panoramic low-resolution images from LiDAR reflectivity (LiDAR-as-a-camera) to detect UAVs, demonstrating the superiority of this approach over point-cloud-only or image-only methods. Additionally, we evaluate the real-time performance of our approach on the Nvidia Jetson Nano, a popular mobile computing platform. Overall, our research underscores the transformative potential of integrating advanced LiDAR sensors with autonomous robotics. By bridging the gaps between different technological approaches, we pave the way for more versatile and efficient applications in the future.
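    The "LiDAR-as-a-camera" idea rests on projecting per-point reflectivity into a panoramic image that a standard image detector such as YOLOv5 can consume. The following is a minimal sketch of such a spherical projection; the image resolution, vertical field of view, and point format are illustrative assumptions, not the actual sensor's image formation.

```python
import math

# Sketch of the "LiDAR-as-a-camera" projection: 3-D points carrying
# reflectivity values are mapped onto a panoramic (spherical) image
# grid by azimuth and elevation. Resolution and vertical field of
# view are illustrative assumptions for a spinning LiDAR.

def project_panorama(points, width=1024, height=64, v_fov=(-22.5, 22.5)):
    """points: iterable of (x, y, z, reflectivity).
    Returns a sparse image as {(row, col): reflectivity}."""
    img = {}
    v_lo, v_hi = (math.radians(a) for a in v_fov)
    for x, y, z, refl in points:
        r = math.sqrt(x * x + y * y + z * z)
        az = math.atan2(y, x)                 # azimuth in (-pi, pi]
        el = math.asin(z / r)                 # elevation from horizontal
        col = int((az + math.pi) / (2 * math.pi) * (width - 1))
        row = int((v_hi - el) / (v_hi - v_lo) * (height - 1))
        if 0 <= row < height:
            img[(row, col)] = refl            # keep the last hit per pixel
    return img

# A single point straight ahead at the sensor's horizon lands at the
# centre row and mid column of the panorama.
img = project_panorama([(10.0, 0.0, 0.0, 0.8)])
```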