
    A Review of DJI’s Mavic Pro Precision Landing Accuracy

    Precision landing has the potential to increase the accuracy of autonomous landings. Unique applications require specific landing performance; for example, wireless charging loses efficiency with a misalignment of 100 mm. Unfortunately, there is no publicly available information on the DJI Mavic Pro's landing specifications. This research investigated the ability of a Mavic Pro to land accurately at a specified point, to determine whether precision landings are more accurate than non-precision autonomous landings and whether the Mavic Pro is capable of applications such as wireless charging when using precision landings. A total of 128 landings (64 precision and 64 non-precision) were recorded. A two-tailed two-sample t-test compared Precision Landing On vs. Precision Landing Off (PLON vs. PLOFF). The data provided statistical evidence to reject the null hypothesis, indicating significantly better mean landing accuracy with PLON (M = 3.45 in., SD = 1.30 in.) than with PLOFF (M = 4.40 in., SD = 1.89 in.), t(109) = -3.313, p = 0.0013. A one-tailed one-sample t-test comparing the PLON landing distance against 100 mm (the distance for effective wireless charging) likewise provided statistical evidence to reject the null hypothesis, indicating that the mean PLON landing distance (M = 87.63 mm, SD = 33.02 mm) was less than 100 mm, t(62) = -2.98, p = 0.002. The evidence showed that precision landings improved landing performance and may enable future applications, including wireless charging.
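    As a sanity check, both reported t-statistics can be reproduced from the summary statistics alone. The sketch below is illustrative rather than taken from the paper: it assumes Welch's (unequal-variance) form for the two-sample test with n = 64 per group, and n = 63 for the one-sample test (consistent with the reported df of 62).

```python
from math import sqrt
from scipy import stats

# Two-sample (Welch) t-test from summary statistics:
# PLON:  M = 3.45 in., SD = 1.30 in., n = 64 (assumed)
# PLOFF: M = 4.40 in., SD = 1.89 in., n = 64 (assumed)
t2, p2 = stats.ttest_ind_from_stats(3.45, 1.30, 64,
                                    4.40, 1.89, 64,
                                    equal_var=False)
print(f"two-sample: t = {t2:.3f}, p = {p2:.4f}")  # ~ t = -3.313, p = 0.0013

# One-tailed one-sample t-test against 100 mm from summary statistics
# (n = 63 assumed, matching the reported df of 62).
m, sd, n, mu0 = 87.63, 33.02, 63, 100.0
t1 = (m - mu0) / (sd / sqrt(n))
p1 = stats.t.cdf(t1, df=n - 1)  # one-tailed: P(T <= t)
print(f"one-sample: t = {t1:.2f}, p = {p1:.3f}")  # ~ t = -2.97, p = 0.002
```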

    Vision-based Marker-less Landing of a UAS on Moving Ground Vehicle

    In recent years the use of unmanned air systems (UAS) has grown rapidly. These small, often inexpensive platforms have been used to aid in tasks such as search and rescue, medical deliveries, disaster relief, and more. In many use cases, UAS work alongside unmanned ground vehicles (UGVs) to complete autonomous tasks. For end-to-end autonomous cooperation, the UAS needs to be able to autonomously take off from and land on the UGV. Current autonomous landing solutions often rely on fiducial markers to localize the UGV relative to the UAS, an external ground computer to aid in computation, or a gimbaled camera on board the UAS. This thesis demonstrates a vision-based autonomous landing system that does not rely on fiducial markers, completes all computation on board the UAS, and uses a fixed, non-gimbaled camera. The algorithms are tailored to low size, weight, and power constraints: all compute and sensing components weigh less than 100 grams. This thesis extends current efforts by localizing the UGV relative to the UAS using neural network object detection and the camera's intrinsic properties instead of the commonplace fiducial markers. An object detection neural network detects the UGV within an image captured by the camera on board the UAS. A localization algorithm then uses the UGV's pixel position within the image to estimate the UGV's position relative to the UAS. This estimated position is passed to a command generator that sends setpoints to the on-board PX4 flight control unit (FCU). The autonomous landing system was developed and validated in a high-fidelity simulation environment before outdoor experiments were conducted.
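    The pixel-to-position step described above can be illustrated with the standard pinhole camera model. The sketch below is a hypothetical reconstruction rather than the thesis code: it assumes a fixed, downward-facing camera with known intrinsics, a known height above the UGV (e.g., from the altimeter), and it ignores lens distortion and vehicle attitude.

```python
from dataclasses import dataclass

@dataclass
class CameraIntrinsics:
    fx: float  # focal length in pixels (x)
    fy: float  # focal length in pixels (y)
    cx: float  # principal point x (pixels)
    cy: float  # principal point y (pixels)

def pixel_to_relative_position(u: float, v: float,
                               height_m: float,
                               K: CameraIntrinsics) -> tuple[float, float, float]:
    """Back-project a detected pixel (u, v) to a 3D offset in the camera
    frame, assuming a downward-facing camera at height_m above the target.

    Pinhole model: u = fx * X / Z + cx, v = fy * Y / Z + cy, with Z = height_m.
    """
    x = (u - K.cx) * height_m / K.fx  # offset along image x (meters)
    y = (v - K.cy) * height_m / K.fy  # offset along image y (meters)
    return x, y, height_m

# Example: UGV bounding-box center detected at pixel (710, 420), UAS 8 m
# above the UGV, with hypothetical intrinsics.
K = CameraIntrinsics(fx=900.0, fy=900.0, cx=640.0, cy=360.0)
print(pixel_to_relative_position(710.0, 420.0, 8.0, K))
# -> (0.622, 0.533, 8.0): lateral offsets of ~0.6 m and ~0.5 m in the
# camera frame; these are the offsets a setpoint generator would null out.
```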

    Towards Autonomous Vertical Landing on Ship-Decks Using Computer Vision

    The objective of this dissertation is to develop and demonstrate autonomous ship-board landing with computer vision. The problem is hard primarily because of the unpredictable, stochastic nature of deck motion. The work involves a fundamental understanding of how vision works, what is needed to implement it, how it interacts with aircraft controls, the necessary and sufficient hardware and software, how it differs from human vision, its limits, and finally the avenues of growth in the context of aircraft landing. The ship-deck motion dataset is provided by the U.S. Navy. This data is analyzed to gain fundamental understanding and is then used to replicate stochastic deck motion in a laboratory setting on a six-degrees-of-freedom motion platform, also called a Stewart platform. The method uses a shaping filter derived from the dataset to excite the platform. An autonomous quadrotor UAV is designed and fabricated for experimental testing of vision-based landing methods. The entire structure, avionics architecture, and flight controls for the aircraft are developed in-house, providing the flexibility and fundamental understanding needed for this research. A fiducial-based vision system is first designed for detection and tracking of the ship-deck. This is then used to design a tracking controller with the best possible bandwidth to track the deck with minimum error. Systematic experiments are conducted with static, sinusoidal, and stochastic motions to quantify the tracking performance. A feature-based vision system is designed next. Simple experiments quantitatively and qualitatively evaluate the superior robustness of feature-based vision under various degraded visual conditions: (1) partial occlusion, (2) illumination variation, (3) glare, and (4) water distortion. The weight and power penalties of feature-based vision are also determined. The results show that it is possible to land autonomously on a ship-deck using computer vision alone. An autonomous aircraft can be constructed with only an IMU and visual odometry software running on a stereo camera. The aircraft then needs only a monocular, global-shutter, high-frame-rate camera as an extra sensor to detect the ship-deck and estimate its relative position. The relative velocity, however, needs to be derived with a Kalman filter on the position signal; for the filter, knowledge of the disturbance/motion spectrum is not needed, and a white-noise disturbance model is sufficient. For control, a minimum bandwidth of 0.15 Hz is required. For vision, a fiducial is not needed; a feature-rich landing area is all that is required. The limits of the algorithm are set by occlusion (up to 80% tolerable), illumination (20,000 lux down to 0.01 lux), landing angle (up to 45 degrees), the 2D nature of the features, and motion blur. Future research should extend the capability to 3D features and the use of event-based cameras. Feature-based vision is more versatile and human-like than fiducial-based vision, but at the cost of roughly 20 times more computing power, which is increasingly available with modern processors. The goal is not to imitate nature but to derive inspiration from it and overcome its limitations. Feature-based landing opens a window toward emulating the best of human training and cognition without its burdens of latency, fatigue, and divided attention.
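    The velocity-estimation step described above can be sketched as a discrete-time Kalman filter with a constant-velocity model driven by white-noise acceleration, matching the abstract's point that no knowledge of the motion spectrum is required. The code below is an illustrative reconstruction rather than the dissertation's implementation; the sampling rate, process-noise intensity q, and measurement noise r are hypothetical values.

```python
import numpy as np

def kalman_velocity_filter(z_positions, dt=0.02, q=1.0, r=0.01):
    """Estimate relative velocity from noisy vision-derived positions using
    a 1D constant-velocity Kalman filter with a white-noise (random
    acceleration) disturbance model.

    State x = [position, velocity]; measurement z = position only.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])             # only position is measured
    # Process covariance for white-noise acceleration of intensity q.
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                      [dt**3 / 2, dt**2]])
    R = np.array([[r]])                    # position measurement noise

    x = np.zeros(2)                        # initial [pos, vel] estimate
    P = np.eye(2)                          # initial state covariance
    estimates = []
    for z in z_positions:
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the position measurement.
        y = z - H @ x                      # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)             # columns: position, velocity

# Example: a deck heaving at 0.2 Hz, sampled at 50 Hz with measurement noise.
t = np.arange(0, 10, 0.02)
z = 0.5 * np.sin(2 * np.pi * 0.2 * t) + np.random.normal(0, 0.1, t.size)
est = kalman_velocity_filter(z)
print(est[-1])  # filtered [position, velocity] at the final sample
```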