23 research outputs found

    Visual servoing of an autonomous helicopter in urban areas using feature tracking

    We present the design and implementation of a vision-based feature tracking system for an autonomous helicopter. Visual sensing is used to estimate the position and velocity of features in the image plane (urban features such as windows) in order to generate velocity references for the flight control system. These vision-based references are then combined with GPS positioning references to navigate towards the features and track them. We present results from experimental flight trials, performed on two UAV systems under different conditions, that show the feasibility and robustness of our approach.
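    The mapping from a tracked image feature to a flight-control velocity reference can be made concrete with a short sketch. The Python fragment below is an illustrative assumption, not the authors' implementation: the gain K_P, the pinhole focal length and the blend() weighting are all placeholders.

        import numpy as np

        K_P = 0.8          # proportional gain on image-plane error (assumed value)
        FOCAL_PX = 600.0   # focal length in pixels (assumed camera model)

        def image_velocity_reference(feature_px, target_px, altitude_m):
            """Map the pixel error of a tracked feature to a lateral velocity command."""
            err_px = np.asarray(target_px, float) - np.asarray(feature_px, float)
            metres_per_px = altitude_m / FOCAL_PX   # coarse ground-sampling distance
            return K_P * err_px * metres_per_px     # [vx, vy] in m/s

        def blend(vision_ref, gps_ref, w_vision=0.7):
            """Combine vision-based and GPS-based velocity references (assumed weighting)."""
            return (w_vision * np.asarray(vision_ref, float)
                    + (1.0 - w_vision) * np.asarray(gps_ref, float))

        # Steer a window corner tracked at (310, 250) px towards the image centre from 15 m.
        v_cmd = blend(image_velocity_reference((310, 250), (320, 240), 15.0), [0.5, 0.0])
        print(v_cmd)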

    Real-Time UAV Altitude, Attitude and Motion Estimation from Hybrid Stereovision

    Knowledge of altitude, attitude and motion is essential for an Unmanned Aerial Vehicle during critical maneuvers such as landing and take-off. In this paper we present a hybrid stereoscopic rig composed of a fisheye and a perspective camera for vision-based navigation. In contrast to classical stereoscopic systems based on feature matching, we propose methods that avoid matching between hybrid views. A plane-sweeping approach is proposed for estimating altitude and detecting the ground plane. Rotation and translation are then estimated by decoupling: the fisheye camera contributes to evaluating attitude, while the perspective camera contributes to estimating the scale of the translation. The motion can be estimated robustly at metric scale thanks to the knowledge of the altitude. We propose a robust, real-time, accurate, exclusively vision-based approach with an embedded C++ implementation. Although this approach removes the need for any non-visual sensors, it can also be coupled with an Inertial Measurement Unit.
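    A minimal sketch of the plane-sweeping idea, under simplifying assumptions: both views are treated as calibrated pinhole cameras (the paper's fisheye geometry is not modeled), and the homography induced by a ground-plane hypothesis at each candidate altitude is scored photometrically. K1, K2, R, t and the synthetic test images are placeholders.

        import numpy as np
        import cv2

        def plane_homography(K1, K2, R, t, n, d):
            """Homography induced by the plane n.x + d = 0 between two pinhole views."""
            return K2 @ (R - np.outer(t, n) / d) @ np.linalg.inv(K1)

        def sweep_altitude(img_ref, img_other, K1, K2, R, t, candidates):
            """Warp the second view for each candidate altitude; keep the best SSD score."""
            n = np.array([0.0, 0.0, 1.0])          # ground-plane normal (assumed)
            best_d, best_cost = None, np.inf
            for d in candidates:
                H = plane_homography(K1, K2, R, t, n, d)
                warped = cv2.warpPerspective(img_other, H, img_ref.shape[::-1])
                cost = np.mean((img_ref.astype(float) - warped.astype(float)) ** 2)
                if cost < best_cost:
                    best_d, best_cost = d, cost
            return best_d

        K = np.diag([500.0, 500.0, 1.0]); K[0, 2] = K[1, 2] = 240.0
        img = np.random.randint(0, 255, (480, 480), np.uint8)
        print(sweep_altitude(img, img, K, K, np.eye(3),
                             np.array([0.2, 0.0, 0.0]), [5.0, 10.0, 20.0]))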

    Real-time landing place assessment in man-made environments

    We propose a novel approach to real-time landing-site detection and assessment in unconstrained man-made environments using passive sensors. Because this task must be performed in a few seconds or less, existing methods are often limited to simple local intensity and edge-variation cues. By contrast, we show how to efficiently take into account a potential site's global shape, which is a critical cue in man-made scenes. Our method relies on a new segmentation algorithm and shape-regularity measure to look for polygonal regions in video sequences. In this way, we enforce both temporal consistency and geometric regularity, resulting in very reliable and consistent detections. We demonstrate our approach on the detection of landable sites such as rural fields, building rooftops and runways from color and infrared monocular sequences, significantly outperforming the state of the art.
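    One plausible way to realize a shape-regularity cue of this kind, sketched in Python with OpenCV; the epsilon fraction and the scoring rule are assumptions for illustration, not the paper's actual segmentation algorithm or measure.

        import numpy as np
        import cv2

        def polygonal_regularity(mask, eps_frac=0.02):
            """Return (score, n_vertices) for the largest region in a binary mask."""
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return 0.0, 0
            c = max(contours, key=cv2.contourArea)
            peri = cv2.arcLength(c, True)
            poly = cv2.approxPolyDP(c, eps_frac * peri, True)
            # Regular man-made shapes are explained by few vertices, with little
            # area lost between the contour and its polygonal approximation.
            a_contour, a_poly = cv2.contourArea(c), cv2.contourArea(poly)
            score = min(a_contour, a_poly) / max(a_contour, a_poly, 1e-9)
            return score, len(poly)

        mask = np.zeros((200, 200), np.uint8)
        cv2.rectangle(mask, (40, 60), (160, 140), 255, -1)   # a rooftop-like quad
        print(polygonal_regularity(mask))                     # high score, 4 vertices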

    A Vision-Based Automatic Safe Landing-Site Detection System

    An automatic safe landing-site detection system is proposed for aircraft emergency landing, based on visible information acquired by aircraft-mounted cameras. Emergency landing is an unplanned event in response to emergency situations. If, as is unfortunately usually the case, there is no airstrip or airfield that can be reached by the unpowered aircraft, a crash landing or ditching has to be carried out. Identifying a safe landing-site is critical to the survival of passengers and crew. Conventionally, the pilot chooses the landing-site visually by looking at the terrain through the cockpit. The success of this vital decision depends greatly on external environmental factors that can impair human vision, and on the pilot's flight experience, which can vary significantly among pilots. Therefore, we propose a robust, reliable and efficient detection system that is expected to alleviate the negative impact of these factors. In this study, we focus on the detection mechanism of the proposed system and assume that image enhancement for increased visibility and image stitching for a larger field of view have already been performed on the terrain images acquired by aircraft-mounted cameras. Specifically, we first propose a hierarchical elastic horizon detection algorithm to identify the ground in the image. The terrain image is then divided into non-overlapping blocks, which are clustered according to a roughness measure. Adjacent smooth blocks are merged to form potential landing-sites, whose dimensions are measured with principal component analysis and geometric transformations. If the dimensions of a candidate region exceed the minimum requirement for safe landing, the potential landing-site is considered a safe candidate and highlighted on the human-machine interface. At the end, the pilot makes the final decision by confirming one of the candidates, also considering other factors such as wind speed and wind direction.
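    The block-roughness and PCA steps lend themselves to a compact sketch. The fragment below is a hedged approximation: the block size and roughness threshold are assumed values, and the paper's merging of adjacent smooth blocks is simplified to a single connected-components pass.

        import numpy as np
        from scipy import ndimage

        def smooth_region_dimensions(img, block=16, rough_thresh=12.0):
            """Tile the image, keep smooth blocks, measure merged regions with PCA."""
            h, w = img.shape
            hb, wb = h // block, w // block
            tiles = img[:hb * block, :wb * block].reshape(hb, block, wb, block)
            rough = tiles.std(axis=(1, 3))                 # per-block roughness
            smooth = rough < rough_thresh
            labels, n = ndimage.label(smooth)              # merge adjacent smooth blocks
            dims = []
            for k in range(1, n + 1):
                ys, xs = np.nonzero(labels == k)
                pts = np.stack([xs, ys], axis=1).astype(float) * block
                cov = np.cov(pts.T) if len(pts) > 1 else np.zeros((2, 2))
                lengths = 2.0 * np.sqrt(np.maximum(np.linalg.eigvalsh(cov), 0.0))
                dims.append(lengths)                       # minor/major extent in pixels
            return dims

        img = np.random.randint(0, 255, (256, 256)).astype(float)
        img[64:192, 64:192] = 128.0                        # one flat candidate region
        print(smooth_region_dimensions(img))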

    Vision-Aided Navigation for GPS-Denied Environments Using Landmark Feature Identification

    In recent years, unmanned autonomous vehicles have been used in diverse applications because of their multifaceted capabilities. In most cases, the navigation systems for these vehicles depend on Global Positioning System (GPS) technology. Many applications of interest, however, entail operations in environments in which GPS is intermittent or completely denied. These applications include operations in complex urban or indoor environments as well as missions in adversarial environments where GPS might be denied by jamming. This thesis investigates the development of vision-aided navigation algorithms that use processed images from a monocular camera as an alternative to GPS. The vision-aided navigation approach explored in this thesis entails defining a set of inertial landmarks whose locations within the environment are known, and employing image processing algorithms to detect these landmarks in image frames collected from an onboard monocular camera. These vision-based landmark measurements effectively serve as surrogate GPS measurements that can be incorporated into a navigation filter. Several image processing algorithms were considered for landmark detection; this thesis focuses in particular on two approaches: the continuous adaptive mean shift (CAMSHIFT) algorithm and the adaptable compressive (ADCOM) tracking algorithm. These algorithms are discussed in detail and applied to the detection and tracking of landmarks in monocular camera images. Navigation filters are then designed that fuse accelerometer and rate-gyro data from an inertial measurement unit (IMU) with vision-based measurements of the centroids of one or more landmarks in the scene. These filters are tested in simulated navigation scenarios subject to varying levels of sensor and measurement noise and varying numbers of landmarks. Finally, conclusions and recommendations are provided regarding the implementation of this vision-aided navigation approach in autonomous vehicle navigation systems.
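    The surrogate-GPS idea can be illustrated with a toy extended Kalman filter correction, sketched below under strong simplifications: the state is reduced to 3-D position with a fixed, forward-looking camera, and the IMU propagation step and the CAMSHIFT/ADCOM front end are omitted. The intrinsics and noise values are assumptions, not the thesis configuration.

        import numpy as np

        F_PX, CX, CY = 600.0, 320.0, 240.0     # assumed pinhole intrinsics

        def h(pos, landmark):
            """Project a known landmark into a camera at `pos` looking along +x."""
            d = landmark - pos
            return np.array([CX + F_PX * d[1] / d[0], CY - F_PX * d[2] / d[0]])

        def ekf_update(x, P, z_px, landmark, R_px):
            """Standard EKF correction with a numerically differentiated Jacobian."""
            H, eps = np.zeros((2, 3)), 1e-5
            for i in range(3):
                dx = np.zeros(3); dx[i] = eps
                H[:, i] = (h(x + dx, landmark) - h(x - dx, landmark)) / (2 * eps)
            S = H @ P @ H.T + R_px
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z_px - h(x, landmark))
            P = (np.eye(3) - K @ H) @ P
            return x, P

        x, P = np.zeros(3), np.eye(3) * 4.0
        landmark = np.array([20.0, 2.0, 1.0])             # known map position
        z = h(np.array([0.5, -0.3, 0.2]), landmark)       # pixel centroid "measurement"
        x, P = ekf_update(x, P, z, landmark, np.eye(2) * 2.0)
        print(x)                                          # pulled toward the true pose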

    Adaptive control of autonomous helicopters

    Chen, Yipin. Thesis (M.Phil.), Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 81-83). Abstracts in English and Chinese. Contents: Chapter 1, Introduction (motivation and literature review; background; research overview; thesis outline); Chapter 2, Kinematic and Dynamic Modeling (helicopter dynamics; kinematics of point feature projection; kinematics of line feature projection); Chapter 3, Adaptive Visual Servoing with Uncalibrated Camera (on-line parameter estimation; controller design; stability analysis; simulation); Chapter 4, Adaptive Control with Unknown IMU Position (control strategies, including a dynamic model with rotor dynamics; stability analysis; simulation); Chapter 5, Conclusions (summary; contributions; future research); Appendices A-E (inertial matrix of the helicopter; induced torque; unknown parameter vectors and initial estimation values; Cauchy inequality; rotor dynamics); Bibliography.

    Bio-Inspired Information Extraction in 3-D Environments Using Wide-Field Integration of Optic Flow

    A control-theoretic framework is introduced to analyze an approach to information extraction from patterns of optic flow, based on analogues to the wide-field motion-sensitive interneurons of the insect visuomotor system. An algebraic model of optic flow is developed, based on a parameterization of simple 3-D environments. It is shown that estimates of proximity and speed, relative to these environments, can be extracted using weighted summations of the instantaneous patterns of optic flow. Small-perturbation techniques are used to link weighting patterns to outputs, which are applied as feedback to facilitate stability augmentation and to perform local obstacle avoidance and terrain following. Weighting patterns that provide direct linear mappings between the sensor array and actuator commands can be derived by casting the problem as a combined static state estimation and linear feedback control problem. Additive noise and environment uncertainties are incorporated into an offline procedure for determining optimal weighting patterns. Several applications of the method are provided, with differing spatial measurement domains. Nonlinear stability analysis and an experimental demonstration are presented for a wheeled robot measuring optic flow in a planar ring. Local stability analysis and simulation are used to show robustness over a range of urban-like environments for a fixed-wing UAV measuring in orthogonal rings and for a micro helicopter measuring over the full spherical viewing arena. Finally, the framework is used to analyze insect tangential cells with respect to the information they encode, and to demonstrate how cell outputs can be appropriately amplified and combined to generate motor commands that achieve reflexive navigation behavior.
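    A small numeric sketch of wide-field integration on a planar ring, in the spirit of the framework above: optic flow in an idealized infinite corridor is projected onto Fourier-harmonic weighting patterns to recover speed- and offset-like signals. The corridor nearness model follows the standard WFI formulation, but the exact conventions, gains and the choice of harmonics here are assumptions.

        import numpy as np

        N = 64
        gamma = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)   # viewing angles

        def ring_flow(u, v, omega, y, width=2.0):
            """Tangential optic flow on a ring inside an infinite corridor."""
            # Nearness mu(gamma): inverse distance to whichever wall the ray hits.
            with np.errstate(divide="ignore"):
                d = np.where(np.sin(gamma) > 0,
                             (width / 2 - y) / np.sin(gamma),
                             np.where(np.sin(gamma) < 0,
                                      -(width / 2 + y) / np.sin(gamma), np.inf))
            mu = 1.0 / d
            return mu * (u * np.sin(gamma) - v * np.cos(gamma)) - omega

        def wfi(Q, weight):
            """Wide-field integration: inner product of flow with a weighting pattern."""
            return (2.0 * np.pi / N) * np.dot(weight, Q)

        Q = ring_flow(u=1.0, v=0.0, omega=0.1, y=0.3)     # flying off-centre
        print(wfi(Q, np.sin(gamma)))        # ~ forward speed over corridor width
        print(wfi(Q, np.cos(2 * gamma)))    # ~ lateral offset from the centreline

    Projecting onto cos 2γ is one way to isolate the lateral-offset signal, since that pattern is orthogonal to the constant flow component contributed by the rotation rate.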