
    Airborne vision-based attitude estimation and localisation

    Vision plays an integral part in a pilot's ability to navigate and control an aircraft. Visual Flight Rules have therefore been developed around the pilot's ability to see the environment outside the cockpit in order to control the attitude of the aircraft, to navigate, and to avoid obstacles. Automating these processes with a vision system could greatly increase the reliability and autonomy of unmanned aircraft and flight automation systems. This thesis investigates the development and implementation of a robust vision system that fuses inertial information with visual information in a probabilistic framework, with the aim of aircraft navigation. The appearance of the horizon is a strong visual indicator of the attitude of the aircraft. This leads to the first research area of this thesis, visual horizon attitude determination. An image processing method was developed to provide high-performance horizon detection and extraction from camera imagery. A number of horizon models were developed to link the detected horizon to the attitude of the aircraft with varying degrees of accuracy. The second area investigated in this thesis was visual localisation of the aircraft. A terrain-aided horizon model was developed to estimate the position and altitude as well as the attitude of the aircraft, giving rough position estimates with highly accurate attitude information. The visual localisation accuracy was improved by incorporating ground-feature-based map-aided navigation: road intersections were detected using a purpose-built image processing algorithm and matched to a database to provide positional information. The developed vision system shows performance comparable to other non-vision-based systems while removing the dependence on external systems for navigation. The vision system and techniques developed in this thesis help to increase the autonomy of unmanned aircraft and flight automation systems for manned flight.
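
    The geometric core of visual horizon attitude determination can be sketched compactly: take the strongest straight edge in the frame as the horizon, read roll off its slope, and approximate pitch from its vertical offset through the focal length. The snippet below is a minimal illustration of that idea, not the thesis's method; it assumes a forward-looking camera with a known focal length in pixels, and all function names and thresholds are assumptions.

        # Minimal sketch: attitude from a detected horizon line (hypothetical
        # names and thresholds; assumes a forward-looking camera).
        import numpy as np
        import cv2

        def attitude_from_horizon(gray, focal_px):
            """Estimate roll and pitch (radians) from the dominant straight edge."""
            edges = cv2.Canny(gray, 50, 150)
            lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
            if lines is None:
                return None  # no horizon candidate in this frame
            rho, theta = lines[0][0]  # strongest line in the Hough accumulator
            if abs(np.sin(theta)) < 1e-6:
                return None  # near-vertical edge cannot be the horizon
            roll = theta - np.pi / 2  # deviation of the line from horizontal
            # Small-angle pitch: vertical offset of the horizon at the image
            # centre column, converted to an angle via the focal length.
            h, w = gray.shape
            y_horizon = (rho - (w / 2) * np.cos(theta)) / np.sin(theta)
            pitch = np.arctan2(y_horizon - h / 2, focal_px)
            return roll, pitch

    In a fused system of the kind the thesis describes, such per-frame estimates would feed a probabilistic filter alongside the inertial measurements rather than being used raw.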

    LOFAR Sparse Image Reconstruction

    Context. The LOw Frequency ARray (LOFAR) radio telescope is a giant digital phased-array interferometer with multiple antennas distributed across Europe. It provides discrete sets of Fourier components of the sky brightness. Recovering the original brightness distribution with aperture synthesis forms an inverse problem that can be solved by various deconvolution and minimization methods. Aims. Recent papers have established a clear link between the discrete nature of radio interferometry measurements and "compressed sensing" (CS) theory, which supports sparse reconstruction methods to form an image from the measured visibilities. Empowered by proximal theory, CS offers a sound framework for efficient global minimization and sparse data representation using fast algorithms. Combined with instrumental direction-dependent effects (DDE) in the scope of a real instrument, we developed and validated a new method based on this framework. Methods. We implemented a sparse reconstruction method in the standard LOFAR imaging tool and compared the photometric and resolution performance of this new imager with that of CLEAN-based methods (CLEAN and MS-CLEAN) on simulated and real LOFAR data. Results. We show that sparse reconstruction i) performs as well as CLEAN in recovering the flux of point sources; ii) performs much better on extended objects (the root mean square error is reduced by a factor of up to 10); and iii) provides a solution with an effective angular resolution 2-3 times better than the CLEAN images. Conclusions. Sparse recovery gives correct photometry on high-dynamic-range and wide-field images and improved, realistic structures of extended sources (on simulated and real LOFAR datasets). This sparse reconstruction method is compatible with modern interferometric imagers that handle DDE corrections (A- and W-projections) required for current and future instruments such as LOFAR and SKA. Comment: Published in A&A, 19 pages, 9 figures.
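
    In its simplest form, the proximal machinery behind such sparse imagers reduces to iterative soft-thresholding (ISTA) on an L1-regularised least-squares objective. The toy sketch below recovers a sparse sky from incomplete Fourier samples, with a masked orthonormal FFT standing in for the interferometric measurement operator; it illustrates the principle only, is not the LOFAR pipeline, and every parameter is an assumption.

        # Toy sparse recovery from incomplete Fourier samples via ISTA:
        # minimise 0.5 * ||M F x - vis||^2 + lam * ||x||_1.
        import numpy as np

        def ista_reconstruct(vis, mask, lam=0.05, n_iter=200):
            x = np.zeros(mask.shape)
            step = 1.0  # safe: ||M F|| <= 1 for an orthonormal FFT and 0/1 mask
            for _ in range(n_iter):
                residual = mask * np.fft.fft2(x, norm="ortho") - vis
                grad = np.real(np.fft.ifft2(mask * residual, norm="ortho"))
                z = x - step * grad
                # Soft-thresholding is the proximal operator of the L1 norm.
                x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
            return x

        # Usage: a few point sources observed with 30% Fourier coverage.
        rng = np.random.default_rng(0)
        sky = np.zeros((64, 64))
        sky[rng.integers(0, 64, 5), rng.integers(0, 64, 5)] = 1.0
        mask = rng.random((64, 64)) < 0.3
        vis = mask * np.fft.fft2(sky, norm="ortho")
        recon = ista_reconstruct(vis, mask)

    Production methods replace the plain FFT with operators that fold in the DDE corrections (A- and W-projections) and use accelerated variants of this iteration.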

    Adaptative road lanes detection and classification

    Proceedings of: 8th International Conference, ACIVS 2006, Antwerp, Belgium, September 18-21, 2006. This paper presents a Road Detection and Classification algorithm for Driver Assistance Systems (DAS), which tracks several road lanes and identifies the type of lane boundaries. The algorithm uses an edge filter to extract the longitudinal road markings, to which a straight lane model is fitted. Next, the type of the right and left lane boundaries (continuous, broken, or merge line) is identified using a Fourier analysis. Adjacent lanes are searched for when broken or merge lines are detected. Although knowledge of the line type is essential for a robust DAS, it has seldom been considered in previous works. This knowledge helps to guide the search for other lanes, and it is the basis for identifying the type of road (one-way, two-way, or freeway), as well as for telling the difference between allowed and forbidden maneuvers, such as crossing a continuous line.
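
    The Fourier-based line-type test can be illustrated with a short sketch: sample the marking intensity along the fitted lane boundary and compare the strongest periodic component against the mean brightness. A dashed line produces a strong periodic peak; a continuous line does not. This covers only the continuous/broken distinction, not merge lines, and the threshold is an assumption rather than the paper's value.

        # Hypothetical sketch of line-type classification by Fourier analysis.
        import numpy as np

        def classify_lane_boundary(profile, ratio_thresh=0.5):
            """profile: 1-D intensity samples taken along the fitted lane line."""
            spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
            dc = np.abs(profile.mean()) * len(profile) + 1e-9  # DC-bin magnitude
            peak = spectrum[1:].max() if len(spectrum) > 1 else 0.0
            # Broken (dashed) markings show a strong periodic component
            # relative to the mean brightness; continuous markings do not.
            return "broken" if peak / dc > ratio_thresh else "continuous"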

    Sensitivity Analysis of an Automated Calibration Routine for Airborne Cameras

    Given a known aircraft location, a set of camera calibration parameters can be used to correlate features in an image with ground locations. Previously, these calibration parameters were obtained before flight with a lengthy calibration process. A method is developed to automate this calibration using images taken with an aircraft-mounted camera together with position and attitude data. This thesis seeks to determine a partial set of circumstances that affect the accuracy of the calibration results through simulation and experimental flight test. A software simulator is developed in which to test an array of aircraft maneuvers, camera orientations, and noise injection. Results from the simulation are used to prepare test points for an experimental flight test conducted to validate the calibration algorithm and the simulator. Real-world flight test methodology and results are discussed. Images of the ground, along with precise aircraft navigation and time data, were gathered and processed for several representative aircraft maneuvers using two camera orientations.
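
    The forward model such a calibration rests on is the projection of a known ground point into the image given the aircraft pose and the camera mounting (boresight) angles being estimated. The sketch below uses simplified frames and 3-2-1 Euler angles; the names and conventions are assumptions for illustration, not the thesis's exact model.

        # Hypothetical pinhole forward model for airborne camera calibration.
        import numpy as np

        def rot(roll, pitch, yaw):
            """Body-to-local rotation from 3-2-1 Euler angles (radians)."""
            cr, sr = np.cos(roll), np.sin(roll)
            cp, sp = np.cos(pitch), np.sin(pitch)
            cy, sy = np.cos(yaw), np.sin(yaw)
            Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
            Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
            Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
            return Rz @ Ry @ Rx

        def project(ground_pt, aircraft_pos, attitude, boresight, focal_px):
            """Pixel coordinates of a ground point under the current calibration."""
            ray_local = ground_pt - aircraft_pos        # local-frame line of sight
            # Rotate into the camera frame: local -> body -> camera.
            R_cam = rot(*boresight).T @ rot(*attitude).T
            x, y, z = R_cam @ ray_local
            return focal_px * np.array([x / z, y / z])  # pinhole, assumes z > 0

    An automated calibration routine would then adjust the boresight angles to minimize the residuals between projected and observed feature pixels over many frames, for instance with a nonlinear least-squares solver such as scipy.optimize.least_squares.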

    Convolutional neural networks: a magic bullet for gravitational-wave detection?

    In the last few years, machine learning techniques, in particular convolutional neural networks, have been investigated as a method to replace or complement the traditional matched-filtering techniques used to detect the gravitational-wave signature of merging black holes. However, to date, these methods have not yet been successfully applied to the analysis of long stretches of data recorded by the Advanced LIGO and Virgo gravitational-wave observatories. In this work, we critically examine the use of convolutional neural networks as a tool to search for merging black holes. We identify the strengths and limitations of this approach, highlight some common pitfalls in translating between machine learning and gravitational-wave astronomy, and discuss the interdisciplinary challenges. In particular, we explain in detail why convolutional neural networks alone cannot be used to claim a statistically significant gravitational-wave detection. However, we demonstrate how they can still be used to rapidly flag the times of potential signals in the data for a more detailed follow-up. Our convolutional neural network architecture, as well as the proposed performance metrics, are better suited for this task than a standard binary classification scheme. A detailed evaluation of our approach on Advanced LIGO data demonstrates the potential of such systems as trigger generators. Finally, we sound a note of caution by constructing adversarial examples, which showcase interesting "failure modes" of our model, where inputs with no visible resemblance to real gravitational-wave signals are identified as such by the network with high confidence. Comment: First two authors contributed equally; appeared at Phys. Rev.
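
    The trigger-generator role described above amounts to sliding a small network over long stretches of whitened strain and flagging windows whose score crosses a threshold for matched-filter follow-up, rather than treating the output as a detection statistic. The sketch below illustrates that pattern with a deliberately small 1-D CNN; the architecture, sizes, and threshold are illustrative assumptions, not the paper's model.

        # Hedged sketch of a CNN trigger generator over 1-D strain data.
        import torch
        import torch.nn as nn

        class TriggerCNN(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv1d(1, 16, kernel_size=16, stride=2), nn.ReLU(),
                    nn.Conv1d(16, 32, kernel_size=8, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                    nn.Linear(32, 1),  # raw score, not a calibrated probability
                )

            def forward(self, x):  # x: (batch, 1, window_samples)
                return self.net(x).squeeze(-1)

        def flag_triggers(model, strain, window, hop, threshold):
            """Return sample indices where a window's score exceeds threshold.
            strain: 1-D torch tensor of whitened detector data."""
            windows = strain.unfold(0, window, hop)  # (n_windows, window)
            with torch.no_grad():
                scores = model(windows.unsqueeze(1))
            return (scores > threshold).nonzero(as_tuple=True)[0] * hop

    Each flagged index marks a candidate time for a more detailed, statistically quantified follow-up analysis.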