1,184 research outputs found

    Evaluation of machine vision techniques for use within flight control systems

    In this thesis, two of the main technical limitations to the large-scale deployment of Unmanned Aerial Vehicles (UAVs) are considered. The Aerial Refueling problem is analyzed in the first section. A solution based on the integration of 'conventional' GPS/INS and a Machine Vision sensor is proposed, with the purpose of measuring the relative distance between a refueling tanker and the UAV. In this effort, comparisons between Point Matching (PM) and Pose Estimation (PE) algorithms are developed in order to improve the performance of the Machine Vision sensor. A method of integration between the GPS/INS and Machine Vision systems, based on an Extended Kalman Filter (EKF), is also developed with the goal of reducing the tracking error in the 'pre-contact' to contact and refueling phases. In the second section of the thesis, the issue of Collision Identification (CI) is addressed. The proposed solution uses Optical Flow (OF) algorithms to detect possible collisions within the field of view of a single camera. The effort includes a study of the performance of different Optical Flow algorithms in different scenarios, as well as a method to compute the ideal optical flow against which the algorithms are evaluated. An analysis of the suitability of all the analyzed algorithms for a future real-time implementation is also performed. Results of the tests show that Machine Vision technology can be used to improve performance in the Aerial Refueling problem. In the Collision Identification problem, Machine Vision has to be integrated with standard sensors in order to be used within the Flight Control System.
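
    As a point of reference only (not the thesis's implementation), the sketch below shows a single Kalman measurement update fusing a hypothetical machine-vision relative-position measurement into a GPS/INS position/velocity estimate; the state layout, measurement model, and noise values are illustrative assumptions.

```python
# Minimal sketch: one Kalman measurement update fusing a machine-vision
# relative-position measurement into a GPS/INS state estimate.
# State layout, matrices, and noise levels are illustrative assumptions.
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update with state x, covariance P,
    measurement z, measurement model H, and measurement noise R."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Example: 6-state [relative position, relative velocity]; the vision
# sensor observes relative position only.
x = np.zeros(6)
P = np.eye(6) * 10.0
H = np.hstack([np.eye(3), np.zeros((3, 3))])
R = np.eye(3) * 0.25                        # assumed vision noise (m^2)
z = np.array([30.0, 1.5, -2.0])             # hypothetical tanker-UAV offset (m)
x, P = kalman_update(x, P, z, H, R)
```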

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as low-latency, high-speed, and high-dynamic-range settings. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
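
    To make the event data model concrete, the following minimal sketch accumulates the signed per-pixel events described above into a simple event frame over a short time window; the event tuples, image size, and window length are illustrative assumptions rather than any particular sensor's output.

```python
# Minimal sketch of the event data model: each event is (t, x, y, polarity).
# Summing signed events over a short window yields a crude "event frame".
import numpy as np

def accumulate_events(events, height, width, t_start, t_end):
    """Sum event polarities per pixel over the interval [t_start, t_end)."""
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, polarity in events:        # polarity is +1 or -1
        if t_start <= t < t_end:
            frame[y, x] += polarity
    return frame

# Hypothetical events: (timestamp in microseconds, x, y, sign of change)
events = [(10, 5, 3, +1), (12, 5, 3, +1), (20, 7, 2, -1)]
frame = accumulate_events(events, height=8, width=8, t_start=0, t_end=50)
```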

    Theory, Design, and Implementation of Landmark Promotion Cooperative Simultaneous Localization and Mapping

    Simultaneous Localization and Mapping (SLAM) is a challenging problem in practice, and the use of multiple robots and inexpensive sensors places even more demands on the designer. Cooperative SLAM poses specific challenges in the areas of computational efficiency, software/network performance, and robustness to errors. New methods in image processing, recursive filtering, and SLAM have been developed to implement practical algorithms for cooperative SLAM on a set of inexpensive robots. The Consolidated Unscented Mixed Recursive Filter (CUMRF) is designed to handle non-linear systems with non-Gaussian noise. This is accomplished using the Unscented Transform combined with Gaussian Mixture Models. The Robust Kalman Filter is an extension of the Kalman Filter algorithm that improves the ability to remove erroneous observations using Principal Component Analysis (PCA) and the X84 outlier rejection rule. Forgetful SLAM is a local SLAM technique that runs in nearly constant time relative to the number of visible landmarks and improves poorly performing sensors through sensor fusion and outlier rejection. Forgetful SLAM correlates all measured observations but stops the state from growing over time. Hierarchical Active Ripple SLAM (HAR-SLAM) is a new SLAM architecture that breaks the traditional state space of SLAM into a chain of smaller state spaces, allowing multiple robots, multiple sensors, and multiple updates to occur in linear time and with linear storage with respect to the number of robots, landmarks, and robot poses. This dissertation presents explicit methods for closing the loop, joining multiple robots, and performing active updates. Landmark Promotion SLAM is a hierarchy of new SLAM methods that uses the Robust Kalman Filter, Forgetful SLAM, and HAR-SLAM. Practical aspects of SLAM are a focus of this dissertation. LK-SURF is a new image processing technique that combines Lucas-Kanade feature tracking with Speeded-Up Robust Features to perform spatial and temporal tracking. Typical stereo correspondence techniques fail at providing descriptors for features or fail at temporal tracking. Several calibration and modeling techniques are also covered, including calibrating stereo cameras, aligning stereo cameras to an inertial system, and building neural network system models. These methods are important for improving the quality of the data and images acquired for the SLAM process.
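
    As one illustration of the outlier-rejection machinery mentioned above, the sketch below applies an X84-style rule, which rejects values lying farther than a fixed number of median absolute deviations from the median, to scalar residuals; the threshold factor and the data are illustrative assumptions, not the dissertation's Robust Kalman Filter.

```python
# Minimal sketch of an X84-style outlier test on scalar residuals.
# The threshold factor and the example data are illustrative assumptions.
import numpy as np

def x84_inliers(values, k=5.2):
    """Keep values within k median-absolute-deviations of the median
    (k = 5.2 roughly corresponds to 3.5 standard deviations for Gaussian data)."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    if mad == 0.0:
        return np.ones_like(values, dtype=bool)
    return np.abs(values - med) <= k * mad

residuals = [0.1, -0.1, 0.05, 8.0, 0.15, -0.05]  # hypothetical residuals
mask = x84_inliers(residuals)                     # only the 8.0 entry is flagged
```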

    Event-Based Visual-Inertial Odometry on a Fixed-Wing Unmanned Aerial Vehicle

    Event-based cameras are a new type of visual sensor that operates under a unique paradigm. These cameras provide asynchronous data on the log-level changes in light intensity for individual pixels, independent of other pixels' measurements. Through this hardware-level approach to change detection, these cameras can achieve microsecond fidelity, millisecond latency, and ultra-wide dynamic range, all with very low power requirements. The advantages provided by event-based cameras make them excellent candidates for visual odometry (VO) for unmanned aerial vehicle (UAV) navigation. This document presents the research and implementation of an event-based visual-inertial odometry (EVIO) pipeline, which estimates a vehicle's 6-degrees-of-freedom (DOF) motion and pose utilizing an affixed event-based camera with an integrated Micro-Electro-Mechanical Systems (MEMS) inertial measurement unit (IMU). The front-end of the EVIO pipeline uses the current motion estimate of the pipeline to generate motion-compensated frames from the asynchronous event camera data. These frames are fed to the back-end of the pipeline, which uses a Multi-State Constrained Kalman Filter (MSCKF) [1] implemented with Scorpion, a Bayesian state estimation framework developed by the Autonomy and Navigation Technology (ANT) Center at the Air Force Institute of Technology (AFIT) [2]. This EVIO pipeline was tested on selections from the benchmark Event Camera Dataset [3] and on a dataset collected, as part of this research, during the ANT Center's first flight test with an event-based camera.
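
    A minimal sketch of the motion-compensated frame idea used in the front-end described above: each event is shifted back to a reference time along an assumed constant pixel velocity before being accumulated. In the actual pipeline that velocity would come from the filter's current motion estimate; the values here are illustrative assumptions.

```python
# Minimal sketch: accumulate events after warping them to a reference time
# under an assumed constant pixel velocity (vx, vy).
import numpy as np

def motion_compensated_frame(events, height, width, t_ref, vx, vy):
    """Accumulate (t, x, y, polarity) events after warping them to time t_ref."""
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, polarity in events:
        dt = t - t_ref
        xw = int(round(x - vx * dt))       # undo the motion between t_ref and t
        yw = int(round(y - vy * dt))
        if 0 <= xw < width and 0 <= yw < height:
            frame[yw, xw] += polarity
    return frame

# Hypothetical events that sharpen onto one pixel once motion-compensated.
events = [(0.00, 10, 10, +1), (0.01, 11, 10, +1)]
frame = motion_compensated_frame(events, 32, 32, t_ref=0.0, vx=100.0, vy=0.0)
```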

    Motion Tracking and Potentially Dangerous Situations Recognition in Complex Environment

    In recent years, video surveillance systems have played a significant role in human safety and security by monitoring public and private areas. In this chapter, we discuss the development of an intelligent surveillance system to detect, track, and identify potentially hazardous events that may occur at level crossings (LC). The system starts by detecting and tracking objects on the level crossing. Then, a danger evaluation method is built using a hidden Markov model in order to predict the trajectories of the detected objects. The trajectories are analyzed with a credibility model to evaluate dangerous situations at level crossings. Synthetic and real data are used to test the effectiveness and robustness of the proposed algorithms and the whole approach by considering various scenarios in several situations.
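
    For readers unfamiliar with the machinery, the sketch below shows a generic hidden Markov model forward pass of the kind a trajectory-based danger evaluation could build on; the states, transition matrix, emission matrix, and observation symbols are illustrative assumptions, not the chapter's model.

```python
# Minimal sketch of an HMM forward pass; all probabilities are illustrative.
import numpy as np

def hmm_forward(pi, A, B, observations):
    """Return P(observation sequence) for an HMM with initial distribution pi,
    transition matrix A, and emission matrix B (states x symbols)."""
    alpha = pi * B[:, observations[0]]
    for obs in observations[1:]:
        alpha = (alpha @ A) * B[:, obs]
    return alpha.sum()

# Two hidden states ("safe", "dangerous") and two observed symbols
# ("normal trajectory", "erratic trajectory"), all assumed for illustration.
pi = np.array([0.9, 0.1])
A = np.array([[0.95, 0.05],
              [0.30, 0.70]])
B = np.array([[0.8, 0.2],
              [0.3, 0.7]])
likelihood = hmm_forward(pi, A, B, observations=[0, 0, 1, 1])
```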

    Fusion of Imaging and Inertial Sensors for Navigation

    The motivation of this research is to address the limitations of satellite-based navigation by fusing imaging and inertial systems. The research begins by rigorously describing the imaging and navigation problem and developing practical models of the sensors, then presenting a transformation technique to detect features within an image. Given a set of features, a statistical feature projection technique is developed which utilizes inertial measurements to predict vectors in the feature space between images. This coupling of the imaging and inertial sensors at a deep level is then used to aid the statistical feature matching function. The feature matches and inertial measurements are then used to estimate the navigation trajectory using an extended Kalman filter. After accomplishing a proper calibration, the image-aided inertial navigation algorithm is tested using a combination of simulation and ground tests with both tactical- and consumer-grade inertial sensors. While limitations of the Kalman filter are identified, the experimental results demonstrate a navigation performance improvement of at least two orders of magnitude over the respective inertial-only solutions.
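
    A minimal sketch of the feature projection step described above: predicting where a landmark should appear in the next image, given an inertially propagated camera pose and a pinhole camera model. The intrinsics, pose, and landmark position are illustrative assumptions.

```python
# Minimal sketch: project a 3-D landmark into pixel coordinates with a
# pinhole model; pose and intrinsics are illustrative assumptions.
import numpy as np

def project(point_world, R_cam_world, t_cam, fx, fy, cx, cy):
    """Project a 3-D world point into pixel coordinates with a pinhole model."""
    p_cam = R_cam_world @ (point_world - t_cam)   # world point in camera frame
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return np.array([u, v])

landmark = np.array([2.0, 1.0, 10.0])             # hypothetical feature position (m)
R = np.eye(3)                                     # camera aligned with world axes
t = np.zeros(3)                                   # camera at the origin
pixel = project(landmark, R, t, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```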

    Framework for extracting and solving combination puzzles

    This thesis describes and investigates how computer vision and stereo vision algorithms may be applied to the problem of object detection, and in particular whether computer vision can aid in puzzle solving. The idea to use a computer application for puzzle solving came from the fact that all solution techniques are, in the end, algorithms. This leads to the conclusion that such problems are well suited to machines: a machine may need only milliseconds to compute a solution that would take a human minutes or hours. Unfortunately, machines cannot see puzzles from a human perspective and thus cannot analyze them directly. Hence, the contribution of this thesis is to study different computer vision approaches, drawn from unrelated solutions, applied to the problem of translating a physical puzzle into an abstract structure that can be understood and solved by a machine. Currently, little has been written on this subject, so there is a great opportunity to contribute. This is achieved through empirical research, represented as a set of experiments, in order to establish which approaches are suitable. To accomplish these goals, a large body of computer vision theory was studied. In addition, the relevance of real-time operation was taken into account, as reflected in studies of real-time Structure from Motion algorithms (SLAM, PTAM) that have been successfully applied to navigation and augmented reality problems, though none of them to the extraction of object characteristics. This thesis examines how these different approaches can be applied to the given problem to help inexperienced users solve combination puzzles. Moreover, it produces a useful side effect: the possibility to track object movement (rotation, translation), which can be used to manipulate a rendered puzzle model and increase the interactivity and engagement of the user.

    Single and multiple stereo view navigation for planetary rovers

    © Cranfield University. This thesis deals with the challenge of autonomous navigation for the ExoMars rover. The absence of global positioning systems (GPS) in space, added to the limitations of wheel odometry, makes autonomous navigation based on these two techniques, as done in the literature, an unviable solution and necessitates the use of other approaches. That, among other reasons, motivates this work to use solely visual data to solve the robot's Egomotion problem. The homogeneity of Mars' terrain makes the robustness of the low-level image processing techniques a critical requirement. In the first part of the thesis, novel solutions are presented to tackle this specific problem. Detection of features that are robust to illumination changes, together with their unique matching and association, is a sought-after capability. A solution for robustness of features against illumination variation is proposed, combining Harris corner detection with a moment image representation. Whereas the first provides a technique for efficient feature detection, the moment images add the necessary brightness invariance. Moreover, a bucketing strategy is used to guarantee that features are homogeneously distributed within the images. Then, the addition of local feature descriptors guarantees the unique identification of image cues. In the second part, reliable and precise motion estimation for the Mars rover is studied. A number of successful approaches are thoroughly analysed. Visual Simultaneous Localisation And Mapping (VSLAM) is investigated, proposing enhancements and integrating it with the robust feature methodology. Then, linear and nonlinear optimisation techniques are explored. Alternative photogrammetry reprojection concepts are tested. Lastly, data fusion techniques are proposed to deal with the integration of multiple stereo view data. Our robust visual scheme allows good feature repeatability. Because of this, dimensionality reduction of the feature data can be used without compromising the overall performance of the proposed solutions for motion estimation. The developed Egomotion techniques have been extensively validated using both simulated and real data collected at ESA-ESTEC facilities. Multiple stereo view solutions for robot motion estimation are introduced, presenting interesting benefits. The obtained results prove the innovative methods presented here to be accurate and reliable approaches capable of solving the Egomotion problem in a Mars environment.
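
    A minimal sketch, assuming OpenCV is available, of Harris corner detection combined with a bucketing strategy so that features are spread evenly across the image, as described above; the grid size, per-cell limit, detector parameters, and file name are illustrative assumptions, not the thesis's implementation.

```python
# Minimal sketch: Harris corners detected independently per grid cell
# ("bucketing") so features are homogeneously distributed over the image.
import cv2
import numpy as np

def bucketed_harris_corners(gray, grid=(4, 4), per_cell=10):
    """Detect Harris corners in each grid cell of a grayscale image."""
    h, w = gray.shape
    cell_h, cell_w = h // grid[0], w // grid[1]
    corners = []
    for row in range(grid[0]):
        for col in range(grid[1]):
            y0, x0 = row * cell_h, col * cell_w
            cell = np.ascontiguousarray(gray[y0:y0 + cell_h, x0:x0 + cell_w])
            pts = cv2.goodFeaturesToTrack(cell, maxCorners=per_cell,
                                          qualityLevel=0.01, minDistance=5,
                                          useHarrisDetector=True)
            if pts is not None:
                for p in pts.reshape(-1, 2):
                    corners.append((p[0] + x0, p[1] + y0))  # back to image coords
    return np.array(corners)

gray = cv2.imread("terrain.png", cv2.IMREAD_GRAYSCALE)     # hypothetical image
if gray is not None:
    corners = bucketed_harris_corners(gray)
```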

    Space Image Processing and Orbit Estimation Using Small Aperture Optical Systems

    Angles-only initial orbit determination (AIOD) methods have been used to find the orbits of satellites since the beginning of the Space Race. Given the ever-increasing number of objects in orbit today, the need for accurate space situational awareness (SSA) data has never been greater. Small-aperture (< 0.5 m) optical systems, increasingly popular in both amateur and professional circles, provide an inexpensive source of such data. However, utilizing these types of systems requires understanding their limits. This research uses a combination of image processing techniques and orbit estimation algorithms to evaluate those limits and improve the orbit solutions obtained with small-aperture systems. Characterization of noise from physical, electronic, and digital sources leads to a better understanding of how to reduce noise in the images and provide the best solution possible. Given multiple measurements, choosing the best images to use is a non-trivial process that often amounts to trying all combinations. In an effort to help automate the process, a novel “observability metric” using only information from the captured images was shown empirically to be an effective method of choosing the best observations. A method of identifying resident space objects (RSOs) in a single image using a gradient-based search algorithm was developed and tested on actual space imagery captured with a small-aperture optical system. The algorithm was shown to correctly identify candidate RSOs in a variety of observational scenarios.
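
    As a generic illustration only, not the gradient-based search algorithm developed in this work, the sketch below flags candidate point sources in an image by thresholding the local gradient magnitude; the threshold and the synthetic frame are illustrative assumptions.

```python
# Minimal sketch: flag pixels whose gradient magnitude exceeds a threshold,
# a crude stand-in for locating bright point sources in a star-field image.
import numpy as np

def candidate_pixels(image, threshold=50.0):
    """Return (x, y) coordinates where the gradient magnitude exceeds a threshold."""
    gy, gx = np.gradient(image.astype(float))   # gradients along rows, columns
    magnitude = np.hypot(gx, gy)
    ys, xs = np.nonzero(magnitude > threshold)
    return list(zip(xs, ys))

# Hypothetical frame with a single bright pixel standing in for an object.
image = np.zeros((64, 64))
image[30, 40] = 255.0
candidates = candidate_pixels(image)
```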

    Monocular-Based Pose Determination of Uncooperative Space Objects

    Vision-based methods to determine the relative pose of an uncooperative orbiting object are investigated for applications to spacecraft proximity operations, such as on-orbit servicing, spacecraft formation flying, and the exploration of small bodies. Depending on whether the object is known or unknown, a shape model of the orbiting target may have to be constructed autonomously in real time by making use of only optical measurements. The Simultaneous Estimation of Pose and Shape (SEPS) algorithm, which does not require a priori knowledge of the pose and shape of the target, is presented. It makes use of a novel measurement equation and filter that can efficiently use optical flow information, along with a star tracker, to estimate the target's relative rotational and translational velocity as well as its center of gravity. Depending on the mission constraints, SEPS can be augmented by a more accurate offline, on-board 3D reconstruction of the target shape, which allows the pose to be estimated as for a known target. The use of Structure from Motion (SfM) for this purpose is discussed. A model-based approach for pose estimation of known targets is also presented. The architecture and implementation of both proposed approaches are elucidated, and their performance is evaluated through numerical simulations using a dataset of images that are synthetically generated according to a chaser/target relative motion in Geosynchronous Orbit (GEO).
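
    A minimal sketch, assuming OpenCV is available, of model-based pose estimation for a known target: matched 3-D model points and 2-D image points are passed to a Perspective-n-Point solver. The model points, pixel coordinates, and camera intrinsics are illustrative assumptions consistent with a target roughly 10 m in front of the camera, not the paper's dataset or measurement equation.

```python
# Minimal sketch: Perspective-n-Point pose estimation for a known target.
# Model points, pixel coordinates, and intrinsics are illustrative assumptions.
import cv2
import numpy as np

object_points = np.array([[0.0, 0.0, 0.0],       # hypothetical target model (m)
                          [1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0],
                          [0.0, 0.0, 1.0],
                          [1.0, 1.0, 0.0],
                          [1.0, 0.0, 1.0]])
image_points = np.array([[320.0, 240.0],         # matched pixels, consistent with
                         [400.0, 240.0],         # the target ~10 m along the boresight
                         [320.0, 320.0],
                         [320.0, 240.0],
                         [400.0, 320.0],
                         [392.7, 240.0]])
K = np.array([[800.0, 0.0, 320.0],               # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
# rvec/tvec give the target's orientation (Rodrigues vector) and position in
# the camera frame; tvec should come out close to (0, 0, 10) for this data.
```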
