
    Deep-sea image processing

    High-resolution seafloor mapping often requires optical methods of sensing to confirm interpretations made from sonar data. Optical digital imagery of seafloor sites can now provide very high resolution and also provides additional cues, such as color information for sediments, biota and diverse rock types. During cruise AT11-7 of the Woods Hole Oceanographic Institution (WHOI) vessel R/V Atlantis (February 2004, East Pacific Rise), visual imagery was acquired from three sources: (1) a down-looking digital still camera mounted on the submersible Alvin, (2) observer-operated 1- and 3-chip video cameras with tilt and pan capabilities mounted on the front of Alvin, and (3) a digital still camera on the WHOI TowCam (Fornari, 2003). Imagery from the first source collected on a previous cruise (AT7-13) to the Galapagos Rift at 86°W was successfully processed and mosaicked post-cruise, resulting in a single image covering an area of about 2000 sq. m at a resolution of 3 mm per pixel (Rzhanov et al., 2003). This paper addresses the issues of optimal acquisition of visual imagery in deep-sea conditions and the requirements for on-board processing. Shipboard processing of digital imagery allows collected imagery to be reviewed immediately after the dive, its importance evaluated, acquisition parameters optimized, and data acquisition over specific sites augmented on subsequent dives. Images from the DeepSea Power and Light (DSPL) digital camera offer the best resolution (3.3 megapixels) and are taken at an interval of 10 seconds (determined by the strobe's recharge rate). This makes the images suitable for mosaicking only when Alvin moves slowly (less than 1/4 kt), which is not always possible for time-critical missions. The video cameras provided a source of imagery more suitable for mosaicking, despite their inferior resolution. We discuss the required pre-processing and image-enhancement techniques and their influence on the interpretation of mosaic content. An algorithm for determining camera tilt parameters from the acquired imagery is proposed and its robustness conditions are discussed.
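    The mosaicking step described above relies on registering overlapping frames to a common image plane. Below is a minimal, hedged sketch of pairwise homography-based registration with OpenCV; the abstract does not specify a feature detector, so ORB is used here, and the file names, feature count and RANSAC threshold are illustrative assumptions rather than the authors' pipeline (which also involves pre-processing such as illumination correction).

```python
# Minimal sketch of pairwise homography mosaicking for down-looking imagery.
# ORB features and all parameters are assumptions, not the paper's method.
import cv2
import numpy as np

def pairwise_homography(img_a, img_b, max_features=2000):
    """Estimate the homography mapping img_b into img_a's frame."""
    orb = cv2.ORB_create(max_features)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def mosaic(img_a, img_b):
    """Warp img_b onto img_a's plane on a shared canvas (no blending)."""
    H = pairwise_homography(img_a, img_b)
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (2 * w, 2 * h))
    canvas[:h, :w] = img_a  # overwrite the overlap; a real pipeline would blend
    return canvas

if __name__ == "__main__":
    # frame_000.png / frame_001.png are hypothetical consecutive still frames
    a = cv2.imread("frame_000.png")
    b = cv2.imread("frame_001.png")
    cv2.imwrite("mosaic.png", mosaic(a, b))
```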

    Homography-Based State Estimation for Autonomous Exploration in Unknown Environments

    This thesis presents the development of vision-based state estimation algorithms to enable a quadcopter UAV to navigate and explore a previously unknown, GPS-denied environment. These state estimation algorithms are based on tracked Speeded-Up Robust Features (SURF) points and the homography relationship that relates the camera motion to the locations of tracked planar feature points in the image plane. An extended Kalman filter implementation is developed to fuse measurements from an onboard inertial measurement unit (accelerometers and rate gyros) with vision-based measurements derived from the homography relationship. The measurement update in the filter therefore requires processing images from a monocular camera to detect and track planar feature points, followed by the computation of homography parameters. The state estimation algorithms are designed to be independent of GPS, since GPS can be unreliable or unavailable in many operational environments of interest, such as urban environments. The algorithms are implemented and evaluated using simulated data from a quadcopter UAV and then tested using post-processed video and IMU data from flights of an autonomous quadcopter. The homography-based state estimation algorithm was effective, but accumulates drift errors over time because the homography provides only a relative measurement of position.
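    As a rough illustration of the vision side of such a measurement update, the sketch below tracks feature points between two frames and decomposes the resulting homography into candidate rotation/translation/plane-normal triplets. ORB stands in for SURF (SURF requires OpenCV's nonfree xfeatures2d module), and the intrinsic matrix K is a placeholder; the extended Kalman filter that consumes these quantities is not shown.

```python
# Minimal sketch of a homography-based vision measurement between two frames.
# ORB replaces SURF; K and all thresholds are assumed values for illustration.
import cv2
import numpy as np

K = np.array([[450.0,   0.0, 320.0],
              [  0.0, 450.0, 240.0],
              [  0.0,   0.0,   1.0]])  # placeholder camera intrinsics

def homography_measurement(prev_gray, curr_gray):
    """Track points from prev_gray to curr_gray and decompose the homography."""
    orb = cv2.ORB_create(1500)
    kp, _ = orb.detectAndCompute(prev_gray, None)
    p0 = np.float32([k.pt for k in kp]).reshape(-1, 1, 2)
    # Track the points into the current frame with pyramidal Lucas-Kanade.
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    good0 = p0[status.ravel() == 1]
    good1 = p1[status.ravel() == 1]
    H, _ = cv2.findHomography(good0, good1, cv2.RANSAC, 2.0)
    # Decompose into up to four (R, t/d, n) hypotheses; a filter prior
    # (e.g., the expected ground-plane normal) selects the physical one.
    _, Rs, ts, ns = cv2.decomposeHomographyMat(H, K)
    return H, Rs, ts, ns
```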

    Real Time UAV Altitude, Attitude and Motion Estimation from Hybrid Stereovision

    Knowledge of altitude, attitude and motion is essential for an Unmanned Aerial Vehicle during critical maneuvers such as landing and take-off. In this paper we present a hybrid stereoscopic rig composed of a fisheye and a perspective camera for vision-based navigation. In contrast to classical stereoscopic systems based on feature matching, we propose methods which avoid matching between the hybrid views. A plane-sweeping approach is proposed for estimating altitude and detecting the ground plane. Rotation and translation are then estimated by decoupling: the fisheye camera contributes to evaluating attitude, while the perspective camera contributes to estimating the scale of the translation. The motion can be estimated robustly at the correct scale thanks to the knowledge of the altitude. We propose a robust, real-time, accurate, exclusively vision-based approach with an embedded C++ implementation. Although this approach removes the need for any non-visual sensors, it can also be coupled with an Inertial Measurement Unit.
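    The plane-sweeping idea can be illustrated for two calibrated pinhole views: each candidate ground-plane distance d induces a homography H = K2 (R - t n^T / d) K1^-1, and the distance whose warp best explains the second image is retained as the altitude estimate. The sketch below assumes pinhole models and a simple photometric sum-of-absolute-differences score; the actual system pairs a fisheye with a perspective camera, which requires the fisheye projection model instead.

```python
# Minimal plane-sweep sketch for ground-plane (altitude) estimation between two
# calibrated grayscale views. Pinhole models only; a simplified stand-in, not
# the paper's hybrid fisheye/perspective implementation.
import cv2
import numpy as np

def plane_sweep_altitude(img1, img2, K1, K2, R, t, normal, d_candidates):
    """Return the candidate plane distance that best explains img2.

    R, t: pose of camera 2 relative to camera 1.
    normal: plane normal in camera 1's frame (e.g., [0, 0, 1] for a
    downward-looking camera over flat ground).
    """
    best_d, best_score = None, np.inf
    for d in d_candidates:
        # Plane-induced homography from view 1 to view 2.
        H = K2 @ (R - np.outer(t, normal) / d) @ np.linalg.inv(K1)
        warped = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
        mask = warped > 0  # only score pixels covered by the warp
        score = np.abs(warped[mask].astype(np.float32)
                       - img2[mask].astype(np.float32)).mean()
        if score < best_score:
            best_d, best_score = d, score
    return best_d
```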

    Spacecraft Position Estimation and Attitude Determination using Terrestrial Illumination Matching

    An algorithm for spacecraft position estimation and attitude determination via terrestrial illumination matching (TIM) is presented, consisting of a novel method that uses terrestrial lights as a surrogate for star fields. Although star sensors represent a highly accurate means of attitude determination with considerable spaceflight heritage, with the Global Positioning System (GPS) providing position, TIM provides a potentially viable alternative in the event of star sensor or GPS malfunction or performance degradation. The research defines a catalog of terrestrial light constellations, which is then used within the TIM algorithm for position acquisition of a generic spacecraft bus. With the algorithm relying on terrestrial lights rather than the established standard of star fields, a series of sensitivity studies is presented to characterize performance under specified operating constraints, including varying orbital altitude and cloud-cover conditions. The pose is recovered from the matching techniques by solving the epipolar constraint equation using the essential and fundamental matrices, and by point-to-point projection using the homography matrix. This yields the relative position change and the spacecraft's attitude whenever a measurement is available; between measurements, both an extended and an unscented Kalman filter are applied to test continuous operation. The approach is operationally promising for use on each nighttime pass, but filtering alone is not enough to sustain orbit determination during daytime operations.
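    For the epipolar-constraint portion of the pose recovery described above, a minimal OpenCV sketch is shown below: it estimates the essential matrix from matched point sets and recovers the relative rotation and (scale-free) translation. The matched coordinates and intrinsic matrix are placeholders; the light-constellation catalog matching, the homography branch and the Kalman filtering are not shown.

```python
# Minimal sketch of relative-pose recovery from the epipolar constraint.
# pts_prev/pts_curr are hypothetical matched image coordinates of terrestrial
# lights in two views; K is a placeholder intrinsic matrix.
import cv2
import numpy as np

def relative_pose(pts_prev, pts_curr, K):
    """Return (R, t_unit) between two views from matched Nx2 point arrays."""
    E, inlier_mask = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                          method=cv2.RANSAC, threshold=1.0)
    # recoverPose resolves the fourfold ambiguity via cheirality
    # (reconstructed points must lie in front of both cameras).
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inlier_mask)
    return R, t  # t is known only up to scale from the essential matrix
```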

    Event-Based Visual-Inertial Odometry Using Smart Features

    Event-based cameras are a novel type of visual sensor that operates under a unique paradigm, providing asynchronous data on log-level changes in light intensity for individual pixels. This hardware-level approach to change detection allows these cameras to achieve ultra-wide dynamic range and high temporal resolution. Furthermore, the advent of convolutional neural networks (CNNs) has led to state-of-the-art navigation solutions that now rival or even surpass human-engineered algorithms. The advantages offered by event cameras and CNNs make them excellent tools for visual odometry (VO). This document presents the implementation of a CNN trained to detect and describe features within an image, as well as the implementation of an event-based visual-inertial odometry (EVIO) pipeline that estimates a vehicle's 6-degree-of-freedom (DOF) pose using an affixed event-based camera with an integrated inertial measurement unit (IMU). The front-end of this pipeline utilizes a neural network to generate image frames from asynchronous event camera data. These frames are fed into a multi-state constraint Kalman filter (MSCKF) back-end that uses the output of the developed CNN to perform measurement updates. The EVIO pipeline was tested on a selection from the Event-Camera Dataset [1] and on a dataset collected from a fixed-wing unmanned aerial vehicle (UAV) flight test conducted by the Autonomy and Navigation Technology (ANT) Center.
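    The front-end task of converting asynchronous events into frames can be illustrated with a simple hand-crafted stand-in: accumulating signed event polarities over a time window into an image. The actual pipeline uses a learned reconstruction network, so the sketch below is only a rough approximation; the event tuple layout (t, x, y, polarity) follows the Event-Camera Dataset convention.

```python
# Minimal sketch: accumulate a window of asynchronous events into a 2D frame
# that an image-based network could consume. A hand-crafted stand-in for the
# learned frame-reconstruction front-end described above.
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate signed event polarities into a normalized 2D frame.

    events: iterable of (t, x, y, polarity) tuples within one time window.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    for t, x, y, pol in events:
        frame[int(y), int(x)] += 1.0 if pol > 0 else -1.0
    # Normalize to [0, 1] so the result can be fed to an image-based CNN.
    peak = np.abs(frame).max()
    if peak > 0:
        frame = 0.5 + 0.5 * frame / peak
    return frame
```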

    An Intelligent Portable Aerial Surveillance System: Modeling and Image Stitching

    Unmanned Aerial Vehicles (UAVs) have been widely used in modern warfare for surveillance, reconnaissance and even attack missions. They can provide valuable battlefield information and accomplish dangerous tasks with minimal risk of loss of life or personal injury. However, existing UAV systems are far from perfect and cannot meet all possible situations. One of the most notable gaps is support for individual troops. Besides being unable to always provide images at the desired resolution, currently available systems are either too expensive for large-scale deployment or too heavy and complex for a single soldier. The Intelligent Portable Aerial Surveillance System (IPASS), sponsored by the Air Force Research Laboratory (AFRL), is aimed at developing a low-cost, lightweight unmanned aerial vehicle that can provide sufficient battlefield intelligence for individual troops. The main contributions of this thesis are twofold: (1) the development and verification of a model-based flight simulation for the aircraft, and (2) a comparison of image stitching techniques that combine multiple views into comprehensive aerial surveillance imagery. To assist with the design and control of the aircraft, dynamical models are established at different levels of complexity. Simulations with these models are implemented in Matlab to study the dynamical characteristics of the aircraft. Once the flying platform is built, aerial images acquired from the three onboard cameras are processed. The thesis first introduces how an image is formed by a camera and the general pipeline of feature-based image stitching. To better satisfy the needs of this application, a homography-based stitching method is studied. This method can greatly reduce computation time with very little compromise in the quality of the panorama, which makes real-time video display of the surroundings on the ground station possible. By implementing both image stitching methods using OpenCV, a quantitative performance comparison is carried out.
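    A hedged sketch of the kind of comparison the thesis performs is shown below: OpenCV's full stitching pipeline versus a single-homography warp between two of the camera views, timed side by side. The file names, feature count and canvas size are illustrative assumptions, not the thesis's actual configuration.

```python
# Minimal sketch contrasting OpenCV's complete stitching pipeline with a direct
# homography warp of the kind studied for real-time panoramas.
import time
import cv2
import numpy as np

def stitch_full(images):
    """OpenCV's full pipeline (camera estimation, seam finding, blending)."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, pano = stitcher.stitch(images)
    return pano if status == cv2.Stitcher_OK else None

def stitch_homography(base, other):
    """Single-homography warp of one view onto another: faster, simpler seams."""
    orb = cv2.ORB_create(2000)
    kp1, d1 = orb.detectAndCompute(base, None)
    kp2, d2 = orb.detectAndCompute(other, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = base.shape[:2]
    pano = cv2.warpPerspective(other, H, (2 * w, h))
    pano[:h, :w] = base  # paste the reference view; no blending in this sketch
    return pano

if __name__ == "__main__":
    # cam0.png / cam1.png / cam2.png are hypothetical onboard-camera images.
    imgs = [cv2.imread(f) for f in ("cam0.png", "cam1.png", "cam2.png")]
    for name, fn in (("full pipeline", lambda: stitch_full(imgs)),
                     ("homography only", lambda: stitch_homography(imgs[0], imgs[1]))):
        start = time.time()
        fn()
        print(f"{name}: {time.time() - start:.2f} s")
```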