
    Kalman filter based range estimation for autonomous navigation using imaging sensors

    Get PDF
    Rotorcraft operating in high-threat environments fly close to the surface of the earth to utilize surrounding terrain, vegetation, or man-made objects to minimize the risk of being detected by the enemy. Two basic requirements for obstacle avoidance are detection and range estimation of the object from the current rotorcraft position. There are many approaches to the estimation of range using a sequence of images. The approach used in this analysis differs from previous methods in two significant ways: no attempt is made to estimate the rotorcraft's motion from the images, and the interest lies in recursive algorithms. The rotorcraft parameters are assumed to be computed using an onboard inertial navigation system. Given a sequence of images, using image-object differential equations, a Kalman filter (Sridhar and Phatak, 1988) can be used to estimate both the relative coordinates and the earth coordinates of the objects on the ground. The Kalman filter can also be used in a predictive mode to track features in the images, leading to a significant reduction of search effort in the feature extraction step of the algorithm. The purpose is to summarize early results obtained in extending the Kalman filter for use with actual image sequences. The experience gained from the application of this algorithm to real images is very valuable and is a necessary step before proceeding to the estimation of range during low-altitude curvilinear flight. A simple recursive method is presented to estimate range to objects using a sequence of images. The method produces good range estimates using real images in a laboratory setup and needs to be evaluated further using several different image sequences to test its robustness. The feature generation part of the algorithm requires further refinement of the strategies to limit the number of features (Sridhar and Phatak, 1989). The extension of the work reported here to curvilinear flight may require the use of the extended Kalman filter.
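    As a rough illustration of the kind of recursive estimator described above (not the exact Sridhar and Phatak formulation), the sketch below updates the estimated earth-frame position of a stationary ground object from a single image measurement, assuming the camera pose is supplied by the inertial navigation system. The pinhole model, focal length, and noise values are assumptions made for the example.

        # Minimal sketch: recursive range estimation for a stationary ground object
        # with an extended-Kalman-filter update, assuming the camera pose (R_cw, t_cw)
        # is known at each frame, e.g. from an onboard inertial navigation system.
        import numpy as np

        def project(p_world, R_cw, t_cw, f=500.0):
            """Pinhole projection of a world point into the image (hypothetical focal length f)."""
            p_cam = R_cw @ (p_world - t_cw)      # world -> camera coordinates
            return f * p_cam[:2] / p_cam[2]      # perspective division

        def measurement_jacobian(p_world, R_cw, t_cw, f=500.0, eps=1e-5):
            """Numerical Jacobian of the projection with respect to the object position."""
            J = np.zeros((2, 3))
            for i in range(3):
                dp = np.zeros(3)
                dp[i] = eps
                J[:, i] = (project(p_world + dp, R_cw, t_cw, f) -
                           project(p_world - dp, R_cw, t_cw, f)) / (2 * eps)
            return J

        def ekf_update(x, P, z, R_cw, t_cw, meas_noise=1.0):
            """One filter update from a single pixel measurement z of the tracked feature."""
            # Prediction step: the ground object is stationary, so the state is unchanged.
            H = measurement_jacobian(x, R_cw, t_cw)
            R = meas_noise * np.eye(2)
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - project(x, R_cw, t_cw))
            P = (np.eye(3) - K @ H) @ P
            return x, P

    The predicted projection of the current estimate can also serve to restrict the feature search window in the next frame, mirroring the predictive tracking mode mentioned in the abstract.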

    A model-based approach for detection of objects in low resolution passive millimeter wave images

    Get PDF
    A model-based vision system to assist the pilots in landing maneuvers under restricted visibility conditions is described. The system was designed to analyze image sequences obtained from a Passive Millimeter Wave (PMMW) imaging system mounted on the aircraft to delineate runways/taxiways, buildings, and other objects on or near runways. PMMW sensors have good response in a foggy atmosphere, but their spatial resolution is very low. However, additional data such as the airport model and the approximate position and orientation of the aircraft are available. These data are exploited to guide our model-based system to locate objects in the low resolution image and generate warning signals to alert the pilots. Analytical expressions were also derived for the accuracy of the camera position estimate obtained by detecting the positions of known objects in the image.
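    The abstract does not give the algorithmic details, but the guidance step it describes can be sketched as follows: known airport-model points are projected into the image using the approximate aircraft pose, so only small windows around the predictions need to be searched in the low-resolution PMMW image. The projection model, focal length, and window size below are illustrative assumptions, not the paper's actual values.

        # Minimal sketch: project known airport-model points (e.g. runway corners)
        # into the image using the approximate aircraft pose, then define coarse
        # search windows around the predicted locations.
        import numpy as np

        def predict_image_locations(model_points, R_cw, t_cw, f=200.0):
            """Project 3D world-frame model points into pixel coordinates."""
            predicted = []
            for p in model_points:
                p_cam = R_cw @ (p - t_cw)
                predicted.append(f * p_cam[:2] / p_cam[2])
            return np.array(predicted)

        def search_windows(predicted_px, radius_px=8.0):
            """Search regions around each prediction, sized for the low sensor resolution."""
            return [(px - radius_px, px + radius_px) for px in predicted_px]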

    Cortical Dynamics of Navigation and Steering in Natural Scenes: Motion-Based Object Segmentation, Heading, and Obstacle Avoidance

    Full text link
    Visually guided navigation through a cluttered natural scene is a challenging problem that animals and humans accomplish with ease. The ViSTARS neural model proposes how primates use motion information to segment objects and determine heading for purposes of goal approach and obstacle avoidance in response to video inputs from real and virtual environments. The model produces trajectories similar to those of human navigators. It does so by predicting how computationally complementary processes in cortical areas MT-/MSTv and MT+/MSTd compute object motion for tracking and self-motion for navigation, respectively. The model retina responds to transients in the input stream. Model V1 generates a local speed and direction estimate. This local motion estimate is ambiguous due to the neural aperture problem. Model MT+ interacts with MSTd via an attentive feedback loop to compute accurate heading estimates in MSTd that quantitatively simulate properties of human heading estimation data. Model MT interacts with MSTv via an attentive feedback loop to compute accurate estimates of speed, direction and position of moving objects. This object information is combined with heading information to produce steering decisions wherein goals behave like attractors and obstacles behave like repellers. These steering decisions lead to navigational trajectories that closely match human performance. National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial Intelligence Agency (NMA201-01-1-2016)
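    The steering behaviour described above, with goals acting as attractors and obstacles as repellers, can be illustrated with a simple dynamical steering rule of the kind used in the human-navigation literature. This is only a sketch in that spirit, not the ViSTARS model itself, and all gains are arbitrary assumptions.

        # Illustrative sketch: heading control where the goal attracts and obstacles
        # repel, with repulsion decaying with distance and angular offset.
        import numpy as np

        def steering_rate(heading, goal_angle, obstacle_angles, obstacle_dists,
                          k_goal=1.0, k_obs=2.0, c=1.5):
            """Turning command (all angles in radians, distances in arbitrary units)."""
            # Attraction toward the goal direction.
            turn = -k_goal * (heading - goal_angle)
            # Repulsion away from each obstacle.
            for ang, dist in zip(obstacle_angles, obstacle_dists):
                offset = heading - ang
                turn += k_obs * offset * np.exp(-c * abs(offset)) * np.exp(-0.2 * dist)
            return turn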

    Video Interpolation using Optical Flow and Laplacian Smoothness

    Full text link
    Non-rigid video interpolation is a common computer vision task. In this paper we present an optical flow approach which adopts a Laplacian Cotangent Mesh constraint to enhance local smoothness. Similar to Li et al., our approach fits a mesh to the image with a resolution of up to one vertex per pixel and uses angle constraints to ensure sensible local deformations between image pairs. The Laplacian Mesh constraints are expressed wholly inside the optical flow optimization, and can be applied in a straightforward manner to a wide range of image tracking and registration problems. We evaluate our approach by testing on several benchmark datasets, including the Middlebury and Garg et al. datasets. In addition, we show an application of our method to constructing 3D Morphable Facial Models from dynamic 3D data.
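    As an illustration of the kind of objective described above (simplified to a uniform rather than cotangent-weighted Laplacian, and not the paper's actual formulation), the sketch below combines a per-vertex brightness-constancy data term with a Laplacian smoothness term over the mesh connectivity.

        # Illustrative sketch: optical-flow energy over mesh vertices with a
        # uniform-Laplacian smoothness penalty on the per-vertex displacements.
        import numpy as np

        def flow_energy(vertices, flow, I0, I1, neighbours, lam=0.1):
            """
            vertices:   (N, 2) integer (row, col) pixel positions of mesh vertices
            flow:       (N, 2) displacement of each vertex from I0 to I1
            neighbours: list of neighbour-index lists defining the mesh connectivity
            """
            h, w = I0.shape
            data = 0.0
            for (r, c), (dr, dc) in zip(vertices, flow):
                r1 = int(np.clip(r + dr, 0, h - 1))
                c1 = int(np.clip(c + dc, 0, w - 1))
                data += (I1[r1, c1] - I0[r, c]) ** 2           # brightness-constancy term
            smooth = 0.0
            for i, nbrs in enumerate(neighbours):
                if nbrs:
                    lap = flow[i] - np.mean(flow[nbrs], axis=0)  # uniform Laplacian of the flow
                    smooth += np.dot(lap, lap)
            return data + lam * smooth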

    Vision-based range estimation using helicopter flight data

    Get PDF
    Pilot aiding during low-altitude flight depends on the ability to detect and locate obstacles near the helicopter's intended flightpath. Computer-vision-based methods provide one general approach for obstacle detection and range estimation. Several algorithms have been developed for this purpose, but have not been tested with actual flight data. This paper presents results obtained using helicopter flight data with a feature-based range estimation algorithm. A method for recursively estimating range using a Kalman filter with a monocular sequence of images and knowledge of the camera's motion is described. The helicopter flight experiment and four resulting datasets are discussed. Finally, the performance of the range estimation algorithm is explored in detail based on a comparison of the range estimates with true range measurements collected during the flight experiment.
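    The comparison against true range described above can be summarized with standard error statistics; the sketch below is illustrative, and the metrics are not necessarily those reported in the paper.

        # Illustrative sketch: per-feature range-error statistics against the true
        # range measurements recorded during the flight experiment.
        import numpy as np

        def range_error_stats(estimated, true):
            """estimated, true: arrays of range (m) per frame for one tracked feature."""
            estimated = np.asarray(estimated, dtype=float)
            true = np.asarray(true, dtype=float)
            err = estimated - true
            rmse = np.sqrt(np.mean(err ** 2))
            rel = 100.0 * np.abs(err) / true               # percent error per frame
            return rmse, rel.mean(), rel.max()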

    Clusters of galaxies around seven radio-loud QSOs at 1<z<1.6: K-band images

    Get PDF
    We have conducted a NIR study of the environments of seven radio-loud quasars at redshifts 1<z<1.6. In the present paper we describe deep K-band images obtained for fields of ~6x6 arcmin around the quasars, with 3σ limiting magnitudes of K~20.5. These fields were previously studied using deep B- and R-band images (Sanchez & Gonzalez-Serrano 1999). Combining the optical and NIR data, we find a significant excess of galaxies whose optical-NIR colours, luminosities, spatial scales, and numbers are compatible with clusters at the redshifts of the quasars. We have selected a sample of cluster candidates by analyzing the R-K vs. K diagram. About 25% of the candidates present red optical-NIR colours and an ultraviolet excess. This population has also been found in clusters around quasars at the same redshifts (Tanaka et al. 2000; Haines et al. 2001). These galaxies seem to follow a mixed evolution: mainly passive evolution plus late star-formation processes. The quasars do not inhabit the cores of the clusters, being found instead in the outer regions. This result agrees with the hypothesis that the origin/feeding mechanism of the nuclear activity is merging, since the quasars inhabit the regions where a collision is most likely to produce a merger. Comment: 15 pages. A&A, accepted for publication
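    The R-K vs. K selection mentioned above amounts to a colour-magnitude cut; the sketch below illustrates the idea with placeholder thresholds that are not the paper's actual selection criteria.

        # Illustrative sketch only: selecting red galaxy candidates in an R-K vs. K
        # colour-magnitude diagram. The colour cut and magnitude limit are placeholders.
        import numpy as np

        def select_red_candidates(K_mag, R_mag, k_limit=20.5, colour_cut=4.0):
            """Boolean mask of galaxies brighter than k_limit and redder than colour_cut."""
            K_mag = np.asarray(K_mag, dtype=float)
            R_mag = np.asarray(R_mag, dtype=float)
            colour = R_mag - K_mag
            return (K_mag < k_limit) & (colour > colour_cut)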