90 research outputs found
Revised interpretation of optical flow for dynamic scene analysis
Optical flow has been defined as the apparent motion of image brightness patterns in an image sequence. It has typically been interpreted as a geometric entity, namely an image displacement field, consistent with an assumed image brightness transformation model. In this paper, I propose a new mathematical definition, one that permits a unified representation and interpretation of optical flow based on both geometric and radiometric properties. I investigate the implications of the new interpretation in the context of selected optical flow models, and present results of selected experiments with real images to emphasize the significance of the revised definition.
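The classical geometric interpretation referred to above rests on the brightness-constancy equation Ix·u + Iy·v + It = 0. A minimal least-squares sketch of that baseline, solving for a single patch translation (an illustration of the standard model, not the revised radiometric definition proposed in the paper):

```python
import numpy as np

def brightness_constancy_flow(I1, I2):
    """Estimate one (u, v) translation for a whole patch by solving the
    brightness-constancy equation Ix*u + Iy*v + It = 0 in least squares
    (Lucas-Kanade style). Purely illustrative of the classical model."""
    Ix = np.gradient(I1, axis=1)      # spatial gradient in x
    Iy = np.gradient(I1, axis=0)      # spatial gradient in y
    It = I2 - I1                      # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow                       # array [u, v]
```

For a smoothly varying patch shifted by a subpixel amount, the recovered (u, v) approximates the true shift up to the linearization error of the constancy equation.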
Direct passive navigation
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1987. Microfiche copy available in Archives and Science. Bibliography: leaves 219-225. By Shahriar Negahdaripour.
Closed-form relationship between the two interpretations of a moving plane
Ambiguities of a Motion Field
We study the conditions under which a perspective motion field can have multiple interpretations. Furthermore, we show that in most cases the ambiguity in the interpretation of a motion field can be resolved by imposing the physical constraint that depth is positive over the image region onto which the surface projects.
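The positive-depth constraint mentioned above can be sketched as a simple cheirality test: triangulate each correspondence under a candidate motion (R, t) and require the point to lie in front of both cameras. This is an illustrative check with assumed inputs (unit viewing rays per correspondence), not the paper's derivation:

```python
import numpy as np

def depths_positive(R, t, rays1, rays2):
    """Cheirality test for a candidate motion (R, t), where a point X2 in
    camera-2 coordinates maps to camera 1 as X1 = R @ X2 + t.  Each ray
    pair gives r1*z1 - (R@r2)*z2 = t; solve for the depths (z1, z2) in
    least squares and require both to be positive."""
    ok = True
    for r1, r2 in zip(rays1, rays2):
        A = np.stack([r1, -(R @ r2)], axis=1)   # 3x2 system in (z1, z2)
        z, *_ = np.linalg.lstsq(A, t, rcond=None)
        ok &= bool(z[0] > 0 and z[1] > 0)
    return ok
```

Given two candidate interpretations of the same motion field, the one for which all triangulated depths are positive is the physically valid one.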
Shape and range recovery from images of underwater environments
Unmanned underwater vehicles require some sense of perception of the environment to succeed in carrying out their missions with some degree of autonomy. Perception based on visual cues can tremendously enhance the performance of such vehicles. For example, the capability to determine the three-dimensional position or shape of a nearby object from two-dimensional images can be useful for localization, recognition, or obstacle avoidance. In this paper, we consider selected methods for determining the three-dimensional shape of, or range to, objects from two-dimensional images for application in the underwater domain. In contrast to techniques commonly used in land environments, which are based on tracking points, lines, contours, or other isolated scene features, these methods rely on the information in the brightness patterns of the whole image. Selected experimental results are provided to demonstrate the concepts.
Epipolar geometry of opti-acoustic stereo imaging
Optical and acoustic cameras are suitable imaging systems for inspecting underwater structures, both in regular maintenance and in security operations. Despite their high resolution, optical systems have limited visibility range when deployed in turbid waters. In contrast, the new generation of high-frequency (MHz) acoustic cameras can provide images with enhanced target details in highly turbid waters, though their range is reduced by one to two orders of magnitude compared to traditional low-/mid-frequency (tens to hundreds of kHz) sonar systems. It is conceivable that an effective inspection strategy is the deployment of both optical and acoustic cameras on a submersible platform, to enable target imaging over a range of turbidity conditions. Under this scenario, and where visibility allows, registration of the images from the two cameras arranged in a binocular stereo configuration provides valuable scene information that cannot be readily recovered from either sensor alone. We explore and derive the constraint equations for the epipolar geometry and stereo triangulation when utilizing these two sensing modalities with different projection models. Theoretical results supported by computer simulations show that an opti-acoustic stereo imaging system outperforms traditional binocular vision with optical cameras, particularly with increasing target distance and (or) turbidity.
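One way to picture opti-acoustic triangulation is that the optical camera constrains the point to a ray while the acoustic camera constrains it to a range sphere. The sketch below intersects the two, assuming the optical camera sits at the origin and using only the sonar range constraint (the full derivation in the paper also exploits the sonar azimuth geometry and projection model):

```python
import numpy as np

def opti_acoustic_point(d, c_s, r):
    """Intersect an optical ray X = lam*d (unit direction d, optical
    camera at the origin) with the sonar range sphere ||X - c_s|| = r,
    where c_s is the sonar center.  Simplified illustration of
    opti-acoustic triangulation under an assumed geometry."""
    # ||lam*d - c_s||^2 = r^2  =>  lam^2 - 2*lam*(d.c_s) + ||c_s||^2 - r^2 = 0
    b = d @ c_s
    disc = b * b - (c_s @ c_s - r * r)
    if disc < 0:
        return None                     # ray misses the range sphere
    lam = b + np.sqrt(disc)             # positive root: point in front of camera
    return lam * d
```

When the target is farther from the camera than the opti-acoustic baseline, the quadratic has one positive root, which is the reconstructed depth along the optical ray.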
Application of Forward-Scan Sonar Stereo for 3-D Scene Reconstruction
The new generation of underwater 2-D forward-look sonar video cameras, operating at and above 1-MHz frequency, offers images with enhanced target details. Within their limited range of only tens of meters, they are the most suitable imaging systems for conducting visually guided missions in turbid waters. These include, but are not limited to, inspecting underwater structures for routine maintenance, search and surveillance, the detection, localization, and identification of small, sought-after objects, as well as target-based positioning and navigation. To this end, the automatic reconstruction of 3-D target shape and the establishment of the 3-D spatial locations of interest points are highly desired capabilities in many such operations. As with 2-D optical images, multiple images from nearby positions may be utilized. This paper investigates the estimation of 3-D point locations from two overlapping images collected with two forward-scan (FS) sonar systems in stereo configuration, or with one FS sonar at known relative poses established from visual motion cues. The first contribution is the analysis of the sonar stereo epipolar geometry. Beyond reducing the correspondence problem to a 1-D search along epipolar curves, this reveals unique properties associated with sonar measurements of range and azimuth angle. Next, certain linear closed-form reconstruction solutions are presented, their degeneracies are established, and adjustments under degenerate conditions are proposed. Two preferred methods, for the degenerate and nondegenerate stereo configurations, are identified: a regularization-based method and one employing a range-constraint approximation, respectively. These primarily provide the initial estimate for an iterative nonlinear optimization scheme based on gradient descent. Finally, the results of experiments with synthetic data are presented to assess the various estimation methods, and with real data sets to demonstrate performance under two subsea operational scenarios.
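A forward-scan sonar observes the range and azimuth of a scene point but not its elevation, which is why a single sonar image leaves every point on an elevation arc; this is the ambiguity that the stereo reconstruction above resolves. A minimal sketch of that measurement model, under an assumed axis convention (z up, azimuth measured in the x-y plane):

```python
import numpy as np

def sonar_measurement(X):
    """Forward-scan sonar observation of a 3-D point X = (x, y, z):
    range and azimuth are measured, elevation is lost.  Axis convention
    assumed for illustration: z up, azimuth in the x-y plane."""
    x, y, z = X
    r = np.linalg.norm(X)
    theta = np.arctan2(y, x)    # azimuth angle
    return r, theta

def elevation_arc(r, theta, phis):
    """All 3-D points consistent with one (range, azimuth) measurement,
    sampled at elevation angles phis -- the sonar's inherent ambiguity."""
    return np.stack([r * np.cos(phis) * np.cos(theta),
                     r * np.cos(phis) * np.sin(theta),
                     r * np.sin(phis)], axis=1)
```

Every point on the arc maps back to the same (range, azimuth) pair, which is what makes a second view (or known relative pose) necessary for 3-D reconstruction.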
Direct Computation of the FOE with Confidence Measures
We propose a direct method for locating the focus of expansion (FOE), based on simple parallel computations in selected regions of the image, each a circular patch around an estimated FOE. Simple computations determine the direction from the estimated FOE to the true FOE. The best estimate of the intersection of the so-called FOE constraint lines for several regions gives the location of the FOE. Further analysis allows us to assign a confidence measure to the information from each local region, in order to give more weight to the most reliable data. Hence, the FOE can be located more accurately, even when the data from some local regions lack sufficient information. Results of experiments with real images of various texture content demonstrate the performance of the method.
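Once each region contributes a constraint line and a confidence measure, the final step reduces to a weighted least-squares intersection of the lines. A sketch with a hypothetical data layout (each line written as n_i · p = c_i with unit normal n_i, offset c_i, and weight w_i; the paper derives the lines and confidences from the image data):

```python
import numpy as np

def intersect_foe_lines(normals, offsets, weights):
    """Weighted least-squares intersection of FOE constraint lines
    n_i . p = c_i, each weighted by a confidence w_i, giving the 2-D
    FOE estimate p.  Data layout here is hypothetical."""
    N = np.asarray(normals, dtype=float)        # (m, 2) line normals
    c = np.asarray(offsets, dtype=float)        # (m,) line offsets
    w = np.sqrt(np.asarray(weights, dtype=float))
    # Scale each residual by sqrt(w_i), then solve the normal equations.
    p, *_ = np.linalg.lstsq(N * w[:, None], c * w, rcond=None)
    return p
```

Down-weighting unreliable regions this way keeps a few poorly textured patches from pulling the intersection estimate off the true FOE.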
Passive navigation in a planar world (relative camera motion)
The motion of a camera relative to a planar surface induces a second-order motion field in the image. It has been shown that the eight independent coefficients of the resulting motion field can be used to recover the three-dimensional motion of the camera and the orientation of the planar patch in closed form, up to a two-fold ambiguity. It is shown here that, given the motion field up to the first-order terms for two planar patches in the scene, the three-dimensional motion of the camera, as well as the orientation of the two planes, can be recovered in closed form. In two special cases the orientation of the planes cannot be recovered; in such cases, the traditional one-plane methods can be used.
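The eight-coefficient second-order field referred to above is commonly written with shared quadratic coefficients in the two flow components; a sketch of one standard parameterization, used here purely for illustration:

```python
import numpy as np

def planar_motion_field(a, x, y):
    """Second-order image motion field induced by a planar surface,
    in the common eight-parameter form
        u = a1 + a2*x + a3*y + a7*x^2 + a8*x*y
        v = a4 + a5*x + a6*y + a7*x*y + a8*y^2
    Note the quadratic coefficients a7, a8 are shared between u and v,
    which is why only eight coefficients are independent."""
    a1, a2, a3, a4, a5, a6, a7, a8 = a
    u = a1 + a2 * x + a3 * y + a7 * x * x + a8 * x * y
    v = a4 + a5 * x + a6 * y + a7 * x * y + a8 * y * y
    return u, v
```

Evaluating the field over an image region with x and y as NumPy grids yields the dense flow a planar patch would induce under the given motion.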