Two-dimensional forward-looking sonar image registration by maximization of peripheral mutual information
Camera motion estimation through planar deformation determination
In this paper, we propose a global method for estimating the motion of a
camera filming a static scene. Our approach is direct, fast, and robust, and
operates on adjacent frames of a sequence. It is based on a quadratic
approximation of the deformation between two images, under the assumption of a
scene with constant depth in the camera coordinate system. This condition is very
restrictive, but we show that, provided the translation and the variations of
inverse depth are small enough, the optical-flow error induced by approximating
depth by a constant is small. In this context, we propose a new model of
camera motion that separates the image deformation into a similarity
and a ``purely'' projective map, due to the change of optical-axis
direction. This model leads to a quadratic approximation of the image deformation,
which we estimate with an M-estimator; the camera motion parameters can then be
deduced immediately. Comment: 21 pages, revised version accepted 20 March 200
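The quadratic deformation model can be fitted robustly from sparse correspondences. Below is a minimal sketch, assuming a generic 8-parameter quadratic flow parameterization (the paper's similarity-plus-projective decomposition is not reproduced here) and Huber-weighted iteratively reweighted least squares as the M-estimator:

```python
import numpy as np

def fit_quadratic_flow(pts, flow, iters=10, delta=1.0):
    """Robustly fit an 8-parameter quadratic flow model
        u = a0 + a1*x + a2*y + a6*x^2 + a7*x*y
        v = a3 + a4*x + a5*y + a6*x*y + a7*y^2
    to sparse correspondences via IRLS with Huber weights.
    (Hypothetical parameterization; the paper's model differs.)"""
    x, y = pts[:, 0], pts[:, 1]
    n = len(x)
    zeros, ones = np.zeros(n), np.ones(n)
    # Stack the u- and v-equations into one linear system A p = b.
    Au = np.column_stack([ones, x, y, zeros, zeros, zeros, x * x, x * y])
    Av = np.column_stack([zeros, zeros, zeros, ones, x, y, x * y, y * y])
    A = np.vstack([Au, Av])
    b = np.concatenate([flow[:, 0], flow[:, 1]])
    w = np.ones(len(b))
    for _ in range(iters):
        sw = np.sqrt(w)
        p, *_ = np.linalg.lstsq(A * sw[:, None], sw * b, rcond=None)
        r = b - A @ p
        s = 1.4826 * np.median(np.abs(r)) + 1e-12      # robust scale (MAD)
        # Huber weights: full weight for small residuals, downweight outliers.
        w = np.minimum(1.0, delta * s / np.maximum(np.abs(r), 1e-12))
    return p
```

With exact inliers and a modest fraction of gross outliers, the Huber reweighting drives the outlier weights toward zero and recovers the model parameters.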
The feasibility of motion and structure from noisy time-varying image velocity information
On 3-D reconstruction from stereo FS sonar imaging
A new generation of 2-D sonar cameras operating at 1-2 MHz provides images with enhanced target details in highly turbid waters, although range is reduced by one to two orders of magnitude compared to traditional low-/mid-frequency systems. They are therefore suitable imaging systems for the short-range inspection of underwater structures. As with 2-D optical images, multiple images of an object from nearby viewing positions may be utilized for 3-D shape reconstruction, based on visual cues similar to motion parallax. In this paper, we address the reconstruction of 3-D points from two sonar views, acquired either simultaneously by two cameras or by a single camera at two known relative positions. We investigate a number of linear algorithms for 3-D reconstruction from matches in two views and examine some degenerate configurations. While these algorithms do not provide an optimal solution, e.g., in the maximum-likelihood sense, they can offer a good initial condition to ensure effective convergence of the ML estimate. We present results of experiments with synthetic and real data in support of our theoretical contributions.
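To illustrate why two sonar views suffice to recover a 3-D point, the sketch below resolves the elevation angle, which a single range/azimuth measurement leaves unobserved, by a brute-force 1-D grid search against the second view. This is an illustrative stand-in, not one of the paper's linear algorithms, and the range/azimuth projection convention is an assumption:

```python
import numpy as np

def sonar_project(X):
    """Project a 3-D point (sonar frame) to a (range, azimuth) measurement.
    Assumed convention: azimuth measured in the x-y plane; elevation unobserved."""
    return np.linalg.norm(X), np.arctan2(X[1], X[0])

def triangulate_two_view(meas1, meas2, R, t, n_grid=2001,
                         max_elev=np.deg2rad(10)):
    """Recover a 3-D point from matched sonar measurements (r, theta) in two
    views, given the relative pose X2 = R @ X1 + t, by a 1-D search over the
    unknown elevation angle (illustrative; not the paper's linear methods)."""
    r1, th1 = meas1
    r2, th2 = meas2
    best, best_X = np.inf, None
    for phi in np.linspace(-max_elev, max_elev, n_grid):
        # Candidate 3-D point in view 1 at elevation phi.
        X1 = r1 * np.array([np.cos(phi) * np.cos(th1),
                            np.cos(phi) * np.sin(th1),
                            np.sin(phi)])
        rp, thp = sonar_project(R @ X1 + t)
        err = (rp - r2) ** 2 + (thp - th2) ** 2
        if err < best:
            best, best_X = err, X1
    return best_X  # point expressed in view-1 coordinates
```

Degenerate configurations, as examined in the paper, show up here as error curves with two equally good minima.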
Revised definition of optical flow: integration of radiometric and geometric cues for dynamic scene analysis
Optical flow has been commonly defined as the apparent motion of image brightness patterns in an image sequence. In this paper, we propose a revised definition to overcome the shortcomings of interpreting optical flow merely as a geometric transformation field. The new definition is a complete representation of the geometric and radiometric variations in dynamic imagery. We argue that this is more consistent with the common interpretation of optical flow induced by various scene events. This leads to a general framework for the investigation of problems in dynamic scene analysis, based on the integration and unified treatment of both geometric and radiometric cues in time-varying imagery. We discuss selected models, including the generalized dynamic image model, for the estimation of optical flow. We show how various types of 3-D scene information are encoded in, and thus may be extracted from, the geometric and radiometric components of optical flow. We provide selected examples based on experiments with real images.
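The joint treatment of geometric and radiometric cues can be made concrete with a small sketch. Assuming a simplified brightness-change constraint of the form Ix*u + Iy*v + It = m*I + c, the flow (u, v) and the radiometric multiplier/offset rates (m, c) can be estimated together by least squares over a window. This only illustrates the idea; the paper's generalized dynamic image model is richer:

```python
import numpy as np

def estimate_gdim_flow(Ix, Iy, It, I):
    """Jointly estimate geometric flow (u, v) and radiometric rates (m, c)
    from image derivatives over a window, under the (assumed, simplified)
    generalized brightness-change constraint
        Ix*u + Iy*v + It = m*I + c.
    Each pixel contributes one row of the linear system A @ [u,v,m,c] = b."""
    A = np.column_stack([Ix.ravel(), Iy.ravel(),
                         -I.ravel(), -np.ones(I.size)])
    b = -It.ravel()
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    u, v, m, c = params
    return u, v, m, c
```

Setting m = c = 0 recovers the classical brightness-constancy formulation; nonzero m, c absorb illumination changes that would otherwise corrupt the geometric flow estimate.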
Visual motion ambiguities of a plane in 2-D FS sonar motion sequences
► Addressing plane motion interpretation ambiguities in underwater 2-D forward-scan sonar images.
► The same sonar matches in two views arise from two different motions, even with pure translation or rotation.
► The corresponding world planes are mirror images with respect to the sonar image plane.
► The two solutions are often both feasible motions, not resolved by physical (e.g., visibility) constraints.
► Applications include sonar-based visual servoing of subsea platforms in turbid environments.
Sonar is the most common underwater imaging modality, and high-resolution, high-data-rate 2-D video systems have been emerging in recent years. As with visually guided terrestrial robot navigation and target-based positioning, estimating 3-D motion by tracking features in recorded 2-D sonar images is a highly desirable capability for submersible platforms. Additionally, theoretical results on the robustness and multiplicity of solutions constitute important fundamental findings, given the nature of sonar data: high noise levels, narrow field-of-view coverage, scarcity of robust features, and incorrect matches.
This paper explores the inherent ambiguities in interpreting 3-D motion and scene structure from 2-D forward-scan sonar image sequences. Analyzing the sonar image motion transformation model, which depends on the affine components of the projective transformation (or homography) between two views of a plane, we show that two interpretations are commonly inferred. The true and spurious planes form mirror images relative to the zero-elevation plane of the sonar reference frame. Even under pure rotation or pure translation, a spurious motion exists comprising both translational and rotational components. In some cases, the two solutions share certain motion components, where the imaged surface becomes parallel to a plane defined by two of the sonar coordinate axes. A unique solution exists only under the very special condition that the sonar motion aligns the imaged plane with the zero-elevation plane. We also derive the relationship between the two interpretations, thus allowing closed-form computation of both solutions.
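The root of these ambiguities is that a forward-scan sonar measures range and azimuth but not elevation, so a scene reflected across the zero-elevation plane produces an identical image. A minimal NumPy sketch of this single-view mirror symmetry (the projection convention is an assumption; the paper's two-view motion ambiguity builds on it):

```python
import numpy as np

def sonar_image(points):
    """Forward-scan sonar projection: each 3-D point (sonar frame) maps to
    (range, azimuth); elevation is not observed. Assumed convention:
    azimuth measured in the x-y (zero-elevation) plane."""
    r = np.linalg.norm(points, axis=1)
    theta = np.arctan2(points[:, 1], points[:, 0])
    return np.column_stack([r, theta])

# A planar patch and its mirror image across the zero-elevation plane z = 0.
rng = np.random.default_rng(2)
xy = rng.uniform([3, -1], [6, 1], (50, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 0.3     # points on a world plane
plane = np.column_stack([xy, z])
mirror = plane * np.array([1, 1, -1])          # reflect across z = 0

# Both scenes produce identical sonar measurements: range depends on z
# only through z**2, and azimuth not at all.
assert np.allclose(sonar_image(plane), sonar_image(mirror))
```

Since both planes are equally consistent with the measurements, matches tracked across two views admit the true motion and a spurious one, as analyzed above.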
A new method for calibration of an opti-acoustic stereo imaging system
Range measurements offer a key advantage in deploying a sonar and an optical camera as a multi-modal stereo imaging system for underwater 3-D object shape recovery. Establishing the relative pose of the two cameras by a priori calibration enables exploiting the opti-acoustic epipolar geometry and confines the complex multi-modal correspondence problem to a 1-D search. By imaging points on a known planar grid with distinct visual and acoustic reflectance properties, we previously proposed an iterative optimization scheme for opti-acoustic stereo calibration, requiring a search in a (6+3N)-dimensional space based on N opti-acoustic stereo pairs of the grid at different orientations. In this paper, we present a new calibration method in which most of the computations are carried out in closed form, with a final search in 3-D space. Although extendable to make use of data from several views, the current implementation applies to only one view of the target grid; thus, the results can be less accurate than the solution from the iterative method for sonar cameras with a small field of view. We assess the performance of this method and discuss its merits based on experimental results with synthetic and real data.
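The 1-D confinement can be seen directly: once the relative pose is calibrated, a sonar range measurement restricts the match for an optical ray X = lam * d to the roots of a quadratic in the depth lam. A sketch under assumed conventions (this illustrates the epipolar-style constraint, not the paper's calibration procedure):

```python
import numpy as np

def depths_from_sonar_range(d, R, t, r):
    """Given an optical ray X = lam * d (camera frame), the relative pose
    Xs = R @ X + t into the sonar frame, and a sonar range measurement r,
    solve ||R @ (lam * d) + t|| = r for lam. Squaring gives
        (d.d) lam^2 + 2 ((R d).t) lam + (t.t - r^2) = 0,
    so the range confines the match to at most two depths along the ray."""
    a = d @ d                      # ||R d||^2 = ||d||^2 for a rotation R
    b = 2.0 * (R @ d) @ t
    c = t @ t - r ** 2
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return []                  # the ray never reaches that range
    s = np.sqrt(disc)
    return [lam for lam in ((-b - s) / (2 * a), (-b + s) / (2 * a))
            if lam > 0]            # keep depths in front of the camera
```

Intersecting this range constraint with the sonar azimuth measurement then pins the correspondence down further, which is what makes the 1-D search practical.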