71 research outputs found
Model-based object recognition from a complex binary imagery using genetic algorithm
This paper describes a technique for model-based object recognition in a noisy and cluttered environment, extending the work presented in an earlier study by the authors. In order to accurately model small irregularly shaped objects, the model and the image are represented by their binary edge maps, rather than approximating them with straight line segments. The problem is then formulated as that of finding the best match between a hypothesized object and the image. A special form of template matching is used to deal with the noisy environment, where the templates are generated on-line by a Genetic Algorithm. For the experiments, two complex test images have been considered, and the results, when compared with standard techniques, indicate the scope for further research in this direction.
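The abstract does not give the authors' algorithm in detail; the following is a minimal illustrative sketch of genetic-algorithm-driven template matching on binary edge maps, with all function names, parameters, and the fitness measure (fraction of model edge pixels landing on image edge pixels) chosen here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def match_score(image, model, x, y):
    """Fraction of model edge pixels that land on image edge pixels
    when the binary template is placed at offset (x, y)."""
    h, w = model.shape
    window = image[y:y + h, x:x + w]
    if window.shape != model.shape:
        return 0.0
    return (window & model).sum() / max(model.sum(), 1)

def ga_template_match(image, model, pop=40, gens=60, mut=3):
    """Evolve (x, y) template placements by elitist selection,
    crossover of parent coordinates, and integer mutation."""
    H, W = image.shape
    h, w = model.shape
    xs = rng.integers(0, W - w, pop)
    ys = rng.integers(0, H - h, pop)
    for _ in range(gens):
        fit = np.array([match_score(image, model, x, y)
                        for x, y in zip(xs, ys)])
        order = np.argsort(fit)[::-1]
        xs, ys = xs[order], ys[order]
        half = pop // 2  # elitism: keep the fitter half
        parents = rng.integers(0, half, (pop - half, 2))
        child_x = xs[parents[:, 0]] + rng.integers(-mut, mut + 1, pop - half)
        child_y = ys[parents[:, 1]] + rng.integers(-mut, mut + 1, pop - half)
        xs[half:] = np.clip(child_x, 0, W - w)
        ys[half:] = np.clip(child_y, 0, H - h)
    best = np.argmax([match_score(image, model, x, y)
                      for x, y in zip(xs, ys)])
    return int(xs[best]), int(ys[best])
```

A population member here encodes only a translation; the paper's hypotheses would also cover rotation and other pose parameters.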
Co-dimension 2 Geodesic Active Contours for MRA Segmentation
Automatic and semi-automatic magnetic resonance angiography (MRA) segmentation techniques can potentially save radiologists large amounts of time required for manual segmentation and can facilitate further data analysis. The proposed MRA segmentation method uses a mathematical modeling technique which is well-suited to the complicated curve-like structure of blood vessels. We define the segmentation task as an energy minimization over all 3D curves and use a level set method to search for a solution. Our approach is an extension of previous level set segmentation techniques to higher co-dimension.
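The abstract does not state the energy functional; as a hedged sketch, geodesic active contour methods in this line of work (following Caselles et al.) minimize curve length in a conformal metric weighted by an edge indicator:

```latex
E(C) \;=\; \int_{0}^{L(C)} g\bigl(\lvert \nabla I(C(s)) \rvert \bigr)\, ds,
\qquad
g(r) \;=\; \frac{1}{1 + r^{2}},
```

where $I$ is the image, $C(s)$ is an arc-length-parameterized curve, and $g$ is small at strong edges so that minimizers lock onto vessel boundaries. The co-dimension 2 aspect is that the curves are one-dimensional objects embedded in 3D (co-dimension 2) rather than surfaces (co-dimension 1), which requires a generalized level set evolution in the style of Ambrosio and Soner rather than the classical formulation.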
Superquadric-Based Object Recognition
This paper proposes a technique for object recognition using models built from superquadrics. Superquadrics, which are three-dimensional models suitable for part-level representation of objects, are reconstructed from range images using the recover-and-select paradigm. Using an interpretation tree, the presence of an object from the model database in the scene can be hypothesized. These hypotheses are verified by projecting and re-fitting the object model to the range image, which at the same time enables a better localization of the object in the scene.
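For concreteness, the standard superquadric inside-outside function that such recovery methods fit to range data can be sketched as follows (the fitting and selection machinery of the recover-and-select paradigm is not shown):

```python
import numpy as np

def superquadric_F(x, y, z, a1, a2, a3, e1, e2):
    """Inside-outside function of a superquadric in its canonical frame.
    a1, a2, a3 are the semi-axis lengths; e1, e2 the shape exponents.
    F < 1 inside the surface, F = 1 on it, F > 1 outside."""
    return (((x / a1) ** 2) ** (1.0 / e2) +
            ((y / a2) ** 2) ** (1.0 / e2)) ** (e2 / e1) + \
           ((z / a3) ** 2) ** (1.0 / e1)
```

Setting e1 = e2 = 1 gives an ellipsoid; values near 0 give box-like shapes and values near 2 give diamond-like ones, which is what makes the family suitable for part-level modeling. Recovery fits these parameters (plus a rigid pose) to range points by least squares on a function of F.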
Smoothing and Matching of 3-D Space Curves
We present a new approach to the problem of matching 3-D curves. The approach has a low algorithmic complexity in the number of models and can operate in the presence of noise and partial occlusions. Our method builds upon the seminal work of Kishon et al. (1990), where curves are first smoothed using B-splines, with matching based on hashing using curvature and torsion measures. However, we introduce two enhancements: (1) We make use of nonuniform B-spline approximations, which permits us to better retain information at high-curvature locations. The spline approximations are controlled (i.e., regularized) by making use of normal vectors to the surface in 3-D on which the curves lie, and by an explicit minimization of a bending energy. These measures allow a more accurate estimation of position, curvature, torsion, and Frenet frames along the curve. (2) The computational complexity of the recognition process is relatively independent of the number of models and is considerably decreased by explicit use of the Frenet frame for hypothesis generation. As opposed to previous approaches, the method better copes with partial occlusion. Moreover, following a statistical study of the curvature and torsion covariances, we optimize the hash table discretization and discover improved invariants for recognition, different from the torsion measure. Finally, knowledge of invariant uncertainties is used to compute an optimal global transformation using an extended Kalman filter. We present experimental results using synthetic data and also using characteristic curves extracted from 3-D medical images. An earlier version of this article was presented at the 2nd European Conference on Computer Vision in Italy.
Differential Geometry, Surface Patches and Convergence Methods
The problem of constructing a surface from the information provided by the Marr-Poggio theory of human stereo vision is investigated. It is argued that not only does this theory provide explicit boundary conditions at certain points in the image, but that the imaging process also provides implicit conditions on all other points in the image. This argument is used to derive conditions on possible algorithms for computing the surface. Additional constraining principles are applied to the problem; specifically, that the process be performable by a local-support parallel network. Some mathematical tools relevant to the problem of constructing the surface (differential geometry, Coons surface patches, and iterative methods of convergence) are outlined. Specific methods for actually computing the surface are examined.
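A local-support parallel network for surface construction can be illustrated by the simplest iterative convergence scheme of this kind: Jacobi relaxation, where each grid point is repeatedly replaced by the average of its four neighbors while the known stereo depth values act as fixed boundary conditions. This is a generic sketch, not the specific algorithms examined in the paper.

```python
import numpy as np

def relax_surface(depth, known, iters=500):
    """Fill a sparse depth map by iterative local averaging (Jacobi
    relaxation of Laplace's equation). depth holds values at known
    sites; known is a boolean mask of those sites, held fixed."""
    z = depth.astype(float).copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                      np.roll(z, 1, 1) + np.roll(z, -1, 1))
        z = np.where(known, depth, avg)  # clamp the known samples
    return z
```

Each update uses only a point's immediate neighbors, so the scheme is directly performable by a local-support parallel network; smoother results come from higher-order functionals (e.g., thin-plate energies) in place of the Laplacian.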
A Computer Implementation of a Theory of Human Stereo Vision
Recently, Marr and Poggio (1979) presented a theory of human stereo vision. An implementation of that theory is presented and consists of five steps: (1) The left and right images are each filtered with masks of four sizes that increase with eccentricity; the shape of these masks is given by ∇²G, the Laplacian of a Gaussian function. (2) Zero-crossings in the filtered images are found along horizontal scan lines. (3) For each mask size, matching takes place between zero-crossings of the same sign and roughly the same orientation in the two images, for a range of disparities up to about the width of the mask's central region. Within this disparity range, Marr and Poggio showed that false targets pose only a simple problem. (4) The output of the wide masks can control vergence movements, thus causing small masks to come into correspondence and allowing a transition from dealing with large disparities at a low resolution to dealing with small disparities at a high resolution. (5) When a correspondence is achieved, it is stored in a dynamic buffer, called the 2 1/2-dimensional sketch. To support the sufficiency of the Marr-Poggio model of human stereo vision, the implementation was tested on a wide range of stereograms from the human stereopsis literature. The performance of the implementation is illustrated and compared with human perception. As well, statistical assumptions made by Marr and Poggio are supported by comparison with statistics found in practice. Finally, the process of implementing the theory has led to the clarification and refinement of a number of details within the theory; these are discussed in detail.
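Steps (1) and (2) can be sketched directly: filter with the Laplacian of a Gaussian, then mark sign changes along horizontal scan lines. This is a minimal sketch of those two steps only, using SciPy's built-in LoG filter rather than the original multi-scale mask bank.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def scanline_zero_crossings(image, sigma):
    """Filter an image with the Laplacian of a Gaussian of scale
    sigma, then mark zero-crossings along horizontal scan lines."""
    f = gaussian_laplace(image.astype(float), sigma)
    # a zero-crossing lies between horizontally adjacent pixels
    # whose filtered values differ in sign
    zc = np.zeros_like(f, dtype=bool)
    zc[:, 1:] = np.signbit(f[:, 1:]) != np.signbit(f[:, :-1])
    return f, zc
```

In the full implementation, four mask sizes are run and their zero-crossings matched independently (step 3), with the sign of the filtered response on each side of the crossing supplying the "same sign" matching constraint.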
Binocular Shading and Visual Surface Reconstruction
Zero-crossing- or feature-point-based stereo algorithms can, by definition, determine explicit depth information only at particular points in the image. To compute a complete surface description, this sparse depth map must be interpolated. A computational theory of this interpolation or reconstruction process, based on a surface consistency constraint, has previously been proposed. In order to provide stronger boundary conditions for the interpolation process, other visual cues to surface shape are examined in this paper. In particular, it is shown that, in principle, shading information from the two views can be used to determine the orientation of the surface normal along the feature-point contours, as well as the parameters of the reflective properties of the surface material. The numerical stability of the resulting equations is also examined.