Surface Orientation from Texture Autocorrelation
We report on a refinement of our technique for determining the orientation of a textured surface from the two-point autocorrelation function of its image. We replace our previous assumption of isotropic texture with knowledge of the autocorrelation moment matrix of the texture when viewed head-on. The orientation of a textured surface is then deduced from the effects of foreshortening on these autocorrelation moments. The technique is applied to natural images of planar textured surfaces and gives significantly improved results on anisotropic textures which, under the assumption of isotropy, mimic the effects of projective foreshortening. The potential practicality of this method for higher-level image understanding systems is discussed.
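As a rough illustration of the idea (not the authors' implementation), the isotropic baseline that this work refines can be sketched as follows: measure the 2x2 second-moment matrix of the image autocorrelation and read the foreshortening off its eigenstructure. The function names and the positive-part weighting of the autocorrelation are our own assumptions.

```python
import numpy as np

def acf_moment_matrix(patch):
    # Two-point autocorrelation via FFT, then its 2x2 second-moment matrix.
    f = np.fft.fft2(patch - patch.mean())
    acf = np.fft.fftshift(np.real(np.fft.ifft2(f * np.conj(f))))
    h, w = acf.shape
    ys, xs = np.mgrid[:h, :w]
    ys, xs = ys - h // 2, xs - w // 2
    wgt = np.clip(acf, 0, None)          # assumed weighting: positive part only
    wgt = wgt / wgt.sum()
    mxy = np.sum(wgt * xs * ys)
    return np.array([[np.sum(wgt * xs * xs), mxy],
                     [mxy, np.sum(wgt * ys * ys)]])

def slant_tilt(M):
    # Under isotropy, orthographic foreshortening by slant sigma compresses
    # the autocorrelation along the tilt axis, so the eigenvalue ratio of M
    # is approximately cos^2(sigma); the minor eigenvector gives the tilt axis.
    evals, evecs = np.linalg.eigh(M)     # ascending eigenvalue order
    cos_sigma = np.sqrt(evals[0] / evals[1])
    slant = np.degrees(np.arccos(np.clip(cos_sigma, 0.0, 1.0)))
    tilt = np.degrees(np.arctan2(evecs[1, 0], evecs[0, 0]))  # axis, mod 180
    return slant, tilt
```

The refinement described in the abstract would replace the isotropy assumption by first normalizing the measured moments against the known head-on moment matrix.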
Shape from periodic texture using the eigenvectors of local affine distortion
This paper shows how the local slant and tilt angles of regularly textured curved surfaces can be estimated directly, without the need for iterative numerical optimization. We work in the frequency domain and measure texture distortion using the affine distortion of the pattern of spectral peaks. The key theoretical contribution is to show that the directions of the eigenvectors of the affine distortion matrices can be used to estimate the local slant and tilt angles of tangent planes to curved surfaces. In particular, the leading eigenvector points in the tilt direction. Although not as geometrically transparent, the direction of the second eigenvector can be used to estimate the slant direction. The required affine distortion matrices are computed from the correspondences between spectral peaks, established on the basis of their energy ordering. We apply the method to a variety of real-world and synthetic imagery.
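A minimal sketch of the two computational steps named above, under our own simplifying assumptions: recover the 2x2 affine distortion from corresponded peak positions by least squares, then take the direction of the eigenvector with the largest-magnitude eigenvalue as the tilt estimate. The exact sign and frequency-domain conventions of the original method are not reproduced here.

```python
import numpy as np

def affine_from_peaks(frontal, observed):
    """Least-squares 2x2 affine A with observed ~= A @ frontal.
    frontal, observed: 2xN arrays of corresponded spectral-peak positions."""
    X, *_ = np.linalg.lstsq(frontal.T, observed.T, rcond=None)
    return X.T

def tilt_from_affine(A):
    """Direction (degrees) of the eigenvector with the largest |eigenvalue|;
    per the abstract, this leading eigenvector points along the tilt."""
    evals, evecs = np.linalg.eig(A)
    v = np.real(evecs[:, np.argmax(np.abs(evals))])
    return np.degrees(np.arctan2(v[1], v[0]))
```

With at least two non-collinear peak correspondences the least-squares system is exactly determined, so synthetic distortions are recovered to machine precision.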
Image Understanding and Robotics Research at Columbia University
The research investigations of the Vision/Robotics Laboratory at Columbia University reflect the diversity of interests of its four faculty members, two staff programmers, and 15 Ph.D. students. Several of the projects involve either a visiting computer science post-doc, other faculty members in the department or the university, or researchers at AT&T Bell Laboratories or Philips Laboratories. We list below a summary of our interests and results, together with the principal researchers associated with them. Since it is difficult to separate those aspects of robotic research that are purely visual from those that are vision-like (for example, tactile sensing) or vision-related (for example, integrated vision-robotic systems), we have listed all robotic research that is not purely manipulative.
Image Understanding and Robotics Research at Columbia University
Over the past year, the research investigations of the Vision/Robotics Laboratory at Columbia University have reflected the interests of its four faculty members, two staff programmers, and 16 Ph.D. students. Several of the projects involve other faculty members in the department or the university, or researchers at AT&T, IBM, or Philips. We list below a summary of our interests and results, together with the principal researchers associated with them. Since it is difficult to separate those aspects of robotic research that are purely visual from those that are vision-like (for example, tactile sensing) or vision-related (for example, integrated vision-robotic systems), we have listed all robotic research that is not purely manipulative. The majority of our current investigations are deepenings of work reported last year; this was the second year of both our basic Image Understanding contract and our Strategic Computing contract. Therefore, the form of this year's report closely resembles last year's. Although there are a few new initiatives, mainly we report the new results we have obtained in the same five basic research areas. Much of this work is summarized on a video tape that is available on request. We also note two service contributions this past year. The Special Issue on Computer Vision of the Proceedings of the IEEE, August, 1988, was co-edited by one of us (John Kender [27]). And the upcoming IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June, 1989, is co-program chaired by one of us (John Kender [23]).
Shape From Textures: A Paradigm for Fusing Middle Level Vision Cues
This research proposes a new approach to the problem of deriving the orientation, segmentation, and classification of surfaces based on multiple independent textural cues. The generality of this approach derives from the interaction between textural cues, allowing it to extract shape information from a wider range of textured surfaces than any individual method. The method consists of three major phases: the calculation of orientation constraints for subimage elements called "texel patches", the consolidation of constraints into a "most likely" orientation per patch, and finally the reconstruction of the surface. During the first phase, the different shape-from-texture components generate augmented texels. Each augmented texel consists of the 2-D description of a texel patch and a list of weighted constraints on its orientation. The orientation constraints for each patch are potentially inconsistent or potentially incorrect because the shape-from methods are applied to noisy images, are locally based, and derive constraints without a priori knowledge of the type of texture or number of surfaces. The constraints are weighted by each shape-from method based on an intra-cue correctness factor. This factor attempts to measure how closely the constraint fulfills the underlying assumptions of the cue. The orientation constraints' weights are then normalized between cues in order to assure that no cue predominates unfairly. In the second phase, all the orientation constraints for each augmented texel are consolidated into a single "most likely" orientation by a Hough-like transformation on a tessellated Gaussian sphere. The system iteratively reanalyzes each of the texel patches, calculating the "most likely" orientations for each patch. Finally, the system re-analyzes the orientation constraints to determine which augmented texels are part of the same constraint family and which cues were used to generate the valid constraints.
In effect, this both segments the image into regions of similar orientation and supplies texture classification information. The robustness of this approach is illustrated by a system that fuses the orientation constraints of five shape-from cues and solves real camera-acquired imagery.
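The consolidation phase can be sketched as a simple weighted vote. The sketch below accumulates weighted (slant, tilt) constraints on a coarse slant/tilt grid, a stand-in for the tessellated Gaussian sphere of the original system, and returns the peak bin as the "most likely" orientation; the bin counts and the constraint format are our own assumptions.

```python
import numpy as np

def consolidate_orientation(constraints, n_slant=18, n_tilt=36):
    """Hough-like accumulation of weighted orientation constraints.
    constraints: iterable of (slant_deg, tilt_deg, weight) tuples,
    slant in [0, 90), tilt in [0, 360). Returns the centre of the
    highest-weight bin as the 'most likely' (slant, tilt)."""
    acc = np.zeros((n_slant, n_tilt))
    for slant, tilt, w in constraints:
        i = min(int(slant / 90.0 * n_slant), n_slant - 1)
        j = int((tilt % 360.0) / 360.0 * n_tilt) % n_tilt
        acc[i, j] += w                      # each cue votes with its weight
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return (i + 0.5) * 90.0 / n_slant, (j + 0.5) * 360.0 / n_tilt
```

Two nearby constraints from different cues fall into the same bin and together outvote a single discrepant constraint, which is the intended fusion behaviour.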
Modeling, Estimation, and Pattern Analysis of Random Texture on 3-D Surfaces
To recover 3-D structure from a shaded and textured surface image, neither shape-from-shading nor shape-from-texture analysis alone is sufficient, because radiance and texture information coexist within the scene surface. A new 3-D texture model is developed by considering the scene image as the superposition of a smooth shaded image and a random texture image. To describe the random part, orthographic projection is adopted to account for the anisotropic intensity distribution caused by the slant and tilt of a 3-D textured surface, and the Fractional Differencing Periodic (FDP) model is chosen to describe the random texture, because this model can simultaneously represent the coarseness and the pattern of the 3-D textured surface and is flexible enough to synthesize both long-term and short-term correlation structures of random texture. Since the object is described by a model with several free parameters whose values are determined directly from its projected image, it is possible to extract 3-D information and the texture pattern directly from the image without any preprocessing; the cumulative error from each preprocessing step is thus minimized. For parameter estimation, a hybrid method combining least-squares and maximum likelihood estimates is applied, and both the estimation and the synthesis are performed in the frequency domain. Among the texture pattern features obtainable from a single surface image, the fractal scaling parameter plays a major role in classifying and/or segmenting texture patterns tilted and slanted by 3-D rotation, because of its rotational and scaling invariance.
Also, since the fractal scaling factor represents the coarseness of the surface, each texture pattern has its own fractal scale value; in particular, at the boundary between different textures it takes a relatively higher value than within a single texture. Based on these facts, a new classification method and a segmentation scheme for 3-D rotated texture patterns are developed.
Modeling of Locally Scaled Spatial Point Processes, and Applications in Image Analysis
Spatial point processes provide a statistical framework for modeling random arrangements of objects, which is relevant in a variety of scientific disciplines, including ecology, spatial epidemiology, and materials science. Describing systematic spatial variations within this framework and developing methods for estimating parameters from empirical data constitute an active area of research. Image analysis, in particular, provides a range of scenarios to which point process models are applicable. Typical examples are images of trees in remote sensing, cells in biology, or composite structures in materials science. Due to its real-world orientation and versatility, the class of recently developed locally scaled point processes appears particularly suitable for modeling spatial object patterns. An unknown normalizing constant in the likelihood, however, complicates inference and requires elaborate techniques. This work presents an efficient Bayesian inference concept for locally scaled point processes. The suggested optimization procedure is applied to images of cross-sections through the stems of maize plants, where the goal is to accurately describe and classify different genotypes based on the spatial arrangement of their vascular bundles. A further spatial point process framework is provided specifically for the estimation of shape from texture. Texture learning and the estimation of surface orientation are two important tasks in pattern analysis and computer vision. Given the image of a scene in three-dimensional space, a frequent goal is to derive global geometrical knowledge, e.g. information on camera positioning and angle, from the local textural characteristics in the image. The proposed statistical framework comprises locally scaled point process strategies as well as the draft of a Bayesian marked point process model for inferring shape from texture.
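To give a feel for the modeling idea (a deliberately simplified stand-in, not the locally scaled processes of this work, which generally have interaction terms and an intractable normalizing constant), the sketch below simulates a Poisson point pattern whose local intensity is driven by a user-supplied local scale function via independent thinning: where the local scale is small, points are dense, mimicking the foreshortened part of a textured surface. All names and the intensity form are our own assumptions.

```python
import numpy as np

def locally_scaled_poisson(scale, lam0=200.0, rng=None):
    """Simulate a Poisson point pattern on the unit square with local
    intensity lam0 / scale(x, y)**2: smaller local scale => denser points.
    Realized by thinning a homogeneous Poisson process at the maximum
    intensity (found crudely on a grid)."""
    rng = rng or np.random.default_rng()
    g = np.linspace(0.0, 1.0, 50)
    lam_max = lam0 / np.min(scale(*np.meshgrid(g, g))) ** 2
    n = rng.poisson(lam_max)                 # homogeneous dominating process
    pts = rng.random((n, 2))
    lam = lam0 / scale(pts[:, 0], pts[:, 1]) ** 2
    keep = rng.random(n) < lam / lam_max     # independent thinning
    return pts[keep]
```

With, say, scale(x, y) = 0.5 + x, the pattern is dense near x = 0 and sparse near x = 1, which is the kind of systematic spatial variation (e.g. texture-density gradients under perspective) that these models are designed to capture.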