7 research outputs found

    Shape-adapted smoothing in estimation of 3-D depth cues from affine distortions of local 2-D brightness structure

    Rotationally symmetric smoothing operations in the image domain may give rise to shape distortions. This article describes a way of reducing this effect for a general class of methods for deriving 3-D shape cues from 2-D image data, which are based on the estimation of locally linearized distortions of brightness patterns. By extending the linear scale-space concept into an affine scale-space representation and performing affine shape adaptation of the smoothing kernels, the accuracy of surface orientation estimates derived from texture and disparity cues can be improved by typically one order of magnitude. The reason for this is that the image descriptors on which the methods are based become relative invariants under affine transformations, and the error is thus confined to the higher-order terms in the locally linearized perspective mapping.
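
    As a brief sketch of the construction the abstract refers to (standard scale-space notation, assumed here rather than quoted from the paper), the affine Gaussian scale-space replaces the scalar variance of the rotationally symmetric Gaussian with a covariance matrix ÎŁ:

```latex
% Affine Gaussian kernel and the associated scale-space representation
g(x;\,\Sigma) \;=\; \frac{1}{2\pi\sqrt{\det\Sigma}}
  \exp\!\left(-\tfrac{1}{2}\, x^{\top}\Sigma^{-1}x\right),
\qquad
L(\cdot;\,\Sigma) \;=\; g(\cdot;\,\Sigma) * f
```

    Rotationally symmetric smoothing is the special case ÎŁ = tI; affine shape adaptation instead selects a ÎŁ matched to the local brightness structure, which is what removes the first-order bias in the orientation estimates.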

    Shape-adapted smoothing in estimation of 3-D depth cues from affine distortions of local 2-D brightness structure

    This article describes a method for reducing the shape distortions due to scale-space smoothing that arise in the computation of 3-D shape cues using operators (derivatives) defined from a scale-space representation. More precisely, we are concerned with a general class of methods for deriving 3-D shape cues from 2-D image data, based on the estimation of locally linearized deformations of brightness patterns. This class constitutes a common framework for describing several problems in computer vision (such as shape-from-texture, shape-from-disparity-gradients, and motion estimation) and for expressing different algorithms in terms of similar types of visual front-end operations. It is explained how surface orientation estimates are biased by the use of rotationally symmetric smoothing in the image domain. These effects can be reduced by extending the linear scale-space concept into an affine Gaussian scale-space representation and by performing affine shape adaptation of the smoothing kernels. This improves the accuracy of the surface orientation estimates, since the image descriptors on which the methods are based become relative invariants under affine transformations, and the error is thus confined to the higher-order terms in the locally linearized perspective transformation. A straightforward algorithm is presented for performing shape adaptation in practice. Experiments on real and synthetic images with known orientation demonstrate that, in the presence of moderately high noise levels, the accuracy is improved by typically one order of magnitude.
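
    The shape-adaptation algorithm itself is not quoted in the abstract; the following is a minimal sketch of the fixed-point iteration it refers to, ÎŁ ∝ Ό(·; ÎŁ)^(-1), where ÎŒ is the windowed second moment matrix computed with the anisotropic kernel. Function names and parameter values below are assumptions made for illustration, not the authors' code.

```python
import numpy as np
from scipy.signal import fftconvolve

def gauss_kernel(Sigma, radius=15):
    """Sampled 2-D Gaussian with covariance matrix Sigma."""
    ax = np.arange(-radius, radius + 1)
    X, Y = np.meshgrid(ax, ax)
    P = np.linalg.inv(Sigma)
    q = P[0, 0] * X**2 + 2 * P[0, 1] * X * Y + P[1, 1] * Y**2
    k = np.exp(-0.5 * q)
    return k / k.sum()

def second_moment(image, Sigma, x, y, window_scale=4.0):
    """Windowed second moment matrix mu(x, y; Sigma) at one pixel."""
    L = fftconvolve(image, gauss_kernel(Sigma), mode="same")
    Ly, Lx = np.gradient(L)                  # brightness gradient
    w = gauss_kernel(window_scale * Sigma)   # integration window
    mxx = fftconvolve(Lx * Lx, w, mode="same")[y, x]
    mxy = fftconvolve(Lx * Ly, w, mode="same")[y, x]
    myy = fftconvolve(Ly * Ly, w, mode="same")[y, x]
    return np.array([[mxx, mxy], [mxy, myy]])

def shape_adapt(image, x, y, t=4.0, n_iter=8):
    """Fixed-point iteration Sigma <- t * mu^{-1}, with mu normalized
    to unit determinant so only the kernel's shape (not its size)
    adapts. Assumes a textured neighbourhood (mu stays nonsingular)."""
    Sigma = t * np.eye(2)                    # start isotropic
    for _ in range(n_iter):
        mu = second_moment(image, Sigma, x, y)
        mu = mu / np.sqrt(np.linalg.det(mu))
        Sigma = t * np.linalg.inv(mu)
    return Sigma
```

    At the fixed point, ÎŒ computed through the adapted kernel is proportional to ÎŁ^(-1), which is exactly the condition under which the descriptor transforms as a relative invariant under affine distortions of the pattern.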

    A method to improve interest point detection and its GPU implementation

    Interest point detection is an important low-level image processing technique with a wide range of applications. Point detectors have to be robust under affine, scale, and photometric changes. There are many scale- and affine-invariant point detectors, but they are not robust to large illumination changes. Many affine-invariant interest point detectors and region descriptors work on points detected using scale-invariant operators. Since the performance of those detectors depends on the performance of the scale-invariant detectors, it is important that the scale-invariant initial-stage detectors be robust. It is therefore important to design a detector that is robust to illumination, because illumination changes are the most common. In this research, the illumination problem is the main focus, and a scale-invariant detector with good robustness to illumination changes has been developed. In [6] it was shown that a contrast-stretching technique considerably improves the performance of the Harris operator under illumination variations. In this research, the same contrast-stretching function has been incorporated into two different scale-invariant operators to make them illumination invariant. The performance of the algorithms is compared with the Harris-Laplace and Hessian-Laplace algorithms [15].
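
    The abstract does not reproduce the contrast-stretching function from [6], so the sketch below stands in a simple percentile-based linear stretch ahead of the classic Harris response; it illustrates the shape of the pipeline only, and every name and parameter in it is an assumption for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def contrast_stretch(image, lo=2, hi=98):
    """Map the [lo, hi] percentile intensity range onto [0, 1]
    (a stand-in for the stretching function of [6])."""
    a, b = np.percentile(image, [lo, hi])
    return np.clip((image - a) / max(b - a, 1e-9), 0.0, 1.0)

def harris_response(image, sigma_d=1.0, sigma_i=2.0, k=0.04):
    """Classic Harris cornerness: det(M) - k * trace(M)^2."""
    Ix = gaussian_filter(image, sigma_d, order=(0, 1))  # d/dx
    Iy = gaussian_filter(image, sigma_d, order=(1, 0))  # d/dy
    Sxx = gaussian_filter(Ix * Ix, sigma_i)
    Sxy = gaussian_filter(Ix * Iy, sigma_i)
    Syy = gaussian_filter(Iy * Iy, sigma_i)
    return Sxx * Syy - Sxy**2 - k * (Sxx + Syy) ** 2

# Usage: detect on the stretched image rather than the raw one, so a
# global illumination change largely cancels before differentiation.
# response = harris_response(contrast_stretch(raw_image))
```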

    Situational Awareness Using A Single Omnidirectional Camera

    To retrieve scene information using a single omnidirectional camera, we have based our work on a shape-from-texture method proposed by Lindeberg. To do so, we have adapted Lindeberg's method, which was developed for planar images, so that it can be used on the sphere S2. The mathematical tools we use are stereographic dilation, to implement the scale variations of the scale-space representation, and filter steerability on the sphere, to decrease the computational cost. The texture distortions due to the projection from the real world to the image contain the information that enables shape and orientation to be computed. A multi-scale texture descriptor, the windowed second moment matrix, which contains the distortion information, is computed and analyzed under some assumptions about the surface texture to retrieve the surface orientation. We have used synthetic signals to evaluate the performance of the adapted method. The results obtained for the distance and shape estimates are good when the textures on the surfaces match the assumptions, generally around the equatorial plane of S2, but as we move away from the equator, the precision of the estimates decreases significantly.
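
    The stereographic dilation mentioned above can be made concrete with a short sketch. The formula below is the standard dilation on the unit sphere obtained by projecting through the south pole onto the plane tangent at the north pole, scaling by a factor a, and projecting back; the function and variable names are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def stereographic_dilation(theta, phi, a):
    """Dilate a point on S2 by factor a via the stereographic plane.

    theta: colatitude measured from the north pole; phi: azimuth.
    Under stereographic projection the colatitude maps to planar
    radius r = 2 tan(theta / 2); a planar dilation r -> a * r leaves
    the azimuth unchanged, giving the closed form below.
    """
    theta_d = 2.0 * np.arctan(a * np.tan(theta / 2.0))
    return theta_d, phi
```

    A scale-space on the sphere is then built by dilating a mother smoothing kernel this way instead of rescaling it in the plane, which is why the approximation is best near the projection point and degrades toward the opposite pole, consistent with the equatorial behaviour reported above.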

    3D coarse-to-fine reconstruction from multiple image sequences.

    Ip Che Yin. Thesis (M.Phil.)--Chinese University of Hong Kong, 2004. Includes bibliographical references (leaves 119-127). Abstracts in English and Chinese.

    Contents:
    1 Introduction: 1.1 Motivation; 1.2 Previous Work (1.2.1 Reconstruction for Architecture Scene; 1.2.2 Super-resolution; 1.2.3 Coarse-to-Fine Approach); 1.3 Proposed Solution; 1.4 Contribution; 1.5 Publications; 1.6 Layout of the Thesis
    2 Background Techniques: 2.1 Interest Point Detectors (2.1.1 Scale-space; 2.1.2 Harris Corner Detectors; 2.1.3 Other Kinds of Interest Point Detectors; 2.1.4 Summary); 2.2 Steerable Filters (2.2.1 Orientation Estimation); 2.3 Point Descriptors (2.3.1 Image Derivatives under Illumination Change; 2.3.2 Image Derivatives under Geometric Scale Change; 2.3.3 An Example of a Point Descriptor; 2.3.4 Other Examples); 2.4 Feature Tracking Techniques (2.4.1 Kanade-Lucas-Tomasi (KLT) Tracker; 2.4.2 Guided Tracking Algorithm); 2.5 RANSAC; 2.6 Structure-from-Motion (SFM) Algorithm (2.6.1 Factorization Methods; 2.6.2 Epipolar Geometry; 2.6.3 Bundle Adjustment; 2.6.4 Summary)
    3 Hierarchical Registration of 3D Models: 3.1 Overview (3.1.1 The Arrangement of Image Sequences; 3.1.2 The Framework); 3.2 3D Model Reconstruction for Each Sequence; 3.3 Multi-scale Image Matching (3.3.1 Scale-space Interest Point Detection; 3.3.2 Point Descriptor; 3.3.3 Point-to-Point Matching; 3.3.4 Image Transformation Estimation; 3.3.5 Multi-level Image Matching); 3.4 Linkage Establishment; 3.5 3D Model Registration; 3.6 VRML Modelling
    4 Experiment: 4.1 Synthetic Experiments (4.1.1 Study on Rematching Algorithm; 4.1.2 Comparison between Affine and Metric Transformations for 3D Registration); 4.2 Real Scene Experiments
    5 Conclusion: 5.1 Future Work
    A Camera Parameters: A.1 Intrinsic Parameters; A.2 Extrinsic Parameters
    Bibliography
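
    Among the background techniques the thesis lists, RANSAC (Chapter 2.5) is compact enough to sketch. The generic loop below is an illustration, not the thesis's code; the fit and error callables and all parameter names are placeholders assumed for this sketch.

```python
import numpy as np

def ransac(data, fit, error, n_min, n_iter=1000, threshold=1.0, seed=None):
    """Return the model with the largest inlier set.

    data: array of samples, one per row
    fit: callable mapping a subset of rows to a model
    error: callable mapping (model, data) to per-row residuals
    n_min: minimal number of samples needed to fit a model
    """
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, np.zeros(len(data), dtype=bool)
    for _ in range(n_iter):
        subset = rng.choice(len(data), size=n_min, replace=False)
        model = fit(data[subset])                 # hypothesis from a minimal sample
        inliers = error(model, data) < threshold  # consensus set
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    if best_inliers.any():
        best_model = fit(data[best_inliers])      # refit on all inliers
    return best_model, best_inliers
```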