    Edge and Line Feature Extraction Based on Covariance Models

    Image segmentation based on contour extraction usually involves three stages of image operations: feature extraction, edge detection, and edge linking. This paper is devoted to the first stage: a method to design feature extractors used to detect edges in noisy and/or blurred images. The method relies on a model that describes the existence of image discontinuities (e.g. edges) in terms of covariance functions. The feature extractor transforms the input image into a “log-likelihood ratio” image. Such an image is a good starting point for the edge detection stage since it represents a balanced trade-off between signal-to-noise ratio and the ability to resolve detailed structures. For 1-D signals, the performance of the edge detector based on this feature extractor is quantitatively assessed by the so-called “average risk measure”. The results are compared with the performances of 1-D edge detectors known from the literature. Generalizations to 2-D operators are given. Applications to real-world images are presented, showing the capability of the covariance model to build edge and line feature extractors. Finally, it is shown that the covariance model can be coupled to an MRF model of edge configurations so as to arrive at a maximum a posteriori estimate of the edges or lines in the image.
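    The log-likelihood ratio idea can be sketched for 1-D signal windows: two zero-mean Gaussian hypotheses, "edge absent" with covariance C0 and "edge present" with covariance C1, yield an LLR that is large where the data fit the edge model. The specific covariance choices below (white noise vs. noise plus a step-shaped component) are illustrative assumptions, not the paper's actual models.

```python
import numpy as np

def llr_feature(window, C0, C1):
    """Log-likelihood ratio of 'edge present' (cov C1) vs 'edge absent'
    (cov C0) for a zero-mean Gaussian signal window."""
    x = np.asarray(window, dtype=float)
    C0i, C1i = np.linalg.inv(C0), np.linalg.inv(C1)
    quad = 0.5 * (x @ C0i @ x - x @ C1i @ x)
    _, logdet0 = np.linalg.slogdet(C0)
    _, logdet1 = np.linalg.slogdet(C1)
    return quad + 0.5 * (logdet0 - logdet1)

n = 7
sigma_n = 0.1
C0 = sigma_n**2 * np.eye(n)            # hypothesis 0: noise only
t = np.arange(n) - n // 2
step = np.sign(t + 0.5)                # ideal step-edge profile
C1 = C0 + np.outer(step, step)         # hypothesis 1: noise + edge component

flat = np.zeros(n)
edge = step.copy()
print(llr_feature(edge, C0, C1) > llr_feature(flat, C0, C1))  # edge scores higher
```

Sliding this extractor along a signal produces the "log-likelihood ratio" image (here, a 1-D profile) that the edge detection stage thresholds.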

    Consistency checks for particle filters with application to image stabilization

    An ‘inconsistent’ particle filter produces – in a statistical sense – larger estimation errors than predicted by the model on which the filter is based. Inconsistent behavior of a particle filter can be detected online by checking whether the predicted measurements (derived from the particles that represent the one-step-ahead prediction pdf) comply, in a statistical sense, with the observed measurements. This principle is demonstrated in an image stabilization application. We consider an image sequence of a scene consisting of a dynamic foreground and a static background. The motion of the camera (slow rotations and zooming) is modeled with an 8-dimensional state vector describing a projective geometric transformation that, inversely applied to the current frame, compensates the camera motion. The dynamics of the state vector are modeled as a first-order AR process. The measurements of the system are corner points (detected in the first frame) that are tracked. The particle filter estimates the state vector from these measurements. However, the filter behaves inconsistently because a few corner points belong to the foreground. Using inconsistency checks, these foreground points are detected and removed from the list of measurements.
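    One simple way to realize such a consistency check, sketched under assumed details: approximate the particle-predicted measurement cloud by a Gaussian and gate each observed corner point on its normalized innovation squared against a chi-square threshold. The gate value and the Gaussian approximation are illustrative choices, not necessarily the paper's.

```python
import numpy as np

def is_consistent(pred_measurements, observed, gate=9.21):
    """pred_measurements: (N, d) measurements predicted from the N particles
    of the one-step-ahead prediction pdf. Returns False if `observed` is
    statistically implausible under that predicted distribution."""
    mu = pred_measurements.mean(axis=0)
    cov = np.cov(pred_measurements, rowvar=False) + 1e-9 * np.eye(len(mu))
    innov = observed - mu
    nis = innov @ np.linalg.inv(cov) @ innov   # normalized innovation squared
    return nis < gate                          # chi-square gate (2 dof, ~99%)

rng = np.random.default_rng(0)
pred = rng.normal([10.0, 5.0], 0.5, size=(1000, 2))  # predicted corner position
print(is_consistent(pred, np.array([10.1, 5.2])))    # background point: True
print(is_consistent(pred, np.array([14.0, 9.0])))    # foreground point: False
```

Corner points that repeatedly fail the gate are treated as foreground and dropped from the measurement list, as the abstract describes.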

    Better features to track by estimating the tracking convergence region

    Reliably tracking key points and textured patches from frame to frame is a basic requirement for many bottom-up computer vision algorithms. The problem of selecting the features that can be tracked well is addressed here. The Lucas-Kanade tracking procedure is commonly used. We propose a method to estimate the size of the tracking procedure's convergence region for each feature. Features with a wider convergence region around them should be tracked better by the tracker. The size of the convergence region, as a new feature goodness measure, is compared with the widely accepted Shi-Tomasi feature selection criterion.
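    For context, the baseline the paper compares against can be sketched: the Shi-Tomasi score is the smaller eigenvalue of the local gradient structure tensor, favoring corner-like patches over edges and flat regions. The window size here is an arbitrary assumption; the paper's convergence-region measure would replace this score, not reproduce it.

```python
import numpy as np

def shi_tomasi_score(img, x, y, half=3):
    """Smaller eigenvalue of the gradient structure tensor in a local window."""
    patch = img[y - half - 1 : y + half + 2, x - half - 1 : x + half + 2].astype(float)
    gy, gx = np.gradient(patch)
    gx, gy = gx[1:-1, 1:-1], gy[1:-1, 1:-1]       # trim unreliable border gradients
    G = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    return np.linalg.eigvalsh(G)[0]                # minimum eigenvalue

img = np.zeros((32, 32))
img[12:, 12:] = 1.0                                # a corner at (12, 12)
corner = shi_tomasi_score(img, 12, 12)
edge = shi_tomasi_score(img, 20, 12)               # on the horizontal edge
flat = shi_tomasi_score(img, 5, 5)                 # uniform region
print(corner > edge >= flat)                       # corner scores highest
```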

    A stabilized adaptive appearance changes model for 3D head tracking

    A simple method is presented for 3D head pose estimation and tracking in monocular image sequences. A generic geometric model is used. The initialization consists of aligning the perspective projection of the geometric model with the subject's head in the initial image. After the initialization, the gray levels from the initial image are mapped onto the visible side of the head model to form a textured object. Only a limited number of points on the object is used, allowing real-time performance even on low-end computers. The appearance changes caused by movement in the complex lighting conditions of a real scene present a major problem for fitting the textured model to the data from new images. With real human-computer interfaces in mind, we propose a simple adaptive appearance changes model that is updated by the measurements from the new images. To stabilize the model, we constrain it to a neighborhood of the initial gray values. The neighborhood is defined using a simple heuristic.
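    The stabilization idea can be sketched as follows: blend each new observation into the template of gray values, but clip the result to a fixed band around the initial values so the model cannot drift arbitrarily far. The learning rate and band width below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def update_template(template, observation, initial, alpha=0.1, band=0.2):
    """Blend the new observation into the appearance template, then
    constrain the result to [initial - band, initial + band]."""
    updated = (1.0 - alpha) * template + alpha * observation
    return np.clip(updated, initial - band, initial + band)

initial = np.full(5, 0.5)          # gray values sampled at the model points
template = initial.copy()
drifting = np.full(5, 5.0)         # e.g. a strong highlight in new frames
for _ in range(100):
    template = update_template(template, drifting, initial)
print(template)                    # saturates at 0.7, not 5.0: drift is bounded
```

Without the clipping step, repeated updates would converge to the outlier value 5.0; the constraint keeps the model anchored to the initial appearance.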

    Local Stereo Matching Using Adaptive Local Segmentation

    We propose a new dense local stereo matching framework for gray-level images based on an adaptive local segmentation using a dynamic threshold. We define a new validity domain of the fronto-parallel assumption based on the local intensity variations in the 4-neighborhood of the matching pixel. The preprocessing step smoothes low-textured areas and sharpens texture edges, whereas the postprocessing step detects and recovers occluded and unreliable disparities. The algorithm achieves high stereo reconstruction quality in regions with uniform intensities as well as in textured regions. The algorithm is robust against local radiometric differences and successfully recovers disparities around object edges, disparities of thin objects, and disparities of occluded regions. Moreover, our algorithm intrinsically prevents errors caused by occlusion from propagating into non-occluded regions. It has only a small number of parameters. The performance of our algorithm is evaluated on the Middlebury test bed stereo images. It ranks highly on the evaluation list, outperforming many local and global stereo algorithms that use color images. Among the local algorithms relying on the fronto-parallel assumption, ours is the best-ranked algorithm. We also demonstrate that our algorithm works well on practical examples, such as disparity estimation for a tomato seedling and 3D reconstruction of a face.
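    One ingredient of such a framework can be sketched: an adaptive segmentation of the support window, where a pixel joins the matching support only if its intensity is within a dynamic threshold of the center pixel, the threshold scaling with the intensity spread in the 4-neighborhood. The exact threshold rule below is an illustrative assumption, not the paper's formula.

```python
import numpy as np

def support_mask(img, y, x, half=3):
    """Boolean mask of window pixels likely on the same surface as (y, x)."""
    win = img[y - half : y + half + 1, x - half : x + half + 1].astype(float)
    center = float(img[y, x])
    # Dynamic threshold: scale with the intensity spread in the
    # 4-neighborhood of the matching pixel (with a small floor).
    neigh = np.array([img[y-1, x], img[y+1, x], img[y, x-1], img[y, x+1]], float)
    tau = max(np.ptp(neigh), 2.0)
    return np.abs(win - center) <= tau

img = np.zeros((20, 20), dtype=np.uint8)
img[:, 10:] = 200                        # vertical intensity edge at column 10
mask = support_mask(img, 10, 8)          # center pixel on the dark side
print(mask[:, :2].all(), mask[:, -1].any())  # dark side in, bright side out
```

Restricting the matching cost to this mask is what keeps the fronto-parallel assumption valid near depth discontinuities: pixels across the intensity edge do not contaminate the aggregation.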

    Time-of-flight estimation based on covariance models

    We address the problem of estimating the time-of-flight (ToF) of a waveform that is heavily disturbed by additional reflections from nearby objects. These additional reflections cause interference patterns that are difficult to predict. The introduction of a model for the reflections in terms of a non-stationary auto-covariance function leads to a new estimator for the ToF of an acoustic tone burst. This estimator is a generalization of the well-known matched filter. In many practical circumstances, for instance beacon-based position estimation in indoor situations, lack of knowledge of the additional reflections can lead to large estimation errors. Experiments show that the application of the new estimator can reduce these errors by a factor of about four. The cost of this improvement is an increase in computational complexity by a factor of about seven.
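    The baseline that the covariance-model estimator generalizes can be sketched: a matched filter cross-correlates the received signal with the known tone-burst template and takes the lag of the correlation peak as the ToF. The sample rate, burst shape, and noise level below are illustrative assumptions.

```python
import numpy as np

fs = 100_000.0                                # sample rate [Hz], assumed
t = np.arange(200) / fs
burst = np.sin(2 * np.pi * 10_000 * t) * np.hanning(len(t))  # tone burst

true_delay = 350                              # delay in samples
received = np.zeros(2000)
received[true_delay : true_delay + len(burst)] += burst
rng = np.random.default_rng(1)
received += 0.05 * rng.standard_normal(received.size)        # sensor noise

corr = np.correlate(received, burst, mode="valid")           # matched filter
tof_samples = int(np.argmax(corr))
print(tof_samples, tof_samples / fs)          # close to 350 samples (3.5 ms)
```

The new estimator replaces the fixed template correlation with a quadratic form built from the non-stationary auto-covariance of the reflections, which is where the roughly sevenfold increase in computation arises.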