Robot navigation control based on monocular images: An image processing algorithm for obstacle avoidance decisions
This paper covers the use of monocular vision to control autonomous navigation for a robot in a dynamically changing environment. The solution focuses on colour segmentation against a selected floor plane to distinctly separate obstacles from traversable space; this is then supplemented with Canny edge detection to separate boundaries whose colour is similar to the floor plane. The resulting binary map (where white identifies an obstacle-free area and black identifies an obstacle) can then be processed by fuzzy logic or neural networks to control the robot's next movements. Findings show that the algorithm performed strongly on solid-coloured carpets and on wooden and concrete floors, but had difficulty separating colours in multi-coloured floor types such as patterned carpets.
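A minimal sketch of this kind of free-space map, assuming an RGB image held in a NumPy array and a sampled floor colour; a plain gradient-magnitude threshold stands in for the paper's Canny stage, and the thresholds (`color_tol`, `edge_thresh`) are illustrative values, not the paper's:

```python
import numpy as np

def free_space_map(img, floor_color, color_tol=30.0, edge_thresh=40.0):
    """Binary traversability map: True = obstacle-free floor, False = obstacle."""
    # Colour segmentation: pixels close to the sampled floor colour are
    # candidate free space.
    dist = np.linalg.norm(img.astype(float) - np.asarray(floor_color, float),
                          axis=-1)
    floor_mask = dist < color_tol

    # Edge map on intensity: strong gradients mark boundaries even when an
    # obstacle's colour is similar to the floor (the paper uses Canny here;
    # a plain gradient-magnitude threshold keeps this sketch dependency-free).
    gray = img.astype(float).mean(axis=-1)
    gy, gx = np.gradient(gray)
    edges = np.hypot(gx, gy) > edge_thresh

    # Free space = floor-coloured AND not on a detected boundary.
    return floor_mask & ~edges

# Toy scene: uniform grey floor with a red box (obstacle) in the middle.
scene = np.full((32, 32, 3), 120, dtype=np.uint8)
scene[10:20, 10:20] = (200, 30, 30)
fmap = free_space_map(scene, floor_color=(120, 120, 120))
```

The resulting boolean map plays the role of the paper's binary image: downstream fuzzy logic or a neural controller would consume it to pick the next heading.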
Edge and Line Feature Extraction Based on Covariance Models
Image segmentation based on contour extraction usually involves three stages of image operations: feature extraction, edge detection and edge linking. This paper is devoted to the first stage: a method to design feature extractors used to detect edges in noisy and/or blurred images. The method relies on a model that describes the existence of image discontinuities (e.g. edges) in terms of covariance functions. The feature extractor transforms the input image into a "log-likelihood ratio" image. Such an image is a good starting point for the edge detection stage, since it represents a balanced trade-off between signal-to-noise ratio and the ability to resolve detailed structures. For 1-D signals, the performance of the edge detector based on this feature extractor is quantitatively assessed by the so-called "average risk measure". The results are compared with the performance of 1-D edge detectors known from the literature. Generalizations to 2-D operators are given. Applications to real-world images are presented, showing the capability of the covariance model to build edge and line feature extractors. Finally, it is shown that the covariance model can be coupled to an MRF model of edge configurations so as to arrive at a maximum a posteriori estimate of the edges or lines in the image.
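The paper's covariance-model extractor is not reproduced here, but the idea of a "log-likelihood ratio" image can be illustrated on a 1-D signal with a toy generalized-likelihood-ratio statistic. Assuming i.i.d. Gaussian noise (a simplification of the paper's covariance model), the LLR of "step edge at i" versus "constant level" reduces to a residual-sum-of-squares comparison over a local window; `half` and `sigma` are illustrative parameters:

```python
import numpy as np

def step_llr(signal, half=5, sigma=1.0):
    """Per-sample log-likelihood ratio of a step edge vs. a flat level.

    Under i.i.d. Gaussian noise the LLR is the drop in residual sum of
    squares between a two-level (step) fit and a one-level (flat) fit of
    the local window, scaled by 2 * sigma**2.
    """
    n = len(signal)
    llr = np.zeros(n)
    for i in range(half, n - half):
        left = signal[i - half:i]
        right = signal[i:i + half]
        win = signal[i - half:i + half]
        rss_flat = np.sum((win - win.mean()) ** 2)
        rss_step = (np.sum((left - left.mean()) ** 2)
                    + np.sum((right - right.mean()) ** 2))
        llr[i] = (rss_flat - rss_step) / (2.0 * sigma ** 2)
    return llr

# Noisy unit step at index 50; the LLR image should peak at the edge.
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
scores = step_llr(x, half=5, sigma=0.1)
```

Thresholding or non-maximum suppression on `scores` would then form the edge detection stage that follows the feature extraction stage.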
Detection of dirt impairments from archived film sequences: survey and evaluations
Film dirt is the most commonly encountered artifact in archive restoration applications. Since dirt usually appears as a temporally impulsive event, motion-compensated interframe processing is widely applied for its detection. However, motion-compensated prediction entails a high degree of complexity and can be unreliable when motion estimation fails. Consequently, many techniques using spatial or spatiotemporal filtering without motion compensation have also been proposed as alternatives. A comprehensive survey and evaluation of existing methods is presented, in which both qualitative and quantitative performances are compared in terms of accuracy, robustness, and complexity. After analyzing these algorithms and identifying their limitations, we conclude with guidance in choosing among these algorithms and promising directions for future research.
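A minimal sketch of the two-sided frame-difference idea (no motion compensation) that several of the surveyed detectors build on: dirt lives in a single frame, so a dirty pixel differs strongly, and with the same sign, from both its temporal neighbours. The threshold value is illustrative:

```python
import numpy as np

def dirt_mask(prev, curr, nxt, thresh=40):
    """Flag temporally impulsive pixels in `curr` as candidate film dirt."""
    # Differences to both temporal neighbours (ints avoid uint8 wrap-around).
    d1 = curr.astype(int) - prev.astype(int)
    d2 = curr.astype(int) - nxt.astype(int)
    # Dirt: both differences large AND of the same sign (brighter or darker
    # than BOTH neighbours), distinguishing it from ordinary motion.
    impulsive = (np.abs(d1) > thresh) & (np.abs(d2) > thresh)
    same_sign = np.sign(d1) == np.sign(d2)
    return impulsive & same_sign

# Static 16x16 scene with a bright dirt blotch in the middle frame only.
prev = np.full((16, 16), 100, dtype=np.uint8)
nxt = prev.copy()
curr = prev.copy()
curr[6:9, 6:9] = 250
mask = dirt_mask(prev, curr, nxt)
```

As the abstract notes, this purely temporal test breaks down under fast motion, which is exactly why the surveyed literature adds motion compensation or spatiotemporal filtering.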
Cluster, Classify, Regress: A General Method For Learning Discontinuous Functions
This paper presents a method for solving the supervised learning problem in which the output is highly nonlinear and discontinuous. It is proposed to solve this problem in three stages: (i) cluster the pairs of input-output data points, resulting in a label for each point; (ii) classify the data, where the corresponding label is the output; and finally (iii) perform one separate regression for each class, where the training data corresponds to the subset of the original input-output pairs that have that label according to the classifier. It has not previously been proposed to combine these three fundamental building blocks of machine learning in this simple and powerful fashion. This can be viewed as a form of deep learning, where any of the intermediate layers can itself be deep. The utility and robustness of the methodology is illustrated on some toy problems, including one example problem arising from simulation of plasma fusion in a tokamak.
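The three stages can be sketched end-to-end with NumPy alone on a toy 1-D function with a single jump. A hand-rolled 2-means, a nearest-centre classifier, and per-class linear fits stand in for whatever clustering, classification, and regression models one would actually plug in:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy discontinuous target: y = x for x < 0, y = x + 10 for x >= 0.
x = rng.uniform(-1.0, 1.0, 200)
y = x + 10.0 * (x >= 0)

# (i) Cluster the (input, output) pairs: a tiny hand-rolled 2-means,
#     initialized at the extreme outputs so each branch gets a centre.
pts = np.column_stack([x, y])
centers = pts[[int(np.argmin(y)), int(np.argmax(y))]].copy()
for _ in range(20):
    labels = np.argmin(((pts[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([pts[labels == k].mean(axis=0) for k in range(2)])

# (ii) Classify: predict the cluster label from the input alone
#      (nearest cluster centre in x; any real classifier would do).
def classify(xq):
    return np.argmin(np.abs(xq[:, None] - centers[None, :, 0]), axis=1)

# (iii) Regress: one linear fit per class, trained only on that class's pairs.
fits = [np.polyfit(x[labels == k], y[labels == k], 1) for k in range(2)]

def predict(xq):
    return np.array([np.polyval(fits[k], v) for k, v in zip(classify(xq), xq)])

pred = predict(np.array([-0.5, 0.5]))
```

A single global regression would smear the jump across the discontinuity; routing each query through its predicted class lets each regressor fit a smooth branch exactly.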
Joint Image Reconstruction and Segmentation Using the Potts Model
We propose a new algorithmic approach to the non-smooth and non-convex Potts problem (also called the piecewise-constant Mumford-Shah problem) for inverse imaging problems. We derive a suitable splitting into specific subproblems that can all be solved efficiently. Our method requires no a priori knowledge of the gray levels or of the number of segments of the reconstruction. Further, it avoids anisotropic artifacts such as geometric staircasing. We demonstrate the suitability of our method for joint image reconstruction and segmentation. We focus on Radon data, where we in particular consider limited-data situations. For instance, our method is able to recover all segments of the Shepp-Logan phantom from only a few angular views. We illustrate the practical applicability on a real PET dataset. As further applications, we consider spherical Radon data as well as blurred data.
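The paper's 2-D splitting is beyond a short sketch, but the 1-D Potts problem that such splittings decompose into can be solved exactly by a classic O(n²) dynamic program. The sketch below assumes the univariate functional gamma * (#jumps of u) + ||u - f||², with an illustrative gamma:

```python
import numpy as np

def potts_1d(f, gamma):
    """Exact minimizer of the 1-D Potts functional
    gamma * (#jumps of u) + ||u - f||^2, via the classic O(n^2) dynamic
    program (the 1-D analogue of piecewise-constant Mumford-Shah)."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    # Prefix sums give the best constant fit (the mean) of any interval in O(1).
    s = np.concatenate([[0.0], np.cumsum(f)])
    s2 = np.concatenate([[0.0], np.cumsum(f ** 2)])

    def seg_err(l, r):
        # Squared error of approximating f[l:r] by its mean.
        m = (s[r] - s[l]) / (r - l)
        return s2[r] - s2[l] - m * (s[r] - s[l])

    best = np.zeros(n + 1)       # best[r] = optimal energy for the prefix f[:r]
    last = np.zeros(n + 1, int)  # left endpoint of the final segment
    for r in range(1, n + 1):
        costs = [best[l] + (gamma if l > 0 else 0.0) + seg_err(l, r)
                 for l in range(r)]
        last[r] = int(np.argmin(costs))
        best[r] = costs[last[r]]

    # Backtrack the segmentation and fill each segment with its mean.
    u = np.empty(n)
    r = n
    while r > 0:
        l = last[r]
        u[l:r] = (s[r] - s[l]) / (r - l)
        r = l
    return u

# Two-level signal with a mild deterministic perturbation.
f = np.concatenate([np.full(20, 1.0), np.full(20, 3.0)])
f = f + 0.05 * np.sin(np.arange(40))
u = potts_1d(f, gamma=1.0)
```

Note that neither the gray levels nor the number of segments is supplied: both emerge from the minimization, mirroring the parameter-free property the abstract highlights.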
Wavelet Domain Image Separation
In this paper, we consider the problem of blind signal and image separation using a sparse representation of the images in the wavelet domain. We consider the problem in a Bayesian estimation framework, using the fact that the distribution of the wavelet coefficients of real-world images can naturally be modeled by an exponential power probability density function. The Bayesian approach, which has been used with success in blind source separation, also makes it possible to include any prior information we may have on the mixing matrix elements as well as on the hyperparameters (parameters of the prior laws of the noise and the sources). We consider two cases: first, the case where the wavelet coefficients are assumed to be i.i.d., and second, the case where we model the correlation between the coefficients of two adjacent scales by a first-order Markov chain. This paper only reports on the first case; results for the second case will be reported in the near future. The estimation computations are done via a Markov chain Monte Carlo (MCMC) procedure. Some simulations show the performance of the proposed method.
Keywords: blind source separation, wavelets, Bayesian estimation, MCMC, Metropolis-Hastings algorithm.
Presented at MaxEnt2002, the 22nd International Workshop on Bayesian and Maximum Entropy Methods (Aug. 3-9, 2002, Moscow, Idaho, USA). To appear in Proceedings of the American Institute of Physics.
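The paper's MCMC sampler is not reproduced here; the sketch below illustrates the same premise with the simplest member of the exponential power family, a Laplacian (p = 1) prior on one-level Haar detail coefficients, and a grid search over a rotation-parameterized mixing matrix in place of the Metropolis-Hastings algorithm. All signals, angles, and grid sizes are toy assumptions:

```python
import numpy as np

def haar_detail(x):
    """One level of Haar detail coefficients: scaled pairwise differences."""
    x = x[: len(x) // 2 * 2]
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

# Two piecewise-constant toy sources; their Haar details are sparse,
# standing in for the exponential power model of real images.
n = 512
s1 = np.roll(np.repeat([0.0, 2.0, -1.0, 1.0], n // 4), 1)
s2 = np.roll(np.repeat([1.0, -1.0, 0.0, 2.0, -2.0, 0.5, 0.0, 1.5], n // 8), 1)

# Instantaneous mixing by a rotation; blind separation must recover the angle.
theta_true = 0.6
A = np.array([[np.cos(theta_true), -np.sin(theta_true)],
              [np.sin(theta_true),  np.cos(theta_true)]])
X = A @ np.vstack([s1, s2])

# Laplacian prior on the Haar details of the unmixed signals: the negative
# log-posterior (flat prior on the angle) is the L1 norm of the coefficients.
def neg_log_posterior(th):
    W = np.array([[np.cos(th), np.sin(th)],
                  [-np.sin(th), np.cos(th)]])  # inverse of a rotation by th
    U = W @ X
    return np.abs(haar_detail(U[0])).sum() + np.abs(haar_detail(U[1])).sum()

# Grid search over the mixing angle in place of MCMC sampling.
angles = np.linspace(0.0, np.pi / 2, 901)
theta_hat = angles[np.argmin([neg_log_posterior(t) for t in angles])]
```

The L1 objective is smallest when the unmixed signals are sparsest in the wavelet domain, which happens exactly at the true mixing angle (up to the usual permutation/sign ambiguities of blind separation).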