4 research outputs found

    Image Segmentation by Energy and Related Functional Minimization Methods

    Get PDF
    Effective and efficient methods for partitioning a digital image into image segments, called "image segmentation," have a wide range of applications that include pattern recognition, classification, editing, rendering, and compressed representations for image search. In general, image segments are described by their geometry and by similarity measures that identify them. For example, the well-known optimization model proposed and studied in depth by David Mumford and Jayant Shah is based on an L2 total energy functional consisting of three terms that govern the geometry of the image segments, the image fidelity (closeness to the observed image), and the prior (image smoothness). Recent work in the field of image restoration suggests that a more suitable choice for the fidelity measure is, perhaps, the l1 norm. This thesis explores that idea in the study of image segmentation along the lines of the Mumford and Shah optimization model, while eliminating the need for variational calculus and regularization schemes to derive the approximating Euler-Lagrange equations. The main contribution of this thesis is a formulation of the problem that avoids the calculus of variations. The energy functional represents a global property of an image. It turns out to be possible, however, to predict how localized changes to the segmentation will affect its value. This has been shown previously for the l2 norm, but no similar method was available for other norms. The method described here solves the problem for the l1 norm and suggests how it would apply to other forms of the fidelity measure. Existing methods also rely on a fixed initial condition, which can lead an algorithm to find local rather than global optima. The solution given here shows how to specify the initial condition based on the content of the image, avoiding such local minima.
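    The three terms the abstract refers to can be written out in the standard form of the Mumford-Shah functional (the symbols g for the observed image, u for its piecewise-smooth approximation, and Γ for the segment boundaries are the usual conventions, not notation taken from the thesis itself):

    ```latex
    % Classical Mumford--Shah energy: g is the observed image on domain \Omega,
    % u a piecewise-smooth approximation, \Gamma the set of segment boundaries.
    E(u,\Gamma) = \nu \,\mathrm{length}(\Gamma)                        % geometry of the segments
                + \lambda \int_{\Omega} (u - g)^2 \, dx                % L^2 fidelity to the observed image
                + \mu \int_{\Omega \setminus \Gamma} |\nabla u|^2 \, dx % smoothness prior off the boundaries

    % The l^1 variant the thesis explores replaces the fidelity term with
    \lambda \int_{\Omega} |u - g| \, dx
    ```

    Minimizing over both u and Γ couples the three terms, which is why localized changes to Γ have a global effect on E, as the abstract notes.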

    Spell checkers and correctors : a unified treatment

    Get PDF
    The aim of this dissertation is to provide a unified treatment of various spell checkers and correctors. Firstly, the spell checking and correcting problems are formally described mathematically in order to provide a better understanding of these tasks. An approach similar to the way in which denotational semantics is used to describe programming languages is adopted. Secondly, the various attributes of existing spell checking and correcting techniques are discussed. Extensive studies of selected spell checking/correcting algorithms and packages are then performed. Lastly, an empirical investigation of various spell checking/correcting packages is presented. It provides a comparison and suggests a classification of these packages in terms of their functionalities, implementation strategies, and performance. The investigation was conducted on packages for spell checking and correcting in English as well as in Northern Sotho and Chinese. The classification provides a unified presentation of the strengths and weaknesses of the techniques studied in the research. The findings provide a better understanding of these techniques, which can assist in improving existing spell checking/correcting applications and guide future package designs and implementations.
    Dissertation (MSc)--University of Pretoria, 2009.
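    One representative correction technique that surveys of this kind typically cover is edit-distance ranking: score each lexicon entry by the Levenshtein distance to the misspelt word and return the closest candidates. A minimal sketch (illustrative only; the function names and the `max_dist` cutoff are my own choices, not taken from the dissertation):

    ```python
    def levenshtein(a: str, b: str) -> int:
        """Minimum number of single-character insertions, deletions,
        and substitutions needed to turn a into b (dynamic programming,
        keeping only one previous row of the DP table)."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(
                    prev[j] + 1,               # delete ca from a
                    curr[j - 1] + 1,           # insert cb into a
                    prev[j - 1] + (ca != cb),  # substitute (free on a match)
                ))
            prev = curr
        return prev[-1]

    def suggest(word: str, lexicon: list[str], max_dist: int = 2) -> list[str]:
        """Rank lexicon entries by edit distance to the misspelt word,
        keeping only those within max_dist edits."""
        scored = [(levenshtein(word, w), w) for w in lexicon]
        return [w for d, w in sorted(scored) if d <= max_dist]
    ```

    For example, `suggest("speling", ["speaking", "spelling", "spell"])` ranks "spelling" (one insertion) ahead of "speaking" (two edits) and drops "spell" (three edits). Real packages add language models or phonetic keys on top of this kind of distance measure.
    
    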

    A Note on Median Split Trees

    No full text