    Prime component decomposition of images and its applications in an image understanding system

    A reliable and flexible model of a low-level processing stage is one of the most crucial requirements in the development of an image understanding system (IUS). In this thesis, a model for the low-level processing stage based on a new scheme of prime component decomposition is proposed. This model is then used to develop a knowledge-based image understanding system that is capable of solving many image processing problems without employing complex algorithms. A scheme for the prime component decomposition that utilizes maximum-size geometrical polygons is devised. It is shown that the optimal decomposition element in the continuous metric space has a circular shape. The decomposition operator is also optimized in the discrete metric space to deal with the actual implementation of the prime component decomposition operator, yielding square decomposition elements. The derived decomposition operator is used to extract shape elements of the objects contained in input scenes and to produce their intermediate object descriptions. In the proposed approach to shape extraction, the prime component decomposition technique is used to partition the object's interior, while a modified Sobel operator is used to detect the object's edges. The typical errors of a shape extraction process, such as noise sensitivity, description errors for diagonal objects, and description errors caused by a low sampling frequency, are reduced using a shape equalization approach based on Fourier descriptors and nonlinear interpolation. In the development of the image understanding system, a hierarchical approach to constructing the intermediate object representation is used to represent the knowledge within the system. The knowledge base of the IUS is developed as a relational multidimensional tree structure that dynamically changes the relational links among its elements. The dynamic process of creating and transforming the knowledge base is controlled by feedback from the low-level processing stage, which reduces the memory requirements of the IUS. Traditional data type definitions are extended to include base and derived data types. These extensions effectively represent and process the time-varying knowledge of the system and increase its overall efficiency. The high-level processing stage of the IUS is implemented on the basis of the blackboard architecture with a specialized control mechanism, agenda-based control. This control mechanism reduces the number of computational steps within the high-level processing stage by employing a selective focusing mechanism. The functional behaviour of the proposed prime component decomposition scheme and the model of the image understanding system is demonstrated with several application examples, including the isolation and identification of stationary and time-varying objects.
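    The abstract describes the key low-level combination: a prime component decomposition that partitions an object's interior into square elements, paired with edge detection. A minimal Python sketch of that idea follows; it is only an approximation of the thesis's method, using a plain Sobel magnitude (not the thesis's modified operator) and a greedy cover of a binary object mask by maximum-size squares found with a chessboard distance transform. The function names, threshold, and min_size parameter are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def sobel_edges(gray, threshold=0.2):
    """Binary edge map from the Sobel gradient magnitude
    (a plain Sobel operator, not the modified one from the thesis)."""
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    mag = np.hypot(gx, gy)
    return mag > threshold * mag.max()

def square_decomposition(mask, min_size=2):
    """Greedily cover a binary object mask with maximum-size square elements,
    a rough discrete analogue of the square decomposition elements above."""
    # Pad with background so squares cannot extend past the image border.
    remaining = np.pad(np.asarray(mask, dtype=bool), 1, constant_values=False)
    elements = []  # (row, col, half_size) of each square, in mask coordinates
    while True:
        # Chessboard (Chebyshev) distance to the background: a pixel at
        # distance d is the centre of a fully contained square of side 2d - 1.
        dist = ndimage.distance_transform_cdt(remaining, metric='chessboard')
        r, c = np.unravel_index(np.argmax(dist), dist.shape)
        if dist[r, c] < min_size:
            break
        h = int(dist[r, c]) - 1
        elements.append((r - 1, c - 1, h))
        remaining[r - h:r + h + 1, c - h:c + h + 1] = False  # mark as covered
    return elements
```

    In this sketch, thin residual structures that admit no square of the minimum size are left to the edge-based description, which roughly mirrors the interior/edge split described in the abstract.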

    Variational Domain Decomposition For Parallel Image Processing

    Many important techniques in image processing rely on partial differential equation (PDE) problems, which exhibit spatial couplings between the unknowns throughout the whole image plane. Therefore, a straightforward spatial splitting into independent subproblems, followed by parallel solving aimed at diminishing the total computation time, does not lead to the solution of the original problem; typically, significant errors occur at the local boundaries between the subproblems. For that reason, most PDE-based image processing algorithms are not directly amenable to coarse-grained parallel computing, but only to fine-grained parallelism, e.g. on the level of the particular arithmetic operations involved in the specific solving procedure. In contrast, Domain Decomposition (DD) methods provide several different approaches to decompose PDE problems spatially so that the merged local solutions converge to the original, global one. Such methods fall into two main classes, overlapping and non-overlapping methods, according to whether the adjacent subdomains on which the local problems are defined overlap. Furthermore, the classical DD methods, which have been studied intensively over the past thirty years, are primarily applied to linear PDE problems, whereas some of the currently important image processing approaches involve solving nonlinear problems, e.g. Total Variation (TV)-based approaches. Among the linear DD methods, non-overlapping methods are favored, since in general they require significantly fewer data exchanges between the particular processing nodes during the parallel computation and therefore achieve higher scalability. For that reason, the theoretical and empirical focus of this work lies primarily on non-overlapping methods, whereas for overlapping methods we confine ourselves to presenting the most important algorithms. For the linear non-overlapping DD methods, we first concentrate on the theoretical foundation, which serves as the basis for gradually deriving the different algorithms thereafter. Although we make a connection between the very early methods on two subdomains and the current two-level methods on arbitrary numbers of subdomains, the experimental studies focus on two prototypical methods applied to the model problem of estimating the optic flow, where different numerical aspects, such as the influence of the number of subdomains on the convergence rate, are explored. In particular, we present results of experiments conducted on a PC-cluster (a distributed-memory parallel computer based on low-cost PC hardware with up to 144 processing nodes) which show very good scalability of non-overlapping DD methods. With respect to nonlinear non-overlapping DD methods, we pursue two distinct approaches, both applied to nonlinear, PDE-based image denoising. The first approach draws upon the theory of optimal control and has been successfully employed for the domain decomposition of the Navier-Stokes equations. The second nonlinear DD approach relies on convex programming and on the decomposition of the corresponding minimization problems. Besides the main subject of parallelization by DD methods, we also investigate the linear model problem of motion estimation itself, by proposing and empirically studying a new variational approach for the estimation of turbulent flows in the area of fluid mechanics.
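    As a concrete, if much simplified, illustration of the domain decomposition idea discussed above, the sketch below implements the classical overlapping alternating Schwarz iteration for the 1D Poisson model problem -u'' = f with homogeneous Dirichlet boundary conditions. It is not one of the non-overlapping or optic-flow methods studied in the thesis; the grid size, overlap indices, and iteration count are illustrative assumptions.

```python
import numpy as np

def solve_dirichlet(f, h, left_bc, right_bc):
    """Solve -u'' = f on a uniform grid of interior nodes with given
    Dirichlet boundary values, using the standard 3-point stencil."""
    n = len(f)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    rhs = f.astype(float)
    rhs[0] += left_bc / h**2
    rhs[-1] += right_bc / h**2
    return np.linalg.solve(A, rhs)

def alternating_schwarz(f, h, i_a, i_b, iters=20):
    """Two-subdomain alternating Schwarz for -u'' = f, u(0) = u(1) = 0.

    Subdomain 1 covers interior nodes [0, i_b), subdomain 2 covers [i_a, n);
    with i_a < i_b the subdomains overlap on the nodes [i_a, i_b)."""
    u = np.zeros(len(f))
    for _ in range(iters):
        # Subdomain 1: use the current value of u at node i_b as its right BC.
        u[:i_b] = solve_dirichlet(f[:i_b], h, 0.0, u[i_b])
        # Subdomain 2: use the freshly updated value at node i_a - 1 as its left BC.
        u[i_a:] = solve_dirichlet(f[i_a:], h, u[i_a - 1], 0.0)
    return u

# Example: f = 1 on (0, 1); the exact solution is u(x) = x(1 - x)/2.
n, h = 99, 1.0 / 100
u = alternating_schwarz(np.ones(n), h, i_a=40, i_b=60)
```

    Each sweep solves the two subdomain problems in turn, using the latest value of the neighbouring subdomain as artificial Dirichlet boundary data; the iterates converge to the global solution at a rate governed by the size of the overlap, which also hints at why non-overlapping variants exchange less data.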

    Variational and Partial Differential Equation Models for Color Image Denoising and Their Numerical Approximations using Finite Element Methods

    Image processing is a traditional engineering field with a broad range of applications in science, engineering, and industry. Until recently, statistical and ad hoc methods were the main tools for studying and analyzing image processing problems. In the past decade, a new approach based on variational and partial differential equation (PDE) methods has emerged as a more powerful alternative. Compared with older approaches, variational and PDE methods have remarkable advantages in both theory and computation. They allow one to directly handle and process visually important geometric features such as gradients, tangents, and curvatures, and to model visually meaningful dynamic processes such as linear and nonlinear diffusions. Computationally, they can greatly benefit from the existing wealth of numerical methods for PDEs. Mathematically, a (digital) greyscale image is often described by a matrix: each entry of the matrix represents a pixel value of the image, and the size of the matrix indicates the resolution of the image. A (digital) color image is a digital image that includes color information for each pixel. For visually acceptable results, it is necessary (and almost sufficient) to provide three color channels for each pixel, which are interpreted as coordinates in some color space. The RGB (Red, Green, Blue) color space is commonly used in computer displays. Mathematically, an RGB color image is described by a stack of three matrices, so that each color pixel value of the RGB color image is represented by a three-dimensional vector consisting of values from the RGB channels. The brightness and chromaticity (or polar) decomposition of a color image writes this three-dimensional color vector as the product of its length, called the brightness, and its direction, defined as the chromaticity. As a result, the chromaticity must lie on the unit sphere S^2 in R^3. The primary objectives of this thesis are to present and to implement a class of variational and PDE models and methods for color image denoising based on the brightness and chromaticity decomposition. For a given noisy digital image, we propose to use the well-known Total Variation (TV) model to denoise its brightness and a generalized p-harmonic map model to denoise its chromaticity. We derive the Euler-Lagrange equations for these models and formulate the gradient descent method (in the form of gradient flows) for computing the solutions of these equations. We then formulate finite element schemes for approximating the gradient flows and implement these schemes using the Matlab® and Comsol Multiphysics® software packages. Finally, we propose some generalizations of the p-harmonic map model and numerically compare these models with the well-known channel-by-channel model.
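    To make the brightness/chromaticity split and the TV denoising of the brightness channel concrete, here is a minimal finite-difference sketch in Python. The thesis itself uses finite element discretizations implemented in Matlab® and Comsol Multiphysics®, so this explicit gradient-descent scheme, its smoothing parameter eps, step size tau, and fidelity weight lam are illustrative assumptions rather than the thesis's actual method; the chromaticity (p-harmonic map) part is omitted.

```python
import numpy as np

def brightness_chromaticity(rgb, eps=1e-8):
    """Split an H x W x 3 color image into brightness |u| (vector length)
    and chromaticity u/|u|, which lies on the unit sphere S^2."""
    rgb = np.asarray(rgb, dtype=float)
    b = np.sqrt(np.sum(rgb**2, axis=2))
    c = rgb / (b[..., None] + eps)
    return b, c

def tv_denoise_brightness(b_noisy, lam=0.1, tau=0.02, eps=0.1, iters=300):
    """Explicit gradient descent (gradient flow) for the smoothed TV model
        min_b  sum sqrt(|grad b|^2 + eps^2) + (lam / 2) * ||b - b_noisy||^2,
    discretized with forward differences; tau must stay small for stability."""
    b_noisy = np.asarray(b_noisy, dtype=float)
    b = b_noisy.copy()
    for _ in range(iters):
        # Forward differences with replicated (Neumann) boundary values.
        bx = np.diff(b, axis=1, append=b[:, -1:])
        by = np.diff(b, axis=0, append=b[-1:, :])
        mag = np.sqrt(bx**2 + by**2 + eps**2)
        px, py = bx / mag, by / mag
        # Divergence as the negative adjoint of the forward-difference gradient.
        div = np.diff(px, axis=1, prepend=0.0) + np.diff(py, axis=0, prepend=0.0)
        b += tau * (div - lam * (b - b_noisy))
    return b
```

    A denoised color image can then be reassembled as b_denoised[..., None] * c; denoising the chromaticity with a p-harmonic map flow would additionally require reprojecting onto the unit sphere after each step.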