
    Evaluation of entropy and JM-distance criterions as features selection methods using spectral and spatial features derived from LANDSAT images

    A study area near Ribeirao Preto in Sao Paulo state was selected, predominantly covered by sugar cane. Eight features were extracted from the four original bands of a LANDSAT image, using low-pass and high-pass filtering to obtain the spatial features. Five training sites were used to estimate the necessary parameters. Two groups of four channels were selected from the 12 channels using the JM-distance and entropy criteria. The number of selected channels was set by physical restrictions of the image analyzer and by computational cost. The evaluation was performed by computing confusion matrices for training and test areas with a maximum likelihood classifier, and by defining performance indexes based on those matrices for each group of channels. Results show that, for spatial features and supervised classification, the entropy criterion is better in that it allows a more accurate and general definition of class signatures. On the other hand, the JM-distance criterion strongly reduces misclassification within the training areas.
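    The JM-distance criterion used above is conventionally computed from the Bhattacharyya distance under a Gaussian model of each class. A minimal per-band sketch (not the authors' implementation; class means and variances would come from the training sites):

    ```python
    import math

    def bhattacharyya(m1, v1, m2, v2):
        """Bhattacharyya distance between two 1-D Gaussian class models
        with means m1, m2 and variances v1, v2."""
        vm = 0.5 * (v1 + v2)  # pooled variance
        return 0.125 * (m1 - m2) ** 2 / vm + 0.5 * math.log(vm / math.sqrt(v1 * v2))

    def jm_distance(m1, v1, m2, v2):
        """Jeffries-Matusita separability, saturating at 2 for well-separated classes."""
        return 2.0 * (1.0 - math.exp(-bhattacharyya(m1, v1, m2, v2)))
    ```

    The saturation at 2 is what makes JM attractive for channel selection: once two classes are essentially separable, further mean separation no longer dominates the criterion.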

    Optimisation for image processing

    The main purpose of optimisation in image processing is to compensate for missing or corrupted image data, or to find good correspondences between input images. We note that image data is essentially infinite-dimensional and needs to be discretised at a certain level of resolution. Most image processing methods find a suboptimal solution, given the characteristics of the problem. While the general optimisation literature is vast, there does not seem to be an accepted universal method for all image problems. In this thesis, we consider three interrelated optimisation approaches that exploit the problem structure of various relaxations of three common image processing problems. 1. The first approach, to the image registration problem, is based on a nonlinear programming model. Image registration is an ill-posed problem and suffers from many undesired local optima. In order to remove these unwanted solutions, certain regularisers or constraints are needed. In this thesis, prior knowledge of rigid structures in the images is included in the problem using linear and bilinear constraints. The aim is to match two images while maintaining the rigid structure of certain parts of the images. A sequential quadratic programming algorithm, employing dimensional reduction, is used to solve the resulting discretised constrained optimisation problem. We show that pre-processing the constraints can reduce the problem dimensionality. Experimental results demonstrate better performance of our proposed algorithm compared to current methods. 2. The second approach is based on discrete Markov random fields (MRFs). MRFs have been used successfully in machine learning, artificial intelligence and image processing, including for the image registration problem. In the discrete MRF model, the domain of the image problem is fixed (relaxed) to a certain range, so the optimal solution to the relaxed problem can be found within the predefined domain.
The original discrete MRF problem is NP-hard, and relaxations are needed to obtain a suboptimal solution in polynomial time. One popular approach is the linear programming (LP) relaxation. However, the LP relaxation of the MRF (LP-MRF) is excessively high dimensional and contains sophisticated constraints, so even one iteration of a standard LP solver (e.g. an interior-point algorithm) may take too long to terminate. A dual decomposition technique has been used to formulate a convex, non-differentiable dual LP-MRF that has geometrical advantages. This has led to the development of first-order methods that take the MRF structure into account. The methods considered in this thesis for solving the dual LP-MRF are the projected subgradient method and mirror descent using nonlinear weighted distance functions. An analysis of the convergence properties of the methods is provided, along with improved convergence rate estimates. Experiments on synthetic data and an image segmentation problem show promising results. 3. The third approach employs a hierarchy of problem models for computing the search directions. The first two approaches are specialised methods for image problems at a certain level of discretisation. As input images are infinite-dimensional, all computational methods require their discretisation at some level. Clearly, high-resolution images carry more information, but they lead to very large-scale and ill-posed optimisation problems. By contrast, although a low level of discretisation suffers from loss of information, it benefits from low computational cost. In addition, a coarser representation of a fine image problem can be treated as a relaxation of that problem, i.e. the coarse problem is less ill-conditioned. Therefore, propagating the solution of a good coarse approximation to the fine problem can potentially improve the fine-level solution.
With the aim of utilising low-level information within the high-level process, we propose a multilevel optimisation method to solve the convex composite optimisation problem, which consists of minimising the sum of a smooth convex function and a simple non-smooth convex function. The method iterates between fine and coarse levels of discretisation, in the sense that the search direction is computed using either the gradient or a solution of the coarse model. We show that the proposed algorithm is a contraction towards the optimal solution and demonstrate excellent performance in experiments on image restoration problems.
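    The convex composite problem described here (smooth term plus simple non-smooth term) is classically handled by forward-backward splitting, the single-level iteration that a multilevel scheme would accelerate with coarse-model search directions. A minimal sketch on the prototypical denoising instance 0.5*||x - y||^2 + lam*||x||_1, with illustrative step size and iteration count (the coarse-level correction from the thesis is omitted):

    ```python
    import numpy as np

    def soft_threshold(z, t):
        """Proximal operator of t * ||.||_1, the simple non-smooth part."""
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def proximal_gradient(y, lam, step=1.0, iters=50):
        """Minimise 0.5*||x - y||^2 + lam*||x||_1 by forward-backward splitting:
        a gradient step on the smooth term, then the prox of the non-smooth term."""
        x = np.zeros_like(y)
        for _ in range(iters):
            grad = x - y                              # gradient of the smooth term
            x = soft_threshold(x - step * grad, step * lam)
        return x
    ```

    For this particular smooth term the iteration reaches the closed-form solution soft_threshold(y, lam) immediately; for general smooth terms (e.g. deblurring) the same loop converges at the usual first-order rate, which is where coarse-level search directions can help.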

    Markov random field image modelling

    Includes bibliographical references. This work investigated some of the consequences of using a priori information in image processing, using computed tomography (CT) as an example. Prior information is information about the solution that is known apart from the measurement data, and it can be represented as a probability distribution. In order to define a probability distribution in high-dimensional problems such as those found in image processing, it becomes necessary to adopt some form of parametric model for the distribution. Markov random fields (MRFs) provide just such a vehicle for modelling the a priori distribution of labels found in images. In particular, this work investigated the suitability of MRF models for modelling a priori information about the distribution of attenuation coefficients found in CT scans.
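    A common concrete instance of such an MRF prior is the Potts model, whose energy counts disagreeing neighbouring labels; smoother label images get lower prior energy. A minimal 4-neighbourhood sketch (one standard choice of MRF prior, not necessarily the model used in this work):

    ```python
    import numpy as np

    def potts_energy(labels, beta):
        """Potts-model a priori energy of a 2-D label image: beta times the
        number of 4-neighbour pairs whose labels differ."""
        horiz = np.sum(labels[:, 1:] != labels[:, :-1])  # left-right pairs
        vert = np.sum(labels[1:, :] != labels[:-1, :])   # up-down pairs
        return beta * (horiz + vert)
    ```

    In a Bayesian reconstruction, exp(-potts_energy) plays the role of the prior probability, biasing the reconstructed attenuation map towards piecewise-constant regions.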

    Automatic lineament analysis techniques for remotely sensed imagery


    Multiresolution hierarchy co-clustering for semantic segmentation in sequences with small variations

    This paper presents a co-clustering technique that, given a collection of images and their hierarchies, clusters nodes from these hierarchies to obtain a coherent multiresolution representation of the image collection. We formalize the co-clustering as a Quadratic Semi-Assignment Problem and solve it with a linear programming relaxation approach that makes effective use of the information in the hierarchies. Initially, we address the problem of generating an optimal, coherent partition per image and, afterwards, we extend this method to a multiresolution framework. Finally, we particularize this framework to an iterative multiresolution video segmentation algorithm for sequences with small variations. We evaluate the algorithm on the Video Occlusion/Object Boundary Detection Dataset, showing that it produces state-of-the-art results in these scenarios. Comment: International Conference on Computer Vision (ICCV) 201
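    The semi-assignment polytope behind this formulation constrains each hierarchy node to pick exactly one cluster (rows sum to 1, entries non-negative). A minimal sketch of the linear-cost special case, where that LP relaxation is integral and a per-node argmin is already optimal (the paper's quadratic coupling terms, which the relaxation must linearise, are omitted):

    ```python
    import numpy as np

    def relaxed_semi_assignment(cost):
        """Solve min <cost, x> over the semi-assignment polytope
        {x >= 0, each row sums to 1}. With a linear cost the LP relaxation
        has an integral optimum: each node takes its cheapest cluster."""
        n, k = cost.shape
        x = np.zeros((n, k))
        x[np.arange(n), cost.argmin(axis=1)] = 1.0
        return x
    ```

    With the quadratic terms present, the relaxation introduces auxiliary variables for products of assignments, which is what makes effective use of the hierarchy structure important for tractability.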

    Adaptive processing of thin structures to augment segmentation of dual-channel structural MRI of the human brain

    This thesis presents a method for the segmentation of dual-channel structural magnetic resonance imaging (MRI) volumes of the human brain into four tissue classes. The state-of-the-art FSL FAST segmentation software (Zhang et al., 2001) is in widespread clinical use, and so it is considered a benchmark. A significant proportion of FAST’s errors has been shown to be localised to cortical sulci and blood vessels; this issue has driven the developments in this thesis, rather than any particular clinical demand. The original theme lies in preserving and even restoring these thin structures, poorly resolved in typical clinical MRI. Bright plate-shaped sulci and dark tubular vessels are best contrasted from the other tissues using the T2- and PD-weighted data, respectively. A contrasting tube detector algorithm (based on Frangi et al., 1998) was adapted to detect both structures, with smoothing (based on Westin and Knutsson, 2006) of an intermediate tensor representation to ensure smoothness and fuller coverage of the maps. The segmentation strategy required the MRI volumes to be upscaled to an artificial high resolution where a small partial volume label set would be valid and the segmentation process would be simplified. A resolution enhancement process (based on Salvado et al., 2006) was significantly modified to smooth homogeneous regions and sharpen their boundaries in dual-channel data. In addition, it was able to preserve the mapped thin structures’ intensities or restore them to pure tissue values. Finally, the segmentation phase employed a relaxation-based labelling optimisation process (based on Li et al., 1997) to improve accuracy, rather than more efficient greedy methods which are typically used. The thin structure location prior maps and the resolution-enhanced data also helped improve the labelling accuracy, particularly around sulci and vessels. 
Testing was performed on the aged LBC1936 clinical dataset and on younger brain volumes acquired at the SHEFC Brain Imaging Centre (Western General Hospital, Edinburgh, UK), as well as on the BrainWeb phantom. Overall, the proposed methods matched and often exceeded the segmentation accuracy of FAST, where the ground truth was produced by a radiologist using software designed for this project. The performance on pathological and atrophied brain volumes, and the differences from the original segmentation algorithm on which this work was based (van Leemput et al., 2003), were also examined. Suggestions for future development include a soft labelling consensus formation framework to mitigate rater bias in the ground truth, and contour-based models of the brain parenchyma to provide additional structural constraints.
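    The Frangi-style detector adapted above scores each voxel from the ordered eigenvalues of the local Hessian: tubes have one strong negative eigenvalue and one near zero, while blobs have two of comparable magnitude. A minimal 2-D per-pixel sketch for bright tubes on a dark background (eigenvalues ordered so |l1| <= |l2|; the beta and c parameters are illustrative, and the thesis's tensor smoothing and plate/tube dual response are omitted):

    ```python
    import math

    def vesselness_2d(l1, l2, beta=0.5, c=15.0):
        """Frangi-style tube response from ordered Hessian eigenvalues,
        |l1| <= |l2|. Blobs (|l1| ~ |l2|) and flat background (small
        structure strength) are suppressed."""
        if l2 >= 0:  # wrong contrast polarity -> no response
            return 0.0
        rb = abs(l1) / abs(l2)          # blob-vs-tube discriminator
        s = math.hypot(l1, l2)          # second-order structure strength
        return math.exp(-rb**2 / (2 * beta**2)) * (1 - math.exp(-s**2 / (2 * c**2)))
    ```

    Dark vessels on a bright background simply flip the polarity test to l2 <= 0 on the negated image, which is how one detector can serve both the sulci and vessel maps.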

    Automatic boundary extraction and rectification of bony tissue in CT images using artificial intelligence techniques

    A novel approach is presented for fully automated boundary extraction and rectification of bony tissue from planar CT data. The approach extracts and rectifies feature boundaries in a hierarchical fashion. It consists of a fuzzy multilevel thresholding operation, followed by a small-void cleanup procedure. A binary morphological boundary detector is then applied to extract the boundary. However, defective boundaries and undesirable artifacts may still be present, so two innovative anatomical-knowledge-based algorithms are used to remove the undesired structures and refine the erroneous boundary. Results of applying the approach to lumbar CT images are presented, with a discussion of its potential for clinical application.
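    The binary morphological boundary detector mentioned here is classically the mask minus its erosion, which keeps exactly the foreground pixels touching the background. A minimal 4-neighbourhood sketch (a standard formulation, not necessarily the paper's exact operator):

    ```python
    import numpy as np

    def binary_boundary(mask):
        """Internal morphological boundary of a boolean mask: foreground
        pixels with at least one 4-neighbour outside the mask
        (i.e. mask minus its erosion by a cross-shaped structuring element)."""
        padded = np.pad(mask, 1, constant_values=False)
        eroded = (padded[1:-1, 1:-1]
                  & padded[:-2, 1:-1] & padded[2:, 1:-1]    # up / down neighbours
                  & padded[1:-1, :-2] & padded[1:-1, 2:])   # left / right neighbours
        return mask & ~eroded
    ```

    Applied to the thresholded bone mask after void cleanup, this yields a one-pixel-wide closed contour that the subsequent knowledge-based refinement can operate on.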