14 research outputs found

    Color image segmentation using a self-initializing EM algorithm

    This paper presents a new method based on the Expectation-Maximization (EM) algorithm, which we apply to color image segmentation. Since this algorithm partitions the data based on an initial set of mixtures, the color segmentation provided by the EM algorithm is highly dependent on the starting condition (initialization stage). The initialization procedure usually selects the color seeds randomly, which often forces the EM algorithm to converge to one of numerous local minima and produce inappropriate results. In this paper we propose a simple yet effective solution for initializing the EM algorithm with relevant color seeds. The resulting self-initializing EM algorithm has been included in an adaptive image segmentation scheme that has been applied to a large number of color images. The experimental data indicate that the refined initialization procedure leads to improved color segmentation.
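
    The paper's exact seeding procedure is not reproduced in the abstract, but a minimal sketch of the idea could seed scikit-learn's EM-based GaussianMixture with the dominant colors of a coarse 3D histogram instead of random picks. The function dominant_color_seeds, the bin count and all parameter values below are our own illustrative choices.

```python
# Illustrative only: seed EM with dominant colors instead of random picks.
# `dominant_color_seeds` and the use of scikit-learn's GaussianMixture are
# our assumptions, not the paper's actual procedure.
import numpy as np
from sklearn.mixture import GaussianMixture

def dominant_color_seeds(pixels, n_seeds, bins=16):
    """Pick initial means from the densest cells of a coarse 3D color histogram."""
    hist, edges = np.histogramdd(pixels, bins=bins, range=[(0, 256)] * 3)
    flat = np.argsort(hist, axis=None)[::-1][:n_seeds]        # densest cells first
    idx = np.stack(np.unravel_index(flat, hist.shape), axis=1)
    centers = [(e[:-1] + e[1:]) / 2 for e in edges]           # cell centers per axis
    return np.stack([c[i] for c, i in zip(centers, idx.T)], axis=1)

def segment(image_rgb, n_segments=5):
    pixels = image_rgb.reshape(-1, 3).astype(float)
    seeds = dominant_color_seeds(pixels, n_segments)
    gmm = GaussianMixture(n_components=n_segments, means_init=seeds)
    labels = gmm.fit_predict(pixels)                          # EM starts from the seeds
    return labels.reshape(image_rgb.shape[:2])
```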

    Tumor Size Processing using Smart Phone

    This paper presents tumor processing from MRI images using the computational features available on a mobile device (smart phone). The MRI images are pre-processed using dithering and median filtering and then transmitted to the mobile computing device. Dithering, which converts a gray-scale image to a black-and-white image while preserving a gray-scale visual rendition, reduces the size of the image being transmitted. The dithered images are filtered using a median filter to improve the PSNR. From these transmitted images the Region of Interest (ROI) is selected using the image measurement application built into the mobile device. The tumor size so computed is compared with that obtained from existing automated algorithms. DOI: 10.17762/ijritcc2321-8169.15027
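
    A minimal sketch of the described pre-processing, assuming Floyd-Steinberg dithering (Pillow's default for 1-bit conversion) and a 3x3 median filter; the function name and kernel size are our own choices, not taken from the paper.

```python
# Illustrative pipeline: dither a grayscale slice to 1 bit, then median-filter.
import numpy as np
from PIL import Image
from scipy.ndimage import median_filter

def preprocess_mri(path):
    gray = Image.open(path).convert('L')          # grayscale MRI slice
    dithered = gray.convert('1')                  # 1-bit image, Floyd-Steinberg dithered
    arr = np.asarray(dithered, dtype=np.uint8) * 255
    filtered = median_filter(arr, size=3)         # smooth dithering noise to raise PSNR
    return Image.fromarray(filtered)
```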

    Dithered Color Quantization

    Image quantization and digital halftoning are fundamental problems in computer graphics which arise when displaying high-color images on non-truecolor devices. Both steps are generally performed sequentially and, in most cases, independently of each other. Color quantization with a pixel-wise defined distortion measure and the dithering process with its local neighborhood optimize different quality criteria or, frequently, follow a heuristic without reference to any quality measure. In this paper we propose a new method to simultaneously quantize and dither color images. The method is based on a rigorous cost-function approach which optimizes a quality criterion derived from a generic model of human perception. A highly efficient optimization algorithm based on a multiscale method is developed for the dithered color quantization cost function. The quality criterion and the optimization algorithm are evaluated on a representative set of artificial and real-world images as well as on a collection of icons. A significant image quality improvement is observed compared to standard color reduction approaches.
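
    The paper's exact perceptual model is not given in the abstract, but a cost function in this spirit can be sketched by comparing low-pass filtered versions of the original and the quantized/dithered image, since the eye averages over neighborhoods. The Gaussian kernel standing in for the human visual system, and its sigma, are our assumptions.

```python
# Sketch of a perception-based quality criterion under the stated assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def perceptual_cost(original, quantized, sigma=1.5):
    """Mean squared error between HVS-filtered color images (lower is better)."""
    cost = 0.0
    for c in range(3):                            # filter each color channel separately
        o = gaussian_filter(original[..., c].astype(float), sigma)
        q = gaussian_filter(quantized[..., c].astype(float), sigma)
        cost += np.mean((o - q) ** 2)
    return cost / 3.0
```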

    Colour Texture analysis

    This chapter presents a novel and generic framework for image segmentation using a compound image descriptor that encompasses both colour and texture information in an adaptive fashion. The developed image segmentation method extracts texture information using low-level image descriptors such as Local Binary Patterns (LBP), and colour information using colour-space partitioning. The main advantage of this approach is that it analyses textured images at a micro-level using the local distribution of LBP values, and in the colour domain by analysing the local colour distribution obtained after colour segmentation. Using colour and texture information separately has proven inappropriate for natural images, as they are generally heterogeneous with respect to colour and texture characteristics. Thus, the main problem is to combine the colour and texture information in a joint descriptor that can adapt to the local properties of the image under analysis. We review existing approaches to colour and texture analysis and illustrate how our approach can be successfully applied to a range of applications including the segmentation of natural images, medical imaging and product inspection.
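
    A hedged sketch of such a compound colour-texture descriptor: a local LBP histogram concatenated with per-channel colour histograms. The parameter choices (P=8, R=1, uniform LBP, 8 colour bins) are illustrative and not claimed to match the chapter's configuration.

```python
# Joint colour-texture feature for one image patch, parameters illustrative.
import numpy as np
from skimage.feature import local_binary_pattern

def compound_descriptor(patch_rgb):
    gray = patch_rgb.mean(axis=2).astype(np.uint8)
    lbp = local_binary_pattern(gray, P=8, R=1, method='uniform')
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    color_hists = [np.histogram(patch_rgb[..., c], bins=8, range=(0, 256),
                                density=True)[0] for c in range(3)]
    return np.concatenate([lbp_hist] + color_hists)   # texture bins, then colour bins
```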

    Efficient, edge-aware, combined color quantization and dithering

    In this paper we present a novel algorithm that simultaneously accomplishes color quantization and dithering of images. This is achieved by minimizing a perception-based cost function which considers pixel-wise differences between filtered versions of the quantized image and the input image. We use edge-aware filters in defining the cost function to avoid mixing colors on opposite sides of an edge. The importance of each pixel is weighted according to its saliency. To rapidly minimize the cost function, we use a modified multi-scale iterated conditional modes (ICM) algorithm which updates one pixel at a time while keeping the other pixels unchanged. As ICM is a local method, careful initialization is required to prevent termination at a local minimum far from the global one. To address this problem, we initialize ICM with a palette generated by a modified median-cut method. Compared to previous approaches, our method produces high-quality results with fewer visual artifacts while requiring significantly less computational effort. Index Terms: color quantization, dithering, optimization-based image processing.
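
    A simplified sketch of a single ICM pixel update in the spirit of the method: every palette entry is tried for one pixel and the one that minimizes a local filtered cost is kept, with all other pixels held fixed. The 3x3 box average and the scalar weight below stand in for the paper's edge-aware filters and saliency map.

```python
# Greedy single-pixel ICM step; filter and weighting are simplified stand-ins.
import numpy as np

def icm_update_pixel(indices, palette, original, y, x, weight=1.0):
    """Greedily re-assign the palette index at (y, x); returns the chosen index."""
    h, w = indices.shape
    y0, y1 = max(0, y - 1), min(h, y + 2)
    x0, x1 = max(0, x - 1), min(w, x + 2)
    best_k, best_cost = indices[y, x], np.inf
    for k in range(len(palette)):
        indices[y, x] = k
        recon = palette[indices[y0:y1, x0:x1]]            # local reconstruction
        local = recon.mean(axis=(0, 1))                   # crude low-pass response
        target = original[y0:y1, x0:x1].mean(axis=(0, 1))
        cost = weight * np.sum((local - target) ** 2)
        if cost < best_cost:
            best_cost, best_k = cost, k
    indices[y, x] = best_k                                # commit the best label
    return best_k
```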

    A General scheme for dithering multidimensional signals, and a visual instance of encoding images with limited palettes

    The core contribution of this paper is to introduce a general, neat scheme based on soft vector clustering for the dithering of multidimensional signals, one that works in any space of arbitrary dimensionality, on an arbitrary number and distribution of quantization centroids, and with computable and controllable quantization noise. Dithering upon the digitization of one-dimensional and multidimensional signals disperses the quantization noise over the frequency domain, which renders it less perceptible by signal processing systems, including human cognitive ones, so it has a very beneficial impact on vital domains such as communications, control and machine learning. Our extensive surveys have concluded that the published literature is missing such a dithering scheme. It is desirable and insightful to visualize the behavior of our multidimensional dithering scheme, especially the dispersion of quantization noise over the frequency domain. In general, such visualization would be quite hard to achieve and perceive unless the target multidimensional signal is itself directly perceivable by humans. So we chose to apply our multidimensional dithering scheme to encoding true-color images (which are 3D signals) with palettes of limited sets of colors, to show how it minimizes the visual distortions, especially the contouring effect, in the encoded images.
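
    A small illustration of the general idea of dithering through soft vector clustering: each sample's centroid is drawn at random from a softmax over negative squared distances rather than snapped to the nearest one, dispersing quantization error as broadband noise. The temperature parameter is our stand-in for the paper's noise control, not its actual formulation.

```python
# Randomized quantization as a dithering mechanism, works in any dimension.
import numpy as np

def soft_dither(samples, centroids, temperature=10.0, rng=None):
    """Map (n, d) samples to randomly drawn centroid indices."""
    rng = rng or np.random.default_rng()
    d2 = ((samples[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    logits = -d2 / temperature
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                 # soft assignment probabilities
    return np.array([rng.choice(len(centroids), p=row) for row in p])
```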

    Lossless compression of images with specific characteristics

    Doctoral dissertation in Electrical Engineering. The compression of some types of images is a challenge for some standard compression techniques. This thesis investigates the lossless compression of images with specific characteristics, namely simple images, color-indexed images and microarray images. We are interested in the development of complete compression methods and in the study of preprocessing algorithms that can be used together with standard compression methods. Histogram sparseness, a property of simple images, is addressed in this thesis. We developed a preprocessing technique, denoted histogram packing, that exploits this property and can be used with standard compression methods to improve their efficiency significantly. Histogram packing and palette reordering algorithms can be used as a preprocessing step for improving the lossless compression of color-indexed images. This thesis presents several algorithms and a comprehensive study of the existing methods. Specific compression methods, such as binary tree decomposition, are also addressed. The use of microarray expression data in state-of-the-art biology is well established and, due to the significant volume of data generated per experiment, efficient lossless compression methods are needed. In this thesis, we explore the use of standard image coding techniques and present new algorithms to efficiently compress this type of image, based on finite-context modeling and arithmetic coding.
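
    Histogram packing itself is simple to sketch: the sparse set of intensity values that actually occur is remapped to consecutive integers before the image is handed to a standard lossless codec, and the value table is kept so the mapping can be inverted. The function names below are illustrative, not the thesis' own.

```python
# Histogram packing as a reversible preprocessing step for lossless coding.
import numpy as np

def pack_histogram(image):
    """Return the packed image and the table needed to unpack it."""
    values = np.unique(image)                     # the sparse histogram support
    lut = np.zeros(int(values.max()) + 1, dtype=image.dtype)
    lut[values] = np.arange(len(values), dtype=image.dtype)
    return lut[image], values

def unpack_histogram(packed, values):
    return values[packed]                         # exact inverse, hence lossless
```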

    A perceptual learning model to discover the hierarchical latent structure of image collections

    Biology has been an unparalleled source of inspiration for researchers in several scientific and engineering fields, including computer vision. The starting point of this thesis is the neurophysiological properties of the human early visual system, in particular the cortical mechanism that mediates learning by exploiting information about stimulus repetition. Repetition has long been considered a fundamental correlate of skill acquisition and memory formation in biological as well as computational learning models. However, recent studies have shown that biological neural networks have different ways of exploiting repetition in forming memory maps. The thesis focuses on a perceptual learning mechanism called repetition suppression, which exploits the temporal distribution of neural activations to drive an efficient neural allocation for a set of stimuli. It explores the neurophysiological hypothesis that repetition suppression serves as an unsupervised perceptual learning mechanism that can drive efficient memory formation by reducing the overall size of the stimuli representation while strengthening the responses of the most selective neurons. This interpretation of repetition differs from its traditional role in computational learning models, where it is mainly used to induce convergence and reach training stability, without using this information to provide focus for the neural representations of the data. The first part of the thesis introduces a novel computational model with repetition suppression, which forms an unsupervised competitive system termed CoRe, for Competitive Repetition-suppression learning. The model is applied to general problems in the fields of computational intelligence and machine learning. Particular emphasis is placed on validating the model as an effective tool for the unsupervised exploration of bio-medical data. In particular, it is shown that the repetition suppression mechanism efficiently addresses the issues of automatically estimating the number of clusters within the data, as well as filtering noise and irrelevant input components in high-dimensional data, e.g. gene expression levels from DNA microarrays. The CoRe model produces relevance estimates for each covariate, which is useful, for instance, to discover the best discriminating bio-markers. The description of the model includes a theoretical analysis using Huber's robust statistics to show that the model is robust to outliers and noise in the data. The convergence properties of the model are also studied. It is shown that, besides its biological underpinning, the CoRe model has useful properties in terms of asymptotic behavior. By exploiting a kernel-based formulation of the CoRe learning error, a theoretically sound motivation is provided for the model's ability to avoid local minima of its loss function. To do this, a necessary and sufficient condition for global error minimization in vector quantization is generalized by extending it to distance metrics in generic Hilbert spaces. This leads to the derivation of a family of kernel-based algorithms that address the local-minima issue of unsupervised vector quantization in a principled way. The experimental results show that the algorithm can achieve a consistent performance gain compared with state-of-the-art learning vector quantizers, while retaining a lower computational complexity (linear with respect to the dataset size).
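
    The CoRe update rule is not given in the abstract, so the following is only a speculative sketch of competitive learning with a repetition-suppression flavor: units that win repeatedly have their responses progressively suppressed, freeing capacity for novel stimuli. All names and constants are hypothetical, and this is our reading of the idea, not the CoRe model's actual rule.

```python
# Speculative: suppress habitual winners so novel inputs recruit fresh units.
import numpy as np

def core_like_fit(data, n_units=10, lr=0.05, decay=0.9, epochs=20, rng=None):
    rng = rng or np.random.default_rng(0)
    protos = data[rng.choice(len(data), n_units, replace=False)].copy()
    activity = np.zeros(n_units)                  # running win frequency per unit
    for _ in range(epochs):
        for x in rng.permutation(data):
            d = np.linalg.norm(protos - x, axis=1)
            win = int(np.argmax(-d - activity))   # closeness minus suppression
            protos[win] += lr * (x - protos[win]) # move winner toward the stimulus
            activity *= decay                     # suppression fades over time
            activity[win] += 1.0
    return protos
```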
    Bridging the gap between the low-level representation of visual content and the underlying high-level semantics is a major research issue of current interest. The second part of the thesis focuses on this problem by introducing a hierarchical and multi-resolution approach to visual content understanding. On the spatial level, CoRe learning is used to pool together local visual patches by organizing them into perceptually meaningful intermediate structures. On the semantic level, it provides an extension of the probabilistic Latent Semantic Analysis (pLSA) model that allows discovery and organization of the visual topics into a hierarchy of aspects. The proposed hierarchical pLSA model is shown to effectively address the unsupervised discovery of relevant visual classes from pictorial collections, at the same time learning to segment the image regions containing the discovered classes. Furthermore, by drawing on a recent pLSA-based image annotation system, the hierarchical pLSA model is extended to process and represent multi-modal collections comprising textual and visual data. The results of the experimental evaluation show that the proposed model learns to attach textual labels (available only at the level of the whole image) to the discovered image regions, while increasing the precision/recall performance with respect to the flat pLSA annotation model.
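
    The hierarchical extension is not reproduced here, but the flat pLSA model it builds on can be sketched with standard EM updates on a documents-by-visual-words count matrix; variable names follow the usual Hofmann formulation, and the hierarchy itself is omitted.

```python
# Flat pLSA via EM; the hierarchical aspect structure is not modeled here.
import numpy as np

def plsa(counts, n_topics, iters=50, rng=None):
    """counts: (n_docs, n_words) array; returns P(z|d) and P(w|z)."""
    rng = rng or np.random.default_rng(0)
    n_docs, n_words = counts.shape
    p_z_d = rng.dirichlet(np.ones(n_topics), size=n_docs)     # P(z|d)
    p_w_z = rng.dirichlet(np.ones(n_words), size=n_topics)    # P(w|z)
    for _ in range(iters):
        # E-step: responsibilities P(z|d,w), shape (n_docs, n_words, n_topics)
        joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
        post = joint / joint.sum(axis=2, keepdims=True).clip(1e-12)
        expected = counts[:, :, None] * post
        # M-step: re-estimate both conditionals from expected counts
        p_z_d = expected.sum(axis=1)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True)
        p_w_z = expected.sum(axis=0).T
        p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    return p_z_d, p_w_z
```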