258 research outputs found

    Color image segmentation using multispectral random field texture model & color content features

    Get PDF
    This paper describes a color texture-based image segmentation system. The color texture information is obtained via modeling with the Multispectral Simultaneous Auto Regressive (MSAR) random field model. The general color content, characterized by ratios of sample color means, is also used. The image is segmented into regions of uniform color texture using an unsupervised histogram clustering approach that utilizes the combination of MSAR and color features. The performance of the system is tested on two databases containing synthetic mosaics of natural textures and natural scenes, respectively.
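
    The paper's MSAR estimation and histogram clustering are not reproduced here, but the color-content half of such a pipeline is easy to illustrate. The sketch below is an assumption-laden stand-in: per-window color-mean ratios are used as features and k-means replaces the paper's histogram clustering; the window size and number of regions are arbitrary.

```python
# Hypothetical sketch of color-content features + unsupervised clustering.
# The MSAR texture features and the histogram-based clustering of the paper
# are replaced here by simple color-mean ratios and k-means, respectively.
import numpy as np
from sklearn.cluster import KMeans

def color_ratio_features(image, win=16):
    """Ratios of sample color means over non-overlapping windows.

    image: H x W x 3 float array (RGB).
    Returns an (n_windows, 2) feature matrix and the window grid shape.
    """
    h, w, _ = image.shape
    grid = (h // win, w // win)
    feats = []
    eps = 1e-6                                    # avoid division by zero
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = image[i * win:(i + 1) * win, j * win:(j + 1) * win]
            r, g, b = block.reshape(-1, 3).mean(axis=0)
            feats.append([r / (g + eps), r / (b + eps)])
    return np.asarray(feats), grid

def segment(image, n_regions=4, win=16):
    """Coarse label map: one cluster label per window."""
    feats, grid = color_ratio_features(image, win)
    labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(feats)
    return labels.reshape(grid)

if __name__ == "__main__":
    img = np.random.rand(256, 256, 3)             # placeholder for a real color image
    print(segment(img, n_regions=3))
```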

    A Kind of Affine Weighted Moment Invariants

    Full text link
    A new kind of geometric invariant, called the affine weighted moment invariant (AWMI), is proposed in this paper. By combining local affine differential invariants with a global integral framework, AWMIs extract image features more effectively, increase the number of available low-order invariants, and reduce the computational cost. Experimental results show that AWMIs have good stability and distinguishability and achieve better results in image retrieval than traditional moment invariants. An extension to 3D is straightforward.
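
    The AWMI construction itself depends on the local affine differential invariants defined in the paper, so it is not reproduced here. As a point of reference, the following sketch computes the first two classical (Hu) moment invariants, the kind of traditional moment invariant that AWMIs are compared against; the central-moment formulation is standard and not taken from this paper.

```python
# Sketch of classical (unweighted) moment invariants, the baseline that AWMIs
# are compared against in the abstract above.
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a 2-D grayscale image."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return ((x - xc) ** p * (y - yc) ** q * img).sum()

def hu_first_two(img):
    """First two Hu moment invariants (translation/scale/rotation invariant)."""
    mu00 = central_moment(img, 0, 0)
    eta = lambda p, q: central_moment(img, p, q) / mu00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

if __name__ == "__main__":
    print(hu_first_two(np.random.rand(64, 64)))
```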

    Segmentation and classification of leukocytes using neural networks: a generalization direction

    Get PDF
    In digital image processing, as in other fields, it is often difficult to build a system that is both general and specialized at the same time. The segmentation and classification of leukocytes is an application where this tension is evident. First, an exclusively supervised approach to the segmentation and classification of white blood cell images is presented. Because this method has drawbacks related to the specialization/generalization trade-off, a second process formed by two neural networks is proposed: one unsupervised network and one supervised network. The goal is to achieve a system that generalizes better while still performing well as a specialized system. We compare the performance of the two approaches.
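
    The abstract does not specify the network architectures, so the following is only a schematic sketch of the two-stage idea, with k-means standing in for the unsupervised network and a small MLP for the supervised one; feature extraction from the cell images is assumed to have been done already, and all names and parameters are illustrative.

```python
# Generic two-stage sketch: an unsupervised step groups samples, and a
# supervised classifier assigns leukocyte classes using the cluster label as
# an extra feature. K-means and an MLP are stand-ins, not the paper's networks.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def two_stage_fit(features, labels, n_clusters=5):
    """features: (n_samples, n_features); labels: known leukocyte classes."""
    clusterer = KMeans(n_clusters=n_clusters, n_init=10).fit(features)
    augmented = np.column_stack([features, clusterer.labels_])
    classifier = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
    classifier.fit(augmented, labels)
    return clusterer, classifier

def two_stage_predict(clusterer, classifier, features):
    augmented = np.column_stack([features, clusterer.predict(features)])
    return classifier.predict(augmented)
```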

    Predicting software project effort: A grey relational analysis based method

    Get PDF
    The inherent uncertainty of the software development process presents particular challenges for software effort prediction. We need to systematically address missing data values, outlier detection, feature subset selection and the continuous evolution of predictions as the project unfolds, all in the context of data starvation and noisy data. In this paper, however, we focus particularly on outlier detection, feature subset selection, and effort prediction at an early stage of a project. We propose a novel approach of using grey relational analysis (GRA) from grey system theory (GST), a recently developed system engineering theory based on the uncertainty of small samples. In this work we address some of the theoretical challenges in applying GRA to outlier detection, feature subset selection, and effort prediction, and then evaluate our approach on five publicly available industrial data sets using both stepwise regression and Analogy as benchmarks. The results are very encouraging in the sense of being comparable or better than other machine learning techniques and thus indicate that the method has considerable potential.
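
    The paper's exact GRA formulation is not given in the abstract, so the sketch below follows the textbook grey relational grade (distinguishing coefficient 0.5) and uses it for analogy-style effort prediction: the target project is compared against normalized historical projects and the mean effort of the most grey-related ones is returned. All names and parameter choices are illustrative assumptions.

```python
# Minimal grey relational analysis (GRA) sketch for analogy-style effort
# prediction: rank historical projects by grey relational grade w.r.t. the
# target project, then average the effort of the top-k most related projects.
import numpy as np

def grey_relational_grades(target, history, zeta=0.5):
    """target: (n_features,); history: (n_projects, n_features); both scaled to [0, 1]."""
    delta = np.abs(history - target)              # point-wise absolute differences
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + zeta * d_max) / (delta + zeta * d_max)
    return coeff.mean(axis=1)                     # one grade per historical project

def predict_effort(target, history, efforts, k=3):
    """Effort estimate = mean effort of the k most grey-related projects."""
    grades = grey_relational_grades(target, history)
    nearest = np.argsort(grades)[::-1][:k]
    return efforts[nearest].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hist = rng.random((20, 5))                    # 20 past projects, 5 normalized features
    eff = rng.random(20) * 1000                   # their known efforts
    print(predict_effort(rng.random(5), hist, eff))
```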

    Can k-NN imputation improve the performance of C4.5 with small software project data sets? A comparative evaluation

    Get PDF
    Missing data is a widespread problem that can affect the ability to use data to construct effective prediction systems. We investigate a common machine learning technique that can tolerate missing values, namely C4.5, to predict cost using six real-world software project databases. We analyze the predictive performance after using the k-NN missing data imputation technique to see whether it is better to tolerate missing data or to impute missing values and then apply the C4.5 algorithm. For the investigation, we simulated three missingness mechanisms, three missing data patterns, and five missing data percentages. We found that k-NN imputation can improve the prediction accuracy of C4.5, and that both C4.5 and k-NN are little affected by the missingness mechanism, but the missing data pattern and the missing data percentage have a strong negative impact upon prediction (or imputation) accuracy, particularly when the missing data percentage exceeds 40%.
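
    As a rough illustration of the impute-then-learn arm of this comparison, the sketch below uses scikit-learn's KNNImputer followed by a CART-style regression tree. The tree is only a stand-in for C4.5 (which tolerates missing values natively and is typically applied to a discretized cost attribute), and the parameter choices are assumptions.

```python
# Impute missing project attributes with k-NN, then fit a tree-based cost
# predictor on the completed data. DecisionTreeRegressor stands in for C4.5.
from sklearn.impute import KNNImputer
from sklearn.tree import DecisionTreeRegressor

def knn_impute_then_tree(X_train, y_train, X_test, k=5):
    """X_* may contain NaNs; y_train holds known project costs/efforts."""
    imputer = KNNImputer(n_neighbors=k)
    X_train_filled = imputer.fit_transform(X_train)
    X_test_filled = imputer.transform(X_test)
    tree = DecisionTreeRegressor(min_samples_leaf=3)
    tree.fit(X_train_filled, y_train)
    return tree.predict(X_test_filled)
```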

    Fast automated cell phenotype image classification

    Get PDF
    BACKGROUND: The genomic revolution has led to rapid growth in sequencing of genes and proteins, and attention is now turning to the function of the encoded proteins. In this respect, microscope imaging of a protein's sub-cellular localisation is proving invaluable, and recent advances in automated fluorescent microscopy allow protein localisations to be imaged in high throughput. Hence there is a need for large-scale automated computational techniques to efficiently quantify, distinguish and classify sub-cellular images. While image statistics have proved highly successful in distinguishing localisation, commonly used measures suffer from being relatively slow to compute, and often require cells to be individually selected from experimental images, thus limiting both throughput and the range of potential applications. Here we introduce threshold adjacency statistics, the essence of which is to threshold the image and to count the number of above-threshold pixels with a given number of above-threshold pixels adjacent. These novel measures are shown to distinguish and classify images of distinct sub-cellular localisation with high speed and accuracy without image cropping. RESULTS: Threshold adjacency statistics are applied to classification of protein sub-cellular localisation images. They are tested on two image sets (available for download), one in which fluorescently tagged proteins are endogenously expressed in 10 sub-cellular locations, and another in which proteins are transfected into 11 locations. For each image set, a support vector machine was trained and tested. Classification accuracies of 94.4% and 86.6% are obtained on the endogenous and transfected sets, respectively. Threshold adjacency statistics are found to provide comparable or higher accuracy than other commonly used statistics while being an order of magnitude faster to calculate. Further, threshold adjacency statistics in combination with Haralick measures give accuracies of 98.2% and 93.2% on the endogenous and transfected sets, respectively. CONCLUSION: Threshold adjacency statistics have the potential to greatly extend the scale and range of applications of image statistics in computational image analysis. They remove the need for cropping of individual cells from images, and are an order of magnitude faster to calculate than other commonly used statistics while providing comparable or better classification accuracy, both essential requirements for application to large-scale approaches.
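
    The description above is concrete enough to sketch directly: threshold the image, then for each above-threshold pixel count how many of its eight neighbours are also above threshold, and histogram the counts 0 through 8 into a nine-bin feature vector. The thresholding rule used below (the image mean) is an assumption; the published method defines its own threshold ranges.

```python
# Minimal sketch of threshold adjacency statistics as described in the abstract.
import numpy as np
from scipy.ndimage import convolve

def threshold_adjacency_stats(image, threshold=None):
    """image: 2-D grayscale array. Returns a normalised 9-bin feature vector."""
    if threshold is None:
        threshold = image.mean()                  # illustrative threshold choice
    binary = (image > threshold).astype(np.uint8)
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])                # 8-connected neighbourhood
    neighbour_counts = convolve(binary, kernel, mode="constant", cval=0)
    counts_at_white = neighbour_counts[binary == 1]
    hist = np.bincount(counts_at_white, minlength=9)[:9].astype(float)
    total = hist.sum()
    return hist / total if total else hist

if __name__ == "__main__":
    print(threshold_adjacency_stats(np.random.rand(64, 64)))
```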