    Accurate detection of dysmorphic nuclei using dynamic programming and supervised classification

    A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows.
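
    The abstract gives no implementation details, but the contour-refinement step can be illustrated with a generic dynamic-programming minimum-cost path search over a polar cost map (rows are candidate radii, columns are angles around the initial contour). The sketch below shows that general technique only; the cost map, the `max_jump` smoothness constraint, and the function name are assumptions, not BleND's actual code.

    ```python
    import numpy as np

    def refine_contour_dp(cost, max_jump=2):
        """Pick a minimum-cost radius for every angle in a polar cost map.

        cost     : 2D array of shape (n_radii, n_angles); low values mark likely boundary pixels
        max_jump : largest allowed change in radius index between adjacent angles
        Returns the optimal radius index per angle, i.e. the refined contour.
        """
        n_r, n_a = cost.shape
        acc = np.full((n_r, n_a), np.inf)        # accumulated path cost
        back = np.zeros((n_r, n_a), dtype=int)   # backpointers for path recovery
        acc[:, 0] = cost[:, 0]

        for a in range(1, n_a):
            for r in range(n_r):
                lo, hi = max(0, r - max_jump), min(n_r, r + max_jump + 1)
                prev = acc[lo:hi, a - 1]
                k = int(np.argmin(prev))
                acc[r, a] = cost[r, a] + prev[k]
                back[r, a] = lo + k

        # Trace the cheapest path back from the last angle
        path = np.empty(n_a, dtype=int)
        path[-1] = int(np.argmin(acc[:, -1]))
        for a in range(n_a - 1, 0, -1):
            path[a - 1] = back[path[a], a]
        return path
    ```

    A closed contour would additionally require the first and last angles to meet, a constraint this sketch omits for brevity.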

    An effective feature extraction method for rice leaf disease classification

    Our society is becoming increasingly dependent on technology, yet agriculture remains imperative for our survival. Rice is one of the primary food grains. It provides sustenance to almost fifty percent of the world's population and supports a large share of employment. Hence, proper mitigation of rice plant diseases is of paramount importance. A model to detect three rice leaf diseases, namely bacterial leaf blight, brown spot, and leaf smut, is proposed in this paper. Image backgrounds are removed with a saturation threshold, while disease-affected areas are segmented using a hue threshold. Distinctive features from the color, shape, and texture domains are extracted from the affected areas. These features can robustly describe local and global statistics of such images. After evaluating several classification algorithms, an extreme gradient boosting decision tree ensemble is incorporated into the model for its superior performance. Our model achieves 86.58% accuracy on the rice leaf disease dataset from UCI, which is higher than previous work on the same dataset. Class-wise accuracy of the model is also consistent across the classes.
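
    As a rough illustration of the pipeline described above (saturation threshold for background removal, hue threshold for the affected areas, then a gradient-boosted tree classifier), the sketch below uses OpenCV and XGBoost. The threshold values and the simple color statistics are placeholders, not the paper's actual parameters or feature set.

    ```python
    import cv2
    import numpy as np
    from xgboost import XGBClassifier

    def lesion_features(bgr_image, sat_thresh=40, hue_range=(10, 40)):
        """Segment disease-affected leaf regions and return simple color statistics."""
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        h, s, _ = cv2.split(hsv)
        leaf = s > sat_thresh                                     # saturation threshold drops the background
        lesion = leaf & (h > hue_range[0]) & (h < hue_range[1])   # hue threshold keeps affected areas
        if not lesion.any():
            return np.zeros(6)
        pix = bgr_image[lesion].astype(float)
        # Per-channel mean and standard deviation of the affected area
        return np.concatenate([pix.mean(axis=0), pix.std(axis=0)])

    # X: stacked feature vectors, y: labels for bacterial leaf blight / brown spot / leaf smut
    # clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
    # clf.fit(X, y)
    ```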

    Interpreting Deep Visual Representations via Network Dissection

    The success of recent deep convolutional neural networks (CNNs) depends on learning hidden representations that can summarize the important factors of variation behind the data. However, CNNs are often criticized as black boxes that lack interpretability, since they have millions of unexplained model parameters. In this work, we describe Network Dissection, a method that interprets networks by providing labels for the units of their deep visual representations. The proposed method quantifies the interpretability of CNN representations by evaluating the alignment between individual hidden units and a set of visual semantic concepts. By identifying the best alignments, units are given human-interpretable labels across a range of objects, parts, scenes, textures, materials, and colors. The method reveals that deep representations are more transparent and interpretable than expected: we find that representations are significantly more interpretable than they would be under a random equivalently powerful basis. We apply the method to interpret and compare the latent representations of various network architectures trained to solve different supervised and self-supervised training tasks. We then examine factors affecting network interpretability such as the number of training iterations, regularization, initialization, and network depth and width. Finally, we show that the interpreted units can be used to provide explicit explanations of a prediction given by a CNN for an image. Our results highlight that interpretability is an important property of deep neural networks that provides new insights into their hierarchical structure.
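
    The core of Network Dissection is scoring how well a unit's thresholded activation map aligns with pixel-level concept annotations, typically with an intersection-over-union measure. Below is a minimal sketch of that scoring step; the quantile-based threshold and the data layout are illustrative assumptions, not the authors' exact procedure.

    ```python
    import numpy as np

    def unit_concept_iou(activations, concept_masks, quantile=0.995):
        """Score one unit's alignment with each labeled concept.

        activations   : (n_images, H, W) upsampled activation maps of a single unit
        concept_masks : dict mapping concept name -> boolean array of shape (n_images, H, W)
        Returns a dict mapping concept name -> IoU with the thresholded unit map.
        """
        # Threshold chosen so the unit is "on" for a fixed fraction of all pixels
        t = np.quantile(activations, quantile)
        unit_mask = activations > t
        scores = {}
        for name, mask in concept_masks.items():
            inter = np.logical_and(unit_mask, mask).sum()
            union = np.logical_or(unit_mask, mask).sum()
            scores[name] = inter / union if union else 0.0
        return scores
    ```

    A unit is then labeled with the concept whose IoU is highest, provided it exceeds a chosen cutoff.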

    An investigation of the breast cancer classification using various machine learning techniques

    Predicting a disease with precision or accuracy from the visual diagnosis of cell type is an extremely cumbersome process, especially when multiple features are involved. Cancer is one such example, where the phenomenon is very complex and multiple features of cell types are involved. Breast cancer is a disease that mostly affects the female population, and the number of people affected is the highest among all cancer types in India. In the present investigation, various pattern recognition techniques were used for the classification of breast cancer using cell image processing. Under these pattern recognition techniques, cell image segmentation, texture-based image feature extraction, and subsequent classification of breast cancer cells were successfully performed. When four different machine learning techniques, K-nearest neighbor (KNN), Artificial Neural Network (ANN), Support Vector Machine (SVM), and Least Squares Support Vector Machine (LS-SVM), were used to classify 81 cell images, the LS-SVM with both Radial Basis Function (RBF) and linear kernels demonstrated the highest classification rate of 95.3488% among the four classifiers, while the SVM with a linear kernel achieved a classification rate of 93.02%, close to the LS-SVM classifier. Thus, the LS-SVM classifier showed accuracy higher than other classifiers reported so far. Moreover, unlike other approaches reported so far, our classifier can classify the disease in a short period of time using only cell images.
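
    scikit-learn does not ship a least-squares SVM, so the sketch below substitutes a standard SVM with linear and RBF kernels to show how such a kernel comparison on texture features might be run; the feature matrix, labels, and hyperparameters are placeholders rather than the study's setup.

    ```python
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def evaluate_svm(X, y, kernel="rbf"):
        """Cross-validated accuracy of an SVM on texture feature vectors."""
        model = make_pipeline(StandardScaler(), SVC(kernel=kernel, C=1.0, gamma="scale"))
        return cross_val_score(model, X, y, cv=5).mean()

    # X: texture features extracted from the segmented cell images, y: class labels
    # for k in ("linear", "rbf"):
    #     print(k, evaluate_svm(X, y, kernel=k))
    ```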

    Image Automatic Categorisation using Selected Features Attained from Integrated Non-Subsampled Contourlet with Multiphase Level Sets

    A framework for the automatic detection and categorization of Breast Cancer (BC) biopsy images using significant interpretable features is considered in this work, with appropriate and efficient techniques employed at each stage. The steps are: (1) to emphasize the edge details of the tissue structure, the Non-Subsampled Contourlet (NSC) transform is applied; (2) to demarcate cells from the background, two proposed integrated methodologies are discussed alongside k-means and Adaptive Size Marker Controlled Watershed, with the proposed Method-II, an integrated approach of NSC and Multiphase Level Sets, preferred over the other segmentation practices as it delivers better performance; (3) in the feature extraction phase, 13 shape-morphology, 33 textural (6 histogram, 22 Haralick's, 3 Tamura's, and 2 Gray-Level Run-Length Matrix), and 2 intensity features are extracted from the partitioned tissue images for 96 training images.
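
    Of the listed feature groups, the Haralick-style statistics are straightforward to reproduce from a gray-level co-occurrence matrix; the sketch below uses scikit-image for a small subset of them. It does not cover the shape, histogram, Tamura, run-length, or intensity features, and the distances, angles, and chosen properties are illustrative.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_texture_features(gray_patch, distances=(1,), angles=(0, np.pi / 2)):
        """A few Haralick-style GLCM statistics for one grayscale (uint8) tissue patch."""
        glcm = graycomatrix(gray_patch, distances=distances, angles=angles,
                            levels=256, symmetric=True, normed=True)
        props = ("contrast", "homogeneity", "energy", "correlation")
        # Average each property over the chosen distances and angles
        return np.array([graycoprops(glcm, p).mean() for p in props])
    ```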

    Machine learning methods for histopathological image analysis

    Abundant accumulation of digital histopathological images has led to increased demand for their analysis, such as computer-aided diagnosis using machine learning techniques. However, digital pathological images and related tasks have some issues to be considered. In this mini-review, we introduce the application of digital pathological image analysis using machine learning algorithms, address some problems specific to such analysis, and propose possible solutions.

    New data model for graph-cut segmentation: application to automatic melanoma delineation

    We propose a new data model for graph-cut image segmentation, defined according to probabilities learned by a classification process. Unlike traditional graph-cut methods, the data model takes into account not only color but also texture and shape information. For melanoma images, we also introduce skin chromophore features and automatically derive "seed" pixels used to train the classifier from a coarse initial segmentation. On natural images, our method successfully segments objects having similar color but different texture. Its application to melanoma delineation compares favorably to manual delineation and related graph-cut segmentation methods.
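
    The key idea, a graph-cut data term derived from learned class probabilities rather than color alone, can be sketched with the PyMaxflow library (an assumed choice, not necessarily the authors'). Here `prob_fg` stands for the per-pixel lesion probability produced by the trained classifier, and the uniform pairwise weight is a placeholder; the actual method also encodes texture, shape, and chromophore information.

    ```python
    import numpy as np
    import maxflow  # PyMaxflow

    def segment_from_probabilities(prob_fg, pairwise_weight=2.0, eps=1e-6):
        """Binary graph-cut segmentation whose data term comes from classifier probabilities.

        prob_fg : (H, W) array with the per-pixel probability of belonging to the lesion
        Returns a boolean foreground (lesion) mask.
        """
        g = maxflow.Graph[float]()
        nodes = g.add_grid_nodes(prob_fg.shape)
        # Smoothness term: uniform weights on the 4-connected grid
        g.add_grid_edges(nodes, pairwise_weight)
        # Data term: negative log-likelihoods from the learned classifier
        cost_fg = -np.log(prob_fg + eps)
        cost_bg = -np.log(1.0 - prob_fg + eps)
        g.add_grid_tedges(nodes, cost_fg, cost_bg)
        g.maxflow()
        # Pixels on the sink side of the cut are labeled as lesion
        return g.get_grid_segments(nodes)
    ```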

    Evaluating the role of context in 3D theater stage reconstruction

    Recovering 3D structure from 2D images is a problem dating back to the 1960s. It is only recently, with the advancement of computing technology, that there has been substantial progress in solving this problem. In this thesis, we focus on one method for recovering scene structure given a single image. This method uses supervised learning techniques and a multiple-segmentation framework for adding contextual information to the inference. We evaluate the effect of this added contextual information by excluding it and measuring system performance. We then evaluate the effect of the remaining system components, which include the classifiers and image features. For example, in the case of the classifiers, we substitute the original with others to see the level of accuracy they provide. In the case of the features, we conduct experiments that identify the features contributing most to classification accuracy. Taken together, this lets us evaluate the effect of adding contextual information to the learning process and whether it can be improved by improving the other, non-contextual components of the system.
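
    The feature experiments described above could be reproduced in spirit with permutation importance on a held-out set; the sketch below shows one such setup with scikit-learn. The random forest classifier and the data layout are assumptions for illustration, not the configuration used in the thesis.

    ```python
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    def rank_features(X_train, y_train, X_val, y_val, feature_names):
        """Rank features by how much shuffling each one hurts held-out accuracy."""
        clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
        result = permutation_importance(clf, X_val, y_val, n_repeats=10, random_state=0)
        order = result.importances_mean.argsort()[::-1]
        return [(feature_names[i], result.importances_mean[i]) for i in order]
    ```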