3 research outputs found

    Infant’s MRI Brain Tissue Segmentation using Integrated CNN Feature Extractor and Random Forest

    Segmenting soft tissue in infant brain MRI is more difficult than in adult brain MRI because the infant brain has a very low signal-to-noise ratio (contrast) between white matter (WM) and gray matter (GM). Because the brain develops rapidly at this age, its overall shape and appearance also vary considerably. Manual segmentation of abnormal tissue is time-consuming and tedious, while feature extraction in traditional machine learning relies on expert-designed features, requires prior knowledge, and is sensitive to parameter changes. Recently, deep-learning-based biomedical image segmentation has shown significant potential to become an important element of the clinical assessment workflow. Motivated by this, we introduce a methodology for analysing infant MRI in order to segment brain tissue accurately. In this paper, we integrate a random forest (RF) classifier with a deep convolutional neural network (CNN) to segment infant brain MRI from the iSeg-2017 dataset into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). The results show that the proposed integrated CNN-RF method outperforms the compared approaches, achieving a superior Dice similarity coefficient (DSC), modified Hausdorff distance (MHD), and average surface distance (ASD) for each segmented tissue class.
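
    A minimal sketch of the CNN-plus-random-forest idea described in this abstract, assuming a small CNN acts as a per-patch feature extractor and a RandomForestClassifier assigns each patch to CSF, GM, or WM. The network architecture, patch size, and synthetic data below are illustrative assumptions, not the authors' configuration; in the real pipeline the CNN would be trained on labelled iSeg-2017 patches before its features are handed to the forest.

```python
# Hypothetical CNN feature extractor + random forest tissue classifier.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

class PatchCNN(nn.Module):
    """Tiny 2D CNN mapping a 1-channel MRI patch to a feature vector."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, feat_dim)

    def forward(self, x):
        return self.head(self.body(x).flatten(1))

def extract_features(model, patches):
    """Run patches of shape (N, 1, H, W) through the CNN without gradients."""
    model.eval()
    with torch.no_grad():
        return model(torch.as_tensor(patches, dtype=torch.float32)).numpy()

# Synthetic stand-in for iSeg-2017 patches: labels 0=CSF, 1=GM, 2=WM.
rng = np.random.default_rng(0)
patches = rng.normal(size=(500, 1, 32, 32)).astype(np.float32)
labels = rng.integers(0, 3, size=500)

cnn = PatchCNN()                        # assumed pre-trained in practice
feats = extract_features(cnn, patches)  # deep features per patch

train, test = slice(0, 400), slice(400, 500)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(feats[train], labels[train])
pred = rf.predict(feats[test])
print("per-class F1 (CSF, GM, WM):", f1_score(labels[test], pred, average=None))
```

    The division of labour mirrors the abstract: the CNN supplies learned features so that no hand-crafted, expert-designed descriptors are needed, while the random forest provides the final voxel/patch-level classification.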

    CLASSIFICATION BASED ON SEMI-SUPERVISED LEARNING: A REVIEW

    Semi-supervised learning is the class of machine learning that combines supervised and unsupervised learning, sitting conceptually between learning from labelled and from unlabeled data. It makes it possible to exploit the large amounts of unlabeled data that are typically available alongside the usually limited collections of labeled data. Standard classification methods in machine learning train the classifier on a labeled collection only, yet labelled instances are difficult to acquire because they require the effort of human annotators. Fully unsupervised learning, by contrast, is comparatively easy to carry out, but relying on it carries significant risk, since there is little opportunity to validate its results. Semi-supervised learning addresses this problem by using a large number of unlabeled inputs together with the supervised inputs to build a good training set. Because it requires less human effort while allowing greater accuracy, both in theory and in practice, semi-supervised learning is of critical interest.
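
    A minimal sketch of the core idea reviewed above: a small labelled set is combined with a much larger unlabelled set via self-training, one of the classic semi-supervised strategies. It uses scikit-learn's SelfTrainingClassifier, in which unlabelled points are marked with the label -1; the dataset, the 5% labelling rate, and the confidence threshold are assumptions chosen only to keep the example self-contained.

```python
# Self-training on a mostly unlabelled dataset (synthetic, for illustration).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Pretend only ~5% of the training labels are available; hide the rest as -1.
rng = np.random.default_rng(0)
y_semi = y_train.copy()
unlabeled = rng.random(len(y_semi)) > 0.05
y_semi[unlabeled] = -1

# Supervised baseline: trained on the small labelled subset only.
base = LogisticRegression(max_iter=1000)
base.fit(X_train[~unlabeled], y_train[~unlabeled])
print("supervised on ~5% labels:", base.score(X_test, y_test))

# Self-training: the classifier pseudo-labels high-confidence unlabelled
# points and is refit on the growing labelled-plus-pseudo-labelled set.
self_train = SelfTrainingClassifier(LogisticRegression(max_iter=1000),
                                    threshold=0.9)
self_train.fit(X_train, y_semi)
print("self-training with unlabeled data:", self_train.score(X_test, y_test))
```

    Whether the semi-supervised model actually beats the supervised baseline depends on how well the unlabelled data reflects the class structure, which is exactly the kind of assumption the surveyed methods differ on.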

    Learning with Low-Quality Data: Multi-View Semi-Supervised Learning with Missing Views

    The focus of this thesis is on learning approaches for what we call "low-quality data", and in particular data in which only small amounts of labeled target data are available. The first part provides background discussion on low-quality data issues, followed by a preliminary study in this area. The remainder of the thesis focuses on a particular scenario: multi-view semi-supervised learning. Multi-view learning generally refers to learning with data that has multiple natural views, or sets of features, associated with it. Multi-view semi-supervised learning methods try to exploit the combination of multiple views along with large amounts of unlabeled data in order to learn better predictive functions when limited labeled data is available. However, a lack of complete view data limits the applicability of multi-view semi-supervised learning to real-world data: commonly, one data view is readily and cheaply available, but additional views may be costly or only available in some cases. This thesis aims to make multi-view semi-supervised learning approaches more applicable to real-world data, specifically by addressing the issue of missing views through both feature generation and active learning, and by addressing the issue of model selection for semi-supervised learning with limited labeled data.

    This thesis introduces a unified approach for handling missing view data in multi-view semi-supervised learning tasks, which applies both to data with a completely missing additional view and to data missing views only in some instances. The idea is to learn a feature generation function mapping one view to another, with the mapping biased to encourage the generated features to be useful for multi-view semi-supervised learning algorithms; the mapping is then used to fill in views as pre-processing. Unlike previously proposed single-view approaches to multi-view learning, the proposed approach can take advantage of additional view data when available, and for the case of partial view presence it is the first feature-generation approach specifically designed to account for the multi-view semi-supervised learning setting.

    The next component of this thesis is the analysis of an active view completion scenario. In some tasks it is possible to obtain missing view data for a particular instance, but at some cost. Recent work has shown that an active selection strategy can be more effective than a random one. This thesis seeks a better understanding of active approaches and demonstrates that the effectiveness of an active selection strategy over a random one can depend on the relationship between the views.

    Finally, an important component of making multi-view semi-supervised learning applicable to real-world data is model selection, an open problem that is often avoided entirely in previous work. With very limited labeled training data, the commonly used cross-validation approach can become ineffective. This thesis introduces a re-training alternative to the method-dependent approaches, similar in motivation to cross-validation, that generates new training and test data by sampling from the large amount of unlabeled data together with estimated conditional probabilities for the labels. The proposed approaches are evaluated on a variety of multi-view semi-supervised learning data sets, and the experimental results demonstrate their efficacy. A small code sketch of the view-completion idea follows the abstract.
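
    A minimal sketch of the view-completion step described in this abstract, under simplifying assumptions: a plain ridge regression learns the mapping from the always-available view to the costly view on instances where both are observed, the missing view is filled in as pre-processing, and a single co-training-style round then exchanges confident pseudo-labels. The thesis additionally biases the learned mapping toward downstream multi-view semi-supervised learning, which this sketch does not attempt; all data here is synthetic.

```python
# Hypothetical view completion + one co-training-style round (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)
n, d1, d2 = 1000, 10, 8

# View 1 is always observed; view 2 is correlated with it but mostly missing.
view1 = rng.normal(size=(n, d1))
true_map = rng.normal(size=(d1, d2))
view2 = view1 @ true_map + 0.1 * rng.normal(size=(n, d2))
labels = (view1[:, 0] + view2[:, 0] > 0).astype(int)
has_view2 = rng.random(n) < 0.3          # only 30% have the second view
labeled = rng.random(n) < 0.1            # only 10% have labels

# Step 1: learn a feature-generation function view1 -> view2 on instances
# where both views are present.
gen = Ridge(alpha=1.0).fit(view1[has_view2], view2[has_view2])

# Step 2: fill in the missing second view as pre-processing.
view2_filled = view2.copy()
view2_filled[~has_view2] = gen.predict(view1[~has_view2])

# Step 3: per-view classifiers trained on the labelled instances; the view-1
# classifier passes its confident pseudo-labels to the view-2 classifier.
clf1 = LogisticRegression(max_iter=1000).fit(view1[labeled], labels[labeled])
proba1 = clf1.predict_proba(view1[~labeled])
confident = proba1.max(axis=1) > 0.95
pseudo = proba1.argmax(axis=1)[confident]

X2_aug = np.vstack([view2_filled[labeled], view2_filled[~labeled][confident]])
y_aug = np.concatenate([labels[labeled], pseudo])
clf2 = LogisticRegression(max_iter=1000).fit(X2_aug, y_aug)
print("view-2 classifier accuracy on unlabeled instances:",
      clf2.score(view2_filled[~labeled], labels[~labeled]))
```

    The sketch illustrates why the mapping matters: if the generated second view is poor, the pseudo-labels exchanged between views propagate errors, which is the motivation for biasing the feature-generation function toward the semi-supervised objective rather than fitting it purely by regression.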