14 research outputs found

    Guided interactive image segmentation using machine learning and color based data set clustering

    We present a novel approach that combines machine-learning-based interactive image segmentation using supervoxels with a clustering method for the automated identification of similarly colored images in large data sets, which enables a guided reuse of classifiers. Our approach solves the problem of significant color variability, prevalent and often unavoidable in biological and medical images, which typically leads to deteriorated segmentation and quantification accuracy, thereby greatly reducing the necessary training effort. This increase in efficiency facilitates the quantification of much larger numbers of images, enabling interactive image analysis to keep pace with recent technological advances in high-throughput imaging. The presented methods are applicable to almost any image type and represent a useful tool for image analysis tasks in general.
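The supervoxel-based interactive segmentation pipeline described above can be sketched roughly as follows. This is an illustrative assumption of the general workflow, not the authors' implementation: the image is over-segmented into supervoxels (here approximated by a coarse grid of patches), a few regions are labeled by the "user", and a classifier predicts labels for the rest from simple per-region color features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic 2-region image: left half dark (background), right half bright.
img = np.zeros((32, 32, 3))
img[:, 16:] = 0.9
img += rng.normal(0, 0.02, img.shape)

# Stand-in for supervoxels: a coarse 4x4 grid of patches.
labels = np.zeros((32, 32), dtype=int)
for i in range(4):
    for j in range(4):
        labels[i * 8:(i + 1) * 8, j * 8:(j + 1) * 8] = i * 4 + j

# Per-supervoxel feature: mean color over the region.
feats = np.array([img[labels == k].mean(axis=0) for k in range(16)])

# "User scribbles": one labeled supervoxel per class.
X_train = feats[[0, 3]]           # top-left (background), top-right (foreground)
y_train = np.array([0, 1])

clf = RandomForestClassifier(n_estimators=25, bootstrap=False,
                             random_state=0).fit(X_train, y_train)
pred = clf.predict(feats)         # classify all remaining supervoxels
```

In a real pipeline the grid would be replaced by an actual supervoxel over-segmentation and the features by richer color/texture statistics, but the interaction pattern is the same.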

    Dynamically balanced online random forests for interactive scribble-based segmentation

    Interactive scribble-and-learning-based segmentation is attractive for its good performance and reduced number of user interactions. Scribbles for foreground and background are often imbalanced, and with the arrival of new scribbles, the imbalance ratio may change largely. Failing to deal with imbalanced training data and a changing imbalance ratio may lead to decreased sensitivity and accuracy of segmentation. We propose a generic Dynamically Balanced Online Random Forest (DyBa ORF) to deal with these problems, combining a dynamically balanced online bagging method with a tree growing and shrinking strategy to update the random forests. We validated DyBa ORF on UCI machine learning data sets and applied it to two different clinical applications: 2D segmentation of the placenta from fetal MRI and of adult lungs from radiographic images. Experiments show that it outperforms a traditional ORF in dealing with imbalanced data with a changing imbalance ratio, while maintaining comparable accuracy and higher efficiency compared with its offline counterpart. Our results demonstrate that DyBa ORF is more suitable than existing ORFs for learning-based interactive image segmentation.
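The dynamically balanced online bagging idea can be illustrated with a minimal sketch (an illustration of the weighting principle only, not the DyBa ORF implementation): in online bagging, each arriving sample is replicated k ~ Poisson(lam) times for each tree; here lam is scaled by the inverse frequency of the sample's class, so minority-class samples are oversampled even as the imbalance ratio drifts.

```python
import numpy as np

rng = np.random.default_rng(42)
class_counts = {0: 0, 1: 0}

def balanced_poisson_weight(label):
    """Replication count for one incoming sample under balanced online bagging."""
    class_counts[label] += 1
    total = sum(class_counts.values())
    n_classes = len(class_counts)
    # lam > 1 for under-represented classes, < 1 for over-represented ones.
    lam = total / (n_classes * class_counts[label])
    return rng.poisson(lam)

# Stream with a 9:1 imbalance: the minority class 1 receives a larger lam,
# so its samples are replicated more often in each tree's training set.
stream = [0] * 90 + [1] * 10
weights = {0: [], 1: []}
for y in stream:
    weights[y].append(balanced_poisson_weight(y))
```

In the full method these replication counts would drive incremental updates of online decision trees; the tree growing/shrinking strategy for handling a changing imbalance ratio is not reproduced here.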

    Guided interactive image segmentation using machine learning and color-based image set clustering

    Over the last decades, image processing and analysis has become one of the key technologies in systems biology and medicine. The quantification of anatomical structures and dynamic processes in living systems is essential for understanding the complex underlying mechanisms and allows, inter alia, the construction of spatio-temporal models that illuminate the interplay between architecture and function. Recently, deep learning has significantly improved the performance of traditional image analysis in cases where imaging techniques provide large amounts of data. However, if only few images are available or qualified annotations are expensive to produce, the applicability of deep learning is still limited. We present a novel approach that combines machine-learning-based interactive image segmentation using supervoxels with a clustering method for the automated identification of similarly colored images in large image sets, which enables a guided reuse of interactively trained classifiers. Our approach solves the problem of deteriorated segmentation and quantification accuracy when reusing trained classifiers, which is due to significant color variability prevalent and often unavoidable in biological and medical images. This increase in efficiency improves the suitability of interactive segmentation for larger image sets, enabling efficient quantification or the rapid generation of training data for deep learning with minimal effort. The presented methods are applicable to almost any image type and represent a useful tool for image analysis tasks in general. The provided free software TiQuant makes the presented methods easily and readily usable and can be downloaded at tiquant.hoehme.com.
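The color-based grouping step can be sketched as follows. The features and clustering algorithm here are assumptions for illustration, not necessarily those used by TiQuant: images are summarized by coarse per-channel color histograms and clustered so that one interactively trained classifier can be reused within each color-consistent group.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

def color_histogram(img, bins=4):
    """Concatenated per-channel histogram as a compact color signature."""
    return np.concatenate(
        [np.histogram(img[..., c], bins=bins, range=(0, 1))[0] for c in range(3)]
    ) / img[..., 0].size

# Two synthetic "staining styles": reddish vs. bluish images.
reddish = [np.clip(rng.normal([0.8, 0.2, 0.2], 0.05, (16, 16, 3)), 0, 1)
           for _ in range(5)]
bluish = [np.clip(rng.normal([0.2, 0.2, 0.8], 0.05, (16, 16, 3)), 0, 1)
          for _ in range(5)]

feats = np.array([color_histogram(im) for im in reddish + bluish])
# Cluster the image set by color signature; a classifier trained on one
# image of a cluster is then a good candidate for reuse within that cluster.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
```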

    Feature Decoupling-Recycling Network for Fast Interactive Segmentation

    Recent interactive segmentation methods iteratively take the source image, user guidance, and the previously predicted mask as input without considering the invariant nature of the source image. As a result, feature extraction from the source image is repeated in each interaction, causing substantial computational redundancy. In this work, we propose the Feature Decoupling-Recycling Network (FDRN), which decouples the modeling components based on their intrinsic discrepancies and then recycles components across user interactions, so that the efficiency of the whole interactive process can be significantly improved. Specifically, we apply the Decoupling-Recycling strategy from three perspectives to address three types of discrepancies. First, our model decouples the learning of source image semantics from the encoding of user guidance to process the two input domains separately. Second, FDRN decouples high-level and low-level features from stratified semantic representations to enhance feature learning. Third, during the encoding of user guidance, current user guidance is decoupled from historical guidance to highlight the effect of the current guidance. We conduct extensive experiments on 6 datasets from different domains and modalities, which demonstrate the following merits of our model: 1) superior efficiency compared with other methods, particularly advantageous in challenging scenarios requiring long-term interactions (up to 4.25x faster), while achieving favorable segmentation performance; 2) strong applicability to various methods, serving as a universal enhancement technique; 3) good cross-task generalizability, e.g., to medical image segmentation, and robustness against misleading user guidance. Comment: Accepted to ACM MM 202
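The core recycling idea (cache the expensive source-image encoding once, recompute only the cheap guidance encoding per interaction) can be shown with a generic sketch. This is an illustration of the caching pattern, not the FDRN architecture; the encoder functions are trivial stand-ins.

```python
import numpy as np

calls = {"image_encoder": 0}

def image_encoder(img):
    """Expensive, input-invariant encoding (stand-in for a deep backbone)."""
    calls["image_encoder"] += 1
    return img.mean(axis=(0, 1))

def guidance_encoder(clicks):
    """Cheap, per-interaction encoding of user clicks."""
    return np.array([len(clicks), sum(x for x, _ in clicks)], dtype=float)

class RecyclingSegmenter:
    def __init__(self, img):
        # Source-image features are computed once and then recycled.
        self._img_feat = image_encoder(img)

    def predict(self, clicks):
        g = guidance_encoder(clicks)           # recomputed every interaction
        return self._img_feat.sum() + g.sum()  # stand-in for the fusion head

img = np.zeros((8, 8, 3))
seg = RecyclingSegmenter(img)
for i in range(5):                             # five user interactions
    seg.predict([(i, i)])
```

A naive loop would call the image encoder five times here; the cached version calls it once, which is the source of the reported speedup under long interaction sequences.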

    Multiatlas-Based Segmentation Editing With Interaction-Guided Patch Selection and Label Fusion

    We propose a novel multi-atlas based segmentation method to address the segmentation editing scenario, where an incomplete segmentation is given along with a set of existing reference label images (used as atlases). Unlike previous multi-atlas based methods, which depend solely on appearance features, we incorporate interaction-guided constraints to find appropriate atlas label patches in the reference label set and derive their weights for label fusion. Specifically, user interactions provided on the erroneous parts are first divided into multiple local combinations. For each combination, the atlas label patches well-matched with both the interactions and the previous segmentation are identified. Then, the segmentation is updated through the voxel-wise label fusion of selected atlas label patches, with their weights derived from the distances of each underlying voxel to the interactions. Since the atlas label patches well-matched with different local combinations are used in the fusion step, our method can consider various local shape variations during the segmentation update, even with only limited atlas label images and user interactions. Moreover, since our method does not depend on either image appearance or sophisticated learning steps, it can be easily applied to general editing problems. To demonstrate the generality of our method, we apply it to editing segmentations of the CT prostate, CT brainstem, and MR hippocampus, respectively. Experimental results show that our method outperforms existing editing methods on all three data sets.
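A toy 1D sketch of the interaction-weighted label fusion step (a simplification of the paper's voxel-wise formulation; the Gaussian weighting and patch setup are illustrative assumptions): each candidate atlas label patch votes for a voxel's label with a weight that decays with the voxel's distance to the user interaction the patch was matched against.

```python
import numpy as np

voxels = np.arange(10)              # 1D "image" positions
interactions = {0: 2, 1: 8}         # interaction id -> position
# Candidate atlas label patches: (matched interaction id, proposed labels).
# The two patches disagree about where the foreground/background boundary is.
atlas_patches = [
    (0, np.array([1] * 7 + [0] * 3)),   # matched to the interaction at 2
    (1, np.array([1] * 3 + [0] * 7)),   # matched to the interaction at 8
]

def fuse(voxels, patches, interactions, sigma=2.0):
    votes = np.zeros((len(voxels), 2))  # accumulated weight per label
    for inter_id, labels in patches:
        d = np.abs(voxels - interactions[inter_id]).astype(float)
        w = np.exp(-d**2 / (2 * sigma**2))  # weight decays with distance
        for v in range(len(voxels)):
            votes[v, labels[v]] += w[v]
    return votes.argmax(axis=1)         # voxel-wise majority by weight

fused = fuse(voxels, atlas_patches, interactions)
```

Near each interaction, the patch matched to that interaction dominates the vote, so the fused result follows patch 0 near position 2 and patch 1 near position 8, placing the boundary in between.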

    Evaluation Methods of Accuracy and Reproducibility for Image Segmentation Algorithms

    Segmentation algorithms perform differently on different datasets. Sometimes we want to learn which segmentation algorithm is best for a specific task; we therefore need to rank the performance of segmentation algorithms and determine which one is most suitable for that task. The performance of segmentation algorithms can be characterized from many aspects, such as accuracy and reproducibility. In many situations, the mean of the accuracies of individual segmentations is regarded as the accuracy of the segmentation algorithm that generated them. Sometimes a new algorithm is proposed and argued to be best based on mean segmentation accuracy alone, but the distribution of accuracies of segmentations generated by the new algorithm may not really be better than that of existing algorithms. There are cases where two groups of segmentations have the same mean accuracy but different distributions. This indicates that even if the mean accuracies of two groups of segmentations are the same, the corresponding segmentations may have different accuracy performance. In addition, the reproducibility of segmentation algorithms is measured by many different metrics, but few works have compared the properties of reproducibility measures based on real segmentation data. In this thesis, we illustrate how to evaluate and compare the accuracy performance of segmentation algorithms using a distribution-based method, as well as how to use the proposed extended method to rank multiple segmentation algorithms according to their accuracy performance. Different from the standard method, our extended method combines distribution information with the mean accuracy to evaluate, compare, and rank the accuracy performance of segmentation algorithms, instead of using mean accuracy alone.
In addition, we used two sets of real segmentation data to demonstrate that the generalized Tanimoto coefficient is a superior reproducibility measure that is insensitive to segmentation group size (number of raters), while other popular reproducibility measures exhibit sensitivity to group size.
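The motivating point can be demonstrated with synthetic numbers (not the thesis's method or data): two algorithms can share the same mean accuracy while their distributions differ markedly. The generalized Tanimoto coefficient below uses a pooled pairwise-Jaccard formulation over all rater pairs, which is one common group-wise definition and is an assumption here, not necessarily the thesis's exact variant.

```python
import numpy as np

# Algorithm A: consistent; Algorithm B: same mean, but a heavy low-accuracy tail.
acc_a = np.array([0.85, 0.86, 0.84, 0.85, 0.85])
acc_b = np.array([0.95, 0.95, 0.95, 0.95, 0.45])

mean_a, mean_b = acc_a.mean(), acc_b.mean()   # identical means
std_a, std_b = acc_a.std(), acc_b.std()       # very different spreads
worst_a, worst_b = acc_a.min(), acc_b.min()   # very different worst cases

def generalized_tanimoto(masks):
    """Pooled pairwise Jaccard over all rater pairs (one common group-wise
    formulation of the generalized Tanimoto coefficient)."""
    inter = union = 0
    for i in range(len(masks)):
        for j in range(i + 1, len(masks)):
            inter += np.logical_and(masks[i], masks[j]).sum()
            union += np.logical_or(masks[i], masks[j]).sum()
    return inter / union

masks = [np.array([1, 1, 0, 0]),
         np.array([1, 1, 1, 0]),
         np.array([1, 0, 0, 0])]
gtc = generalized_tanimoto(masks)
```

Ranking by the mean alone would call A and B equivalent, whereas the spread and worst-case values show that A is the safer choice for clinical use.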

    Recent trends in intelligent data analysis

    Intelligent Data Analysis deals with visualization, pre-processing, pattern recognition, and knowledge discovery tools and applications using various computational intelligence techniques. The ten papers included in this special issue represent a selection of extended contributions presented at the 6th International Conference on Hybrid Artificial Intelligence Systems (HAIS), held in Wroclaw, Poland, during May 2011. The papers discuss the development of new data analysis methodologies with a focus on neurocomputing and connectionist learning. Articles were selected on the basis of fundamental ideas and concepts rather than the direct usage of well-established techniques. This special issue is aimed at practitioners, researchers, and post-graduate students who are engaged in developing and applying advanced Hybrid Artificial Intelligence Systems, both from a theoretical point of view and to solve real-world problems. The papers are organized as follows. In the first contribution, Martínez et al. apply principal component analysis to the quality measures of quantitative association rules. From this analysis, a reduced subset of measures is selected to be included in the fitness function in order to obtain better values for the whole set of quality measures, and not only for those included in the fitness function. This is a general-purpose methodology and can, therefore, be applied to the fitness function of any algorithm. The second contribution, by López et al., introduces Iterative Instance Adjustment for Imbalanced Domains, an evolutionary-optimization-based framework that uses an instance generation technique designed to counter the existing imbalance by modifying the original training set. The method iteratively learns the appropriate number of examples that represent the classes and their particular positioning.
The learning process contains three key operations in its design: a customized initialization procedure, an evolutionary optimization of the positioning of the examples, and a selection of the most representative examples for each class. Next, Lysiak et al. propose a new probabilistic model using measures of classifier competence and diversity. A multiple classifier system based on the dynamic ensemble selection scheme was constructed using both developed measures. Two different optimization problems of ensemble selection are defined, and a solution based on the simulated annealing algorithm is presented. The influence of the minimum value of competence and diversity in the ensemble on classification performance was investigated. In the fourth contribution, Krawczyk and Wozniak deal with the problem of designing a combined recognition system based on a pool of individual one-class classifiers. The authors propose a new model dedicated to one-class classification and also introduce novel diversity measures dedicated to it. The proposed model of a one-class classifier committee may be used for single-class and multi-class classification tasks. In the sequel, Cano et al. propose a parallel evaluation model of rules and rule sets on GPUs based on the NVIDIA CUDA programming model, which allows significantly reducing the run-time and speeding up the algorithm. The GPU model achieved a rule interpreter performance of up to 64 billion operations per second and provides a significant advantage for dealing with complex problems where the CPU run-time is not acceptable. Martínez-Murcia et al. in the sixth paper illustrate a new CAD system based on pre-processing, voxel selection, feature extraction, and classification of the images. After pre-processing of the images, voxels are ranked by means of their significance in class discrimination, and the first N are selected.
Then, these voxels are modeled using Independent Component Analysis (ICA), obtaining a few components that represent each image, which are later used to train a classifier. In the seventh paper, Maiora et al. provide an Active-Learning-based interactive image segmentation system, which allows quick volume segmentation with minimal intervention of a human operator. Image segmentation is achieved by a Random Forest (RF) classifier applied on a set of image features extracted from each voxel and its neighborhood. An initial set of labeled voxels is required to start the process, training an initial RF. The most uncertain unlabeled voxels are shown to the human operator, who selects some of them for inclusion in the training set, retraining the RF classifier. The following contribution, by Cyganek and Gruszczynski, presents a hybrid visual system for monitoring a driver's states of fatigue, sleepiness, and inattention based on driver's-eye recognition. Safe operation in car conditions and processing in day and night conditions are obtained using a custom setup of two cameras operating in the visible and near-infrared spectra, respectively. In each spectrum, a cascade of two classifiers performs the processing. The first classifier is responsible for the detection of eye regions based on the proposed eye models specific to each spectrum. The second classifier in each cascade is responsible for eye verification. It is based on the higher-order singular value decomposition of tensors of geometrically deformed versions of real eye prototypes, specific to the visible and NIR spectra, respectively. In the sequel, Calvo-Rolle and Corchado present a novel bio-inspired knowledge system, based on closed-loop tuning, for calculating the Proportional-Integral-Derivative (PID) controller parameters of a real combined cycle plant. The aim is to automatically achieve the best parameters according to the working point and the dynamics of the plant.
In the last contribution, Lee and Cho present a method to recognize a person's activities from sensors in a mobile phone using a mixture-of-experts (ME) model. In order to train the ME model, the authors applied a global-local co-training algorithm with both labeled and unlabeled data to improve the performance. We would like to thank our peer reviewers for their diligent work and efficient efforts. We are also grateful to the Editor-in-Chief of Neurocomputing, Prof. Tom Heskes, for his continued support for the HAIS conference and for the opportunity to organize this special issue.
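The active-learning loop described for Maiora et al.'s system follows a standard uncertainty-sampling pattern, sketched below with a Random Forest. The data, feature dimensions, and batch size are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic per-voxel features: two well-separated classes of 100 voxels each.
X = np.vstack([rng.normal(0.0, 1.0, (100, 3)), rng.normal(4.0, 1.0, (100, 3))])
y = np.array([0] * 100 + [1] * 100)

labeled = [0, 50, 100, 150]      # small initial labeling by the operator
for _ in range(3):               # three operator interaction rounds
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[labeled], y[labeled])
    # Least-confident voxels are shown to the "operator" for labeling.
    uncertainty = 1 - clf.predict_proba(X).max(axis=1)
    for idx in np.argsort(uncertainty)[::-1][:5]:
        if int(idx) not in labeled:
            labeled.append(int(idx))   # operator supplies the true label

final_acc = (clf.predict(X) == y).mean()
```

The loop concentrates labeling effort on the voxels the current classifier is least sure about, which is why such systems reach good accuracy with few operator interventions.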