Correntropy Maximization via ADMM - Application to Robust Hyperspectral Unmixing
In hyperspectral images, some spectral bands suffer from low signal-to-noise
ratio due to noisy acquisition and atmospheric effects, thus requiring robust
techniques for the unmixing problem. This paper presents a robust supervised
spectral unmixing approach for hyperspectral images. The robustness is achieved
by writing the unmixing problem as the maximization of the correntropy
criterion subject to the most commonly used constraints. Two unmixing problems
are derived: the first problem considers the fully-constrained unmixing, with
both the non-negativity and sum-to-one constraints, while the second combines
non-negativity with a sparsity-promoting penalty on the abundances. The
corresponding optimization problems are solved efficiently using an alternating
direction method of multipliers (ADMM) approach. Experiments on synthetic and
real hyperspectral images validate the performance of the proposed algorithms
for different scenarios, demonstrating that the correntropy-based unmixing is
robust to outlier bands.
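To illustrate the correntropy criterion on the non-negativity-constrained variant, here is a minimal sketch that uses half-quadratic reweighting (iteratively reweighted non-negative least squares) instead of the paper's ADMM solver; the endmember matrix `M`, the kernel width `sigma`, and the iteration count are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.optimize import nnls

def correntropy_unmix(M, y, sigma=0.1, n_iter=20):
    """Robust unmixing of one pixel spectrum y (bands,) against endmembers
    M (bands x endmembers): maximise the correntropy of the residuals
    subject to non-negative abundances, via reweighted NNLS."""
    a, _ = nnls(M, y)                       # initialise with plain NNLS
    for _ in range(n_iter):
        r = y - M @ a                       # per-band residuals
        w = np.exp(-r**2 / (2 * sigma**2))  # correntropy weights: outlier bands -> ~0
        sw = np.sqrt(w)
        a, _ = nnls(M * sw[:, None], y * sw)  # weighted NNLS refit
    return a, w
```

Bands whose residuals are large relative to `sigma` receive exponentially small weights, which is what makes the criterion robust to outlier bands.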
Sparse feature learning for image analysis in segmentation, classification, and disease diagnosis.
The success of machine learning algorithms generally depends on an intermediate data representation, called features, that disentangles the hidden factors of variation in the data. Moreover, machine learning models are required to generalize, in order to reduce specificity or bias toward the training dataset. Unsupervised feature learning is useful for taking advantage of the large amounts of unlabeled data available to capture these variations. However, the learned features are required to capture the variational patterns in the data space. In this dissertation, unsupervised feature learning with sparsity is investigated for sparse and local feature extraction, with applications to lung segmentation, interpretable deep models, and Alzheimer's disease classification. Nonnegative Matrix Factorization, Autoencoder and 3D Convolutional Autoencoder are used as architectures or models for unsupervised feature learning. They are investigated along with nonnegativity, sparsity and part-based representation constraints for generalized and transferable feature extraction.
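As one concrete instance of sparse, part-based unsupervised feature learning, here is a minimal sketch of Nonnegative Matrix Factorization with an L1 sparsity penalty on the coefficient matrix, solved by standard multiplicative updates; the rank `k`, penalty weight `lam`, and iteration count are illustrative choices, not the dissertation's settings:

```python
import numpy as np

def sparse_nmf(V, k, lam=0.1, n_iter=200, seed=0):
    """Factor a nonnegative matrix V (m x n) as W @ H with W, H >= 0,
    minimising ||V - WH||_F^2 + lam * sum(H) via multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    eps = 1e-9
    for _ in range(n_iter):
        # sparsity-penalised update for the coefficients H
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
        # standard update for the basis (the learned "parts") W
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The multiplicative form keeps both factors nonnegative throughout, which is what yields the part-based representation the abstract refers to.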
Collaborative Summarization of Topic-Related Videos
Large collections of videos are grouped into clusters by a topic keyword,
such as Eiffel Tower or Surfing, with many important visual concepts repeating
across them. The videos in such a topically close set have mutual influence on
each other, which can be exploited to summarize one video using information
from the others in the set. We build on this intuition to develop a novel
approach to extract a summary that simultaneously captures both the important
particularities arising in the given video and the generalities identified
across the set. The topic-related videos provide visual context to
identify the important parts of the video being summarized. We achieve this by
developing a collaborative sparse optimization method which can be efficiently
solved by a half-quadratic minimization algorithm. Our work builds upon the
idea of collaborative techniques from information retrieval and natural
language processing, which typically use the attributes of other similar
objects to predict the attribute of a given object. Experiments on two
challenging and diverse datasets well demonstrate the efficacy of our approach
over state-of-the-art methods.
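The half-quadratic device used to solve the sparse optimization can be sketched on a generic subproblem: minimise a least-squares term plus a smoothed L1 penalty by alternating between closed-form auxiliary weights and a linear solve. This illustrates the optimisation technique only, not the paper's collaborative objective; the dictionary `D` and penalty `lam` are hypothetical:

```python
import numpy as np

def half_quadratic_lasso(D, x, lam=0.1, eps=1e-8, n_iter=50):
    """Minimise ||D c - x||^2 + lam * sum_j sqrt(c_j^2 + eps)
    (a smoothed L1 penalty) by half-quadratic minimisation."""
    c = np.linalg.lstsq(D, x, rcond=None)[0]   # unpenalised start
    for _ in range(n_iter):
        # half-quadratic step 1: closed-form auxiliary weights
        w = lam / (2.0 * np.sqrt(c**2 + eps))
        # half-quadratic step 2: weighted ridge system, solved exactly
        c = np.linalg.solve(D.T @ D + np.diag(w), D.T @ x)
    return c
```

Each alternation decreases the smoothed objective, and coefficients near zero receive large weights and are driven further toward zero, producing the sparse selection behaviour.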
Nuclei segmentation using level set method and data fusion for the CIN classification
This thesis deals with automating the detection of cervical cancer from histology images. The process is divided into two parts: segmentation and data fusion. The segmentation and classification of the cervical epithelium images is done using hybrid image processing techniques. The digitized histology images provided have been graded by expert pathologists for a pre-cancerous condition called cervical intraepithelial neoplasia (CIN). Previous image analysis studies focused on nuclei-level features to classify the epithelium into the CIN grades. The current study focuses on nuclei segmentation based on level set segmentation and fuzzy c-means clustering. Morphological post-processing operations are used to smooth the image and to remove non-nuclei objects. The algorithm is evaluated on a 71-image dataset of digitized histology images for nuclei segmentation, and experimental results showed a nuclei detection accuracy of 99.53 percent. The second part of the thesis deals with the fusion of the 117 CIN features obtained after processing the input cervical images. Various data fusion techniques are tested using machine learning tools, and the best algorithm from Weka is chosen for further research.
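The fuzzy c-means step can be sketched as follows: soft memberships and centroids are updated alternately until they stabilise. This is a minimal, generic implementation (feature vectors, cluster count, and fuzzifier `m` are illustrative; the thesis's level set stage and morphological post-processing are omitted):

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """Fuzzy c-means on X (n x d): returns soft memberships U (n x c),
    each row summing to 1, and the c centroids."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m                                   # fuzzified memberships
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distances from every point to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-9
        e = 2.0 / (m - 1.0)
        U = (d ** -e) / np.sum(d ** -e, axis=1, keepdims=True)
    return U, centroids
```

For nuclei segmentation, `X` would hold per-pixel intensities or small feature vectors, and a pixel's hard label is the argmax of its membership row.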
Brain Image Segmentation Based on Fuzzy Clustering
Segmentation performance is subject to suitable initialization and the best configuration of supervisory parameters. In medical image segmentation, accurate segmentation is especially important because diagnosis becomes very hard on medical images that are not properly illuminated.
This paper proposes segmentation of brain tumour MRI images based on spatial fuzzy clustering and a level set algorithm. A performance evaluation of the proposed algorithm was carried out on brain tumour images, and the results confirm its effectiveness for medical image segmentation: the brain tumour is detected properly.
Kernel Truncated Regression Representation for Robust Subspace Clustering
Subspace clustering aims to group data points into multiple clusters, each of
which corresponds to one subspace. Most existing subspace clustering approaches
assume that input data lie on linear subspaces. In practice, however, this
assumption usually does not hold. To achieve nonlinear subspace clustering, we
propose a novel method, called kernel truncated regression representation. Our
method consists of the following four steps: 1) projecting the input data into
a hidden space, where each data point can be linearly represented by other data
points; 2) calculating the linear representation coefficients of the data
representations in the hidden space; 3) truncating the trivial coefficients to
achieve robustness and block-diagonality; and 4) executing the graph cutting
operation on the coefficient matrix by solving a graph Laplacian problem. Our
method has the advantages of a closed-form solution and the capacity of
clustering data points that lie on nonlinear subspaces. The first advantage
makes our method efficient in handling large-scale datasets, and the second
enables the proposed method to handle the nonlinear subspace clustering
challenge. Extensive experiments on six benchmarks demonstrate the
effectiveness and the efficiency of the proposed method in comparison with
current state-of-the-art approaches.
Image Segmentation of Cows using Thresholding and K-Means Method
A cow's weight depends on the characteristics and size of its body. This system aims to segment the body parts of cows using thresholding and the K-Means method, producing a cow body extraction as an early stage in the process of estimating the cow's weight. The thresholding method begins by inputting a digital image and then performing a sharpened grayscale process with edge detection and dilation. As a comparison, segmentation with the K-Means method divides the image into two (2) clusters. The results showed better segmentation of the cow's body with the local thresholding method than with the other methods compared.
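The two segmentation routes can be sketched on a grayscale image: a global Otsu-style threshold and a two-cluster K-Means on pixel intensities. This is a minimal illustration; the paper's preprocessing (sharpening, edge detection, dilation) and its local thresholding variant are omitted:

```python
import numpy as np

def otsu_threshold(gray):
    """Global Otsu threshold on a uint8 grayscale image; returns a
    foreground mask (pixels above the threshold)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                     # class probability up to t
    mu = np.cumsum(p * np.arange(256))       # cumulative mean up to t
    mu_t = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    t = int(np.nanargmax(sigma_b))           # maximise between-class variance
    return gray > t

def kmeans_segment(gray, n_iter=20):
    """Two-cluster K-Means on pixel intensities; returns a mask of the
    brighter cluster."""
    x = gray.ravel().astype(float)
    c = np.array([x.min(), x.max()])         # initial centres: darkest, brightest
    for _ in range(n_iter):
        labels = (np.abs(x - c[0]) > np.abs(x - c[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = x[labels == k].mean()
    return labels.reshape(gray.shape).astype(bool)
```

On a clearly bimodal image the two methods agree; they diverge under uneven illumination, which is why the paper compares them.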