App Review Analysis via Active Learning: Reducing Supervision Effort Without Compromising Classification Accuracy
Automated app review analysis is an important avenue for extracting a variety of requirements-related information. Typically, a first step toward performing such analysis is preparing a training dataset, where developers (experts) identify a set of reviews and manually annotate them according to a given task. Having sufficiently large training data is important both for achieving high prediction accuracy and for avoiding over-fitting. Given millions of reviews, preparing a training set is laborious. We propose to incorporate active learning, a machine learning paradigm, in order to reduce the human effort involved in app review analysis. Our app review classification framework exploits three active learning strategies based on uncertainty sampling. We apply these strategies to an existing dataset of 4,400 app reviews, classifying app reviews as features, bugs, rating, and user experience. We find that active learning, compared to a randomly chosen training dataset, yields significantly higher prediction accuracy under multiple scenarios.
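The abstract does not name its three uncertainty-sampling strategies, but the standard choices are least confidence, margin, and entropy. A minimal sketch, assuming a classifier that exposes per-class probabilities (function names are illustrative, not from the paper):

```python
import numpy as np

def least_confidence(probs):
    # 1 - max class probability; higher means more uncertain
    return 1.0 - probs.max(axis=1)

def margin_sampling(probs):
    # Small gap between the top two classes means high uncertainty,
    # so negate the margin to make "higher = more uncertain".
    part = np.sort(probs, axis=1)
    return -(part[:, -1] - part[:, -2])

def entropy_sampling(probs):
    # Shannon entropy of the predicted class distribution
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def select_batch(probs, k, strategy=least_confidence):
    # Pick the k most uncertain unlabeled reviews for annotation.
    return np.argsort(strategy(probs))[-k:]
```

Each active-learning round would score the unlabeled pool with the current model, send the selected batch to annotators, and retrain.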
Text segmentation on multilabel documents: A distant-supervised approach
Segmenting text into semantically coherent segments is an important task with applications in information retrieval and text summarization. Developing accurate topical segmentation requires the availability of training data with ground-truth information at the segment level. However, generating such labeled datasets, especially for applications in which the meaning of the labels is user-defined, is expensive and time-consuming. In this paper, we develop an approach that, instead of using segment-level ground-truth information, uses the set of labels associated with a document, which are easier to obtain since the training data essentially corresponds to a multilabel dataset. Our method, which can be thought of as an instance of distant supervision, improves upon previous approaches by exploiting the fact that consecutive sentences in a document tend to talk about the same topic and hence probably belong to the same class. Experiments on the text segmentation task on a variety of datasets show that the segmentation produced by our method beats the competing approaches on four out of five datasets and performs on par on the fifth. On the multilabel text classification task, our method performs on par with the competing approaches while requiring significantly less time to estimate. Comment: Accepted in 2018 IEEE International Conference on Data Mining (ICDM)
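The two ingredients above, restricting each sentence to the document's own label set and exploiting that consecutive sentences share a topic, can be illustrated with a toy sketch (this is an illustration of the idea, not the paper's algorithm; `segment_by_labels` and the smoothing window are assumptions):

```python
import numpy as np

def segment_by_labels(sent_probs, doc_labels, window=1):
    # sent_probs: (n_sentences, n_classes) scores from a weak classifier
    # doc_labels: indices of the labels attached to the whole document
    # Restrict each sentence's choice to the document's label set.
    restricted = sent_probs[:, doc_labels]
    # Smooth scores over neighboring sentences: consecutive sentences
    # tend to discuss the same topic.
    kernel = np.ones(2 * window + 1)
    smoothed = np.stack(
        [np.convolve(restricted[:, j], kernel, mode="same")
         for j in range(restricted.shape[1])], axis=1)
    assigned = np.array(doc_labels)[smoothed.argmax(axis=1)]
    # A new segment starts wherever the assigned label changes.
    boundaries = [0] + [i for i in range(1, len(assigned))
                        if assigned[i] != assigned[i - 1]]
    return assigned, boundaries
```

A run of sentences assigned the same document label forms one segment, which is how document-level supervision stands in for segment-level ground truth.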
Hierarchical Network with Label Embedding for Contextual Emotion Recognition
Emotion recognition has been widely used in various applications such as mental health monitoring and emotional management. Usually, emotion recognition is regarded as a text classification task, but it is a more complex problem: the relations among the emotions expressed in a text are non-negligible. In this paper, a hierarchical model with label embedding is proposed for contextual emotion recognition. Specifically, a hierarchical model is utilized to learn the emotional representation of a given sentence based on its contextual information. To provide emotion-correlation-based recognition, a label embedding matrix is trained by joint learning, which contributes to the final prediction. Comparison experiments are conducted on the Chinese emotional corpus RenCECps, and the experimental results indicate that our approach achieves satisfying performance on the textual emotion recognition task.
Doctor of Philosophy dissertation
Latent structures play a vital role in many data analysis tasks. By providing compact yet expressive representations, such structures can offer useful insights into the complex and high-dimensional datasets encountered in domains such as computational biology, computer vision, natural language processing, etc. Specifying the right complexity of these latent structures for a given problem is an important modeling decision. Instead of using models with an a priori fixed complexity, it is desirable to have models that can adapt their complexity as the data warrant. Nonparametric Bayesian models are motivated precisely by this desideratum, offering a flexible modeling paradigm for data without limiting the model complexity a priori. The flexibility comes from the model's ability to adjust its complexity adaptively with data. This dissertation is about nonparametric Bayesian learning of two specific types of latent structures: (1) low-dimensional latent features underlying high-dimensional observed data, where the latent features could exhibit interdependencies, and (2) latent task structures that capture how a set of learning tasks relate to each other, a notion critical in the paradigm of Multitask Learning, where the goal is to solve multiple learning tasks jointly in order to borrow information across similar tasks. Another focus of this dissertation is on designing efficient approximate inference algorithms for nonparametric Bayesian models. Specifically, for the nonparametric Bayesian latent feature model, where the goal is to infer the binary-valued latent feature assignment matrix for a given set of observations, the dissertation proposes two approximate inference methods. The first is a search-based algorithm to find the maximum-a-posteriori (MAP) solution for the latent feature assignment matrix. The second is a sequential Monte-Carlo-based approximate inference algorithm that allows processing the data one example at a time while being space-efficient in terms of the storage required to represent the posterior distribution of the latent feature assignment matrix.
Recurrently Exploring Class-wise Attention in A Hybrid Convolutional and Bidirectional LSTM Network for Multi-label Aerial Image Classification
Aerial image classification is of great significance in the remote sensing community, and many studies have been conducted over the past few years. Most of these studies focus on categorizing an image into a single semantic label, while in the real world an aerial image is often associated with multiple labels, e.g., multiple object-level labels in our case. Besides, a comprehensive picture of the objects present in a given high-resolution aerial image can provide a more in-depth understanding of the studied region. For these reasons, aerial image multi-label classification has been attracting increasing attention. However, one common limitation shared by existing methods in the community is that the co-occurrence relationship of various classes, the so-called class dependency, is underexplored and leads to suboptimal decisions. In this paper, we propose a novel end-to-end network, namely the class-wise attention-based convolutional and bidirectional LSTM network (CA-Conv-BiLSTM), for this task. The proposed network consists of three indispensable components: 1) a feature extraction module, 2) a class attention learning layer, and 3) a bidirectional LSTM-based sub-network. Particularly, the feature extraction module is designed for extracting fine-grained semantic feature maps, while the class attention learning layer aims at capturing discriminative class-specific features. As the most important part, the bidirectional LSTM-based sub-network models the underlying class dependency in both directions and produces structured multiple object labels. Experimental results on the UCM multi-label dataset and the DFC15 multi-label dataset validate the effectiveness of our model both quantitatively and qualitatively.
Hierarchical ensemble methods for protein function prediction
Protein function prediction is a complex multiclass multilabel classification problem, characterized by multiple issues such as the incompleteness of the available annotations, the integration of multiple sources of high-dimensional biomolecular data, the imbalance of several functional classes, and the difficulty of univocally determining negative examples. Moreover, the hierarchical relationships between functional classes that characterize both the Gene Ontology and FunCat taxonomies motivate the development of hierarchy-aware prediction methods, which have shown significantly better performance than hierarchy-unaware "flat" prediction methods. In this paper, we provide a comprehensive review of hierarchical methods for protein function prediction based on ensembles of learning machines. According to this general approach, a separate learning machine is trained to learn a specific functional term, and the resulting predictions are then assembled into a "consensus" ensemble decision, taking into account the hierarchical relationships between classes. The main hierarchical ensemble methods proposed in the literature are discussed in the context of existing computational methods for protein function prediction, highlighting their characteristics, advantages, and limitations. Open problems of this exciting research area of computational biology are finally considered, outlining novel perspectives for future research.
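As one concrete instance of the "consensus" step this review surveys, a simple top-down rule enforces the true-path constraint of GO/FunCat by capping each term's score at its parent's. This is a sketch of one basic variant, not a specific method from the paper:

```python
def hierarchical_consensus(scores, parent):
    # scores: {term: flat per-term classifier score in [0, 1]}
    # parent: {term: parent term; roots map to None}
    # Top-down pass: a term's consensus score cannot exceed its
    # parent's, so an annotation never appears without its ancestors.
    consensus = {}

    def resolve(term):
        if term in consensus:
            return consensus[term]
        p = parent[term]
        s = scores[term] if p is None else min(scores[term], resolve(p))
        consensus[term] = s
        return s

    for t in scores:
        resolve(t)
    return consensus
```

Thresholding the consensus scores then yields a set of predicted terms that is guaranteed to be consistent with the hierarchy.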
A text segmentation approach for automated annotation of online customer reviews, based on topic modeling
Online customer review classification and analysis have been recognized as an important problem in many domains, such as business intelligence, marketing, and e-governance. To solve this problem, a variety of machine learning methods have been developed in the past decade. Existing methods, however, either rely on human labeling or have high computing cost, or both. This makes them a poor fit for dealing with dynamic and ever-growing collections of short but semantically noisy customer review texts. In the present study, the problem of multi-topic online review clustering is addressed by generating high-quality bronze-standard labeled sets for training efficient classifier models. A novel unsupervised algorithm is developed to break reviews into sequential, semantically homogeneous segments. Segment data is then used to fine-tune a Latent Dirichlet Allocation (LDA) model obtained for the reviews, and to classify them along categories detected through topic modeling. After testing the segmentation algorithm on a benchmark text collection, it was successfully applied in a case study of tourism review classification. In all experiments conducted, the proposed approach produced results similar to or better than baseline methods. The paper critically discusses the main findings and paves the way for future work.
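The topic-modeling step can be sketched with scikit-learn's LDA on segment-level word counts; the review segments and the two-topic setting below are hypothetical, and the paper's actual pipeline (segment-based fine-tuning of an LDA model) is more involved:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical review segments produced by a segmentation step.
segments = [
    "great staff friendly service at the front desk",
    "room was clean and the bed comfortable",
    "breakfast buffet had a wide selection",
    "service was slow but staff apologized",
    "bed linen fresh room spacious and quiet",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(segments)

# Fit LDA on segment-level counts; each segment then gets a topic
# distribution whose argmax can serve as a "bronze-standard" label.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)
labels = doc_topics.argmax(axis=1)  # one topic label per segment
```

The resulting pseudo-labels can then be used to train a fast supervised classifier without any human annotation.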
Semantic Interleaving Global Channel Attention for Multilabel Remote Sensing Image Classification
Multi-Label Remote Sensing Image Classification (MLRSIC) has received increasing research interest. Taking the co-occurrence relationship of multiple labels as additional information helps to improve the performance of this task. Current methods focus on using it to constrain the final feature output of a Convolutional Neural Network (CNN). On the one hand, these methods do not make full use of label correlation to form feature representations. On the other hand, they increase the label-noise sensitivity of the system, resulting in poor robustness. In this paper, a novel method called Semantic Interleaving Global Channel Attention (SIGNA) is proposed for MLRSIC. First, a label co-occurrence graph is obtained from the statistical information of the dataset. The label co-occurrence graph is used as the input of a Graph Neural Network (GNN) to generate optimal feature representations. Then, the semantic features and visual features are interleaved to guide the feature expression of the image from the original feature space to a semantic feature space with embedded label relations. SIGNA triggers global attention over feature map channels in the new semantic feature space to extract more important visual features. Multi-head SIGNA-based feature-adaptive weighting networks are proposed to act on any layer of a CNN in a plug-and-play manner. For remote sensing images, better classification performance can be achieved by inserting SIGNA into the shallow layers of the CNN. We conduct extensive experimental comparisons on three datasets: the UCM dataset, the AID dataset, and the DFC15 dataset. Experimental results demonstrate that the proposed SIGNA achieves superior classification performance compared to state-of-the-art (SOTA) methods. The code of this paper will be open to the community for reproducibility research; it is available at https://github.com/kyle-one/SIGNA. Comment: 14 pages, 13 figures
Doctor of Philosophy dissertation
Machine learning is the science of building predictive models from data that automatically improve based on past experience. To learn these models, traditional learning algorithms require labeled data. They also require that the entire dataset fit in the memory of a single machine. Labeled data are available or can be acquired for small and moderately sized datasets, but curating large datasets can be prohibitively expensive. Similarly, massive datasets are usually too large to fit into the memory of a single machine. An alternative is to distribute the dataset over multiple machines. Distributed learning, however, poses new challenges, as most existing machine learning techniques are inherently sequential. Additionally, these distributed approaches have to be designed keeping in mind various resource limitations of real-world settings, prime among them being inter-machine communication. With the advent of big datasets, machine learning algorithms are facing new challenges. Their design is no longer limited to minimizing some loss function but additionally needs to consider other resources that are critical when learning at scale. In this thesis, we explore different models and measures for learning with limited resources that have a budget. What budgetary constraints are posed by modern datasets? Can we reuse or combine existing machine learning paradigms to address these challenges at scale? How do the cost metrics change when we shift to distributed models for learning? These are some of the questions investigated in this thesis. The answers to these questions hold the key to addressing some of the challenges faced when learning on massive datasets. In the first part of this thesis, we present three different budgeted scenarios that deal with scarcity of labeled data and limited computational resources. The goal is to leverage transfer of information from related domains to learn under budgetary constraints. Our proposed techniques comprise semi-supervised transfer, online transfer, and active transfer. In the second part of this thesis, we study distributed learning with limited communication. We present initial sampling-based results and propose communication protocols for learning distributed linear classifiers.
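One-shot parameter averaging is a standard baseline for communication-limited distributed learning of linear classifiers: each machine trains locally and communicates only its parameters once. A sketch of the idea (illustrative, not the thesis's actual protocols):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Simulate a dataset sharded across 4 machines.
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
shards = np.array_split(np.arange(len(y)), 4)

# Each machine trains locally; only (coef, intercept) is communicated,
# a fixed cost independent of the shard size.
coefs, intercepts = [], []
for idx in shards:
    clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    coefs.append(clf.coef_)
    intercepts.append(clf.intercept_)

# Coordinator: fit once on one shard just to initialize the estimator
# (classes_, parameter shapes), then overwrite with the averages.
global_clf = LogisticRegression(max_iter=1000).fit(X[shards[0]], y[shards[0]])
global_clf.coef_ = np.mean(coefs, axis=0)
global_clf.intercept_ = np.mean(intercepts, axis=0)
acc = global_clf.score(X, y)
```

A single round of communication suffices here; the protocols studied in such theses typically refine this trade-off between communication rounds and accuracy.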