Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential to support a broad range of
complex and compelling applications in both military and civilian fields, where
users can enjoy high-rate, low-latency, low-cost, and reliable information
services. Achieving this ambitious goal requires new radio techniques for
adaptive learning and intelligent decision making, owing to the complex,
heterogeneous nature of network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big
data analytics, efficient parameter estimation, and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so that they can be
applied to hitherto unexplored services and scenarios of future wireless
networks.
Comment: 46 pages, 22 figures
Unsupervised User Stance Detection on Twitter
We present a highly effective unsupervised framework for detecting the stance
of prolific Twitter users with respect to controversial topics. In particular,
we use dimensionality reduction to project users onto a low-dimensional space,
followed by clustering, which allows us to find core users that are
representative of the different stances. Our framework has three major
advantages over pre-existing methods, which are based on supervised or
semi-supervised classification. First, we do not require any prior labeling of
users: instead, we create clusters, which are much easier to label manually
afterwards, e.g., in a matter of seconds or minutes instead of hours. Second,
there is no need for domain- or topic-level knowledge either to specify the
relevant stances (labels) or to conduct the actual labeling. Third, our
framework is robust in the face of data skewness, e.g., when some users or some
stances have greater representation in the data. We experiment with different
combinations of user similarity features, dataset sizes, dimensionality
reduction methods, and clustering algorithms to ascertain the most effective
and most computationally efficient combinations across three different datasets
(in English and Turkish). We further verify our results on additional tweet
sets covering six different controversial topics. Our best combination in terms
of effectiveness and efficiency uses retweeted accounts as features, UMAP for
dimensionality reduction, and Mean Shift for clustering; it yields a small
number of high-quality user clusters, typically just 2-3, with more than 98%
purity. The resulting user clusters can be used to train downstream
classifiers. Moreover, our framework is robust to variations in the
hyper-parameter values and to random initialization.
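The pipeline the abstract describes (user features, then dimensionality reduction, then clustering) can be sketched as follows. This is a toy illustration, not the authors' code: the user-by-retweeted-account matrix is synthetic, and scikit-learn's TruncatedSVD stands in for UMAP (which would need the separate umap-learn package), while Mean Shift matches the clustering step named in the abstract.

```python
# Toy sketch of an unsupervised stance-detection pipeline:
# user features -> low-dimensional embedding -> clustering.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import MeanShift

rng = np.random.default_rng(0)

# Synthetic binary user x retweeted-account matrix: two stance groups,
# each retweeting a mostly disjoint set of accounts.
n_users, n_accounts = 200, 50
stance = rng.integers(0, 2, n_users)
X = np.zeros((n_users, n_accounts))
for i, g in enumerate(stance):
    # Users retweet "their" half of the accounts far more often.
    own = rng.random(n_accounts // 2) < 0.4
    other = rng.random(n_accounts - n_accounts // 2) < 0.05
    X[i] = np.concatenate([own, other] if g == 0 else [other, own])

# Project users onto a low-dimensional space, then cluster them;
# the resulting clusters would be labeled manually afterwards.
emb = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
clusters = MeanShift().fit_predict(emb)
print("clusters found:", len(set(clusters)))
```

With real data, the features would be the accounts each user retweets, and the few resulting clusters would each be inspected and assigned a stance label by hand.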
A Very Brief Introduction to Machine Learning With Applications to Communication Systems
Given the unprecedented availability of data and computing resources, there
is widespread renewed interest in applying data-driven machine learning methods
to problems for which the development of conventional engineering solutions is
challenged by modelling or algorithmic deficiencies. This tutorial-style paper
starts by addressing the questions of why and when such techniques can be
useful. It then provides a high-level introduction to the basics of supervised
and unsupervised learning. For both supervised and unsupervised learning,
illustrative applications to communication networks are discussed,
distinguishing tasks carried out at the edge and at the cloud segments of the
network, at different layers of the protocol stack.
A Clustering Algorithm Based on an Ensemble of Dissimilarities: An Application in the Bioinformatics Domain
Clustering algorithms such as k-means depend heavily on choosing an appropriate distance metric that accurately reflects the object proximities. A wide range of dissimilarities may be defined that often lead to different clustering results. Choosing the best dissimilarity is an ill-posed problem, and learning a general distance from the data is a complex task, particularly for high-dimensional problems. Therefore, an appealing approach is to learn an ensemble of dissimilarities. In this paper, we develop a semi-supervised clustering algorithm that learns a linear combination of dissimilarities, considering incomplete knowledge in the form of pairwise constraints. The minimization of the loss function is based on a robust and efficient quadratic optimization algorithm. In addition, a regularization term controls the complexity of the learned distance metric, avoiding overfitting. The algorithm has been applied to the identification of tumor samples using gene expression profiles, where domain experts often provide incomplete knowledge in the form of pairwise constraints. We report that the proposed algorithm outperforms a standard semi-supervised clustering technique from the literature, as well as clustering results based on a single dissimilarity. The improvement is particularly relevant for applications with a high level of noise.
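As a rough illustration of the central idea (learning a convex combination of dissimilarities from pairwise constraints), the toy sketch below minimizes the combined dissimilarity over must-link pairs while maximizing it over cannot-link pairs, with a ridge-style regularizer. It uses a simple projected-gradient loop rather than the paper's quadratic optimization algorithm, and all data, constraint sets, and the clip-and-renormalize projection are illustrative assumptions.

```python
# Toy sketch: learn weights for a linear combination of dissimilarities
# from must-link / cannot-link pairwise constraints.
import numpy as np

rng = np.random.default_rng(1)

# Two dissimilarity matrices over 6 points: D1 matches the true grouping
# ({0,1,2} vs {3,4,5}); D2 is symmetric random noise.
n = 6
group = np.array([0, 0, 0, 1, 1, 1])
D1 = (group[:, None] != group[None, :]).astype(float)
D2 = rng.random((n, n))
D2 = (D2 + D2.T) / 2
np.fill_diagonal(D2, 0)
Ds = np.stack([D1, D2])                       # shape (K, n, n)

must = [(0, 1), (1, 2), (3, 4)]               # same-cluster pairs
cannot = [(0, 3), (1, 4), (2, 5)]             # different-cluster pairs
lam = 0.1                                     # regularizer against overfitting

# Linear part of the loss: sum of each D_k over must-link pairs minus
# cannot-link pairs, so loss(w) = c @ w + lam * ||w||^2 on the simplex.
c = np.array([sum(D[i, j] for i, j in must) - sum(D[i, j] for i, j in cannot)
              for D in Ds])

w = np.full(len(Ds), 1 / len(Ds))
for _ in range(200):                          # projected gradient descent
    w -= 0.05 * (c + 2 * lam * w)
    w = np.clip(w, 0, None)                   # clip + renormalize is a simple
    w /= w.sum()                              # stand-in for exact projection

print("learned weights:", w)                  # D1 should dominate
```

The learned weights concentrate on the dissimilarity that agrees with the constraints, which is the behavior the ensemble approach relies on.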
Kernel discriminant analysis and clustering with parsimonious Gaussian process models
This work presents a family of parsimonious Gaussian process models which
allow one to build, from a finite sample, a model-based classifier in an
infinite-dimensional space. The proposed parsimonious models are obtained by
constraining the eigen-decomposition of the Gaussian processes modeling each
class. In particular, this allows the use of non-linear mapping functions which
project the observations into infinite-dimensional spaces. It is also
demonstrated that the classifier can be built directly from the observation
space through a kernel function. The proposed classification method is thus
able to classify data of various types, such as categorical data, functional
data, or networks. Furthermore, it is possible to classify mixed data by
combining different kernels. The methodology is also extended to the
unsupervised classification case. Experimental results on various data sets
demonstrate the effectiveness of the proposed method.
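To illustrate the general flavor of kernel-based Gaussian process classification (not the paper's parsimonious models, which additionally constrain the eigen-decomposition of each class's process), one can fit a standard GP classifier with an RBF kernel on a non-linearly separable toy set; swapping the kernel changes the implicit feature space, which is what lets kernel methods handle categorical, functional, or mixed data.

```python
# Hedged illustration: standard kernel-based GP classification with
# scikit-learn, standing in for the paper's parsimonious GP models.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# The RBF kernel maps observations implicitly into an infinite-dimensional
# space, where the two interleaved half-moons become separable.
clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0),
                                random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```

For non-vector data (strings, graphs, functions), only the kernel function would change; the classifier itself never needs the explicit feature space.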