1,875 research outputs found
Extending local features with contextual information in graph kernels
Graph kernels are usually defined in terms of simpler kernels over local
substructures of the original graphs. Different kernels consider different
types of substructures. However, in some cases they have similar predictive
performances, probably because the substructures can be interpreted as
approximations of the subgraphs they induce. In this paper, we propose to
associate with each feature a piece of information about the context in which
the feature appears in the graph. A substructure appearing in two different
graphs will match only if it appears with the same context in both graphs. We
propose a kernel based on this idea that considers trees as substructures, and
where the contexts are features too. The kernel is inspired by the framework in
[6], although it is not part of it. We give an efficient algorithm for
computing the kernel and show promising results on real-world graph
classification datasets.
Comment: To appear in ICONIP 201
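The matching rule described above (a substructure counts only when it occurs with the same context in both graphs) can be sketched as a simple count-based kernel. This is an illustrative toy, not the paper's tree kernel: features and contexts are assumed to be precomputed strings.

```python
from collections import Counter

def contextual_kernel(pairs_g1, pairs_g2):
    """Kernel value between two graphs, each given as a list of
    (substructure, context) pairs.  A pair contributes only when
    the substructure occurs with the same context in both graphs."""
    c1, c2 = Counter(pairs_g1), Counter(pairs_g2)
    return sum(c1[p] * c2[p] for p in c1 if p in c2)

# Toy example: features are edge patterns, contexts are parent labels.
g1 = [("A-B", "root"), ("B-C", "A"), ("A-B", "C")]
g2 = [("A-B", "root"), ("B-C", "D")]
print(contextual_kernel(g1, g2))  # -> 1: only ("A-B", "root") matches
```

Note that "B-C" appears in both graphs but with different contexts, so it does not match, which is exactly the extra discriminative power the contextual extension buys.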
Approximate Minimum Diameter
We study the minimum diameter problem for a set of inexact points. By
inexact, we mean that the precise location of the points is not known. Instead,
the location of each point is restricted to a continuous region (the \impre
model) or to a finite set of candidate points (the \indec model). Given a set
of inexact points in one of the \impre or \indec models, we wish to provide a
lower bound on the diameter of the real points.
In the first part of the paper, we focus on the \indec model. We present an
approximation algorithm for finding the minimum diameter of a set of points in
fixed dimension. This substantially improves on the previously proposed
algorithms for this problem.
Next, we consider the problem in the \impre model, for which we propose a
polynomial-time approximation algorithm. In addition, we define a notion of
separability and use our algorithm for the \indec model to obtain an
approximation algorithm for a set of separable regions.
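To make the \indec objective concrete: the minimum diameter is taken over all "possible worlds," i.e., all ways of choosing one candidate point per set. A brute-force reference implementation (exponential, so only usable on tiny instances, unlike the paper's approximation algorithm) might look like this:

```python
from itertools import product
from math import dist

def min_diameter_indec(point_sets):
    """Exact minimum diameter over all possible worlds: one point is
    chosen from each candidate set, and we minimize the diameter of
    the chosen points.  Exponential in the number of sets -- a sanity
    check for approximation algorithms, not a practical method."""
    best = float("inf")
    for world in product(*point_sets):
        diam = max(dist(p, q) for p in world for q in world)
        best = min(best, diam)
    return best

# Three indecisive points on a line; the best world is (0,0),(1,0),(2,0).
sets = [[(0, 0), (10, 0)], [(1, 0), (9, 0)], [(2, 0)]]
print(min_diameter_indec(sets))  # -> 2.0
```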
The Early Bird Catches The Term: Combining Twitter and News Data For Event Detection and Situational Awareness
Twitter updates now represent an enormous stream of information originating
from a wide variety of formal and informal sources, much of which is relevant
to real-world events. In this paper we adapt existing bio-surveillance
algorithms to detect localised spikes in Twitter activity corresponding to real
events with a high level of confidence. We then develop a methodology to
automatically summarise these events, both by providing the tweets which fully
describe the event and by linking to highly relevant news articles. We apply
our methods to outbreaks of illness and events strongly affecting sentiment. In
both case studies we are able to detect events verifiable by third party
sources and produce high-quality summaries.
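The core of such bio-surveillance-style detectors is flagging counts that exceed a baseline by a statistically unlikely margin. A minimal sketch, assuming hourly term counts and a trailing-window z-score rule (the paper's actual detectors are more sophisticated):

```python
from statistics import mean, stdev

def detect_spikes(counts, window=7, z=3.0):
    """Flag time steps whose count exceeds the trailing-window mean by
    z standard deviations -- a toy stand-in for adapted
    bio-surveillance spike detection on Twitter term counts."""
    spikes = []
    for t in range(window, len(counts)):
        base = counts[t - window:t]
        mu, sigma = mean(base), stdev(base)
        # Floor sigma at 1.0 so a flat baseline doesn't fire on noise.
        if counts[t] > mu + z * max(sigma, 1.0):
            spikes.append(t)
    return spikes

hourly = [5, 6, 4, 5, 7, 5, 6, 40, 5, 6]
print(detect_spikes(hourly))  # -> [7]
```

In a real pipeline the flagged time steps would then be summarized by selecting representative tweets and linking matching news articles, as the abstract describes.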
Towards Efficient Sequential Pattern Mining in Temporal Uncertain Databases
Uncertain sequence databases are widely used to model data with inaccurate or imprecise timestamps in many real-world applications. In this paper, we use uniform distributions to model uncertain timestamps and adopt possible-world semantics to interpret temporal uncertain databases. We design an incremental approach to manage temporal uncertainty efficiently, which is integrated into the classic pattern-growth SPM algorithm to mine uncertain sequential patterns. Extensive experiments show that our algorithm performs well in both efficiency and scalability.
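Under the uniform-timestamp model, the basic quantity needed for sequential patterns is the probability that one event precedes another. A sketch of that computation, integrating the CDF of the first timestamp against the density of the second (the paper's incremental approach manages these probabilities more efficiently):

```python
def prob_before(a, b, n=10_000):
    """P(X < Y) for X ~ U(a[0], a[1]) and Y ~ U(b[0], b[1]),
    computed as the integral of F_X(y) * f_Y(y) dy by midpoint rule."""
    (a1, a2), (b1, b2) = a, b
    step = (b2 - b1) / n
    total = 0.0
    for i in range(n):
        y = b1 + (i + 0.5) * step                      # midpoint of slice i
        fx = min(max((y - a1) / (a2 - a1), 0.0), 1.0)  # CDF of X at y
        total += fx * step / (b2 - b1)                 # times density of Y
    return total

# Two events whose uniform timestamp intervals overlap on [1, 2]:
print(round(prob_before((0, 2), (1, 3)), 3))  # -> 0.875
```

An SPM algorithm under possible-world semantics would aggregate such pairwise order probabilities into an expected support per candidate pattern.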
Conformative Filtering for Implicit Feedback Data
Implicit feedback is the simplest form of user feedback that can be used for
item recommendation. It is easy to collect and is domain independent. However,
there is a lack of negative examples. Previous work tackles this problem by
assuming that users are not interested or not as much interested in the
unconsumed items. Those assumptions are often severely violated since
non-consumption can be due to factors like unawareness or lack of resources.
Therefore, non-consumption by a user does not always mean disinterest or
irrelevance. In this paper, we propose a novel method called Conformative
Filtering (CoF) to address the issue. The motivating observation is that if
there is a large group of users who share the same taste and none of them have
consumed an item before, then it is likely that the item is not of interest to
the group. We perform multidimensional clustering on implicit feedback data
using hierarchical latent tree analysis (HLTA) to identify user 'taste' groups
and make recommendations for a user based on her memberships in the groups and
on the past behavior of the groups. Experiments on two real-world datasets from
different domains show that CoF has superior performance compared to several
common baselines.
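The group-based recommendation step can be sketched independently of HLTA: assume the soft group memberships have already been inferred, and score each item by how heavily the user's groups consumed it. This is an illustrative simplification, not the paper's exact scoring rule:

```python
def score_items(user_memberships, group_item_counts, group_sizes):
    """Score items for one user from her soft group memberships and
    each group's past consumption.  In the spirit of CoF, an item
    that a large taste group never consumed scores near zero."""
    scores = {}
    for g, weight in user_memberships.items():
        for item, count in group_item_counts[g].items():
            scores[item] = scores.get(item, 0.0) + weight * count / group_sizes[g]
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical memberships (as HLTA might produce) and group histories.
user = {"g1": 0.8, "g2": 0.2}
counts = {"g1": {"i1": 40, "i2": 5}, "g2": {"i2": 30}}
sizes = {"g1": 50, "g2": 40}
print(score_items(user, counts, sizes)[0][0])  # -> "i1"
```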
Mining Uncertain Sequential Patterns in Iterative MapReduce
This paper proposes a sequential pattern mining (SPM) algorithm for large-scale uncertain databases. Uncertain sequence databases are widely used to model inaccurate or imprecise timestamped data in many real applications, where traditional SPM algorithms are inapplicable because of data uncertainty and scalability. In this paper, we develop an efficient approach to manage data uncertainty in SPM and design an iterative MapReduce framework to execute the uncertain SPM algorithm in parallel. We conduct extensive experiments on both synthetic and real uncertain datasets. The experimental results show that our algorithm is efficient and scalable.
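The MapReduce decomposition of support counting is the part that parallelizes naturally: each mapper emits candidate patterns from its partition of sequences, and reducers sum per-pattern counts. A toy local sketch (length-2 subsequences only, no uncertainty handling, which the paper layers on top):

```python
from collections import Counter
from itertools import combinations

def mapper(sequence):
    """Emit (pattern, 1) for every distinct length-2 subsequence of
    one input sequence -- a toy stand-in for per-partition
    pattern-growth candidate generation."""
    return [(pair, 1) for pair in {c for c in combinations(sequence, 2)}]

def reducer(emitted):
    """Sum the counts emitted by all mappers, giving each pattern's
    support across the whole database."""
    support = Counter()
    for pattern, one in emitted:
        support[pattern] += one
    return support

db = [["a", "b", "c"], ["a", "c"], ["b", "c"]]
emitted = [kv for seq in db for kv in mapper(seq)]
print(reducer(emitted)[("a", "c")])  # -> 2
```

In an actual iterative MapReduce job, each round would extend the frequent patterns of the previous round rather than enumerate all pairs up front.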
Measuring Relations Between Concepts In Conceptual Spaces
The highly influential framework of conceptual spaces provides a geometric
way of representing knowledge. Instances are represented by points in a
high-dimensional space and concepts are represented by regions in this space.
Our recent mathematical formalization of this framework is capable of
representing correlations between different domains in a geometric way. In this
paper, we extend our formalization by providing quantitative mathematical
definitions for the notions of concept size, subsethood, implication,
similarity, and betweenness. This considerably increases the representational
power of our formalization by introducing measurable ways of describing
relations between concepts.
Comment: Accepted at SGAI 2017 (http://www.bcs-sgai.org/ai2017/). The final
publication is available at Springer via
https://doi.org/10.1007/978-3-319-71078-5_7. arXiv admin note: substantial
text overlap with arXiv:1707.05165, arXiv:1706.0636
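The flavor of such quantitative definitions can be illustrated with axis-aligned boxes as concept regions: concept size becomes volume, and subsethood becomes the fraction of one concept's volume covered by another. This is a crude stand-in for the paper's formalization, whose regions and measures are more general:

```python
from math import prod

def size(box):
    """Volume of an axis-aligned box [(lo, hi), ...] -- a toy stand-in
    for concept size in a conceptual space."""
    return prod(hi - lo for lo, hi in box)

def subsethood(a, b):
    """Degree to which concept a is contained in concept b:
    |a intersect b| / |a|, a ratio-style membership measure."""
    inter = [(max(al, bl), min(ah, bh)) for (al, ah), (bl, bh) in zip(a, b)]
    if any(hi <= lo for lo, hi in inter):
        return 0.0
    return size(inter) / size(a)

apple = [(0.0, 2.0), (0.0, 2.0)]   # hypothetical 2-D concept regions
fruit = [(0.0, 4.0), (1.0, 4.0)]
print(subsethood(apple, fruit))  # -> 0.5
```

Graded subsethood of this kind also yields an implication measure for free: "apple implies fruit" holds to degree subsethood(apple, fruit).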
A review on corpus annotation for Arabic sentiment analysis
Mining publicly available data for meaning and value is an important
research direction within social media analysis. To automatically analyze
collected textual data, manual effort is needed before a machine learning algorithm can effectively classify text. This pertains to annotating the text, i.e., adding labels to each data entry. Arabic is one of the languages growing rapidly in sentiment analysis research, despite limited resources and scarce annotated corpora. In this paper, we review the annotation processes used in Arabic sentiment analysis papers. A total of 27 papers, published between
2010 and 2016, were reviewed.
How to combine visual features with tags to improve movie recommendation accuracy?
Previous works have shown the effectiveness of using stylistic visual features, indicative of the movie style, in content-based movie recommendation. However, they have mainly focused on a particular recommendation scenario, i.e., when a new movie is added to the catalogue and no information is available for that movie (New Item scenario). Yet the stylistic visual features can also be used when other sources of information are available (Existing Item scenario). In this work, we address the second scenario and propose a hybrid technique that exploits not only the typical content available for the movies (e.g., tags), but also the stylistic visual content extracted from the movie files, and fuses them by applying a fusion method called Canonical Correlation Analysis (CCA). Our experiments on a large catalogue of 13K movies have shown very promising results, which indicate a considerable improvement of the recommendation quality obtained by properly fusing the stylistic visual features with other types of features.
Sparsest factor analysis for clustering variables: a matrix decomposition approach
We propose a new procedure for sparse factor analysis (FA) such that each variable loads on only one common factor. Thus, the loading matrix has a single nonzero element in each row and zeros elsewhere. Such a loading matrix is the sparsest possible for a given number of variables and common factors. For this reason, the proposed method is named sparsest FA (SSFA). It may also be called FA-based variable clustering, since the variables loading the same common factor can be classified into a cluster. In SSFA, all model parts of FA (common factors, their correlations, loadings, unique factors, and unique variances) are treated as fixed unknown parameter matrices, and their least squares function is minimized through a specific data matrix decomposition. A useful feature of the algorithm is that the matrix of common factor scores is re-parameterized using QR decomposition in order to efficiently estimate factor correlations. A simulation study shows that the proposed procedure can exactly identify the true sparsest models. Real data examples demonstrate the usefulness of the variable clustering performed by SSFA.
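The sparsest-loading constraint itself is easy to visualize: keep only the largest-magnitude loading in each row, and the surviving column index per row is the variable's cluster. This projection step is only one ingredient of SSFA, which alternates it with least-squares updates of the other parameter matrices:

```python
import numpy as np

def sparsest_loadings(loadings):
    """Project a dense loading matrix onto the sparsest structure:
    one nonzero (the largest in magnitude) per row.  The kept column
    index per row defines a clustering of the variables."""
    L = np.asarray(loadings, dtype=float)
    keep = np.abs(L).argmax(axis=1)          # factor kept for each variable
    S = np.zeros_like(L)
    rows = np.arange(L.shape[0])
    S[rows, keep] = L[rows, keep]
    return S, keep

# Three variables, two factors: the first two variables load factor 0.
L = [[0.9, 0.1], [0.8, -0.2], [0.05, 0.7]]
S, clusters = sparsest_loadings(L)
print(clusters.tolist())  # -> [0, 0, 1]
```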
