Efficient learning of large sets of locally optimal classification rules
Conventional rule learning algorithms aim at finding a set of simple rules,
where each rule covers as many examples as possible. In this paper, we argue
that the rules found in this way may not be the optimal explanations for each
of the examples they cover. Instead, we propose an efficient algorithm that
aims at finding the best rule covering each training example in a greedy
optimization consisting of one specialization and one generalization loop.
These locally optimal rules are collected and then filtered for a final rule
set, which is much larger than the sets learned by conventional rule learning
algorithms. A new example is classified by selecting the best among the rules
that cover this example. In our experiments on small to very large datasets,
the approach's average classification accuracy is higher than that of
state-of-the-art rule learning algorithms. Moreover, the algorithm is highly
efficient and can inherently be processed in parallel without affecting the
learned rule set and hence the classification accuracy. We thus believe that it
closes an important gap for large-scale classification rule induction.
Comment: article, 40 pages, Machine Learning journal (2023)
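The prediction step described above (select the best among the rules that cover a new example) can be sketched as follows. The rule representation, the quality field, and the toy weather rules are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch: classify a new example by the best-quality rule
# among those that cover it (rule format and rules are assumed, not the paper's).

def covers(conditions, example):
    """A rule covers an example if every (attribute, value) test matches."""
    return all(example.get(attr) == val for attr, val in conditions)

def classify(rules, example, default_class):
    """Pick the highest-quality covering rule; fall back to a default class."""
    covering = [r for r in rules if covers(r["conditions"], example)]
    if not covering:
        return default_class
    return max(covering, key=lambda r: r["quality"])["class"]

rules = [
    {"conditions": [("outlook", "sunny")], "class": "no", "quality": 0.6},
    {"conditions": [("outlook", "sunny"), ("humidity", "low")],
     "class": "yes", "quality": 0.9},
]
print(classify(rules, {"outlook": "sunny", "humidity": "low"}, "no"))  # yes
```

Note how the more specific, higher-quality rule wins even though a more general rule also covers the example.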
Fast Shortest Path Distance Estimation in Large Networks
We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well studied problem, but exact algorithms do not scale to huge graphs encountered on the web, social networks, and other applications.
In this paper we focus on approximate methods for distance estimation, in particular using landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks.
We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally using five different real world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach in the literature which considers selecting landmarks at random.
Finally, we study applications of our method in two problems arising naturally in large-scale networks, namely, social search and community detection.
Yahoo! Research (internship)
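A minimal sketch of the landmark approach on an unweighted graph; the graph, the landmark choice, and the function names are illustrative. Offline, distances from every node to each landmark are computed by BFS; online, the triangle inequality combines them into an upper-bound estimate:

```python
from collections import deque

def bfs_distances(adj, source):
    """Offline phase: hop distances from one landmark to every node."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def build_index(adj, landmarks):
    return {l: bfs_distances(adj, l) for l in landmarks}

def estimate(index, u, v):
    # d(u, v) <= d(u, l) + d(l, v); take the tightest bound over all landmarks.
    return min(d[u] + d[v] for d in index.values())

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
index = build_index(adj, landmarks=[2])
print(estimate(index, 0, 4))  # 4 (exact here, since landmark 2 lies on the path)
```

The estimate is exact whenever some landmark lies on a shortest path between the query pair, which is why central, well-spread landmarks work better than random ones.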
An efficient randomised sphere cover classifier
This paper describes an efficient randomised sphere cover classifier (aRSC) that reduces the training data set size without loss of accuracy when compared to nearest neighbour classifiers. The motivation for developing this algorithm is the desire to have a non-deterministic, fast, instance-based classifier that performs well in isolation but is also ideal for use with ensembles. We use 24 benchmark datasets from the UCI repository and six gene expression datasets for evaluation. The first set of experiments demonstrates the basic benefits of sphere covering. The second set of experiments demonstrates that when we set the a parameter through cross validation, the resulting aRSC algorithm outperforms several well-known classifiers when compared using the Friedman rank sum test. Thirdly, we test the usefulness of aRSC when used with three feature filtering methods on six gene expression datasets. Finally, we highlight the benefits of pruning with a bias/variance decomposition.
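The sphere-covering idea can be sketched roughly as follows. This is a generic greedy variant under assumed details (random centres, radius bounded by the nearest opposite-class instance), not the authors' aRSC code:

```python
import math
import random

# Generic sphere-cover sketch (details assumed, not the authors' aRSC code):
# repeatedly pick a random uncovered instance as a sphere centre, with radius
# just inside the nearest opposite-class instance, and cover same-class points.

def build_spheres(X, y, seed=0):
    rng = random.Random(seed)
    uncovered = set(range(len(X)))
    spheres = []
    while uncovered:
        i = rng.choice(sorted(uncovered))  # random centre among uncovered points
        enemy = [math.dist(X[i], X[j]) for j in range(len(X)) if y[j] != y[i]]
        radius = min(enemy) if enemy else float("inf")
        covered = {j for j in uncovered
                   if y[j] == y[i] and math.dist(X[i], X[j]) < radius}
        covered.add(i)  # the centre is always covered by its own sphere
        spheres.append((X[i], radius, y[i]))
        uncovered -= covered
    return spheres

def classify(spheres, x):
    """Predict the class of the sphere whose centre is nearest."""
    _, _, label = min(spheres, key=lambda s: math.dist(s[0], x))
    return label

X = [(0.0,), (1.0,), (5.0,), (6.0,)]
y = [0, 0, 1, 1]
spheres = build_spheres(X, y)
print(classify(spheres, (0.5,)), classify(spheres, (5.5,)))  # 0 1
```

Only the sphere centres and radii need to be kept, which is how the cover reduces the stored training set, and the random centre choice makes the classifier non-deterministic and ensemble-friendly.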
High-Performance Reachability Query Processing under Index Size Restrictions
In this paper, we propose a scalable and highly efficient index structure for
the reachability problem over graphs. We build on the well-known node interval
labeling scheme where the set of vertices reachable from a particular node is
compactly encoded as a collection of node identifier ranges. We impose an
explicit bound on the size of the index and flexibly assign approximate
reachability ranges to nodes of the graph such that the number of index probes
to answer a query is minimized. The resulting tunable index structure generates
a better range labeling if the space budget is increased, thus providing
direct control over the trade-off between index size and query processing
performance. By using a fast recursive querying method in conjunction with our
index structure, we show that in practice, reachability queries can be answered
in the order of microseconds on an off-the-shelf computer - even for the case
of massive-scale real world graphs. Our claims are supported by an extensive
set of experimental results using a multitude of benchmark and real-world
web-scale graph datasets.
Comment: 30 pages
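The node interval labeling scheme the index builds on can be illustrated on a tree, where it is exact: each node receives a postorder-based range that covers precisely its descendants. The paper's bounded-size, approximate ranges for general graphs are omitted here, and the names are illustrative:

```python
# Illustrative sketch of node interval labeling on a tree (exact in this case;
# the paper's approximate, size-bounded ranges for general graphs are omitted).

def interval_labels(tree, root):
    """Assign each node (low, post): post is its postorder number, low is the
    smallest postorder number in its subtree."""
    labels = {}
    counter = 0

    def dfs(u):
        nonlocal counter
        low = counter
        for child in tree.get(u, []):
            dfs(child)
        labels[u] = (low, counter)
        counter += 1

    dfs(root)
    return labels

def reachable(labels, u, v):
    """u reaches v iff v's postorder number falls inside u's range."""
    low, post = labels[u]
    return low <= labels[v][1] <= post

tree = {0: [1, 2], 1: [3]}
labels = interval_labels(tree, 0)
print(reachable(labels, 0, 3), reachable(labels, 2, 3))  # True False
```

Each query is a pair of integer comparisons, which is what makes range-based reachability indexes answerable in microseconds once the labeling is built.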
Principles of Dataset Versioning: Exploring the Recreation/Storage Tradeoff
The relative ease of collaborative data science and analysis has led to a
proliferation of many thousands or millions of versions of the same datasets
in many scientific and commercial domains, acquired or constructed at various
stages of data analysis across many users, and often over long periods of time.
Managing, storing, and recreating these dataset versions is a non-trivial task.
The fundamental challenge here is the recreation/storage trade-off: the more
storage we use, the faster it is to recreate or retrieve versions, while the
less storage we use, the slower it is to recreate or retrieve versions. Despite
the fundamental nature of this problem, there has been surprisingly little
work on it. In this paper, we study this trade-off in a principled
manner: we formulate six problems under various settings, trading off these
quantities in various ways, demonstrate that most of the problems are
intractable, and propose a suite of inexpensive heuristics drawing on
techniques from the delay-constrained scheduling and spanning tree literature to
solve these problems. We have built a prototype version management system that
aims to serve as a foundation to our DATAHUB system for facilitating
collaborative data science. We demonstrate, via extensive experiments, that our
proposed heuristics provide efficient solutions in practical dataset versioning
scenarios.
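The recreation/storage trade-off can be made concrete with a toy cost model: a storage plan picks, for each version, either full materialisation or a delta from another version, and recreating a version replays every stored object on its chain back to a fully stored one. The names and the cost model here are hypothetical, not the prototype's design:

```python
# Hypothetical cost model (illustrative, not the prototype's design). A storage
# plan is a tree: each version is stored either fully (parent "root", weight =
# its full size) or as a delta from its parent (weight = delta size).

def plan_costs(parent, weight):
    """Return (total storage, per-version recreation cost) for a storage plan."""
    storage = sum(weight[(parent[v], v)] for v in parent)

    def recreate(v):
        own = weight[(parent[v], v)]
        return own if parent[v] == "root" else own + recreate(parent[v])

    return storage, {v: recreate(v) for v in parent}

# Version B stored as a small delta from A: cheap storage, costlier recreation.
# Materialising B fully instead would cost 200 in storage but only 100 to
# recreate, which is exactly the trade-off the six problems formalise.
parent = {"A": "root", "B": "A"}
weight = {("root", "A"): 100, ("A", "B"): 10}
storage, recreation = plan_costs(parent, weight)
print(storage, recreation["A"], recreation["B"])  # 110 100 110
```

Minimising total storage alone reduces to a minimum spanning tree over the delta graph, while bounding recreation cost constrains path lengths, which is why both spanning tree and delay-constrained scheduling techniques apply.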
Fast Detection of Community Structures using Graph Traversal in Social Networks
Finding community structures in social networks is considered to be a
challenging task as many of the proposed algorithms are computationally
expensive and do not scale well to large graphs. Most of the community
detection algorithms proposed till date are unsuitable for applications that
would require detection of communities in real-time, especially for massive
networks. The Louvain method, which uses modularity maximization to detect
clusters, is usually considered one of the fastest community detection
algorithms, even though it has no provable bound on its running time. We propose a
novel graph traversal-based community detection framework, which not only runs
faster than the Louvain method but also generates clusters of better quality
for most of the benchmark datasets. We show that our algorithms run in O(|V| +
|E|) time to create an initial cover before using modularity maximization to
get the final cover.
Keywords - community detection; Influenced Neighbor Score; brokers; community
nodes; communities
Comment: 29 pages, 9 tables, and 13 figures. Accepted in "Knowledge and
Information Systems", 201
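A generic traversal-based initial cover (not the paper's Influenced Neighbor Score) can be sketched as follows: grow a community by BFS from each unvisited node, absorbing a neighbour when at least half of its edges already point into the growing community. The absorption rule is an illustrative assumption:

```python
from collections import deque

# Generic BFS-based initial cover sketch (the absorption rule is an assumption,
# not the paper's Influenced Neighbor Score): grow a community from each
# unvisited node, absorbing neighbours well connected to it.

def initial_cover(adj):
    community = {}
    next_id = 0
    for start in adj:
        if start in community:
            continue
        community[start] = next_id
        q = deque([start])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v in community:
                    continue
                # Absorb v if at least half of its edges point into this community.
                inside = sum(1 for w in adj[v] if community.get(w) == next_id)
                if inside * 2 >= len(adj[v]):
                    community[v] = next_id
                    q.append(v)
        next_id += 1
    return community

# Two triangles joined by a single bridge edge (2-3).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(initial_cover(adj))  # {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
```

On sparse graphs each edge is examined a bounded number of times, so the pass is roughly linear; a modularity maximization step would then refine this initial cover into the final one.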