Cached Sufficient Statistics for Efficient Machine Learning with Large Datasets
This paper introduces new algorithms and data structures for fast counting
over machine learning datasets. We focus on the counting task of constructing
contingency tables, but our approach is also applicable to counting the number
of records in a dataset that match conjunctive queries. Subject to certain
assumptions, the costs of these operations can be shown to be independent of
the number of records in the dataset and loglinear in the number of non-zero
entries in the contingency table. We provide a very sparse data structure, the
ADtree, to minimize memory use. We provide analytical worst-case bounds for
this structure for several models of data distribution. We empirically
demonstrate that tractably-sized data structures can be produced for large
real-world datasets by (a) using a sparse tree structure that never allocates
memory for counts of zero, (b) never allocating memory for counts that can be
deduced from other counts, and (c) not bothering to expand the tree fully near
its leaves. We show how the ADtree can be used to accelerate Bayes net
structure finding algorithms, rule learning algorithms, and feature selection
algorithms, and we provide a number of empirical results comparing ADtree
methods against traditional direct counting approaches. We also discuss the
possible uses of ADtrees in other machine learning methods, and discuss the
merits of ADtrees in comparison with alternative representations such as
kd-trees, R-trees and Frequent Sets.
Comment: See http://www.jair.org/ for any accompanying file
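The sparsity idea behind points (a) and (b) can be sketched with a toy structure. The class below is a drastically simplified, dict-based stand-in for an ADtree (all names are illustrative, not the paper's): it caches counts only for attribute-value combinations that actually occur in the data, so zero counts consume no memory, and conjunctive count queries become single lookups.

```python
from collections import Counter
from itertools import combinations

class SparseCounts:
    """Toy sketch of ADtree-style cached counting: only non-zero
    counts are ever stored; an absent key means a count of zero."""

    def __init__(self, records, max_arity=2):
        # records: list of tuples, one value per attribute
        self.counts = Counter()
        for rec in records:
            items = list(enumerate(rec))  # (attribute_index, value) pairs
            for k in range(1, max_arity + 1):
                for combo in combinations(items, k):
                    self.counts[combo] += 1

    def count(self, query):
        """query: dict {attribute_index: value}; returns #matching records."""
        key = tuple(sorted(query.items()))
        return self.counts.get(key, 0)  # absent key == zero count

records = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild")]
sc = SparseCounts(records)
sc.count({0: "sunny"})            # 2
sc.count({0: "rain", 1: "hot"})   # 0 -- never stored anywhere
```

A real ADtree avoids even this much storage by deducing counts from other counts (point (b)) and by not expanding near the leaves (point (c)); the sketch only shows the never-store-zeros property.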
SAGE: Sequential Attribute Generator for Analyzing Glioblastomas using Limited Dataset
While deep learning approaches have shown remarkable performance in many
imaging tasks, most of these methods rely on availability of large quantities
of data. Medical image data, however, is scarce and fragmented. Generative
Adversarial Networks (GANs) have recently been very effective in handling such
datasets by generating more data. If the datasets are very small, however, GANs
cannot learn the data distribution properly, resulting in less diverse or
low-quality results. One such limited dataset is that for the concurrent gain
of 19 and 20 chromosomes (19/20 co-gain), a mutation with positive prognostic
value in Glioblastomas (GBM). In this paper, we detect imaging biomarkers for
the mutation to streamline the extensive and invasive prognosis pipeline. Since
this mutation is relatively rare, and the available dataset correspondingly
small, we propose a novel generative framework, the Sequential Attribute
GEnerator (SAGE), which generates detailed tumor imaging features while
learning from a limited dataset. Experiments show that not only does SAGE
generate higher-quality tumors than the standard Deep Convolutional GAN
(DC-GAN) and the Wasserstein GAN with Gradient Penalty (WGAN-GP), but it also
captures the imaging biomarkers accurately.
BINet: Multi-perspective Business Process Anomaly Classification
In this paper, we introduce BINet, a neural network architecture for
real-time multi-perspective anomaly detection in business process event logs.
BINet is designed to handle both the control flow and the data perspective of a
business process. Additionally, we propose a set of heuristics for setting the
threshold of an anomaly detection algorithm automatically. We demonstrate that
BINet can be used to detect anomalies in event logs not only on a case level
but also on event attribute level. Finally, we demonstrate that a simple set of
rules can be used to utilize the output of BINet for anomaly classification. We
compare BINet to eight other state-of-the-art anomaly detection algorithms and
evaluate their performance on an elaborate data corpus of 29 synthetic and 15
real-life event logs. BINet outperforms all other methods on both the
synthetic and the real-life datasets.
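Automatic threshold setting of the kind the abstract mentions can be illustrated with one generic heuristic (an assumption for illustration, not necessarily one of BINet's own heuristics): place the threshold in the widest gap of the sorted anomaly-score distribution, on the premise that normal and anomalous scores form separated clusters.

```python
def auto_threshold(scores):
    """Generic threshold heuristic (illustrative only): put the cut
    in the widest gap of the sorted score distribution."""
    s = sorted(scores)
    gaps = [(s[i + 1] - s[i], i) for i in range(len(s) - 1)]
    _, i = max(gaps)                 # index left of the widest gap
    return (s[i] + s[i + 1]) / 2     # midpoint of that gap

scores = [0.01, 0.02, 0.03, 0.05, 0.71, 0.83]
t = auto_threshold(scores)           # 0.38: midpoint of the 0.05-0.71 gap
flags = [x > t for x in scores]      # last two scores flagged as anomalous
```

Any per-event or per-attribute score stream could be fed through such a rule, which is what makes event-attribute-level detection possible without hand-tuned thresholds.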
A Two-step Statistical Approach for Inferring Network Traffic Demands (Revises Technical Report BUCS-2003-003)
Accurate knowledge of traffic demands in a communication network enables or enhances a variety of traffic engineering and network management tasks of paramount importance for operational networks. Directly measuring a complete set of these demands is prohibitively expensive because of the huge amounts of data that must be collected and the performance impact that such measurements would impose on the regular behavior of the network. As a consequence, we must rely on statistical techniques to produce estimates of actual traffic demands from partial information. The performance of such techniques is, however, constrained by the partial information on which they rely and by the heavy computations they incur, which limit their convergence behavior. In this paper we study a two-step approach for inferring network traffic demands. First, we elaborate and evaluate a modeling approach for generating good starting points to be fed to iterative statistical inference techniques. We call these starting points informed priors, since they are obtained using actual network information such as packet traces and SNMP link counts. Second, we provide a very fast variant of the EM algorithm which extends its computation range, increasing its accuracy and decreasing its dependence on the quality of the starting point. Finally, we evaluate and compare alternative mechanisms for generating starting points and the convergence characteristics of our EM algorithm against a recently proposed Weighted Least Squares approach.
National Science Foundation (ANI-0095988, EIA-0202067, ITR ANI-0205294)
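The two-step idea can be sketched on a toy Poisson traffic model: multiplicative EM updates (the standard Richardson-Lucy form for Poisson linear-inverse problems) infer origin-destination demands x from link counts y ≈ Ax, starting from a prior x0. This is a minimal sketch under stated assumptions, not the paper's accelerated EM variant; the routing matrix and numbers below are invented for illustration.

```python
def em_demands(A, y, x0, iters=200):
    """Multiplicative EM for a Poisson traffic model: infer OD demands x
    from link counts y ~ Poisson(A x). x0 is the starting point; in the
    paper's terminology an informed prior would be supplied here."""
    x = list(x0)
    m, n = len(A), len(A[0])
    col_sum = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        for j in range(n):
            num = sum(A[i][j] * y[i] / Ax[i] for i in range(m) if Ax[i] > 0)
            x[j] *= num / col_sum[j]
    return x

# Two OD flows sharing one link: link counts are [x0, x0 + x1, x1].
A = [[1, 0], [1, 1], [0, 1]]
y = [3.0, 8.0, 5.0]
x = em_demands(A, y, x0=[1.0, 1.0])   # converges toward [3, 5]
```

A better x0 (closer to the true demands) needs fewer iterations to reach a given accuracy, which is precisely the motivation for informed priors in the first step.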