Evaluating Overfit and Underfit in Models of Network Community Structure
A common data mining task on networks is community detection, which seeks an
unsupervised decomposition of a network into structural groups based on
statistical regularities in the network's connectivity. Although many methods
exist, the No Free Lunch theorem for community detection implies that each
makes some kind of tradeoff, and no algorithm can be optimal on all inputs.
Thus, different algorithms will over- or underfit on different inputs, finding
more, fewer, or just different communities than is optimal, and evaluation
methods that use a metadata partition as a ground truth will produce misleading
conclusions about general accuracy. Here, we present a broad evaluation of over-
and underfitting in community detection, comparing the behavior of 16
state-of-the-art community detection algorithms on a novel and structurally
diverse corpus of 406 real-world networks. We find that (i) algorithms vary
widely both in the number of communities they find and in their corresponding
composition, given the same input, (ii) algorithms can be clustered into
distinct high-level groups based on similarities of their outputs on real-world
networks, and (iii) these differences induce wide variation in accuracy on link
prediction and link description tasks. We introduce a new diagnostic for
evaluating overfitting and underfitting in practice, and use it to roughly
divide community detection methods into general and specialized learning
algorithms. Across methods and inputs, Bayesian techniques based on the
stochastic block model and a minimum description length approach to
regularization represent the best general learning approach, but can be
outperformed under specific circumstances. These results introduce both a
theoretically principled approach to evaluating over- and underfitting in models
of network community structure and a realistic benchmark by which new methods
may be evaluated and compared.
Comment: 22 pages, 13 figures, 3 tables
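As a minimal sketch of finding (i), the snippet below runs networkx's built-in detectors, as stand-ins for the 16 algorithms studied in the paper, on the same small graph and shows that they disagree on the number and sizes of communities (louvain_communities needs a reasonably recent networkx release):

```python
# Minimal illustration (not the paper's benchmark) that different
# community-detection algorithms partition the same graph differently.
import networkx as nx
from networkx.algorithms import community as nx_comm

G = nx.karate_club_graph()   # small benchmark graph, same input for all

algorithms = {
    "greedy modularity": nx_comm.greedy_modularity_communities,
    "label propagation": nx_comm.label_propagation_communities,
    "Louvain": nx_comm.louvain_communities,
}

for name, detect in algorithms.items():
    parts = list(detect(G))
    print(f"{name:>17}: {len(parts)} communities, "
          f"sizes {sorted(len(c) for c in parts)}")
```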
Bayesian emulation for optimization in multi-step portfolio decisions
We discuss the Bayesian emulation approach to computational solution of
multi-step portfolio studies in financial time series. "Bayesian emulation for
decisions" involves mapping the technical structure of a decision analysis
problem to that of Bayesian inference in a purely synthetic "emulating"
statistical model. This provides access to standard posterior analytic,
simulation and optimization methods that yield indirect solutions of the
decision problem. We develop this in time series portfolio analysis using
classes of economically and psychologically relevant multi-step ahead portfolio
utility functions. Studies with multivariate currency, commodity and stock
index time series illustrate the approach and show some of the practical
utility and benefits of the Bayesian emulation methodology.
Comment: 24 pages, 7 figures, 2 tables
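A minimal sketch of the emulation idea, under simplifying assumptions not in the paper (one step, two assets, synthetic Gaussian returns, CARA utility, illustrative constants): expected utility is mapped to a synthetic log-density over the decision variable, so standard posterior simulation solves the optimization indirectly.

```python
# Hypothetical one-step, two-asset sketch of "Bayesian emulation for
# decisions"; the paper treats richer multi-step utilities and models.
import numpy as np

rng = np.random.default_rng(0)
mu = [0.05, 0.02]
cov = [[0.04, 0.01], [0.01, 0.02]]
returns = rng.multivariate_normal(mu, cov, size=5000)

def expected_utility(w):
    # CARA (exponential) utility of portfolio return, averaged over draws.
    port = returns @ np.array([w, 1.0 - w])
    return np.mean(1.0 - np.exp(-3.0 * port))

def log_target(w):
    # Emulating "posterior" over the decision variable: density proportional
    # to exp(kappa * expected utility), so its mode is the optimal decision;
    # kappa sharpens the density around the optimum.
    return 50.0 * expected_utility(w) if 0.0 <= w <= 1.0 else -np.inf

# Standard posterior simulation (random-walk Metropolis) now yields an
# indirect solution of the decision problem.
w, samples = 0.5, []
for _ in range(20000):
    prop = w + rng.normal(scale=0.05)
    if np.log(rng.uniform()) < log_target(prop) - log_target(w):
        w = prop
    samples.append(w)

print("approximate optimal weight on asset 1:", np.mean(samples[5000:]))
```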
Latent Conjunctive Bayesian Network: Unify Attribute Hierarchy and Bayesian Network for Cognitive Diagnosis
Cognitive diagnostic assessment aims to measure specific knowledge structures
in students. To model data arising from such assessments, cognitive diagnostic
models with discrete latent variables have gained popularity in educational and
behavioral sciences. In a learning context, the latent variables often denote
sequentially acquired skill attributes, a process often captured by the so-called
attribute hierarchy method. One drawback of the traditional attribute hierarchy
method is that its parameter complexity varies substantially with the
hierarchy's graph structure and hence lacks statistical parsimony. Additionally,
arrows among the attributes do not carry an interpretation of statistical
dependence. Motivated by these issues, we propose a new family of latent conjunctive
Bayesian networks (LCBNs), which rigorously unify the attribute hierarchy
method for sequential skill mastery and the Bayesian network model in
statistical machine learning. In an LCBN, the latent graph not only retains the
hard constraints on skill prerequisites as an attribute hierarchy, but also
encodes a conditional independence interpretation as a Bayesian network.
LCBNs are identifiable, interpretable, and parsimonious statistical tools to
diagnose students' cognitive abilities from assessment data. We propose an
efficient two-step EM algorithm for structure learning and parameter estimation
in LCBNs. Application of our method to an international educational assessment
dataset gives interpretable cognitive-diagnosis findings.
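As a concrete illustration (a hypothetical sketch, not the paper's model or its two-step EM algorithm), the following shows the two ingredients an LCBN unifies: hard prerequisite constraints from an attribute hierarchy, and a conjunctive (DINA-style) response rule of the kind used in cognitive diagnostic models.

```python
# Illustrative only: attribute hierarchy as hard constraints on latent
# profiles, plus a conjunctive item-response rule.
from itertools import product

# Hierarchy: attribute 0 is a prerequisite for 1, and 1 for 2.
prereqs = {1: [0], 2: [1]}

def respects_hierarchy(profile):
    # An attribute may be mastered only if all its prerequisites are.
    return all(not profile[k] or all(profile[p] for p in parents)
               for k, parents in prereqs.items())

# Only profiles consistent with the hierarchy are admissible latent states.
profiles = [p for p in product([0, 1], repeat=3) if respects_hierarchy(p)]
print(profiles)  # [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]

def p_correct(profile, required, slip=0.1, guess=0.2):
    # Conjunctive rule: success is likely only when every attribute the
    # item requires has been mastered.
    mastered_all = all(profile[k] for k in required)
    return 1.0 - slip if mastered_all else guess

print(p_correct((1, 1, 0), required=[0, 1]))  # 0.9
```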
Covariance Estimation: The GLM and Regularization Perspectives
Finding an unconstrained and statistically interpretable reparameterization
of a covariance matrix is still an open problem in statistics. Its solution is
of central importance in covariance estimation, particularly in the recent
high-dimensional data environment where enforcing the positive-definiteness
constraint could be computationally expensive. We provide a survey of the
progress made in modeling covariance matrices from two relatively complementary
perspectives: (1) generalized linear models (GLM) or parsimony and use of
covariates in low dimensions, and (2) regularization or sparsity for
high-dimensional data. An emerging, unifying and powerful trend in both
perspectives is that of reducing the covariance estimation problem to that of
fitting a sequence of regressions. We point out several instances of
the regression-based formulation. A notable case is in sparse estimation of a
precision matrix or a Gaussian graphical model leading to the fast graphical
LASSO algorithm. Some advantages and limitations of the regression-based
Cholesky decomposition relative to the classical spectral (eigenvalue) and
variance-correlation decompositions are highlighted. The former provides an
unconstrained and statistically interpretable reparameterization, and
guarantees the positive-definiteness of the estimated covariance matrix. It
reduces the unintuitive task of covariance estimation to that of modeling a
sequence of regressions at the cost of imposing an a priori order among the
variables. Elementwise regularization of the sample covariance matrix such as
banding, tapering and thresholding has desirable asymptotic properties and the
sparse estimated covariance matrix is positive definite with probability
tending to one for large samples and dimensions.
Comment: Published at http://dx.doi.org/10.1214/11-STS358 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)
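To make the regression-based Cholesky idea concrete, here is a minimal numpy sketch (an illustration, not code from the survey): regressing each variable on its predecessors yields unconstrained coefficients and residual variances, and the implied covariance estimate is positive definite by construction.

```python
# Modified (regression-based) Cholesky reparameterization of a covariance.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 4)) @ rng.standard_normal((4, 4))
X -= X.mean(axis=0)          # center so regressions need no intercept
n, p = X.shape

T = np.eye(p)                # unit lower-triangular; row j holds -phi_jk
d = np.empty(p)              # innovation (residual) variances
d[0] = X[:, 0].var()
for j in range(1, p):
    phi, *_ = np.linalg.lstsq(X[:, :j], X[:, j], rcond=None)
    T[j, :j] = -phi
    d[j] = (X[:, j] - X[:, :j] @ phi).var()

# T Sigma T' = D, hence Sigma = T^{-1} D T^{-T}: positive definite for d > 0.
Tinv = np.linalg.inv(T)
Sigma_hat = Tinv @ np.diag(d) @ Tinv.T
print(np.allclose(Sigma_hat, np.cov(X.T, bias=True)))  # recovers sample cov
```

Note the cost the survey mentions: the factorization presupposes an a priori ordering of the variables, since each regression conditions only on "earlier" coordinates.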
Efficient Correlated Topic Modeling with Topic Embedding
Correlated topic models have been limited to small model and problem sizes
due to their high computational cost and poor scaling. In this paper, we
propose a new model which learns compact topic embeddings and captures topic
correlations through the closeness between the topic vectors. Our method
enables efficient inference in the low-dimensional embedding space, reducing
previous cubic or quadratic time complexity to linear w.r.t. the number of
topics. We further speed up variational inference with a fast sampler that
exploits the sparsity of topic occurrence. Extensive experiments show that our
approach can handle model and data scales several orders of magnitude larger
than existing correlated topic models, without sacrificing modeling quality,
delivering competitive or superior performance in document classification and
retrieval.
Comment: KDD 2017 oral. The first two authors contributed equally.
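A minimal sketch of the abstract's core idea (illustrative names and sizes, not the authors' implementation): topics live in a low-dimensional embedding space, and topic correlations are read off the closeness of topic vectors on demand, so no explicit K x K matrix is ever materialized.

```python
# Topic correlations recovered from compact topic embeddings.
import numpy as np

K, d = 1000, 32                        # many topics, small embedding dim
rng = np.random.default_rng(2)
topic_vecs = rng.standard_normal((K, d))
unit = topic_vecs / np.linalg.norm(topic_vecs, axis=1, keepdims=True)

def topic_affinity(i: int, j: int) -> float:
    # Cosine similarity of embeddings stands in for topic correlation:
    # O(d) per query, O(K d) storage, instead of an O(K^2) covariance.
    return float(unit[i] @ unit[j])

print(topic_affinity(0, 1))
```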
Different approaches to community detection
A precise definition of what constitutes a community in networks has remained
elusive. Consequently, network scientists have compared community detection
algorithms on benchmark networks with a particular form of community structure
and classified them based on the mathematical techniques they employ. However,
this comparison can be misleading, because apparent similarities in their
mathematical machinery can disguise the different reasons why we would want to
employ community detection in the first place. Here we provide a focused review
of these different motivations that underpin community detection. This
problem-driven classification is useful in applied network science, where it is
important to select an appropriate algorithm for the given purpose. Moreover,
highlighting the different approaches to community detection also delineates
the many lines of research and points out open directions for future work.
Comment: 14 pages, 2 figures. Written as a chapter for the forthcoming Advances in
network clustering and blockmodeling, and based on an extended version of The
many facets of community detection in complex networks, Appl. Netw. Sci. 2:4
(2017) by the same author.