On Nonregularized Estimation of Psychological Networks.
An important goal for psychological science is developing methods to characterize relationships between variables. Customary approaches use structural equation models to connect latent factors to a number of observed measurements, or test causal hypotheses between observed variables. More recently, regularized partial correlation networks have been proposed as an alternative approach for characterizing relationships among variables through off-diagonal elements in the precision matrix. While the graphical lasso (glasso) has emerged as the default network estimation method, it was optimized in fields outside of psychology with very different needs, such as high-dimensional data where the number of variables (p) exceeds the number of observations (n). In this article, we describe the glasso method in the context of the fields where it was developed, and then we demonstrate that the advantages of regularization diminish in the settings where psychological networks are often fitted (p ≪ n). We first show that improved properties of the precision matrix, such as eigenvalue estimation, and predictive accuracy with cross-validation are not always appreciable. We then introduce nonregularized methods based on multiple regression and a nonparametric bootstrap strategy, after which we characterize performance with extensive simulations. Our results demonstrate that the nonregularized methods can be used to reduce the false-positive rate, compared to glasso, and they appear to provide consistent performance across sparsity levels, sample composition (p/n), and partial correlation size. We end by reviewing recent findings in the statistics literature that suggest alternative methods often have superior performance to glasso, as well as suggesting areas for future research in psychology. The nonregularized methods have been implemented in the R package GGMnonreg.
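The nonregularized route the abstract describes starts from a directly inverted sample covariance matrix, which is well conditioned when p ≪ n. Below is a minimal NumPy sketch of that idea (not the GGMnonreg implementation; the dimensions and simulated data are purely illustrative): partial correlations are read off the scaled off-diagonal elements of the precision matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical p << n setting, for illustration only.
n, p = 500, 10
X = rng.standard_normal((n, p))

# Nonregularized estimate: invert the sample covariance directly,
# which is numerically stable when p << n.
S = np.cov(X, rowvar=False)
Theta = np.linalg.inv(S)  # precision matrix

# Partial correlations from the off-diagonal precision elements:
# rho_ij = -theta_ij / sqrt(theta_ii * theta_jj)
d = np.sqrt(np.diag(Theta))
pcor = -Theta / np.outer(d, d)
np.fill_diagonal(pcor, 1.0)
```

With independent simulated variables, as here, the off-diagonal partial correlations hover near zero; with real data, nonzero entries define the edges of the network.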
Co-Clustering Network-Constrained Trajectory Data
Clustering moving-object trajectories has recently gained increasing interest from both the data mining and machine learning communities. This problem, however, has mainly been studied in the setting where moving objects can move freely in Euclidean space. In this paper, we study the problem of clustering trajectories of vehicles whose movement is restricted by the underlying road network. We model the relations between these trajectories and road segments as a bipartite graph and cluster its vertices. We demonstrate our approach on synthetic data and show how it can be used to infer knowledge about flow dynamics and the behavior of the drivers using the road network.
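The bipartite trajectory-segment structure described above lends itself to spectral co-clustering of the biadjacency matrix. The sketch below, using scikit-learn's generic `SpectralCoclustering` rather than the paper's own method, plants two groups of trajectories that use disjoint segment groups and recovers them; all data and dimensions are hypothetical.

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

# Toy biadjacency matrix: rows = trajectories, columns = road segments,
# entries = number of times a trajectory traverses a segment.
rng = np.random.default_rng(42)
A = np.zeros((6, 8))
A[:3, :4] = rng.integers(1, 5, size=(3, 4))   # trajectory group 1
A[3:, 4:] = rng.integers(1, 5, size=(3, 4))   # trajectory group 2
A += 0.01  # avoid all-zero rows/columns in the spectral normalization

# Co-cluster rows (trajectories) and columns (segments) jointly.
model = SpectralCoclustering(n_clusters=2, random_state=0)
model.fit(A)
print(model.row_labels_)     # cluster label per trajectory
print(model.column_labels_)  # cluster label per road segment
```

Joint row/column labels group trajectories together with the road segments they share, which is the co-clustering behavior the abstract motivates.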
Recent Progress in Image Deblurring
This paper comprehensively reviews recent developments in image deblurring, including non-blind/blind and spatially invariant/variant techniques. These techniques share the objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally derive an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must deliver high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how they handle ill-posedness, a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite a certain level of progress, image deblurring, especially in the blind case, remains limited by complex application conditions that make the blur kernel hard to obtain and spatially variant. This review provides a holistic understanding of and deep insight into image deblurring. An analysis of the empirical evidence for representative methods and practical issues, as well as a discussion of promising future directions, are also presented.
Comment: 53 pages, 17 figures
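The ill-posedness issue the abstract centers on can be made concrete with the classical non-blind baseline of Tikhonov-regularized inverse filtering (a standard textbook technique, not one attributed to this survey): dividing by the blur kernel in the frequency domain with a damping term that trades sharpness against noise amplification.

```python
import numpy as np

def regularized_deblur(blurred, kernel, lam=1e-2):
    """Non-blind deblurring via Tikhonov-regularized inverse filtering.

    lam controls the ill-posedness trade-off: smaller values sharpen
    more but amplify noise where the kernel's spectrum is small.
    """
    H = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    # Regularized inverse filter: conj(H) / (|H|^2 + lam)
    X = np.conj(H) * B / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# Toy example: blur a synthetic image with a box kernel, then restore it.
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
k = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, s=img.shape)))
restored = regularized_deblur(blurred, k, lam=1e-3)
```

Setting `lam=0` recovers the naive inverse filter, which blows up wherever `|H|` is near zero; that failure mode is exactly the ill-posedness that the five method families in the review address in more sophisticated ways.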
Two betweenness centrality measures based on Randomized Shortest Paths
This paper introduces two new, closely related betweenness centrality measures based on the Randomized Shortest Paths (RSP) framework, which fill a gap between traditional network centrality measures based on shortest paths and more recent methods considering random walks or current flows. The framework defines Boltzmann probability distributions over paths of the network that concentrate on the shortest paths but also take longer paths into account, depending on an inverse temperature parameter. RSPs have previously proven useful for defining distance measures on networks. In this work we study their utility in quantifying the importance of the nodes of a network. The proposed RSP betweenness centralities combine, in an optimal way, the ideas of using the shortest and purely random paths for analysing the roles of network nodes, avoiding issues associated with these two paradigms. We present the derivations of these measures and show how they can be computed efficiently. In addition, we show with real-world examples the potential of the RSP betweenness centralities in identifying interesting nodes of a network that more traditional methods might fail to notice.
Comment: Minor updates; published in Scientific Reports
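To make the Boltzmann-over-paths idea tangible, here is a simplified NumPy sketch of an RSP-style betweenness, not the paper's exact estimator: transitions are the natural random walk damped by exp(-beta) per unit-cost edge, the fundamental matrix Z accumulates path weights, and z_sj * z_jt / z_st scores how much node j sits on s-to-t paths. All formula details here are a plausible simplification, hedged accordingly.

```python
import numpy as np

def simple_rsp_betweenness(A, beta=1.0):
    """Simplified RSP-style betweenness sketch (unit edge costs).

    W = P_ref * exp(-beta): Boltzmann-weighted transition matrix, where
    P_ref is the natural random walk on adjacency A. Large beta focuses
    on shortest paths; small beta approaches the pure random walk.
    """
    n = A.shape[0]
    P_ref = A / A.sum(axis=1, keepdims=True)
    W = P_ref * np.exp(-beta)          # spectral radius < 1 for beta > 0
    Z = np.linalg.inv(np.eye(n) - W)   # fundamental matrix, sums all walks
    bet = np.zeros(n)
    for j in range(n):
        for s in range(n):
            for t in range(n):
                if s != t and s != j and t != j:
                    # Contribution of s->t paths passing through j.
                    bet[j] += Z[s, j] * Z[j, t] / Z[s, t]
    return bet

# Path graph 0-1-2-3-4: the middle node should score highest.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
bet = simple_rsp_betweenness(A, beta=2.0)
```

On the path graph the score is symmetric about the center and peaks at the middle node, matching the shortest-path intuition at this fairly large beta.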
Reliable inference for complex models by discriminative composite likelihood estimation
Composite likelihood estimation plays an important role in the analysis of multivariate data for which the full likelihood function is intractable. An important issue in composite likelihood inference is the choice of the weights associated with lower-dimensional data subsets, since the presence of incompatible sub-models can deteriorate the accuracy of the resulting estimator. In this paper, we introduce a new approach to simultaneous parameter estimation by tilting, or re-weighting, each sub-likelihood component, called discriminative composite likelihood estimation (D-McLE). The data-adaptive weights maximize the composite likelihood function subject to moving a given distance from uniform weights; the resulting weights can then be used to rank lower-dimensional likelihoods in terms of their influence on the composite likelihood function. Our analytical findings and numerical examples support the stability of the resulting estimator compared to estimators constructed using standard composition strategies based on uniform weights. The properties of the new method are illustrated with simulated data and real spatial data on multivariate precipitation extremes.
Comment: 29 pages, 4 figures
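A weighted pairwise composite likelihood, the building block that D-McLE re-weights, can be sketched for the simplest case of an equicorrelated Gaussian model with unit variances. This is an illustrative baseline with uniform weights, not the paper's D-McLE procedure; the model, grid search, and data are all hypothetical.

```python
import numpy as np
from itertools import combinations

def pairwise_cl(X, rho, weights=None):
    """Weighted pairwise composite log-likelihood (illustrative sketch).

    Each bivariate margin of an equicorrelated Gaussian (common
    correlation rho, unit variances) contributes one sub-likelihood;
    `weights` tilts each pair's contribution. Uniform weights recover
    the standard composition strategy the paper compares against.
    """
    n, p = X.shape
    pairs = list(combinations(range(p), 2))
    if weights is None:
        weights = np.ones(len(pairs))  # uniform weights
    ll = 0.0
    for w, (i, j) in zip(weights, pairs):
        x, y = X[:, i], X[:, j]
        # Standard bivariate normal log-density with correlation rho.
        q = (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
        ll += w * np.sum(-0.5 * q - 0.5 * np.log(1 - rho**2) - np.log(2 * np.pi))
    return ll

# Simulate equicorrelated data and recover rho by maximizing the
# pairwise composite likelihood over a grid.
rng = np.random.default_rng(3)
p, rho_true = 4, 0.5
Sigma = (1 - rho_true) * np.eye(p) + rho_true
X = rng.multivariate_normal(np.zeros(p), Sigma, size=2000)
grid = np.linspace(0.05, 0.95, 91)
rho_hat = grid[np.argmax([pairwise_cl(X, r) for r in grid])]
```

The D-McLE idea enters where `weights` is chosen data-adaptively, moving a controlled distance from uniform so that incompatible pairs are down-weighted.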
Discrete Temporal Models of Social Networks
We propose a family of statistical models for social network evolution over time, which represents an extension of Exponential Random Graph Models (ERGMs). Many of the methods for ERGMs are readily adapted for these models, including maximum likelihood estimation algorithms. We discuss models of this type and their properties, give examples, and demonstrate their use for hypothesis testing and classification. We believe our temporal ERG models represent a useful new framework for modeling time-evolving social networks, as well as rewiring networks from other domains such as gene regulation circuitry and communication networks.
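One well-known special case of such discrete temporal ERG models uses only density and edge-stability statistics, under which the edges at time t are conditionally independent given the previous network, so a transition can be simulated dyad by dyad with a logistic model. The sketch below illustrates that special case only (statistic choice, parameter values, and data are hypothetical, and the full model family is more general).

```python
import numpy as np

def tergm_step(y_prev, theta_density, theta_stability, rng):
    """One transition of a simple temporal ERGM (density + stability).

    With these two statistics, each undirected dyad is an independent
    Bernoulli draw given the previous network:
        logit P(y_ij = 1) = theta_density + theta_stability * (2*y_prev_ij - 1)
    so positive theta_stability makes edges persist and non-edges stay absent.
    """
    n = y_prev.shape[0]
    logits = theta_density + theta_stability * (2 * y_prev - 1)
    probs = 1.0 / (1.0 + np.exp(-logits))
    draws = rng.random((n, n)) < probs
    y = np.triu(draws, 1)            # sample each dyad once (upper triangle)
    return (y | y.T).astype(int)     # symmetrize for an undirected network

# Simulate one step from a sparse random starting network.
rng = np.random.default_rng(7)
n = 20
start = np.triu(rng.random((n, n)) < 0.2, 1)
y0 = (start | start.T).astype(int)
y1 = tergm_step(y0, theta_density=-1.0, theta_stability=2.0, rng=rng)
```

With `theta_stability=2.0` the next network agrees with the previous one on most dyads, which is the temporal-dependence behavior the model family is designed to capture; richer statistics break the dyadic independence and require MCMC-based fitting, as in static ERGMs.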