Intrinsically Dynamic Network Communities
Community finding algorithms for networks have recently been extended to
dynamic data. Most of these recent methods aim at extracting community
partitions from successive graph snapshots and thereafter connecting or
smoothing these partitions using clever time-dependent features and sampling
techniques. These approaches nonetheless achieve longitudinal rather than
dynamic community detection. We assume that communities are fundamentally
defined by the repetition of interactions among a set of nodes over time.
According to this definition, analyzing the data by considering successive
snapshots induces a significant loss of information: we suggest that it blurs
essentially dynamic phenomena - such as communities based on repeated
inter-temporal interactions, nodes switching from one community to another across
time, or the possibility that a community survives while its members are being
integrally replaced over a longer time period. We propose a formalism which
aims at tackling this issue in the context of time-directed datasets (such as
citation networks), and present several illustrations on both empirical and
synthetic dynamic networks. Finally, we introduce intrinsically dynamic
metrics to qualify temporal community structure and emphasize their possible
role as estimators of the quality of the community detection, taking into
account the fact that various empirical contexts may call for distinct
`community' definitions and detection criteria.
Comment: 27 pages, 11 figures
On the detectability of non-trivial topologies
We explore the main physical processes which potentially affect the
topological signal in the Cosmic Microwave Background (CMB) for a range of
toroidal universes. We consider specifically reionisation, the integrated
Sachs-Wolfe (ISW) effect, the size of the causal horizon, topological defects
and primordial gravitational waves. We use three estimators: the information
content, the S/N statistic and the Bayesian evidence. While reionisation has
almost no effect on the estimators, we show that taking into account the ISW
strongly decreases our ability to detect the topological signal. We also study
the impact of varying the relevant cosmological parameters within the 2 sigma
ranges allowed by present data. We find that only Omega_Lambda, which
influences both ISW and the size of the causal horizon, significantly alters
the detection for all three estimators considered here.
Comment: 11 pages, 9 figures
A tutorial on cue combination and Signal Detection Theory: Using changes in sensitivity to evaluate how observers integrate sensory information
Many sensory inputs contain multiple sources of information ("cues"), such as two sounds of different frequencies, or a voice heard in unison with moving lips. Often, each cue provides a separate estimate of the same physical attribute, such as the size or location of an object. An ideal observer can exploit such redundant sensory information to improve the accuracy of their perceptual judgments. For example, if each cue is modeled as an independent, Gaussian, random variable, then combining N cues should provide up to a √N improvement in detection/discrimination sensitivity. Alternatively, a less efficient observer may base their decision on only a subset of the available information, and so gain little or no benefit from having access to multiple sources of information. Here we use Signal Detection Theory to formulate and compare various models of cue combination, many of which are commonly used to explain empirical data. We alert the reader to the key assumptions inherent in each model, and provide formulas for deriving quantitative predictions. Code is also provided for simulating each model, allowing expected levels of measurement error to be quantified. Based on these results, it is shown that predicted sensitivity often differs surprisingly little between qualitatively distinct models of combination. This means that sensitivity alone is not sufficient for understanding decision efficiency, and the implications of this are discussed.
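The √N prediction for an ideal observer can be checked with a small Monte Carlo simulation. The sketch below is our own illustration (names such as `dprime` and the trial counts are assumptions, not the tutorial's provided code): each of N independent, equal-reliability Gaussian cues carries a single-cue sensitivity d' = 1, and the ideal observer sums them.

```python
import numpy as np

# Minimal sketch (our own, not the tutorial's code): empirical d' for an
# ideal observer who sums N independent, equal-reliability Gaussian cues.
# Theory predicts d'(N) = sqrt(N) * d'(1).
rng = np.random.default_rng(0)
n_trials = 200_000
d_single = 1.0  # d' carried by each individual cue

def dprime(signal, noise):
    """Empirical sensitivity index d' from decision-variable samples."""
    pooled_sd = np.sqrt(0.5 * (signal.var(ddof=1) + noise.var(ddof=1)))
    return (signal.mean() - noise.mean()) / pooled_sd

dprimes = {}
for n_cues in (1, 2, 4):
    # Signal trials: each cue has mean d_single; noise trials: mean 0.
    sig = rng.normal(d_single, 1.0, (n_trials, n_cues)).sum(axis=1)
    noi = rng.normal(0.0, 1.0, (n_trials, n_cues)).sum(axis=1)
    dprimes[n_cues] = dprime(sig, noi)
    print(f"N={n_cues}: d' = {dprimes[n_cues]:.3f}  "
          f"(theory {d_single * np.sqrt(n_cues):.3f})")
```

Summing N such cues gives a decision variable with mean N·d' and variance N, hence sensitivity N·d'/√N = √N·d', which the simulated estimates track closely.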
Forecasting the industrial production index for the euro area through forecasts for the main countries
The aim of the present work is to obtain short-term predictions of the monthly volume of the industrial production of the euro area. Preliminary information on the behaviour of this variable is needed, since the index is released with a lag of about two months. A model based on the US industrial production index and on the single-country forecasts of the production indices for the main euro-area countries is proposed.
Keywords: prediction, industrial production, forecast combination, encompassing
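The keywords mention forecast combination and encompassing. As a generic sketch of the idea (our own illustration on synthetic data, not the paper's euro-area model), two candidate forecasts of a target can be combined with least-squares weights, in the spirit of a Granger-Ramanathan combining regression; a near-zero weight on one forecast suggests it is encompassed by the other.

```python
import numpy as np

# Generic forecast-combination sketch on synthetic data (our own
# illustration, not the paper's model). Two forecasts f1, f2 of a target y
# are combined with least-squares weights; an encompassing check asks
# whether one forecast's weight is negligible.
rng = np.random.default_rng(42)
T = 500
y = rng.normal(0.0, 1.0, T)        # target series (stand-in for IP growth)
f1 = y + rng.normal(0.0, 0.5, T)   # more accurate forecast
f2 = y + rng.normal(0.0, 1.0, T)   # noisier forecast

X = np.column_stack([np.ones(T), f1, f2])   # constant + both forecasts
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # combination weights

combo = X @ w
mse = lambda f: np.mean((y - f) ** 2)
print("weights (const, f1, f2):", w)
print("MSE f1:", mse(f1), " MSE f2:", mse(f2), " MSE combo:", mse(combo))
```

In-sample, the least-squares combination cannot do worse than either individual forecast, and the more accurate forecast receives the larger weight.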
Bayesian analysis of spatially distorted cosmic signals from Poissonian data
Reconstructing the matter density field from galaxy counts is a problem
frequently addressed in current literature. Two main sources of error are shot
noise from galaxy counts and insufficient knowledge of the correct galaxy
position caused by peculiar velocities and redshift measurement uncertainty.
Here we address the reconstruction problem of a Poissonian sampled log-normal
density field with velocity distortions in a Bayesian way via a maximum a
posteriori method. We test our algorithm on a 1D toy case and find significant
improvement compared to simple data inversion. In particular, we address the
following problems: photometric redshifts, mapping of extended sources in coded
mask systems, real space reconstruction from redshift space galaxy distribution
and combined analysis of data with different point spread functions.
Comment: 19 pages, 10 figures, accepted
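The flavour of the 1D toy case can be sketched as follows. This is our own minimal illustration in the spirit of the abstract (no velocity distortions are modelled, and all names and parameter values are assumptions): a log-normal field with a Gaussian prior on its log is Poisson-sampled, and the maximum a posteriori estimate is found by Newton iterations on the convex negative log-posterior, then compared with a simple data inversion.

```python
import numpy as np

# 1D toy MAP reconstruction of a log-normal density field from Poissonian
# counts (our own sketch, not the paper's code; no velocity distortions).
rng = np.random.default_rng(1)
npix = 64
nbar = 10.0  # mean counts per cell for the unit-density field

# Gaussian prior covariance for s = log-density (smooth correlations).
x = np.arange(npix)
S = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 4.0) ** 2) + 1e-6 * np.eye(npix)
S_inv = np.linalg.inv(S)

# Draw a true field and Poisson-sample the galaxy counts.
s_true = np.linalg.cholesky(S) @ rng.normal(size=npix)
counts = rng.poisson(nbar * np.exp(s_true))

# Newton iterations on the convex negative log-posterior:
#   sum_i [nbar * e^{s_i} - k_i * s_i] + (1/2) s^T S^{-1} s
s = np.zeros(npix)
for _ in range(30):
    grad = nbar * np.exp(s) - counts + S_inv @ s
    hess = np.diag(nbar * np.exp(s)) + S_inv
    # Clip guards against overshoot in the first few Newton steps.
    s = np.clip(s - np.linalg.solve(hess, grad), -10.0, 10.0)

# Simple data inversion for comparison (floor avoids log(0) at empty cells).
s_naive = np.log(np.maximum(counts, 0.5) / nbar)
rms = lambda e: np.sqrt(np.mean(e ** 2))
print("RMS error, naive inversion:", rms(s_naive - s_true))
print("RMS error, MAP:            ", rms(s - s_true))
```

The prior pulls the estimate toward smooth configurations, so the MAP reconstruction suppresses the shot noise that dominates the direct inversion, mirroring the improvement the abstract reports.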
How Sample Completeness Affects Gamma-Ray Burst Classification
Unsupervised pattern recognition algorithms support the existence of three
gamma-ray burst classes: Class I (long, large-fluence bursts of intermediate
spectral hardness), Class II (short, small-fluence, hard bursts), and Class III
(soft bursts of intermediate durations and fluences). The algorithms
surprisingly assign larger membership to Class III than to either of the other
two classes. A known systematic bias has been previously used to explain the
existence of Class III in terms of Class I; this bias allows the fluences and
durations of some bursts to be underestimated (Hakkila et al., ApJ 538, 165,
2000). We show that this bias affects primarily the longest bursts and
cannot explain the bulk of the Class III properties. We resolve the question of
Class III existence by demonstrating how samples obtained using standard
trigger mechanisms fail to preserve the duration characteristics of small peak
flux bursts. Sample incompleteness is thus primarily responsible for the
existence of Class III. In order to avoid this incompleteness, we show how a
new dual timescale peak flux can be defined in terms of peak flux and fluence.
The dual timescale peak flux preserves the duration distribution of faint
bursts and correlates better with spectral hardness (and presumably redshift)
than either peak flux or fluence. The techniques presented here are generic and
have applicability to the studies of other transient events. The results also
indicate that pattern recognition algorithms are sensitive to sample
completeness; this can influence the study of large astronomical databases such
as those found in a Virtual Observatory.
Comment: 29 pages, 6 figures, 3 tables, accepted for publication in The Astrophysical Journal