Multifractal Characterization of Protein Contact Networks
The multifractal detrended fluctuation analysis of time series is able to
reveal the presence of long-range correlations and, at the same time, to
characterize the self-similarity of the series. The rich information derivable
from the characteristic exponents and the multifractal spectrum can be further
analyzed to discover important insights about the underlying dynamical process.
In this paper, we employ multifractal analysis techniques in the study of
protein contact networks. To this end, initially a network is mapped to three
different time series, each of which is generated by a stationary unbiased
random walk. To capture the peculiarities of the networks at different levels,
we accordingly consider three observables at each vertex: the degree, the
clustering coefficient, and the closeness centrality. To compare the results
with suitable references, we also consider instances of three well-known
network models and two typical time series with purely monofractal and
multifractal properties. The first result of notable interest is that the time
series associated with protein contact networks exhibit long-range correlations
(strong persistence), consistent with signals in between the typical
monofractal and multifractal behavior. Subsequently, a suitable embedding of
the multifractal spectra allows us to focus on ensemble properties, which in
turn makes it possible to draw further observations about the considered
networks. In particular, we highlight the different roles that small and large
fluctuations of the considered observables play in the characterization of the
network topology.
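The network-to-series mapping described above can be sketched in a few lines: an unbiased random walk visits vertices and, at each step, records an observable of the current vertex (here the degree; the paper also uses the clustering coefficient and closeness centrality). This is an illustrative reconstruction, not the authors' code; the toy graph and function names are assumptions.

```python
import random

def random_walk_series(adj, observable, steps, seed=0):
    """Unbiased random walk on an undirected graph given as an
    adjacency dict {vertex: set(neighbors)}; at each step record
    the value of `observable` at the visited vertex."""
    rng = random.Random(seed)
    v = rng.choice(sorted(adj))          # uniform random start vertex
    series = []
    for _ in range(steps):
        series.append(observable(adj, v))
        v = rng.choice(sorted(adj[v]))   # hop to a uniform random neighbor
    return series

def degree(adj, v):
    return len(adj[v])

# Toy graph: a triangle (0,1,2) with a pendant vertex 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
series = random_walk_series(adj, degree, steps=1000)
```

The resulting series would then be fed to a multifractal detrended fluctuation analysis (MFDFA) routine to estimate the characteristic exponents and spectrum.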
Words cluster phonetically beyond phonotactic regularities
Recent evidence suggests that cognitive pressures associated with language acquisition and use could affect the organization of the lexicon. On one hand, consistent with noisy channel models of language (e.g., Levy, 2008), the phonological distance between wordforms should be maximized to avoid perceptual confusability (a pressure for dispersion). On the other hand, a lexicon with high phonological regularity would be simpler to learn, remember and produce (e.g., Monaghan et al., 2011) (a pressure for clumpiness). Here we investigate wordform similarity in the lexicon, using measures of word distance (e.g., phonological neighborhood density) to ask whether there is evidence for dispersion or clumpiness of wordforms in the lexicon. We develop a novel method to compare lexicons to phonotactically-controlled baselines that provide a null hypothesis for how clumpy or sparse wordforms would be as the result of only phonotactics. Results for four languages, Dutch, English, German and French, show that the space of monomorphemic wordforms is clumpier than what would be expected by the best chance model according to a wide variety of measures: minimal pairs, average Levenshtein distance and several network properties. This suggests a fundamental drive for regularity in the lexicon that conflicts with the pressure for words to be as phonologically distinct as possible. Keywords: Linguistics; Lexical design; Communication; Phonotactic
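Two of the clumpiness measures named in the abstract, minimal-pair counts and average Levenshtein distance, are straightforward to compute over a word list. The sketch below is a generic illustration of those measures, not the paper's actual pipeline; it operates on orthographic strings, whereas the study works on phonological forms.

```python
from itertools import combinations

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def minimal_pairs(words):
    """Count unordered pairs of wordforms at edit distance exactly 1."""
    return sum(1 for a, b in combinations(words, 2)
               if levenshtein(a, b) == 1)
```

Comparing these statistics on a real lexicon against phonotactically matched pseudo-lexicons is what yields the "clumpier than chance" conclusion.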
Probabilistic Meta-Representations Of Neural Networks
Existing Bayesian treatments of neural networks are typically characterized
by weak prior and approximate posterior distributions according to which all
the weights are drawn independently. Here, we consider a richer prior
distribution in which units in the network are represented by latent variables,
and the weights between units are drawn conditionally on the values of the
collection of those variables. This allows rich correlations between related
weights, and can be seen as realizing a function prior with a Bayesian
complexity regularizer ensuring simple solutions. We illustrate the resulting
meta-representations and representations, elucidating the power of this prior.
Comment: presented at UAI 2018 Uncertainty In Deep Learning Workshop (UDL AUG. 2018)