
    Maladaptation and the paradox of robustness in evolution

    Background. Organisms use a variety of mechanisms to protect themselves against perturbations. For example, repair mechanisms fix damage, feedback loops keep homeostatic systems at their setpoints, and biochemical filters distinguish signal from noise. Such buffering mechanisms are often discussed in terms of robustness, which may be measured by reduced sensitivity of performance to perturbations. Methodology/Principal Findings. I use a mathematical model to analyze the evolutionary dynamics of robustness in order to understand aspects of organismal design by natural selection. I focus on two characters: one character performs an adaptive task; the other character buffers the performance of the first character against perturbations. Increased perturbations favor enhanced buffering and robustness, which in turn decreases sensitivity and reduces the intensity of natural selection on the adaptive character. Reduced selective pressure on the adaptive character often leads to a less costly, lower performance trait. Conclusions/Significance. The paradox of robustness arises from evolutionary dynamics: enhanced robustness causes an evolutionary reduction in the adaptive performance of the target character, leading to a degree of maladaptation compared to what could be achieved by natural selection in the absence of robustness mechanisms. Over evolutionary time, buffering traits may become layered on top of each other, while the underlying adaptive traits become replaced by cheaper, lower performance components. The paradox of robustness has widespread implications for understanding organismal design.
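    A minimal toy simulation (my own construction in Python, not the model analyzed in the paper) of the dynamic described above: a buffering trait b attenuates deviations of the phenotype from its setpoint, which damps perturbations but also weakens selection on the costly adaptive trait a. All parameter values and the fitness function are assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def evolve(allow_buffering, pop=400, gens=3000, theta=1.0,
                   s=5.0, sigma=0.5, cost_a=0.2, cost_b=0.1, mut=0.02):
            # a: adaptive trait; b: buffering trait that damps deviations from theta
            a = np.full(pop, theta)
            b = np.zeros(pop)
            for _ in range(gens):
                eps = rng.normal(0.0, sigma, pop)        # perturbations
                dev = (a - theta + eps) / (1.0 + b)      # buffered deviation from setpoint
                w = np.exp(-s * dev**2 - cost_a * a - cost_b * b)
                parents = rng.choice(pop, pop, p=w / w.sum())
                a = np.clip(a[parents] + rng.normal(0.0, mut, pop), 0.0, None)
                if allow_buffering:
                    b = np.clip(b[parents] + rng.normal(0.0, mut, pop), 0.0, None)
            return a.mean(), b.mean()

        print("b fixed at zero  (mean a, mean b):", evolve(False))
        print("b free to evolve (mean a, mean b):", evolve(True))

    With b free to evolve, the run should end with a clearly positive b and a lower mean a than the no-buffering run, mirroring the "less costly, lower performance trait" outcome described in the abstract.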

    Indexing multi-dimensional uncertain data with arbitrary probability density functions

    Research Session 26: Spatial and Temporal Databases. In an "uncertain database", an object o is associated with a multi-dimensional probability density function (pdf), which describes the likelihood that o appears at each position in the data space. A fundamental operation is the "probabilistic range search" which, given a value p_q and a rectangular area r_q, retrieves the objects that appear in r_q with probability at least p_q. In this paper, we propose the U-tree, an access method designed to optimize both the I/O and CPU time of range retrieval on multi-dimensional imprecise data. The new structure is fully dynamic (i.e., objects can be incrementally inserted/deleted in any order) and does not place any constraints on the data pdfs. We verify the query and update efficiency of U-trees with extensive experiments. Postprint. The 31st International Conference on Very Large Data Bases (VLDB 2005), Trondheim, Norway, 30 August-2 September 2005. In Proceedings of 31st VLDB, 2005, v. 3, p. 922-93
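    For readers unfamiliar with the query being indexed, here is a brute-force Python sketch of the probabilistic range search itself, i.e. the predicate the U-tree is designed to accelerate. It is not the U-tree; the object names, samplers and thresholds are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(7)

        def appearance_probability(sample_pdf, rect_lo, rect_hi, n=10_000):
            """Monte Carlo estimate of Pr[object falls inside rectangle r_q].

            sample_pdf: callable drawing n positions from the object's pdf
            (works for arbitrary pdfs, matching the paper's setting).
            """
            pts = sample_pdf(n)                                   # shape (n, d)
            inside = np.all((pts >= rect_lo) & (pts <= rect_hi), axis=1)
            return inside.mean()

        def prob_range_search(objects, rect_lo, rect_hi, p_q):
            # Linear-scan version of the query the U-tree speeds up:
            # return every object whose appearance probability in r_q is >= p_q.
            return [oid for oid, sampler in objects
                    if appearance_probability(sampler, rect_lo, rect_hi) >= p_q]

        # Example: two uncertain 2-d objects with Gaussian pdfs (illustrative only).
        objects = [
            ("o1", lambda n: rng.normal([0.5, 0.5], 0.05, size=(n, 2))),
            ("o2", lambda n: rng.normal([2.0, 2.0], 0.05, size=(n, 2))),
        ]
        print(prob_range_search(objects, np.array([0.0, 0.0]),
                                np.array([1.0, 1.0]), p_q=0.9))   # -> ['o1']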

    An analysis of the local optima storage capacity of Hopfield network based fitness function models

    A Hopfield Neural Network (HNN) with a new weight update rule can be treated as a second-order Estimation of Distribution Algorithm (EDA) or Fitness Function Model (FFM) for solving optimisation problems. The HNN models promising solutions and has a capacity for storing a certain number of local optima as low-energy attractors. Solutions are generated by sampling the patterns stored in the attractors. The number of attractors a network can store (its capacity) has an impact on solution diversity and, consequently, on solution quality. This paper introduces two new HNN learning rules and presents the Hopfield EDA (HEDA), which learns weight values from samples of the fitness function. It investigates the attractor storage capacity of the HEDA and shows it to be equal to that known in the literature for a standard HNN. The relationship between HEDA capacity and linkage order is also investigated.
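    The following Python sketch shows the generic loop such a fitness-function-model EDA follows: fit Hopfield weights to high-fitness samples, then generate new candidates by settling random states into the network's attractors. It uses a plain Hebbian update and a OneMax toy fitness as stand-ins; the paper's own learning rules and its capacity analysis are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(3)

        def onemax(x):                      # toy fitness on {-1,+1}^n: number of +1 bits
            return int(np.sum(x == 1))

        def settle(x, W, sweeps=5):
            # Asynchronous Hopfield updates: descend to a low-energy attractor.
            n = len(x)
            for _ in range(sweeps * n):
                i = rng.integers(n)
                x[i] = 1 if W[i] @ x >= 0 else -1
            return x

        n, pop, gens = 30, 60, 40
        W = np.zeros((n, n))
        population = rng.choice([-1, 1], size=(pop, n))

        for _ in range(gens):
            fit = np.array([onemax(x) for x in population], dtype=float)
            elite = population[np.argsort(fit)[-pop // 4:]]     # most promising solutions
            # Hebbian-style second-order model of the elite (a generic rule;
            # the paper introduces its own weight-update rules).
            W = elite.T @ elite / len(elite)
            np.fill_diagonal(W, 0.0)
            # Sample new candidates from the model's attractors.
            population = np.array([settle(rng.choice([-1, 1], size=n), W)
                                   for _ in range(pop)])

        print("best fitness found:", max(onemax(x) for x in population), "of", n)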

    Kinetic proofreading of gene activation by chromatin remodeling

    Gene activation in eukaryotes involves the concerted action of histone tail modifiers, chromatin remodellers and transcription factors, whose precise coordination is currently unknown. We demonstrate that the experimentally observed interactions of the molecules are in accord with a kinetic proofreading scheme. Our finding could provide a basis for the development of quantitative models for gene regulation in eukaryotes based on the combinatorial interactions of chromatin modifiers. Comment: 8 pages, 2 figures; application added
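    As a reminder of what a kinetic proofreading scheme buys, here is the textbook arithmetic in Python (Hopfield/Ninio style, not the specific chromatin-remodelling scheme of the paper): every energy-consuming intermediate that can still reject the wrong substrate multiplies the error ratio by the same discrimination factor. The free-energy gap used below is an assumed value.

        import numpy as np

        ddG_over_kT = 4.0                    # assumed right-vs-wrong binding-energy gap
        f = np.exp(-ddG_over_kT)             # discrimination factor per checkpoint

        for steps in (1, 2, 3):
            # idealized error ratio after `steps` sequential proofreading checkpoints
            print(f"{steps} checkpoint(s): error ratio ~ {f**steps:.1e}")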

    Sequence learning in Associative Neuronal-Astrocytic Network

    The neuronal paradigm of studying the brain has left us with limitations in both our understanding of how neurons process information to achieve biological intelligence and how such knowledge may be translated into artificial intelligence and its most brain-derived branch, neuromorphic computing. Overturning our fundamental assumptions of how the brain works, the recent exploration of astrocytes is revealing that these long-neglected brain cells dynamically regulate learning by interacting with neuronal activity at the synaptic level. Following recent experimental evidence, we designed an associative, Hopfield-type, neuronal-astrocytic network and analyzed the dynamics of the interaction between neurons and astrocytes. We show that astrocytes are sufficient to trigger transitions between learned memories in the neuronal component of the network. Further, we mathematically derive the timing of the transitions, which is governed by the dynamics of the calcium-dependent slow currents in the astrocytic processes. Overall, we provide a brain-morphic mechanism for sequence learning that is inspired by, and aligns with, recent experimental findings. To evaluate our model, we emulated astrocytic atrophy and showed that memory recall becomes significantly impaired once a critical fraction of astrocytes is affected. This brain-inspired and brain-validated approach supports our ongoing efforts to incorporate non-neuronal computing elements in neuromorphic information processing. Comment: 8 pages, 5 figures
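    A rough Python illustration of how a slow, activity-tracking variable can push a Hopfield network through its stored memories. The sketch uses the classic slow-synapse sequencing construction (Sompolinsky-Kanter/Kleinfeld style: an asymmetric pattern-to-next-pattern term gated by a slow trace), with that trace standing in for the calcium-dependent astrocytic current. It is an analogy to, not an implementation of, the paper's neuronal-astrocytic model, and all sizes and constants are assumptions.

        import numpy as np

        rng = np.random.default_rng(5)

        N, P, tau, lam, T = 200, 3, 8.0, 2.0, 60
        xi = rng.choice([-1, 1], size=(P, N)).astype(float)     # stored memories

        W_fast = xi.T @ xi / N                                  # Hebbian auto-association
        W_slow = xi[(np.arange(P) + 1) % P].T @ xi / N          # maps memory mu -> mu+1
        np.fill_diagonal(W_fast, 0.0)

        x = xi[0].copy()                      # start the network in memory 0
        s = np.zeros(N)                       # slow, astrocyte-like activity trace

        for t in range(T):
            s += (x - s) / tau                # slow current integrates recent activity
            h = W_fast @ x + lam * (W_slow @ s)
            x = np.where(h >= 0, 1.0, -1.0)
            if t % 6 == 0:
                print(f"t={t:2d}  overlap with each memory:", np.round(xi @ x / N, 2))

    The printed overlaps show the dominant memory changing every few steps once the slow trace has built up, i.e. the slow variable, not the fast weights, triggers the transitions.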

    Is there a no-go theorem for superradiant quantum phase transitions in cavity and circuit QED?

    In cavity quantum electrodynamics (QED), the interaction between an atomic transition and the cavity field is measured by the vacuum Rabi frequency $\Omega_0$. The analogous term "circuit QED" has been introduced for Josephson junctions, because superconducting circuits behave as artificial atoms coupled to the bosonic field of a resonator. In the regime with $\Omega_0$ comparable to the two-level transition frequency, "superradiant" quantum phase transitions for the cavity vacuum have been predicted, e.g. within the Dicke model. Here, we prove that if the time-independent light-matter Hamiltonian is considered, a superradiant quantum critical point is forbidden for electric dipole atomic transitions due to the oscillator strength sum rule. In circuit QED, the capacitive coupling is analogous to the electric dipole one; yet, such a no-go property can be circumvented by Cooper pair boxes capacitively coupled to a resonator, due to their peculiar Hilbert space topology and a violation of the corresponding sum rule.
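    For reference, a compact version of the standard form of this no-go argument, written in assumed Dicke-model notation (the paper's own derivation may differ in detail):

        % Dicke model with the diamagnetic (A^2) term D(a+a^\dagger)^2:
        \[
          H \;=\; \omega\, a^\dagger a \;+\; \omega_0 J_z
             \;+\; \frac{\lambda}{\sqrt{N}}\,(a+a^\dagger)(J_+ + J_-)
             \;+\; D\,(a+a^\dagger)^2 .
        \]
        % Without the D term, the superradiant critical point sits at
        % \lambda_c = \tfrac{1}{2}\sqrt{\omega\,\omega_0}; with it, the
        % instability requires
        \[
          4\lambda^2 \;>\; \omega_0\,(\omega + 4D),
        \]
        % while the Thomas--Reiche--Kuhn (oscillator-strength) sum rule for
        % electric-dipole transitions enforces D \ge \lambda^2/\omega_0, so
        \[
          \omega_0(\omega + 4D) \;\ge\; \omega_0\,\omega + 4\lambda^2 \;>\; 4\lambda^2,
        \]
        % and the critical condition can never be met.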

    Macroscopic coherence of a single exciton state in a polydiacetylene organic quantum wire

    We show that a single exciton state in an individual ordered conjugated polymer chain exhibits macroscopic quantum spatial coherence reaching tens of microns, limited by the chain length. The spatial coherence of the k=0 exciton state is demonstrated by selecting two spatially separated emitting regions of the chain and observing their interference. Comment: 12 pages with 2 figures

    Weak pairwise correlations imply strongly correlated network states in a neural population

    Biological networks have so many possible states that exhaustive sampling is impossible. Successful analysis thus depends on simplifying hypotheses, but experiments on many systems hint that complicated, higher-order interactions among large groups of elements play an important role. In the vertebrate retina, we show that weak correlations between pairs of neurons coexist with strongly collective behavior in the responses of ten or more neurons. Surprisingly, we find that this collective behavior is described quantitatively by models that capture the observed pairwise correlations but assume no higher-order interactions. These maximum entropy models are equivalent to Ising models, and predict that larger networks are completely dominated by correlation effects. This suggests that the neural code has associative or error-correcting properties, and we provide preliminary evidence for such behavior. As a first test for the generality of these ideas, we show that similar results are obtained from networks of cultured cortical neurons. Comment: Full account of work presented at the conference on Computational and Systems Neuroscience (COSYNE), 17-20 March 2005, in Salt Lake City, Utah (http://cosyne.org)
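    The "maximum entropy models equivalent to Ising models" referred to above can be made concrete in a few lines of Python: fit fields h_i and couplings J_ij so that the model reproduces the measured firing rates and pairwise correlations. The sketch below uses synthetic binary data and exact enumeration over a handful of cells; the variable names, learning rate, and data are all assumptions, and real use would substitute recorded spike trains.

        import itertools
        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic binary "spike/silence" data for N cells (stand-in for recordings).
        N, T = 5, 5000
        data = np.where(rng.random((T, N)) < 0.3, 1.0, -1.0)

        mean_obs = data.mean(axis=0)               # firing rates to be matched
        corr_obs = data.T @ data / T               # pairwise correlations to be matched

        # All 2^N states, for exact model expectations (feasible only for small N).
        states = np.array(list(itertools.product([-1.0, 1.0], repeat=N)))

        def model_stats(h, J):
            # P(s) ~ exp(sum_i h_i s_i + 0.5 * sum_ij J_ij s_i s_j): an Ising model.
            E = states @ h + 0.5 * np.einsum("si,ij,sj->s", states, J, states)
            p = np.exp(E - E.max())
            p /= p.sum()
            return p @ states, (states * p[:, None]).T @ states

        h, J = np.zeros(N), np.zeros((N, N))
        for _ in range(2000):                      # gradient ascent = moment matching
            mean_mod, corr_mod = model_stats(h, J)
            h += 0.1 * (mean_obs - mean_mod)
            dJ = 0.1 * (corr_obs - corr_mod)
            np.fill_diagonal(dJ, 0.0)
            J += dJ

        print("max rate mismatch:", np.abs(mean_obs - model_stats(h, J)[0]).max())

    For realistic numbers of neurons the exact enumeration is replaced by Monte Carlo estimates of the model expectations.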