
    An Electronic Mach-Zehnder Interferometer

    Double-slit electron interferometers, fabricated in high-mobility two-dimensional electron gas (2DEG), have proved to be very powerful tools for studying coherent wave-like phenomena in mesoscopic systems. However, they suffer from small fringe visibility due to the many channels in each slit, and from poor sensitivity to small currents due to their open geometry. Moreover, such interferometers do not function in a high magnetic field, namely in the quantum Hall effect (QHE) regime, since the field destroys the symmetry between the left and right slits. Here, we report on the fabrication and operation of a novel, single-channel, two-path electron interferometer that functions in a high magnetic field. It is the first electronic analog of the well-known optical Mach-Zehnder (MZ) interferometer. Based on transport through a single edge state in a closed geometry in the QHE regime, the interferometer is highly sensitive and exhibits very high visibility (62%). However, the interference pattern decays precipitously with increasing electron temperature or energy. While we do not understand the reason for the dephasing, we show via shot-noise measurements that it is not a decoherence process resulting from inelastic scattering events. Comment: to appear in Nature
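
    As an illustration of the visibility figure quoted above, the following minimal sketch simulates an Aharonov-Bohm-modulated transmission and extracts the fringe visibility (I_max - I_min)/(I_max + I_min); all parameter values are illustrative assumptions, not taken from the experiment.

```python
import numpy as np

# Minimal sketch: fringe visibility of an electronic Mach-Zehnder
# interferometer. The transmission oscillates with the Aharonov-Bohm
# phase phi; the 62% visibility quoted in the abstract corresponds to
# (I_max - I_min) / (I_max + I_min). All numbers here are illustrative.

def transmission(phi, t0=0.5, visibility=0.62):
    """Average transmission modulated by the AB phase phi."""
    return t0 * (1.0 + visibility * np.cos(phi))

phi = np.linspace(0, 4 * np.pi, 1000)   # AB phase swept by a modulation gate
current = transmission(phi)             # proportional to the measured current

i_max, i_min = current.max(), current.min()
extracted = (i_max - i_min) / (i_max + i_min)
print(f"extracted visibility: {extracted:.2f}")   # ~0.62 by construction
```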

    Lethal Mutants and Truncated Selection Together Solve a Paradox of the Origin of Life

    BACKGROUND: Many attempts have been made to describe the origin of life, one of which is Eigen's cycle of autocatalytic reactions [Eigen M (1971) Naturwissenschaften 58, 465-523], in which primordial life molecules are replicated with limited accuracy through autocatalytic reactions. For successful evolution, the information carrier (RNA, DNA, or their precursor) must be transmitted to the next generation with a minimal number of misprints. In Eigen's theory, the maximum chain length that can be maintained is restricted to 100-1000 nucleotides, while the most primitive genome is around 7000-20,000 nucleotides long. This is the famous error catastrophe paradox; how to resolve it is an interesting and important problem in the theory of the origin of life. METHODOLOGY/PRINCIPAL FINDINGS: We use methods of statistical physics to resolve this paradox by carefully analyzing the implications for the critical chain length of neutral and lethal mutants and of truncated selection (i.e., fitness that is zero beyond a certain Hamming distance from the master sequence). While neutral mutants play an important role in evolution, they do not provide a solution to the paradox. We find that lethal mutants and truncated selection together can resolve the error catastrophe paradox. There is a principal difference between the prebiotic-molecule self-replication and proto-cell self-replication stages in the origin of life. CONCLUSIONS/SIGNIFICANCE: We have applied methods of statistical physics to achieve an important breakthrough in the molecular theory of the origin of life. Our results should inspire further studies of the molecular theory of the origin of life and of biological evolution.
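
    For context, here is a minimal sketch of the classic Eigen error threshold that generates the paradox described above (not the paper's extended model with lethal mutants and truncated selection); the selective advantage sigma = 10 is an illustrative assumption.

```python
import numpy as np

# Sketch of the classic Eigen error threshold (Eigen 1971), which
# underlies the paradox: a master sequence of length L copied with
# per-nucleotide fidelity q survives selection only if
#     L < ln(sigma) / (1 - q),
# where sigma is the master's selective advantage. This is the baseline
# result; the paper's resolution (lethal mutants plus truncated
# selection) modifies it, and the numbers below are illustrative.

def max_chain_length(sigma, q):
    """Maximum maintainable genome length under the Eigen threshold."""
    return np.log(sigma) / (1.0 - q)

for q in (0.99, 0.999, 0.9999):            # per-base copying fidelities
    print(f"q = {q}: L_max ~ {max_chain_length(sigma=10.0, q=q):.0f} nt")
# q = 0.99 gives ~230 nt and q = 0.999 gives ~2300 nt: without high
# fidelity, a 7000-20,000 nt primitive genome cannot be maintained.
```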

    The Cosmology of Composite Inelastic Dark Matter

    Composite dark matter is a natural setting for implementing inelastic dark matter: the O(100 keV) mass splitting arises from spin-spin interactions of the constituent fermions. In models where the constituents are charged under an axial U(1) gauge symmetry that also couples to the Standard Model quarks, dark matter scatters inelastically off Standard Model nuclei and can explain the DAMA/LIBRA annual modulation signal. This article describes the early-Universe cosmology of a minimal implementation of a composite inelastic dark matter model in which the dark matter is a meson composed of a light and a heavy quark. The synthesis of the constituent quarks into dark mesons and baryons results in several qualitatively different configurations of the resulting dark matter hadrons, depending on the relative mass scales in the system. Comment: 31 pages, 4 figures; references added, typos corrected

    Thermodynamic Basis for the Emergence of Genomes during Prebiotic Evolution

    The RNA world hypothesis views modern organisms as descendants of RNA molecules. The earliest RNA molecules must have been random sequences, from which the first genomes that coded for polymerase ribozymes emerged. Eigen's quasispecies theory predicts the existence of an error threshold limiting genomic stability during such transitions, but does not address the spontaneity of the changes. Following a recent theoretical approach, we applied quasispecies theory combined with kinetic/thermodynamic descriptions of RNA replication to analyze the collective behavior of RNA replicators, based on known experimental kinetic data. We find that, with increasing fidelity (the relative rate of base extension for Watson-Crick versus mismatched base pairs), replication without enzymes, with ribozymes, and with protein-based polymerases lies above, near, and below a critical point, respectively. Prebiotic evolution therefore must have crossed this critical region. Over large regions of the phase diagram, fitness increases with increasing fidelity, biasing random drift in sequence space toward ‘crystallization.’ This region encloses the experimental nonenzymatic fidelity value, favoring evolution toward polymerase sequences with ever higher fidelity, despite error rates above the error catastrophe threshold. Our work shows that the experimentally characterized kinetics and thermodynamics of RNA replication allow us to determine the physicochemical conditions required for the spontaneous crystallization of biological information. Our findings also suggest that, among the many potential oligomers capable of templated replication, RNAs may have evolved to form prebiotic genomes because of the value of their nonenzymatic fidelity.
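
    As a rough illustration of the critical point described above, here is a sketch of the textbook single-peak quasispecies model, in which the master-sequence frequency vanishes below a critical copying fidelity; the length L and advantage sigma are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Sketch of the single-peak quasispecies model behind the abstract's
# "critical point": a master sequence of length L replicates sigma times
# faster than the error tail and is copied exactly with probability
# Q = q**L. Neglecting back mutation, the equilibrium master frequency is
#     x* = (sigma*Q - 1) / (sigma - 1)  if sigma*Q > 1, else 0,
# so information "crystallizes" only above a critical fidelity.
# L and sigma below are illustrative, not taken from the paper.

def master_fraction(q, L=100, sigma=10.0):
    Q = q ** L                       # probability of an error-free copy
    x = (sigma * Q - 1.0) / (sigma - 1.0)
    return max(x, 0.0)               # below threshold the master is lost

for q in (0.97, 0.98, 0.99, 0.995):
    print(f"q = {q}: master fraction = {master_fraction(q):.3f}")
# The jump from 0 to a finite fraction marks the error threshold that
# prebiotic replicators had to cross with increasing fidelity.
```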

    Consistency analysis of metabolic correlation networks

    Background: Metabolic correlation networks are derived from the covariance of metabolites across replicates of metabolomics experiments. They constitute an interesting intermediate between topology (the system's architecture, defined by the set of reactions between metabolites) and dynamics (the metabolic concentrations observed as fluctuations around steady-state values in the metabolic network). Results: Here we analyze how such a correlation network changes over time, and compare the relative positions of metabolites in the correlation networks with those in established metabolic networks derived from genome databases. We find that network similarity indeed decreases with increasing time difference between networks over a day/night course and, counterintuitively, that proximity of metabolites in the correlation network is no indicator of their proximity in the metabolic network. Conclusion: The organizing principles of correlation networks are distinct from those of metabolic reaction maps. Time courses of correlation networks may in the future prove an important data source for understanding these organizing principles.
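
    As a sketch of the general construction (not the authors' exact pipeline), the following code derives a correlation network from replicate measurements and scores the similarity of two time points by the Jaccard overlap of their edge sets; the data and the 0.5 correlation threshold are placeholders.

```python
import numpy as np

# Illustrative sketch of how a metabolic correlation network is derived:
# metabolite concentrations measured across replicates are correlated
# pairwise, and strong correlations become network edges. Comparing the
# edge sets of two time points gives a simple network-similarity score.
# The data here are random placeholders, not the study's measurements.

rng = np.random.default_rng(0)
n_replicates, n_metabolites = 20, 15
day = rng.normal(size=(n_replicates, n_metabolites))
night = rng.normal(size=(n_replicates, n_metabolites))

def correlation_edges(data, threshold=0.5):
    """Edge set {(i, j)} of metabolite pairs with |Pearson r| >= threshold."""
    r = np.corrcoef(data, rowvar=False)
    i, j = np.triu_indices_from(r, k=1)
    return {(a, b) for a, b, v in zip(i, j, r[i, j]) if abs(v) >= threshold}

e_day, e_night = correlation_edges(day), correlation_edges(night)
jaccard = len(e_day & e_night) / max(len(e_day | e_night), 1)
print(f"edge-set similarity (Jaccard): {jaccard:.2f}")
```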

    Slepian functions and their use in signal estimation and spectral analysis

    It is a well-known fact that mathematical functions that are timelimited (or spacelimited) cannot be simultaneously bandlimited (in frequency). Yet the finite precision of measurement and computation unavoidably bandlimits our observation and modeling of scientific data, and we often only have access to, or are only interested in, a study area that is temporally or spatially bounded. In the geosciences we may be interested in spectrally modeling a time series defined only on a certain interval, or we may want to characterize a specific geographical area observed using an effectively bandlimited measurement device. It is clear that analyzing and representing scientific data of this kind is facilitated if a basis of functions can be found that are "spatiospectrally" concentrated, i.e., "localized" in both domains at the same time. Here, we give a theoretical overview of one particular approach to this "concentration" problem, as originally proposed for time series by Slepian and coworkers in the 1960s. We show how this framework leads to practical algorithms and statistically performant methods for the analysis of signals and their power spectra in one and two dimensions, and on the surface of a sphere. Comment: Submitted to the Handbook of Geomathematics, edited by Willi Freeden, Zuhair M. Nashed and Thomas Sonar, and to be published by Springer Verlag
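
    A minimal sketch of the 1-D version of this machinery, using the discrete prolate spheroidal (Slepian) tapers available in SciPy for a simple multitaper spectral estimate; the signal, time-bandwidth product, and taper count are illustrative choices.

```python
import numpy as np
from scipy.signal.windows import dpss

# Sketch of the 1-D Slepian (discrete prolate spheroidal) approach the
# text describes: tapers maximally concentrated in a bandwidth are used
# for a simple multitaper power-spectrum estimate. Parameters are
# illustrative.

n, nw, k = 1024, 4, 7                 # samples, time-bandwidth product, tapers
t = np.arange(n)
signal = np.sin(2 * np.pi * 0.2 * t) + np.random.default_rng(1).normal(size=n)

tapers = dpss(n, nw, Kmax=k)          # k orthogonal Slepian tapers, shape (k, n)
# Average the spectra of the k independently tapered copies of the data.
spectra = np.abs(np.fft.rfft(tapers * signal, axis=1)) ** 2
multitaper_psd = spectra.mean(axis=0)

freqs = np.fft.rfftfreq(n)
print(f"spectral peak near f = {freqs[multitaper_psd.argmax()]:.3f}")  # ~0.2
```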

    A comprehensive re-analysis of the Golden Spike data: Towards a benchmark for differential expression methods

    Background: The Golden Spike data set has been used to validate a number of methods for summarizing Affymetrix data sets, sometimes with seemingly contradictory results. Much less use has been made of this data set to evaluate differential expression methods. It has been suggested that this data set should not be used for method comparison due to a number of inherent flaws. Results: We have used this data set in a comparison of methods that is far more extensive than any previous study. We outline six stages in the analysis pipeline where decisions need to be made, and show how the results of these decisions can lead to the apparently contradictory results previously found. We also show that, while flawed, this data set is still a useful tool for method comparison, particularly for identifying combinations of summarization and differential expression methods that are unlikely to perform well on real data sets. We describe a new benchmark, AffyDEComp, that can be used for such comparisons. Conclusion: We conclude with recommendations for preferred Affymetrix analysis tools, and for the development of future spike-in data sets.
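
    To make the benchmarking idea concrete (this is not AffyDEComp itself), the sketch below scores a hypothetical differential-expression method against known spike-in truth using a rank-based ROC AUC; the simulated p-values are placeholders.

```python
import numpy as np

# Illustrative sketch of the kind of comparison a spike-in benchmark
# enables: with ground truth known (which probes were spiked in at
# changed concentrations), a differential-expression method's p-values
# can be scored by how well they rank true changes first (ROC AUC).
# The p-values below are simulated, not Golden Spike results.

rng = np.random.default_rng(2)
truth = np.r_[np.ones(100), np.zeros(900)]          # 100 true DE probes
pvals = np.where(truth == 1,
                 rng.beta(0.3, 5.0, size=1000),     # true DE: small p-values
                 rng.uniform(size=1000))            # null: uniform p-values

def auc(scores, labels):
    """Rank-based ROC AUC (Mann-Whitney U over n_pos * n_neg)."""
    order = np.argsort(scores)                      # ascending p-value
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    neg = labels == 0
    u = ranks[neg].sum() - neg.sum() * (neg.sum() + 1) / 2
    return u / ((~neg).sum() * neg.sum())

print(f"ROC AUC: {auc(pvals, truth):.3f}")          # closer to 1 is better
```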

    Scalar and vector Slepian functions, spherical signal estimation and spectral analysis

    It is a well-known fact that mathematical functions that are timelimited (or spacelimited) cannot be simultaneously bandlimited (in frequency). Yet the finite precision of measurement and computation unavoidably bandlimits our observation and modeling of scientific data, and we often only have access to, or are only interested in, a study area that is temporally or spatially bounded. In the geosciences we may be interested in spectrally modeling a time series defined only on a certain interval, or we may want to characterize a specific geographical area observed using an effectively bandlimited measurement device. It is clear that analyzing and representing scientific data of this kind is facilitated if a basis of functions can be found that are "spatiospectrally" concentrated, i.e., "localized" in both domains at the same time. Here, we give a theoretical overview of one particular approach to this "concentration" problem, as originally proposed for time series by Slepian and coworkers in the 1960s. We show how this framework leads to practical algorithms and statistically performant methods for the analysis of signals and their power spectra in one and two dimensions and, particularly for applications in the geosciences, for scalar and vectorial signals defined on the surface of a unit sphere. Comment: Submitted to the 2nd Edition of the Handbook of Geomathematics, edited by Willi Freeden, Zuhair M. Nashed and Thomas Sonar, and to be published by Springer Verlag. This is a slightly modified but expanded version of the paper arxiv:0909.5368 that appeared in the 1st Edition of the Handbook, when it was called: Slepian functions and their use in signal estimation and spectral analysis
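
    To complement the 1-D sketch above, here is a minimal version of the spherical concentration problem for zonal (order-0) functions on a polar cap, in one standard formulation; the bandwidth L and cap radius Theta are illustrative, and this is a sketch of the general approach rather than the authors' code.

```python
import numpy as np
from scipy.special import eval_legendre
from scipy.integrate import quad

# Sketch of the spherical concentration problem: for zonal (order-0)
# functions bandlimited to degree L, concentrating energy inside a polar
# cap of colatitude radius Theta leads to the eigenvalue problem
# D g = lambda g, with (in one standard form) the kernel
#   D[l, lp] = sqrt((2l+1)(2lp+1))/2 * integral_{cos Theta}^{1} P_l P_lp dx.
# Eigenvalues near 1 flag well-concentrated Slepian functions.

L, theta = 18, np.pi / 6                 # bandwidth and a 30-degree cap
a = np.cos(theta)

D = np.empty((L + 1, L + 1))
for l in range(L + 1):
    for lp in range(l, L + 1):
        integral, _ = quad(lambda x: eval_legendre(l, x) * eval_legendre(lp, x),
                           a, 1.0)
        D[l, lp] = D[lp, l] = np.sqrt((2 * l + 1) * (2 * lp + 1)) / 2 * integral

lam = np.sort(np.linalg.eigvalsh(D))[::-1]
print("leading concentration eigenvalues:", np.round(lam[:5], 3))
# Roughly (L+1)*Theta/pi eigenvalues lie close to 1 -- the spherical
# analogue of the Shannon number in the 1-D Slepian problem.
```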

    Determinants of immigration strategies in male crested macaques (Macaca nigra)

    Immigration into a new group can incur substantial costs due to resistance from residents, but can also bring reproductive benefits. Whether individuals base their immigration strategy on prospective cost-benefit ratios remains unknown. We investigated individual immigration decisions in crested macaques, a primate species with high reproductive skew in favour of high-ranking males. We found two different strategies. Males who achieved low rank in the new group usually immigrated after another male had immigrated within the previous 25 days and achieved high rank. They never got injured but also had low prospective reproductive success. We assume that these males benefitted from immigrating into a destabilized male hierarchy. Males who achieved high rank in the new group usually immigrated independently of previous immigrations. They received injuries more frequently and therefore bore immigration costs; however, they also had higher prospective reproductive success. We conclude that male crested macaques base their immigration strategy on relative fighting ability, and thus on their potential rank in the new group, i.e., potential reproductive benefits, as well as on the potential costs of injury.

    Methods for comparative metagenomics

    Background: Metagenomics is a rapidly growing field of research that aims at studying uncultured organisms to understand the true diversity of microbes, their functions, cooperation and evolution, in environments such as soil, water, ancient remains of animals, or the digestive system of animals and humans. The recent development of ultra-high throughput sequencing technologies, which do not require cloning or PCR amplification, and can produce huge numbers of DNA reads at an affordable cost, has boosted the number and scope of metagenomic sequencing projects. Increasingly, there is a need for new ways of comparing multiple metagenomics datasets, and for fast and user-friendly implementations of such approaches. Results: This paper introduces a number of new methods for interactively exploring, analyzing and comparing multiple metagenomic datasets, which will be made freely available in a new, comparative version 2.0 of the stand-alone metagenome analysis tool MEGAN. Conclusion: There is a great need for powerful and user-friendly tools for comparative analysis of metagenomic data and MEGAN 2.0 will help to fill this gap.
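
    As a toy example of a basic comparative-metagenomics operation (illustrative only, not MEGAN's own algorithm), the sketch below compares taxon read-count profiles of several metagenomes with the Bray-Curtis dissimilarity; the sample names and counts are made-up placeholders.

```python
import numpy as np

# Illustrative sketch: per-sample taxon read counts are normalized to
# relative abundances and compared pairwise with the Bray-Curtis
# dissimilarity. Counts below are made-up placeholders.

samples = {
    "soil":  np.array([120, 30, 5, 200, 45], dtype=float),
    "water": np.array([80, 60, 15, 150, 10], dtype=float),
    "gut":   np.array([10, 300, 90, 20, 5], dtype=float),
}  # reads assigned to the same five taxa in each metagenome

def bray_curtis(u, v):
    """Dissimilarity in [0, 1]; 0 means identical taxon profiles."""
    u, v = u / u.sum(), v / v.sum()           # relative abundances
    return np.abs(u - v).sum() / (u + v).sum()

names = list(samples)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(f"{a} vs {b}: {bray_curtis(samples[a], samples[b]):.3f}")
```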