
    Spectral statistics for unitary transfer matrices of binary graphs

    Quantum graphs have recently been introduced as model systems for studying the spectral statistics of linear wave problems with chaotic classical limits. It is proposed here to generalise this approach by considering arbitrary directed graphs with unitary transfer matrices. An exponentially increasing contribution to the form factor is identified when performing a diagonal summation over periodic-orbit degeneracy classes. A special class of graphs, so-called binary graphs, is studied in more detail. For these, the conditions for periodic-orbit pairs to be correlated (including correlations due to the unitarity of the transfer matrix) can be given explicitly. Using combinatorial techniques, it is possible to perform the summation over correlated periodic-orbit pair contributions to the form factor in some low-dimensional cases. Gradual convergence towards random matrix results is observed as the number of vertices of the binary graphs increases. Comment: 18 pages, 8 figures.
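
    The quantity at stake can be made concrete with a toy computation. The sketch below is an illustrative stand-in, not the paper's construction: vertex scattering is modeled by Haar-random U(2) blocks on a binary de Bruijn graph, and the function names are invented for this example. It builds a unitary transfer matrix and estimates the form factor K(n) = <|Tr U^n|^2>/N, for which the random matrix (CUE) prediction is K(n) = n/N for n <= N.

        import numpy as np

        rng = np.random.default_rng(0)

        def haar_u2():
            """Haar-random 2x2 unitary via QR of a complex Gaussian matrix."""
            z = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
            q, r = np.linalg.qr(z)
            return q * (np.diag(r) / np.abs(np.diag(r)))

        def binary_graph_unitary(m):
            """Unitary transfer matrix on the binary de Bruijn graph with N = 2**m
            vertices: sources v and v + N/2 both feed targets 2v and 2v + 1, so each
            such source pair is coupled through an independent 2x2 unitary block,
            which makes the full N x N matrix unitary."""
            N = 2 ** m
            U = np.zeros((N, N), dtype=complex)
            for v in range(N // 2):
                W = haar_u2()
                U[2 * v, [v, v + N // 2]] = W[0]
                U[2 * v + 1, [v, v + N // 2]] = W[1]
            return U

        def form_factor(m, n_max, realizations=200):
            """K(n) = <|Tr U^n|^2> / N, averaged over random phase realizations."""
            N = 2 ** m
            K = np.zeros(n_max)
            for _ in range(realizations):
                ev = np.linalg.eigvals(binary_graph_unitary(m))
                for n in range(1, n_max + 1):
                    K[n - 1] += abs(np.sum(ev ** n)) ** 2 / N
            return K / realizations

        # Gradual approach to the CUE line K(n) = n/N is expected as m grows.
        print(form_factor(m=6, n_max=8))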

    Chaotic behavior and damage spreading in the Glauber Ising model - a master equation approach

    We investigate the sensitivity of the time evolution of a kinetic Ising model with Glauber dynamics to the initial conditions. To do so, we apply the "damage spreading" method, i.e., we study the simultaneous evolution of two identical systems subjected to the same thermal noise. We derive a master equation for the joint probability distribution of the two systems. We then solve this master equation within an effective-field approximation which goes beyond the usual mean-field approximation by retaining fluctuations, albeit in a rather simplistic manner. The resulting effective-field theory is applied to different physical situations: it is used to analyze the fixed points of the master equation and their stability, and to identify regular and chaotic phases of the Glauber Ising model. We also discuss the relation of our results to directed percolation. Comment: 9 pages RevTeX, 4 EPS figures.
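
    The damage-spreading protocol is easy to state in code. A minimal simulation sketch follows, under stated assumptions (2D square lattice, heat-bath form of the Glauber rates, single-spin updates); the abstract's effective-field treatment is analytical and is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)

        def glauber_step(s, beta, site, r):
            """Heat-bath update: set the spin to +1 with probability
            1 / (1 + exp(-2*beta*h)), using the shared random number r."""
            L = s.shape[0]
            i, j = site
            h = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                 + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            s[i, j] = 1 if r < 1.0 / (1.0 + np.exp(-2.0 * beta * h)) else -1

        L, beta, sweeps = 32, 0.5, 200
        a = rng.choice([-1, 1], size=(L, L))
        b = a.copy()
        b[L // 2, L // 2] *= -1              # single-site initial damage

        for _ in range(sweeps):
            for i in range(L):
                for j in range(L):
                    r = rng.random()         # identical thermal noise for both copies
                    glauber_step(a, beta, (i, j), r)
                    glauber_step(b, beta, (i, j), r)

        print("damage per spin:", np.mean(a != b))   # Hamming distance / N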

    Aeroelastic Response and Protection of Space Shuttle External Tank Cable Trays

    Sections of the Space Shuttle External Tank Liquid Oxygen (LO2) and Liquid Hydrogen (LH2) cable trays are shielded from potentially damaging airloads by foam Protuberance Aerodynamic Load (PAL) Ramps. Flight-standard LO2 and LH2 cable tray sections were tested with and without PAL Ramp models in the United States Air Force Arnold Engineering Development Center's (AEDC) 16T transonic wind tunnel to obtain experimental data on the aeroelastic stability and response characteristics of the trays, as part of the larger effort to determine whether the PAL ramps can be safely modified or removed. Computational Fluid Dynamics simulations of the full-stack shuttle launch configuration were used to investigate the flow characteristics around and under the cable trays without the protective PAL ramps and to define the maximum crossflow Mach numbers and dynamic pressures experienced during launch. These crossflow conditions were used to establish wind tunnel test conditions, which also included conservative margins. For all of the conditions and configurations tested, no aeroelastic instabilities or unacceptable dynamic response levels were encountered, and no visible structural damage was experienced by any of the tested cable tray sections. Based upon this aeroelastic characterization test, three potentially acceptable alternatives are available for the LO2 cable tray PAL Ramps: Mini-Ramps, Tray Fences, or No Ramps. All configurations were tested to maximum conditions, except the LH2 trays at a -15 deg. crossflow angle. This exception is the only caveat preventing the proposal of acceptable alternative configurations for the LH2 trays as well. Structural assessment of all tray loads and tray response measurements from launches following the Shuttle Return to Flight with the existing PAL Ramps will determine the acceptability of these PAL Ramp alternatives.

    Valuations on lattice polytopes

    This survey covers classification results for valuations defined on lattice polytopes that intertwine the special linear group over the integers. The basic real-valued valuations, the coefficients of the Ehrhart polynomial, are introduced, and their characterization by Betke and Kneser is discussed. More recent results include classification theorems for vector- and convex-body-valued valuations. © Springer International Publishing AG 2017
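
    The Ehrhart coefficients admit a small worked example: counting lattice points in successive dilates tP of a lattice polygon and interpolating a degree-2 polynomial recovers them (the leading coefficient is the area, the constant term is 1). A brute-force sketch follows; the point-in-polygon test over a bounding box is an implementation convenience, not part of the theory, and the names are invented for this example.

        import numpy as np
        from itertools import product

        def in_polygon(pt, verts):
            """Point-in-convex-polygon test via cross products (vertices in ccw order);
            boundary points (cross product zero) count as inside."""
            n = len(verts)
            for k in range(n):
                ax, ay = verts[k]
                bx, by = verts[(k + 1) % n]
                if (bx - ax) * (pt[1] - ay) - (by - ay) * (pt[0] - ax) < 0:
                    return False
            return True

        def lattice_count(verts, t):
            """Number of lattice points in the t-th dilate of the polygon."""
            scaled = [(t * x, t * y) for x, y in verts]
            xs = [v[0] for v in scaled]
            ys = [v[1] for v in scaled]
            return sum(in_polygon(p, scaled)
                       for p in product(range(min(xs), max(xs) + 1),
                                        range(min(ys), max(ys) + 1)))

        # Unit square: Ehrhart polynomial is (t + 1)^2 = t^2 + 2t + 1.
        square = [(0, 0), (1, 0), (1, 1), (0, 1)]
        counts = [lattice_count(square, t) for t in (1, 2, 3)]
        coeffs = np.polyfit((1, 2, 3), counts, deg=2)
        print(counts, np.round(coeffs, 6))   # leading coeff = area (1), constant = 1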

    Search for the QCD critical point in nuclear collisions at the CERN SPS

    Pion production in nuclear collisions at the SPS is investigated with the aim of searching, in a restricted domain of the phase diagram, for power laws in the behavior of correlations which are compatible with critical QCD. We have analyzed interactions of nuclei of different sizes (p+p, C+C, Si+Si, Pb+Pb) at 158A GeV, adopting, as appropriate observables, scaled factorial moments in a search for intermittent fluctuations in transverse dimensions. The analysis is performed for π⁺π⁻ pairs with invariant mass very close to the two-pion threshold. In this sector one may capture critical fluctuations of the sigma component in a hadronic medium, even if the σ-meson has no well-defined vacuum state. It turns out that for the Pb+Pb system the proposed analysis technique cannot be applied without entering the invariant-mass region with strong Coulomb correlations; the treatment therefore becomes inconclusive in this case. Our results for the other systems indicate the presence of power-law fluctuations in the freeze-out state of Si+Si, approaching in size the prediction of critical QCD. Comment: 31 pages, 11 figures.
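
    For readers unfamiliar with the observable, here is a minimal sketch of second scaled factorial moments, assuming toy uncorrelated events in a square transverse momentum window and the horizontally averaged definition of F_2; real analyses involve background subtraction and mixed-event baselines that are omitted here. Growth of F_2 as a power of the number of cells signals intermittent fluctuations.

        import numpy as np

        def f2(events, M, lim=1.5):
            """F_2(M) = <(1/M^2) sum n(n-1)> / <(1/M^2) sum n>^2 over an
            M x M grid of transverse-momentum cells; F_2 ~ 1 for Poissonian,
            uncorrelated particle production."""
            num, den = 0.0, 0.0
            edges = np.linspace(-lim, lim, M + 1)
            for ev in events:
                n, _, _ = np.histogram2d(ev[:, 0], ev[:, 1], bins=[edges, edges])
                num += np.mean(n * (n - 1))
                den += np.mean(n)
            num /= len(events)
            den /= len(events)
            return num / den ** 2

        # Toy events: uncorrelated (Poissonian) pairs give F_2 ~ 1 at every M.
        rng = np.random.default_rng(2)
        events = [rng.uniform(-1.5, 1.5, size=(rng.poisson(50), 2))
                  for _ in range(500)]
        for M in (4, 8, 16, 32):
            print(M, round(f2(events, M), 3))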

    Dynamically Tuning Processor Resources with Adaptive Processing

    The productivity of modern society has become inextricably linked to its ability to produce energy-efficient computing technology. Increasingly sophisticated mobile computing systems, powered for hours solely by batteries, continue to proliferate rapidly throughout society, while battery technology improves at a much slower pace. In large data centers that handle everything from online orders for a dot-com company to sophisticated Web searches, row upon row of tightly packed computers may be warehoused in a city block. Microprocessor energy wastage in such a facility directly translates into higher electric bills. Simply receiving sufficient electricity from utilities to power such a center is no longer certain. Given this situation, energy efficiency has rapidly moved to the forefront of modern microprocessor design. The adaptive processing approach to improving microprocessor energy efficiency dynamically tunes major microprocessor resources, such as caches and hardware queues, during execution to better match varying application needs [1, 2]. This tuning usually involves reducing the size of a resource when its full capabilities are not needed, then restoring the disabled portions when they are needed again. Dynamically tailoring processor resources in active use contrasts sharply with techniques that simply turn off entire sections of a processor when they become idle. Presenting the application with the required amount of hardware, and nothing more, throughout its execution can achieve a potentially significant reduction in energy consumption. The challenges facing adaptive processing lie in achieving this greater efficiency with reasonable hardware and software overhead, and doing so without incurring undue performance loss. Unlike reconfigurable computing, which typically uses very different technology such as FPGAs, adaptive processing exploits the dynamic superscalar design approach that developers have used successfully in many generations of general-purpose processors. Whereas reconfigurable processors must demonstrate performance or energy savings large enough to overcome very large clock-frequency and circuit-density disadvantages, adaptive processors typically have baseline overheads of only a few percent.
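
    The interval-based tuning loop can be sketched abstractly. The controller below illustrates the general idea only, not the specific hardware designs surveyed in the article; the Cache interface, the thresholds, and all names are assumptions made for this sketch.

        class AdaptiveCacheController:
            """Shrinks a set-associative cache while performance holds, and
            restores disabled ways as soon as the miss rate degrades."""

            def __init__(self, cache, max_ways=8, tolerance=0.02):
                self.cache = cache          # assumed to expose miss_rate() and set_ways()
                self.ways = max_ways
                self.max_ways = max_ways
                self.tolerance = tolerance  # acceptable miss-rate increase
                self.baseline = None

            def end_of_interval(self):
                """Called every N cycles by the monitoring hardware/software."""
                miss = self.cache.miss_rate()
                if self.baseline is None:
                    self.baseline = miss
                if miss <= self.baseline + self.tolerance and self.ways > 1:
                    self.ways -= 1          # application fits in less cache: save energy
                elif miss > self.baseline + self.tolerance and self.ways < self.max_ways:
                    self.ways += 1          # demand grew: re-enable a way
                    self.baseline = None    # re-learn the baseline for the new phase
                self.cache.set_ways(self.ways)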

    Dark sectors 2016 Workshop: community report

    This report, based on the Dark Sectors workshop at SLAC in April 2016, summarizes the scientific importance of searches for dark-sector dark matter and forces at masses beneath the weak scale, the status of this broad international field, the important milestones motivating future exploration, and promising experimental opportunities to reach these milestones over the next 5-10 years.

    Thermodynamic Basis for the Emergence of Genomes during Prebiotic Evolution

    The RNA world hypothesis views modern organisms as descendants of RNA molecules. The earliest RNA molecules must have been random sequences, from which the first genomes that coded for polymerase ribozymes emerged. The quasispecies theory of Eigen predicts the existence of an error threshold limiting genomic stability during such transitions, but does not address the spontaneity of changes. Following a recent theoretical approach, we applied quasispecies theory combined with kinetic/thermodynamic descriptions of RNA replication to analyze the collective behavior of RNA replicators based on known experimental kinetics data. We find that, with increasing fidelity (the relative rate of base extension for Watson-Crick versus mismatched base pairs), replication without enzymes, with ribozymes, and with protein-based polymerases is above, near, and below a critical point, respectively. Prebiotic evolution must therefore have crossed this critical region. Over large regions of the phase diagram, fitness increases with increasing fidelity, biasing random drifts in sequence space toward 'crystallization.' This region encloses the experimental nonenzymatic fidelity value, favoring evolution toward polymerase sequences with ever higher fidelity, despite error rates above the error catastrophe threshold. Our work shows that the experimentally characterized kinetics and thermodynamics of RNA replication allow us to determine the physicochemical conditions required for the spontaneous crystallization of biological information. Our findings also suggest that, among many potential oligomers capable of templated replication, RNAs may have evolved to form prebiotic genomes due to the value of their nonenzymatic fidelity.
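
    The critical point referred to above can be illustrated with the textbook single-peak quasispecies model (an illustrative stand-in, not the paper's kinetic/thermodynamic model): with superiority sigma and per-base copy fidelity q, the master-sequence fraction vanishes below the critical fidelity q_c = sigma^(-1/L).

        # Two-class Eigen model; back mutation to the master is neglected.
        def master_fraction(q, L, sigma):
            """Steady-state master fraction x* = (sigma*Q - 1)/(sigma - 1),
            with genome copy fidelity Q = q**L; x* = 0 beyond the threshold."""
            Q = q ** L
            return max(0.0, (sigma * Q - 1.0) / (sigma - 1.0))

        L, sigma = 100, 10.0
        q_c = sigma ** (-1.0 / L)            # critical per-base fidelity
        print(f"critical fidelity q_c = {q_c:.4f}")
        for q in (0.96, 0.98, q_c, 0.99, 0.999):
            print(f"q = {q:.4f}  ->  master fraction {master_fraction(q, L, sigma):.3f}")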

    Lethal Mutants and Truncated Selection Together Solve a Paradox of the Origin of Life

    BACKGROUND: Many attempts have been made to describe the origin of life, one of which is Eigen's cycle of autocatalytic reactions [Eigen M (1971) Naturwissenschaften 58, 465-523], in which primordial life molecules are replicated with limited accuracy through autocatalytic reactions. For successful evolution, the information carrier (either RNA or DNA or their precursor) must be transmitted to the next generation with a minimal number of misprints. In Eigen's theory, the maximum chain length that can be maintained is restricted to 100-1,000 nucleotides, while the most primitive genome requires a length of around 7,000-20,000 nucleotides. This is the famous error catastrophe paradox. How to solve this puzzle is an interesting and important problem in the theory of the origin of life. METHODOLOGY/PRINCIPAL FINDINGS: We use methods of statistical physics to resolve this paradox by carefully analyzing the implications for the critical chain length of neutral and lethal mutants and of truncated selection (i.e., fitness being zero beyond a certain Hamming distance from the master sequence). While neutral mutants play an important role in evolution, they do not provide a solution to the paradox. We find that lethal mutants and truncated selection together can solve the error catastrophe paradox. There is a principal difference between the prebiotic-molecule self-replication and proto-cell self-replication stages of the origin of life. CONCLUSIONS/SIGNIFICANCE: We have applied methods of statistical physics to achieve an important breakthrough in the molecular theory of the origin of life. Our results will inspire further studies on the molecular theory of the origin of life and biological evolution.
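
    The paradox stated in the BACKGROUND section reduces to one estimate. Below is a minimal numerical illustration of the classical threshold (assumption: the standard Eigen bound L_max ~ ln(sigma)/(1 - q) with a round superiority sigma = 10; the paper's lethal-mutant and truncated-selection corrections are not modeled here).

        import numpy as np

        def max_chain_length(error_rate, sigma=10.0):
            """Maximum maintainable chain length from q**L > 1/sigma, i.e.
            L_max ~ ln(sigma) / (1 - q) for per-base fidelity q close to 1."""
            return np.log(sigma) / error_rate

        for err in (1e-2, 1e-3, 1e-4):
            print(f"error rate {err:.0e} -> L_max ~ {max_chain_length(err):,.0f} nt")
        # ~230 nt at a 1% error rate versus the ~7,000-20,000 nt needed for a
        # minimal genome: closing this gap is what lethal mutants together with
        # truncated selection are invoked to do.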