
    Limits on Fundamental Limits to Computation

    An indispensable part of our lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the last fifty years. Such Moore scaling now requires increasingly heroic efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and enrich our understanding of integrated-circuit scaling, we review fundamental limits to computation: in manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, we recall how some limits were circumvented, and compare loose and tight limits. We also point out that engineering difficulties encountered by emerging technologies may indicate yet-unknown limits.
    Comment: 15 pages, 4 figures, 1 table
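
    For a sense of scale of the energy limit surveyed here (an illustration, not taken from the paper), Landauer's principle bounds the heat dissipated by erasing one bit at temperature $T$:
    $$ E \ge k_B T \ln 2 \approx 1.38\times10^{-23}\,\mathrm{J/K} \times 300\,\mathrm{K} \times 0.693 \approx 2.9\times10^{-21}\,\mathrm{J}. $$
    At room temperature a single bit erasure thus costs at least about 3 zJ, still well below the switching energy of present-day transistors, which is part of why this limit is classified as loose in practice.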

    A topological realization of the congruence subgroup kernel

    A number of years ago, Kumar Murty pointed out to me that the computation of the fundamental group of a Hilbert modular surface ([7], IV, §6) and the computation of the congruence subgroup kernel of SL(2) ([6]) were surprisingly similar. We puzzled over this, in particular over the role of elementary matrices in both computations. We formulated a very general result on the fundamental group of a Satake compactification of a locally symmetric space. This led to our joint paper [1] with Lizhen Ji and Les Saper on these fundamental groups. Although the results in it were intriguingly similar to the corresponding calculations of the congruence subgroup kernel of the underlying algebraic group in [5], we were not able to demonstrate a direct connection (cf. [1], §7). The purpose of this note is to explain such a connection. A covering space is constructed from inverse limits of reductive Borel-Serre compactifications. The congruence subgroup kernel then appears as the group of deck transformations of this covering. The key to this is the computation of the fundamental group in [1].
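
    For orientation (one standard formulation, not specific to this note): for an algebraic group $G$ over a number field $k$, the congruence subgroup kernel is
    $$ C(G) = \ker\bigl(\widehat{G(k)} \longrightarrow \overline{G(k)}\bigr), $$
    where $\widehat{G(k)}$ and $\overline{G(k)}$ denote the completions of $G(k)$ with respect to the arithmetic and congruence topologies, respectively. The note realizes this kernel concretely as the deck transformation group of the covering described above.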

    Solar wind test of the de Broglie-Proca's massive photon with Cluster multi-spacecraft data

    Our understanding of the universe at large and small scales relies largely on electromagnetic observations. As photons are the messengers, fundamental physics has a concern in testing their properties, including the absence of mass. We use Cluster four-spacecraft data in the solar wind at 1 AU to estimate the mass upper limit for the photon. We look for deviations from Ampère's law, through the curlometer technique for the computation of the magnetic field, and through the measurements of ion and electron velocities for the computation of the current. We show that the upper bound for $m_\gamma$ lies between $1.4 \times 10^{-49}$ and $3.4 \times 10^{-51}$ kg, and thereby discuss the currently accepted lower limits in the solar wind.
    Comment: The paper points out that actual photon mass upper limits (in the solar wind) are too optimistic and model based. We instead perform a much more experiment-oriented measurement. This version matches that accepted by Astroparticle Physics.
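
    The deviation being tested is the standard Proca modification of Ampère's law (stated here for reference): a photon mass $m_\gamma$ adds a potential term,
    $$ \nabla\times\mathbf{B} = \mu_0\,\mathbf{J} + \frac{1}{c^{2}}\frac{\partial\mathbf{E}}{\partial t} - \left(\frac{m_\gamma c}{\hbar}\right)^{2}\mathbf{A}, $$
    so comparing the curlometer estimate of $\nabla\times\mathbf{B}$ with the current density inferred from the measured particle velocities bounds the residual term, and hence $m_\gamma$.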

    Fundamental Limits to Nonlinear Energy Harvesting

    Linear and nonlinear vibration energy harvesting has been the focus of considerable research in recent years. However, the fundamental limits on the harvestable energy of a harvester subjected to an arbitrary excitation force and different constraints are not yet fully understood. Understanding these limits is not only essential for an assessment of the technology's potential, but it also provides a broader perspective on current harvesting mechanisms and guidance for their improvement. Here, we derive the fundamental limits on the output power of an ideal energy harvester for arbitrary excitation waveforms and build on the current analysis framework to enable simple computation of this limit for more sophisticated setups. We show that the optimal harvester maximizes the harvested energy through a mechanical analog of a buy-low-sell-high strategy. We also propose a nonresonant passive latch-assisted harvester to realize this strategy for effective harvesting. It is shown that the proposed harvester harvests energy more effectively than its linear and bistable counterparts over a wider range of excitation frequencies and amplitudes. The buy-low-sell-high strategy also reveals why the conventional bistable harvester works well at low-frequency excitations.
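
    A minimal way to see the buy-low-sell-high bound (an illustrative sketch under an assumed displacement constraint $|x| \le x_{\max}$, not the paper's derivation): the energy delivered to the harvester by the excitation force $F(t)$ acting through the mass displacement $x(t)$ is
    $$ E_h = \int_0^T F(t)\,\dot{x}(t)\,dt \;\lesssim\; 2\,F_{\max}\,x_{\max} \ \text{per traversal}, $$
    and the optimum is approached by latching the mass at one extreme until the force peaks, then releasing it to traverse to the other extreme, so that displacement is always spent where the force is largest.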

    Second law, entropy production, and reversibility in thermodynamics of information

    We present a pedagogical review of the fundamental concepts in the thermodynamics of information, focusing on the second law of thermodynamics and entropy production. In particular, we discuss the relationship among thermodynamic reversibility, logical reversibility, and heat emission in the context of the Landauer principle, and clarify that these three concepts are fundamentally distinct from each other. We also discuss the thermodynamics of measurement and feedback control by Maxwell's demon. We clarify that the demon and the second law are indeed consistent in the measurement and feedback processes individually, by including the mutual information in the entropy production.
    Comment: 43 pages, 10 figures. As a chapter of: G. Snider et al. (eds.), "Energy Limits in Computation: A Review of Landauer's Principle, Theory and Experiments"
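
    The key inequality at issue (the standard Sagawa-Ueda form, stated here for orientation): with feedback conditioned on measurement outcomes carrying mutual information $I$, the work extractable from a system coupled to a bath at temperature $T$ obeys
    $$ W_{\mathrm{ext}} \le -\Delta F + k_B T\, I, $$
    or equivalently the entropy production $\sigma = \beta(W - \Delta F) + I \ge 0$ stays non-negative once $I$ is counted, which is how the demon is reconciled with the second law.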

    Generalized geometric quantum speed limits

    The attempt to gain a theoretical understanding of the concept of time in quantum mechanics has triggered significant progress in the search for faster and more efficient quantum technologies. One such advance is the interpretation of the time-energy uncertainty relations as lower bounds on the minimal evolution time between two distinguishable states of a quantum system, also known as quantum speed limits. We investigate how the nonuniqueness of a bona fide measure of distinguishability defined on the quantum-state space affects the quantum speed limits and can be exploited to derive improved bounds. Specifically, we establish an infinite family of quantum speed limits valid for unitary and nonunitary evolutions, based on an elegant information-geometric formalism. Our work unifies and generalizes existing results on quantum speed limits and provides instances of novel bounds that are tighter than any established one based on the conventional quantum Fisher information. We illustrate our findings with relevant examples, demonstrating the importance of choosing different information metrics for open system dynamics, as well as clarifying the roles of classical populations versus quantum coherences in the determination and saturation of the speed limits. Our results can find applications in the optimization and control of quantum technologies such as quantum computation and metrology, and might provide new insights into fundamental investigations of quantum thermodynamics.
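
    The prototype bounds being generalized here (standard forms, for reference) are the Mandelstam-Tamm and Margolus-Levitin limits on the time to evolve between orthogonal states,
    $$ \tau \ge \frac{\pi\hbar}{2\,\Delta E}, \qquad \tau \ge \frac{\pi\hbar}{2\,\langle E \rangle}, $$
    with $\Delta E$ the energy uncertainty and $\langle E \rangle$ the mean energy above the ground state; the paper replaces the single underlying Fisher-information metric with a whole family of information metrics to obtain tighter bounds of this type.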

    Limits to parallelism in scientific computing

    The goal of our research is to decrease the execution time of scientific computing applications. We exploit an application's inherent parallelism to achieve this goal. This exploitation is expensive, as we analyze sequential applications and port them to parallel computers. Many scientific computing problems appear to have considerable exploitable parallelism; however, upon implementing a parallel solution on a parallel computer, limits to the parallelism are encountered. Unfortunately, many of these limits are characteristic of a specific parallel computer. This thesis explores these limits.

    We study the feasibility of exploiting the inherent parallelism of four NASA scientific computing applications. We use simple models to predict each application's degree of parallelism at several levels of granularity. From this analysis, we conclude that it is infeasible to exploit the inherent parallelism of two of the four applications: the interprocessor communication of one application is too expensive relative to its computation cost, and the input and output costs of the other are too expensive relative to its computation cost. We exploit the parallelism of the remaining two applications and measure their performance on an Intel iPSC/2 parallel computer. We parallelize an Optimal Control Boundary Value Problem, a guidance control problem that determines an optimal trajectory of a boat in a river. We parallelize the Carbon Dioxide Slicing technique, a macrophysical cloud property retrieval algorithm that computes the height of a cloud top from cloud imager measurements. We also consider the feasibility of exploiting its massive parallelism on a MasPar MP-2 parallel computer. We conclude that many limits to parallelism are surmountable while other limits are inescapable.

    From these limits, we elucidate some fundamental issues that must be considered when porting similar problems to yet-to-be-designed computers. We conclude that technological improvements that reduce the isolation of computational units free a programmer from many current concerns about the granularity of the work. We also conclude that technological improvements that relax the regimented guidance of the computational units allow a programmer to exploit the inherent heterogeneous parallelism of many applications.
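
    The kind of limit encountered here is captured by Amdahl's law (a standard model, not the thesis's own result): if a fraction $p$ of the work parallelizes perfectly over $N$ processors while the rest is serial,
    $$ S(N) = \frac{1}{(1-p) + p/N} \le \frac{1}{1-p}, $$
    so an application whose communication or I/O keeps $p$ at 0.9 can never run more than 10 times faster, no matter how many nodes are used.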

    Higgs boson mass and electroweak observables in the MRSSM

    R-symmetry is a fundamental symmetry which can solve the SUSY flavor problem and relax the search limits on SUSY masses. Here we provide a complete next-to-leading-order computation and discussion of the lightest Higgs boson mass, the W boson mass, and muon decay in the minimal R-symmetric SUSY model (MRSSM). This model contains non-MSSM particles, including a Higgs triplet, Dirac gauginos, and higgsinos, and leads to significant new tree-level and one-loop contributions to these observables. We show that the model can accommodate the measured values of the observables for interesting regions of parameter space with stop masses of order 1 TeV, in spite of the absence of stop mixing. We characterize these regions and provide typical benchmark points, which are also checked against further experimental constraints. A detailed exposition of the model, its mass matrices, and its Feynman rules relevant for the computations in this paper is also provided.
    Comment: added references, matches the published version

    Performance limits and trade-offs in entropy-driven biochemical computers

    The properties and fundamental limits of chemical computers have recently attracted significant interest as a model of computation, a unifying principle of cellular organisation, and in the context of bio-engineering. To date, research on this topic has been based on case studies; there exists no generally accepted criterion to distinguish between chemical processes that compute and those that do not. Here, the concept of an entropy-driven computer (EDC) is proposed as a general model of chemical computation. It is found that entropy-driven computation is subject to a trade-off between accuracy and entropy production, but unlike many biological systems, there are no trade-offs involving time. The latter arise only when it is taken into account that observing the state of the EDC is not energy neutral, but comes at a cost. The significance of this conclusion for biological systems is discussed. Three examples of biological computers, including an implementation of a neural network as an EDC, are given.
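
    One illustrative form of such an accuracy-dissipation trade-off (a sketch under equilibrium assumptions, not the paper's result): if a binary output is biased toward the correct state by a free-energy gap $\Delta F$, its equilibrium error probability at temperature $T$ is
    $$ \epsilon = \frac{1}{1 + e^{\Delta F / k_B T}} \approx e^{-\Delta F / k_B T}, $$
    so each factor-of-$e$ reduction in error demands at least an additional $k_B T$ of free energy, which must ultimately be dissipated when the device is reset.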