
    Energy-Consumption Advantage of Quantum Computation

    Energy consumption in solving computational problems has been gaining growing attention as part of the performance measures of computers. Quantum computation is known to offer advantages over classical computation in terms of various computational resources; however, its advantage in energy consumption has been challenging to analyze because a theoretical foundation has been lacking to relate the physical notion of energy to the computer-scientific notion of complexity for quantum computation with finite computational resources. To bridge this gap, we introduce a general framework for studying the energy consumption of quantum and classical computation based on a computational model with a black-box oracle, as conventionally used for studying query complexity in computational complexity theory. With this framework, we derive an upper bound on the energy consumption of quantum computation covering all costs, including those of initialization, control, and quantum error correction; in particular, our analysis yields an energy-consumption bound for a finite-step Landauer-erasure protocol, progressing beyond the existing asymptotic bound. We also develop techniques for proving a lower bound on the energy consumption of classical computation based on the energy-conservation law and the Landauer-erasure bound; significantly, our lower bound remains bounded away from zero no matter how energy-efficiently the computation is implemented, and it is free from computational-hardness assumptions. Based on these general bounds, we rigorously prove that quantum computation achieves an exponential energy-consumption advantage over classical computation for Simon's problem. These results provide a fundamental framework and techniques for exploring the physical meaning of quantum advantage in the query-complexity setting based on energy consumption, opening an alternative way to study the advantages of quantum computation. Comment: 36 pages, 3 figures.
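    For context, the asymptotic Landauer bound that the finite-step analysis above refines can be stated in one line; the numerical evaluation below uses only standard constants and is not taken from the paper.

    ```latex
    % Landauer's principle: minimum heat dissipated when erasing n bits
    % at temperature T (asymptotic, quasistatic limit).
    E_{\mathrm{erase}} \;\ge\; n \, k_B T \ln 2
    % At room temperature (T \approx 300\,\mathrm{K}):
    % k_B T \ln 2 \approx 1.38\times10^{-23}\,\mathrm{J/K} \times 300\,\mathrm{K} \times 0.693
    %             \approx 2.9\times10^{-21}\,\mathrm{J} per erased bit.
    ```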

    The Capabilities of Chaos and Complexity

    To what degree could chaos and complexity have organized a Peptide or RNA World of crude yet necessarily integrated protometabolism? How far could such protolife evolve in the absence of a heritable linear digital symbol system that could mutate, instruct, regulate, optimize, and maintain metabolic homeostasis? To address these questions, chaos, complexity, self-ordered states, and organization must all be carefully defined and distinguished. In addition, their cause-and-effect relationships and mechanisms of action must be delineated. Are there any formal (non-physical, abstract, conceptual, algorithmic) components to chaos, complexity, self-ordering, and organization, or are they entirely physicodynamic (physical, mass/energy interaction alone)? Chaos and complexity can produce some fascinating self-ordered phenomena. But can spontaneous chaos and complexity steer events and processes toward pragmatic benefit, select function over non-function, optimize algorithms, integrate circuits, produce computational halting, organize processes into formal systems, or control and regulate existing systems toward greater efficiency? We pursue the question of whether there might be some yet-to-be-discovered new law of biology that will elucidate the derivation of prescriptive information and control. “System” will be rigorously defined. Can a low-informational rapid succession of Prigogine’s dissipative structures self-order into bona fide organization?

    Interfacial Interaction Enhanced Rheological Behavior in PAM/CTAC/Salt Aqueous Solution—A Coarse-Grained Molecular Dynamics Study

    Interfacial interactions within a multi-phase polymer solution play critical roles in processing control and mass transportation in chemical engineering. However, these roles remain poorly understood due to the complexity of the system. In this study, we used an efficient computational method, nonequilibrium molecular dynamics (NEMD) simulation, to unveil the molecular interactions and rheology of a multiphase solution containing cetyltrimethyl ammonium chloride (CTAC), polyacrylamide (PAM), and sodium salicylate (NaSal). The associated macroscopic rheological characteristics and shear viscosity of the polymer/surfactant solution were investigated, and the computational results agreed well with the experimental data. The relation between the characteristic time and the shear rate was consistent with a power law. By simulating the shear viscosity of the polymer/surfactant solution, we found that the phase transition of micelles within the mixture led to a non-monotonic increase in the viscosity of the mixed solution as the concentration of CTAC or PAM increased. We expect this optimized molecular dynamics approach to advance the current understanding of chemical–physical interactions within polymer/surfactant mixtures at the molecular level and to enable emerging engineering solutions.
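    A power-law rheology relation like the one reported above can be fit in a few lines. The sketch below uses made-up shear-rate/viscosity pairs purely for illustration; they are not data from the study.

    ```python
    import numpy as np

    # Ostwald-de Waele power-law model: eta = K * gamma_dot**(n - 1).
    # The values below are illustrative placeholders, not measurements.
    shear_rate = np.array([1.0, 10.0, 100.0, 1000.0])   # 1/s
    viscosity = np.array([2.0, 0.9, 0.4, 0.18])         # Pa s

    # Taking logs linearizes the model:
    # log(eta) = log(K) + (n - 1) * log(gamma_dot).
    slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
    n, K = slope + 1.0, np.exp(intercept)

    print(f"flow index n = {n:.2f} (n < 1: shear thinning), K = {K:.2f} Pa s^n")
    ```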

    Maximum-Entropy-Model-Enabled Complexity Reduction Algorithm in Modern Video Coding Standards

    Symmetry considerations play a key role in modern science: any differentiable symmetry of the action of a physical system has a corresponding conservation law, and symmetry may be regarded as a reduction of entropy. This work focuses on reducing the computational complexity of modern video coding standards by using the maximum entropy principle. The high computational complexity of the coding unit (CU) size decision in modern video coding standards is a critical challenge for real-time applications. We solve this problem with a novel approach that treats the CU termination, skip, and normal decisions as a three-class classification problem. The maximum entropy model (MEM) is formulated for the CU size decision problem so as to optimize the conditional entropy, and the improved iterative scaling (IIS) algorithm is used to solve this optimization problem. The classification features consist of the spatio-temporal information of the CU, including the rate–distortion (RD) cost, coded block flag (CBF), and depth. As a case study, the proposed method is applied to the High Efficiency Video Coding (H.265/HEVC) standard. The experimental results demonstrate that the proposed method significantly reduces the computational complexity of the H.265/HEVC encoder. Compared with the H.265/HEVC reference model, the proposed method reduces the average encoding time by 53.27% and 56.36% under the low-delay and random-access configurations, while the Bjontegaard Delta Bit Rates (BD-BRs) are 0.72% and 0.93% on average.
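    As a rough illustration of the three-class maximum-entropy decision described above, the sketch below trains a multinomial logistic model (the standard parametric form of a MEM) on the named features. The training data are placeholders, and plain gradient ascent stands in for the IIS algorithm used in the paper.

    ```python
    import numpy as np

    CLASSES = ["terminate", "skip", "normal"]  # the three CU decisions

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)   # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    # Feature rows: [RD cost, CBF, depth]; values are illustrative placeholders.
    X = np.array([[120.0, 1, 0], [15.0, 0, 2], [60.0, 1, 1], [10.0, 0, 3]])
    y = np.array([2, 0, 1, 0])                 # class index per sample

    X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize the features
    W = np.zeros((X.shape[1], len(CLASSES)))   # one weight vector per class

    # Gradient ascent on the log-likelihood (the paper uses IIS instead).
    for _ in range(300):
        P = softmax(X @ W)                     # p(class | features)
        Y = np.eye(len(CLASSES))[y]            # one-hot targets
        W += 0.1 * X.T @ (Y - P)               # empirical minus expected features

    print(CLASSES[int(softmax(X[:1] @ W).argmax())])  # decision for first CU
    ```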

    The noisy and marvelous molecular world of biology

    At the molecular level, biology is intrinsically noisy. The forces that regulate the myriad of molecular reactions in the cell are tiny, on the order of piconewtons (10^-12 newtons), yet they proceed in concerted action, making life possible. Understanding how this is possible is one of the most fundamental questions biophysicists would like to answer. Single-molecule experiments offer an opportunity to delve into the fundamental laws that make biological complexity surface in a physical world governed by the second law of thermodynamics. Techniques such as force spectroscopy, fluorescence, microfluidics, molecular sequencing, and computational studies project a view of the biomolecular world ruled by the conspiracy between the disorganizing forces of thermal motion and the cosmic evolutionary drive. Here we discuss some of the evidence in support of this view and the role of physical information in biology.
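    A back-of-the-envelope estimate, added here for orientation and using only standard constants, shows why piconewton forces sit exactly at the scale of thermal noise:

    ```latex
    % Thermal energy at room temperature (T \approx 300\,\mathrm{K}):
    k_B T \approx 1.38\times10^{-23}\,\mathrm{J/K}\times 300\,\mathrm{K}
          \approx 4.1\times10^{-21}\,\mathrm{J} = 4.1\,\mathrm{pN\,nm}.
    % Spread over a typical molecular displacement of one nanometre,
    % this corresponds to a force of roughly 4 pN, the same piconewton
    % scale as the regulatory forces quoted above.
    ```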

    Physical portrayal of computational complexity

    Computational complexity is examined using the principle of increasing entropy. Considering computation as a physical process from an initial instance to the final acceptance is motivated by the observation that many natural processes have been recognized to complete in non-polynomial time (NP). The irreversible process with three or more degrees of freedom is found intractable because, in terms of physics, flows of energy are inseparable from their driving forces. In computational terms, when solving problems in the class NP, decisions will affect the subsequently available sets of decisions. The state space of a non-deterministic finite automaton evolves due to the computation itself; hence it cannot be efficiently contracted using a deterministic finite automaton, which will arrive at a solution only in super-polynomial time. The solution of the NP problem itself is verifiable in polynomial time (P) because the corresponding state is stationary. Likewise, the class P set of states does not depend on computational history; hence it can be efficiently contracted to the accepting state by a deterministic sequence of dissipative transformations. Thus it is concluded that the class P set of states is inherently smaller than the set of class NP. Since the computational time to contract a given set is proportional to dissipation, the computational complexity class P is a subset of NP. Comment: 16 pages, 7 figures.
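    The point that an NP solution is verifiable in polynomial time can be made concrete with a textbook example that is not from the paper: checking a proposed subset-sum witness takes linear time, even though finding one may require searching exponentially many subsets.

    ```python
    # Illustrative example (not from the paper): subset sum is in NP because
    # a proposed solution, the "witness", is checkable in polynomial time,
    # even though finding it may require exponential search.

    def verify_subset_sum(numbers, target, witness_indices):
        """Linear-time check that the chosen distinct entries sum to the target."""
        distinct = len(set(witness_indices)) == len(witness_indices)
        return distinct and sum(numbers[i] for i in witness_indices) == target

    print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # 4 + 5 == 9 -> True
    ```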

    Complex Systems: A Survey

    A complex system is a system composed of many interacting parts, often called agents, which displays collective behavior that does not follow trivially from the behaviors of the individual parts. Examples include condensed matter systems, ecosystems, stock markets and economies, biological evolution, and indeed the whole of human society. Substantial progress has been made in the quantitative understanding of complex systems, particularly since the 1980s, using a combination of basic theory, much of it derived from physics, and computer simulation. The subject is a broad one, drawing on techniques and ideas from a wide range of areas. Here I give a survey of the main themes and methods of complex systems science and an annotated bibliography of resources, ranging from classic papers to recent books and reviews. Comment: 10 pages.
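    The defining idea, collective behavior that the individual rules never mention, fits in a few lines of simulation. The model and parameters below are an arbitrary toy chosen for this note, not an example drawn from the survey.

    ```python
    import random

    # Toy emergence demo: each agent repeatedly adopts the local majority
    # opinion of itself and its two neighbours on a ring. The rule is
    # purely local, yet uniform domains emerge at the collective level.
    random.seed(0)
    N, STEPS = 60, 2000
    state = [random.choice([0, 1]) for _ in range(N)]

    for _ in range(STEPS):
        i = random.randrange(N)
        votes = state[(i - 1) % N] + state[i] + state[(i + 1) % N]
        state[i] = 1 if votes >= 2 else 0   # local majority rule

    print("".join("#" if s else "." for s in state))
    ```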
