
    Collective modes in the fluxonium qubit

    Superconducting qubit designs vary in complexity from single- and few-junction systems, such as the transmon and flux qubits, to the many-junction fluxonium. Here we consider the question of whether the many degrees of freedom in the fluxonium circuit can limit the qubit coherence time. Such a limitation is in principle possible, due to the interactions between the low-energy, highly anharmonic qubit mode and the higher-energy, weakly anharmonic collective modes. We show that so long as the coupling of the collective modes to the external electromagnetic environment is sufficiently weak compared with the qubit-environment coupling, the qubit dephasing induced by the collective modes does not significantly contribute to decoherence. Therefore, the increased complexity of the fluxonium qubit does not by itself constitute a major obstacle to its use in quantum computation architectures. (Comment: 22 pages, 15 figures)
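    For reference, the single qubit mode of the fluxonium is usually modeled by the standard circuit Hamiltonian (a textbook expression, not an equation quoted from this abstract)
    $$ H = 4E_C\,\hat{n}^2 - E_J\cos\hat{\varphi} + \tfrac{1}{2}E_L\,(\hat{\varphi}-\varphi_{\mathrm{ext}})^2 , $$
    where the inductive energy $E_L$ is supplied by the long junction array whose internal excitations are the higher-energy collective modes discussed in the abstract.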

    On Some Computations on Sparse Polynomials

    In arithmetic circuit complexity the standard operations are + and ×, yet in some scenarios exponentiation gates are considered as well. In this paper we study the question of efficiently evaluating a polynomial given oracle access to one of its powers. Among the applications, we show that:
    * A reconstruction algorithm for a circuit class C can be extended to handle f^e for f in C.
    * There exists an efficient deterministic algorithm for factoring sparse multiquadratic polynomials.
    * There is a deterministic algorithm for testing a factorization of sparse polynomials, with constant individual degrees, into sparse irreducible factors; that is, testing whether f = g_1 · ... · g_m when f has constant individual degrees and the g_i are irreducible.
    * There is a deterministic reconstruction algorithm for multilinear depth-4 circuits with two multiplication gates.
    * There exists an efficient deterministic algorithm for testing whether two powers of sparse polynomials are equal, that is, whether f^d = g^e when f and g are sparse.
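    The last item can be made concrete with a small, hedged sketch (not the paper's algorithm): a brute-force check of f^d = g^e by full expansion in sympy. The number of monomials in f^d can be far larger than in the sparse f itself, which is exactly the blow-up an efficient deterministic algorithm must avoid; the polynomials below are illustrative only.

        # Naive check of f**d == g**e by expansion (illustration only; not the
        # deterministic algorithm from the paper). Requires sympy.
        from sympy import symbols, expand

        x, y = symbols("x y")
        f = x**5 + 3*x*y + 1      # sparse: few monomials, possibly high degree
        g = x**5 + 3*x*y + 1
        d, e = 2, 2

        # expand() multiplies everything out; the monomial count of f**d can be
        # much larger than that of f, which is what a sparse-polynomial
        # algorithm must avoid.
        print(expand(f**d - g**e) == 0)   # -> True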

    The complexity of Boolean functions from cryptographic viewpoint

    Cryptographic Boolean functions must be complex to satisfy Shannon's principle of confusion, but the cryptographic viewpoint on complexity is not the same as in circuit complexity. The two main criteria for evaluating the cryptographic complexity of Boolean functions on F_2^n are the nonlinearity (and more generally the r-th order nonlinearity, for every positive r < n) and the algebraic degree. Two other criteria have also been considered: the algebraic thickness and the non-normality. After recalling the definitions of these criteria and why, asymptotically, almost all Boolean functions are deeply non-normal and have high algebraic degree, high (r-th order) nonlinearity and high algebraic thickness, we study the relationship between the r-th order nonlinearity and a more recent cryptographic criterion called the algebraic immunity. This relationship strengthens the reasons why the algebraic immunity can be considered as a further cryptographic complexity criterion.
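    As a hedged illustration of the first criterion (not taken from the paper), the nonlinearity of an n-variable Boolean function f is NL(f) = 2^(n-1) - (1/2) max_a |W_f(a)|, where W_f(a) = sum_x (-1)^(f(x) XOR <a,x>) is the Walsh transform. A minimal sketch, using an illustrative bent function:

        # Nonlinearity of a small Boolean function via its Walsh transform.
        # Naive O(4^n) evaluation, fine for toy sizes; the example function is
        # illustrative only, not one from the paper.

        def walsh_transform(truth_table, n):
            """Walsh coefficients W_f(a) = sum_x (-1)^(f(x) XOR <a,x>)."""
            coeffs = []
            for a in range(2 ** n):
                total = 0
                for x in range(2 ** n):
                    dot = bin(a & x).count("1") & 1      # inner product over F_2
                    total += (-1) ** (truth_table[x] ^ dot)
                coeffs.append(total)
            return coeffs

        def nonlinearity(truth_table, n):
            return 2 ** (n - 1) - max(abs(w) for w in walsh_transform(truth_table, n)) // 2

        # The bent function f(x1,...,x4) = x1*x2 XOR x3*x4 attains the maximal
        # nonlinearity 2^(n-1) - 2^(n/2-1) = 6 for n = 4.
        n = 4
        tt = [((x >> 3) & 1) * ((x >> 2) & 1) ^ ((x >> 1) & 1) * (x & 1)
              for x in range(2 ** n)]
        print(nonlinearity(tt, n))   # -> 6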

    Fault Tolerance Implementation within SRAM Based FPGA Designs based upon Single Event Upset Occurrence Rates

    Emerging technology is enabling the design community to steadily expand the amount of functionality that can be implemented within Integrated Circuits (ICs). As the number of gates placed within an FPGA increases, the complexity of the design can grow exponentially, and creating reliable circuits has become a very difficult task. To ease the complexity of design completion, the commercial design community has developed a rigid (but effective) design methodology based on synchronous circuit techniques. To create faster, smaller and lower-power circuits, transistor geometries and core voltages have decreased. In environments containing ionizing radiation, this combination increases the probability of Single Event Upsets (SEUs), which can corrupt the state of a circuit. To combat the effects of radiation, the aerospace community has developed several "Hardened by Design" (fault-tolerant) design schemes. This paper addresses design mitigation schemes targeted at SRAM-based FPGA CMOS devices. Because some mitigation schemes may be overzealous (too much power, area, or complexity), the designer should be aware that system requirements can reduce the amount of mitigation necessary for acceptable operation. Therefore, various degrees of fault tolerance are demonstrated, along with an analysis of their effectiveness.
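    One widely used hardened-by-design mitigation for SRAM-based FPGAs is triple modular redundancy (TMR); the abstract does not name specific schemes, so the following is only a hedged behavioral sketch of a 2-of-3 majority voter (a real design would be written in an HDL and vote per register or per module).

        # Behavioral sketch of a TMR majority voter (illustration only).
        def tmr_vote(a: int, b: int, c: int) -> int:
            """Bitwise 2-of-3 majority of three redundant copies of a word."""
            return (a & b) | (a & c) | (b & c)

        # A single event upset flipping one bit in one copy is outvoted.
        golden = 0b10110010
        upset = golden ^ 0b00001000          # SEU flips one bit in copy b
        assert tmr_vote(golden, upset, golden) == golden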

    Physical portrayal of computational complexity

    Computational complexity is examined using the principle of increasing entropy. Computation is regarded as a physical process that advances from an initial instance to the final acceptance, motivated by the observation that many natural processes have been recognized to complete in non-polynomial time (NP). The irreversible process with three or more degrees of freedom is found intractable because, in terms of physics, flows of energy are inseparable from their driving forces. In computational terms, when solving problems in the class NP, decisions will affect the subsequently available sets of decisions. The state space of a non-deterministic finite automaton evolves due to the computation itself, hence it cannot be efficiently contracted using a deterministic finite automaton, which will arrive at a solution only in super-polynomial time. The solution of an NP problem is itself verifiable in polynomial time (P) because the corresponding state is stationary. Likewise, the class P set of states does not depend on the computational history, hence it can be efficiently contracted to the accepting state by a deterministic sequence of dissipative transformations. Thus it is concluded that the class P set of states is inherently smaller than the set of class NP. Since the computational time to contract a given set is proportional to dissipation, the computational complexity class P is a subset of NP. (Comment: 16 pages, 7 figures)

    De Sitter Space as a Tensor Network: Cosmic No-Hair, Complementarity, and Complexity

    We investigate the proposed connection between de Sitter spacetime and the MERA (Multiscale Entanglement Renormalization Ansatz) tensor network, and ask what can be learned via such a construction. We show that the quantum state obeys a cosmic no-hair theorem: the reduced density operator describing a causal patch of the MERA asymptotes to a fixed point of a quantum channel, just as spacetimes with a positive cosmological constant asymptote to de Sitter. The MERA is potentially compatible with a weak form of complementarity (local physics only describes single patches at a time, but the overall Hilbert space is infinite-dimensional) or, with certain specific modifications to the tensor structure, a strong form (the entire theory describes only a single patch plus its horizon, in a finite-dimensional Hilbert space). We also suggest that de Sitter evolution has an interpretation in terms of circuit complexity, as has been conjectured for anti-de Sitter space. (Comment: 24 pages, 12 figures. Updated to be consistent with PRD version.)

    Complexity, parallel computation and statistical physics

    The intuition that a long history is required for the emergence of complexity in natural systems is formalized using the notion of depth. The depth of a system is defined in terms of the number of parallel computational steps needed to simulate it. Depth provides an objective, irreducible measure of history applicable to systems of the kind studied in statistical physics. It is argued that physical complexity cannot occur in the absence of substantial depth and that depth is a useful proxy for physical complexity. The ideas are illustrated for a variety of systems in statistical physics. (Comment: 21 pages, 7 figures)
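    As a toy illustration of depth as "number of parallel steps" (not an example from the paper): summing n numbers takes n-1 additions of total work, but a balanced reduction tree needs only about log2(n) parallel rounds, so its depth is logarithmic even though the work is linear.

        # Toy illustration of depth: pairwise parallel reduction of a list.
        import math

        def parallel_sum_depth(values):
            """Return (sum, number of parallel rounds) for a pairwise reduction."""
            rounds = 0
            while len(values) > 1:
                # All pairs in a round could be added simultaneously.
                values = [sum(values[i:i + 2]) for i in range(0, len(values), 2)]
                rounds += 1
            return values[0], rounds

        total, depth = parallel_sum_depth(list(range(16)))
        print(total, depth, math.ceil(math.log2(16)))   # -> 120 4 4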