
    Time complexity and gate complexity

    We formulate and investigate the simplest version of time-optimal quantum computation theory (t-QCT), where the computation time is defined by the physical one and the Hamiltonian contains only one- and two-qubit interactions. This version of t-QCT can also be regarded as optimality with respect to sub-Riemannian geodesic length. The work has two aims: one is to develop t-QCT itself, based on a physically natural concept of time, and the other is to pursue the possibility of using t-QCT as a tool to estimate complexity in conventional gate-optimal quantum computation theory (g-QCT). In particular, we investigate to what extent the following statement is true: time complexity is polynomial in the number of qubits if and only if gate complexity is. In the analysis, we relate t-QCT and optimal control theory (OCT) through fidelity-optimal computation theory (f-QCT); f-QCT is equivalent to t-QCT in the limit of unit optimal fidelity, while it is formally similar to OCT. We then develop an efficient numerical scheme for f-QCT by modifying Krotov's method in OCT, which has a monotonic convergence property. We implemented the scheme and obtained solutions of f-QCT and of t-QCT for the quantum Fourier transform and for a unitary operator that does not have an apparent symmetry. The former has polynomial gate complexity, and the latter is expected to have exponential gate complexity because a sequence of generic unitary operators has exponential gate complexity. The time complexity for the former is found to be linear in the number of qubits, which is understood naturally from the existence of an upper bound. The time complexity for the latter is exponential. Thus both targets are examples satisfying the statement above. The typical characteristics of the optimal Hamiltonians are symmetry under time reversal and constancy of the one-qubit operations, which are mathematically shown to hold in fairly general situations. Comment: 11 pages, 6 figures
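    The paper's own numerical scheme is a modified Krotov method with monotonic convergence; as a rough, hedged illustration of the fidelity-optimal (f-QCT) viewpoint only, the Python sketch below maximizes the gate fidelity of a piecewise-constant control Hamiltonian built from one- and two-qubit terms by plain finite-difference gradient ascent. The target gate (CNOT), the control set, the total time, the discretization, and the step sizes are arbitrary illustrative choices, not values from the paper.

        import numpy as np
        from scipy.linalg import expm

        I2 = np.eye(2, dtype=complex)
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
        Z = np.array([[1, 0], [0, -1]], dtype=complex)

        # Control Hamiltonians: single-qubit X and Y on each qubit, plus a ZZ coupling
        H_ctrl = [np.kron(X, I2), np.kron(Y, I2),
                  np.kron(I2, X), np.kron(I2, Y),
                  np.kron(Z, Z)]

        def propagate(u, dt):
            # Total propagator for piecewise-constant controls u[k, j]
            U = np.eye(4, dtype=complex)
            for k in range(u.shape[0]):
                H = sum(u[k, j] * H_ctrl[j] for j in range(len(H_ctrl)))
                U = expm(-1j * H * dt) @ U
            return U

        def fidelity(U, U_target):
            # Standard gate fidelity |Tr(U_target^dagger U)|^2 / d^2
            d = U.shape[0]
            return abs(np.trace(U_target.conj().T @ U)) ** 2 / d ** 2

        # Target: a CNOT gate; fixed total time T divided into N slices
        U_target = np.array([[1, 0, 0, 0],
                             [0, 1, 0, 0],
                             [0, 0, 0, 1],
                             [0, 0, 1, 0]], dtype=complex)
        N, T = 20, 4.0
        dt = T / N
        rng = np.random.default_rng(0)
        u = 0.1 * rng.standard_normal((N, len(H_ctrl)))

        eps, lr = 1e-6, 2.0
        for _ in range(200):
            F0 = fidelity(propagate(u, dt), U_target)
            grad = np.zeros_like(u)
            for k in range(N):
                for j in range(len(H_ctrl)):
                    u[k, j] += eps
                    grad[k, j] = (fidelity(propagate(u, dt), U_target) - F0) / eps
                    u[k, j] -= eps
            u += lr * grad  # simple gradient ascent on the fidelity

        print("final fidelity:", fidelity(propagate(u, dt), U_target))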

    Real-time complexity constrained encoding

    Complex software appliances can be deployed on hardware with limited computational resources. This computational boundary puts an additional constraint on software applications, which can be an issue for real-time applications with a fixed time constraint such as low-delay video encoding. In the context of High Efficiency Video Coding (HEVC), only a limited number of publications have focused on controlling the complexity of an HEVC video encoder. In this paper, a technique is proposed to control complexity by deciding between 2Nx2N merge mode and full encoding at different Coding Unit (CU) depths. The technique is demonstrated in two encoders. The results show fast convergence to a given complexity threshold and a limited loss in rate-distortion performance (on average a 2.84% Bjontegaard delta rate for a 40% complexity reduction).
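    As a hedged illustration of the control idea only (not the paper's decision rule or its encoder integration), the toy Python loop below keeps a per-frame complexity estimate and raises or lowers the fraction of CUs that receive a full mode search, with the remainder falling back to a cheap 2Nx2N-merge-only evaluation, so that the measured complexity converges to a target budget. All costs and the update gain are illustrative assumptions.

        import random

        COST_MERGE = 1.0      # assumed relative cost of a 2Nx2N-merge-only decision
        COST_FULL = 5.0       # assumed relative cost of a full mode search
        TARGET = 0.6          # complexity budget, as a fraction of all-full encoding
        CUS_PER_FRAME = 1000

        full_fraction = 1.0   # start by allowing a full search for every CU
        random.seed(0)

        for frame in range(30):
            spent = 0.0
            for cu in range(CUS_PER_FRAME):
                if random.random() < full_fraction:
                    spent += COST_FULL    # full encoding of this CU
                else:
                    spent += COST_MERGE   # merge-only shortcut
            used = spent / (CUS_PER_FRAME * COST_FULL)  # fraction of the all-full cost
            # Proportional update: tighten or relax the full-search fraction
            full_fraction = min(1.0, max(0.0, full_fraction + 0.5 * (TARGET - used)))
            print(f"frame {frame:2d}: complexity {used:.3f}, full fraction {full_fraction:.3f}")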

    Space-Time Complexity in Hamiltonian Dynamics

    New notions of the complexity function C(epsilon; t, s) and entropy function S(epsilon; t, s) are introduced to describe systems with nonzero or zero Lyapunov exponents, or systems that exhibit strong intermittent behavior with "flights", trappings, weak mixing, etc. The important part of the new notions is the first appearance of epsilon-separation of initially close trajectories. The complexity function is similar to the propagator p(t0, x0; t, x) with x replaced by the natural length s of trajectories, and its introduction does not assume space-time independence in the evolution of the system. Special emphasis is placed on the choice of variables: the replacement of t by eta = ln(t) and of s by xi = ln(s) makes it possible to consider time-algebraic and space-algebraic complexity and some mixed cases. It is shown that for typical cases the entropy function S(epsilon; xi, eta) possesses invariants (alpha, beta) that describe the fractal dimensions of the space-time structures of trajectories. The invariants (alpha, beta) can be linked to the transport properties of the system on one side, and to the Riemann invariants for simple waves on the other. This analogy provides a new meaning for the transport exponent mu, which can be considered as the speed of a Riemann wave in the log-phase space of the log-space-time variables. Some other applications of the new notions are considered and numerical examples are presented. Comment: 27 pages, 6 figures
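    A hedged restatement of the change of variables described above; the identification of the transport exponent with a slope in the log variables is an illustrative reading of the abstract, not a formula copied from the paper:

        % logarithmic space-time variables and an illustrative reading of mu as a
        % "speed" in the log-phase space (a linear front xi ~ mu*eta corresponds
        % to the algebraic scaling s ~ t^mu)
        \eta = \ln t, \qquad \xi = \ln s, \qquad \xi \approx \mu\,\eta \;\Longleftrightarrow\; s \sim t^{\mu}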

    New Classes of Distributed Time Complexity

    A number of recent papers -- e.g. Brandt et al. (STOC 2016), Chang et al. (FOCS 2016), Ghaffari & Su (SODA 2017), Brandt et al. (PODC 2017), and Chang & Pettie (FOCS 2017) -- have advanced our understanding of one of the most fundamental questions in the theory of distributed computing: what are the possible time complexity classes of LCL problems in the LOCAL model? In essence, we have a graph problem $\Pi$ in which a solution can be verified by checking all radius-$O(1)$ neighbourhoods, and the question is what is the smallest $T$ such that a solution can be computed so that each node chooses its own output based on its radius-$T$ neighbourhood. Here $T$ is the distributed time complexity of $\Pi$. The time complexity classes for deterministic algorithms in bounded-degree graphs that are known to exist by prior work are $\Theta(1)$, $\Theta(\log^* n)$, $\Theta(\log n)$, $\Theta(n^{1/k})$, and $\Theta(n)$. It is also known that there are two gaps: one between $\omega(1)$ and $o(\log\log^* n)$, and another between $\omega(\log^* n)$ and $o(\log n)$. It has been conjectured that many more gaps exist, and that the overall time hierarchy is relatively simple -- indeed, this is known to be the case in restricted graph families such as cycles and grids. We show that the picture is much more diverse than previously expected. We present a general technique for engineering LCL problems with numerous different deterministic time complexities, including $\Theta(\log^{\alpha} n)$ for any $\alpha \ge 1$, $2^{\Theta(\log^{\alpha} n)}$ for any $\alpha \le 1$, and $\Theta(n^{\alpha})$ for any $\alpha < 1/2$ in the high end of the complexity spectrum, and $\Theta(\log^{\alpha} \log^* n)$ for any $\alpha \ge 1$, $2^{\Theta(\log^{\alpha} \log^* n)}$ for any $\alpha \le 1$, and $\Theta((\log^* n)^{\alpha})$ for any $\alpha \le 1$ in the low end; here $\alpha$ is a positive rational number.
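    To make the notion of distributed time concrete: after $T$ synchronous communication rounds in the LOCAL model, a node's output may depend only on its radius-$T$ neighbourhood. The Python sketch below illustrates just that definition in a simplified way (a real LOCAL algorithm sees the labelled subgraph and identifiers inside the ball, not merely its vertex set); the graph, the radius, and the output rule are arbitrary toy choices.

        from collections import deque

        def ball(adj, v, T):
            # Return the set of nodes within distance T of v (plain BFS)
            seen = {v}
            frontier = deque([(v, 0)])
            while frontier:
                u, d = frontier.popleft()
                if d == T:
                    continue
                for w in adj[u]:
                    if w not in seen:
                        seen.add(w)
                        frontier.append((w, d + 1))
            return seen

        def local_algorithm(adj, T, rule):
            # Every node outputs rule(ball): a function of its radius-T view only
            return {v: rule(ball(adj, v, T)) for v in adj}

        # Example: a 6-cycle; with T = 1 each node sees itself and its two neighbours
        adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
        out = local_algorithm(adj, T=1, rule=lambda B: min(B))  # toy output rule
        print(out)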

    Dynamical complexity of discrete time regulatory networks

    Genetic regulatory networks are usually modeled by systems of coupled differential equations; finite-state models, better known as logical networks, are also used. In this paper we consider a class of models of regulatory networks which present both discrete and continuous aspects. Our models consist of a network of units whose states are quantified by a continuous real variable. The state of each unit in the network evolves according to a contractive transformation chosen from a finite collection of possible transformations, according to a rule which depends on the state of the neighboring units. As a first approximation to the complete description of the dynamics of these networks, we focus on a global characteristic, the dynamical complexity, related to the proliferation of distinguishable temporal behaviors. In this work we give explicit conditions under which relations between the topological structure of the regulatory network and the growth rate of the dynamical complexity can be established. We illustrate our results by means of some biologically motivated examples. Comment: 28 pages, 4 figures
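    As a hedged toy instance of the kind of model described above (the contractions, the threshold rule, and the network are illustrative assumptions, not the paper's): each unit carries a continuous state in [0,1], and at every step it applies one of two contractions, selected by the states of its in-neighbours; counting the distinct symbolic behaviours observed over a few steps gives a crude proxy for the growth of dynamical complexity.

        import random

        a = 0.5                             # contraction factor
        f = {0: lambda x: a * x,            # contraction towards 0
             1: lambda x: a * x + (1 - a)}  # contraction towards 1

        def step(state, in_neighbors, theta=0.5):
            # Each unit picks a contraction based on the mean state of its in-neighbours
            symbols, new = {}, {}
            for v, x in state.items():
                mean = sum(state[u] for u in in_neighbors[v]) / len(in_neighbors[v])
                s = 1 if mean > theta else 0
                symbols[v] = s
                new[v] = f[s](x)
            return new, symbols

        # Small directed network: a 3-cycle of regulators
        in_neighbors = {0: [2], 1: [0], 2: [1]}
        random.seed(0)
        seen = set()
        for trial in range(2000):           # sample random initial conditions
            state = {v: random.random() for v in in_neighbors}
            word = []
            for _ in range(6):              # 6 time steps
                state, sym = step(state, in_neighbors)
                word.append(tuple(sym[v] for v in sorted(sym)))
            seen.add(tuple(word))
        print("distinguishable 6-step behaviours observed:", len(seen))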

    On the Time Dependence of Holographic Complexity

    We evaluate the full time dependence of holographic complexity in various eternal black hole backgrounds using both the complexity=action (CA) and the complexity=volume (CV) conjectures. We conclude using the CV conjecture that the rate of change of complexity is a monotonically increasing function of time, which saturates from below to a positive constant in the late time limit. Using the CA conjecture for uncharged black holes, we find that the holographic complexity remains constant for an initial period, then briefly decreases but quickly begins to increase. As observed previously, at late times the rate of growth of the complexity approaches a constant, which may be associated with Lloyd's bound on the rate of computation. However, we find that this late time limit is approached from above, thus violating the bound. Adding a charge to the eternal black holes washes out the early time behaviour, i.e., complexity immediately begins increasing with sufficient charge, but the late time behaviour is essentially the same as in the neutral case. We also evaluate the complexity of formation for charged black holes and find that it is divergent for extremal black holes, implying that the states at finite chemical potential and zero temperature are infinitely more complex than their finite temperature counterparts. Comment: 52+31 pages, 30 figures
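    For background only, the bound referred to above is usually quoted in the complexity=action literature in the following form (stated here with a commonly used normalization, not as a formula taken from this paper), where M is the mass of the black hole:

        % Lloyd-type bound on the growth rate of holographic complexity
        \frac{d\mathcal{C}}{dt} \;\le\; \frac{2M}{\pi\hbar}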

    Time Complexity of Decentralized Fixed-Mode Verification

    Given an interconnected system, this note is concerned with the time complexity of verifying whether an unrepeated mode of the system is a decentralized fixed mode (DFM). It is shown that checking the decentralized fixedness of any distinct mode is tantamount to testing the strong connectivity of a digraph constructed from the system. It is subsequently proved that the time complexity of this decision problem using the proposed approach is the same as the complexity of matrix multiplication. This work concludes that the identification of distinct DFMs (by means of a deterministic algorithm, rather than a randomized one) is computationally very easy, although the existing algorithms for solving this problem would wrongly imply that it is cumbersome. This note provides not only a complexity analysis, but also an efficient algorithm for tackling the underlying problem.
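    To illustrate the graph-theoretic test the reduction relies on: strong connectivity of a digraph can be checked with two reachability searches. The construction of the digraph from the interconnected system and the mode under test follows the paper and is not reproduced here; adj below is a hypothetical input, and the matrix-multiplication bound quoted above refers to the whole decision problem, not to this linear-time connectivity check.

        from collections import deque

        def reachable(adj, src):
            # BFS reachability from src in a digraph given as {node: [successors]}
            seen = {src}
            q = deque([src])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        q.append(v)
            return seen

        def strongly_connected(adj):
            # Strongly connected iff every node is reachable from an arbitrary
            # node s both in the digraph and in its reverse
            nodes = list(adj)
            if not nodes:
                return True
            rev = {v: [] for v in nodes}
            for u in nodes:
                for v in adj[u]:
                    rev[v].append(u)
            s = nodes[0]
            return (len(reachable(adj, s)) == len(nodes)
                    and len(reachable(rev, s)) == len(nodes))

        # Hypothetical digraphs standing in for the one built from the system
        print(strongly_connected({0: [1], 1: [2], 2: [0]}))  # True
        print(strongly_connected({0: [1], 1: [2], 2: []}))   # False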

    Verifying Time Complexity of Deterministic Turing Machines

    We show that, for all reasonable functions $T(n) = o(n\log n)$, we can algorithmically verify whether a given one-tape Turing machine runs in time at most $T(n)$. This is a tight bound on the order of growth for the function $T$ because we prove that, for $T(n) \geq (n+1)$ and $T(n) = \Omega(n\log n)$, there exists no algorithm that would verify whether a given one-tape Turing machine runs in time at most $T(n)$. We also give results for the case of multi-tape Turing machines. We show that we can verify whether a given multi-tape Turing machine runs in time at most $T(n)$ iff $T(n_0) < (n_0+1)$ for some $n_0 \in \mathbb{N}$. We prove a very general undecidability result stating that, for any class of functions $\mathcal{F}$ that contains arbitrarily large constants, we cannot verify whether a given Turing machine runs in time $T(n)$ for some $T \in \mathcal{F}$. In particular, we cannot verify whether a Turing machine runs in constant, polynomial, or exponential time. Comment: 18 pages, 1 figure