
    Time complexity and gate complexity

    We formulate and investigate the simplest version of time-optimal quantum computation theory (t-QCT), in which the computation time is defined as the physical time and the Hamiltonian contains only one- and two-qubit interactions. This version of t-QCT can also be viewed as optimality with respect to sub-Riemannian geodesic length. The work has two aims: one is to develop t-QCT itself, based on a physically natural concept of time; the other is to explore the possibility of using t-QCT as a tool to estimate complexity in conventional gate-optimal quantum computation theory (g-QCT). In particular, we investigate to what extent the following statement holds: time complexity is polynomial in the number of qubits if and only if gate complexity is. In the analysis, we relate t-QCT to optimal control theory (OCT) through fidelity-optimal computation theory (f-QCT); f-QCT is equivalent to t-QCT in the limit of unit optimal fidelity, while it is formally similar to OCT. We then develop an efficient numerical scheme for f-QCT by modifying Krotov's method from OCT, which has a monotonic convergence property. We implemented the scheme and obtained solutions of f-QCT and of t-QCT for the quantum Fourier transform and for a unitary operator without apparent symmetry. The former has polynomial gate complexity, while the latter is expected to have exponential gate complexity, since a sequence of generic unitary operators has exponential gate complexity. The time complexity of the former is found to be linear in the number of qubits, which is understood naturally from the existence of an upper bound; the time complexity of the latter is exponential. Thus both targets are examples satisfying the statement above. The typical characteristics of the optimal Hamiltonians are symmetry under time reversal and constancy of the one-qubit operations, which are shown mathematically to hold in fairly general situations.
    Comment: 11 pages, 6 figures
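
    The fidelity-optimal formulation lends itself to simple numerical experiments. The sketch below is my own illustration, not the authors' Krotov-based scheme: it maximizes the gate fidelity F = |Tr(U_target† U(T))| / 2^n over piecewise-constant one- and two-qubit control fields using naive finite-difference gradient ascent, with a two-qubit quantum Fourier transform as an assumed toy target. All variable names and parameter values are illustrative.

```python
# Minimal sketch of fidelity-optimal control in the spirit of f-QCT.
# NOT the paper's Krotov-style method: plain finite-difference gradient
# ascent on the gate fidelity F = |Tr(U_target^dag U(T))| / 2^n.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Control Hamiltonians: single-qubit X and Z on each qubit, plus one
# ZZ coupling (one- and two-qubit terms only, as in t-QCT).
H_ctrl = [np.kron(X, I2), np.kron(I2, X),
          np.kron(Z, I2), np.kron(I2, Z),
          np.kron(Z, Z)]

def propagate(controls, dt):
    """controls: (n_steps, n_ctrl) array of field amplitudes."""
    U = np.eye(4, dtype=complex)
    for step in controls:
        H = sum(a * Hk for a, Hk in zip(step, H_ctrl))
        U = expm(-1j * H * dt) @ U
    return U

def fidelity(U, U_target):
    return abs(np.trace(U_target.conj().T @ U)) / U.shape[0]

# Assumed target: the 2-qubit QFT, QFT[j,k] = omega^(j*k) / sqrt(4).
omega = np.exp(2j * np.pi / 4)
U_qft = np.array([[omega ** (j * k) for k in range(4)]
                  for j in range(4)]) / 2.0

rng = np.random.default_rng(0)
controls = rng.normal(size=(16, len(H_ctrl)))
dt, eps, lr = 0.1, 1e-6, 0.5
for _ in range(100):  # crude ascent; Krotov would converge monotonically
    base = fidelity(propagate(controls, dt), U_qft)
    grad = np.zeros_like(controls)
    for i in range(controls.shape[0]):
        for k in range(controls.shape[1]):
            controls[i, k] += eps
            grad[i, k] = (fidelity(propagate(controls, dt), U_qft) - base) / eps
            controls[i, k] -= eps
    controls += lr * grad
print("final fidelity:", fidelity(propagate(controls, dt), U_qft))
```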

    On Descriptive Complexity, Language Complexity, and GB

    We introduce $L^2_{K,P}$, a monadic second-order language for reasoning about trees, which characterizes the strongly context-free languages in the sense that a set of finite trees is definable in $L^2_{K,P}$ iff it is (modulo a projection) a Local Set---the set of derivation trees generated by a CFG. This provides a flexible approach to establishing language-theoretic complexity results for formalisms that are based on systems of well-formedness constraints on trees. We demonstrate this technique by sketching two such results for Government and Binding (GB) Theory. First, we show that {\em free indexation}, the mechanism assumed to mediate a variety of agreement and binding relationships in GB, is not definable in $L^2_{K,P}$ and therefore not enforceable by CFGs. Second, we show how, in spite of this limitation, a reasonably complete GB account of English can be defined in $L^2_{K,P}$. Consequently, the language licensed by that account is strongly context-free. We illustrate some of the issues involved in establishing this result by looking at the definition of chains in $L^2_{K,P}$. The limitations of this definition provide some insight into the types of natural linguistic principles that correspond to higher levels of language complexity. We close with some speculation on the possible significance of these results for generative linguistics.
    Comment: To appear in Specifying Syntactic Structures, papers from the Logic, Structures, and Syntax workshop, Amsterdam, Sept. 1994. LaTeX source with nine included PostScript figures
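
    The notion of a Local Set is concrete enough to check mechanically. The sketch below is my own toy illustration (not from the paper): membership of a finite tree in a local set is decided entirely by the parent/child-sequence configurations it uses, which are exactly the productions of a context-free grammar. The grammar and example sentence are invented for illustration.

```python
# A Local Set is the set of derivation trees of a CFG: a tree belongs
# iff every internal node's (parent label, child-label tuple) is a
# licensed production and every leaf carries a terminal label.
# Toy grammar and tree -- illustrative, not from the paper.
PRODUCTIONS = {
    ("S", ("NP", "VP")),
    ("NP", ("D", "N")),
    ("VP", ("V", "NP")),
}
TERMINALS = {"D", "N", "V"}

def is_local(tree):
    """tree = (label, [children]); children may be empty (a leaf)."""
    label, children = tree
    if not children:
        return label in TERMINALS
    config = (label, tuple(child[0] for child in children))
    return config in PRODUCTIONS and all(is_local(c) for c in children)

# Derivation tree of a sentence like "the dog chased the cat":
t = ("S", [("NP", [("D", []), ("N", [])]),
           ("VP", [("V", []), ("NP", [("D", []), ("N", [])])])])
print(is_local(t))  # True: every local configuration is licensed
```

    Global constraints such as free indexation cannot be stated as a finite list of local configurations of this kind, which is the intuition behind its non-definability in $L^2_{K,P}$.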

    Hamiltonian complexity

    In recent years we have seen the birth of a new field known as Hamiltonian complexity, lying at the crossroads between computer science and theoretical physics. Hamiltonian complexity is directly concerned with the question: how hard is it to simulate a physical system? Here I review the foundational results, guiding problems, and future directions of this emergent field.
    Comment: 14 pages
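
    To make the central question concrete, here is a minimal sketch (my own, not from the review) of why simulating even a simple quantum system is classically expensive: time-evolving an n-qubit transverse-field Ising chain means manipulating 2^n x 2^n matrices, so the cost grows exponentially in n. The parameter values are arbitrary illustrations.

```python
# First-order Trotter simulation of a transverse-field Ising chain,
# H = -J * sum Z_k Z_{k+1} - h * sum X_k, compared against exact
# evolution. The 2^n scaling of the matrices is the point.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def place(n, site_ops):
    """Tensor single-site operators (dict site -> 2x2 op) into n qubits."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, site_ops.get(k, np.eye(2)))
    return out

n, J, h, t, steps = 6, 1.0, 0.5, 1.0, 50
H_zz = sum(-J * place(n, {k: Z, k + 1: Z}) for k in range(n - 1))
H_x = sum(-h * place(n, {k: X}) for k in range(n))

U_exact = expm(-1j * (H_zz + H_x) * t)  # a 2^n x 2^n matrix: 64 x 64 here
dt = t / steps
U_step = expm(-1j * H_zz * dt) @ expm(-1j * H_x * dt)
U_trotter = np.linalg.matrix_power(U_step, steps)
print("Trotter error (operator norm):",
      np.linalg.norm(U_exact - U_trotter, 2))
```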

    Complexity as Process: Complexity Inspired Approaches to Composition

    This article examines the use of Complexity Theory as an inspiration for the creation of new musical works, and highlights problems and possible solutions associated with its application as a compositional tool. In particular, it explores how the philosophy behind Complexity Theory affects notions of process-based composition, indeterminacy in music, and the performer/listener/environment relationship, culminating in a basis for understanding music creation as an active process within a context. The author presents one of his own sound installations, Cross-Pollination, as an example of a composition inspired by, and best understood from, the philosophical position described in Complexity Theory.

    From average case complexity to improper learning complexity

    The basic problem in the PAC model of computational learning theory is to determine which hypothesis classes are efficiently learnable. There is presently a dearth of results showing hardness of learning problems. Moreover, the existing lower bounds fall short of the best known algorithms. The biggest challenge in proving complexity results is to establish hardness of {\em improper learning} (a.k.a. representation-independent learning). The difficulty in proving lower bounds for improper learning is that the standard reductions from $\mathbf{NP}$-hard problems do not seem to apply in this context. There is essentially only one known approach to proving lower bounds on improper learning; it was initiated in (Kearns and Valiant 89) and relies on cryptographic assumptions. We introduce a new technique for proving hardness of improper learning, based on reductions from problems that are hard on average. We put forward a (fairly strong) generalization of Feige's assumption (Feige 02) about the complexity of refuting random constraint satisfaction problems. Combining this assumption with our new technique yields far-reaching implications. In particular: 1. Learning $\mathrm{DNF}$s is hard. 2. Agnostically learning halfspaces with a constant approximation ratio is hard. 3. Learning an intersection of $\omega(1)$ halfspaces is hard.
    Comment: 34 pages
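
    For readers unfamiliar with the distinction, the sketch below (my own illustration of the setting, not the paper's technique) shows what *improper* learning means in practice: the target concept is a DNF formula, but the learner may output any hypothesis whatsoever -- here a 1-nearest-neighbor rule -- and is judged only by its error on fresh examples. The target and sample sizes are arbitrary assumptions.

```python
# Improper PAC learning of a DNF target: the hypothesis (1-NN under
# Hamming distance) is not itself a DNF, which is exactly what makes
# the learning "improper" / representation-independent.
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 2000  # variables, training examples (illustrative)

# Target: a random 3-term DNF over n Boolean variables; each term
# fixes 3 literals (variable index, required value).
terms = [rng.choice(n, size=3, replace=False) for _ in range(3)]
signs = [rng.integers(0, 2, size=3) for _ in range(3)]
def dnf(x):
    return any(all(x[i] == s for i, s in zip(t, sg))
               for t, sg in zip(terms, signs))

X_train = rng.integers(0, 2, size=(m, n))
y_train = np.array([dnf(x) for x in X_train])

def predict(x):  # improper hypothesis: label of the nearest training point
    return y_train[np.argmin(np.abs(X_train - x).sum(axis=1))]

X_test = rng.integers(0, 2, size=(500, n))
error = np.mean([predict(x) != dnf(x) for x in X_test])
print("test error of the improper hypothesis:", error)
```

    The paper's hardness results say that even learners with this kind of unrestricted freedom cannot succeed efficiently on the listed classes, assuming the generalized Feige assumption.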