
    Formal Theories for Linear Algebra

    We introduce two-sorted theories in the style of [CN10] for the complexity classes $\oplus\mathsf{L}$ and $\mathsf{DET}$, whose complete problems include determinants over $\mathbb{Z}_2$ and $\mathbb{Z}$, respectively. We then describe interpretations of Soltys' linear algebra theory LAp (over arbitrary integral domains) into each of our new theories. The result shows that equivalences of standard theorems of linear algebra over $\mathbb{Z}_2$ and $\mathbb{Z}$ can be proved in the corresponding theory, but leaves open the interesting question of whether the theorems themselves can be proved. Comment: This is a revised journal version of the paper "Formal Theories for Linear Algebra" (Computer Science Logic) for the journal Logical Methods in Computer Science.
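
    The abstract notes that the determinant over $\mathbb{Z}_2$ is among the complete problems for $\oplus\mathsf{L}$. Purely as a concrete illustration of that object, and not taken from the paper, here is a minimal Python sketch computing a determinant over $\mathbb{Z}_2$ by Gaussian elimination; over $\mathbb{Z}_2$ the determinant is 1 exactly when elimination finds a pivot in every column.

    def det_mod2(matrix):
        # Determinant over Z_2 via Gaussian elimination: the matrix is
        # invertible (det = 1) iff a pivot exists in every column.
        a = [row[:] for row in matrix]  # work on a copy; entries are 0/1
        n = len(a)
        for col in range(n):
            pivot = next((r for r in range(col, n) if a[r][col] & 1), None)
            if pivot is None:
                return 0  # no pivot: singular over Z_2
            a[col], a[pivot] = a[pivot], a[col]  # a row swap flips the sign, and -1 = 1 mod 2
            for r in range(col + 1, n):
                if a[r][col] & 1:
                    a[r] = [x ^ y for x, y in zip(a[r], a[col])]  # XOR is addition mod 2
        return 1

    # example: an upper-triangular 0/1 matrix, so the determinant is 1
    print(det_mod2([[1, 1, 0], [0, 1, 1], [0, 0, 1]]))  # -> 1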

    Complex Economies have a Lateral Escape from the Poverty Trap

    We analyze the decisive role played by the complexity of economic systems at the onset of the industrialization process of countries over the past 50 years. Our analysis of the input growth dynamics, which adds a further dimension through a recently introduced measure of economic complexity, reveals that more differentiated and more complex economies face a lower barrier (in terms of GDP per capita) when starting the transition towards industrialization. As a consequence, we can extend the classical concept of a one-dimensional poverty trap to a two-dimensional poverty trap: a country will start the industrialization process if it is rich enough (as in neo-classical economic theories), complex enough (the new dimension, which permits a lateral escape from the poverty trap), or some linear combination of the two. This naturally leads to the proposal of a Complex Index of Relative Development (CIRD) which, when analyzed as a function of the growth due to input, has the shape of an upside-down parabola, similar to that expected from standard economic theories when only the GDP per capita dimension is considered.
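
    A toy reading of the two-dimensional trap described above. The weights and thresholds below are entirely hypothetical (the paper's actual boundary is estimated from data); the sketch only illustrates how a linear combination of income and complexity replaces a pure income threshold.

    def escapes_1d(gdp_pc, gdp_threshold=2500.0):
        # classical one-dimensional trap: only income matters
        return gdp_pc > gdp_threshold

    def escapes_2d(gdp_pc, complexity, a=1.0, b=2000.0, threshold=2500.0):
        # two-dimensional trap: a linear combination of income and
        # complexity crosses the barrier, so a complex-enough economy
        # can escape "laterally" even below the income threshold
        return a * gdp_pc + b * complexity > threshold

    print(escapes_1d(1800.0))       # False: trapped in the one-dimensional picture
    print(escapes_2d(1800.0, 0.6))  # True: complexity opens a lateral exit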

    Logical consequence in modal logic II: Some semantic systems for S4

    ABSTRACT: This 1974 paper builds on our 1969 paper (Corcoran-Weaver [2]). Here we present three (modal, sentential) logics which may be thought of as partial systematizations of the semantic and deductive properties of a sentence operator which expresses certain kinds of necessity. The logical truths [sc. tautologies] of these three logics coincide with one another and with those of standard formalizations of Lewis's S4. These logics, when regarded as logistic systems (cf. Corcoran [1], p. 154), are seen to be equivalent; but, when regarded as consequence systems (ibid., p. 157), one diverges from the others in a fashion which suggests that two standard measures of semantic complexity may not be as closely linked as previously thought. This 1974 paper uses the linear notation for natural deduction presented in [2]: each two-dimensional deduction is represented by a unique one-dimensional string of characters, obviating the need for two-dimensional trees, tableaux, lists, and the like, and thereby facilitating electronic communication of natural deductions. The 1969 paper presents a (modal, sentential) logic which may be thought of as a partial systematization of the semantic and deductive properties of a sentence operator which expresses certain kinds of necessity. The logical truths [sc. tautologies] of this logic coincide with those of standard formalizations of Lewis's S5. Among the paper's innovations is its treatment of modal logic in the setting of natural deduction systems, as opposed to axiomatic systems. The authors apologize for the now obsolete terminology. For example, these papers speak of "a proof of a sentence from a set of premises" where today "a deduction of a sentence from a set of premises" would be preferable.
    1. Corcoran, John. 1969. Three Logical Theories. Philosophy of Science 36, 153–77.
    2. Corcoran, John and George Weaver. 1969. Logical Consequence in Modal Logic: Natural Deduction in S5. Notre Dame Journal of Formal Logic 10, 370–84. MR0249278 (40 #2524).
    3. Weaver, George and John Corcoran. 1974. Logical Consequence in Modal Logic: Some Semantic Systems for S4. Notre Dame Journal of Formal Logic 15, 370–78. MR0351765 (50 #4253).

    Computation in generalised probabilistic theories

    The existence of an efficient quantum algorithm for factoring suggests that quantum computation is intrinsically more powerful than classical computation. At present, the best known upper bound for the power of quantum computation is that BQP is contained in AWPP. This work investigates limits on computational power that are imposed by physical principles. To this end, we define a circuit-based model of computation in a class of operationally-defined theories more general than quantum theory, and ask: what is the minimal set of physical assumptions under which the above inclusion still holds? We show that, given only an assumption of tomographic locality (roughly, that multipartite states can be characterised by local measurements), efficient computations are contained in AWPP. This inclusion holds even without assuming a basic notion of causality (where the notion is, roughly, that probabilities for outcomes cannot depend on future measurement choices). Following Aaronson, we extend the computational model by allowing post-selection on measurement outcomes. Aaronson showed that the corresponding quantum complexity class equals PP. Given only the assumption of tomographic locality, the inclusion in PP still holds for post-selected computation in general theories. Thus, in a world with post-selection, quantum theory is optimal for computation in the space of all general theories. We then consider whether relativised complexity results can be obtained for general theories. It is not clear how to define a sensible notion of an oracle in the general framework that reduces to the standard notion in the quantum case. Nevertheless, it is possible to define computation relative to a 'classical oracle'. We then show that there exists a classical oracle relative to which efficient computation in any theory satisfying the causality assumption and tomographic locality does not include NP. Comment: 14+9 pages. Comments welcome.
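
    For orientation, the unconditional inclusions referred to above sit in the standard chain below. The multiplicativity equation is one common way of formalising tomographic locality in the generalised-probabilistic-theories literature; it is supplied here as background and is not quoted from the abstract.

    \[
      \mathsf{BQP} \subseteq \mathsf{AWPP} \subseteq \mathsf{PP} \subseteq \mathsf{PSPACE},
      \qquad
      K_{AB} = K_A \, K_B ,
    \]
    where $K_S$ counts the parameters needed to specify a state of system $S$; in quantum theory $K = d^2$, so the constraint reads $d_{AB}^2 = d_A^2\, d_B^2$.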

    Measuring the intangibles: a metrics for the economic complexity of countries and products

    We investigate a methodology we recently proposed to extract valuable information on the competitiveness of countries and the complexity of products from trade data. Standard economic theories predict a high level of specialization of countries in specific industrial sectors. However, a direct analysis of the official databases of products exported by all countries shows that the actual situation is very different: countries commonly considered developed are extremely diversified, exporting a large variety of products from very simple to very complex, while countries generally considered less developed export only the products also exported by the majority of countries. This situation calls for the introduction of a non-monetary and non-income-based measure of the complexity of a country's economy which uncovers its hidden potential for development and growth. The statistical approach we present here consists of coupled non-linear maps relating the competitiveness/fitness of countries to the complexity of their products; the fixed point of this transformation defines a metrics for the fitness of countries and the complexity of products. We argue that the key point in properly extracting the economic information is the non-linearity of the map, which is necessary to bound the complexity of a product by the fitness of the least competitive countries exporting it. We present a detailed comparison of the results of this approach with those of the Method of Reflections by Hidalgo and Hausmann, showing the better performance of our method and its more solid and consistent economic and scientific foundation.
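
    A minimal sketch of the coupled non-linear map described above (the fitness-complexity iteration). The toy export matrix M and the iteration count are illustrative choices, not the paper's data; the essential non-linearity is the reciprocal in the product update, which lets the least fit exporters bound a product's complexity.

    import numpy as np

    def fitness_complexity(M, n_iter=100):
        # M: binary country-by-product export matrix
        n_countries, n_products = M.shape
        F = np.ones(n_countries)
        Q = np.ones(n_products)
        for _ in range(n_iter):
            F_new = M @ Q                    # fitness: sum of complexities of exported products
            Q_new = 1.0 / (M.T @ (1.0 / F))  # complexity bounded by the least fit exporters
            F = F_new / F_new.mean()         # normalise to the mean at each step
            Q = Q_new / Q_new.mean()
        return F, Q

    # toy example: country 0 exports everything, country 2 only product 2
    M = np.array([[1, 1, 1],
                  [1, 1, 0],
                  [0, 0, 1]], dtype=float)
    F, Q = fitness_complexity(M)
    print(F)  # the diversified country 0 ends up with the highest fitness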

    The Small-Is-Very-Small Principle

    The central result of this paper is the small-is-very-small principle for restricted sequential theories. The principle says, roughly, that whenever the given theory shows that a property has a small witness, i.e. a witness in every definable cut, then it shows that the property has a very small witness, i.e. a witness below a given standard number. We draw various consequences from the central result. For example (in rough formulations): (i) Every restricted, recursively enumerable sequential theory has a finitely axiomatized extension that is conservative w.r.t. formulas of complexity $\leq n$. (ii) Every sequential model has, for any $n$, an extension that is elementary for formulas of complexity $\leq n$, in which the intersection of all definable cuts is the natural numbers. (iii) We have reflection for $\Sigma^0_2$-sentences with sufficiently small witness in any consistent restricted theory $U$. (iv) Suppose $U$ is recursively enumerable and sequential, and suppose further that every recursively enumerable and sequential $V$ that locally interprets $U$ also globally interprets $U$. Then $U$ is mutually globally interpretable with a finitely axiomatized sequential theory. The paper contains some careful groundwork developing partial satisfaction predicates in sequential theories for the complexity measure "depth of quantifier alternations".
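
    Schematically, and only as a hedged paraphrase of the prose statement above (the paper's official formulation carries side conditions omitted here):

    \[
      \text{if}\quad U \vdash \exists x\,(x \in I \wedge \phi(x)) \ \text{for every $U$-definable cut } I,
      \qquad\text{then}\qquad
      U \vdash \exists x < \underline{n}\; \phi(x)
    \]
    for some fixed standard number $n$ (with $\underline{n}$ its numeral).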

    Consistency of circuit lower bounds with bounded theories

    Proving that there are problems in $\mathsf{P}^\mathsf{NP}$ that require boolean circuits of super-linear size is a major frontier in complexity theory. While such lower bounds are known for larger complexity classes, existing results only show that the corresponding problems are hard on infinitely many input lengths. For instance, proving almost-everywhere circuit lower bounds is open even for problems in $\mathsf{MAEXP}$. Given the notorious difficulty of proving lower bounds that hold for all large input lengths, we ask the following question: can we show that a large set of techniques cannot prove that $\mathsf{NP}$ is easy infinitely often? Motivated by this and related questions about the interaction between mathematical proofs and computations, we investigate circuit complexity from the perspective of logic. Among other results, we prove that for any parameter $k \geq 1$ it is consistent with the theory $T$ that $\mathcal{C} \not\subseteq \textit{i.o.}\mathrm{SIZE}(n^k)$, where $(T, \mathcal{C})$ is one of the pairs: $T = \mathsf{T}^1_2$ and $\mathcal{C} = \mathsf{P}^\mathsf{NP}$; $T = \mathsf{S}^1_2$ and $\mathcal{C} = \mathsf{NP}$; $T = \mathsf{PV}$ and $\mathcal{C} = \mathsf{P}$. In other words, these theories cannot establish infinitely often circuit upper bounds for the corresponding problems. This is of interest because the weaker theory $\mathsf{PV}$ already formalizes sophisticated arguments, such as a proof of the PCP Theorem. These consistency statements are unconditional and improve on earlier theorems of [KO17] and [BM18] on the consistency of lower bounds with $\mathsf{PV}$.
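
    For readers unfamiliar with the notation, the infinitely-often size classes mentioned above are standardly defined as follows (a standard definition supplied here, not quoted from the paper):

    \[
      L \in \textit{i.o.}\mathrm{SIZE}(n^k)
      \iff
      \text{there are circuits } C_n \text{ of size } O(n^k) \text{ with } C_n(x) = L(x) \text{ for all } x \in \{0,1\}^n, \text{ for infinitely many } n.
    \]
    So the consistency statements say the theories cannot prove even this weak, infinitely-often form of a circuit upper bound.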