
    Non-Turing computations via Malament-Hogarth space-times

    We investigate the Church-Kalmár-Kreisel-Turing Theses concerning theoretical (necessary) limitations of future computers and of deductive sciences, in view of recent results of classical general relativity theory. We argue that (i) there are several distinguished Church-Turing-type Theses (not only one), and (ii) the validity of some of these theses depends on the background physical theory we choose to use. In particular, if we choose classical general relativity theory as our background theory, then the above-mentioned limitations (predicted by these Theses) are no longer necessary, hence certain forms of the Church-Turing Thesis cease to be valid (in general relativity). (For other choices of the background theory the answer might be different.) We also look at various "obstacles" to computing a non-recursive function (by relying on relativistic phenomena) published in the literature and show that they can be avoided (by improving the "design" of our future computer). We also ask how all this reflects on the arithmetical and analytical hierarchies of uncomputable functions.
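
    To make the relativistic setting concrete, the following is a minimal sketch of the standard Malament-Hogarth argument for deciding the halting problem; it uses common notation and is not meant to reproduce the paper's specific "design".

    A space-time $(M,g)$ is Malament-Hogarth if it contains a future-directed timelike half-curve $\gamma_C$ of infinite proper time, $\int_{\gamma_C} d\tau = \infty$, together with an event $p \in M$ such that $\gamma_C \subseteq J^{-}(p)$, i.e. the whole infinite worldline lies in the causal past of $p$. Send a computer along $\gamma_C$ that simulates a Turing machine $T$ on input $n$ and emits a light signal towards $p$ if and only if the simulation halts, while the observer reaches $p$ in finite proper time. At $p$ the observer outputs "$T(n)$ halts" exactly when a signal has arrived and "$T(n)$ does not halt" otherwise, so the $\Sigma^0_1$-complete halting set is decided by this arrangement. The "obstacles" discussed in the literature (for instance, unbounded blueshift of the incoming signal or unbounded resources along $\gamma_C$) concern whether such an arrangement is physically realisable; the hierarchy question is how far above $\Sigma^0_1$ such designs can reach.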

    Reflections on Mathematical Economics in the Algorithmic Mode

    Non-standard analysis can be harnessed by the recursion theorist. But to a computable economist, the conundrums of the Löwenheim-Skolem theorem and the associated Skolem paradox seem to pose insurmountable epistemological difficulties against the use of algorithmic non-standard analysis. Discontinuities can be tamed by recursive analysis, and this particular kind of taming may be a way out of the formidable obstacles created by the difficulties of Diophantine decision problems. Methods of existence proof used by the classical mathematician, even when they do not invoke the axiom of choice, cannot be shown to be equivalent to the exhibition of an instance in the sense of a constructive proof. These issues were prompted by the fertile and critical contributions to this special issue.

    Consequences of price volatility in evaluating the benefits of liberalisation

    Many computable general equilibrium models have been set up recently in order to assess the benefits of trade liberalisation, especially in agriculture. Although the magnitudes of their figures differ from one model to another, they cannot reach any conclusion other than positive benefits. On the other hand, historical experience shows that liberalisation, far from being a new idea, has been tried on several occasions during the last two centuries, repeatedly ending in crisis and a hasty return to various forms of protection. A possible explanation lies in the comparative-static approach of most liberalisation proponents and their neglect of dynamic aspects. In particular, because risk is necessarily tied to unfulfilled expectations, it should play a decisive role in modelling. A new model is developed along this line, showing the possibility of a chaotic price regime, which would prevent full liberalisation from being feasible.
    Keywords: globalisation; risk; volatility; modelling; trade; agriculture; Doha; WTO
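
    As a hedged illustration of how a chaotic price regime can arise once expectations enter the dynamics, the following is a generic cobweb-style sketch in Python, not the model developed in the paper; all parameter values and names are invented for illustration.

    # Minimal cobweb-style price dynamics: producers plan supply from last
    # period's price and the market clears against a nonlinear demand response.
    # For suitable parameters the price map behaves like a logistic map and
    # oscillates chaotically.  Parameters are illustrative assumptions only.

    def next_price(p, a=4.0, p_max=1.0):
        """One market period; prices normalised to [0, 1]."""
        supply = p / p_max                        # naive expectation: plan on last price
        demand_gap = 1.0 - supply                 # excess demand pushes the next price up
        return a * supply * demand_gap * p_max    # logistic-type price response

    def simulate(p0=0.3, periods=50):
        prices = [p0]
        for _ in range(periods):
            prices.append(next_price(prices[-1]))
        return prices

    if __name__ == "__main__":
        # With a = 4.0 the trajectory never settles: small changes in p0
        # produce very different price paths (sensitive dependence).
        for t, p in enumerate(simulate()):
            print(f"t={t:2d}  price={p:.4f}")

    Under these assumptions the price series never converges to the static equilibrium that a comparative-static analysis would report, which is the dynamic point the abstract is making.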

    Descriptive Complexity, Computational Tractability, and the Logical and Cognitive Foundations of Mathematics

    In computational complexity theory, decision problems are divided into complexity classes based on the computational resources that algorithms need to solve them. In theoretical computer science it is commonly accepted that only functions for solving problems in the complexity class P, solvable by a deterministic Turing machine in polynomial time, are tractable. In cognitive science and philosophy, this notion of tractability has been used to argue that only functions in P can feasibly serve as computational models of human cognitive capacities. One interesting area of computational complexity theory is descriptive complexity, which connects the expressive strength of systems of logic with computational complexity classes. In descriptive complexity theory it is established that only first-order (classical) systems are connected to P or one of its subclasses. Consequently, second-order systems of logic are considered to be computationally intractable and may therefore seem unfit to model human cognitive capacities. This would be problematic when we think of the role of logic as the foundation of mathematics: in order to express many important mathematical concepts and systematically prove theorems involving them, we need a system of logic stronger than classical first-order logic. But if such a system is considered intractable, the logical foundation of mathematics can be prohibitively complex for human cognition. In this paper I argue, however, that this problem is the result of an unjustified direct use of computational complexity classes in cognitive modelling. Placing my account in the recent literature on the topic, I argue that the problem can be solved by considering computational complexity for humanly relevant problem-solving algorithms and input sizes.
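
    A hedged numerical illustration of the closing point: the toy cost functions below are made up and do not measure any specific logical system, but they show why asymptotic membership in P versus an exponential class says little by itself about feasibility at the small, humanly relevant input sizes that cognitive modelling cares about.

    # Compare toy step counts for a "tractable" polynomial procedure and an
    # "intractable" exponential one at small input sizes.  The cost functions
    # are illustrative assumptions; the point is only that the asymptotic
    # classification does not by itself decide feasibility for the input
    # sizes relevant to human problem solving.

    def polynomial_steps(n):
        return 1000 * n ** 3        # e.g. a cubic-time decision procedure

    def exponential_steps(n):
        return 2 ** n               # e.g. brute-force model checking

    for n in (5, 10, 20, 40, 60):
        p, e = polynomial_steps(n), exponential_steps(n)
        cheaper = "exp" if e < p else "poly"
        print(f"n={n:3d}  poly={p:.2e}  exp={e:.2e}  cheaper: {cheaper}")

    # For n up to about 20 the exponential procedure needs fewer steps than
    # the cubic one; only for larger n does the asymptotic ordering take over.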

    Logical Systems and Formal Languages: ¿Genuine Cognitive Artifacts?

    Language, as an abstract object of the linguistic practice of speaking, has certain characteristic features, and logic deals with them. Its task is to explain the fundamental features that govern inferential role, and to do so it postulates abstract entities, logical forms, which are linked to the cognitive functions of speakers. Formal languages can be seen as cognitive artifacts that improve and modify the reasoning processes of speakers, and also as the product of a process of cultural evolution through which human beings develop artifacts that allow them to perform tasks and solve problems more efficiently.

    East-West Paths to Unconventional Computing

    Unconventional computing is about breaking boundaries in thinking, acting and computing. Typical topics of this non-typical field include, but are not limited to, the physics of computation, non-classical logics, new complexity measures, novel hardware, and mechanical, chemical and quantum computing. Unconventional computing encourages a new style of thinking, while practical applications are obtained from uncovering and exploiting principles and mechanisms of information processing in, and functional properties of, physical, chemical and living systems; in particular, efficient algorithms are developed, (almost) optimal architectures are designed, and working prototypes of future computing devices are manufactured. This article includes idiosyncratic accounts of ‘unconventional computing’ scientists reflecting on their personal experiences, what attracted them to the field, and their inspirations and discoveries.

    Artificial Cognition for Social Human-Robot Interaction: An Implementation

    Human–Robot Interaction challenges Artificial Intelligence in many regards: dynamic, partially unknown environments that were not originally designed for robots; a broad variety of situations with rich semantics to understand and interpret; physical interactions with humans that require fine, low-latency yet socially acceptable control strategies; and natural and multi-modal communication that mandates common-sense knowledge and the representation of possibly divergent mental models. This article is an attempt to characterise these challenges and to exhibit a set of key decisional issues that need to be addressed for a cognitive robot to successfully share space and tasks with a human. We first identify the needed individual and collaborative cognitive skills: geometric reasoning and situation assessment based on perspective-taking and affordance analysis; acquisition and representation of knowledge models for multiple agents (humans and robots, with their specificities); situated, natural and multi-modal dialogue; human-aware task planning; and human–robot joint task achievement. The article discusses each of these abilities, presents working implementations, and shows how they combine in a coherent and original deliberative architecture for human–robot interaction. Supported by experimental results, we eventually show how explicit knowledge management, both symbolic and geometric, proves to be instrumental to richer and more natural human–robot interactions by pushing for pervasive, human-level semantics within the robot's deliberative system.
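
    As a hedged illustration of the kind of deliberative loop such an architecture implies, here is a structural Python sketch only; the class and method names are invented for illustration and do not correspond to the authors' software.

    # Structural sketch of a human-aware deliberative loop: situation
    # assessment, per-agent knowledge models, and human-aware planning
    # feeding joint task execution.  Names and data structures are
    # illustrative assumptions, not the architecture described in the article.

    from dataclasses import dataclass, field

    @dataclass
    class AgentModel:
        """Beliefs attributed to one agent (robot or human), i.e. perspective-taking."""
        name: str
        beliefs: dict = field(default_factory=dict)

    class DeliberativeRobot:
        def __init__(self):
            self.models = {"robot": AgentModel("robot"), "human": AgentModel("human")}

        def assess_situation(self, percepts):
            # Situation assessment: update each agent's model from what that
            # agent can plausibly perceive (here: a trivial visibility flag).
            for obj, info in percepts.items():
                self.models["robot"].beliefs[obj] = info
                if info.get("visible_to_human"):
                    self.models["human"].beliefs[obj] = info

        def plan(self, goal):
            # Human-aware planning placeholder: only act on objects the human
            # also knows about, otherwise verbalise first (dialogue step).
            steps = []
            for obj in goal:
                if obj not in self.models["human"].beliefs:
                    steps.append(("say", f"the {obj} is on the table"))
                steps.append(("handover", obj))
            return steps

    robot = DeliberativeRobot()
    robot.assess_situation({"mug": {"visible_to_human": False}})
    print(robot.plan(["mug"]))   # [('say', 'the mug is on the table'), ('handover', 'mug')]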
