448 research outputs found
A Mathematical Model of Quantum Computer by Both Arithmetic and Set Theory
A practical viewpoint links reality, representation, and language to calculation through the concept of the Turing (1936) machine as the mathematical model of our computers. Given the Gödel incompleteness theorems (1931) and the unsolvability of the so-called halting problem (Turing 1936; Church 1936) for a single classical Turing machine, one of the simplest hypotheses is to postulate completeness for a pair of such machines. This is consistent with the provability of completeness by means of two independent Peano arithmetics, discussed in Section I.
Many modifications of Turing machines, including quantum ones, are examined in Section II with respect to the halting problem and completeness, and the model of two independent Turing machines seems to generalize them.
Then that pair can be postulated as the formal definition of reality, and is therefore complete, unlike either machine standalone, which remains incomplete without its complementary counterpart. Representation is formally defined as a one-to-one mapping between the two Turing machines, and the set of all such mappings can be considered a "language", therefore including metaphors as mappings other than representations. Section III investigates that formal relation of "reality", "representation", and "language" modeled by (at least two) Turing machines. The independence of the (two) Turing machines is interpreted by means of game theory, and especially of the Nash equilibrium, in Section IV.
Choice, and information as the quantity of choices, are involved. That approach seems to be equivalent to one based on set theory and the concept of actual infinity in mathematics, while allowing of practical implementations.
Representation and Reality by Language: How to make a home quantum computer?
A set theory model of reality, representation, and language based on the relation of completeness and incompleteness is explored. The problem of the completeness of mathematics is linked to its counterpart in quantum mechanics. The model includes two Peano arithmetics or Turing machines independent of each other. The complex Hilbert space underlying the mathematical formalism of quantum mechanics is interpreted as a generalization of Peano arithmetic: a doubled infinite set of doubled Peano arithmetics having a remarkable symmetry with respect to the axiom of choice. The quantity of information is interpreted as the number of elementary choices (bits). Quantum information is seen as the generalization of information to infinite sets or series. The equivalence of that model to a quantum computer is demonstrated. The condition for the Turing machines to be independent of each other is reduced to the state of Nash equilibrium between them. Two related models of language are deduced: as a game in the sense of game theory, and as an ontology of metaphors (all mappings which are not one-to-one, i.e. not representations of reality in a formal sense).
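The abstract's reading of quantum information as a generalization of the bit to infinite sets can be anchored in the standard qubit formalism (standard textbook notation, not notation taken from the paper itself):

```latex
% A bit is a choice between two alternatives; a qubit generalizes it to
% a unit vector in the two-dimensional complex Hilbert space:
\[
  \lvert\psi\rangle \;=\; \alpha\,\lvert 0\rangle + \beta\,\lvert 1\rangle,
  \qquad \alpha,\beta \in \mathbb{C},\quad \lvert\alpha\rvert^2 + \lvert\beta\rvert^2 = 1 .
\]
% A quantum register of n qubits lives in the 2^n-dimensional tensor
% product of such spaces, which is the sense in which the quantity of
% "choices" is extended beyond finitely many bits.
```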
Computability and analysis: the legacy of Alan Turing
We discuss the legacy of Alan Turing and his impact on computability and analysis. (Comment: 49 pages)
Undecidability in macroeconomics
In this paper we study the difficulty of solving problems in economics. For this purpose, we adopt the notion of undecidability from recursion theory. We show that certain problems in economics are undecidable, i.e., cannot be solved by a Turing machine, a device that is at least as powerful as any computational device that can be constructed. In particular, we prove that even in finite closed economies subject to a variable initial condition, in which a social planner knows the behavior of every agent in the economy, certain important social planning problems are undecidable. Thus, it may be impossible to make effective policy decisions. Philosophically, this result formally calls into question the Rational Expectations Hypothesis, which assumes that each agent is able to determine what it should do if it wishes to maximize its utility. We show that even when an optimal rational forecast exists for each agent (based on the information currently available to it), agents may lack the ability to make these forecasts. For example, Lucas describes economic models as 'mechanical, artificial world(s), populated by ... interacting robots'. Since any mechanical robot can be at most as computationally powerful as a Turing machine, such economies are vulnerable to the phenomenon of undecidability.
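The undecidability the authors invoke ultimately rests on Turing's diagonal argument. A minimal sketch of its logical core (illustrative names of my own, not code from the paper): suppose a total function `halts(f)` correctly decided whether `f()` terminates; one could then build a "diagonal" program that asks the decider about itself and does the opposite of whatever is predicted.

```python
def diagonal_behavior(claimed_halts: bool) -> bool:
    """What the diagonal program actually does, given the decider's claim.

    The diagonal program queries the hypothetical decider about itself,
    then does the opposite: it loops forever if told it halts, and
    returns immediately if told it loops. Its real behavior is therefore
    the negation of whatever the decider claims about it.
    Returns True if the program actually halts.
    """
    return not claimed_halts

# Whichever answer the decider gives, it is wrong about the diagonal program:
assert diagonal_behavior(True) is False   # claimed to halt -> actually loops
assert diagonal_behavior(False) is True   # claimed to loop -> actually halts
```

Since both possible answers are refuted, no total, correct `halts` can exist; social planning problems that a halting instance can be reduced to inherit the same barrier.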
Algorithmic Complexity for Short Binary Strings Applied to Psychology: A Primer
Since human randomness production has been studied and widely used to assess
executive functions (especially inhibition), many measures have been suggested
to assess the degree to which a sequence is random-like. However, each of them
focuses on one feature of randomness, leading authors to have to use multiple
measures. Here we describe and advocate the use of the accepted universal
measure of randomness based on algorithmic complexity, by means of a
previously presented technique using the definition of algorithmic
probability. A re-analysis of the classical Radio Zenith data in the light of
the proposed measure and methodology is provided as a case study of an
application. (Comment: To appear in Behavior Research Methods)
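The single-feature measures the authors contrast with their approach can be illustrated by the most common proxy for algorithmic complexity: compressed size, which upper-bounds the complexity of a string but, as the paper argues, is uninformative for the short strings psychologists actually collect. A rough sketch of that proxy (function and variable names are mine, not from the paper):

```python
import random
import zlib

def compressed_len(bits: str) -> int:
    """zlib-compressed size in bytes: a crude upper bound on the
    algorithmic complexity of a binary string (unreliable for short ones)."""
    return len(zlib.compress(bits.encode("ascii"), 9))

regular = "01" * 500  # 1000 bits, highly patterned
random.seed(0)
noisy = "".join(random.choice("01") for _ in range(1000))  # random-looking

# For long strings the proxy separates pattern from noise...
assert compressed_len(regular) < compressed_len(noisy)
# ...but for strings of a few bits the compressor's fixed overhead swamps
# any signal, which is what motivates the algorithmic-probability method.
```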
Why Philosophers Should Care About Computational Complexity
One might think that, once we know something is computable, how efficiently
it can be computed is a practical question with little further philosophical
importance. In this essay, I offer a detailed case that one would be wrong. In
particular, I argue that computational complexity theory---the field that
studies the resources (such as time, space, and randomness) needed to solve
computational problems---leads to new perspectives on the nature of
mathematical knowledge, the strong AI debate, computationalism, the problem of
logical omniscience, Hume's problem of induction, Goodman's grue riddle, the
foundations of quantum mechanics, economic rationality, closed timelike curves,
and several other topics of philosophical interest. I end by discussing aspects
of complexity theory itself that could benefit from philosophical analysis.
(Comment: 58 pages, to appear in "Computability: Gödel, Turing, Church, and
Beyond," MIT Press, 2012. Some minor clarifications and corrections; new
references added.)
An Analysis of How Many Undiscovered Vulnerabilities Remain in Information Systems
Vulnerability management strategy, from both organizational and public policy
perspectives, hinges on an understanding of the supply of undiscovered
vulnerabilities. If the number of undiscovered vulnerabilities is small enough,
then a reasonable investment strategy would be to focus on finding and removing
the remaining undiscovered vulnerabilities. If the number of undiscovered
vulnerabilities is and will continue to be large, then a better investment
strategy would be to focus on quick patch dissemination and engineering
resilient systems. This paper examines a paradigm, namely that the number of
undiscovered vulnerabilities is manageably small, through the lens of
mathematical concepts from the theory of computing. From this perspective, we
find little support for the paradigm of limited undiscovered vulnerabilities.
We then briefly support the notion that these theory-based conclusions are
relevant to practical computers in use today. We find no reason to believe
undiscovered vulnerabilities are not essentially unlimited in practice and we
examine the possible economic impacts should this be the case. Based on our
analysis, we recommend vulnerability management strategy adopts an approach
favoring quick patch dissemination and engineering resilient systems, while
continuing good software engineering practices to reduce (but never eliminate)
vulnerabilities in information systems