9,099 research outputs found

    Do Goedel's incompleteness theorems set absolute limits on the ability of the brain to express and communicate mental concepts verifiably?

    Classical interpretations of Goedel's formal reasoning imply that the truth of some arithmetical propositions of any formal mathematical language, under any interpretation, is essentially unverifiable. However, a language of general scientific discourse cannot allow its mathematical propositions to be interpreted ambiguously. Such a language must, therefore, define mathematical truth verifiably. We consider a constructive interpretation of classical Tarskian truth, and of Goedel's reasoning, under which any formal system of Peano Arithmetic is verifiably complete. We show how some paradoxical concepts of quantum mechanics can be expressed, and interpreted, naturally under a constructive definition of mathematical truth. Comment: 73 pages; this is an updated version of the NQ essay; an HTML version is available at http://alixcomsi.com/Do_Goedel_incompleteness_theorems.ht

    Modeling Nonintersective Adjectives Using Operator Logics

    Our topic is one that involves the interface between natural language and mathematical logic. First-order predicate language/logic does a good job approximating many parts of (English) speech, e.g., nouns, verbs and prepositions, but fails decidedly when it comes to, say, adjectives. In particular, it cannot account for the quite different ways in which the adjectives green and big modify a noun such as chair. In the former case, we can easily view a world in which the class of green chairs is the intersection of the class of green things with the class of chair-things. By contrast, the way big modifies a noun depends on the noun itself: a big chair is microscopic when compared to the smallest of galaxies. We investigate logical languages inspired by this phenomenon, particularly those with variables ranging over individuals and with variable-binding operators akin to generalized quantifiers.
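    The intersective/nonintersective contrast in this abstract can be made concrete in a few lines. The following is a minimal illustrative sketch (not from the paper; the sets and size figures are invented): an intersective adjective like "green" denotes a plain set intersection, while "big" must be modeled as an operator that takes the noun's comparison class as input.

    ```python
    # Illustrative sketch (hypothetical data, not from the paper):
    # intersective adjectives compose as set intersection, while a
    # nonintersective adjective like "big" depends on the noun's class.

    green_things = {"green chair", "green apple", "frog"}
    chairs = {"green chair", "armchair", "doll chair"}

    # "green chair": intersective -- its denotation is a plain intersection.
    green_chairs = green_things & chairs

    # "big": nonintersective -- size is judged relative to the noun's own
    # comparison class, so we model it as an operator over that class.
    sizes_cm = {"armchair": 110, "doll chair": 8, "green chair": 95,
                "galaxy": 10**23}

    def big(noun_class):
        """Return members whose size exceeds the average of their own class."""
        avg = sum(sizes_cm[x] for x in noun_class) / len(noun_class)
        return {x for x in noun_class if sizes_cm[x] > avg}

    # A "big chair" is big relative to chairs, not relative to everything:
    print(green_chairs)   # intersection of the two classes
    print(big(chairs))    # big *for a chair*, galaxies irrelevant
    ```

    The point of the sketch is that no single set "big things" exists whose intersection with "chairs" yields "big chairs"; the operator view mirrors the variable-binding operators the abstract investigates.
    
    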

    Dimensional Crossover in the Large N Limit

    We consider dimensional crossover for an O(N) Landau-Ginzburg-Wilson model on a d-dimensional film geometry of thickness L in the large-N limit. We calculate the full universal crossover scaling forms for the free energy and the equation of state. We compare the results obtained using "environmentally friendly" renormalization with those found using a direct, non-renormalization-group approach. A set of effective critical exponents is calculated, and scaling laws for these exponents are shown to hold exactly, thereby yielding non-trivial relations between the various thermodynamic scaling functions. Comment: 25 pages of PlainTe
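    For orientation, the crossover scaling forms mentioned in the abstract are of the generic finite-size-scaling type (this is the standard textbook ansatz, not the paper's specific result):

    ```latex
    % Generic finite-size / crossover scaling ansatz (standard form, not the
    % paper's own expression): the singular free-energy density of a film of
    % thickness L near criticality obeys
    f_s(t, L) \;=\; L^{-d}\, \mathcal{F}\!\left(t\, L^{1/\nu}\right),
    % where t is the reduced temperature, \nu the correlation-length
    % exponent, and \mathcal{F} a universal crossover scaling function.
    ```

    The two asymptotic regimes of the argument t L^{1/\nu} correspond to the d-dimensional and (d-1)-dimensional critical behaviour between which the film crosses over.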

    Apperceptive patterning: Artefaction, extensional beliefs and cognitive scaffolding

    In “Psychopower and Ordinary Madness” my ambition, as it relates to Bernard Stiegler’s recent literature, was twofold: 1) critiquing Stiegler’s work on exosomatization and artefactual posthumanism—or, more specifically, nonhumanism—to problematize approaches to media archaeology that rely upon technical exteriorization; 2) challenging how Stiegler engages with Giuseppe Longo and Francis Bailly’s conception of negative entropy. These efforts were directed by a prevalent techno-cultural qualifier: the rise of Synthetic Intelligence (including neural nets, deep learning, predictive processing and Bayesian models of cognition). This paper continues this project but first directs a critical analytic lens at the Derridean practice of the ontologization of grammatization from which Stiegler emerges, while also distinguishing how metalanguages operate in relation to object-oriented environmental interaction by way of inferentialism. Stalking continental (Kapp, Simondon, Leroi-Gourhan, etc.) and analytic traditions (e.g., Carnap, Chalmers, Clark, Sutton, Novaes, etc.), we move from artefacts to AI and Predictive Processing so as to link theories related to technicity with philosophy of mind. Simultaneously drawing forth Robert Brandom’s conceptualization of the roles that commitments play in retrospectively reconstructing the social experiences that lead to our endorsement(s) of norms, we complement this account with Reza Negarestani’s deprivatized account of intelligence while analyzing the equipollent role between language and media (both digital and analog).

    Formal foundations for semantic theories of nominalisation

    This paper develops the formal foundations of semantic theories dealing with various kinds of nominalisations. It introduces a combination of an event-calculus with a type-free theory which allows a compositional description to be given of such phenomena as Vendler's distinction between perfect and imperfect nominals, iteration of gerunds, and Cresswell's notorious non-arrival-of-the-train examples. Moreover, the approach argued for in this paper allows a semantic explanation to be given for a wide range of grammatical observations, such as the behaviour of certain types of nominals with respect to their verbal contexts or the distribution of negation in nominals.

    Linear superposition as a core theorem of quantum empiricism

    Clarifying the nature of the quantum state |\Psi\rangle is at the root of the problems with insight into (counterintuitive) quantum postulates. We provide a direct, math-axiom-free, empirical derivation of this object as an element of a vector space. Establishing the linearity of this structure (quantum superposition) is based on a set-theoretic creation of ensemble formations and invokes the following three principia: (I) quantum statics, (II) the doctrine of a number in the physical theory, and (III) mathematization of matching the two observations with each other; quantum invariance. All of the constructs rest upon a formalization of the minimal experimental entity: the observed micro-event, the detector click. This is sufficient for producing the \mathbb{C}-numbers, the axioms of a linear vector space (superposition principle), statistical mixtures of states, eigenstates and their spectra, and non-commutativity of observables. No use is required of the concept of time. As a result, the foundations of the theory are liberated to a significant extent from the issues associated with physical interpretations, philosophical exegeses, and mathematical reconstruction of the entire quantum edifice. Comment: No figures. 64 pages; 68 pages(+4), overall substantial improvements; 70 pages(+2), further improvement
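    For reference, the linear structure whose empirical derivation the abstract describes is the textbook superposition principle (standard formulation, not the paper's own construction):

    ```latex
    % Standard superposition principle (textbook form, not the paper's
    % derivation): any state is a complex linear combination of basis
    % states, normalized so that outcome probabilities sum to one.
    |\Psi\rangle \;=\; \sum_i c_i\, |e_i\rangle ,
    \qquad c_i \in \mathbb{C},
    \qquad \sum_i |c_i|^2 = 1 .
    ```

    The paper's claim is that this vector-space structure, usually postulated, can instead be recovered from ensembles of detector clicks.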

    Informal proof, formal proof, formalism

    Increases in the use of automated theorem-provers have renewed focus on the relationship between the informal proofs normally found in mathematical research and fully formalised derivations. Whereas some claim that any correct proof will be underwritten by a fully formal proof, sceptics demur. In this paper I look at the relevance of these issues for formalism, construed as an anti-platonistic metaphysical doctrine. I argue that there are strong reasons to doubt that all proofs are fully formalisable, if formal proofs are required to be finitary, but that, on a proper view of the way in which formal proofs idealise actual practice, this restriction is unjustified and formalism is not threatened

    'The frozen accident' as an evolutionary adaptation: A rate distortion theory perspective on the dynamics and symmetries of genetic coding mechanisms

    We survey some interpretations and related issues concerning the frozen accident hypothesis due to F. Crick and how it can be explained in terms of several natural mechanisms involving error correction codes, spin glasses, symmetry breaking and the characteristic robustness of genetic networks. The approach to most of these questions involves using elements of Shannon's rate distortion theory, incorporating a semantic system which is meaningful for the relevant alphabets and vocabulary implemented in transmission of the genetic code. We apply the fundamental homology between information source uncertainty and the free energy density of a thermodynamic system with respect to transcriptional regulators and the communication channels of sequence/structure in proteins. This leads to the suggestion that the frozen accident may have been a type of evolutionary adaptation.
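    As background, the rate distortion theory the abstract invokes centers on Shannon's rate distortion function (the standard definition, not the paper's own formula):

    ```latex
    % Shannon's rate distortion function (standard definition): the minimum
    % mutual information over test channels p(\hat{x}|x) whose average
    % distortion stays within the budget D,
    R(D) \;=\; \min_{\substack{p(\hat{x}\mid x)\,:\\ \mathbb{E}[d(X,\hat{X})] \le D}} I(X;\hat{X}) .
    ```

    R(D) gives the fewest bits per symbol needed to reproduce the source within distortion D; the paper reinterprets such channel constraints for genetic coding.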

    Information structure and the referential status of linguistic expressions: workshop as part of the 23rd annual meeting of the Deutsche Gesellschaft für Sprachwissenschaft in Leipzig, February 28 - March 2, 2001

    This volume comprises papers that were given at the workshop Information Structure and the Referential Status of Linguistic Expressions, which we organized during the Deutsche Gesellschaft für Sprachwissenschaft (DGfS) Conference in Leipzig in February 2001. At this workshop we discussed the connection between information structure and the referential interpretation of linguistic expressions, a topic mostly neglected in current linguistic research. One common aim of the papers is to find out to what extent the focus-background as well as the topic-comment structuring determines the referential interpretation of simple arguments like definite and indefinite NPs on the one hand and sentences on the other.

    The role of positivity and causality in interactions involving higher spin

    It is shown that the recently introduced positivity- and causality-preserving string-local quantum field theory (SLFT) resolves most no-go situations in higher-spin problems. This includes in particular the Velo–Zwanziger causality problem, which turns out to be related in an interesting way to the solution of the zero-mass Weinberg–Witten issue. In contrast to the indefinite metric and ghosts of gauge theory, SLFT uses only positivity-respecting physical degrees of freedom. The result is a fully Lorentz-covariant and causal string field theory in which light- or space-like linear strings transform covariantly under Lorentz transformations. The cooperation of causality and quantum positivity in the presence of interacting particles leads to remarkable conceptual changes. It turns out that the presence of H-selfinteractions in the Higgs model is not the result of SSB on a postulated Mexican hat potential, but solely the consequence of the implementation of positivity and causality. These principles (and not the imposed gauge symmetry) also account for the Lie-algebra structure of the leading contributions of selfinteracting vector mesons. Second-order consistency of selfinteracting vector mesons in SLFT requires the presence of H-particles; this, and not SSB, is the raison d'être for H. The basic conceptual and calculational tool of SLFT is the S-matrix. Its string-independence is a powerful restriction which determines the form of interaction densities in terms of the model-defining particle content and plays a fundamental role in the construction of point-local (pl) observables and string-local (sl) interpolating fields.