
    Expressive power and complexity of a logic with quantifiers that count proportions of sets

    We present a second-order logic of proportional quantifiers, SOLP, which is essentially a first-order language extended with quantifiers that act upon second-order variables of a given arity r and count the fraction of elements in a subset of r-tuples of a model that satisfy a formula. Our logic can express proportional versions of problems of complexity up to NP-hard, such as the problem of deciding whether at least a fraction 1/n of the vertices of a graph form a clique; and fragments of our logic capture complexity classes such as NL and P, with an auxiliary ordering relation. When restricted to monadic second-order variables, our logic of proportional quantifiers admits a semantic approximation based on almost linear orders, which is not as weak as other known logics with counting quantifiers (restricted to almost orders), for it does not have the bounded number of degrees property. Moreover, we show that, in this almost-ordered setting, different fragments of the logic vary in their expressive power, and we establish the existence of an infinite hierarchy inside our monadic language. We extend our inexpressibility result over almost-ordered structures to a fragment of SOLP which, in the presence of full order, captures P. To obtain all our inexpressibility results, we developed combinatorial games appropriate for these logics, whose application could go beyond almost-ordered models and which are therefore interesting in themselves. Peer Reviewed. Preprint.
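
    As a rough illustration of how such a proportional quantifier reads (the notation below is chosen for exposition and need not match the authors' official SOLP syntax), the clique example could be written with a quantifier binding a monadic set variable X and asserting that X covers at least a 1/n fraction of the vertices and induces a clique:

```latex
% Illustrative notation only (not necessarily the authors' SOLP syntax):
% (P^1_{\ge 1/n} X) binds a monadic second-order variable X and asserts that
% X contains at least a 1/n fraction of the domain and satisfies the formula
% that follows, here "X is a clique with respect to the edge relation E".
\[
  \bigl(P^{1}_{\ge 1/n}\, X\bigr)\;
  \forall u\,\forall v\,
  \Bigl( \bigl(X(u) \wedge X(v) \wedge u \neq v\bigr) \rightarrow E(u,v) \Bigr)
\]
```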

    Approximate formulae for a logic that capture classes of computational complexity

    This is a pre-copy-editing, author-produced PDF of an article accepted for publication in the Logic Journal of IGPL following peer review. The definitive publisher-authenticated version, Arratia, Argimiro; Ortiz, Carlos E., "Approximate formulae for a logic that capture classes of computational complexity", Logic Journal of IGPL, 2009, vol. 17, p. 131-154, is available online at: http://jigpal.oxfordjournals.org/cgi/reprint/17/1/131?maxtoshow=&hits=10&RESULTFORMAT=&fulltext=Approximate+formulae+for+a+logic+that+capture+classes+of+computational+complexity&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT
    This paper presents a syntax of approximate formulae suited for the logic with counting quantifiers SOLP. This logic was formalised by us in [1], where, among other properties, we showed the following facts: (i) in the presence of a built-in (linear) order, SOLP can describe NP-complete problems, and some of its fragments capture the classes P and NL; (ii) weakening the ordering relation to an almost order, we can separate meaningful fragments using a combinatorial tool adapted to these languages. The purpose of our approximate formulae is to provide a syntactic approximation to the logic SOLP, enhanced with a built-in order, that is complementary to the semantic approximation based on almost orders, by producing logics in which problems are syntactically described within a small counting error. We introduce a concept of strong expressibility based on approximate formulae, and show that for many fragments of SOLP with built-in order, including ones that capture P and NL, expressibility and strong expressibility are equivalent. We state and prove a Bridge Theorem that links expressibility in fragments of SOLP over almost-ordered structures to strong expressibility with respect to approximate formulae for the corresponding fragments over ordered structures. A consequence of these results is that inexpressibility over fragments of SOLP with built-in order can be proved by proving inexpressibility over the corresponding fragments with built-in almost order, where separation proofs are presumably easier. Peer Reviewed. Postprint (author's final draft).
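
    One way to picture the intended "small counting error" (again, the notation here is illustrative, not the paper's actual definition): where an exact proportional quantifier demands that at least a fraction q of the r-tuples satisfy a formula, an approximate counterpart would only require the witnessed fraction to land within an error margin epsilon of that threshold:

```latex
% Schematic contrast, illustrative notation only (not the authors' syntax):
% exact proportional threshold vs. a version relaxed by a counting error \epsilon.
\[
  \frac{\bigl|\{\bar{x} : \varphi(\bar{x})\}\bigr|}{|A|^{r}} \;\ge\; q
  \qquad\leadsto\qquad
  \frac{\bigl|\{\bar{x} : \varphi(\bar{x})\}\bigr|}{|A|^{r}} \;\ge\; q - \epsilon
\]
```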

    Naming the largest number: Exploring the boundary between mathematics and the philosophy of mathematics

    What is the largest number accessible to the human imagination? The question is neither entirely mathematical nor entirely philosophical. Mathematical formulations of the problem fall into two classes: those that fail to fully capture the spirit of the problem, and those that turn it back into a philosophical problem.

    A Framework for the Organization and Discovery of Information Resources in a WWW Environment Using Association, Classification and Deduction

    The Semantic Web is envisioned as a next-generation WWW environment in which information is given well-defined meaning. Although the standards for the Semantic Web are being established, it is as yet unclear how the Semantic Web will allow information resources to be effectively organized and discovered in an automated fashion. This dissertation research explores the organization and discovery of resources for the Semantic Web. It assumes that resources on the Semantic Web will be retrieved based on metadata and ontologies that provide an effective basis for automated deduction. An integrated deduction system based on the Resource Description Framework (RDF), the DARPA Agent Markup Language (DAML) and description logic (DL) was built. A case study was conducted to evaluate the system's effectiveness in retrieving resources from a large Web resource collection. The results showed that deduction has an overall positive impact on retrieval over the defined queries; the greatest positive impact occurred when precision was perfect with no decrease in recall. A sensitivity analysis was conducted over properties of resources, subject categories, query expressions and relevance judgments to observe their relationships with retrieval performance. The results highlight both the potential and various issues in applying deduction over metadata and ontologies; further investigation will be required for additional improvement. The factors that can contribute to degraded performance were identified and addressed, and some guidelines were developed, based on the lessons learned from the case study, for the development of Semantic Web data and systems.
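
    As a minimal sketch of the general idea (deduction over metadata and ontologies aiding discovery), the hypothetical snippet below uses the rdflib library and a SPARQL 1.1 property path so that a query for a broad category also retrieves resources typed only under its subclasses. The vocabulary (ex:Tutorial, ex:LearningResource, ex:title) is invented for illustration and is not the dissertation's actual system or collection, which used DAML and a DL reasoner rather than plain RDFS entailment.

```python
# Minimal sketch (not the dissertation's system): RDFS subclass deduction
# via a SPARQL 1.1 property path, so that asking for the broad class also
# discovers resources typed under its subclasses.
from rdflib import Graph

DATA = """
@prefix ex:   <http://example.org/> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Tutorial rdfs:subClassOf ex:LearningResource .
ex:rdfPrimer  rdf:type ex:Tutorial ;         ex:title "RDF Primer" .
ex:dlHandbook rdf:type ex:LearningResource ; ex:title "DL Handbook" .
ex:randomPage rdf:type ex:WebPage ;          ex:title "Unrelated page" .
"""

QUERY = """
PREFIX ex:   <http://example.org/>
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?resource ?title WHERE {
  ?resource rdf:type/rdfs:subClassOf* ex:LearningResource ;
            ex:title ?title .
}
"""

g = Graph()
g.parse(data=DATA, format="turtle")
for resource, title in g.query(QUERY):
    # ex:rdfPrimer is retrieved even though it is only typed as ex:Tutorial,
    # illustrating how deduction over the ontology improves discovery.
    print(resource, title)
```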

    From truth conditions to processes: how to model the processing difficulty of quantified sentences based on semantic theory

    The present dissertation is concerned with the processing difficulty of quantified sentences and how it can be modeled based on semantic theory. Processing difficulty of quantified sentences is assessed using psycholinguistic methods such as systematically collecting truth-value judgments or recording eye movements during reading. Predictions are derived from semantic theory via parsimonious processing assumptions, taking into account automata theory, signal detection theory and computational complexity. Chapter 1 provides introductory discussion and overview. Chapter 2 introduces basic theoretical concepts that are used throughout the rest of the dissertation. In chapter 3, processing difficulty is approached on an abstract level. The difficulty of the truth evaluation of reciprocal sentences with generalized quantifiers as antecedents is classified using computational complexity theory. This is independent of the actual algorithms or procedures that are used to evaluate the sentences. One production experiment and one sentence-picture verification experiment are reported, which tested whether cognitive capacities are limited to those functions that are computationally tractable. The results indicate that intractable interpretations occur in language comprehension but also that their verification rapidly exceeds cognitive capacities when the verification problem cannot be solved using simple heuristics. Chapter 4 discusses two common approaches to modeling the canonical verification procedures associated with quantificational sentences. The first is based on the semantic automata model, which conceives of quantifiers as decision problems and characterizes the computational resources that are needed to solve them. The second approach is based on the interface transparency thesis, which stipulates a transparent interface between semantic representations and the realization of verification procedures in the general cognitive architecture. Both approaches are evaluated against experimental data. Chapter 5 focuses on a test case that is challenging for both of these approaches. In particular, the increased processing difficulty of 'fewer than n' as compared to 'more than n' is investigated. A processing model is proposed which integrates insights from formal semantics with models from cognitive psychology. This model can be seen as an implementation and extension of the interface transparency thesis. The truth evaluation process is conceived of as a stochastic process as described in sequential sampling models of decision making. The increased difficulty of 'fewer than n' as compared to 'more than n' is attributed to an extra processing step of scale-reversal that precedes the actual decision process. Predictions of the integrated processing model are tested and confirmed in two sentence-picture verification experiments. Chapter 6 discusses whether and how the integrated processing model can be extended to other quantifiers. An extension to proportional comparative quantifiers, like 'fewer than half' and 'more than half', is proposed and discussed in the light of existing experimental data. Moreover, it is shown that what are called empty-set effects can be naturally derived from the model. Chapter 7 presents data from two eye-tracking experiments showing that 'fewer than' leads to increased difficulty as compared to 'more than' already during reading. Moreover, this effect is magnified if such quantifiers are combined with overt negation. Potential accounts of these findings are discussed. Conclusions are summarized in chapter 8.
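
    As a hedged sketch of the kind of model described in chapters 5-7 (a sequential sampling process with an extra scale-reversal step preceding the decision for 'fewer than n'), the toy simulation below accumulates noisy evidence about whether a display contains more than n target items. The parameter values, the specific random-walk form and the fixed time cost of reversal are illustrative choices, not the dissertation's fitted model.

```python
# Toy sequential-sampling sketch (illustrative parameters, not the fitted
# model from the dissertation): evidence about "count > n" accumulates as a
# noisy random walk until it crosses a decision threshold; "fewer than n" is
# modelled as "more than n" plus a scale-reversal step that precedes the
# decision, flipping the evidence scale and adding a fixed time cost.
import random

def verify(quantifier, count, n, drift=0.1, noise=1.0,
           threshold=10.0, step_ms=20, reversal_ms=150):
    """Return (judged_true, response_time_ms) for one simulated trial."""
    t = 0
    # Evidence scale for "more than n": positive drift when count > n.
    direction = 1.0 if count > n else -1.0
    if quantifier == "fewer than":
        # Scale reversal precedes the decision process: flip the evidence
        # scale and pay a fixed extra time cost.
        direction = -direction
        t += reversal_ms
    evidence = 0.0
    while abs(evidence) < threshold:
        evidence += drift * direction + random.gauss(0.0, noise)
        t += step_ms
    return evidence > 0, t

random.seed(1)
for q in ("more than", "fewer than"):
    times = [verify(q, count=12, n=8)[1] for _ in range(2000)]
    print(q, "n: mean RT (ms):", round(sum(times) / len(times)))
# "fewer than" comes out slower on average, mirroring the qualitative
# pattern the integrated processing model is meant to capture.
```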

    Toward a Kripkean Concept of Number

    Saul Kripke once remarked to me that natural numbers cannot be posits inferred from their indispensability to science, since we've always had them. This left me wondering whether numbers are objects of Russellian acquaintance, or accessible by analysis, being implied by known general principles about how to reason correctly, or both. To answer this question, I discuss some recent (and not so recent) work on our concepts of number and of particular numbers, by leading psychologists and philosophers. Special attention is paid to Kripke's theory that numbers possess structural features of the numerical systems that stand for them, and to the relation between his proposal about numbers and his doctrine that there are contingent truths known a priori. My own proposal, to which Kripke is sympathetic, is that numbers are properties of sets. I argue for this by showing the extent to which it can avoid the problems that plague the various views under discussion, including the problems raised by Kripke against Frege. I also argue that while the terms 'the number of F's', 'natural number' and '0', '1', '2' etc. are partially understood by the folk, they can only be fully understood by reflection and analysis, including reflection on how to reason correctly. In this last respect my thesis is a retreat position from logicism. I also show how it dovetails with an account of how numbers are actually grasped in practice, via numerical systems, and in virtue of a certain structural affinity between a geometric pattern that we grasp intuitively and our fully analyzed concepts of numbers. I argue that none of this involves acquaintance with numbers.