    Towards the entropy-limit conjecture

    The maximum entropy principle is widely used to determine non-committal probabilities on a finite domain, subject to a set of constraints, but its application to continuous domains is notoriously problematic. This paper concerns an intermediate case, where the domain is a first-order predicate language. Two strategies have been put forward for applying the maximum entropy principle on such a domain: (i) applying it to finite sublanguages and taking the pointwise limit of the resulting probabilities as the size n of the sublanguage increases; (ii) selecting a probability function on the language as a whole whose entropy on finite sublanguages of size n is not dominated by that of any other probability function for sufficiently large n. The entropy-limit conjecture says that, where these two approaches yield determinate probabilities, they yield the same probabilities. If this conjecture is found to be true, it would provide a boost to the project of seeking a single canonical inductive logic—a project which faltered when Carnap's attempts in this direction succeeded only in determining a continuum of inductive methods. The truth of the conjecture would also boost the project of providing a canonical characterisation of normal or default models of first-order theories. Hitherto, the entropy-limit conjecture has been verified for languages which contain only unary predicate symbols and also for the case in which the constraints can be captured by a categorical statement of Σ1 quantifier complexity. This paper shows that the entropy-limit conjecture also holds for categorical statements of Π1 complexity, for various non-categorical constraints, and in certain other general situations.
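    As an illustrative gloss only, in notation introduced here rather than the paper's own: write $\mathbb{E}_n$ for the probability functions on the finite sublanguage $\mathcal{L}_n$ of size $n$ that satisfy the constraints, and $H_n$ for Shannon entropy on $\mathcal{L}_n$. The two strategies can then be sketched as
    \[
    \text{(i)}\quad P_\infty \;=\; \lim_{n\to\infty}\,\operatorname*{arg\,max}_{P\in\mathbb{E}_n} H_n(P),
    \qquad
    \text{(ii)}\quad P^\dagger \text{ such that no admissible } Q \text{ has } H_n(Q) > H_n(P^\dagger) \text{ for all sufficiently large } n,
    \]
    and the entropy-limit conjecture asserts that $P_\infty = P^\dagger$ wherever both are determinate.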

    Logics of belief

    The inadequacy of the usual possible world semantics of modal languages when the meaning of 'belief' is attached to the modal operator is discussed. Three other approaches are then investigated. In the case of Moore's autoepistemic logic it becomes possible to compare an agent's beliefs to 'reality', which cannot be done directly in the possible world semantics. Levesque's semantics makes explicit in the object language the notion of 'this is all the information the agent has', which plays an important role in nonmonotonic reasoning. Both of these approaches deal with ideal reasoners. The third approach, Konolige's deduction model, is based on a semantics capable of describing the beliefs of one or more resource-bounded agents. Finally, the AGM postulates for belief revision are discussed.
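    For orientation, the possible world semantics in question can be stated as follows (a textbook formulation, not a quotation from this work):
    \[
    M, w \models B\varphi \quad\text{iff}\quad M, w' \models \varphi \text{ for every } w' \text{ with } w\,R\,w',
    \]
    so the agent believes exactly what holds in all accessible worlds. On this reading beliefs are closed under logical consequence and every validity is believed, which is the kind of idealisation the alternative approaches surveyed here try to avoid.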

    The Cognitive Functions of Language

    Includes peer critique and author responses. This paper explores a variety of different versions of the thesis that natural language is involved in human thinking. It distinguishes amongst strong and weak forms of this thesis, dismissing some as implausibly strong and others as uninterestingly weak. Strong forms dismissed include the view that language is conceptually necessary for thought (endorsed by many philosophers) and the view that language is de facto the medium of all human conceptual thinking (endorsed by many philosophers and social scientists). Weak forms include the view that language is necessary for the acquisition of many human concepts, and the view that language can serve to scaffold human thought processes. The paper also discusses the thesis that language may be the medium of conscious propositional thinking, but argues that this cannot be its most fundamental cognitive role. The idea is then proposed that natural language is the medium for non-domain-specific thinking, serving to integrate the outputs of a variety of domain-specific conceptual faculties (or central-cognitive ‘quasi-modules’). Recent experimental evidence in support of this idea is reviewed, and the implications of the idea are discussed, especially for our conception of the architecture of human cognition. Finally, some further kinds of evidence which might serve to corroborate or refute the hypothesis are mentioned. The overall goal of the paper is to review a wide variety of accounts of the cognitive function of natural language, integrating a number of different kinds of evidence and theoretical consideration in order to propose and elaborate the most plausible candidate.

    Generating New Beliefs From Old

    In previous work [BGHK92, BGHK93], we have studied the random-worlds approach—a particular (and quite powerful) method for generating degrees of belief (i.e., subjective probabilities) from a knowledge base consisting of objective (first-order, statistical, and default) information. But allowing a knowledge base to contain only objective information is sometimes limiting. We occasionally wish to include information about degrees of belief in the knowledge base as well, because there are contexts in which old beliefs represent important information that should influence new beliefs. In this paper, we describe three quite general techniques for extending a method that generates degrees of belief from objective information to one that can make use of degrees of belief as well. All of our techniques are based on well-known approaches, such as cross-entropy. We discuss general connections between the techniques and in particular show that, although conceptually and technically quite different, all of the techniques give the same answer when applied to the random-worlds method.
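    For readers unfamiliar with the cross-entropy technique mentioned above, here is a generic sketch in standard notation (not necessarily the authors' own formulation): given prior degrees of belief $P_{\mathrm{old}}$ over worlds $\omega$ and a set $C$ of probability functions compatible with the new information, the updated beliefs are
    \[
    P_{\mathrm{new}} \;=\; \operatorname*{arg\,min}_{Q\in C}\; \sum_{\omega} Q(\omega)\,\log\frac{Q(\omega)}{P_{\mathrm{old}}(\omega)},
    \]
    i.e. the admissible function closest to the old beliefs in the cross-entropy (Kullback–Leibler) sense; when $P_{\mathrm{old}}$ is uniform this reduces to maximising entropy subject to $C$.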