
    The co-evolution of number concepts and counting words

    Humans possess a number concept that differs from its predecessors in animal cognition in two crucial respects: (1) it is based on a numerical sequence whose elements are not confined to quantitative contexts, but can indicate cardinal/quantitative as well as ordinal and even nominal properties of empirical objects (e.g. ‘five buses’: cardinal; ‘the fifth bus’: ordinal; ‘the #5 bus’: nominal), and (2) it can involve recursion and, via recursion, discrete infinity. In contrast, the predecessors of numerical cognition that we find in animals and human infants rely on finite and iconic representations that are limited to cardinality and do not support a unified concept of number. In this paper, I argue that the way such a unified number concept could evolve in humans is via verbal sequences that are employed as numerical tools, that is, sequences of words whose elements are associated with empirical objects in number assignments. In particular, I show that a particular class of number words, namely the counting sequences of natural languages, can be characterised as a central instance of verbal numerical tools. I describe a possible scenario for the emergence of such verbal numerical tools in human history that starts from iconic roots and that suggests that in a process of co-evolution, the gradual emergence of counting sequences and the development of an increasingly comprehensive number concept supported each other. On this account, it is language that opened the way for numerical cognition, suggesting that it is no accident that the same species that possesses the language faculty as a unique trait should also be the one that developed a systematic concept of number.
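    The three kinds of number assignment distinguished in the abstract can be illustrated with a small sketch (a toy model of my own, not from the paper: the object names and labels are invented for illustration):

    ```python
    # Toy model: one verbal counting sequence used as a numerical tool
    # for three distinct kinds of number assignment.
    counting_sequence = ["one", "two", "three", "four", "five"]

    buses = ["bus_a", "bus_b", "bus_c", "bus_d", "bus_e"]

    # Cardinal: pair counting words with the objects one by one;
    # the last word used gives the cardinality of the whole collection.
    pairs = list(zip(counting_sequence, buses))
    cardinality = pairs[-1][0]                  # 'five' -> "five buses"

    # Ordinal: a single object's rank is its position in the sequence.
    fifth_bus = buses[counting_sequence.index("five")]  # "the fifth bus"

    # Nominal: the word serves as a mere label, with no quantitative content.
    route_labels = {"five": "downtown line"}            # "the #5 bus"

    print(cardinality, fifth_bus, route_labels["five"])
    ```

    The point of the sketch is that one and the same sequence supports all three uses; only the nominal case carries no quantitative information at all.
    
    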

    Hyperations, Veblen progressions and transfinite iterations of ordinal functions

    In this paper we introduce hyperations and cohyperations, which are forms of transfinite iteration of ordinal functions. Hyperations are iterations of normal functions. Unlike iteration by pointwise convergence, hyperation preserves normality. The hyperation of a normal function f is a sequence of normal functions such that f^0 = id, f^1 = f and for all ordinals \alpha, \beta we have that f^(\alpha + \beta) = f^\alpha f^\beta. These conditions do not determine f^\alpha uniquely; in addition, we require that the functions be minimal in an appropriate sense. We study hyperations systematically and show that they are a natural refinement of Veblen progressions. Next, we define cohyperations, very similar to hyperations except that they are left-additive: given \alpha, \beta, f^(\alpha + \beta) = f^\beta f^\alpha. Cohyperations iterate initial functions, which are functions that map initial segments to initial segments. We systematically study cohyperations and see how they can be employed to define left inverses to hyperations. Hyperations provide an alternative presentation of Veblen progressions and can be useful where a more fine-grained analysis of such sequences is called for. They are very amenable to algebraic manipulation and hence are convenient to work with. Cohyperations, meanwhile, give a novel way to describe slowly increasing functions as often appear, for example, in proof theory.
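    At finite stages the additivity condition f^(\alpha + \beta) = f^\alpha f^\beta reduces to ordinary composition of iterates, which a short sketch can verify (my own finite analogue; the transfinite stages, and the normality and minimality conditions, require the ordinal machinery of the paper):

    ```python
    def iterate(f, n):
        """Return the n-th iterate of f: f^0 = id, f^(m+n) = f^m composed with f^n."""
        def f_n(x):
            for _ in range(n):
                x = f(x)
            return x
        return f_n

    f = lambda x: 2 * x + 1

    # Additivity at finite stages: f^(2+3)(x) == f^2(f^3(x)).
    x = 5
    assert iterate(f, 5)(x) == iterate(f, 2)(iterate(f, 3)(x))
    print(iterate(f, 5)(x))
    ```

    For finite exponents left- and right-additivity coincide because natural-number addition is commutative; the distinction between hyperations and cohyperations only becomes visible at transfinite stages, where ordinal addition is not commutative.
    
    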

    Computational reverse mathematics and foundational analysis

    Reverse mathematics studies which subsystems of second-order arithmetic are equivalent to key theorems of ordinary, non-set-theoretic mathematics. The main philosophical application of reverse mathematics proposed thus far is foundational analysis, which explores the limits of different foundations for mathematics in a formally precise manner. This paper gives a detailed account of the motivations and methodology of foundational analysis, which have heretofore been largely left implicit in the practice. It then shows how this account can be fruitfully applied in the evaluation of major foundational approaches by a careful examination of two case studies: a partial realization of Hilbert's program due to Simpson [1988], and predicativism in the extended form due to Feferman and Schütte. Shore [2010, 2013] proposes that equivalences in reverse mathematics be proved in the same way as inequivalences, namely by considering only \omega-models of the systems in question. Shore refers to this approach as computational reverse mathematics. This paper shows that despite some attractive features, computational reverse mathematics is inappropriate for foundational analysis, for two major reasons. Firstly, the computable entailment relation employed in computational reverse mathematics does not preserve justification for the foundational programs above. Secondly, computable entailment is a \Pi^1_1-complete relation, and hence employing it commits one to theoretical resources which outstrip those available within any foundational approach that is proof-theoretically weaker than \Pi^1_1\text{-}\mathsf{CA}_0.
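    The entailment relation at issue can be stated compactly (my paraphrase of the standard definition of truth in all \omega-models, not a formula from the abstract):

    ```latex
    % Computable entailment: \varphi follows from T over \omega-models
    T \models_\omega \varphi
      \quad\Longleftrightarrow\quad
      \text{for every } \omega\text{-model } \mathcal{M}:\;
      \mathcal{M} \models T \;\Rightarrow\; \mathcal{M} \models \varphi .
    ```

    Restricting attention to \omega-models fixes the first-order part of each model to the standard natural numbers, which is what makes the relation \Pi^1_1 rather than merely recursively enumerable.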