
    Modeling Martin-Löf Type Theory in Categories

    We present a model of Martin-Löf type theory that includes both dependent products and the identity type. It is based on the category of small categories, with cloven Grothendieck bifibrations used to model dependent types. The identity type is modeled by a path functor that appears to be of independent interest from the point of view of homotopy theory. We briefly describe the model's strengths and limitations.
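    As a point of reference, here is a minimal Lean sketch of the two type formers the model interprets, dependent products and the identity type with its eliminator J. The names Pi, Path, and Path.J are ours, and the snippet only states the type-theoretic side; it is not the paper's categorical construction via bifibrations and the path functor.

```lean
universe u v

-- Dependent product Π (x : A), B x; in the paper these are modeled via
-- cloven Grothendieck bifibrations over the category of small categories.
def Pi (A : Type u) (B : A → Type v) : Type (max u v) :=
  (x : A) → B x

-- Identity type (called Path here, echoing the paper's path functor).
inductive Path {A : Type u} (a : A) : A → Type u where
  | refl : Path a a

-- The J eliminator: to prove C x p for every path p : Path a x,
-- it suffices to handle the reflexivity case.
def Path.J {A : Type u} {a : A}
    (C : (x : A) → Path a x → Sort v) (c : C a .refl) :
    (x : A) → (p : Path a x) → C x p
  | _, .refl => c
```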

    Initial Semantics for Reduction Rules

    We give an algebraic characterization of the syntax and operational semantics of a class of simply-typed languages, such as the language PCF: we characterize simply-typed syntax with variable binding and equipped with reduction rules via a universal property, namely as the initial object of a category of models. For this purpose, we employ techniques developed in two previous works: in the first, we model syntactic translations between languages over different sets of types as initial morphisms in a category of models; in the second, we characterize untyped syntax with reduction rules as the initial object in a category of models. In the present work, we combine these techniques in order to characterize simply-typed syntax with reduction rules as the initial object in a category. The universal property yields an operator which allows one to specify translations, semantically faithful by construction, between languages over possibly different sets of types. As an example, we upgrade a translation from PCF to the untyped lambda calculus, given in previous work, to account for reduction in the source and target. Specifically, we specify a reduction semantics in the source and target language through suitable rules. By equipping the untyped lambda calculus with the structure of a model of PCF, initiality yields a translation from PCF to the lambda calculus that is faithful with respect to the reduction semantics specified by the rules. This paper is an extended version of an article published in the proceedings of WoLLIC 2012. Comment: Extended version of arXiv:1206.4547; proves a variant of a result of the PhD thesis arXiv:1206.455
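    To make concrete what such a translation looks like, here is a naive, type-erasing Lean sketch from a small PCF fragment into untyped lambda terms in de Bruijn form. The fragment, names, and encoding choices (Church numerals for numerals, a Y combinator for fix) are ours and carry no faithfulness guarantee; obtaining a translation that respects reduction by construction is precisely what the paper's initiality result provides.

```lean
inductive Ty where
  | nat
  | arrow : Ty → Ty → Ty

-- A fragment of PCF syntax with de Bruijn variables.
inductive PCF where
  | var  : Nat → PCF
  | lam  : Ty → PCF → PCF
  | app  : PCF → PCF → PCF
  | zero : PCF
  | succ : PCF → PCF
  | fix  : PCF → PCF

-- Untyped lambda calculus with de Bruijn variables.
inductive Lam where
  | var : Nat → Lam
  | lam : Lam → Lam
  | app : Lam → Lam → Lam

-- Church numeral zero: λs. λz. z
def churchZero : Lam := .lam (.lam (.var 0))

-- Church successor: λn. λs. λz. s (n s z)
def churchSucc : Lam :=
  .lam (.lam (.lam (.app (.var 1) (.app (.app (.var 2) (.var 1)) (.var 0)))))

-- Y combinator: λf. (λx. f (x x)) (λx. f (x x))
def yCombinator : Lam :=
  .lam (.app (.lam (.app (.var 1) (.app (.var 0) (.var 0))))
             (.lam (.app (.var 1) (.app (.var 0) (.var 0)))))

-- Type-erasing translation of the PCF fragment into untyped lambda terms.
def translate : PCF → Lam
  | .var n   => .var n
  | .lam _ t => .lam (translate t)
  | .app s t => .app (translate s) (translate t)
  | .zero    => churchZero
  | .succ t  => .app churchSucc (translate t)
  | .fix t   => .app yCombinator (translate t)
```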

    Generalized Universe Hierarchies and First-Class Universe Levels

    In type theories, universe hierarchies are commonly used to increase the expressive power of the theory while avoiding inconsistencies arising from size issues. There are numerous ways to specify universe hierarchies, and theories may differ in details of cumulativity, choice of universe levels, specification of type formers and eliminators, and available internal operations on levels. In the current work, we aim to provide a framework which covers a large part of the design space. First, we develop syntax and semantics for cumulative universe hierarchies, where levels may come from any set equipped with a transitive well-founded ordering. In the semantics, we show that induction-recursion can be used to model transfinite hierarchies and also to support lifting operations on type codes which strictly preserve type formers. Then, we consider a setup where universe levels are first-class types and subject to arbitrary internal reasoning. This generalizes the bounded polymorphism features of Coq and, at the same time, the internal level computations in Agda.
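    For orientation, one plausible shape for a cumulative lifting operation on Tarski-style type codes, with strict preservation of dependent products; this is our reconstruction for illustration, not the paper's exact formulation:

    \[
      \frac{i < j \qquad \Gamma \vdash A : \mathrm{U}_i}{\Gamma \vdash \mathrm{Lift}_{i<j}\,A : \mathrm{U}_j}
      \qquad
      \mathrm{El}(\mathrm{Lift}_{i<j}\,A) \equiv \mathrm{El}(A)
      \qquad
      \mathrm{Lift}_{i<j}(\Pi\,A\,B) \equiv \Pi\,(\mathrm{Lift}_{i<j}\,A)\,(\lambda x.\ \mathrm{Lift}_{i<j}\,(B\,x))
    \]

    The middle equation makes the lift definitionally invisible to decoding, which is what lets the right-hand equation typecheck: since \(\mathrm{El}(\mathrm{Lift}_{i<j}\,A) \equiv \mathrm{El}(A)\), a family \(B\) over \(\mathrm{El}(A)\) can be reused over the lifted code, so the lift preserves the Π former strictly rather than merely up to isomorphism.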

    The assessment of procedural skills in physiotherapy education: A measurement study using the Rasch model

    Background: Procedural skills are a key element in the training of future physiotherapists. Procedural skills relate to the acquisition of appropriate motor skills, which allow the safe application of clinical procedures to patients. In order to evaluate procedural skills in physiotherapy education, validated assessment instruments are required. Recently, the Assessment of Procedural Skills in Physiotherapy Education (APSPT) tool was developed. The overall aim of this study was to establish the structural validity of the APSPT. In order to do this, the following objectives were examined: i) the fit of the items of the APSPT to the Rasch model, ii) the fit of the overall score to the Rasch model, iii) the difficulty of each test item, and iv) whether the difficulty levels of the individual test items cover the whole capacity spectrum of students in pre-registration physiotherapy education.
    Methods: For this observational cross-sectional measurement properties study, a convenience sample of 69 undergraduate pre-registration physiotherapy students of the HES-SO Valais-Wallis was recruited. Participants were instructed to perform a task procedure on a simulated patient. The performance was evaluated with the APSPT. A conditional maximum likelihood approach was used to estimate the parameters of a partial credit model for polytomous item responses. Item fit, ordering of thresholds, targeting, and goodness of fit to the Rasch model were assessed.
    Results: Item fit statistics showed that 25 items of the APSPT fit the Rasch model adequately. Disordering of item thresholds did not occur, and the targeting of the APSPT was adequate to measure the abilities of the included participants. Unidimensionality and subgroup homogeneity were confirmed.
    Conclusion: This study presented evidence for the structural validity of the APSPT. Unidimensionality of the APSPT was confirmed, which provides evidence that the several subcategories of procedural skills in physiotherapy education form a single latent dimension. However, the results should be interpreted with caution given the small sample size.
    https://doi.org/10.1186/s40945-020-00080-0
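    For reference, the partial credit model estimated here has the standard Masters (1982) form (notation ours, not quoted from the paper): the probability that a student with ability \(\theta_n\) receives score \(x\) on item \(i\) with threshold parameters \(\delta_{ik}\) is

    \[
      P(X_{ni} = x) \;=\;
      \frac{\exp\Bigl(\sum_{k=0}^{x} (\theta_n - \delta_{ik})\Bigr)}
           {\sum_{h=0}^{m_i} \exp\Bigl(\sum_{k=0}^{h} (\theta_n - \delta_{ik})\Bigr)},
      \qquad x \in \{0, \dots, m_i\},
      \qquad \sum_{k=0}^{0} (\theta_n - \delta_{ik}) \equiv 0.
    \]

    Conditional maximum likelihood estimation conditions on each student's total score, the sufficient statistic for \(\theta_n\), so the item threshold parameters \(\delta_{ik}\) can be estimated without assuming any distribution for student ability.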

    Identification of probabilities

    Within psychology, neuroscience and artificial intelligence, there has been increasing interest in the proposal that the brain builds probabilistic models of sensory and linguistic input: that is, that it infers a probabilistic model from a sample. The practical problems of such inference are substantial: the brain has limited data and restricted computational resources. But there is a more fundamental question: is inferring a probabilistic model from a sample possible even in principle? We explore this question and find some surprisingly positive and general results. First, for a broad class of probability distributions characterized by computability restrictions, we specify a learning algorithm that will almost surely identify a probability distribution in the limit given a finite i.i.d. sample of sufficient but unknown length. This is shown to hold similarly for sequences generated by a broad class of Markov chains, subject to computability assumptions. The technical tool is the strong law of large numbers. Second, for a large class of dependent sequences, we specify an algorithm which identifies in the limit a computable measure for which the sequence is typical, in the sense of Martin-Löf (there may be more than one such measure). The technical tool here is the theory of Kolmogorov complexity. We analyze the associated predictions in both cases. We also briefly consider special cases, including language learning, and wider theoretical implications for psychology.
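    To make the success criterion in the first result concrete, a Gold-style reading (our paraphrase, not the paper's exact definitions): a learner \(L\) identifies the distribution \(\mu\) in the limit if, for almost every infinite i.i.d. sample \(x_1, x_2, \dots\) drawn from \(\mu\), there is an \(N\) such that \(L(x_1, \dots, x_n) = \mu\) for all \(n \ge N\). The strong law of large numbers supplies the almost-sure convergence that can drive such a learner:

    \[
      \Pr\Bigl(\lim_{n \to \infty} \tfrac{1}{n} \sum_{i=1}^{n} \mathbf{1}[x_i \in E] \;=\; \mu(E)\Bigr) \;=\; 1
      \qquad \text{for each fixed measurable event } E,
    \]

    so empirical frequencies over a countable family of events almost surely converge to their true probabilities, which is the sense in which a learner restricted to a computably constrained class of distributions can eventually settle on the right one.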