
    Extended Initiality for Typed Abstract Syntax

    Initial Semantics aims at interpreting the syntax associated to a signature as the initial object of some category of 'models', yielding induction and recursion principles for abstract syntax. Zsidó proves an initiality result for simply-typed syntax: given a signature S, the abstract syntax associated to S constitutes the initial object in a category of models of S in monads. However, the iteration principle her theorem provides only accounts for translations between two languages over a fixed set of object types. We generalize Zsidó's notion of model such that object types may vary, yielding a larger category, while preserving initiality of the syntax therein. Thus we obtain an extended initiality theorem for typed abstract syntax, in which translations between terms over different types can be specified via the associated category-theoretic iteration operator as an initial morphism. Our definitions ensure that translations specified via initiality are type-safe, i.e. compatible with the typing in the source and target language in the obvious sense. Our main example is given via the propositions-as-types paradigm: we specify propositions and inference rules of classical and intuitionistic propositional logics through their respective typed signatures. Afterwards we use the category-theoretic iteration operator to specify a double negation translation from the former to the latter. A second example is given by the signature of PCF. For this particular case, we formalize the theorem in the proof assistant Coq. Afterwards we specify, via the category-theoretic iteration operator, translations from PCF to the untyped lambda calculus.
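
    To make the double negation translation concrete, the clauses below give the Gödel–Gentzen negative translation for propositional formulas, one standard form such a translation can take; the paper specifies its own translation via typed signatures and the iteration operator, so its exact clauses may differ.

        % Gödel–Gentzen negative translation, propositional fragment
        % (one standard presentation, shown here for illustration only)
        \begin{align*}
          p^{N}           &= \lnot\lnot p && \text{($p$ atomic)}\\
          \bot^{N}        &= \bot\\
          (A \land B)^{N} &= A^{N} \land B^{N}\\
          (A \lor B)^{N}  &= \lnot(\lnot A^{N} \land \lnot B^{N})\\
          (A \to B)^{N}   &= A^{N} \to B^{N}
        \end{align*}

    Under this translation a formula is provable classically exactly when its image is provable intuitionistically, which is the sense in which classical propositional logic is interpreted in the intuitionistic one.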

    Automating Fine Concurrency Control in Object-Oriented Databases

    Several proposals have been made to provide concurrency control adapted to object-oriented databases. However, most of these proposals miss the fact that considering solely read and write access modes on instances may lead to less parallelism than in relational databases! This paper copes with that issue, and the advantages are numerous: (1) commutativity of methods is determined a priori and automatically by the compiler, without measurable overhead, (2) run-time checking of commutativity is as efficient as for compatibility, (3) inverse operations need not be specified for recovery, (4) this scheme does not preclude more sophisticated approaches, and, last but not least, (5) relational and object-oriented concurrency control schemes with read and write access modes are subsumed under this proposal.
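
    To illustrate why method-level commutativity admits more parallelism than plain read/write modes, the sketch below checks a hand-written commutativity table for a set-like object; the class, method names and table are hypothetical and merely stand in for what the paper derives automatically at compile time.

        # Illustrative sketch (hypothetical names): a commutativity table for a
        # set-like object. Two insertions of different elements commute even
        # though both are "writes", so a scheduler consulting this table admits
        # more concurrency than one that only distinguishes read and write locks.
        COMMUTES = {
            ("insert", "insert"),
            ("delete", "delete"),
            ("member", "member"),
            ("insert", "member"), ("member", "insert"),
            ("delete", "member"), ("member", "delete"),
        }

        def can_run_concurrently(op1, arg1, op2, arg2):
            """Return True if the two invocations may be scheduled in parallel."""
            if (op1, op2) not in COMMUTES:
                return False
            # Conservative refinement: invocations on the same element conflict
            # unless both are pure reads.
            if arg1 == arg2 and not (op1 == op2 == "member"):
                return False
            return True

        print(can_run_concurrently("insert", 1, "insert", 2))  # True: different elements
        print(can_run_concurrently("insert", 1, "delete", 1))  # False: conflicting pair

    Under a scheme restricted to read and write modes, both invocations in the first example would be classified as writes on the same instance and serialised, which is exactly the loss of parallelism the paper sets out to avoid.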

    Formalising the multidimensional nature of social networks

    Individuals interact with conspecifics in a number of behavioural contexts or dimensions. Here, we formalise this by considering a social network between n individuals interacting in b behavioural dimensions as an n × n × b multidimensional object. In addition, we propose that the topology of this object is driven by individual needs to reduce uncertainty about the outcomes of interactions in one or more dimensions. The proposal grounds social network dynamics and evolution in individual selection processes and allows us to define the uncertainty of the social network as the joint entropy of its constituent interaction networks. In support of these propositions we use simulations and natural 'knock-outs' in a free-ranging baboon troop to show (i) that such an object can display a small-world state and (ii) that, as predicted, changes in interactions after social perturbations lead to a more certain social network, in which the outcomes of interactions are easier for members to predict. This new formalisation of social networks provides a framework within which to predict network dynamics and evolution under the assumption that it is driven by individuals seeking to reduce the uncertainty of their social environment.
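
    A minimal sketch of the proposed uncertainty measure, under the assumption that each dyad's state is the b-tuple of edge values it takes across the behavioural dimensions and that the joint entropy is estimated from the empirical distribution of those tuples; the names and the estimator are illustrative rather than taken from the paper.

        import numpy as np
        from collections import Counter

        def network_joint_entropy(M):
            """Joint entropy (bits) of an n x n x b multidimensional network.

            Each ordered dyad (i, j), i != j, contributes the b-tuple of its
            edge states across the b behavioural dimensions; entropy is taken
            over the empirical distribution of these tuples. This is one
            plausible reading of 'joint entropy of the constituent networks'.
            """
            n, _, b = M.shape
            tuples = [tuple(M[i, j, :]) for i in range(n) for j in range(n) if i != j]
            counts = Counter(tuples)
            total = sum(counts.values())
            p = np.array([c / total for c in counts.values()])
            return float(-np.sum(p * np.log2(p)))

        # Example: 10 individuals, 3 behavioural dimensions, random binary edges.
        rng = np.random.default_rng(0)
        M = rng.integers(0, 2, size=(10, 10, 3))
        print(network_joint_entropy(M))

    On this reading, a perturbation that makes dyadic interaction patterns more predictable concentrates the tuple distribution and lowers the joint entropy, i.e. yields a more certain network.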

    The Ontology of Reference: Studies in Logic and Phenomenology

    We propose a dichotomy between object-entities and meaning-entities. The former are entities such as molecules, cells, organisms, organizations, numbers, shapes, and so forth. The latter are entities such as concepts, propositions, and theories belonging to the realm of logic. Frege distinguished analogously between a ‘realm of reference’ and a ‘realm of sense’, which he presented in some passages as mutually exclusive. This, however, contradicts his assumption elsewhere that every entity is a referent (even Fregean senses can be referred to by means of suitably constructed expressions). We apply the meaning/object dichotomy to mathematical and fictional entities, and develop a view of mathematical and other abstract objects as the results of certain types of demarcation – as for example the North Sea is the result of demarcations built into naval charts. Such demarcations reflect demarcatory acts, which presuppose complex cognitive and social structures enabling the creation of maps, of theories (of mathematics, of natural science), and of novels.

    Transformations for compositional data with zeros with an application to forensic evidence evaluation

    In forensic science, likelihood ratios provide a natural way of computing the value of evidence under competing propositions such as "the compared samples have originated from the same object" (prosecution) and "the compared samples have originated from different objects" (defence). We use a two-level multivariate likelihood ratio model for comparison of forensic glass evidence in the form of elemental composition data under three data transformations: the logratio transformation, a complementary log-log type transformation and a hyperspherical transformation. The performance of the three transformations in the evaluation of evidence is assessed in simulation experiments through use of the proportions of false negatives and false positives.
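
    To make the transformation step concrete, the sketch below applies an additive log-ratio transformation to compositional data after a simple multiplicative zero replacement; the replacement constant and the choice of reference part are illustrative and not necessarily those used in the paper.

        import numpy as np

        def replace_zeros(x, delta=1e-5):
            """Multiplicative zero replacement for compositions whose rows sum to 1:
            zeros become delta and the non-zero parts are shrunk so that each row
            still sums to 1. Illustrative choice; the paper may treat zeros differently."""
            x = np.asarray(x, dtype=float)
            zeros = x == 0
            k = zeros.sum(axis=1, keepdims=True)  # number of zero parts per row
            return np.where(zeros, delta, x * (1 - k * delta))

        def alr(x):
            """Additive log-ratio transform: log of each part relative to the last part."""
            x = np.asarray(x, dtype=float)
            return np.log(x[:, :-1] / x[:, -1:])

        # Example: two elemental compositions (rows sum to 1), one containing a zero.
        comps = np.array([[0.70, 0.20, 0.10],
                          [0.55, 0.45, 0.00]])
        print(alr(replace_zeros(comps)))

    The transformed values live in unconstrained real space, where a two-level model of within-object and between-object variation can be fitted with standard multivariate distributions and used to compute the likelihood ratio.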