
    Thin Games with Symmetry and Concurrent Hyland-Ong Games

    We build a cartesian closed category, called Cho, based on event structures. It allows an interpretation of higher-order stateful concurrent programs that is refined and precise: on the one hand it is conservative with respect to standard Hyland-Ong games when interpreting purely functional programs as innocent strategies, while on the other hand it is much more expressive. The interpretation of programs constructs compositionally a representation of their execution that exhibits causal dependencies and remembers the points of non-deterministic branching. The construction is in two stages. First, we build a compact closed category Tcg. It is a variant of Rideau and Winskel's category CG, with the difference that games and strategies in Tcg are equipped with symmetry to express that certain events are essentially the same. This is analogous to the underlying category of AJM games enriching simple games with an equivalence relation on plays. Building on this category, we construct the cartesian closed category Cho as having as objects the standard arenas of Hyland-Ong games, with strategies, represented by certain event structures, playing on games with symmetry obtained as expanded forms of these arenas. To illustrate and shed an operational light on these constructions, we interpret (a close variant of) Idealized Parallel Algol in Cho.
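
    As an informal aid to the abstract above, the sketch below shows the basic data of a (prime) event structure, the kind of object the strategies are built from: events, a causal dependency order, and a conflict relation marking points of non-deterministic branching. The Python encoding and the toy question/answer example are illustrative assumptions, not taken from the paper.

        # Toy (prime) event structure: events, causal dependencies, and binary
        # conflicts recording incompatible branches. Illustrative names only.
        from dataclasses import dataclass

        @dataclass
        class EventStructure:
            events: set
            causes: set    # pairs (a, b): event a must occur before event b
            conflict: set  # frozensets {a, b}: a and b exclude each other

            def deps(self, e):
                """All events that must occur before e (transitive causes)."""
                todo, seen = {a for (a, b) in self.causes if b == e}, set()
                while todo:
                    d = todo.pop()
                    if d not in seen:
                        seen.add(d)
                        todo |= {a for (a, b) in self.causes if b == d}
                return seen

            def is_configuration(self, xs):
                """A configuration is a down-closed, conflict-free set of events."""
                xs = set(xs)
                down_closed = all(self.deps(e) <= xs for e in xs)
                conflict_free = all(not (c <= xs) for c in self.conflict)
                return down_closed and conflict_free

        # A question 'q' followed by two mutually exclusive answers.
        es = EventStructure(
            events={"q", "a1", "a2"},
            causes={("q", "a1"), ("q", "a2")},
            conflict={frozenset({"a1", "a2"})},
        )
        assert es.is_configuration({"q", "a1"})
        assert not es.is_configuration({"a1"})             # not down-closed
        assert not es.is_configuration({"q", "a1", "a2"})  # conflicting branches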

    Tensor types and their use in physics

    The content of this paper can be roughly organized into a three-level hierarchy of generality. At the first, most general level, we introduce a new language which allows us to express various categorical structures in a systematic and explicit manner in terms of so-called 2-schemes. Although 2-schemes can formalize categorical structures such as symmetric monoidal categories, they are not limited to this, and can be used to define structures with no categorical analogue. Most categorical structures come with an effective graphical calculus such as string diagrams for symmetric monoidal categories, and the same is true more generally for interesting 2-schemes. In this work, we focus on one particular non-categorical 2-scheme, whose instances we refer to as tensor types. At the second level of the hierarchy, we work out different flavors of this 2-scheme in detail. The effective graphical calculus of tensor types is that of tensor networks or Penrose diagrams, that is, string diagrams without a flow of time. As such, tensor types are similar to compact closed categories, though there are various small but potentially important differences. Also, the two definitions use completely different mechanisms despite both being examples of 2-schemes. At the third level of the hierarchy, we provide a long list of different families of concrete tensor types, in a way which makes them accessible to concrete computations, motivated by their potential use in physics. Different tensor types describe different types of physical models, such as classical or quantum physics, deterministic or statistical physics, many-body or single-body physics, or matter with or without symmetries or fermions.
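
    To make the graphical calculus mentioned above concrete, the snippet below contracts a tiny tensor network (a Penrose diagram: boxes are tensors, shared indices are wires, with no distinguished flow of time) using numpy. It is a generic illustration of tensor-network contraction and does not implement 2-schemes or tensor types themselves; the shapes and index names are arbitrary assumptions.

        # Contract a three-tensor network with numpy's einsum.
        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.normal(size=(2, 3))     # legs: i, j
        B = rng.normal(size=(3, 4, 2))  # legs: j, k, l
        C = rng.normal(size=(4, 2))     # legs: k, l

        # Wires of the diagram: j joins A to B, k and l join B to C.
        # Leg i is left open, so the contracted network is a vector over i.
        v = np.einsum('ij,jkl,kl->i', A, B, C)
        print(v.shape)  # (2,)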

    In Search of Effectful Dependent Types

    Real-world programming languages crucially depend on the availability of computational effects to achieve programming convenience and expressive power as well as program efficiency. Logical frameworks rely on predicates, or dependent types, to express detailed logical properties about entities. According to the Curry-Howard correspondence, programming languages and logical frameworks should be very closely related. However, a language that has both good support for real programming and serious proving is still missing from the programming languages zoo. We believe this is due to a fundamental lack of understanding of how dependent types should interact with computational effects. In this thesis, we make a contribution towards such an understanding, with a focus on semantic methods. Comment: PhD thesis, version submitted to Exam School.

    Symmetric Edit Lenses: A New Foundation for Bidirectional Languages

    Lenses are bidirectional transformations between pairs of connected structures capable of translating an edit on one structure into an edit on the other. Most of the extensive existing work on lenses has focused on the special case of asymmetric lenses, where one structure is taken as primary and the other is thought of as a projection or view. Some symmetric variants exist, where each structure contains information not present in the other, but these all lack the basic operation of composition. Additionally, existing accounts do not represent edits carefully, making incremental operation difficult or producing unsatisfactory synchronization candidates. We present a new symmetric formulation which works with descriptions of changes to structures, rather than with the structures themselves. We construct a semantic space of edit lenses between “editable structures”—monoids of edits with a partial monoid action for applying edits—with natural laws governing their behavior. We present generalizations of a number of known constructions on asymmetric lenses and settle some longstanding questions about their properties—in particular, we prove the existence of (symmetric monoidal) tensor products and sums and the non-existence of full categorical products and sums in a category of lenses. Universal algebra shows how to build iterator lenses for structured data such as lists and trees, yielding lenses for operations like mapping, filtering, and concatenation from first principles. More generally, we provide mapping combinators based on the theory of containers. Finally, we present a prototype implementation of the core theory and take a first step in addressing the challenge of translating between user gestures and the internal representation of edits.
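
    The sketch below illustrates the interface suggested by the abstract: a symmetric edit lens carries an edit on either side, together with a private complement, to an edit on the other side, where each side's edits form a monoid acting (partially) on its structure. The concrete instance, synchronizing a list of entries with a counter of its length, is our own illustrative assumption and is not from the paper; the laws the paper imposes are not checked here.

        # Shape of a symmetric edit lens: propagate edits across, threading a
        # private complement. The concrete instance below is hypothetical.
        from dataclasses import dataclass
        from typing import Any, Callable, Tuple

        @dataclass
        class EditLens:
            init_complement: Any
            put_r: Callable[[Any, Any], Tuple[Any, Any]]  # (X-edit, complement) -> (Y-edit, complement)
            put_l: Callable[[Any, Any], Tuple[Any, Any]]  # (Y-edit, complement) -> (X-edit, complement)

        # X-edits: lists of appended entries (monoid under concatenation).
        # Y-edits: integer increments of a counter (monoid under addition).
        # Complement: the entries themselves, invisible on the counter side.
        def put_r(x_edit, comp):
            return len(x_edit), comp + x_edit    # appending n entries bumps the count by n

        def put_l(y_edit, comp):
            filler = ["<new>"] * y_edit          # the counter grew: invent placeholder entries
            return filler, comp + filler

        count_lens = EditLens(init_complement=[], put_r=put_r, put_l=put_l)

        dy, c = count_lens.put_r(["alice", "bob"], count_lens.init_complement)
        print(dy, c)  # 2 ['alice', 'bob']
        dx, c = count_lens.put_l(1, c)
        print(dx, c)  # ['<new>'] ['alice', 'bob', '<new>']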

    Sample Path Analysis of Integrate-and-Fire Neurons

    Computational neuroscience is concerned with answering two intertwined questions that are based on the assumption that spatio-temporal patterns of spikes form the universal language of the nervous system. First, what function does a specific neural circuitry perform in the elaboration of a behavior? Second, how do neural circuits process behaviorally-relevant information? Non-linear system analysis has proven instrumental in understanding the coding strategies of early neural processing in various sensory modalities. Yet, at higher levels of integration, it fails to help in deciphering the response of assemblies of neurons to complex naturalistic stimuli. If neural activity can be assumed to be primarily driven by the stimulus at early stages of processing, the intrinsic activity of neural circuits interacts with their high-dimensional input to transform it in a stochastic non-linear fashion at the cortical level. As a consequence, any attempt to fully understand the brain through a system analysis approach becomes illusory. However, it is increasingly advocated that neural noise plays a constructive role in neural processing, facilitating information transmission. This prompts us to seek insight into the neural code by studying the stochasticity of neuronal activity, which is viewed as biologically relevant. Such an endeavor requires the design of guiding theoretical principles to assess the potential benefits of neural noise. In this context, meeting the requirements of biological relevance and computational tractability, while providing a stochastic description of neural activity, prescribes the adoption of the integrate-and-fire model. In this thesis, building on the path-wise description of neuronal activity, we propose to further the stochastic analysis of the integrate-and-fire model through a combination of numerical and theoretical techniques. To begin, we expand upon the path-wise construction of linear diffusions as inhomogeneous Markov chains, which offers a natural setting to describe leaky integrate-and-fire neurons. Based on the theoretical analysis of the first-passage problem, we then explore the interplay between the internal neuronal noise and the statistics of injected perturbations at the single-unit level, and examine its implications for neural coding. At the population level, we also develop an exact event-driven implementation of a Markov network of perfect integrate-and-fire neurons with both time-delayed instantaneous interactions and arbitrary topology. We hope our approach will provide new paradigms to understand how sensory inputs perturb intrinsic neural activity and help accomplish the goal of developing a new technique for identifying relevant patterns of population activity. From a perturbative perspective, our study shows how injecting frozen noise in different flavors can help characterize internal neuronal noise, which is presumably functionally relevant to information processing. From a simulation perspective, our event-driven framework is well suited to scrutinizing the stochastic behavior of simple recurrent motifs as well as the temporal dynamics of large-scale networks under spike-timing-dependent plasticity.
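
    For readers unfamiliar with the model class, the snippet below is a minimal forward Euler-Maruyama simulation of a single noisy leaky integrate-and-fire neuron; all parameter values are arbitrary assumptions. It only illustrates the threshold-and-reset mechanism, whereas the thesis's contributions (the path-wise Markov-chain construction and the exact event-driven network simulation) go well beyond this sketch.

        # Forward Euler-Maruyama simulation of one noisy leaky integrate-and-fire
        # neuron; all parameter values are placeholder assumptions.
        import numpy as np

        def simulate_lif(T=1.0, dt=1e-4, tau=0.02, v_rest=0.0, v_th=1.0,
                         v_reset=0.0, i_ext=1.2, sigma=0.5, seed=0):
            """Return (time grid, membrane-potential trace, spike times)."""
            rng = np.random.default_rng(seed)
            n = int(T / dt)
            t = np.arange(n) * dt
            v = np.empty(n)
            v[0] = v_rest
            spikes = []
            for k in range(1, n):
                drift = (-(v[k - 1] - v_rest) + i_ext) * dt / tau
                noise = sigma * np.sqrt(dt / tau) * rng.standard_normal()
                v[k] = v[k - 1] + drift + noise
                if v[k] >= v_th:        # threshold crossing: spike and reset
                    spikes.append(t[k])
                    v[k] = v_reset
            return t, v, np.array(spikes)

        t, v, spikes = simulate_lif()
        print(f"{len(spikes)} spikes, mean rate {len(spikes) / t[-1]:.1f} Hz")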

    Computer Science Logic 2018: CSL 2018, September 4-8, 2018, Birmingham, United Kingdom
