
    Computer-Aided Derivation of Multi-scale Models: A Rewriting Framework

    We introduce a framework for the computer-aided derivation of multi-scale models. It combines an asymptotic method from the field of partial differential equations with term rewriting techniques from computer science. In our approach, a multi-scale model derivation is characterized by the features taken into account in the asymptotic analysis. Its formulation consists of the derivation of a reference model associated with an elementary nominal model, together with a set of transformations applied to this proof until it accounts for the desired features. In addition to the reference model proof, the framework includes first-order rewriting principles designed for asymptotic model derivations, and second-order rewriting principles dedicated to transformations of model derivations. We apply the method to generate a family of homogenized models for second-order elliptic equations with periodic coefficients posed in multi-dimensional domains, possibly with multiple subdomains and/or thin domains. Comment: 26 pages.
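
    To make the term rewriting ingredient concrete, the following is a minimal sketch of first-order rewriting in Python: terms as nested tuples, rules as pattern/replacement pairs, and rewriting to a normal form. The term constructors and the toy simplification rules are illustrative assumptions, not the paper's actual rewriting principles.

```python
# Minimal first-order term rewriting sketch (illustrative; not the paper's rule set).
# Terms are nested tuples: ("op", arg1, arg2, ...); variables are strings starting with "?".

def match(pattern, term, subst=None):
    """Try to match `pattern` against `term`; return a substitution dict or None."""
    subst = dict(subst or {})
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in subst and subst[pattern] != term:
            return None
        subst[pattern] = term
        return subst
    if isinstance(pattern, tuple) and isinstance(term, tuple) and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def substitute(template, subst):
    if isinstance(template, str) and template.startswith("?"):
        return subst[template]
    if isinstance(template, tuple):
        return tuple(substitute(t, subst) for t in template)
    return template

def rewrite_once(term, rules):
    """Apply the first matching rule at the root, otherwise recurse into subterms."""
    for lhs, rhs in rules:
        s = match(lhs, term)
        if s is not None:
            return substitute(rhs, s), True
    if isinstance(term, tuple):
        head, *args = term
        new_args, changed = [], False
        for a in args:
            na, c = rewrite_once(a, rules)
            new_args.append(na)
            changed = changed or c
        return (head, *new_args), changed
    return term, False

def normalize(term, rules, max_steps=100):
    for _ in range(max_steps):
        term, changed = rewrite_once(term, rules)
        if not changed:
            break
    return term

# Toy simplification rules: x + 0 -> x and x * 1 -> x.
rules = [(("add", "?x", ("const", 0)), "?x"),
         (("mul", "?x", ("const", 1)), "?x")]
print(normalize(("mul", ("add", ("var", "u"), ("const", 0)), ("const", 1)), rules))
# -> ('var', 'u')
```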

    Domain transfer for deep natural language generation from abstract meaning representations

    Stochastic natural language generation systems trained from labelled datasets are often domain-specific in their annotation and in their mapping from semantic input representations to lexical-syntactic outputs. As a result, learnt models fail to generalize across domains, heavily restricting their usability beyond single applications. In this article, we focus on the problem of domain adaptation for natural language generation. We show how linguistic knowledge from a source domain, for which labelled data is available, can be adapted to a target domain by reusing training data across domains. As a key to this, we propose to employ abstract meaning representations as a common semantic representation across domains. We model natural language generation as a long short-term memory recurrent neural network encoder-decoder, in which one recurrent neural network learns a latent representation of a semantic input, and a second recurrent neural network learns to decode it into a sequence of words. We show that the learnt representations can be transferred across domains and can be leveraged effectively to improve training on new, unseen domains. Experiments in three different domains and with six datasets demonstrate that the lexical-syntactic constructions learnt in one domain can be transferred to new domains, achieving up to 75-100% of the performance of in-domain training, as measured by objective metrics such as BLEU and semantic error rate and by a subjective human rating study. Training a policy with prior knowledge from a different domain is consistently better than pure in-domain training by up to 10%.
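
    As background for the architecture described above, here is a minimal sketch of an LSTM encoder-decoder in Python with PyTorch. The hyperparameters, the shared embedding table, and the linearization of the semantic input into a token sequence are illustrative assumptions, not the exact setup of the article.

```python
# Minimal LSTM encoder-decoder sketch (PyTorch); sizes and sharing choices are illustrative.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # shared source/target vocabulary (assumption)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src_ids, tgt_ids):
        # Encode the linearized semantic input (e.g. an AMR graph as a token sequence)
        # into a latent state, then decode that state into a word sequence.
        _, state = self.encoder(self.embed(src_ids))
        dec_out, _ = self.decoder(self.embed(tgt_ids), state)
        return self.out(dec_out)                  # (batch, tgt_len, vocab_size) logits

model = Seq2Seq(vocab_size=1000)
src = torch.randint(0, 1000, (2, 12))             # toy batch of semantic-input token ids
tgt = torch.randint(0, 1000, (2, 15))              # toy batch of target word ids
print(model(src, tgt).shape)                       # torch.Size([2, 15, 1000])
```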

    The left superior temporal gyrus is a shared substrate for auditory short-term memory and speech comprehension: evidence from 210 patients with stroke

    Competing theories of short-term memory function make specific predictions about the functional anatomy of auditory short-term memory and its role in language comprehension. We analysed high-resolution structural magnetic resonance images from 210 stroke patients and employed a novel voxel-based analysis to test the relationship between auditory short-term memory and speech comprehension. Using digit span as an index of auditory short-term memory capacity, we found that the structural integrity of a posterior region of the superior temporal gyrus and sulcus predicted auditory short-term memory capacity, even when performance on a range of other measures was factored out. We show that the integrity of this region also predicts the ability to comprehend spoken sentences. Our results therefore support cognitive models that posit a shared substrate between auditory short-term memory capacity and speech comprehension ability. The method applied here will be particularly useful for modelling structure–function relationships within other complex cognitive domains.
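
    As a rough illustration of a voxel-wise structure-behaviour analysis of this kind, the sketch below regresses a behavioural score on per-voxel tissue integrity while partialling out nuisance covariates. The data are random and the ordinary-least-squares model is an assumption made for illustration; it is not the study's actual statistical procedure.

```python
# Illustrative voxel-wise regression: does tissue integrity at each voxel predict
# digit span once nuisance covariates are accounted for? Random toy data only.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_voxels = 210, 500
digit_span = rng.normal(5, 1.5, n_patients)                   # behavioural score
covariates = rng.normal(size=(n_patients, 3))                 # e.g. age, lesion volume, other tasks
integrity = rng.uniform(0, 1, size=(n_patients, n_voxels))    # per-voxel tissue integrity

betas = np.empty(n_voxels)
for v in range(n_voxels):
    # Design matrix: intercept, covariates, and this voxel's integrity.
    X = np.column_stack([np.ones(n_patients), covariates, integrity[:, v]])
    coef, *_ = np.linalg.lstsq(X, digit_span, rcond=None)
    betas[v] = coef[-1]                                        # effect of this voxel's integrity

print("voxel with strongest positive association:", betas.argmax())
```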

    Spontaneous Analogy by Piggybacking on a Perceptual System

    Most computational models of analogy assume they are given a delineated source domain and often a specified target domain. These systems do not address how analogs can be isolated from large domains and spontaneously retrieved from long-term memory, a process we call spontaneous analogy. We present a system that represents relational structures as feature bags. Using this representation, our system leverages perceptual algorithms to automatically create an ontology of relational structures and to efficiently retrieve analogs for new relational structures from long-term memory. We provide a demonstration of our approach that takes a set of unsegmented stories, constructs an ontology of analogical schemas (corresponding to plot devices), and uses this ontology to efficiently find analogs within new stories, yielding significant time savings over linear analog retrieval at a small accuracy cost. Comment: Proceedings of the 35th Meeting of the Cognitive Science Society, 201
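
    The feature-bag idea can be illustrated with a small sketch: flatten a relational structure into a bag of structural features, then retrieve the most similar stored schema. The feature extraction and the cosine similarity used here are illustrative choices, not the paper's exact representation or retrieval algorithm.

```python
# Illustrative "feature bag" retrieval: relational structures become bags of features,
# and analogs are retrieved by similarity over those bags.
from collections import Counter
from math import sqrt

def feature_bag(relations):
    """Relations are (predicate, arg1, arg2) triples; emit structural features."""
    bag = Counter()
    for pred, a, b in relations:
        bag[pred] += 1                       # which relation occurs
        bag[(pred, "arg1", a)] += 1          # how its first argument is bound
        bag[(pred, "arg2", b)] += 1          # how its second argument is bound
    return bag

def cosine(b1, b2):
    dot = sum(b1[k] * b2[k] for k in b1)
    norm = sqrt(sum(v * v for v in b1.values())) * sqrt(sum(v * v for v in b2.values()))
    return dot / norm if norm else 0.0

memory = {
    "revenge-plot": feature_bag([("harms", "villain", "hero"), ("pursues", "hero", "villain")]),
    "rescue-plot":  feature_bag([("captures", "villain", "victim"), ("rescues", "hero", "victim")]),
}
query = feature_bag([("harms", "rival", "knight"), ("pursues", "knight", "rival")])
best = max(memory, key=lambda name: cosine(query, memory[name]))
print(best)   # the stored schema whose feature bag is most similar to the query
```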

    Backreaction in Cosmological Models

    Most cosmological models studied today are based on the assumption of homogeneity and isotropy. Observationally, one can find evidence supporting these assumptions on very large scales, the strongest being the near-isotropy of the Cosmic Microwave Background radiation after attributing the whole dipole to our proper motion relative to this background. However, on small and intermediate scales, up to several hundred Mpc, there are strong deviations from homogeneity and isotropy. The problem then arises of how to relate the observations to the homogeneous and isotropic models. The usual proposal for solving this problem is to assume that Friedmann-Lemaître models describe the mean observables. Such mean values may be identified with spatial averages. For Newtonian fluid dynamics, the averaging procedure has been discussed in detail by Buchert and Ehlers (1997), leading to an additional backreaction term in the Friedmann equation. We use the Eulerian linear approximation and the "Zel'dovich approximation" to estimate the effect of the backreaction term on the expansion. Our results indicate that even for domains matching the background density in the mean, the evolution of the scale factor strongly deviates from the Friedmann solution, depending critically on the velocity field inside. Comment: 4 pages LaTeX, 2 figures, a4wide.sty included.
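
    For orientation, the averaged expansion law with a backreaction term is often quoted, in notation close to that of Buchert and Ehlers (1997) and up to convention-dependent signs and factors, as:

```latex
% Averaged expansion law with backreaction (notation and conventions vary between papers):
3\,\frac{\ddot a_{\mathcal D}}{a_{\mathcal D}}
  + 4\pi G\,\langle\rho\rangle_{\mathcal D} - \Lambda = Q_{\mathcal D},
\qquad
Q_{\mathcal D} = \tfrac{2}{3}\,\big\langle (\theta - \langle\theta\rangle_{\mathcal D})^{2} \big\rangle_{\mathcal D}
  + 2\,\big\langle \omega^{2} - \sigma^{2} \big\rangle_{\mathcal D}.
```

    Here theta, sigma and omega are the local expansion, shear and vorticity scalars, and the angle brackets denote spatial averages over the domain D; the backreaction term Q_D vanishes for strictly homogeneous flow, recovering the standard Friedmann equation.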

    Planning as Tabled Logic Programming

    This paper describes Picat's planner, its implementation, and planning models for several domains used in the International Planning Competition (IPC) 2014. Picat's planner is implemented using tabling. During search, every state encountered is tabled, and tabled states are used to effectively perform resource-bounded search. In Picat, structured data can be used to avoid enumerating all possible permutations of objects, and term sharing is used to avoid duplication of common state data. This paper presents several modeling techniques through the example models, ranging from designing state representations that facilitate data sharing and symmetry breaking, to encoding actions with operations for efficient precondition checking and state updating, to incorporating domain knowledge and heuristics. Broadly, this paper demonstrates the effectiveness of tabled logic programming for planning, and argues for the importance of modeling despite recent significant progress in domain-independent PDDL planners. Comment: 27 pages in TPLP 201
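
    To illustrate what tabled, resource-bounded search looks like, here is a small Python sketch that memoizes visited (state, resource) pairs in a toy grid-navigation domain. The domain, the memoization policy, and the plan representation are illustrative assumptions; they are not Picat's implementation.

```python
# Resource-bounded planning with a table: each (state, remaining resource) subproblem
# is solved once and reused. Toy 4x4 grid-navigation domain, illustrative only.
from functools import lru_cache

GOAL = (3, 3)

def actions(state):
    x, y = state
    for dx, dy, name in ((1, 0, "right"), (-1, 0, "left"), (0, 1, "up"), (0, -1, "down")):
        nx, ny = x + dx, y + dy
        if 0 <= nx <= 3 and 0 <= ny <= 3:
            yield name, (nx, ny)

@lru_cache(maxsize=None)          # the "table": (state, resource) -> best plan found
def plan(state, resource):
    if state == GOAL:
        return ()
    if resource == 0:
        return None
    best = None
    for name, nxt in actions(state):
        sub = plan(nxt, resource - 1)
        if sub is not None and (best is None or len(sub) + 1 < len(best)):
            best = (name,) + sub
    return best

print(plan((0, 0), 8))   # e.g. ('right', 'right', 'right', 'up', 'up', 'up')
```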

    First-passage distributions for the one-dimensional Fokker-Planck equation

    We present an analytical framework to study the first-passage (FP) and first-return (FR) distributions for the broad family of models described by the one-dimensional Fokker-Planck equation in finite domains, identifying general properties of these distributions for different classes of models. When the diffusion coefficient in the Fokker-Planck equation is positive (nonzero) and the drift term is bounded, as in the case of a Brownian walker, both distributions may exhibit a power-law decay with exponent -3/2 at intermediate times. We discuss how the influence of an absorbing state changes this exponent. The absorbing state is characterized by a vanishing diffusion coefficient and/or a diverging drift term. Remarkably, the exponent of the Brownian walker class of models is still found, as long as the departure and arrival regions are far enough from the absorbing state, but the range of times over which the power law is observed narrows. Close enough to the absorbing point, though, a new exponent may appear. The particular value of the exponent depends on the behavior of the diffusion and drift terms of the Fokker-Planck equation. We focus on the case of a diffusion term vanishing linearly at the absorbing point. In this case, the FP and FR distributions are similar to those of the voter model, characterized by a power law with exponent -2. As an illustration of the general theory, we compare it with exact analytical solutions and extensive numerical simulations of a two-parameter family of voter-like models. We study the behavior of the FP and FR distributions by tuning the importance of the absorbing points through changes of the parameters. Finally, the possibility of inferring relevant information about the steady-state probability distribution of a model from the FP and FR distributions is addressed. Comment: 17 pages, 8 figures.
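
    For reference, the equation in question (written here with drift A(x) and diffusion D(x); the abstract's own notation may differ) and the two decay regimes discussed above are:

```latex
% One-dimensional Fokker-Planck equation (Ito convention):
\partial_t P(x,t) = -\partial_x\big[A(x)\,P(x,t)\big] + \partial_x^{2}\big[D(x)\,P(x,t)\big],
% intermediate-time first-passage decay for the Brownian-walker class (D > 0, A bounded):
F(t) \sim t^{-3/2},
% and near an absorbing point where D(x) vanishes linearly (voter-model class):
F(t) \sim t^{-2}.
```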

    On the Semantics of Petri Nets

    Petri Place/Transition (PT) nets are one of the most widely used models of concurrency. However, they still lack, in our view, a satisfactory semantics: on the one hand, the "token game" is too intensional, even in its more abstract interpretations in terms of nonsequential processes and monoidal categories; on the other hand, Winskel's basic unfolding construction, which provides a coreflection between nets and finitary prime algebraic domains, works only for safe nets. In this paper we extend Winskel's result to PT nets. We start from a rather general category {PTNets} of PT nets, introduce a category {DecOcc} of decorated (nondeterministic) occurrence nets, and define adjunctions between {PTNets} and {DecOcc} and between {DecOcc} and {Occ}, the category of occurrence nets. The role of {DecOcc} is to provide natural unfoldings for PT nets, i.e. acyclic safe nets where a notion of family is used to relate multiple instances of the same place. The unfolding functor from {PTNets} to {Occ} reduces to Winskel's when restricted to safe nets, while the standard coreflection between {Occ} and {Dom}, the category of finitary prime algebraic domains, when composed with the unfolding functor above, determines a chain of adjunctions between {PTNets} and {Dom}.
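
    For readers unfamiliar with the "token game" mentioned above, the sketch below encodes a tiny place/transition net and its firing rule in Python. The example net and the multiset encoding are illustrative only; they are unrelated to the categorical constructions of the paper.

```python
# Minimal place/transition net and the "token game" firing rule, illustrative only.
from collections import Counter

# Each transition consumes (pre) and produces (post) a multiset of tokens over places.
transitions = {
    "t1": {"pre": Counter({"p1": 2}), "post": Counter({"p2": 1})},
    "t2": {"pre": Counter({"p2": 1}), "post": Counter({"p1": 1, "p3": 1})},
}

def enabled(marking, t):
    """A transition is enabled when the marking covers its precondition multiset."""
    return all(marking[p] >= n for p, n in transitions[t]["pre"].items())

def fire(marking, t):
    assert enabled(marking, t), f"{t} is not enabled"
    new = Counter(marking)
    new.subtract(transitions[t]["pre"])   # consume tokens
    new.update(transitions[t]["post"])    # produce tokens
    return +new                           # drop zero entries

m0 = Counter({"p1": 2})
m1 = fire(m0, "t1")                       # consume two tokens on p1, produce one on p2
m2 = fire(m1, "t2")
print(m1, m2)                             # Counter({'p2': 1})  Counter({'p1': 1, 'p3': 1})
```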