
    Mapping therapy for sentence production impairments in nonfluent aphasia

    This study investigated a new treatment in which sentence production abilities were trained in a small group of individuals with nonfluent aphasia. It was based upon a mapping therapy approach, which holds that sentence production and comprehension impairments are due to difficulties in mapping between the meaning of a sentence (its thematic roles) and its syntactic form. We trained production of both canonical and noncanonical reversible sentences. Three patients received treatment and two served as control participants. Patients who received treatment demonstrated acquisition of all trained sentence structures. They also demonstrated across-task generalisation of treated and some untreated sentence structures on two tasks of constrained sentence production, and showed some improvements on a narrative task. One control participant improved on some of these measures and the other did not. There was no noted improvement in sentence comprehension abilities following treatment. Results are discussed with reference to the heterogeneity of the impairments underlying sentence production deficits in nonfluent patients, and the possible mechanisms by which improvement in sentence production might have been achieved in treatment.

    The Geometry of Stimulus Control

    Many studies, both in ethology and comparative psychology, have shown that animals react to modifications of familiar stimuli. This phenomenon is often referred to as generalisation. Most modifications lead to a decrease in responding, but to certain new stimuli an increase in responding is observed. This holds for both innate and learned behaviour. Here we propose a heuristic approach to stimulus control, or stimulus selection, with the aim of explaining these phenomena. The model has two key elements. First, we choose the receptor level as the fundamental stimulus space. Each stimulus is represented as the pattern of activation it induces in sense organs. Second, in this space we introduce a simple measure of 'similarity' between stimuli by calculating how activation patterns overlap. The main advantage we recognise in this approach is that the generalisation of acquired responses emerges from a few simple principles which are grounded in the recognition of how animals actually perceive stimuli. Many traditional problems that face theories of stimulus control (e.g. the Spence-Hull theory of gradient interaction or ethological theories of stimulus summation) do not arise in the present framework. These problems include the amount of generalisation along different dimensions, peak-shift phenomena (with respect to both positive and negative shifts), intensity generalisation, and generalisation after conditioning on two positive stimuli.
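    The idea of representing each stimulus as a receptor-activation pattern and measuring similarity as pattern overlap can be sketched numerically. The activation vectors and the normalised min/max overlap measure below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def overlap_similarity(a, b):
    """Similarity of two receptor-activation patterns as normalised overlap
    (an illustrative measure; the paper's exact formula may differ)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

# A familiar stimulus and two modifications of it
familiar = np.array([0.0, 0.8, 1.0, 0.8, 0.0])   # activation across 5 receptors
shifted  = np.array([0.0, 0.0, 0.8, 1.0, 0.8])   # pattern shifted along the array
intense  = np.array([0.0, 1.6, 2.0, 1.6, 0.0])   # same shape, doubled intensity

print(overlap_similarity(familiar, familiar))    # identical patterns -> 1.0
print(overlap_similarity(familiar, shifted))     # partial overlap -> < 1.0
print(overlap_similarity(familiar, intense))     # intensity change also reduces overlap
```

    On this toy measure, both a positional shift and an intensity change reduce similarity to the familiar pattern, which is the kind of graded generalisation gradient the abstract describes.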

    Block-Diagonal Solutions to Lyapunov Inequalities and Generalisations of Diagonal Dominance

    Diagonally dominant matrices have many applications in systems and control theory. Linear dynamical systems with scaled diagonally dominant drift matrices, which include stable positive systems, allow for scalable stability analysis. For example, it is known that Lyapunov inequalities for this class of systems admit diagonal solutions. In this paper, we present an extension of scaled diagonal dominance to block partitioned matrices. We show that our definition describes matrices admitting block-diagonal solutions to Lyapunov inequalities and that these solutions can be computed using linear algebraic tools. We also show how in some cases the Lyapunov inequalities can be decoupled into a set of lower dimensional linear matrix inequalities, thus leading to improved scalability. We conclude by illustrating some advantages and limitations of our results with numerical examples. Comment: 6 pages, to appear in Proceedings of the Conference on Decision and Control 201
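    The claim that such systems admit diagonal Lyapunov solutions can be checked numerically on a toy example. The matrix below and the choice D = I are assumptions for illustration, not taken from the paper:

```python
import numpy as np

# A Hurwitz matrix that is row diagonally dominant with negative diagonal:
# |-3| > |1| and |-4| > |2|.
A = np.array([[-3.0,  1.0],
              [ 2.0, -4.0]])

# For such matrices a diagonal Lyapunov solution exists; here D = I already works.
D = np.eye(2)
Q = A.T @ D + D @ A        # Lyapunov inequality: require Q negative definite

eigs = np.linalg.eigvalsh(Q)
print(eigs)                # all eigenvalues negative -> A'D + DA < 0 holds
```

    The paper's block-partitioned extension replaces the scalar diagonal entries of D with blocks; the same negative-definiteness check then applies to the block-diagonal solution.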

    Dissimilarity is used as evidence of category membership in multidimensional perceptual categorization: a test of the similarity-dissimilarity generalized context model

    In exemplar models of categorization, the similarity between an exemplar and category members constitutes evidence that the exemplar belongs to the category. We test the possibility that the dissimilarity to members of competing categories also contributes to this evidence. Data were collected from two 2-dimensional perceptual categorization experiments, one with lines varying in orientation and length and the other with coloured patches varying in saturation and brightness. Model fits of the similarity-dissimilarity generalized context model were used to compare a model where only similarity was used with a model where both similarity and dissimilarity were used. For the majority of participants the similarity-dissimilarity model provided both a significantly better fit and better generalization, suggesting that people also use dissimilarity as evidence.
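    The two evidence rules being compared can be sketched in a generalized-context-model style. The exponential similarity gradient is standard GCM, but the specific dissimilarity term, parameter names, and stimuli below are illustrative assumptions, not the fitted model from the paper:

```python
import math

def gcm_evidence(probe, exemplars, labels, target, c=1.0, use_dissimilarity=False):
    """Evidence that `probe` belongs to category `target` (a GCM-style sketch)."""
    def sim(x, y):
        return math.exp(-c * math.dist(x, y))   # exponential decay with distance

    # similarity to own-category exemplars counts as evidence
    evidence = sum(sim(probe, e) for e, lab in zip(exemplars, labels) if lab == target)
    if use_dissimilarity:
        # dissimilarity to competing-category exemplars also counts as evidence
        evidence += sum(1 - sim(probe, e)
                        for e, lab in zip(exemplars, labels) if lab != target)
    return evidence

exemplars = [(1.0, 1.0), (1.2, 0.9), (3.0, 3.1), (3.2, 2.8)]
labels    = ["A", "A", "B", "B"]
probe     = (1.1, 1.0)                          # close to category A

print(gcm_evidence(probe, exemplars, labels, "A"))
print(gcm_evidence(probe, exemplars, labels, "A", use_dissimilarity=True))
```

    Model comparison in the study then asks which of these two evidence rules better fits each participant's choices.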

    Learning programs by learning from failures

    We describe an inductive logic programming (ILP) approach called learning from failures. In this approach, an ILP system (the learner) decomposes the learning problem into three separate stages: generate, test, and constrain. In the generate stage, the learner generates a hypothesis (a logic program) that satisfies a set of hypothesis constraints (constraints on the syntactic form of hypotheses). In the test stage, the learner tests the hypothesis against training examples. A hypothesis fails when it does not entail all the positive examples or entails a negative example. If a hypothesis fails, then, in the constrain stage, the learner learns constraints from the failed hypothesis to prune the hypothesis space, i.e. to constrain subsequent hypothesis generation. For instance, if a hypothesis is too general (entails a negative example), the constraints prune generalisations of the hypothesis. If a hypothesis is too specific (does not entail all the positive examples), the constraints prune specialisations of the hypothesis. This loop repeats until either (i) the learner finds a hypothesis that entails all the positive and none of the negative examples, or (ii) there are no more hypotheses to test. We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog. Popper supports infinite problem domains, reasoning about lists and numbers, learning textually minimal programs, and learning recursive programs. Our experimental results on three domains (toy game problems, robot strategies, and list transformations) show that (i) constraints drastically improve learning performance, and (ii) Popper can outperform existing ILP systems, both in terms of predictive accuracies and learning times. Comment: Accepted for the machine learning journa
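    The generate-test-constrain loop can be sketched on a toy hypothesis space. Here integer intervals stand in for logic programs (wider interval = more general hypothesis); this is an illustrative analogue, not Popper's actual ASP/Prolog implementation:

```python
# Toy "learning from failures" loop: a hypothesis is an interval (lo, hi)
# that "entails" example x when lo <= x <= hi.
positives, negatives = [3, 4, 6], [1, 9]

space = [(lo, hi) for lo in range(10) for hi in range(lo, 10)]
pruned = set()

def entails(h, x):
    return h[0] <= x <= h[1]

solution = None
for h in space:                                            # generate
    if h in pruned:
        continue
    too_specific = not all(entails(h, p) for p in positives)   # test
    too_general  = any(entails(h, n) for n in negatives)
    if not too_specific and not too_general:
        solution = h
        break
    if too_general:    # constrain: prune all generalisations (wider intervals)
        pruned |= {g for g in space if g[0] <= h[0] and g[1] >= h[1]}
    if too_specific:   # constrain: prune all specialisations (narrower intervals)
        pruned |= {s for s in space if s[0] >= h[0] and s[1] <= h[1]}

print(solution)        # an interval covering 3, 4, 6 but neither 1 nor 9
```

    Each failure eliminates a whole cone of the hypothesis space rather than a single hypothesis, which is the source of the speed-up the experiments report.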

    Non-adjacent dependency learning in infancy, and its link to language development

    To acquire language, infants must learn how to identify words and linguistic structure in speech. Statistical learning has been suggested to assist both of these tasks. However, infants' capacity to use statistics to discover words and structure together remains unclear. Further, it is not yet known how infants' statistical learning ability relates to their language development. We trained 17-month-old infants on an artificial language comprising non-adjacent dependencies, and examined their looking times on tasks assessing sensitivity to words and structure using an eye-tracked head-turn-preference paradigm. We measured infants' vocabulary size using a Communicative Development Inventory (CDI) concurrently and at 19, 21, 24, 25, 27, and 30 months to relate performance to language development. Infants could segment the words from speech, demonstrated by a significant difference in looking times to words versus part-words. Infants' segmentation performance was significantly related to their vocabulary size (receptive and expressive) both concurrently and over time (receptive until 24 months, expressive until 30 months), but was not related to the rate of vocabulary growth. The data also suggest infants may have developed sensitivity to generalised structure, indicating similar statistical learning mechanisms may contribute to the discovery of words and structure in speech, but this was not related to vocabulary size.

    Constrained set-up of the tGAP structure for progressive vector data transfer

    A promising approach to submit a vector map from a server to a mobile client is to send a coarse representation first, which then is incrementally refined. We consider the problem of defining a sequence of such increments for areas of different land-cover classes in a planar partition. In order to submit well-generalised datasets, we propose a method of two stages: first, we create a generalised representation from a detailed dataset, using an optimisation approach that satisfies certain cartographic constraints. Second, we define a sequence of basic merge and simplification operations that transforms the most detailed dataset gradually into the generalised dataset. The obtained sequence of gradual transformations is stored without geometrical redundancy in a structure that builds upon the previously developed tGAP (topological Generalised Area Partitioning) structure. This structure and the algorithm for intermediate levels of detail (LoD) have been implemented in an object-relational database and tested for land-cover data from the official German topographic dataset ATKIS at scale 1:50 000 to the target scale 1:250 000. Results of these tests allow us to conclude that the data at the lowest LoD and at intermediate LoDs are well generalised. By applying specialised heuristics, the optimisation method copes with large datasets; the tGAP structure allows users to efficiently query and retrieve a dataset at a specified LoD. Data are sent progressively from the server to the client: first a coarse representation is sent, which is refined until the requested LoD is reached.
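    The core idea of storing a merge sequence and replaying it in reverse for progressive refinement can be sketched on a toy partition. The land-cover areas, names, and area-only representation below are illustrative assumptions; the tGAP structure itself stores topology and geometry, not just area values:

```python
# Toy progressive transfer over a recorded merge sequence (tGAP-style sketch).
detailed = {"forest_1": 2.0, "forest_2": 1.5, "meadow_1": 3.0, "lake_1": 0.5}

# Generalisation log: each step merges a small area into a neighbour.
merge_log = [
    ("lake_1", "meadow_1"),      # smallest area merged first
    ("forest_2", "forest_1"),
]

def generalise(partition, log):
    """Apply the merge sequence to obtain the coarse representation."""
    p = dict(partition)
    for child, parent in log:
        p[parent] += p.pop(child)   # merge child area into its parent
    return p

def refine(coarse, detailed, log):
    """Replay the log in reverse, splitting merged areas back out; each
    yielded partition is one intermediate level of detail."""
    p = dict(coarse)
    for child, parent in reversed(log):
        p[child] = detailed[child]
        p[parent] -= detailed[child]
        yield dict(p)

coarse = generalise(detailed, merge_log)
print(coarse)                        # coarse map sent to the client first
for level in refine(coarse, detailed, merge_log):
    print(level)                     # progressively finer partitions
```

    Because each refinement step only transmits the split-out child, the client never receives redundant geometry, mirroring the redundancy-free storage described above.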