
    SPIDA: Abstracting and generalizing layout design cases

    Abstraction and generalization of layout design cases generate new knowledge that is more widely applicable than specific design cases. Abstracting and generalizing design cases into hierarchical levels of abstraction gives the designer the flexibility to apply any level of abstracted and generalized knowledge to a new layout design problem. Existing case-based layout learning (CBLL) systems abstract and generalize cases into single levels of abstraction, but not into a hierarchy. In this paper, we propose a new approach, termed customized viewpoint - spatial (CV-S), which supports the generalization and abstraction of spatial layouts into hierarchies, along with a supporting system, SPIDA (SPatial Intelligent Design Assistant).
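    The hierarchy of abstraction levels described above can be sketched in a few lines. This is a toy illustration only: the case representation (named components with coordinates), the level definitions, and all names below are hypothetical, not SPIDA's actual data structures.

```python
# Toy sketch of hierarchical case abstraction in the spirit of CV-S.
# Everything here is an illustrative assumption, not SPIDA's design.

def spatial_relation(a, b):
    """First abstraction level: reduce coordinates to a qualitative relation."""
    ax, ay = a
    bx, by = b
    horiz = "left-of" if ax < bx else "right-of" if ax > bx else "aligned"
    vert = "above" if ay > by else "below" if ay < by else "level"
    return (horiz, vert)

def abstract_case(case):
    """Abstract one concrete layout case into a hierarchy of levels."""
    level0 = case              # concrete: components with (x, y) coordinates
    level1 = {                 # qualitative: pairwise spatial relations
        (p, q): spatial_relation(case[p], case[q])
        for p in case for q in case if p < q
    }
    level2 = sorted(case)      # most general: the component inventory alone
    return [level0, level1, level2]

# A designer can reuse whichever level fits a new problem: exact positions,
# spatial relations only, or just the parts list.
office = {"desk": (0, 0), "window": (0, 5), "door": (4, 0)}
hierarchy = abstract_case(office)
```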

    Balancing generalization and lexical conservatism: an artificial language study with child learners

    Successful language acquisition involves generalization, but learners must balance this against the acquisition of lexical constraints. Such learning has been considered problematic for theories of acquisition: if learners generalize abstract patterns to new words, how do they learn lexically-based exceptions? One approach claims that learners use distributional statistics to make inferences about when generalization is appropriate, a hypothesis which has recently received support from artificial language learning experiments with adult learners (Wonnacott, Newport, & Tanenhaus, 2008). Since adult and child language learning may differ (Hudson Kam & Newport, 2005), it is essential to extend these results to child learners. In the current work, four groups of six-year-old children were each exposed to one of four semi-artificial languages. The results demonstrate that children are sensitive to linguistic distributions at and above the level of particular lexical items, and that these statistics influence the balance between generalization and lexical conservatism. The data are in line with an approach which models generalization as rational inference, and in particular with the predictions of the domain-general hierarchical Bayesian model developed by Kemp, Perfors and Tenenbaum (2006). This suggests that such models have relevance for theories of language acquisition.
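    The rational-inference account the abstract appeals to can be illustrated with a minimal Beta-Binomial sketch: a prior over a word's construction bias (the kind of word-level prior a hierarchical Bayesian learner would acquire from the language as a whole) determines how far a few observations of a new word are generalized. The prior values below are illustrative assumptions, not parameters from the cited model.

```python
# Minimal Beta-Binomial sketch of generalization as rational inference.
# A word's bias toward construction A has a Beta(a, b) prior; the posterior
# predictive after observing the word k times in A out of n uses says how
# strongly to generalize. Prior values are illustrative assumptions.

def p_next_is_A(k, n, a, b):
    """Posterior predictive P(next use is construction A | k of n uses in A)."""
    return (a + k) / (a + b + n)

# Diffuse prior (words may alternate): three one-sided uses give moderate
# confidence, and the unattested construction keeps real probability mass.
broad = p_next_is_A(k=3, n=3, a=1.0, b=1.0)    # 4 / 5 = 0.8
leaky = p_next_is_A(k=0, n=3, a=1.0, b=1.0)    # 1 / 5 = 0.2

# Sharply bimodal prior (a lexically conservative language): the same three
# uses make the attested construction near-certain, and the unattested one
# is effectively ruled out.
sharp = p_next_is_A(k=3, n=3, a=0.1, b=0.1)    # 3.1 / 3.2, about 0.97
blocked = p_next_is_A(k=0, n=3, a=0.1, b=0.1)  # 0.1 / 3.2, about 0.03
```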

    Revisiting peak shift on an artificial dimension: Effects of stimulus variability on generalization

    This is the author accepted manuscript; the final version is available from SAGE Publications via the DOI in this record. One of Mackintosh's many contributions to the comparative psychology of associative learning was in developing the distinction between the mental processes responsible for learning about features and learning about relations. His research on discrimination learning and generalization served to highlight differences and commonalities in learning mechanisms across species and paradigms. In one such example, Wills and Mackintosh (1998) trained both pigeons and humans to discriminate between two categories of complex patterns comprising overlapping sets of abstract visual features. They demonstrated that pigeons and humans produced similar "peak-shifted" generalization gradients when the proportion of shared features was systematically varied across a set of transfer stimuli, providing support for an elemental, feature-based analysis of discrimination and generalization. Here we report a series of experiments inspired by this work, investigating the processes involved in post-discrimination generalization in human category learning. We investigate how post-discrimination generalization is affected by variability in the spatial arrangement and probability of occurrence of the visual features, and develop an associative learning model that builds on Mackintosh's theoretical approach to elemental associative learning.
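    The elemental, feature-based analysis mentioned above can be sketched with a simple delta-rule simulation: stimuli are bags of features, one weight is learned per feature, and responding to a transfer stimulus sums the weights of its features. Feature counts, probabilities, and learning settings below are illustrative assumptions, not the parameters of Wills and Mackintosh (1998) or of the authors' model.

```python
# Hedged sketch of an elemental account of post-discrimination peak shift:
# probabilistic training exemplars, one learned weight per feature, and a
# generalization gradient over transfer stimuli. Settings are illustrative.
import random

random.seed(0)

N = 8
A = [f"plusfeat_{i}" for i in range(N)]   # features typical of category S+
B = [f"minusfeat_{i}" for i in range(N)]  # features typical of category S-
w = {f: 0.0 for f in A + B}

def sample_exemplar(p_a):
    """Each A feature appears with probability p_a, each B with 1 - p_a."""
    return ([f for f in A if random.random() < p_a]
            + [f for f in B if random.random() < 1 - p_a])

def respond(stim):
    return sum(w[f] for f in stim)

# Delta-rule discrimination training: S+ exemplars (mostly A features) are
# reinforced toward 1, S- exemplars (mostly B features) toward 0.
for _ in range(2000):
    if random.random() < 0.5:
        stim, target = sample_exemplar(0.75), 1.0
    else:
        stim, target = sample_exemplar(0.25), 0.0
    err = target - respond(stim)
    for f in stim:
        w[f] += 0.05 * err

# Gradient over transfer stimuli with k A-features and N - k B-features.
# Responding peaks at k = N, a stimulus more extreme than the typical
# trained S+ exemplar (about 0.75 * N A-features): a peak shift away from S-.
mean_a = sum(w[f] for f in A) / N
mean_b = sum(w[f] for f in B) / N
gradient = [k * mean_a + (N - k) * mean_b for k in range(N + 1)]
```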

    Logic-Based Analogical Reasoning and Learning

    Analogy-making is at the core of human intelligence and creativity, with applications to such diverse tasks as commonsense reasoning, learning, language acquisition, and storytelling. This paper contributes to the foundations of artificial general intelligence by developing an abstract algebraic framework for logic-based analogical reasoning and learning in the setting of logic programming. The main idea is to define analogy in terms of modularity and to derive abstract forms of concrete programs from a 'known' source domain, which can then be instantiated in an 'unknown' target domain to obtain analogous programs. To this end, we introduce algebraic operations for syntactic program composition and concatenation and illustrate, by giving numerous examples, that programs have nice decompositions. Moreover, we show how composition gives rise to a qualitative notion of syntactic program similarity. We then argue that reasoning and learning by analogy is the task of solving analogical proportions between logic programs. Interestingly, our work suggests a close relationship between modularity, generalization, and analogy, which we believe should be explored further in the future. In a broader sense, this paper is a first step towards an algebraic and mainly syntactic theory of logic-based analogical reasoning and learning in knowledge representation and reasoning systems, with potential applications to fundamental AI problems like commonsense reasoning and computational learning and creativity.
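    The core move, deriving an abstract form from a known source program and instantiating it in an unknown target domain, can be caricatured in a few lines. Programs are modeled as sets of rule strings and abstraction as predicate renaming; this is an illustrative stand-in for the paper's algebraic composition operators, not its actual framework.

```python
# Toy "abstract then instantiate" sketch for logic programs, using plain
# predicate renaming as a hypothetical stand-in for the paper's algebra.

def abstract(program, predicates):
    """Replace each named predicate with a numbered placeholder P0, P1, ..."""
    mapping = {p: f"P{i}" for i, p in enumerate(predicates)}
    return frozenset(_rename(rule, mapping) for rule in program), mapping

def instantiate(template, assignment):
    """Fill placeholders with target-domain predicate names."""
    return frozenset(_rename(rule, assignment) for rule in template)

def _rename(rule, mapping):
    for old, new in mapping.items():
        rule = rule.replace(old, new)
    return rule

# Source domain: transitive ancestry over a parent relation.
family = frozenset({
    "ancestor(X, Y) :- parent(X, Y).",
    "ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).",
})

template, mapping = abstract(family, ["ancestor", "parent"])
# Analogous target domain: reachability over a graph's edge relation, i.e.
# the analogical proportion  ancestor : parent :: reachable : edge.
graph = instantiate(template, {"P0": "reachable", "P1": "edge"})
```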

    Constraining generalisation in artificial language learning: children are rational too

    Successful language acquisition involves generalization, but learners must balance this against the acquisition of lexical constraints. Examples occur throughout language. For instance, English native speakers know that certain noun-adjective combinations are impermissible (e.g. strong winds, high winds, strong breezes, *high breezes). Another example is the restrictions imposed by verb subcategorization (e.g. I gave/sent/threw the ball to him; I gave/sent/threw him the ball; I donated/carried/pushed the ball to him; *I donated/carried/pushed him the ball). Such lexical exceptions have been considered problematic for acquisition: if learners generalize abstract patterns to new words, how do they learn that certain specific combinations are restricted? (Baker, 1979). Certain researchers have proposed domain-specific procedures (e.g. Pinker, 1989, resolves verb subcategorization in terms of subtle semantic distinctions). An alternative approach is that learners are sensitive to distributional statistics and use this information to make inferences about when generalization is appropriate (Braine, 1971). A series of artificial language learning experiments has demonstrated that adult learners can utilize statistical information in a rational manner when determining constraints on verb argument-structure generalization (Wonnacott, Newport & Tanenhaus, 2008). The current work extends these findings to children in a different linguistic domain (learning relationships between nouns and particles). We also demonstrate computationally that these results are consistent with the predictions of a domain-general hierarchical Bayesian model (cf. Kemp, Perfors & Tenenbaum, 2007).
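    The language-level inference step, learning from distributional statistics whether a language licenses generalization at all, can be sketched with a grid approximation in the spirit of the hierarchical Bayesian work cited above: the learner scores a language-wide concentration parameter alpha (small alpha means bimodal, lexically conservative word biases; large alpha means freely alternating words) against per-word usage counts. The priors, grid, and counts below are illustrative assumptions, not the cited model's implementation.

```python
# Grid-approximation sketch of learning a language-level "overhypothesis":
# each word's bias toward construction A gets a symmetric Beta(alpha, alpha)
# prior, and the learner infers alpha from per-word counts (k uses in A out
# of n total). All numbers here are illustrative assumptions.
import math

def beta_binom_loglik(k, n, a, b):
    """log P(k of n uses in construction A | Beta(a, b) prior on the bias)."""
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + math.lgamma(a + k) + math.lgamma(b + n - k)
            - math.lgamma(a + b + n))

def posterior_over_alpha(data, alphas):
    """Posterior over alpha on a grid, with a flat prior over grid points."""
    logpost = [sum(beta_binom_loglik(k, n, a, a) for k, n in data)
               for a in alphas]
    m = max(logpost)
    weights = [math.exp(lp - m) for lp in logpost]
    z = sum(weights)
    return [x / z for x in weights]

# Lexically conservative language: every word sticks to one construction.
conservative = [(5, 5), (0, 5), (5, 5), (0, 5)]
# Variable language: every word alternates between constructions.
variable = [(3, 5), (2, 5), (3, 5), (2, 5)]

alphas = [0.1, 1.0, 10.0]
p_cons = posterior_over_alpha(conservative, alphas)  # favors small alpha
p_var = posterior_over_alpha(variable, alphas)       # favors large alpha
```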