
    From simple to complex categories: how structure and label information guides the acquisition of category knowledge

    Categorization is a fundamental ability of human cognition, translating complex streams of information from all of the different senses into simpler, discrete categories. How do people acquire all of this category knowledge, particularly the kinds of rich, structured categories we interact with every day in the real world? In this thesis, I explore how information from category structure and category labels influences how people learn categories, particularly for the kinds of computational problems that are relevant to real-world category learning. The three learning problems this thesis covers are: semi-supervised learning, structure learning, and category learning with many features. Each of these three learning problems presents a different kind of learning challenge, and through a combination of behavioural experiments and computational modeling, this thesis illustrates how the interplay between structure and label information can explain how humans acquire richer kinds of category knowledge.
    Thesis (Ph.D.) (Research by Publication) -- University of Adelaide, School of Psychology, 201

    Learning word-referent mappings and concepts from raw inputs

    How do children learn correspondences between language and the world from noisy, ambiguous, naturalistic input? One hypothesis is via cross-situational learning: tracking words and their possible referents across multiple situations allows learners to disambiguate correct word-referent mappings (Yu & Smith, 2007). However, previous models of cross-situational word learning operate on highly simplified representations, side-stepping two important aspects of the actual learning problem. First, how can word-referent mappings be learned from raw inputs such as images? Second, how can these learned mappings generalize to novel instances of a known word? In this paper, we present a neural network model trained from scratch via self-supervision that takes in raw images and words as inputs, and show that it can learn word-referent mappings from fully ambiguous scenes and utterances through cross-situational learning. In addition, the model generalizes to novel word instances, locates referents of words in a scene, and shows a preference for mutual exclusivity.
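    The count-based intuition behind cross-situational learning (the simplified setting that this paper's neural model goes beyond) can be sketched as follows. This is an illustrative sketch only, not the paper's model; the words, referents, and scenes are hypothetical examples:

    ```python
    from collections import defaultdict

    # Illustrative count-based cross-situational learner (in the spirit of
    # Yu & Smith, 2007): each "situation" pairs the words heard with the
    # candidate referents in view, and co-occurrence counts accumulate.
    def learn_mappings(situations):
        counts = defaultdict(lambda: defaultdict(int))
        for words, referents in situations:
            for w in words:
                for r in referents:
                    counts[w][r] += 1
        # For each word, pick the referent it co-occurred with most often.
        return {w: max(refs, key=refs.get) for w, refs in counts.items()}

    # Each scene alone is fully ambiguous; aggregating across scenes
    # disambiguates the correct word-referent mappings.
    situations = [
        (["ball", "dog"], ["BALL", "DOG"]),
        (["ball", "cat"], ["BALL", "CAT"]),
        (["dog", "cat"], ["DOG", "CAT"]),
    ]
    print(learn_mappings(situations))
    # -> {'ball': 'BALL', 'dog': 'DOG', 'cat': 'CAT'}
    ```

    Operating on symbolic word and referent tokens like this is exactly the simplification the paper targets by learning from raw images instead.
    
    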

    Fast and flexible: Human program induction in abstract reasoning tasks

    The Abstraction and Reasoning Corpus (ARC) is a challenging program induction dataset that was recently proposed by Chollet (2019). Here, we report the first set of results collected from a behavioral study of humans solving a subset of tasks from ARC (40 out of 1000). Although this subset of tasks contains considerable variation, our results showed that humans were able to infer the underlying program and generate the correct test output for a novel test input example, with an average of 80% of tasks solved per participant, and with 65% of tasks being solved by more than 80% of participants. Additionally, we find interesting patterns of behavioral consistency and variability in the action sequences during the generation process, in the natural language descriptions used to describe the transformations for each task, and in the errors people made. Our findings suggest that people can quickly and reliably determine the relevant features and properties of a task to compose a correct solution. Future modeling work could incorporate these findings, potentially by connecting the natural language descriptions we collected here to the underlying semantics of ARC.
    Comment: 7 pages, 7 figures, 1 table

    Learning time-varying categories

    Many kinds of objects and events in our world have a strongly time-dependent quality. However, most theories about concepts and categories either are insensitive to variation over time or treat it as a nuisance factor that produces irrational order effects during learning. In this article, we present two category learning experiments in which we explored people's ability to learn categories whose structure is strongly time-dependent. We suggest that order effects in categorization may in part reflect a sensitivity to changing environments, and that understanding dynamically changing concepts is an important part of developing a full account of human categorization.
    Daniel J. Navarro, Amy Perfors, Wai Keen Von

    Evolutionary Computation, Optimization and Learning Algorithms for Data Science

    A large number of engineering, science, and computational problems have yet to be solved in a computationally efficient way. One of the emerging challenges is how evolving technologies grow towards autonomy and intelligent decision making. This leads to the collection of large amounts of data from various sensing and measurement technologies, e.g., cameras, smart phones, health sensors, smart electricity meters, and environment sensors. Hence, it is imperative to develop efficient algorithms for the generation, analysis, classification, and illustration of data. Meanwhile, data is structured purposefully through different representations, such as large-scale networks and graphs. We focus on data science as a crucial area, specifically on the curse of dimensionality (CoD), which arises from the large amount of generated/sensed/collected data. This motivates researchers to think about optimization and to apply nature-inspired algorithms, such as evolutionary algorithms (EAs), to solve optimization problems. Although these algorithms appear non-deterministic, they are robust enough to reach a near-optimal solution. Researchers typically adopt evolutionary algorithms when a problem is prone to getting trapped in a local optimum rather than reaching the global optimum. In this chapter, we first develop a clear and formal definition of the CoD problem, next we focus on feature extraction techniques and categories, and then we provide a general overview of meta-heuristic algorithms, their terminology, and the desirable properties of evolutionary algorithms.
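    As a minimal illustration of the kind of evolutionary algorithm this chapter surveys, the sketch below runs a simple (mu + lambda) evolution strategy on a one-dimensional Rastrigin function, a standard multimodal benchmark with many local optima. All choices here (benchmark, population size, mutation scale, generation count) are illustrative assumptions, not values from the chapter:

    ```python
    import math
    import random

    def rastrigin_1d(x):
        # Standard multimodal benchmark; many local minima, global minimum at x = 0.
        return 10 + x * x - 10 * math.cos(2 * math.pi * x)

    def evolve(fitness, generations=200, mu=10, lam=40, sigma=0.5, seed=0):
        rng = random.Random(seed)
        pop = [rng.uniform(-5, 5) for _ in range(mu)]
        for _ in range(generations):
            # Gaussian mutation lets offspring jump between basins, which is
            # how the search escapes local optima instead of stalling in one.
            offspring = [rng.choice(pop) + rng.gauss(0, sigma) for _ in range(lam)]
            # Elitist (mu + lambda) selection: keep the mu best of parents + offspring.
            pop = sorted(pop + offspring, key=fitness)[:mu]
        return pop[0]

    best = evolve(rastrigin_1d)
    print(best, rastrigin_1d(best))
    ```

    A greedy hill-climber started in the basin around x = 1 would settle for fitness 1; the persistent mutation noise here eventually lands an offspring in the central basin, and elitist selection keeps it.
    
    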

    Cross-situational word learning with multimodal neural networks
