
    A symbolic-connectionist model of relation learning and visual reasoning

    Humans regularly reason from visual information, in tasks ranging from simple object search in a scene to abstract mathematical thinking. In recent decades, the field of machine learning has focused extensively on visual tasks with the aim of modelling human visual reasoning. However, machine learning approaches still do not match human performance on simple visual tasks such as the Synthetic Visual Reasoning Test (SVRT; Fleuret et al. 2011). While this set of tasks is trivial for humans to solve, current state-of-the-art machine learning algorithms struggle with the SVRT. We argue that the difference between human and machine performance on the SVRT stems from the ways humans and machines represent the world, and visual information specifically. We argue that humans represent situations in terms of relations between constituent objects, and that our representation of these relations is structured and symbolic. As a consequence, humans can perform operations that are not available to machine systems relying on unstructured representations. We hold that operations over structured relational representations are what underlie phenomena such as abstract visual reasoning and cross-domain generalisation. The current work builds on the DORA (Discovery Of Relations by Analogy; Doumas et al., 2008; 2022) model of relation learning. DORA learns structured representations of magnitude relations from simple visual inputs. Here we expand the model to learn more complex categorical relations (e.g., contains or supports) as compressions of simpler relations (e.g., above, in-contact), and develop a new method for identifying, from simple scenes, the relevant relations over which to perform reasoning. We embed the resulting model in a pipeline for human visual reasoning consisting of successful psychological models of object recognition and analogy making. The result is an end-to-end system which is constrained as much as possible by what is known about the processes and mechanisms of the cognitive system, from early vision to the learning of complex relations and reasoning. The model is tested within the context of the SVRT. The limitations of the model and directions for future research are discussed.
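    As a purely schematic illustration of the compression idea described in the abstract (not DORA's connectionist learning mechanism), a categorical relation such as supports can be expressed as a conjunction of simpler spatial relations such as above and in-contact. All names and the bounding-box representation below are hypothetical stand-ins chosen for the sketch.

    ```python
    # Illustrative sketch only: a complex categorical relation ("supports")
    # expressed as a compression (conjunction) of simpler spatial relations
    # ("above", "in-contact"). Symbolic toy, not the model's actual machinery.
    from dataclasses import dataclass

    @dataclass
    class Box:
        """Axis-aligned bounding box; y is the bottom edge, larger y is higher."""
        x: float
        y: float
        width: float
        height: float

        @property
        def top(self) -> float:
            return self.y + self.height

    # --- simpler relations --------------------------------------------------
    def above(a: Box, b: Box) -> bool:
        """a's bottom edge is at or above b's top edge."""
        return a.y >= b.top - 1e-6

    def in_contact(a: Box, b: Box) -> bool:
        """a rests on b: vertical touch plus horizontal overlap."""
        vertical_touch = abs(a.y - b.top) < 1e-6
        horizontal_overlap = a.x < b.x + b.width and b.x < a.x + a.width
        return vertical_touch and horizontal_overlap

    # --- complex relation as a compression of the simpler ones ---------------
    def supports(b: Box, a: Box) -> bool:
        """b supports a iff a is above b and the two are in contact."""
        return above(a, b) and in_contact(a, b)

    if __name__ == "__main__":
        table = Box(x=0.0, y=0.0, width=4.0, height=1.0)
        cup = Box(x=1.0, y=1.0, width=0.5, height=0.5)
        print(supports(table, cup))  # True: the table supports the cup
    ```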

    Making probabilistic relational categories learnable

    Kittur, Hummel and Holyoak (2004) showed that people have great difficulty learning relation-based categories with a probabilistic (i.e., family resemblance) structure. In Experiment 1, we investigated interventions hypothesized to facilitate learning of family-resemblance relational categories. Changing the description of the task from learning about categories to choosing the “winning” object in each stimulus had the greatest impact on subjects’ ability to learn probabilistic relation-based categories. Experiment 2 tested two hypotheses about how the “who’s winning” task works. The results are consistent with the hypothesis that the task invokes a “winning” schema that encourages learners to discover a higher-order relation that remains invariant over members of a category. Experiment 3 reinforced and further clarified the nature of this effect. Together, our findings suggest that people learn relational concepts by a process of intersection discovery akin to schema induction, and that any task that encourages people to discover a higher-order relation that remains invariant over members of a category will facilitate the learning of putatively probabilistic relational concepts.