Fast and flexible: Human program induction in abstract reasoning tasks
The Abstraction and Reasoning Corpus (ARC) is a challenging program induction
dataset that was recently proposed by Chollet (2019). Here, we report the first
set of results collected from a behavioral study of humans solving a subset of
tasks from ARC (40 out of 1000). Although this subset of tasks contains
considerable variation, our results show that humans can infer the underlying program and generate the correct test output for a novel test input, with an average of 80% of tasks solved per participant and 65% of tasks solved by more than 80% of participants. We also find interesting patterns of consistency and variability in the action sequences participants produced while generating their solutions, in the natural language descriptions they gave of each task's transformation, and in the errors they made. Our findings suggest that people can quickly and reliably identify the relevant features and properties of a task and compose a correct solution. Future modeling work could incorporate these findings, potentially by connecting the natural language descriptions collected here to the underlying semantics of ARC.
Comment: 7 pages, 7 figures, 1 table
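To make the task format concrete, here is a minimal sketch of how an ARC-style task could be represented and checked: a few input/output grid pairs, a candidate program, and a held-out test input. The grids and the "flip vertically" rule below are illustrative assumptions, not tasks or programs from the study.

```python
# Minimal sketch of the ARC task format: each task gives a few input/output
# grid pairs (2D arrays of color indices), and the solver must infer the
# transformation and apply it to a new test input. The grids and the rule
# here are hypothetical examples, not drawn from the paper.

def flip_vertically(grid):
    """Candidate program: reverse the order of the rows."""
    return grid[::-1]

# Demonstration pairs (hypothetical 3x3 grids of color indices 0-9).
train_pairs = [
    ([[1, 0, 0],
      [0, 2, 0],
      [0, 0, 3]],
     [[0, 0, 3],
      [0, 2, 0],
      [1, 0, 0]]),
]

# A candidate program is accepted only if it reproduces every demonstration output.
assert all(flip_vertically(inp) == out for inp, out in train_pairs)

# It is then applied to the held-out test input to produce the answer.
test_input = [[4, 0], [0, 5]]
print(flip_vertically(test_input))  # [[0, 5], [4, 0]]
```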
Learning list concepts through program induction
Humans master complex systems of interrelated concepts like mathematics and natural language. Previous work suggests learning these systems relies on iteratively and directly revising a language-like conceptual representation. We introduce and assess a novel concept learning paradigm called Martha's Magical Machines that captures complex relationships between concepts. We model human concept learning in this paradigm as a search in the space of term rewriting systems, previously developed as an abstract model of computation. Our model accurately predicts that participants learn some transformations more easily than others and that they learn harder concepts more easily using a bootstrapping curriculum focused on their compositional parts. Our results suggest that term rewriting systems may be a useful model of human conceptual representations.
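To illustrate the kind of representation the abstract alludes to, here is a small sketch of a term rewriting system applied to list concepts: terms are nested constructor expressions, and rules rewrite matching subterms until no rule applies. The specific rules (head, tail) are illustrative assumptions, not the model or the concepts from the paper.

```python
# Toy term rewriting system over list terms built from 'cons'/'nil'.
# The head/tail rules below are hypothetical examples of rewrite rules,
# not the rule set used in the paper.

NIL = 'nil'

def cons(x, xs):
    return ('cons', x, xs)

def rewrite_once(term):
    """Apply the first matching rule, searching outermost-first; return (term, changed)."""
    if isinstance(term, tuple):
        op = term[0]
        # Rule: head(cons(x, xs)) -> x
        if op == 'head' and isinstance(term[1], tuple) and term[1][0] == 'cons':
            return term[1][1], True
        # Rule: tail(cons(x, xs)) -> xs
        if op == 'tail' and isinstance(term[1], tuple) and term[1][0] == 'cons':
            return term[1][2], True
        # Otherwise, try rewriting arguments left to right.
        for i, arg in enumerate(term[1:], start=1):
            new_arg, changed = rewrite_once(arg)
            if changed:
                return term[:i] + (new_arg,) + term[i + 1:], True
    return term, False

def normalize(term):
    """Rewrite repeatedly until no rule applies (the term's normal form)."""
    changed = True
    while changed:
        term, changed = rewrite_once(term)
    return term

# head(tail(cons(1, cons(2, nil)))) normalizes to 2.
term = ('head', ('tail', cons(1, cons(2, NIL))))
print(normalize(term))  # 2
```

A learner in this framing would search over candidate rule sets like these, preferring systems whose normal forms reproduce the observed input/output behavior.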