
    Zero-shot visual reasoning through probabilistic analogical mapping

    Human reasoning is grounded in an ability to identify highly abstract commonalities governing superficially dissimilar visual inputs. Recent efforts to develop algorithms with this capacity have largely focused on approaches that require extensive direct training on visual reasoning tasks, and yield limited generalization to problems with novel content. In contrast, a long tradition of research in cognitive science has focused on elucidating the computational principles underlying human analogical reasoning; however, this work has generally relied on manually constructed representations. Here we present visiPAM (visual Probabilistic Analogical Mapping), a model of visual reasoning that synthesizes these two approaches. VisiPAM employs learned representations derived directly from naturalistic visual inputs, coupled with a similarity-based mapping operation derived from cognitive theories of human reasoning. We show that without any direct training, visiPAM outperforms a state-of-the-art deep learning model on an analogical mapping task. In addition, visiPAM closely matches the pattern of human performance on a novel task involving mapping of 3D objects across disparate categories.
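    The similarity-based mapping step described above can be illustrated with a small sketch. This is not visiPAM's actual implementation (which operates over learned graph representations with a probabilistic mapping procedure); it is a minimal stand-in that assumes each object has already been reduced to part embeddings and maps source parts to target parts by cosine similarity.

# Hypothetical sketch of a similarity-based mapping step (not the authors' code):
# given embeddings for the parts of a source and a target object, map each
# source part to the most similar unused target part by cosine similarity.
import numpy as np

def cosine_similarity_matrix(src: np.ndarray, tgt: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarities between rows of src and rows of tgt."""
    src_norm = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt_norm = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    return src_norm @ tgt_norm.T

def map_parts(src_embeddings: np.ndarray, tgt_embeddings: np.ndarray) -> list[int]:
    """Greedy one-to-one mapping: each source part takes the best unused target part."""
    sims = cosine_similarity_matrix(src_embeddings, tgt_embeddings)
    mapping, used = [], set()
    for i in range(sims.shape[0]):
        order = np.argsort(-sims[i])                          # targets, best first
        j = next(int(k) for k in order if int(k) not in used)  # first unused target
        used.add(j)
        mapping.append(j)
    return mapping

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.normal(size=(4, 16))   # e.g., 4 parts of a source object
    tgt = rng.normal(size=(5, 16))   # e.g., 5 parts of a target object
    print(map_parts(src, tgt))       # matched target-part index for each source part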

    From Complex to Simple: Unraveling the Cognitive Tree for Reasoning with Small Language Models

    Reasoning is a distinctive human capacity, enabling us to address complex problems by breaking them down into a series of manageable cognitive steps. Yet, complex logical reasoning is still cumbersome for language models. Based on the dual process theory in cognitive science, we are the first to unravel the cognitive reasoning abilities of language models. Our framework employs an iterative methodology to construct a Cognitive Tree (CogTree). The root node of this tree represents the initial query, while the leaf nodes consist of straightforward questions that can be answered directly. This construction involves two main components: the implicit extraction module (referred to as the intuitive system) and the explicit reasoning module (referred to as the reflective system). The intuitive system rapidly generates multiple responses by utilizing in-context examples, while the reflective system scores these responses using comparative learning. The scores guide the intuitive system in its subsequent generation step. Our experimental results on two popular and challenging reasoning tasks indicate that it is possible to achieve a performance level comparable to that of GPT-3.5 (175B parameters) using a significantly smaller language model with fewer than 5% as many parameters (<=7B). Comment: EMNLP 202
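    The iterative construction can be sketched as follows. This is a hypothetical outline, not the authors' code: intuitive_generate, reflective_score, and is_simple are assumed placeholder callables standing in for the intuitive system, the reflective scorer, and the leaf test described in the abstract.

# Hypothetical sketch of a CogTree-style loop: an "intuitive" generator proposes
# candidate decompositions of a question, a "reflective" scorer ranks them, and
# the best decomposition is expanded until sub-questions are simple enough to
# answer directly.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    question: str
    children: List["Node"] = field(default_factory=list)

def build_cogtree(
    query: str,
    intuitive_generate: Callable[[str], List[List[str]]],  # candidate decompositions
    reflective_score: Callable[[str, List[str]], float],   # score one decomposition
    is_simple: Callable[[str], bool],                      # leaf test
    max_depth: int = 3,
) -> Node:
    root = Node(query)
    frontier = [(root, 0)]
    while frontier:
        node, depth = frontier.pop()
        if depth >= max_depth or is_simple(node.question):
            continue  # leaf: answer directly
        candidates = intuitive_generate(node.question)
        if not candidates:
            continue
        best = max(candidates, key=lambda c: reflective_score(node.question, c))
        node.children = [Node(q) for q in best]
        frontier.extend((child, depth + 1) for child in node.children)
    return root

if __name__ == "__main__":
    tree = build_cogtree(
        "Is 17 * 3 greater than 50?",
        intuitive_generate=lambda q: [["What is 17 * 3?", "Is 51 greater than 50?"]],
        reflective_score=lambda q, c: float(len(c)),              # toy scorer
        is_simple=lambda q: len(q.split()) <= 6,                  # toy leaf test
    )
    print(tree.question, "->", [c.question for c in tree.children])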

    ANALYSIS OF THE STEAM LEARNING METHOD IN EARLY CHILDHOOD COGNITIVE DEVELOPMENT

    Early age is an important period for children's development, especially their cognitive development. Cognitive processes are closely related to the level of intelligence, which refers to thinking, knowledge, and reasoning, and thus concerns the development of children's thinking. One way to improve aspects of cognitive development at an early age is the STEAM learning model. STEAM is a learning approach based on relating the knowledge and skills of science, technology, engineering, art, and mathematics (STEAM) to solve problems. Therefore, the purpose of this study is to analyze the STEAM learning model in early childhood cognitive development. This study uses a qualitative method, with data collected through the library method by gathering several journals and articles related to STEAM learning.

    In-Context Analogical Reasoning with Pre-Trained Language Models

    Analogical reasoning is a fundamental capacity of human cognition that allows us to reason abstractly about novel situations by relating them to past experiences. While it is thought to be essential for robust reasoning in AI systems, conventional approaches require significant training and/or hard-coding of domain knowledge to be applied to benchmark tasks. Inspired by cognitive science research that has found connections between human language and analogy-making, we explore the use of intuitive language-based abstractions to support analogy in AI systems. Specifically, we apply large pre-trained language models (PLMs) to visual Raven's Progressive Matrices (RPM), a common relational reasoning test. By simply encoding the perceptual features of the problem into language form, we find that PLMs exhibit a striking capacity for zero-shot relational reasoning, exceeding human performance and nearing supervised vision-based methods. We explore different encodings that vary the level of abstraction over task features, finding that higher-level abstractions further strengthen PLMs' analogical reasoning. Our detailed analysis reveals insights into the role of model complexity, in-context learning, and prior knowledge in solving RPM tasks.
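    A minimal sketch of the encoding idea, under stated assumptions: the functions and attribute names below are illustrative, not the paper's actual encoding scheme, and score_with_lm stands in for whatever log-likelihood a pre-trained LM assigns to a completed prompt.

# Hypothetical sketch: perceptual attributes of an RPM-style problem are rendered
# as text, and each candidate answer is ranked by how well a pre-trained LM scores
# the prompt it completes (zero-shot, no task-specific training).
from typing import Callable, Dict, List

def encode_panel(panel: Dict[str, str]) -> str:
    """Render one panel's attributes (e.g., shape, count) as a short phrase."""
    return ", ".join(f"{k}: {v}" for k, v in sorted(panel.items()))

def build_prompt(context: List[List[Dict[str, str]]], candidate: Dict[str, str]) -> str:
    rows = [" ; ".join(encode_panel(p) for p in row) for row in context]
    rows[-1] = rows[-1] + " ; " + encode_panel(candidate)  # fill the missing cell
    return "Complete the pattern:\n" + "\n".join(rows)

def pick_answer(
    context: List[List[Dict[str, str]]],
    candidates: List[Dict[str, str]],
    score_with_lm: Callable[[str], float],
) -> int:
    """Return the index of the candidate whose completed prompt the LM scores highest."""
    return max(range(len(candidates)),
               key=lambda i: score_with_lm(build_prompt(context, candidates[i])))

if __name__ == "__main__":
    context = [
        [{"shape": "circle", "count": "1"}, {"shape": "circle", "count": "2"}, {"shape": "circle", "count": "3"}],
        [{"shape": "square", "count": "1"}, {"shape": "square", "count": "2"}],  # last cell missing
    ]
    candidates = [{"shape": "square", "count": "3"}, {"shape": "triangle", "count": "1"}]
    # Toy scoring function standing in for an LM log-likelihood.
    print(pick_answer(context, candidates, lambda prompt: float(prompt.count("3"))))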

    Exploring the Impact of Project-Based Learning and Discovery Learning to The Students’ Learning Outcomes: Reviewed from The Analytical Skills

    The purposes of the research were to determine the difference in cognitive achievement between students who learned using the PjBL and Discovery Learning models, between students with high and low analyzing ability, and the interaction of the two. The research population comprised seventh-grade students in one of the Islamic state schools in Surakarta. The research subjects were students with knowledge capabilities ranging from low to high. The method implemented was experimental research, and a two-way ANOVA test was chosen as the data-analysis technique. Data were collected with a multiple-choice test based on aspects of analytical ability, namely mental flexibility, verbal reasoning and reading comprehension, and scientific and mechanical reasoning. The results showed an effect of applying the PjBL and Discovery Learning models on cognitive achievement (significance value 0.05), an effect of high versus low analyzing ability on cognitive achievement (significance value 0.05), and no interaction between learning model and analyzing ability (significance value 0.05). This study implies that the PjBL and Discovery models have a significant impact on student learning outcomes, so they can be used for other science subjects while paying attention to students' internal factors as a point of review.
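    For readers unfamiliar with the analysis, a two-way ANOVA of the kind reported above can be run as follows. The data here are synthetic and purely illustrative, not the study's dataset; the factor names mirror the design (learning model and analyzing ability) only for readability.

# Illustrative two-way ANOVA on synthetic data (not the study's dataset):
# factors are learning model (PjBL vs Discovery) and analyzing ability (high vs
# low); the response is a cognitive achievement score.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(42)
n_per_cell = 30
rows = []
for model in ["PjBL", "Discovery"]:
    for ability in ["high", "low"]:
        base = 75 + (5 if model == "PjBL" else 0) + (8 if ability == "high" else 0)
        for score in rng.normal(base, 6, n_per_cell):
            rows.append({"model": model, "ability": ability, "score": score})
df = pd.DataFrame(rows)

# Fit a model with both main effects and their interaction, then run the ANOVA.
fit = ols("score ~ C(model) * C(ability)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))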

    Measuring Semantic Similarity by Latent Relational Analysis

    This paper introduces Latent Relational Analysis (LRA), a method for measuring semantic similarity. LRA measures similarity in the semantic relations between two pairs of words. When two pairs have a high degree of relational similarity, they are analogous. For example, the pair cat:meow is analogous to the pair dog:bark. There is evidence from cognitive science that relational similarity is fundamental to many cognitive and linguistic tasks (e.g., analogical reasoning). In the Vector Space Model (VSM) approach to measuring relational similarity, the similarity between two pairs is calculated by the cosine of the angle between the vectors that represent the two pairs. The elements in the vectors are based on the frequencies of manually constructed patterns in a large corpus. LRA extends the VSM approach in three ways: (1) patterns are derived automatically from the corpus, (2) Singular Value Decomposition is used to smooth the frequency data, and (3) synonyms are used to reformulate word pairs. This paper describes the LRA algorithm and experimentally compares LRA to VSM on two tasks, answering college-level multiple-choice word analogy questions and classifying semantic relations in noun-modifier expressions. LRA achieves state-of-the-art results, reaching human-level performance on the analogy questions and significantly exceeding VSM performance on both tasks.
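    The VSM core that LRA extends can be sketched in a few lines. The pattern-frequency matrix below is a toy stand-in, not data from the paper's corpus, and the truncated SVD plays the role of the smoothing step described above.

# Minimal sketch of the VSM/LRA idea (toy data, not the paper's implementation):
# each word pair is a row of pattern frequencies; the matrix is smoothed with a
# truncated SVD, and relational similarity is the cosine between smoothed rows.
import numpy as np

def truncated_svd_rows(freq: np.ndarray, k: int) -> np.ndarray:
    """Project rows onto the top-k singular directions (frequency smoothing)."""
    u, s, _ = np.linalg.svd(freq, full_matrices=False)
    return u[:, :k] * s[:k]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy pattern-frequency matrix: rows are word pairs, columns are joining patterns
# (e.g., "X says Y", "X cuts Y") that would be harvested from a corpus.
pairs = ["cat:meow", "dog:bark", "mason:stone"]
freq = np.array([
    [12.0, 3.0, 0.0, 1.0],
    [10.0, 4.0, 1.0, 0.0],
    [ 0.0, 1.0, 9.0, 7.0],
])
smoothed = truncated_svd_rows(freq, k=2)
print(f"{pairs[0]} vs {pairs[1]}:", round(cosine(smoothed[0], smoothed[1]), 3))
print(f"{pairs[0]} vs {pairs[2]}:", round(cosine(smoothed[0], smoothed[2]), 3))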