
    THE RELATIONSHIP BETWEEN VERBAL REASONING ABILITY AND SKILL IN ANSWERING MATHEMATICS STORY PROBLEMS AMONG FIFTH-GRADE STUDENTS OF ELEMENTARY SCHOOLS IN CLUSTER I, SELONG SUB-DISTRICT

    Mathematics is a science that is widely used in everyday life; almost every aspect of daily activity draws on mathematical concepts. These concepts are often presented as story problems, which pose situations from everyday life that must then be solved through appropriate steps. Solving the problems presented in story problems requires several skills, one of which is verbal reasoning ability. This study aims to determine whether there is a relationship between verbal reasoning ability and skill in answering mathematics story problems among fifth-grade elementary school students in Cluster I, Selong Sub-district. The study uses a quantitative approach with a correlational design. The population consisted of 407 fifth-grade students in Cluster I, Selong Sub-district; a sample of 80 students was drawn by lottery using cluster random sampling from three schools in the cluster, namely SDN 2 Selong, SDN 3 Selong, and SDN 5 Selong. The research instruments were a multiple-choice test and an essay test, both validated by experts before use. Validation of the multiple-choice test led to expanding it from 10 to 30 items; for the essay test, validation resulted in varying the numbers in item 1, revising the wording of item 4, and replacing item 5 with a different item. Before the hypothesis was tested, prerequisite tests were carried out, namely a normality test and a linearity test.
The normality test yielded a significance value of 0.051 for verbal reasoning ability and 0.180 for skill in answering mathematics story problems; since both values exceed 0.05, the data are normally distributed. The linearity test yielded a deviation-from-linearity value of 0.611, and 0.611 > 0.05, indicating a linear relationship between verbal reasoning ability and skill in answering mathematics story problems. After the prerequisite tests, the hypothesis was tested with the Pearson product-moment correlation using IBM SPSS Statistics 23. The analysis produced r = 0.623, which falls in the interval 0.60-0.799 and indicates a strong, positive relationship: the higher the verbal reasoning score, the higher the score for answering mathematics story problems. The correlation was significant, with a sig. (2-tailed) value of 0.000, and 0.000 < 0.05, meaning that there is a relationship between verbal reasoning ability and the ability of fifth-grade students in Cluster I, Selong Sub-district to answer mathematics story problems. Based on this research, principals and teachers should accustom students to exercising their verbal reasoning skills so that they can answer mathematics story problems well.
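The abstract's core statistic, the Pearson product-moment correlation, can be sketched outside SPSS. The scores below are hypothetical placeholders for illustration, not the study's data; only the interpretation scale (0.60-0.799 = strong) comes from the abstract.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical verbal-reasoning and story-problem scores (illustration only):
verbal = [60, 72, 55, 80, 68, 90, 75, 62]
story  = [58, 70, 50, 85, 65, 88, 78, 60]
r = pearson_r(verbal, story)
# Interpret |r| on the scale the abstract uses: 0.60-0.799 counts as strong.
```

In the study itself this computation (plus Shapiro-Wilk-style normality and deviation-from-linearity checks) was done in IBM SPSS Statistics 23 on the real test scores.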

    Visual Entailment: A Novel Task for Fine-Grained Image Understanding

    Existing visual reasoning datasets, such as Visual Question Answering (VQA), often suffer from biases conditioned on the question, image, or answer distributions. The recently proposed CLEVR dataset addresses these limitations and requires fine-grained reasoning, but the dataset is synthetic and consists of similar objects and sentence structures across the dataset. In this paper, we introduce a new inference task, Visual Entailment (VE) - consisting of image-sentence pairs whereby a premise is defined by an image, rather than a natural language sentence as in traditional Textual Entailment tasks. The goal of a trained VE model is to predict whether the image semantically entails the text. To realize this task, we build a dataset SNLI-VE based on the Stanford Natural Language Inference corpus and the Flickr30k dataset. We evaluate various existing VQA baselines and build a system called Explainable Visual Entailment (EVE) to address the VE task. EVE achieves up to 71% accuracy and outperforms several other state-of-the-art VQA based models. Finally, we demonstrate the explainability of EVE through cross-modal attention visualizations. The SNLI-VE dataset is publicly available at https://github.com/necla-ml/SNLI-VE
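The construction of SNLI-VE follows from the fact that SNLI premises are Flickr30k captions, so each text premise can be swapped for the image that caption describes. The field names below are illustrative assumptions, not the dataset's actual schema:

```python
# Sketch of the SNLI-VE idea: replace each SNLI text premise (a Flickr30k
# caption) with the image that caption describes. Field names are illustrative.
snli_pairs = [
    {"premise": "Two dogs run on grass.",
     "hypothesis": "Animals are outside.",
     "label": "entailment"},
]
caption_to_image = {"Two dogs run on grass.": "flickr30k/1234.jpg"}

snli_ve = [
    {"image": caption_to_image[p["premise"]],  # the image becomes the premise
     "hypothesis": p["hypothesis"],
     "label": p["label"]}
    for p in snli_pairs
    if p["premise"] in caption_to_image
]
```

A VE model then receives (image, hypothesis) pairs and predicts entailment, neutral, or contradiction.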

    FiLM: Visual Reasoning with a General Conditioning Layer

    We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. We show that FiLM layers are highly effective for visual reasoning - answering image-related questions which require a multi-step, high-level process - a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning. Specifically, we show on visual reasoning tasks that FiLM layers 1) halve state-of-the-art error for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are robust to ablations and architectural modifications, and 4) generalize well to challenging, new data from few examples or even zero-shot. Comment: AAAI 2018. Code available at http://github.com/ethanjperez/film. Extends arXiv:1707.0301
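The feature-wise affine transformation FiLM describes is simple to state: given conditioning-derived parameters gamma and beta (one pair per feature channel), each channel is scaled and shifted. A minimal pure-Python sketch follows; in the real model gamma and beta are predicted from the conditioning input (e.g. the question) by a neural network, whereas here they are supplied directly for illustration.

```python
def film(feature_maps, gamma, beta):
    """Feature-wise Linear Modulation: y_c = gamma_c * x_c + beta_c.

    feature_maps: list of channels, each a list of activations.
    gamma, beta: one scalar per channel; in FiLM these are predicted
    from the conditioning input rather than given as constants.
    """
    return [[g * x + b for x in channel]
            for channel, g, b in zip(feature_maps, gamma, beta)]

# Two channels of four activations each; gamma/beta chosen for illustration.
out = film([[1.0, 2.0, 3.0, 4.0], [0.5, 0.5, 0.5, 0.5]],
           gamma=[2.0, 0.0], beta=[1.0, 3.0])
# The first channel is scaled and shifted; the second is zeroed out and set
# to a constant, showing how conditioning can gate channels on or off.
```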

    SE-KGE: A Location-Aware Knowledge Graph Embedding Model for Geographic Question Answering and Spatial Semantic Lifting

    Learning knowledge graph (KG) embeddings is an emerging technique for a variety of downstream tasks such as summarization, link prediction, information retrieval, and question answering. However, most existing KG embedding models neglect space and, therefore, do not perform well when applied to (geo)spatial data and tasks. Of the models that do consider space, most rely primarily on some notion of distance. These models suffer from higher computational complexity during training while still losing information beyond the relative distance between entities. In this work, we propose a location-aware KG embedding model called SE-KGE. It directly encodes spatial information such as point coordinates or bounding boxes of geographic entities into the KG embedding space. The resulting model is capable of handling different types of spatial reasoning. We also construct a geographic knowledge graph as well as a set of geographic query-answer pairs called DBGeo to evaluate the performance of SE-KGE in comparison to multiple baselines. Evaluation results show that SE-KGE outperforms these baselines on the DBGeo dataset for the geographic logic query answering task. This demonstrates the effectiveness of our spatially-explicit model and the importance of considering the scale of different geographic entities. Finally, we introduce a novel downstream task called spatial semantic lifting, which links an arbitrary location in the study area to entities in the KG via some relations. Evaluation on DBGeo shows that our model outperforms the baseline by a substantial margin. Comment: Accepted to Transactions in GI
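The core idea of encoding point coordinates directly into an embedding space can be sketched with a generic multi-scale sinusoidal encoder. This is an illustrative sketch of the family of location encoders SE-KGE builds on, not the paper's exact architecture; all parameter names and values here are assumptions.

```python
import math

def encode_point(lon, lat, num_scales=4, min_wave=1.0, max_wave=360.0):
    """Multi-scale sinusoidal encoding of a point coordinate.

    Each coordinate is projected onto sines and cosines at geometrically
    spaced wavelengths, yielding a fixed-length vector that a downstream
    KG embedding model can consume alongside entity embeddings.
    """
    emb = []
    for s in range(num_scales):
        # Wavelengths spaced geometrically between min_wave and max_wave.
        wave = min_wave * (max_wave / min_wave) ** (s / max(1, num_scales - 1))
        for coord in (lon, lat):
            emb.append(math.sin(2 * math.pi * coord / wave))
            emb.append(math.cos(2 * math.pi * coord / wave))
    return emb

vec = encode_point(-118.24, 34.05)  # e.g. a point near Los Angeles
# Length = num_scales * 2 coordinates * 2 trig terms = 16 for the defaults.
```

Bounding boxes can be handled analogously, e.g. by encoding corner points, which is one way to make the encoder sensitive to the scale of a geographic entity.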

    Explicit Reasoning over End-to-End Neural Architectures for Visual Question Answering

    Many vision and language tasks require commonsense reasoning beyond data-driven image and natural language processing. Here we adopt Visual Question Answering (VQA) as an example task, where a system is expected to answer a question in natural language about an image. Current state-of-the-art systems have attempted to solve the task using deep neural architectures and achieved promising performance. However, the resulting systems are generally opaque, and they struggle to understand questions for which extra knowledge is required. In this paper, we present an explicit reasoning layer on top of a set of penultimate neural network based systems. The reasoning layer enables reasoning and answering questions where additional knowledge is required, and at the same time provides an interpretable interface to the end users. Specifically, the reasoning layer adopts a Probabilistic Soft Logic (PSL) based engine to reason over a basket of inputs: visual relations, the semantic parse of the question, and background ontological knowledge from word2vec and ConceptNet. Experimental analysis of the answers and the key evidential predicates generated on the VQA dataset validates our approach. Comment: 9 pages, 3 figures, AAAI 201
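The flavor of reasoning over a "basket of inputs" can be illustrated with a toy scorer that combines weighted evidence from visual relations and ontological knowledge for each candidate answer. This is a deliberately crude illustration, not PSL and not the paper's engine; all relations, weights, and scores below are made up for the example.

```python
# Toy evidence combination over a basket of inputs. Confidences and weights
# are invented for illustration; a real PSL engine does soft-logic inference.
visual_relations = {("man", "riding", "horse"): 0.9}  # detector confidence
ontology_sim = {("horse", "animal"): 0.8}             # e.g. from word2vec/ConceptNet

def score_answer(candidate, subj, pred, target_type, weights=(0.6, 0.4)):
    """Combine visual and ontological evidence for one candidate answer."""
    w_vis, w_ont = weights
    vis = visual_relations.get((subj, pred, candidate), 0.0)
    ont = ontology_sim.get((candidate, target_type), 0.0)
    return w_vis * vis + w_ont * ont

# Question: "What animal is the man riding?" -> semantic parse yields
# subject="man", predicate="riding", and target type "animal".
best = max(["horse", "car"],
           key=lambda c: score_answer(c, "man", "riding", "animal"))
```

The evidential predicates that fire for the winning answer (here, the visual relation and the ontology link) are what make such a layer interpretable to end users.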

    Visual Entailment Task for Visually-Grounded Language Learning

    We introduce a new inference task - Visual Entailment (VE) - which differs from traditional Textual Entailment (TE) tasks in that a premise is defined by an image, rather than a natural language sentence as in TE tasks. A novel dataset SNLI-VE (publicly available at https://github.com/necla-ml/SNLI-VE) is proposed for VE tasks based on the Stanford Natural Language Inference corpus and Flickr30k. We introduce a differentiable architecture called the Explainable Visual Entailment model (EVE) to tackle the VE problem. EVE and several other state-of-the-art visual question answering (VQA) based models are evaluated on the SNLI-VE dataset, facilitating grounded language understanding and providing insights on how modern VQA based models perform. Comment: 4 pages, accepted by Visually Grounded Interaction and Language (ViGIL) workshop in NeurIPS 201