    Digital Tools in Language Education: A Case Study on the Integration of Whiteboard.Fi in German Language Classes at SMAN 8 Malang, Indonesia

    This research aims to describe the use of Whiteboard.fi as a supporting tool in German language learning and the responses of 11th-grade students from the IPA 5 class at SMAN 8 Malang during the 2022/2023 academic year when using Whiteboard.fi. The research employed a qualitative descriptive method, with data collected through observation and questionnaire responses. The findings indicate that using Whiteboard.fi as a learning tool in German language classes creates an enjoyable learning experience. Both students and teachers actively engage in the learning process through its feedback mechanisms. In their feedback, students expressed positive opinions and reported no boredom during the lessons. Whiteboard.fi is used most effectively when accessed via a PC or laptop, as the larger display makes note-taking during lessons easier.

    The Effect of Learning Styles on Improving Students' Learning Outcomes in Biographical Text Material (Pengaruh Gaya Belajar Terhadap Peningkatan Hasil Belajar Peserta Didik Materi Teks Biografi)

    This research was motivated by students' suboptimal learning outcomes, especially for material that requires narrative elaboration, such as biographical texts. To address this through classroom action research, the researchers applied a variety of learning styles in the hope of improving student learning outcomes for biographical text material. The research was limited to biographical text material in Indonesian language lessons in Class X-9 at SMA Negeri 1 Menganti. The study used classroom action research methods, with data collected through observation, interviews, and tests. The target for improvement was 75% classical completeness with a minimum mastery score of 65. The research was carried out in two cycles, with learning outcomes changing between them. In the first cycle, Class X-9 achieved 91.87% classical completeness with an average score of 78.81, while in the second cycle it achieved 100% classical completeness with an average score of 86.17. From these results, it can be concluded that applying varied learning styles can improve student learning outcomes for biographical text material. The percentage of students responding positively to learning with varied learning styles was 92.96%, while 7.04% responded negatively; since positive responses outweighed negative ones, students were interested in applying learning styles to improve their learning outcomes.
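
    The abstract reports two figures per cycle: classical completeness (the share of students reaching the minimum mastery score of 65) and the class average. As a hedged illustration only, the Python sketch below shows how these figures are typically computed from per-student test scores; the scores used here are hypothetical, not the study's data.

        # Illustration only: classical completeness and class average as commonly
        # computed in classroom action research. The scores are hypothetical.

        def classical_completeness(scores, minimum_mastery=65):
            """Percentage of students scoring at or above the mastery threshold."""
            passed = sum(1 for s in scores if s >= minimum_mastery)
            return 100.0 * passed / len(scores)

        def class_average(scores):
            return sum(scores) / len(scores)

        cycle_scores = [60, 70, 75, 80, 85, 90, 65, 72, 88, 79]   # hypothetical
        completeness = classical_completeness(cycle_scores)
        print(f"classical completeness: {completeness:.2f}%")
        print(f"average achievement: {class_average(cycle_scores):.2f}")
        print("target met" if completeness >= 75 else "target not met")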

    Knowledge Base Population using Semantic Label Propagation

    A crucial aspect of a knowledge base population system that extracts new facts from text corpora is the generation of training data for its relation extractors. In this paper, we present a method that maximizes the effectiveness of newly trained relation extractors at a minimal annotation cost. Manual labeling can be significantly reduced by distant supervision, a method that constructs training data automatically by aligning a large text corpus with an existing knowledge base of known facts. For example, all sentences mentioning both 'Barack Obama' and 'US' may serve as positive training instances for the relation born_in(subject,object). However, distant supervision typically results in a highly noisy training set: many training sentences do not really express the intended relation. We propose to combine distant supervision with minimal manual supervision in a technique called feature labeling, to eliminate noise from the large and noisy initial training set, resulting in a significant increase in precision. We further improve on this approach by introducing the Semantic Label Propagation method, which uses the similarity between low-dimensional representations of candidate training instances to extend the training set in order to increase recall while maintaining high precision. Our proposed strategy for generating training data is studied and evaluated on an established test collection designed for knowledge base population tasks. The experimental results show that the Semantic Label Propagation strategy leads to substantial performance gains when compared to existing approaches, while requiring an almost negligible manual annotation effort. Comment: Submitted to Knowledge Based Systems, special issue on Knowledge Bases for Natural Language Processing
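
    To make the two ideas above concrete, the hedged Python sketch below illustrates (i) distant supervision, where any sentence mentioning both entities of a known fact becomes a candidate training instance, and (ii) the intuition behind extending a small manually verified seed set using similarity between low-dimensional sentence representations. The sentences, the stand-in embedding, and the similarity threshold are hypothetical; this is not the paper's implementation.

        # Hedged sketch, not the paper's system: collect noisy candidates by
        # distant supervision, then keep only candidates close to manually
        # verified seeds in a low-dimensional embedding space.
        import numpy as np

        known_facts = {("Barack Obama", "US")}        # facts from the knowledge base
        sentences = [
            "Barack Obama was born in the US in 1961.",
            "Barack Obama visited the US Congress.",  # mentions both, wrong relation
            "Angela Merkel met Barack Obama in Berlin.",
        ]

        # Distant supervision: a sentence mentioning both entities is a candidate.
        candidates = [s for s in sentences
                      if any(subj in s and obj in s for subj, obj in known_facts)]

        def embed(sentence, dim=16):
            """Stand-in for a real low-dimensional sentence representation."""
            rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
            return rng.normal(size=dim)

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        # A few candidates are manually verified as true positives; the rest are
        # kept only if they are similar enough to a verified seed.
        seeds = [embed(candidates[0])]
        THRESHOLD = 0.3                               # hypothetical cutoff
        training_set = [s for s in candidates
                        if any(cosine(embed(s), v) >= THRESHOLD for v in seeds)]
        print(training_set)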

    Learning programs by learning from failures

    We describe an inductive logic programming (ILP) approach called learning from failures. In this approach, an ILP system (the learner) decomposes the learning problem into three separate stages: generate, test, and constrain. In the generate stage, the learner generates a hypothesis (a logic program) that satisfies a set of hypothesis constraints (constraints on the syntactic form of hypotheses). In the test stage, the learner tests the hypothesis against training examples. A hypothesis fails when it does not entail all the positive examples or entails a negative example. If a hypothesis fails, then, in the constrain stage, the learner learns constraints from the failed hypothesis to prune the hypothesis space, i.e., to constrain subsequent hypothesis generation. For instance, if a hypothesis is too general (entails a negative example), the constraints prune generalisations of the hypothesis. If a hypothesis is too specific (does not entail all the positive examples), the constraints prune specialisations of the hypothesis. This loop repeats until either (i) the learner finds a hypothesis that entails all the positive and none of the negative examples, or (ii) there are no more hypotheses to test. We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog. Popper supports infinite problem domains, reasoning about lists and numbers, learning textually minimal programs, and learning recursive programs. Our experimental results on three domains (toy game problems, robot strategies, and list transformations) show that (i) constraints drastically improve learning performance, and (ii) Popper can outperform existing ILP systems, both in terms of predictive accuracies and learning times. Comment: Accepted for the machine learning journal
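
    The generate-test-constrain loop can be illustrated with a hedged toy sketch. The Python below enumerates a tiny hypothetical hypothesis space of numeric predicates, tests each hypothesis against positive and negative examples, and prunes generalisations of too-general hypotheses and specialisations of too-specific ones. It only mimics the shape of the loop; it is not Popper's answer set programming and Prolog implementation.

        # Toy sketch of the generate-test-constrain loop (not Popper itself).
        # Hypotheses are numeric predicates; "more general" means accepting a
        # superset of a small finite domain. Everything here is hypothetical.
        hypotheses = {
            "x > 0":  lambda x: x > 0,
            "x > 5":  lambda x: x > 5,
            "x >= 0": lambda x: x >= 0,
            "even":   lambda x: x % 2 == 0,
        }
        positives, negatives = [2, 4, 6], [-3, 7]     # hypothetical examples
        DOMAIN = range(-10, 11)

        def accepted(name):
            return {x for x in DOMAIN if hypotheses[name](x)}

        def more_general(h1, h2):
            return h1 != h2 and accepted(h1) >= accepted(h2)

        pruned, solution = set(), None
        for name in hypotheses:                       # generate
            if name in pruned:
                continue
            entails_pos = all(hypotheses[name](x) for x in positives)   # test
            entails_neg = any(hypotheses[name](x) for x in negatives)
            if entails_pos and not entails_neg:
                solution = name
                break
            if entails_neg:      # too general: prune its generalisations
                pruned |= {h for h in hypotheses if more_general(h, name)}
            if not entails_pos:  # too specific: prune its specialisations
                pruned |= {h for h in hypotheses if more_general(name, h)}
        print("learned:", solution)                   # prints "learned: even"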

    Learning a Policy for Opportunistic Active Learning

    Active learning identifies data points to label that are expected to be the most useful in improving a supervised model. Opportunistic active learning incorporates active learning into interactive tasks that constrain possible queries during interactions. Prior work has shown that opportunistic active learning can be used to improve grounding of natural language descriptions in an interactive object retrieval task. In this work, we use reinforcement learning for such an object retrieval task to learn a policy that effectively trades off task completion with model improvement that would benefit future tasks. Comment: EMNLP 2018 Camera Ready
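
    As a hedged illustration of the trade-off such a policy faces, the Python sketch below scores two actions at each interaction turn: answer now and complete the retrieval task, or spend a query that mainly improves the model for future tasks. The state features, weights, and update rule are hypothetical stand-ins, not the paper's reinforcement-learned policy.

        # Illustration only: the query-or-guess decision in opportunistic active
        # learning, with a hypothetical linear scoring rule standing in for a
        # learned policy.
        def policy_action(state, weights):
            """Score 'guess' vs 'query' and return the higher-scoring action."""
            scores = {
                "guess": weights["confidence"] * state["model_confidence"],
                "query": weights["uncertainty"] * state["query_informativeness"]
                         - weights["cost"] * state["queries_asked"],
            }
            return max(scores, key=scores.get)

        weights = {"confidence": 1.0, "uncertainty": 0.8, "cost": 0.2}
        state = {"model_confidence": 0.3, "query_informativeness": 0.9,
                 "queries_asked": 0}
        while policy_action(state, weights) == "query":
            state["queries_asked"] += 1
            state["model_confidence"] += 0.25         # pretend the answer helped
            state["query_informativeness"] *= 0.6     # later queries help less
        print("guess after", state["queries_asked"], "queries")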