    Cooperative Learning Models and Reading Comprehension in Elementary School

    The purposes of this research are: (1) to describe the reading skills of students taught with the CIRC, Jigsaw, and STAD cooperative learning models; (2) to determine the effectiveness of cooperative learning models on reading comprehension for students with high versus low language logic; and (3) to determine the interaction between the learning model used and language logic in influencing reading comprehension. Each treatment group received one of the cooperative learning models: CIRC, Jigsaw, or STAD. Each group was further divided into students with high language logic and students with low language logic. The population of the study was fifth-grade elementary school students in Central Java; students were selected with a stratified random sampling technique. After the data were collected, they were presented in tables and graphs and analyzed with analysis of variance. The study has three primary findings. First, the reading skills of students taught with the CIRC model are better than those of students taught with the Jigsaw or STAD models. Second, the reading skills of students with high language logic are better than those of students with low language logic. Third, there is an interaction between the learning model used and language logic in influencing reading comprehension. Keywords: cooperative learning model, reading comprehension, language logic
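
    The analysis described corresponds to a two-way analysis of variance (learning model × language logic) with an interaction term. A minimal sketch of such a test in Python, using hypothetical scores since the study's data are not reproduced here:

        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        # Hypothetical reading-comprehension scores; the study's actual data
        # are not available here, so these values are illustrative only.
        df = pd.DataFrame({
            "score": [78, 82, 71, 69, 75, 80, 64, 70, 73, 77, 62, 66],
            "model": ["CIRC"] * 4 + ["Jigsaw"] * 4 + ["STAD"] * 4,
            "logic": ["high", "high", "low", "low"] * 3,
        })

        # Two-way ANOVA: main effects of learning model and language logic,
        # plus the model x logic interaction reported in the study.
        fit = ols("score ~ C(model) * C(logic)", data=df).fit()
        print(sm.stats.anova_lm(fit, typ=2))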

    Logic-Based Analogical Reasoning and Learning

    Analogy-making is at the core of human intelligence and creativity, with applications to such diverse tasks as commonsense reasoning, learning, language acquisition, and storytelling. This paper contributes to the foundations of artificial general intelligence by developing an abstract algebraic framework for logic-based analogical reasoning and learning in the setting of logic programming. The main idea is to define analogy in terms of modularity and to derive abstract forms of concrete programs from a 'known' source domain, which can then be instantiated in an 'unknown' target domain to obtain analogous programs. To this end, we introduce algebraic operations for syntactic program composition and concatenation and illustrate, with numerous examples, that programs have natural decompositions. Moreover, we show how composition gives rise to a qualitative notion of syntactic program similarity. We then argue that reasoning and learning by analogy is the task of solving analogical proportions between logic programs. Interestingly, our work suggests a close relationship between modularity, generalization, and analogy which we believe should be explored further. In a broader sense, this paper is a first step towards an algebraic and mainly syntactic theory of logic-based analogical reasoning and learning in knowledge representation and reasoning systems, with potential applications to fundamental AI problems like commonsense reasoning and computational learning and creativity.
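
    As a toy illustration only (the paper defines its composition and concatenation operators formally over logic programs; the rule-set representation and string substitution below are stand-in assumptions), abstracting a program in a source domain and re-instantiating it in a target domain can be sketched as:

        # Toy sketch: a program is a frozenset of (head, body) rules. The
        # abstraction/instantiation below crudely mimics deriving an abstract
        # form in a 'known' source domain and instantiating it in an
        # 'unknown' target domain; it is not the paper's algebraic framework.

        def abstract(program, pred, hole="P"):
            """Replace a predicate symbol by a placeholder, yielding a template."""
            return frozenset(
                (head.replace(pred, hole), tuple(b.replace(pred, hole) for b in body))
                for head, body in program
            )

        def instantiate(template, pred, hole="P"):
            """Fill the placeholder with a target-domain predicate symbol."""
            return frozenset(
                (head.replace(hole, pred), tuple(b.replace(hole, pred) for b in body))
                for head, body in template
            )

        # Source program: Peano natural numbers.
        nat = frozenset({("nat(0)", ()), ("nat(s(X))", ("nat(X)",))})

        # Analogous program in a target domain, obtained by instantiating
        # the abstract form with a different predicate symbol.
        print(instantiate(abstract(nat, "nat"), "count"))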

    CHR(PRISM)-based Probabilistic Logic Learning

    PRISM is an extension of Prolog with probabilistic predicates and built-in support for expectation-maximization learning. Constraint Handling Rules (CHR) is a high-level programming language based on multi-headed multiset rewrite rules. In this paper, we introduce a new probabilistic logic formalism, called CHRiSM, based on a combination of CHR and PRISM. It can be used for high-level rapid prototyping of complex statistical models by means of "chance rules". The underlying PRISM system can then be used for several probabilistic inference tasks, including probability computation and parameter learning. We define the CHRiSM language in terms of its syntax and operational semantics and illustrate it with examples. We define the notion of ambiguous programs and a distribution semantics for unambiguous programs. Next, we describe an implementation of CHRiSM based on CHR(PRISM). We discuss the relation between CHRiSM and other probabilistic logic programming languages, in particular PCHR. Finally, we identify potential application domains.
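
    In the spirit of a chance rule (a multiset rewrite rule that fires only with some probability), a rough Python analogue over a constraint store might look as follows; the function and the example rule are hypothetical illustrations, not CHRiSM syntax:

        import random
        from collections import Counter

        # Rough analogue of a probabilistic rewrite over a constraint
        # multiset: if all head constraints are present, fire with
        # probability p, removing the heads and adding the body.
        def apply_chance_rule(store, heads, body, p, rng=random):
            present = all(store[h] >= n for h, n in Counter(heads).items())
            if present and rng.random() < p:
                store.subtract(heads)   # consume matched head constraints
                store.update(body)      # add body constraints
                return True
            return False

        # Example: rewrite "flip" to "heads" with probability 0.5.
        store = Counter({"flip": 1})
        apply_chance_rule(store, heads=["flip"], body=["heads"], p=0.5)
        print(+store)  # either {'flip': 1} or {'heads': 1}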

    Back to the Future: Logic and Machine Learning

    In this paper we argue that since the beginnings of natural language processing, or computational linguistics, there has been a strong connection between logic and machine learning. First, there is something logical about language, and something linguistic about logic. Second, we argue that rather than distinguishing between logic and machine learning, a more useful distinction is between top-down approaches and data-driven approaches. Examining some recent approaches in deep learning, we argue that they incorporate both properties, and that this is the reason for their very successful adoption in solving several problems within language technology.

    Expectation Maximization in Deep Probabilistic Logic Programming

    Probabilistic Logic Programming (PLP) combines logic and probability for representing and reasoning over domains with uncertainty. Hierarchical Probabilistic Logic Programming (HPLP) is a recent PLP language whose clauses are hierarchically organized, forming a deep neural network or arithmetic circuit. Inference in HPLP is done by circuit evaluation, and learning is therefore cheaper than in generic PLP languages. In this paper we present an Expectation Maximization algorithm, called Expectation Maximization Parameter learning for HIerarchical Probabilistic Logic programs (EMPHIL), for learning HPLP parameters. The algorithm converts an arithmetic circuit into a Bayesian network and performs belief propagation over the corresponding factor graph.
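
    As a generic illustration of the expectation-maximization loop underlying such parameter learning (not EMPHIL itself, whose E-step runs belief propagation on a factor graph derived from the circuit), here is a minimal EM fit of two hidden-choice parameters from partially observed outcomes:

        import numpy as np

        # Minimal EM for a two-coin mixture: which coin produced each trial
        # is hidden, loosely analogous to hidden probabilistic choices in a
        # probabilistic logic program. Illustrative only.
        rng = np.random.default_rng(0)
        heads = rng.binomial(10, [0.8, 0.8, 0.3, 0.8, 0.3])  # 10 flips/trial

        theta = np.array([0.6, 0.5])   # initial head probabilities of coins A, B
        for _ in range(50):
            # E-step: responsibility of each coin for each trial (uniform prior).
            like = theta[:, None] ** heads * (1 - theta[:, None]) ** (10 - heads)
            resp = like / like.sum(axis=0)
            # M-step: re-estimate each coin's bias from its expected counts.
            theta = (resp @ heads) / (resp.sum(axis=1) * 10)

        print(theta)  # converges near the two coins' true biases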