2,034 research outputs found

    Probabilistic inference in SWI-Prolog

    Probabilistic Logic Programming (PLP) has emerged as one of the most prominent approaches for coping with real-world domains. The distribution semantics is one of the most widely used semantics in PLP, as it is followed by many languages, such as Independent Choice Logic, PRISM, pD, Logic Programs with Annotated Disjunctions (LPADs) and ProbLog. One system that allows performing inference on LPADs is PITA, which transforms the input LPAD into a Prolog program containing calls to library predicates for handling Binary Decision Diagrams (BDDs). In particular, BDDs are used to compactly encode explanations for goals and to efficiently compute their probability. However, PITA needs mode-directed tabling (also called tabling with answer subsumption), which has been implemented in SWI-Prolog only recently. This paper shows how SWI-Prolog has been extended to include correct answer subsumption and how the PITA transformation has been changed to use the SWI-Prolog implementation.
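    The BDD-based probability computation mentioned above can be sketched with Shannon expansion: the probability of a node is the variable's probability times the probability of the true child, plus its complement times the probability of the false child. This is a minimal Python illustration, not PITA's actual Prolog library predicates; the node structure, variable names, and probabilities are invented for the example.

    ```python
    # Minimal sketch of evaluating the probability of a Boolean function
    # encoded as a BDD, assuming independent Boolean variables.

    class BDD:
        """A BDD node: a variable with a high (true) child and a low (false)
        child. Leaves are the Python booleans True and False."""
        def __init__(self, var, high, low):
            self.var, self.high, self.low = var, high, low

    def prob(node, p):
        """Shannon expansion: P(node) = p[var]*P(high) + (1-p[var])*P(low)."""
        if node is True:
            return 1.0
        if node is False:
            return 0.0
        return p[node.var] * prob(node.high, p) + (1 - p[node.var]) * prob(node.low, p)

    # Two explanations, {a} and {not a, b}, share one BDD for (a or b):
    bdd = BDD('a', True, BDD('b', True, False))
    print(prob(bdd, {'a': 0.3, 'b': 0.5}))  # 0.3*1 + 0.7*0.5 = 0.65
    ```

    Because the BDD merges explanations that overlap, the sum never double-counts worlds, which is why the diagram gives the goal's probability directly.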

    Machine learning in the prediction of elections

    Abstract: This research article presents the analysis and comparison of three different algorithms: A) the K-means clustering method, B) expectation with convergence criteria, and C) the LAMDA classification methodology, using two classification software packages, Weka and SALSA, as aids for predicting future elections in the state of Quintana Roo. When working with electoral data, the data are classified qualitatively and quantitatively, so that by the end of this article the reader will have the elements needed to decide which software performs better at learning this classification. The main reason for this work is to demonstrate the efficiency of the algorithms on different types of data. In the end, it will be possible to decide which algorithm performs best for handling the information. Keywords: machine learning, fuzzy logic, clustering, Weka, SALSA, LAMDA, state elections, prediction

    A history of probabilistic inductive logic programming

    The field of Probabilistic Logic Programming (PLP) has seen significant advances in the last 20 years, with many proposals for languages that combine probability with logic programming. From the start, the problem of learning probabilistic logic programs has been the focus of much attention, and learning these programs represents a whole subfield of Inductive Logic Programming (ILP). In Probabilistic ILP (PILP), two problems are considered: learning the parameters of a program given its structure (the rules), and learning both the structure and the parameters. Usually, structure learning systems use parameter learning as a subroutine. In this article, we present an overview of PILP and discuss its main results.

    MetaReg: a platform for modeling, analysis and visualization of biological systems using large-scale experimental data

    A new computational tool is presented that allows the integration of high-throughput experimental results with the probabilistic modeling of previously obtained information about cellular systems. The tool (MetaReg) is demonstrated on the leucine biosynthesis system in S. cerevisiae.

    Learning Tuple Probabilities in Probabilistic Databases

    Learning the parameters of complex probabilistic-relational models from labeled training data is a standard technique in machine learning, which has been intensively studied in the subfield of Statistical Relational Learning (SRL), but so far remains an under-investigated topic in the context of Probabilistic Databases (PDBs). In this paper, we focus on learning the probability values of base tuples in a PDB from query answers, the latter of which are represented as labeled lineage formulas. Specifically, we consider labels in the form of pairs, each consisting of a Boolean lineage formula and a marginal probability that comes attached to the corresponding query answer. The resulting learning problem can be viewed as the inverse problem to confidence computations in PDBs: given a set of labeled query answers, learn the probability values of the base tuples such that the marginal probabilities of the query answers again yield the assigned probability labels. We analyze the learning problem from a theoretical perspective, devise two optimization-based objectives, and provide an efficient algorithm (based on Stochastic Gradient Descent) for solving these objectives. Finally, we conclude this work with an experimental evaluation on three real-world datasets and one synthetic dataset, comparing against various techniques from SRL, reasoning in information extraction, and optimization.
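    The inverse problem described above can be sketched in a few lines: pick tuple probabilities that make the computed marginals of labeled lineage formulas match their labels, by gradient descent on a squared-error objective. This is a hedged Python illustration of the idea, not the paper's algorithm; the lineage formulas, labels, learning rate, and numeric-gradient shortcut are all invented for the example.

    ```python
    # Sketch: recover base-tuple probabilities from labeled lineage formulas
    # via stochastic gradient descent on the squared error between the
    # computed marginal and the probability label.

    def marginal(f, p):
        """Marginal probability of lineage f over independent base tuples p.
        Subformulas of 'and'/'or' are treated as independent, which holds
        here because each formula mentions every tuple variable at most once."""
        op = f[0]
        if op == 'var':
            return p[f[1]]
        if op == 'and':
            return marginal(f[1], p) * marginal(f[2], p)
        if op == 'or':
            a, b = marginal(f[1], p), marginal(f[2], p)
            return a + b - a * b   # inclusion-exclusion for independent events

    # Two labeled query answers over tuples t0, t1 (hidden truth: p = [0.6, 0.3]):
    data = [
        (('or',  ('var', 0), ('var', 1)), 0.72),   # 0.6 + 0.3 - 0.6*0.3
        (('and', ('var', 0), ('var', 1)), 0.18),   # 0.6 * 0.3
    ]

    params, lr, eps = [0.5, 0.5], 0.5, 1e-6
    for step in range(4000):
        f, y = data[step % len(data)]              # SGD: one labeled answer per step
        for i in range(len(params)):               # numeric gradient of squared error
            params[i] += eps
            up = (marginal(f, params) - y) ** 2
            params[i] -= 2 * eps
            down = (marginal(f, params) - y) ** 2
            params[i] += eps
            params[i] -= lr * (up - down) / (2 * eps)
            params[i] = min(1.0, max(0.0, params[i]))  # keep a valid probability

    print([round(x, 3) for x in params])
    ```

    After training, the learned tuple probabilities reproduce both labeled marginals; note the solution need not be unique (here the labels are symmetric in the two tuples), which is one reason the paper studies the problem's theoretical properties.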

    Computational intelligent methods for trusting in social networks

    104 p.
    This Thesis covers three research lines in Social Networks. The first research line concerns Trust: different ways of feature extraction are proposed for trust prediction, comparing results with classic methods, and the problem of badly balanced datasets is addressed. The second research line concerns Recommendation Systems, with two experiments: the first on recipe generation with a bread machine, and the second on product generation based on ratings given by users. The third research line concerns Influence Maximization, for which a new heuristic method is proposed that finds a minimal set of nodes maximizing influence over the network.