30 research outputs found

    Greedy Algorithm for Inference of Decision Trees from Decision Rule Systems

    Full text link
    Decision trees and decision rule systems play important roles as classifiers, knowledge representation tools, and algorithms. They are easily interpretable models for data analysis, making them widely used and studied in computer science. Understanding the relationships between these two models is an important task in this field. There are well-known methods for converting decision trees into systems of decision rules. In this paper, we consider the inverse transformation problem, which is not so simple. Instead of constructing an entire decision tree, our study focuses on a greedy polynomial-time algorithm that simulates the operation of a decision tree on a given tuple of attribute values. (arXiv admin note: substantial text overlap with arXiv:2305.01721, arXiv:2302.0706)
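    A minimal sketch of the idea, assuming rules represented as (conditions, decision) pairs: the abstract does not give the paper's actual greedy criterion, so the attribute-selection heuristic below (query the unqueried attribute that occurs in the most still-compatible rules) is an illustrative assumption, not the authors' algorithm.

```python
def simulate_tree_on_tuple(rules, tuple_values):
    """Simulate a decision tree on one tuple, guided by a decision rule system.

    rules        : list of (conditions, decision), conditions = {attr: value}
    tuple_values : {attr: value} for the tuple being classified
    Returns the decision of the first rule whose conditions are all confirmed
    by the attributes queried so far, or None if no rule applies.
    """
    queried = {}              # attribute -> value revealed so far
    alive = list(rules)       # rules not yet contradicted by queried values

    while alive:
        # A rule fires once every one of its conditions has been queried
        # and matched by the tuple.
        for conditions, decision in alive:
            if all(a in queried and queried[a] == v for a, v in conditions.items()):
                return decision

        # Greedy step (assumed heuristic): query the unqueried attribute
        # appearing in the most surviving rules.
        candidates = {a for conds, _ in alive for a in conds if a not in queried}
        if not candidates:
            return None
        attr = max(candidates,
                   key=lambda a: sum(a in conds for conds, _ in alive))
        queried[attr] = tuple_values[attr]

        # Discard rules contradicted by the revealed value.
        alive = [(conds, d) for conds, d in alive
                 if attr not in conds or conds[attr] == queried[attr]]
    return None


# Toy usage with made-up rules and a made-up tuple.
rules = [({"x1": 1, "x2": 0}, "A"), ({"x1": 0}, "B"), ({"x2": 1}, "C")]
print(simulate_tree_on_tuple(rules, {"x1": 1, "x2": 0}))  # -> "A"
```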

    Representing and Reasoning on Conceptual Queries Over Image Databases

    Get PDF
    The problem of content management of multimedia data types (e.g., image, video, graphics) is becoming increasingly important with the development of advanced multimedia applications. Traditional database management systems are inadequate for handling such data types, which require new techniques for query formulation, retrieval, evaluation, and navigation. In this paper we develop a knowledge-based framework for modeling and retrieving image data by content. To represent the various aspects of an image object's characteristics, we propose a model consisting of three layers: (1) the Feature and Content Layer, intended to contain image visual features such as contours, shapes, etc.; (2) the Object Layer, which provides the (conceptual) content dimension of images; and (3) the Schema Layer, which contains the structured abstractions of images, i.e., a general schema about the classes of objects represented in the object layer. We propose two abstract languages based on description logics: one for describing knowledge of the object and schema layers, and another, more expressive one, for making queries. Queries can refer to the form dimension (i.e., information of the Feature and Content Layer) or to the content dimension (i.e., information of the Object Layer). These languages employ a variable-free notation and are well suited for the design, verification, and complexity analysis of algorithms. As the amount of information contained in these layers may be huge and operations performed at the Feature and Content Layer are time-consuming, resorting to materialized views to process and optimize queries may be extremely useful. To that end, we propose a formal framework for testing containment of a query in a view expressed in our query language. The algorithm we propose is sound, complete, and relatively efficient. This is an extended version of the article in: Eleventh International Symposium on Methodologies for Intelligent Systems, Warsaw, Poland, 1999.
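    The sketch below only illustrates the three-layer organisation described above; the class and field names are assumptions, the paper's description-logic languages and containment algorithm are not reproduced, and the toy query just resolves a schema class to its subclasses before matching object-layer instances.

```python
from dataclasses import dataclass, field


@dataclass
class FeatureContent:
    """Feature and Content Layer: raw visual features of one image."""
    contours: list = field(default_factory=list)
    shapes: list = field(default_factory=list)


@dataclass
class ImageObject:
    """Object Layer: a conceptual object recognised in an image."""
    name: str                 # e.g. "car"
    image_id: str
    features: FeatureContent


@dataclass
class SchemaClass:
    """Schema Layer: a structured abstraction over object-layer classes."""
    name: str                                        # e.g. "Vehicle"
    subclasses: list = field(default_factory=list)   # more specific class names


def content_query(objects, wanted_class, schema):
    """Toy content-dimension query: ids of images containing an instance of
    wanted_class or of one of its subclasses in the schema."""
    cls = schema[wanted_class]
    accepted = {cls.name, *cls.subclasses}
    return {o.image_id for o in objects if o.name in accepted}


# Made-up usage.
schema = {"Vehicle": SchemaClass("Vehicle", subclasses=["car", "truck"])}
objs = [ImageObject("car", "img1", FeatureContent()),
        ImageObject("tree", "img2", FeatureContent())]
print(content_query(objs, "Vehicle", schema))   # -> {'img1'}
```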

    Extending Attribute-Oriented Induction as a Key-Preserving Data Mining Method

    Full text link

    Bounds on Depth of Decision Trees Derived from Decision Rule Systems

    Full text link
    Systems of decision rules and decision trees are widely used as a means for knowledge representation, as classifiers, and as algorithms. They are among the most interpretable models for classifying and representing knowledge. The study of relationships between these two models is an important task of computer science. It is easy to transform a decision tree into a decision rule system; the inverse transformation is a more difficult task. In this paper, we study unimprovable upper and lower bounds on the minimum depth of decision trees derived from decision rule systems, depending on various parameters of these systems.
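    For reference, the easy forward direction mentioned in the abstract can be sketched as follows: each root-to-leaf path of a decision tree yields one decision rule. The nested-tuple tree encoding is an assumption made for illustration.

```python
def tree_to_rules(node, path=None):
    """Convert a decision tree into an equivalent decision rule system.

    A leaf is ("leaf", decision); an internal node is
    ("node", attribute, {value: child_subtree}).
    Returns a list of (conditions, decision) pairs, where conditions maps
    attributes to the values tested along one root-to-leaf path.
    """
    path = dict(path or {})
    if node[0] == "leaf":
        return [(path, node[1])]
    _, attribute, children = node
    rules = []
    for value, child in children.items():
        rules.extend(tree_to_rules(child, {**path, attribute: value}))
    return rules


# Toy tree: query x1; if 0 decide "B", if 1 query x2 and decide "A" or "C".
tree = ("node", "x1", {0: ("leaf", "B"),
                       1: ("node", "x2", {0: ("leaf", "A"),
                                          1: ("leaf", "C")})})
print(tree_to_rules(tree))
# [({'x1': 0}, 'B'), ({'x1': 1, 'x2': 0}, 'A'), ({'x1': 1, 'x2': 1}, 'C')]
```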

    Test: Internet Indexing Systems vs List of Known URLs: Revisited

    Get PDF
    This is a compilation of the tests done in Sept./Oct. 1997 by the author on the then-existing search engines. It was published on the web on the author's home page. As the web pages changed, this material was pushed off into the old site and forgotten. The original HTML pages were converted into PDF using LibreOffice in Aug. 2022 and are placed in the Spectrum repository for the record.

    Learning Logic Programs by Discovering Higher-Order Abstractions

    Full text link
    Discovering novel abstractions is important for human-level AI. We introduce an approach to discover higher-order abstractions, such as map, filter, and fold. We focus on inductive logic programming, which induces logic programs from examples and background knowledge. We introduce the higher-order refactoring problem, where the goal is to compress a logic program by introducing higher-order abstractions. We implement our approach in STEVIE, which formulates the higher-order refactoring problem as a constraint optimisation problem. Our experimental results on multiple domains, including program synthesis and visual reasoning, show that, compared to no refactoring, STEVIE can improve predictive accuracies by 27% and reduce learning times by 47%. We also show that STEVIE can discover abstractions that transfer to different domains.
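    The Python analogue below is only meant to illustrate the compression idea behind higher-order abstractions; STEVIE itself works on logic programs and casts refactoring as constraint optimisation, which is not reproduced here. Two structurally identical recursive definitions collapse into one shared fold abstraction plus two short callers.

```python
# Before refactoring: two structurally identical recursive programs.
def sum_list(xs):
    return 0 if not xs else xs[0] + sum_list(xs[1:])

def product_list(xs):
    return 1 if not xs else xs[0] * product_list(xs[1:])

# After refactoring: one higher-order abstraction ...
def fold(f, acc, xs):
    return acc if not xs else f(xs[0], fold(f, acc, xs[1:]))

# ... and two short definitions that reuse it.
def sum_list2(xs):
    return fold(lambda a, b: a + b, 0, xs)

def product_list2(xs):
    return fold(lambda a, b: a * b, 1, xs)

assert sum_list2([1, 2, 3]) == sum_list([1, 2, 3]) == 6
assert product_list2([1, 2, 3]) == product_list([1, 2, 3]) == 6
```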

    Logical omniscience as infeasibility

    Full text link

    Murray Ledger and Times, November 3, 2004

    Get PDF

    LOGIC AND CONSTRAINT PROGRAMMING FOR COMPUTATIONAL SUSTAINABILITY

    Get PDF
    Computational Sustainability is an interdisciplinary field that aims to develop computational and mathematical models and methods for decision making concerning the management and allocation of resources, in order to help solve environmental problems. This thesis deals with a broad spectrum of such problems (energy efficiency, water management, limiting greenhouse gas emissions and fuel consumption), contributing towards their solution by means of Logic Programming (LP) and Constraint Programming (CP), well-established declarative paradigms from Artificial Intelligence. The problems described in this thesis were proposed by experts in the respective domains and tested on the real data instances they provided. The results are encouraging and show the aptness of the chosen methodologies and approaches. The overall aim of this work is twofold: to address real-world problems in order to achieve practical results, and to obtain, from the application of LP and CP technologies to complex scenarios, feedback and directions useful for their improvement.
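    As a toy illustration of the constraint-modelling flavour of such problems (not taken from the thesis, and with all names and numbers made up), the brute-force sketch below schedules appliances into time slots under a per-slot load cap while minimising energy cost; real instances of this kind would be handed to an LP or CP solver rather than enumerated.

```python
from itertools import product

appliances = {"washer": 2.0, "heater": 3.0, "oven": 2.5}   # power draw (kW)
slots = [0, 1, 2, 3]                                        # candidate time slots
price = {0: 0.30, 1: 0.20, 2: 0.15, 3: 0.25}                # cost per kWh per slot
slot_capacity = 4.0                                         # max load per slot (kW)

best_cost, best_plan = float("inf"), None
for assignment in product(slots, repeat=len(appliances)):
    plan = dict(zip(appliances, assignment))
    # Constraint: total draw in each slot must not exceed the capacity.
    load = {s: sum(p for a, p in appliances.items() if plan[a] == s) for s in slots}
    if any(l > slot_capacity for l in load.values()):
        continue
    # Objective: minimise total energy cost (1-hour runs assumed).
    cost = sum(appliances[a] * price[plan[a]] for a in appliances)
    if cost < best_cost:
        best_cost, best_plan = cost, plan

print(best_plan, round(best_cost, 2))
```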