30 research outputs found

    A concept drift-tolerant case-base editing technique

    Full text link
    The evolving nature and accumulating volume of real-world data inevitably give rise to the so-called "concept drift" issue, causing many deployed Case-Based Reasoning (CBR) systems to require additional maintenance procedures. In Case-base Maintenance (CBM), case-base editing strategies that revise the case-base have proven to be effective instance selection approaches for handling concept drift. Motivated by the difficulties current CBR techniques face in handling concept drift, we present a two-stage case-base editing technique. In Stage 1, we propose a Noise-Enhanced Fast Context Switch (NEFCS) algorithm, which targets the removal of noise in a dynamic environment; in Stage 2, we develop a Stepwise Redundancy Removal (SRR) algorithm, which reduces the size of the case-base by eliminating redundancies while preserving case-base coverage. Experimental evaluations on several public real-world datasets show that our case-base editing technique significantly improves accuracy compared to other case-base editing approaches on concept drift tasks, while preserving its effectiveness on static tasks.
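    To make the two-stage idea concrete, the Python sketch below applies a first pass that removes noisy cases (those misclassified by their nearest neighbours) and a second pass that greedily drops cases the remaining ones still cover. It uses a generic edited-nearest-neighbour rule with scikit-learn and is only an illustration of the general scheme; the paper's NEFCS and SRR algorithms add drift-awareness and explicit coverage preservation that are not shown here.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def edit_case_base(X, y, k=3):
            """Two-stage case-base editing sketch: noise removal, then redundancy removal.

            Generic edited-nearest-neighbour baseline, NOT the paper's NEFCS/SRR
            algorithms (which handle concept drift and coverage explicitly).
            Assumes y holds non-negative integer class labels.
            """
            X, y = np.asarray(X, dtype=float), np.asarray(y)

            # Stage 1: drop noisy cases, i.e. cases misclassified by their k neighbours
            # (query k+1 neighbours and skip the first, which is the case itself).
            knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
            _, idx = knn.kneighbors(X, n_neighbors=k + 1)
            neigh_labels = y[idx[:, 1:]]
            majority = np.array([np.bincount(row).argmax() for row in neigh_labels])
            X, y = X[majority == y], y[majority == y]

            # Stage 2: greedily drop redundant cases that the remaining cases
            # still classify correctly (a crude stand-in for coverage preservation).
            kept = list(range(len(X)))
            for i in list(kept):
                rest = [j for j in kept if j != i]
                if len(rest) < k:
                    break
                knn = KNeighborsClassifier(n_neighbors=k).fit(X[rest], y[rest])
                if knn.predict(X[i:i + 1])[0] == y[i]:
                    kept = rest  # case i is redundant: its neighbours cover it
            return X[kept], y[kept]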

    Answer Set Programming for Non-Stationary Markov Decision Processes

    Full text link
    Non-stationary domains, where unforeseen changes happen, present a challenge for agents to find an optimal policy for a sequential decision making problem. This work investigates a solution to this problem that combines Markov Decision Processes (MDP) and Reinforcement Learning (RL) with Answer Set Programming (ASP) in a method we call ASP(RL). In this method, Answer Set Programming is used to find the possible trajectories of an MDP, from where Reinforcement Learning is applied to learn the optimal policy of the problem. Results show that ASP(RL) is capable of efficiently finding the optimal solution of an MDP representing non-stationary domains
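    As a rough illustration of this division of labour, the sketch below assumes the ASP step has already been solved and has returned, for each state, the set of actions that occur in some answer set (a plain dict called feasible here); tabular Q-learning is then run over only those state-action pairs. The environment interface (reset/step) and the feasible structure are assumptions made for the example, not the paper's implementation of ASP(RL).

        import random
        from collections import defaultdict

        def q_learning_over_asp_trajectories(env, feasible, episodes=500,
                                             alpha=0.1, gamma=0.95, epsilon=0.1):
            """Tabular Q-learning restricted to (state, action) pairs that an ASP
            program has marked as feasible.

            Illustrative sketch only: the ASP step (answer-set computation) is
            assumed to have produced `feasible`, a dict mapping each state to the
            list of actions allowed in that state.  `env.step` is assumed to return
            (next_state, reward, done).
            """
            Q = defaultdict(float)  # Q[(state, action)] -> estimated value

            def best(state):
                # Greedy action among the ASP-feasible actions only.
                return max(feasible[state], key=lambda a: Q[(state, a)])

            for _ in range(episodes):
                state = env.reset()
                done = False
                while not done:
                    acts = feasible[state]
                    action = random.choice(acts) if random.random() < epsilon else best(state)
                    next_state, reward, done = env.step(action)
                    target = reward if done else reward + gamma * Q[(next_state, best(next_state))]
                    Q[(state, action)] += alpha * (target - Q[(state, action)])
                    state = next_state
            return Q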

    Comparing Formal Specification Languages

    Get PDF
    This paper presents a comparison of eight specification languages discussed during the Workshop on Formal Specification Techniques for Complex Reasoning Systems, held in Vienna during the ECAI'92 conference. The languages discussed here share many important characteristics, but also differ substantially. The analysis starts from the purposes of the presented languages and focuses on the way heuristic knowledge is dealt with in the specification of the common example task. Several differences between the languages are discussed: expressive power; the way of specifying control knowledge; and layering of the system architecture. We also identify points on which a certain consensus can already be found and that are common to most of the languages: modularity; local declarativeness; a multi-level view on a specification; distinct specification of static and dynamic aspects; separation of generic and domain-specific parts of a specification; object-meta distinctions in specifications; and the level of specification. Finally, the major open problems are presented.

    Efficient AUC optimization for classification

    No full text
    In this paper we present an efficient method for inducing classifiers that directly optimize the area under the ROC curve. The AUC has recently gained importance in the classification community as a means of comparing the performance of classifiers. Because most classification methods do not optimize this measure directly, several learning methods are emerging that do optimize the AUC directly. These methods, however, require many costly computations of the AUC and hence do not scale well to large datasets. We develop a method that increases the efficiency of computing the AUC based on a polynomial approximation of the AUC. As a proof of concept, the approximation is plugged into the construction of a scalable linear classifier that directly optimizes the AUC using gradient descent. Experiments on real-life datasets show high accuracy and efficiency of the polynomial approximation.
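    The sketch below illustrates the pairwise idea behind direct AUC optimization with a linear model: the indicator that a positive example is scored above a negative one is replaced by a smooth polynomial surrogate (a squared hinge here) and minimized by gradient descent. This naive version materializes all positive-negative pairs, which is exactly the quadratic cost the paper's polynomial approximation is designed to avoid; the surrogate used here is a common stand-in, not the paper's specific polynomial. Labels are assumed to be 0/1.

        import numpy as np

        def train_linear_auc(X, y, lr=0.01, epochs=200, margin=1.0):
            """Linear scorer trained by gradient descent on a smooth pairwise
            surrogate of the AUC: mean over all positive-negative pairs of
            max(0, margin - (w.x_pos - w.x_neg))**2.
            """
            X, y = np.asarray(X, dtype=float), np.asarray(y)
            pos, neg = X[y == 1], X[y == 0]
            w = np.zeros(X.shape[1])

            for _ in range(epochs):
                diff = (pos @ w)[:, None] - (neg @ w)[None, :]  # all pos-neg score gaps
                viol = margin - diff
                viol[viol < 0] = 0.0          # only pairs inside the margin contribute
                # Gradient of the mean squared-hinge loss over all pairs w.r.t. w.
                grad = (-2.0 / viol.size) * (
                    (viol.sum(axis=1) @ pos) - (viol.sum(axis=0) @ neg)
                )
                w -= lr * grad
            return w

        def auc(w, X, y):
            """Exact AUC of the linear scorer w: fraction of correctly ordered
            positive-negative pairs (ties counted as incorrect)."""
            s = np.asarray(X, dtype=float) @ w
            sp, sn = s[y == 1], s[y == 0]
            return float(np.mean(sp[:, None] > sn[None, :]))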

    Classification with Belief Decision Trees

    No full text