    Rule induction performance in amnestic mild cognitive impairment and Alzheimer’s dementia: examining the role of simple and biconditional rule learning processes

    Introduction: Rule induction tests such as the Wisconsin Card Sorting Test require executive control processes, but also the learning and memorization of simple stimulus–response rules. In this study, we examined the contribution of diminished learning and memorization of simple rules to complex rule induction test performance in patients with amnestic mild cognitive impairment (aMCI) or Alzheimer’s dementia (AD). Method: Twenty-six aMCI patients, 39 AD patients, and 32 control participants were included. A task was used in which the memory load and the complexity of the rules were independently manipulated. This task consisted of three conditions: a simple two-rule learning condition (Condition 1), a simple four-rule learning condition (inducing an increase in memory load, Condition 2), and a complex biconditional four-rule learning condition (inducing an increase in complexity and, hence, executive control load, Condition 3). Results: Performance of AD patients declined disproportionately when the number of simple rules that had to be memorized increased (from Condition 1 to 2). An additional increment in complexity (from Condition 2 to 3) did not, however, disproportionately affect performance of the patients. Performance of the aMCI patients did not differ from that of the control participants. In the patient group, correlation analysis showed that memory performance correlated with Condition 1 performance, whereas executive task performance correlated with Condition 2 performance. Conclusions: These results indicate that the reduced learning and memorization of underlying task rules explains a significant part of the diminished complex rule induction performance commonly reported in AD, although results from the correlation analysis suggest involvement of executive control functions as well. Taken together, these findings suggest that care is needed when interpreting rule induction task performance in terms of executive function deficits in these patients.
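
    To make the manipulation concrete, the sketch below contrasts a simple stimulus–response rule set with a biconditional one, in which the correct response can only be derived by combining two stimulus dimensions. The stimuli and responses are hypothetical illustrations, not the materials used in the study.

        # Hypothetical illustration of the two rule types; not the study's actual task.
        # Simple rules: each stimulus feature maps directly to a response.
        simple_rules = {
            "red": "left",
            "blue": "right",
        }

        # Biconditional rules: the response depends on the combination of two
        # features (an XOR-like structure), so no single feature predicts it.
        biconditional_rules = {
            ("red", "square"): "left",
            ("red", "circle"): "right",
            ("blue", "square"): "right",
            ("blue", "circle"): "left",
        }

        def respond(colour, shape, use_biconditional):
            # A simple rule needs only one feature; a biconditional rule requires
            # integrating both, which loads executive control.
            if use_biconditional:
                return biconditional_rules[(colour, shape)]
            return simple_rules[colour]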

    Complexity Hierarchies and Higher-order Cons-free Term Rewriting

    Constructor rewriting systems are said to be cons-free if, roughly, constructor terms in the right-hand sides of rules are subterms of the left-hand sides; the computational intuition is that rules cannot build new data structures. In programming language research, cons-free languages have been used to characterize hierarchies of computational complexity classes; in term rewriting, cons-free first-order TRSs have been used to characterize the class PTIME. We investigate cons-free higher-order term rewriting systems, the complexity classes they characterize, and how these depend on the type order of the systems. We prove that, for every K ≥ 1, left-linear cons-free systems with type order K characterize E^KTIME if unrestricted evaluation is used (i.e., the system does not have a fixed reduction strategy). The main difference with prior work in implicit complexity is that (i) our results hold for non-orthogonal term rewriting systems with no assumptions on reduction strategy, (ii) we consequently obtain much larger classes for each type order (E^KTIME versus EXP^{K-1}TIME), and (iii) results for cons-free term rewriting systems have previously only been obtained for K = 1, and with additional syntactic restrictions besides cons-freeness and left-linearity. Our results are among the first implicit characterizations of the hierarchy E = E^1TIME ⊊ E^2TIME ⊊ ... Our work confirms prior results that having full non-determinism (via overlapping rules) does not directly allow for characterization of non-deterministic complexity classes like NE. We also show that non-determinism makes the classes characterized highly sensitive to minor syntactic changes like admitting product types or non-left-linear rules. Comment: extended version of a paper submitted to FSCD 2016. arXiv admin note: substantial text overlap with arXiv:1604.0893
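
    As a rough illustration of the cons-freeness condition stated above (constructor terms on the right-hand side must already occur on the left-hand side), the following sketch checks that condition for first-order rules. The term representation and the check itself are assumptions made here for illustration; they are not taken from the paper.

        # Illustrative sketch, not the paper's formal definition: a rule l -> r is
        # treated as cons-free when every constructor-rooted subterm of r is
        # already a subterm of l, i.e. the rule cannot build new data structures.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Term:
            symbol: str
            args: tuple = ()
            is_constructor: bool = False  # True for data constructors such as cons

        def subterms(t):
            yield t
            for a in t.args:
                yield from subterms(a)

        def cons_free(lhs, rhs):
            lhs_subs = set(subterms(lhs))
            return all(s in lhs_subs for s in subterms(rhs) if s.is_constructor)

        # tail(cons(x, xs)) -> xs only reuses a piece of its input: cons-free.
        x, xs = Term("x"), Term("xs")
        cons_x_xs = Term("cons", (x, xs), is_constructor=True)
        print(cons_free(Term("tail", (cons_x_xs,)), xs))  # True

        # dup(x) -> cons(x, x) constructs a new data structure: not cons-free.
        print(cons_free(Term("dup", (x,)), Term("cons", (x, x), is_constructor=True)))  # False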

    Pruning methods for rule induction

    Machine learning is a research area within computer science that is mainly concerned with discovering regularities in data. Rule induction is a powerful machine learning technique in which the target concept is represented as a set of rules. The attraction of rule induction is that rules are more transparent and easier to understand than other induction methods (e.g., regression methods or neural networks). Rule induction has been shown to outperform other learners on many problems. However, it does not handle exceptions and noisy data in training sets well, a problem that can be addressed by pruning. This thesis investigates whether preceding rule induction with instance reduction techniques can help to reduce the complexity of rule sets, by reducing the number of rules generated, without adversely affecting predictive accuracy. An empirical study is undertaken to investigate the application of three different rule classifiers to datasets that were previously reduced with promising instance-reduction methods. Furthermore, we propose a new instance reduction method based on Ant Colony Optimization (ACO). We evaluate the effectiveness of this instance reduction method for k-nearest-neighbour algorithms in terms of predictive accuracy and amount of reduction, and then compare it with other instance reduction methods. We show that pruning classification rules with instance-reduction methods leads to a statistically significant decrease in the number of generated rules, without adversely affecting performance. In addition, applying instance-reduction methods enhances predictive accuracy on some datasets. Moreover, the results provide evidence that: (1) our proposed instance reduction method based on ACO is competitive with the well-known k-NN algorithm; (2) the reduced sets computed by our method offer better classification accuracy than those obtained by the compared algorithms.
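
    To give a concrete sense of what "preceding rule induction with instance reduction" can look like, the sketch below applies a Wilson-style Edited Nearest Neighbour filter before handing the reduced training set to a rule learner. The k value and the use of scikit-learn's KNeighborsClassifier are assumptions for illustration, and the rule inducer (e.g., CN2) is only a placeholder, not the thesis implementation.

        # Minimal, naive sketch (assumed details): Edited Nearest Neighbour drops
        # instances that their k nearest neighbours misclassify, which tends to
        # remove noise and borderline points before rules are induced.
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def enn_filter(X, y, k=3):
            """Return the indices of instances kept by the ENN filter."""
            keep = []
            for i in range(len(X)):
                mask = np.arange(len(X)) != i           # leave instance i out
                knn = KNeighborsClassifier(n_neighbors=k).fit(X[mask], y[mask])
                if knn.predict(X[i:i + 1])[0] == y[i]:  # neighbours agree with its label
                    keep.append(i)
            return np.array(keep)

        # Usage: reduce first, then induce rules on the surviving instances.
        # kept = enn_filter(X_train, y_train)
        # rules = rule_inducer.fit(X_train[kept], y_train[kept])  # e.g. a CN2 implementation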

    On the relative proof complexity of deep inference via atomic flows

    We consider the proof complexity of the minimal complete fragment, KS, of standard deep inference systems for propositional logic. To examine the size of proofs we employ atomic flows, diagrams that trace structural changes through a proof but ignore logical information. As results, we obtain a polynomial simulation of versions of Resolution, along with some extensions. We also show that these systems, as well as bounded-depth Frege systems, cannot polynomially simulate KS, by giving polynomial-size proofs of certain variants of the propositional pigeonhole principle in KS. Comment: 27 pages, 2 figures; full version of a conference paper.
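
    For reference, the standard propositional pigeonhole principle (of which the paper proves certain variants in KS) is the tautology below, stating that if each of n+1 pigeons sits in one of n holes, then some hole holds two pigeons; the exact variants treated in the paper may differ from this formulation.

        \mathrm{PHP}^{n+1}_{n} \;:\; \bigwedge_{i=1}^{n+1} \bigvee_{j=1}^{n} p_{i,j}
        \;\rightarrow\; \bigvee_{j=1}^{n} \; \bigvee_{1 \le i < i' \le n+1} \bigl( p_{i,j} \wedge p_{i',j} \bigr)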

    Worst-case Optimal Query Answering for Greedy Sets of Existential Rules and Their Subclasses

    The need for an ontological layer on top of data, associated with advanced reasoning mechanisms able to exploit the semantics encoded in ontologies, has been acknowledged in both the database and knowledge representation communities. We focus in this paper on the ontological query answering problem, which consists of querying data while taking ontological knowledge into account. More specifically, we establish the complexity of the conjunctive query entailment problem for classes of existential rules (also called tuple-generating dependencies, Datalog+/- rules, or forall-exists-rules). Our contribution is twofold. First, we introduce the class of greedy bounded-treewidth sets (gbts) of rules, which covers guarded rules and their most well-known generalizations. We provide a generic algorithm for query entailment under gbts, which is worst-case optimal for combined complexity with or without bounded predicate arity, as well as for data complexity and query complexity. Second, we classify several gbts classes, whose complexity was unknown, with respect to combined complexity (with both unbounded and bounded predicate arity) and data complexity to obtain a comprehensive picture of the complexity of existential rule fragments that are based on diverse guardedness notions. Upper bounds are provided by showing that the proposed algorithm is optimal for all of them.
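
    As a concrete example of the rule language, an existential rule (tuple-generating dependency) has a conjunction of atoms in its body and may introduce new, existentially quantified individuals in its head. The rule below is a standard textbook-style example rather than one from the paper; it is guarded because the body atom Human(x) contains all of the body's variables.

        \forall x \, \bigl( \mathrm{Human}(x) \rightarrow \exists y \, ( \mathrm{hasParent}(x, y) \wedge \mathrm{Human}(y) ) \bigr)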

    Preceding rule induction with instance reduction methods

    A new prepruning technique for rule induction is presented which applies instance reduction before rule induction. An empirical evaluation records the predictive accuracy and size of rule sets generated from 24 datasets from the UCI Machine Learning Repository. Three instance reduction algorithms (Edited Nearest Neighbour, AllKnn, and DROP5) are compared. Each one is used to reduce the size of the training set prior to inducing a set of rules using Clark and Boswell's modification of CN2. A hybrid instance reduction algorithm (combining AllKnn and DROP5) is also tested. For most of the datasets, pruning the training set using ENN, AllKnn, or the hybrid significantly reduces the number of rules generated by CN2 without adversely affecting the predictive performance. The hybrid achieves the highest average predictive accuracy.
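
    A rough sketch of the AllKnn filter mentioned above, in the iterative formulation where the ENN filter is reapplied for growing neighbourhood sizes k = 1..K (implementations vary in the details). It reuses the enn_filter sketch given earlier; the hybrid would additionally pass the survivors to DROP5, which is omitted here for brevity.

        # Assumed formulation for illustration: repeatedly apply the ENN filter
        # with increasing k, keeping only instances that survive every pass.
        import numpy as np

        def allknn_filter(X, y, max_k=3):
            keep = np.arange(len(X))
            for k in range(1, max_k + 1):
                survivors = enn_filter(X[keep], y[keep], k=k)  # from the earlier sketch
                keep = keep[survivors]
            return keep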

    A Labelled Sequent Calculus for BBI: Proof Theory and Proof Search

    We present a labelled sequent calculus for Boolean BI, a classical variant of O'Hearn and Pym's logic of Bunched Implications. The calculus is simple, sound, complete, and enjoys cut-elimination. We show that all the structural rules in our proof system, including those that manipulate labels, can be localised around applications of certain logical rules, thereby localising the handling of these rules in proof search. Based on this, we demonstrate a free-variable calculus that deals with the structural rules lazily in a constraint system. Finally, a heuristic method for solving the constraints is proposed, together with some experimental results.
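
    For orientation, labels in such calculi stand for worlds of a (non-deterministic) monoidal Kripke model, and relational atoms record how worlds compose; the rules for the multiplicative connectives internalize the usual BBI forcing clauses, sketched below in standard form rather than in the paper's own notation.

        w \Vdash A * B \iff \exists w_1, w_2 .\; w \in w_1 \circ w_2 \text{ and } w_1 \Vdash A \text{ and } w_2 \Vdash B
        w \Vdash A \mathrel{-\!\!*} B \iff \forall w_1, w_2 .\; (w_2 \in w \circ w_1 \text{ and } w_1 \Vdash A) \text{ implies } w_2 \Vdash B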