
    Cost-sensitive Bayesian network learning using sampling

    A significant advance in recent years has been the development of cost-sensitive decision tree learners, recognising that real-world classification problems need to take account of the costs of misclassification rather than focus on accuracy alone. The literature contains well over 50 cost-sensitive decision tree induction algorithms, each with a varying performance profile. Obtaining good Bayesian networks can be challenging, and hence several algorithms have been proposed for learning their structure and parameters from data. However, most of these algorithms focus on learning Bayesian networks that aim to maximise the accuracy of classifications. An obvious question is therefore whether it is possible to develop cost-sensitive Bayesian networks and whether they would perform better than cost-sensitive decision trees at minimising classification cost. This paper explores this question by developing a new Bayesian network learning algorithm based on changing the data distribution to reflect the costs of misclassification. The proposed method is evaluated in experiments on over 20 data sets. The results show that this approach produces good results in comparison to more complex cost-sensitive decision tree algorithms.
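    The core idea of changing the data distribution to reflect misclassification costs can be illustrated with a generic cost-proportional oversampling sketch. This is not the paper's algorithm; the function name, cost dictionary, and toy data below are all hypothetical, and the sketch only shows the general technique of replicating examples in proportion to the cost of misclassifying their class so that a cost-insensitive learner trained on the resampled data implicitly minimises expected cost.

```python
import random
from collections import Counter

def cost_proportional_resample(rows, labels, misclass_cost, seed=0):
    """Replicate each example in proportion to the misclassification
    cost of its class (normalised by the minimum cost), so that class
    frequencies in the resampled data reflect costs."""
    rng = random.Random(seed)
    min_cost = min(misclass_cost.values())
    out_rows, out_labels = [], []
    for x, y in zip(rows, labels):
        weight = misclass_cost[y] / min_cost
        copies = int(weight)                    # guaranteed whole copies
        if rng.random() < weight - copies:      # fractional remainder
            copies += 1
        out_rows.extend([x] * copies)
        out_labels.extend([y] * copies)
    return out_rows, out_labels

# Hypothetical data: misclassifying "pos" is three times as costly.
rows = [[0], [1], [2], [3]]
labels = ["neg", "neg", "pos", "pos"]
new_rows, new_labels = cost_proportional_resample(
    rows, labels, {"neg": 1.0, "pos": 3.0})
```

After resampling, each "pos" example appears three times as often as each "neg" example, so an accuracy-maximising learner is steered toward the cost-minimising decision boundary.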

    Learning Logistic Circuits

    This paper proposes a new classification model called logistic circuits. On the MNIST and Fashion datasets, our learning algorithm outperforms neural networks that have an order of magnitude more parameters. Yet, logistic circuits have a distinct origin in symbolic AI, forming a discriminative counterpart to probabilistic-logical circuits such as ACs, SPNs, and PSDDs. We show that parameter learning for logistic circuits is convex optimization, and that a simple local search algorithm can induce strong model structures from data.
    Comment: Published in the Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19).
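    The convexity claim can be made concrete with a minimal sketch: once a circuit structure is fixed, it induces a feature map over inputs, and fitting the parameters reduces to logistic regression, whose loss is convex in the weights. The toy feature matrix and function below are assumptions for illustration, not the paper's feature construction.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(features, targets, lr=0.5, epochs=500):
    """Plain batch gradient descent on the logistic loss. The loss is
    convex in the weights, mirroring how logistic-circuit parameters
    can be fit once the circuit structure (the feature map) is fixed."""
    n_feats = len(features[0])
    w = [0.0] * n_feats
    for _ in range(epochs):
        grad = [0.0] * n_feats
        for x, y in zip(features, targets):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            for j in range(n_feats):
                grad[j] += (p - y) * x[j]
        for j in range(n_feats):
            w[j] -= lr * grad[j] / len(features)
    return w

# Hypothetical "circuit features": a bias column plus one binary feature.
X = [[1, 0], [1, 0], [1, 1], [1, 1]]
y = [0, 0, 1, 1]
w = fit_logistic(X, y)
```

Because the objective is convex, gradient descent converges to the global optimum regardless of initialisation, which is what makes parameter learning for a fixed circuit structure tractable.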

    On Lie induction and the exceptional series

    Lie bialgebras occur as the principal objects in the infinitesimalization of the theory of quantum groups, the semi-classical theory. Their relationship with the quantum theory has made available some new tools that we can apply to classical questions. In this paper, we study the simple complex Lie algebras using the double-bosonization construction of Majid. This construction expresses algebraically the induction process given by adding and removing nodes in Dynkin diagrams, which we call Lie induction. We first analyze the deletion of nodes, corresponding to the restriction of adjoint representations to subalgebras. This uses a natural grading associated to each node. We give explicit calculations of the module and algebra structures in the case of the deletion of a single node from the Dynkin diagram for a simple Lie (bi-)algebra. We next consider the inverse process, namely that of adding nodes, and give some necessary conditions for the simplicity of the induced algebra. Finally, we apply these to the exceptional series of simple Lie algebras, in the context of finding obstructions to the existence of finite-dimensional simple complex algebras of types E9, F5 and G3. In particular, our methods give a new point of view on why there cannot exist such an algebra of type E9.

    The problem of induction and Artificial Intelligence
