83,698 research outputs found
Evaluation of the Performance of the Markov Blanket Bayesian Classifier Algorithm
The Markov Blanket Bayesian Classifier (MBBC) is a recently proposed algorithm for
the construction of probabilistic classifiers. This paper presents an empirical
comparison of the MBBC algorithm with three other Bayesian classifiers: Naive
Bayes, Tree-Augmented Naive Bayes and a general Bayesian network. All of these
are implemented using the K2 framework of Cooper and Herskovits. The
classifiers are compared in terms of their performance (using simple accuracy
measures and ROC curves) and speed, on a range of standard benchmark data sets.
It is concluded that MBBC is competitive in terms of speed and accuracy with
the other algorithms considered.
Comment: 9 pages. Technical Report No. NUIG-IT-011002, Department of
Information Technology, National University of Ireland, Galway (2002).
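The evaluation protocol described in this abstract (simple accuracy plus ROC analysis, with timing) is easy to reproduce. Below is a minimal Python sketch of that measurement loop; MBBC and the K2-based networks are not available off the shelf, so scikit-learn's GaussianNB and a decision tree stand in purely to illustrate the protocol, not the paper's actual models.

```python
import time
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for clf in (GaussianNB(), DecisionTreeClassifier(random_state=0)):
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)                                       # training speed
    fit_s = time.perf_counter() - t0
    acc = accuracy_score(y_te, clf.predict(X_te))             # simple accuracy
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])  # ROC summary
    print(f"{type(clf).__name__}: acc={acc:.3f}  auc={auc:.3f}  fit={fit_s:.4f}s")
```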
Statistical Classifier Design and Evaluation
This thesis is concerned with the design and evaluation of statistical classifiers. This problem has an optimal solution given a priori knowledge of the underlying probability distributions. Here, we examine the expected performance of parametric classifiers designed from a finite set of training samples and tested under various conditions. By investigating the statistical properties of the performance bias when tested on the true distributions, we have isolated the effects of the individual design components (i.e., the number of training samples, the dimensionality, and the parameters of the underlying distributions). These results have allowed us to establish a firm theoretical foundation for new design guidelines and to develop an empirical approach for estimating the asymptotic performance.

Investigation of the statistical properties of the performance bias when tested on finite sample sets has allowed us to pinpoint the effects of individual design samples, the relationship between the sizes of the design and test sets, and the effects of a dependency between these sets. This, in turn, leads to a better understanding of how a single training set can be used most efficiently. In addition, we have developed a theoretical framework for the analysis and comparison of various performance evaluation procedures.

Nonparametric and one-class classifiers are also considered. The reduced Parzen classifier, a nonparametric classifier which combines the error estimation capabilities of the Parzen density estimate with the computational feasibility of parametric classifiers, is presented. Also, the effect of the distance-space mapping in a one-class classifier is discussed through the approximation of the performance of a distance-ranking procedure.
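As a rough illustration of the finite-sample performance bias the abstract discusses (not the thesis's analytical treatment), the following Monte-Carlo sketch designs a parametric Gaussian classifier from n training samples per class and tests it on a large independent sample standing in for the true distributions. The gap between the mean error and the Bayes error is the bias, and it shrinks as n grows; all constants here are illustrative choices.

```python
import numpy as np
from scipy.stats import norm
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
delta, dim, trials = 2.0, 5, 100           # mean separation, dimensionality
bayes_err = norm.cdf(-delta / 2)           # Bayes error for this Gaussian pair

def sample(n_per_class):
    """Two unit-covariance Gaussian classes, means delta apart on axis 0."""
    X0 = rng.standard_normal((n_per_class, dim))
    X1 = rng.standard_normal((n_per_class, dim))
    X1[:, 0] += delta
    y = np.r_[np.zeros(n_per_class, int), np.ones(n_per_class, int)]
    return np.vstack([X0, X1]), y

X_test, y_test = sample(50_000)            # proxy for the true distributions
for n in (10, 50, 250, 1250):
    errs = [np.mean(LinearDiscriminantAnalysis()
                    .fit(*sample(n)).predict(X_test) != y_test)
            for _ in range(trials)]
    print(f"n={n:5d}/class  mean error={np.mean(errs):.4f}  "
          f"bias={np.mean(errs) - bayes_err:+.4f}")
```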
DIRA: Dynamic Domain Incremental Regularised Adaptation
Autonomous systems (AS) often use Deep Neural Network (DNN) classifiers to
allow them to operate in complex, high-dimensional, non-linear, and dynamically
changing environments. Due to the complexity of these environments, DNN
classifiers may output misclassifications during operation when they face
domains not identified during development. Removing a system from operation for
retraining becomes impractical as the number of such AS increases. To increase
AS reliability and overcome this limitation, DNN classifiers need to have the
ability to adapt during operation when faced with different operational domains
using a few samples (e.g. 100 samples). However, retraining DNNs on a few
samples is known to cause catastrophic forgetting. In this paper, we introduce
Dynamic Incremental Regularised Adaptation (DIRA), a framework for operational
domain adaptation of DNN classifiers using regularisation techniques to overcome
catastrophic forgetting and achieve adaptation when retraining using a few
samples of the target domain. Our approach shows improvements on different
image classification benchmarks aimed at evaluating robustness to distribution
shifts (e.g. CIFAR-10C/100C, ImageNet-C), and produces state-of-the-art
performance in comparison with other frameworks from the literature.
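DIRA's exact procedure is not reproduced here, but the idea of regularised few-shot retraining can be sketched. The Python code below assumes a PyTorch classifier `model` and a `target_loader` yielding roughly 100 labelled target-domain samples, and uses an L2-SP-style penalty (a stand-in for whatever regulariser DIRA actually employs, with an illustrative weight `lam`) that anchors the adapted weights to the source model to limit catastrophic forgetting.

```python
import copy
import torch
import torch.nn.functional as F

def adapt(model, target_loader, lam=100.0, lr=1e-3, epochs=5):
    """Fine-tune `model` on a few target-domain batches while an L2 penalty
    anchors every parameter to its source-domain value."""
    source = copy.deepcopy(model)              # frozen snapshot of source weights
    for p in source.parameters():
        p.requires_grad_(False)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in target_loader:             # e.g. ~100 labelled target samples
            opt.zero_grad()
            task_loss = F.cross_entropy(model(x), y)
            drift = sum(((p - q) ** 2).sum()   # distance from source weights
                        for p, q in zip(model.parameters(), source.parameters()))
            (task_loss + lam * drift).backward()
            opt.step()
    return model
```

A larger `lam` preserves more source-domain behaviour at the cost of slower adaptation; tuning it against a small held-out set of source samples is a reasonable way to trade the two off.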