222 research outputs found
Comprehensible credit scoring models using rule extraction from support vector machines.
In recent years, Support Vector Machines (SVMs) have been successfully applied to a wide range of applications. Their good performance is achieved by an implicit non-linear transformation of the original problem to a high-dimensional (possibly infinite-dimensional) feature space, in which a linear decision hyperplane is constructed that yields a nonlinear classifier in the input space. However, since the classifier is described by a complex mathematical function, it is rather incomprehensible for humans. This opacity prevents SVMs from being used in many real-life applications where both accuracy and comprehensibility are required, such as medical diagnosis and credit risk evaluation. To overcome this limitation, rules can be extracted from the trained SVM that are interpretable by humans and retain as much of the SVM's accuracy as possible. In this paper, we provide an overview of the recently proposed rule extraction techniques for SVMs and introduce two others taken from the artificial neural networks domain, namely Trepan and G-REX. The described techniques are compared using publicly available datasets, such as Ripley's synthetic dataset and the multi-class iris dataset. We also look at medical diagnosis and credit scoring, where comprehensibility is a key requirement and even a regulatory recommendation. Our experiments show that the SVM rule extraction techniques lose only a small percentage in performance compared to SVMs and therefore rank at the top of comprehensible classification techniques.
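The pedagogical style of rule extraction used by Trepan and G-REX can be illustrated with a surrogate-tree sketch: train an SVM, relabel the data with the SVM's own predictions, and fit a small decision tree to those predictions so its branches read as rules. The dataset, depth limit, and fidelity measure below are our own illustrative choices, not the paper's exact setup.

```python
# Pedagogical rule extraction sketch: a shallow decision tree is fitted
# to the trained SVM's predictions; its branches serve as human-readable
# rules. Dataset and parameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
svm = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Relabel the training data with the SVM's own predictions, so the tree
# learns to mimic the SVM rather than the raw labels.
y_svm = svm.predict(X)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_svm)

fidelity = (tree.predict(X) == y_svm).mean()  # agreement with the SVM
print(export_text(tree, feature_names=[f"x{i}" for i in range(4)]))
print(f"fidelity to SVM: {fidelity:.2f}")
```

The `fidelity` score (agreement between tree and SVM) is the usual yardstick for such techniques: the extracted rules are useful only insofar as they faithfully reproduce the black-box model.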
Matching Image Sets via Adaptive Multi Convex Hull
Traditional nearest points methods use all the samples in an image set to
construct a single convex or affine hull model for classification. However,
strong artificial features and noisy data may be generated from combinations of
training samples when significant intra-class variations and/or noise occur in
the image set. Existing multi-model approaches extract local models by
clustering each image set individually only once, with fixed clusters used for
matching with various image sets. This may not be optimal for discrimination,
as undesirable environmental conditions (e.g. illumination and pose variations)
may result in the two closest clusters representing different characteristics
of an object (e.g. frontal face being compared to non-frontal face). To address
the above problem, we propose a novel approach to enhance nearest points based
methods by integrating affine/convex hull classification with an adapted
multi-model approach. We first extract multiple local convex hulls from a query
image set via maximum margin clustering to diminish the artificial variations
and constrain the noise in local convex hulls. We then propose adaptive
reference clustering (ARC) to constrain the clustering of each gallery image
set by forcing the clusters to have resemblance to the clusters in the query
image set. By applying ARC, noisy clusters in the query set can be discarded.
Experiments on Honda, MoBo and ETH-80 datasets show that the proposed method
outperforms single model approaches and other recent techniques, such as Sparse
Approximated Nearest Points, Mutual Subspace Method and Manifold Discriminant
Analysis.
Comment: IEEE Winter Conference on Applications of Computer Vision (WACV), 201
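The core computation behind nearest-points set matching is finding the closest pair of points between the hulls of two image sets. For the affine-hull variant this reduces to a least-squares problem, sketched below with synthetic data (the matrices, set sizes, and function name are our own assumptions, not the paper's implementation):

```python
# Nearest points between the affine hulls of two image sets.
# Columns of X and Y are (vectorised) samples; data is synthetic.
import numpy as np

def affine_hull_distance(X, Y):
    """Distance between aff(X) and aff(Y); X, Y are d x n matrices."""
    mu_x, mu_y = X.mean(axis=1), Y.mean(axis=1)
    U = X - mu_x[:, None]          # directions spanning aff(X)
    V = Y - mu_y[:, None]
    # Minimise ||(mu_x + U a) - (mu_y + V b)|| via one least-squares solve.
    A = np.hstack([U, -V])
    z, *_ = np.linalg.lstsq(A, mu_y - mu_x, rcond=None)
    a, b = z[:U.shape[1]], z[U.shape[1]:]
    p = mu_x + U @ a               # nearest point on aff(X)
    q = mu_y + V @ b               # nearest point on aff(Y)
    return np.linalg.norm(p - q)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 5))
Y = rng.normal(size=(10, 5)) + 3.0   # shifted second set
print(affine_hull_distance(X, Y))
```

The convex-hull variant adds non-negativity and sum-to-one constraints on `a` and `b`, turning the solve into a small quadratic program; the multi-model approach of the paper applies such a distance per local hull rather than once per whole set.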
Clustering is difficult only when it does not matter
Numerous papers ask how difficult it is to cluster data. We suggest that the
more relevant and interesting question is how difficult it is to cluster data
sets {\em that can be clustered well}. More generally, despite the ubiquity and
the great importance of clustering, we still do not have a satisfactory
mathematical theory of clustering. In order to properly understand clustering,
it is clearly necessary to develop a solid theoretical basis for the area. For
example, from the perspective of computational complexity theory the clustering
problem seems very hard. Numerous papers introduce various criteria and
numerical measures to quantify the quality of a given clustering. The resulting
conclusions are pessimistic, since it is computationally difficult to find an
optimal clustering of a given data set, if we go by any of these popular
criteria. In contrast, the practitioners' perspective is much more optimistic.
Our explanation for this disparity of opinions is that complexity theory
concentrates on the worst case, whereas in reality we only care for data sets
that can be clustered well.
We introduce a theoretical framework of clustering in metric spaces that
revolves around a notion of "good clustering". We show that if a good
clustering exists, then in many cases it can be efficiently found. Our
conclusion is that contrary to popular belief, clustering should not be
considered a hard task.
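The paper's optimistic claim is easy to see in practice: when a good clustering exists, even a standard algorithm recovers it essentially perfectly. A minimal sketch (the dataset, separation, and algorithm choice are our own illustration, not the paper's framework):

```python
# When the data can be clustered well, clustering it is easy: k-means
# recovers three well-separated Gaussian blobs almost exactly.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

centers = [[-5, -5], [0, 5], [5, -5]]          # well-separated ground truth
X, y = make_blobs(n_samples=300, centers=centers,
                  cluster_std=0.5, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
ari = adjusted_rand_score(y, labels)
print(f"adjusted Rand index: {ari:.2f}")
```

Worst-case hardness results say nothing about such inputs; the hard instances for k-means-style objectives are precisely those with no meaningful cluster structure to find.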
Data analytics for powder feeding modelling on continuous secondary pharmaceutical manufacturing processes
In this Thesis, a data-driven procedure of investigating raw materials variability in an industrial database is presented, together with a multivariate statistical modelling approach for the first unit operation of continuous tableting lines. The main objective of the Thesis is to explore the capabilities of using pattern recognition techniques to identify and model hidden patterns of similarities in a powder materials dataset and determine the effect of materials variability on the feeder
Evolutionary Granular Kernel Machines
Kernel machines such as Support Vector Machines (SVMs) have been widely used in various data mining applications with good generalization properties. The performance of SVMs on nonlinear problems is highly affected by the kernel function, and the complexity of SVM training is mainly related to the size of the training dataset. How to design a powerful kernel, how to speed up SVM training, and how to train SVMs with millions of examples are still challenging problems in SVM research. For these important problems, powerful and flexible kernel trees called Evolutionary Granular Kernel Trees (EGKTs) are designed to incorporate prior domain knowledge. A Granular Kernel Tree Structure Evolving System (GKTSES) is developed to evolve the structures of Granular Kernel Trees (GKTs) without prior knowledge. A voting scheme is also proposed to reduce the prediction deviation of GKTSES. To speed up EGKT optimization, a master-slave parallel model is implemented. To help SVMs tackle large-scale data mining, a Minimum Enclosing Ball (MEB) based data reduction method is presented, and a new MEB-SVM algorithm is designed. All these kernel methods are designed based on Granular Computing (GrC). In general, Evolutionary Granular Kernel Machines (EGKMs) are investigated to optimize kernels effectively, speed up training greatly, and mine huge amounts of data efficiently.
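The MEB side of the abstract can be sketched with the classic Badoiu-Clarkson iteration for an approximate minimum enclosing ball, followed by a simple reduction that keeps only the points near the ball's surface as candidate support vectors. The reduction rule here is an illustrative assumption of ours, not the paper's exact MEB-SVM algorithm:

```python
# Approximate Minimum Enclosing Ball via the Badoiu-Clarkson iteration,
# then an MEB-style data reduction that keeps only outer points.
# The keep-outer-points rule is an illustrative assumption.
import numpy as np

def approx_meb_center(X, iters=100):
    """X is n x d; returns an approximate MEB centre."""
    c = X[0].astype(float)
    for k in range(1, iters + 1):
        far = X[np.argmax(np.linalg.norm(X - c, axis=1))]
        c += (far - c) / (k + 1)   # step toward the farthest point
    return c

def meb_reduce(X, keep=0.5):
    """Keep the fraction of points farthest from the MEB centre."""
    c = approx_meb_center(X)
    d = np.linalg.norm(X - c, axis=1)
    idx = np.argsort(d)[-int(len(X) * keep):]
    return X[idx]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Xr = meb_reduce(X, keep=0.25)
print(X.shape, "->", Xr.shape)   # 200 points reduced to 50
```

The appeal of MEB-based reduction for SVM training is that the (1+eps)-approximate ball is supported by a small core-set of boundary points, so most interior points can be discarded with limited impact on the learned decision function.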