
    A False Acceptance Error Controlling Method for Hyperspherical Classifiers

    Controlling false acceptance errors is of critical importance in many pattern recognition applications, including signature and speaker verification problems. Toward this goal, this paper presents two post-processing methods to improve the performance of hyperspherical classifiers in rejecting patterns from unknown classes. The first method uses a self-organizational approach to design minimum-radius hyperspheres, reducing the redundancy of the class region defined by the hyperspherical classifiers. The second method removes additional redundant class regions from the hyperspheres by using a clustering technique to generate a number of smaller hyperspheres. Simulation and experimental results demonstrate that, by removing redundant regions, these two post-processing methods can reduce the false acceptance error without significantly increasing the false rejection error.
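
    The rejection mechanism described above can be sketched in a few lines. This is a hypothetical minimal version (not the paper's algorithm): each class is covered by one hypersphere whose center is the class mean and whose radius reaches the farthest training point; patterns outside every sphere are rejected as unknown, and a shrink factor below 1 removes redundant outer regions, trading false acceptances for false rejections.

```python
import numpy as np

def fit_spheres(X, y):
    # One hypersphere per class: center = class mean,
    # radius = distance to the farthest training point of that class.
    spheres = {}
    for label in np.unique(y):
        pts = X[y == label]
        center = pts.mean(axis=0)
        radius = np.linalg.norm(pts - center, axis=1).max()
        spheres[label] = (center, radius)
    return spheres

def predict_with_reject(spheres, x, shrink=1.0):
    # shrink < 1 tightens every sphere, rejecting more borderline patterns.
    best, best_dist = None, np.inf
    for label, (center, radius) in spheres.items():
        d = np.linalg.norm(x - center)
        if d <= radius * shrink and d < best_dist:
            best, best_dist = label, d
    return best  # None means "rejected as unknown class"
```

    A pattern far from every training cluster then yields None instead of being force-assigned to the nearest class, which is exactly the false-acceptance control the paper targets.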

    Supervised Classification: Quite a Brief Overview

    The original problem of supervised classification considers the task of automatically assigning objects to their respective classes on the basis of numerical measurements derived from these objects. Classifiers are the tools that implement the actual functional mapping from these measurements---also called features or inputs---to the so-called class label---or output. The fields of pattern recognition and machine learning study ways of constructing such classifiers. The main idea behind supervised methods is that of learning from examples: given a number of example input-output relations, to what extent can the general mapping be learned that takes any new and unseen feature vector to its correct class? This chapter provides a basic introduction to the underlying ideas of how to approach a supervised classification problem. In addition, it provides an overview of some specific classification techniques, delves into the issues of object representation and classifier evaluation, and (very) briefly covers some variations on the basic supervised classification task that may also be of interest to the practitioner.
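
    The learning-from-examples idea can be made concrete with a toy classifier. The sketch below (hypothetical data, not from the chapter) uses a one-nearest-neighbour rule: the mapping from feature vector to class label is defined entirely by the stored input-output examples.

```python
import numpy as np

def nn_predict(X_train, y_train, x_new):
    # Assign x_new the label of its closest training example
    # (Euclidean distance); the training pairs ARE the learned mapping.
    dists = np.linalg.norm(X_train - x_new, axis=1)
    return y_train[np.argmin(dists)]
```

    Any new and unseen feature vector is thereby mapped to a class label, which is the functional mapping the abstract describes; how well it generalizes depends on the representativeness of the examples.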

    Assessing the Number of Components in Mixture Models: a Review.

    Despite the widespread application of finite mixture models, the decision of how many classes are required to adequately represent the data is, according to many authors, an important but unsolved issue. This work aims to review, describe and organize the available approaches designed to help select the adequate number of mixture components (including Monte Carlo test procedures, information criteria and classification-based criteria); we also provide some published simulation results about their relative performance, with the purpose of identifying the scenarios where each criterion is most effective.
    Keywords: finite mixture; number of mixture components; information criteria; simulation studies.
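
    The information-criterion approach surveyed above can be sketched briefly: fit mixtures with an increasing number of components and retain the number that minimizes the criterion. The example below is an illustrative sketch on synthetic data using scikit-learn's Gaussian mixture and BIC, one of the criteria in the family the review covers.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic data: two well-separated 1-D Gaussian components.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-5, 1, (200, 1)), rng.normal(5, 1, (200, 1))])

# Fit mixtures with k = 1..4 components and score each with BIC;
# the selected number of components is the BIC minimizer.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 5)}
best_k = min(bics, key=bics.get)
```

    On clearly separated data the criterion recovers the true component count; the review's point is that in harder scenarios (overlap, small samples, misspecification) the criteria disagree, which is why their relative performance matters.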

    A CASE STUDY ON SUPPORT VECTOR MACHINES VERSUS ARTIFICIAL NEURAL NETWORKS

    The capability of artificial neural networks for pattern recognition of real-world problems is well known. In recent years, the support vector machine has been advocated for its structural risk minimization, which leads to tolerance margins around decision boundaries. Structures and performances of these pattern classifiers depend on the feature dimension and training data size. The objective of this research is to compare these pattern recognition systems in a case study: classification of hypertensive and normotensive right ventricle (RV) shapes obtained from Magnetic Resonance Image (MRI) sequences. In this case, the feature dimension is reasonable, but the available training data set is small and the decision surface is highly nonlinear. For diagnosis of congenital heart defects, especially those associated with pressure and volume overload problems, a reliable pattern classifier for determining right ventricle function is needed. The RV's global and regional surface-to-volume ratios are assessed from an individual's MRI heart images and used as features for the pattern classifiers. We first considered two linear classification methods: the Fisher linear discriminant and the linear classifier trained by the Ho-Kashyap algorithm. Since the data are not linearly separable, artificial neural networks with back-propagation training and radial basis function networks were then considered, providing nonlinear decision surfaces. Thirdly, a support vector machine was trained, which gives tolerance margins on both sides of the decision surface. We found in this case study that back-propagation training of an artificial neural network depends heavily on the selection of initial weights, even when randomized. The support vector machine with radial basis function kernels is easily trained and provides decision tolerance margins, albeit small ones.
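
    The comparison set up above (RBF-kernel SVM versus back-propagation network on a highly nonlinear decision surface) can be reproduced in miniature. The sketch below uses synthetic concentric-circle data, not the RV shape features of the study, purely to illustrate the two classifier families being contrasted.

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# A nonlinearly separable toy problem standing in for the RV shape data.
X, y = make_circles(n_samples=200, noise=0.05, factor=0.4, random_state=0)

# RBF-kernel SVM: kernelized structural risk minimization with margins.
svm = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Small back-propagation network; its fit depends on the random
# initial weights (random_state), echoing the paper's observation.
mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                    random_state=0).fit(X, y)
```

    Re-running the MLP with different `random_state` values changes its solution, while the SVM's quadratic-programming training is deterministic given the data, which mirrors the sensitivity to initial weights reported in the case study.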

    Examining the segment retention problem for the “Group Satellite” case

    The purpose of this work is to determine, with experimental data, how well criteria designed to help select the adequate number of market segments perform in recovering small niche segments in mixture regressions of normal data. The simulation experiment compares several segment retention criteria, including information criteria and classification-based criteria. We also address the impact of distributional misspecification on the criteria's success rates. This study shows that Akaike's Information Criterion with penalty factors of 3 and 4, rather than the traditional value of 2, gives the best segment retention criteria for recovering small niche segments. Although these criteria were designed for the specific context of mixture models, they are rarely applied in the marketing literature.
    Keywords: information criteria; latent class segmentation.
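
    The modified criteria highlighted above generalize Akaike's criterion AIC = -2 log L + 2k by replacing the penalty factor 2 with c (AIC-3 uses c = 3, AIC-4 uses c = 4), so each extra segment's parameters are penalized more heavily. A minimal sketch of the family:

```python
def aic_c(log_likelihood, n_params, c=2):
    # Generalized Akaike criterion: c = 2 is classical AIC,
    # c = 3 and c = 4 are the AIC-3 / AIC-4 variants favored
    # for segment retention; lower values indicate a better model.
    return -2.0 * log_likelihood + c * n_params
```

    A larger c makes the criterion harder to satisfy for models with many segments, which is why AIC-3 and AIC-4 are less prone to overestimating the number of segments than classical AIC.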

    Functional Bipartite Ranking: a Wavelet-Based Filtering Approach

    It is the main goal of this article to address the bipartite ranking issue from the perspective of functional data analysis (FDA). Given a training set of independent realizations of a (possibly sampled) second-order random function with a (locally) smooth autocorrelation structure and to which a binary label is randomly assigned, the objective is to learn a scoring function s with optimal ROC curve. Based on linear/nonlinear wavelet-based approximations, it is shown how to select compact finite-dimensional representations of the input curves adaptively, in order to build accurate ranking rules, using recent advances in the ranking problem for multivariate data with binary feedback. Beyond theoretical considerations, the performance of the learning methods for functional bipartite ranking proposed in this paper is illustrated by numerical experiments.
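
    The filtering idea can be illustrated with the simplest wavelet, though the paper's construction is more general. In this hypothetical sketch, a single-level Haar transform splits a sampled curve into coarse approximation and detail coefficients, and keeping only the largest-magnitude coefficients yields the kind of compact finite-dimensional representation that a multivariate ranking rule can then consume.

```python
import numpy as np

def haar_level1(curve):
    # One level of the orthonormal Haar transform for an
    # even-length sampled curve: pairwise averages (approximation)
    # and pairwise differences (detail), both scaled by 1/sqrt(2).
    even, odd = curve[0::2], curve[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

def compress(curve, n_keep):
    # Nonlinear approximation: keep the n_keep largest-magnitude
    # coefficients and zero the rest.
    approx, detail = haar_level1(curve)
    coeffs = np.concatenate([approx, detail])
    idx = np.argsort(np.abs(coeffs))[::-1][:n_keep]
    out = np.zeros_like(coeffs)
    out[idx] = coeffs[idx]
    return out
```

    Because the transform is orthonormal, the coefficients carry the same energy as the curve, and for smooth curves most of that energy concentrates in few coefficients, which is what makes the compact representation accurate.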

    Security Evaluation of Support Vector Machines in Adversarial Environments

    Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering. However, if SVMs are to be incorporated in real-world security systems, they must be able to cope with attack patterns that can either mislead the learning algorithm (poisoning), evade detection (evasion), or gain information about their internal parameters (privacy breaches). The main contributions of this chapter are twofold. First, we introduce a formal general framework for the empirical evaluation of the security of machine-learning systems. Second, according to our framework, we demonstrate the feasibility of evasion, poisoning and privacy attacks against SVMs in real-world security problems. For each attack technique, we evaluate its impact and discuss whether (and how) it can be countered through an adversary-aware design of SVMs. Our experiments are easily reproducible thanks to open-source code that we have made available, together with all the employed datasets, on a public repository.
    Comment: 47 pages, 9 figures; chapter accepted into the book 'Support Vector Machine Applications'.
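
    The evasion attack mentioned above can be sketched in its simplest form. This is a hypothetical toy version on a linear SVM, not the chapter's attack: a correctly detected "malicious" sample is shifted against the weight vector w until the decision function turns negative, so the classifier labels it benign. Real attacks additionally bound the perturbation size; this sketch just steps far enough to cross the boundary.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic two-class data: class 0 ("benign") around (-2, -2),
# class 1 ("malicious") around (2, 2).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf = SVC(kernel="linear").fit(X, y)

x = np.array([2.0, 2.0])                       # detected as class 1
w = clf.coef_[0]
# For f(x) = w.x + b, moving by -s*w changes f by -s*||w||^2;
# choose s so that f(x_adv) = -0.1, just past the boundary.
step = (clf.decision_function([x])[0] + 0.1) / np.dot(w, w)
x_adv = x - step * w                           # now classified as benign
```

    The attack exploits knowledge of the trained weights; countering it, as the chapter discusses, requires designing the SVM with such adversarial manipulation in mind rather than assuming test data follow the training distribution.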

    Rapid mapping of digital integrated circuit logic gates via multi-spectral backside imaging

    Modern semiconductor integrated circuits are increasingly fabricated at untrusted third-party foundries. There now exist myriad security threats of malicious tampering at the hardware level, and hence a clear and pressing need for new tools that enable rapid, robust and low-cost validation of circuit layouts. Optical backside imaging offers an attractive platform, but its limited resolution and throughput cannot cope with the nanoscale sizes of modern circuitry and the need to image over a large area. We propose and demonstrate a multi-spectral imaging approach to overcome these obstacles by identifying key circuit elements on the basis of their spectral response. This obviates the need to directly image the nanoscale components that define them, thereby relaxing resolution and spatial sampling requirements by one and by two to four orders of magnitude, respectively. Our results directly address critical security needs in the integrated circuit supply chain and highlight the potential of spectroscopic techniques to address fundamental resolution obstacles caused by the need to image ever-shrinking feature sizes in semiconductor integrated circuits.