
    Gene-Based Multiclass Cancer Diagnosis with Class-Selective Rejections

    Supervised learning of microarray data has received much attention in recent years. Multiclass cancer diagnosis based on selected gene profiles is used as an adjunct to clinical diagnosis. However, an erroneous supervised diagnosis may hinder patient care, add expense, or confound a result. To avoid such misleading outcomes, a multiclass cancer diagnosis with class-selective rejection is proposed. It rejects some patients from one, several, or all classes in order to ensure higher reliability while reducing time and expense. Moreover, the classifier takes into account asymmetric penalties that depend on each class and on each wrong or partially correct decision. It is based on the ν-1-SVM coupled with its regularization path and minimizes a general loss function defined in the class-selective rejection scheme. State-of-the-art multiclass algorithms can be considered a particular case of the proposed algorithm in which the decisions are the classes themselves and the loss function is given by the Bayesian risk. Two experiments are carried out, in the Bayesian and the class-selective rejection frameworks. Five gene-selected datasets are used to assess the performance of the proposed method. Results are discussed, and accuracies are compared with those of the Naive Bayes, Nearest Neighbor, Linear Perceptron, Multilayer Perceptron, and Support Vector Machine classifiers.
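    The core idea of class-selective rejection can be sketched in a few lines. The following is a minimal illustration, not the paper's ν-1-SVM method: a sample may be retained by a *subset* of classes and rejected from the rest, so an ambiguous profile is deferred rather than forced into a single class. The per-class scores and the threshold here are illustrative assumptions.

    ```python
    # Minimal sketch of a class-selective rejection decision rule.
    # NOT the paper's nu-1-SVM approach; scores/threshold are made up.

    def class_selective_decision(scores, threshold=0.3):
        """Return the list of classes the sample is NOT rejected from.

        scores: per-class membership scores in [0, 1].
        A class is retained when its score exceeds `threshold`.
        A sample retained by several classes is deferred for further
        testing instead of receiving a confident but possibly wrong
        single-class diagnosis; an empty list means total rejection.
        """
        return [k for k, s in enumerate(scores) if s > threshold]

    # Ambiguous sample: kept in classes 0 and 2, rejected from class 1.
    print(class_selective_decision([0.55, 0.10, 0.48]))  # [0, 2]
    # Uninformative sample: rejected from all classes.
    print(class_selective_decision([0.10, 0.20]))        # []
    ```

    Asymmetric, class-dependent penalties would enter by replacing the single threshold with per-class thresholds derived from the loss function.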

    Pattern Recognition

    Pattern recognition is a very wide research field. It involves factors as diverse as sensors, feature extraction, pattern classification, decision fusion, and applications. The signals processed are commonly one-, two-, or three-dimensional; the processing is done in real time or takes hours and days; some systems look for one narrow object class, while others search huge databases for entries with at least a small degree of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms, and encompasses several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented work. The authors of these 25 works present and advocate recent achievements of their research in the field of pattern recognition.

    Cost-Sensitive Selective Classification and its Applications to Online Fraud Management

    Fraud is defined as the use of deception for illegal gain by hiding the true nature of an activity. Organizations lose around $3.7 trillion in revenue to financial crimes and fraud worldwide, and such crimes affect all levels of society significantly. In this dissertation, I focus on credit card fraud in online transactions. Every online transaction carries a fraud risk, and it is the merchant's liability to detect and stop fraudulent transactions. Merchants use various mechanisms to prevent and manage fraud, such as automated fraud detection systems and manual transaction reviews by expert fraud analysts. Most proposed solutions focus on fraud detection accuracy and ignore financial considerations; the highly effective manual review process is also overlooked. First, I propose the Profit Optimizing Neural Risk Manager (PONRM), a selective classifier that (a) constitutes an optimal collaboration between machine learning models and human expertise under industrial constraints, and (b) is cost- and profit-sensitive. I suggest directions for characterizing fraudulent behavior and assessing the risk of a transaction, and show that my framework outperforms cost-sensitive and cost-insensitive baselines on three real-world merchant datasets. While PONRM can work with many supervised learners and obtain convincing results, using probability outputs directly from the trained model itself can pose problems, especially in deep learning, since the softmax output is not a true uncertainty measure. This phenomenon, together with the wide and rapid adoption of deep learning by practitioners, has had unintended consequences in many situations, as in the infamous case of Google Photos' racist image recognition algorithm, and necessitates a quantified uncertainty for each prediction.
There have been recent efforts toward quantifying uncertainty in conventional deep learning methods (e.g., dropout as Bayesian approximation); however, their optimal use in decision making is often overlooked and understudied. Thus, I present a mixed-integer programming framework for selective classification, called MIPSC, that investigates and combines model uncertainty and predictive mean to identify optimal classification and rejection regions. I also extend this framework to cost-sensitive settings (MIPCSC) and apply it to the critical real-world problem of online fraud management, showing that my approach significantly outperforms industry-standard methods in real-world settings.
Dissertation/Thesis: Doctoral Dissertation, Computer Science, 201
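    The interaction between automated decisions and manual review described above can be sketched as a three-way, expected-cost decision rule. This is an illustrative toy, not PONRM or MIPSC themselves: given a model's fraud probability for a transaction, the merchant picks whichever of accept, manual review, or reject has the lowest expected cost. All cost figures and the cost model are made-up assumptions.

    ```python
    # Toy cost-sensitive selective classification for fraud routing.
    # NOT PONRM/MIPSC; cost parameters below are illustrative assumptions.

    def route_transaction(p_fraud, amount,
                          review_cost=20.0,      # assumed analyst cost per review
                          chargeback_mult=1.5):  # assumed loss multiplier on fraud
        """Choose 'accept', 'review', or 'reject' by minimum expected cost."""
        costs = {
            # Accepting a fraudulent transaction incurs the chargeback.
            "accept": p_fraud * chargeback_mult * amount,
            # Manual review costs analyst time but (here) catches the truth.
            "review": review_cost,
            # Rejecting a legitimate transaction forfeits its revenue.
            "reject": (1 - p_fraud) * amount,
        }
        return min(costs, key=costs.get)

    print(route_transaction(0.02, 50.0))   # low risk, small amount -> "accept"
    print(route_transaction(0.50, 500.0))  # ambiguous, large amount -> "review"
    print(route_transaction(0.98, 500.0))  # near-certain fraud      -> "reject"
    ```

    The point of the sketch is the selective aspect: the review region is widest exactly where the model is uncertain and the stakes are high, which is where human expertise pays for itself.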

    The Organizational Design of Intelligence Failures

    While detecting and preventing the September 11, 2001 plot would have been ideal, I argue that the more serious intelligence failures occurred after the attacks of September 11. The erroneous intelligence concerning the presence of WMD in Iraq permitted the Bush Administration to order the invasion of Iraq. Systematic underestimates of the budgetary costs and personnel requirements of the war meant that Congress did not give the matter the debate it warranted. Finally, incorrect (or incomplete) intelligence concerning the extent of the informal opposition to the U.S.-led forces resulted in inadequate numbers of allied forces being deployed and a protracted period of conflict and disruption in Iraq. These facts are all well known to anyone who reads newspapers. I make three arguments in this paper. First, the collection of intelligence data and its evaluation do not occur in a vacuum: there must always be an organizing theory that motivates the collection and evaluation of the data, and this theory is formulated at the highest levels of the decision-making process. Second, it is not possible to construct a truly neutral or objective (analytical) hierarchy. Third, it is impossible to separate the analytical evaluation of the data from the decision that will be based on that evaluation. As an inevitable consequence of these arguments, intelligence analysis and the resulting conclusions are driven by top-down considerations rather than bottom-up, as has been argued by some reviewers of recent intelligence failures. Key Words: stable coalitions, self-enforcing agreements, compliance, enforcement, public goods