
    Minimax Classifier with Box Constraint on the Priors

    Learning a classifier in safety-critical applications like medicine raises several issues. First, the class proportions, also called priors, are generally imbalanced or uncertain. Experts are sometimes able to provide bounds on the priors, and taking this knowledge into account can improve the predictions. Second, it is necessary to handle any arbitrary loss function given by experts to evaluate the classification decisions. Finally, the dataset may contain both categorical and numeric features. In this paper, we propose a box-constrained minimax classifier which addresses all of these issues. To deal with both categorical and numeric features, many works have shown that discretizing the numeric attributes can lead to interesting results; we therefore assume that the numeric features have been discretized. To address the issue of class proportions, we compute the priors which maximize the empirical Bayes risk over a box-constrained probabilistic simplex. This constraint set is defined as the intersection of the simplex with a box provided by experts, which bounds each class proportion independently. Our approach finds a compromise between the empirical Bayes classifier and the standard minimax classifier, which may appear too pessimistic. The standard minimax classifier, which has not yet been studied in the case of discrete features, remains accessible with our approach. When considering only discrete features, we show that, for any arbitrary loss function, the empirical Bayes risk, viewed as a function of the priors, is a concave non-differentiable multivariate piecewise affine function. To compute the box-constrained least favorable priors, we derive a projected subgradient algorithm whose convergence is established. The performance of our algorithm is illustrated with experiments on the Framingham study database to predict the risk of Coronary Heart Disease (CHD).
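
    The concavity claim above can be made concrete with a short numerical sketch. The function below evaluates the empirical Bayes risk at a given prior vector for discretized features; the array names, shapes, and plug-in estimates of the class-conditional distributions are illustrative assumptions, not the paper's code.

```python
import numpy as np

def empirical_bayes_risk(pi, counts, loss):
    """Empirical Bayes risk as a function of the priors pi (discrete features).

    pi[k]        : prior probability of class k
    counts[k, x] : number of training samples of class k with discrete profile x
    loss[k, l]   : loss of deciding class l when the true class is k
    (names and shapes are illustrative assumptions).
    """
    n_k = counts.sum(axis=1)                             # samples per class
    p_x_given_k = counts / np.maximum(n_k[:, None], 1)   # plug-in class-conditionals
    joint = pi[:, None] * p_x_given_k                    # pi_k * p(x | k), affine in pi
    exp_loss = loss.T @ joint                            # expected loss of deciding l at profile x
    # The Bayes rule picks the smallest expected loss at each profile x, so the
    # risk is a sum of minima of affine functions of pi: concave, piecewise affine.
    return exp_loss.min(axis=0).sum()
```

    Because each term is a minimum of functions that are affine in the priors, the overall risk is concave and piecewise affine, which is what makes a projected subgradient method applicable.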

    Box-constrained optimization for minimax supervised learning

    In this paper, we present the optimization procedure for computing the discrete box-constrained minimax classifier introduced in [1, 2]. Our approach handles discrete or previously discretized features. A box-constrained region defines bounds on each class proportion independently. The box-constrained minimax classifier is obtained by computing the least favorable prior, which maximizes the minimum empirical risk of error over the box-constrained region. After studying the discrete empirical Bayes risk over the probabilistic simplex, we consider a projected subgradient algorithm which computes the prior maximizing this concave multivariate piecewise affine function over a polyhedral domain. The convergence of our algorithm is established.
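
    As a rough illustration of such a procedure, the sketch below performs projected subgradient ascent over the box-constrained simplex; the bisection-based projection, the diminishing step size, and the `risk_and_subgrad` interface are assumptions made here for illustration, not the implementation from [1, 2].

```python
import numpy as np

def project_box_simplex(v, lower, upper, tol=1e-10, max_iter=100):
    """Euclidean projection of v onto {p : lower <= p <= upper, sum(p) = 1},
    by bisection on the multiplier tau of the sum constraint.
    Assumes sum(lower) <= 1 <= sum(upper), so the feasible set is non-empty."""
    v, lower, upper = map(np.asarray, (v, lower, upper))
    lo, hi = np.min(v - upper), np.max(v - lower)   # sum(p) >= 1 at lo, <= 1 at hi
    tau = 0.5 * (lo + hi)
    for _ in range(max_iter):
        tau = 0.5 * (lo + hi)
        s = np.clip(v - tau, lower, upper).sum()
        if abs(s - 1.0) < tol:
            break
        lo, hi = (tau, hi) if s > 1.0 else (lo, tau)
    return np.clip(v - tau, lower, upper)

def least_favorable_prior(risk_and_subgrad, lower, upper, n_iter=500):
    """Projected subgradient ascent of a concave piecewise affine risk.
    risk_and_subgrad(pi) must return (value, subgradient) of the empirical
    Bayes risk at the prior vector pi (interface assumed for illustration)."""
    K = len(lower)
    pi = project_box_simplex(np.full(K, 1.0 / K), lower, upper)      # feasible start
    best_pi, best_val = pi, -np.inf
    for t in range(1, n_iter + 1):
        val, g = risk_and_subgrad(pi)
        if val > best_val:                                           # keep best iterate
            best_val, best_pi = val, pi
        pi = project_box_simplex(pi + g / np.sqrt(t), lower, upper)  # diminishing step
    return best_pi, best_val
```

    With a diminishing step size, the best iterate of a projected subgradient method on a concave objective over a compact convex set converges to the maximum value, which is consistent with the convergence statement above.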

    Categorical Change: Exploring the Effects of Concept Drift in Human Perceptual Category Learning

    Categorization is an essential survival skill that we engage in daily. A multitude of behavioral and neuropsychological evidence supports the existence of multiple learning systems involved in category learning. COmpetition between Verbal and Implicit Systems (COVIS) theory provides a neuropsychological basis for the existence of an explicit and an implicit learning system involved in the learning of category rules. COVIS provides a convincing account of asymptotic performance in human category learning. However, COVIS, like virtually all current theories of category learning, focuses solely on categories and decision environments that remain stationary over time. Our environment, by contrast, is dynamic, and we often need to adapt our decision making to account for environmental or categorical changes. In machine learning, this challenge is studied under the name of concept drift, which occurs any time a data distribution changes over time. This dissertation draws from two key characteristics of concept drift in machine learning known to impact the performance of learning models, and in so doing provides the first systematic exploration of concept drift (i.e., categorical change) in human perceptual category learning. Four experiments, each including one key change parameter (category base-rates, payoffs, or category structure [RB/II]), investigated the effect of rate of change (abrupt, gradual) and awareness of change (foretold or not) on decision criterion adaptation. Critically, Experiments 3 and 4 evaluated differences in categorical adaptation within explicit and implicit category learning tasks to determine whether rate and awareness of change moderated any learning system differences. The results of these experiments inform current category learning theory and provide information for machine learning models of decision support in non-stationary environments.