
    Rectifying classifier chains for multi-label classification

    Classifier chains have recently been proposed as an appealing method for tackling the multi-label classification task. In addition to several empirical studies showing its state-of-the-art performance, especially when used in its ensemble variant, there are also some first results on the theoretical properties of classifier chains. Continuing along this line, we analyze the influence of a potential pitfall of the learning process, namely the discrepancy between the feature spaces used in training and testing: while true class labels are used as supplementary attributes for training the binary models along the chain, the same models need to rely on estimations of these labels at prediction time. We elucidate under which circumstances the attribute noise thus created can affect the overall prediction performance. As a result of our findings, we propose two modifications of classifier chains that are meant to overcome this problem. Experimentally, we show that our variants are indeed able to produce better results in cases where the original chaining process is likely to fail.
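
    As a minimal illustration of the discrepancy described above, the sketch below trains each binary model in the chain on the true previous labels and then, at prediction time, feeds the same models its own estimates; the classifier choice (scikit-learn's LogisticRegression) and the chain order are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_chain(X, Y):
    """Train one binary model per label; model j sees the TRUE labels 0..j-1."""
    models = []
    for j in range(Y.shape[1]):
        X_aug = np.hstack([X, Y[:, :j]])      # true labels as supplementary attributes
        models.append(LogisticRegression(max_iter=1000).fit(X_aug, Y[:, j]))
    return models

def predict_chain(models, X):
    """At test time the same models must rely on PREDICTED previous labels,
    which introduces the attribute noise analyzed in the paper."""
    Y_hat = np.zeros((X.shape[0], len(models)))
    for j, m in enumerate(models):
        X_aug = np.hstack([X, Y_hat[:, :j]])  # estimated, possibly wrong, labels
        Y_hat[:, j] = m.predict(X_aug)
    return Y_hat
```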

    Filtrando atributos para mejorar procesos de aprendizaje

    IX Conference of the Spanish Association for Artificial Intelligence (Asociación Española para la Inteligencia Artificial), Gijón, Spain. Machine learning systems have traditionally been used to extract knowledge from sets of examples described by attributes. When the input information represents a real problem, it is generally not known which attributes influence its solution. In those cases, the only a priori option is to use all the available information. To avoid the problems this entails, attribute filtering can be applied prior to learning, keeping only the most relevant attributes, those that encode the solution to the problem. This paper describes a method that performs this selection. As will be shown, this technique improves the subsequent learning processes.
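
    As a hedged sketch of the general idea (the paper's concrete relevance criterion is not detailed in this abstract), the following filters attributes with a simple mutual-information ranking before any learner is trained; the scoring function and the cut-off k are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def filter_attributes(X, y, k):
    """Keep only the k attributes most relevant to the class and drop the rest."""
    scores = mutual_info_classif(X, y, random_state=0)   # relevance of each attribute
    keep = np.argsort(scores)[::-1][:k]                  # indices of the top-k attributes
    return X[:, keep], keep
```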

    A review on quantification learning

    The task of quantification consists in providing an aggregate estimation (e.g. the class distribution in a classification problem) for unseen test sets, applying a model that is trained using a training set with a different data distribution. Several real-world applications demand this kind of method, which does not require predictions for individual examples and focuses solely on obtaining accurate estimates at an aggregate level. During the past few years, several quantification methods have been proposed from different perspectives and with different goals. This paper presents a unified review of the main approaches with the aim of serving as an introductory tutorial for newcomers in the field.
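
    For readers new to the field, the sketch below shows two standard binary baselines that many of the reviewed methods build on, Classify & Count and its adjusted variant; this is an illustrative example, not a summary of every approach covered.

```python
import numpy as np

def classify_and_count(y_pred):
    """Naive quantifier: the fraction of test items predicted positive."""
    return float(np.mean(y_pred))

def adjusted_count(y_pred, tpr, fpr):
    """Correct Classify & Count with the classifier's true/false positive rates
    (estimated on training data): p = (cc - fpr) / (tpr - fpr), clipped to [0, 1]."""
    cc = classify_and_count(y_pred)
    return float(np.clip((cc - fpr) / (tpr - fpr), 0.0, 1.0))
```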

    Using A* for inference in probabilistic classifier chains

    IJCAI-15, Buenos Aires, Argentina, 25–31 July 2015. Probabilistic Classifier Chains (PCC) offer interesting properties for solving multi-label classification tasks due to their ability to estimate the joint probability of the labels. However, PCC presents the major drawback of a high computational cost in the inference process required to predict new samples. Lately, several approaches have been proposed to overcome this issue, including beam search and an ε-approximate algorithm based on uniform-cost search. Surprisingly, the obvious possibility of using heuristic search has not been considered yet. This paper studies this alternative and proposes an admissible heuristic that, applied in combination with the A* algorithm, guarantees not only optimal predictions in terms of subset 0/1 loss, but also that it always explores fewer nodes than the ε-approximate algorithm. In the experiments reported, the number of nodes explored by our method is less than twice the number of labels for all datasets analyzed. However, the difference in explored nodes must be large enough to compensate for the overhead of the heuristic in order to improve prediction time. Thus, our proposal may be a good choice for complex multi-label problems.
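
    A minimal sketch of A*-style inference over a probabilistic classifier chain is given below: nodes are label prefixes and edge costs are negative log conditional probabilities, so the first goal reached is the joint mode (the subset 0/1 optimal prediction). The heuristic here is the trivially admissible h = 0, which degenerates to uniform-cost search; the paper's contribution is a tighter admissible heuristic that is not reproduced here.

```python
import heapq, math

def a_star_pcc(cond_prob, n_labels):
    """cond_prob(prefix, j) -> P(y_j = 1 | x, previous labels = prefix), given by the PCC."""
    frontier = [(0.0, ())]                         # (g + h, label prefix), with h = 0 here
    while frontier:
        cost, prefix = heapq.heappop(frontier)
        if len(prefix) == n_labels:
            return prefix, math.exp(-cost)         # mode of the joint label distribution
        p1 = cond_prob(prefix, len(prefix))
        for y, p in ((1, p1), (0, 1.0 - p1)):
            if p > 0.0:                            # cost of extending the prefix with y
                heapq.heappush(frontier, (cost - math.log(p), prefix + (y,)))
```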

    Optimizing different loss functions in multilabel classifications

    Multilabel classification (ML) aims to assign a set of labels to an instance. This generalization of multiclass classification leads to a redefinition of the loss functions, and the learning tasks become harder. The objective of this paper is to gain insight into the relationship between optimization aims and some of the most popular performance measures: subset (or 0/1) loss, Hamming loss, and the example-based F-measure. To make a fair comparison, we implemented three ML learners that explicitly optimize each of these measures in a common framework. This can be done by considering a subset of labels as a structured output. Then, we use structured output support vector machines tailored to optimize a given loss function. The paper includes an exhaustive experimental comparison. The conclusion is that in most cases the optimization of the Hamming loss produces the best or competitive scores. This is a practical result, since the Hamming loss can be minimized using a set of binary classifiers, one for each label separately, and is therefore a scalable and fast method for learning ML tasks. Additionally, we observe that in noise-free learning tasks optimizing the subset loss is the best option, but the differences are very small. We have also noticed that the biggest room for improvement lies in optimizing an F-measure in noisy learning tasks.
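
    The three measures compared in the paper can be written out directly; in the sketch below Y_true and Y_pred are binary matrices of shape (n_examples, n_labels), and the convention F = 1 for an example with no true and no predicted labels is an assumption on our part.

```python
import numpy as np

def subset_01_loss(Y_true, Y_pred):
    """1 if any label of the example is wrong, averaged over examples."""
    return float(np.mean(np.any(Y_true != Y_pred, axis=1)))

def hamming_loss(Y_true, Y_pred):
    """Fraction of individual label assignments that are wrong."""
    return float(np.mean(Y_true != Y_pred))

def example_based_f1(Y_true, Y_pred):
    """F-measure computed per example and then averaged."""
    inter = np.sum(Y_true * Y_pred, axis=1)
    denom = np.sum(Y_true, axis=1) + np.sum(Y_pred, axis=1)
    f1 = np.where(denom > 0, 2.0 * inter / np.maximum(denom, 1), 1.0)
    return float(np.mean(f1))
```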

    Automatic plankton quantification using deep features

    The study of marine plankton data is vital for monitoring the health of the world’s oceans. In recent decades, automatic plankton recognition systems have proved useful for handling the vast amount of data collected by specially engineered in situ digital imaging systems. Initially, these systems were developed and put into operation using traditional automatic classification techniques fed with hand-designed local image descriptors (such as Fourier features), obtaining quite successful results. In the past few years, there have been many advances in the computer vision community with the rebirth of neural networks. In this paper, we show how descriptors computed using Convolutional Neural Networks (CNNs) trained with out-of-domain data can replace hand-designed descriptors in the task of estimating the prevalence of each plankton class in a water sample. To achieve this goal, we have designed a broad set of experiments that show how effective these deep features are when working in combination with state-of-the-art quantification algorithms.
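
    A hedged sketch of the deep-feature extraction step: an ImageNet-pretrained CNN (resnet18 via torchvision is our illustrative choice, not necessarily the architecture used in the paper) with its classification head removed yields descriptors that can then be fed to any quantification algorithm.

```python
import torch
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT)   # trained on out-of-domain data
model.fc = torch.nn.Identity()                       # drop the ImageNet classifier head
model.eval()

@torch.no_grad()
def deep_features(images):
    """images: float tensor of shape (n, 3, 224, 224), ImageNet-normalized.
    Returns (n, 512) deep descriptors to replace hand-designed ones."""
    return model(images)
```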

    Utilización de técnicas de Inteligencia Artificial en la clasificación de canales bovinas

    This paper presents an application of Artificial Intelligence techniques in the food industry. A methodology for representing the conformation of bovine carcasses has been developed, synthesizing expert knowledge by means of Machine Learning tools. The results obtained demonstrate the feasibility of using automatic classifiers, which are able to carry out this task effectively with a substantial reduction in the initial number of attributes. This work opens up a wide range of possibilities for applying Machine Learning in the food industry.

    Binary relevance efficacy for multilabel classification

    The goal of multilabel (ML) classification is to induce models able to tag objects with the labels that best describe them. The main baseline for ML classification is Binary Relevance (BR), which is commonly criticized in the literature because of its label independence assumption. Despite this fact, this paper discusses some interesting properties of BR, mainly that it produces optimal models for several ML loss functions. Additionally, we present an analytical study of ML benchmark datasets, pointing out some of their shortcomings. As a result, this paper proposes the use of synthetic datasets to better analyze the behavior of ML methods in domains with different characteristics. To support this claim, we perform experiments using synthetic data showing the competitive performance of BR with respect to a more complex method in difficult problems with many labels, a conclusion that was not stated by previous studies.
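
    As a reference for the discussion above, a minimal Binary Relevance sketch follows: one independent binary classifier per label, with the base learner (scikit-learn's LogisticRegression) being our illustrative choice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_br(X, Y):
    """One binary model per label, ignoring label dependencies."""
    return [LogisticRegression(max_iter=1000).fit(X, Y[:, j]) for j in range(Y.shape[1])]

def predict_br(models, X):
    """Predict each label independently and stack the results."""
    return np.column_stack([m.predict(X) for m in models])
```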

    Learning to assess from pair-wise comparisons

    In this paper we present an algorithm for learning a function able to assess objects. We assume that our teachers can provide a collection of pairwise comparisons but encounter certain difficulties in assigning a number to the qualities of the objects considered. This is a typical situation when dealing with food products, where it is very interesting to have repeatable, reliable mechanisms that are as objective as possible for evaluating quality, in order to provide markets with products of uniform quality. The same problem arises when trying to learn user preferences in an information retrieval system or when configuring a complex device. The algorithm is implemented using a growing variant of Kohonen’s Self-Organizing Maps (growing neural gas), and is tested with a variety of datasets to demonstrate the capabilities of our approach.
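
    The paper's learner is based on a growing-neural-gas variant of SOMs, which is not reproduced here; as a simpler illustration of the same learning setup, the sketch below turns pairwise comparisons (a judged better than b) into a linear assessment function via the standard difference-vector reduction, a deliberately different technique.

```python
import numpy as np
from sklearn.svm import LinearSVC

def learn_assessment(pairs):
    """pairs: list of (a, b) NumPy feature vectors where a was judged better than b."""
    diffs = np.array([a - b for a, b in pairs] + [b - a for a, b in pairs])
    labels = np.array([1] * len(pairs) + [0] * len(pairs))
    w = LinearSVC(C=1.0, max_iter=10000).fit(diffs, labels).coef_.ravel()
    return lambda x: float(np.dot(w, x))      # higher score means a better-assessed object
```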