Nuevos Modelos de Aprendizaje Híbrido para Clasificación y Ordenamiento Multi-Etiqueta (New Hybrid Learning Models for Multi-Label Classification and Ranking)

In the last decade, multi-label learning has become an important area of research
due to the large number of real-world problems that contain multi-label data. This doctoral thesis focuses on the multi-label learning paradigm. Two problems were studied: first, improving the performance of algorithms on complex multi-label data, and second, improving performance through unlabeled data.
The first problem was addressed by means of feature estimation methods. The effectiveness of the proposed feature estimation methods was evaluated by improving the performance of multi-label lazy algorithms: parametrizing the distance functions with a weight vector made it possible to retrieve examples whose label sets are relevant for classification. The effectiveness of the feature estimation methods in the feature selection task was also demonstrated. In addition, a lazy algorithm based on a data gravitation model was proposed; it offers a good trade-off between effectiveness and efficiency for multi-label lazy learning.
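As an illustration of the first idea, here is a minimal sketch (not the thesis's actual algorithm): a lazy multi-label classifier whose distance function is parametrized by a per-feature weight vector, such as a feature estimation method might produce. The function names and the voting threshold are illustrative assumptions.

```python
from math import sqrt

def weighted_distance(x, y, w):
    """Euclidean distance parametrized by per-feature weights w."""
    return sqrt(sum(wi * (xi - yi) ** 2 for xi, yi, wi in zip(x, y, w)))

def knn_multilabel_predict(X_train, Y_train, x_query, w, k=3, threshold=0.5):
    """Predict a label set for x_query by voting over the k nearest
    neighbors under the weighted distance: a label is predicted when it
    appears in at least `threshold` of the neighbors' label sets."""
    nearest = sorted(range(len(X_train)),
                     key=lambda i: weighted_distance(X_train[i], x_query, w))[:k]
    n_labels = len(Y_train[0])
    votes = [sum(Y_train[i][l] for i in nearest) / k for l in range(n_labels)]
    return [1 if v >= threshold else 0 for v in votes]
```

With uniform weights this reduces to plain lazy multi-label classification; up-weighting discriminative features changes which neighbors, and hence which label sets, are retrieved.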
The second problem was addressed by means of active learning techniques, which reduce the cost of the data labeling process while training an accurate model. Two active learning strategies were proposed. The first strategy solves the multi-label active learning problem effectively and efficiently by defining and combining two measures that represent the utility of an unlabeled example. The second strategy addresses the batch-mode active learning problem, where the aim is to select a batch of unlabeled examples that are informative while keeping information redundancy minimal. Batch-mode active learning was formulated as a multi-objective optimization problem over three measures, which was solved with an evolutionary algorithm.
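The batch-mode idea can be sketched as follows. This greedy sketch is a deliberate simplification: it collapses the trade-off between example utility and batch redundancy into a single weighted score, whereas the thesis formulates a multi-objective problem over three measures and solves it with an evolutionary algorithm. The `alpha` weight and the inputs (a utility score per example and a pairwise similarity matrix) are illustrative assumptions.

```python
def select_batch(utility, similarity, batch_size, alpha=0.5):
    """Greedily build a batch of unlabeled examples that score high on
    utility while penalizing similarity to examples already in the batch."""
    pool = list(range(len(utility)))
    batch = []
    while pool and len(batch) < batch_size:
        def score(i):
            # Redundancy = highest similarity to any example already picked.
            red = max(similarity[i][j] for j in batch) if batch else 0.0
            return alpha * utility[i] - (1 - alpha) * red
        best = max(pool, key=score)
        batch.append(best)
        pool.remove(best)
    return batch
```

Note how a highly useful but near-duplicate example loses to a less useful but diverse one, which is exactly the tension the multi-objective formulation captures.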
This thesis also led to the creation of a computational framework for implementing active learning methods and supporting experimentation in the active learning area. In addition, a methodology based on non-parametric tests was proposed that allows a more adequate evaluation of active learning performance. All the proposed methods were evaluated in extensive and adequate experimental studies: several multi-label datasets from different domains were used, and the methods were compared against the most significant state-of-the-art algorithms. The results were validated using non-parametric statistical tests. The evidence showed the effectiveness of the proposed methods, proving the hypotheses formulated at the beginning of this thesis.
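The evaluation idea, comparing strategies with non-parametric tests on paired results, can be illustrated with a simple exact sign test over per-iteration scores of two strategies. This is only one plausible instance of the approach; the thesis's actual methodology may use other standard non-parametric tests.

```python
from math import comb

def sign_test(curve_a, curve_b):
    """Two-sided exact sign test on paired scores (e.g. accuracy of two
    active learning strategies at each query iteration). Returns the
    p-value under the null hypothesis that wins and losses are equally
    likely; ties are discarded, as is standard for the sign test."""
    wins = sum(a > b for a, b in zip(curve_a, curve_b))
    losses = sum(a < b for a, b in zip(curve_a, curve_b))
    n = wins + losses
    if n == 0:
        return 1.0
    k = min(wins, losses)
    # Exact two-sided binomial tail probability with p = 0.5.
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, p)
```

Being distribution-free, such tests avoid the normality assumptions that per-iteration accuracy scores rarely satisfy.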
JCLAL: A Java framework for active learning
Active learning has become an important area of research owing to the increasing number of real-world problems which contain both labelled and unlabelled examples. JCLAL is a Java Class Library for Active Learning whose architecture follows strong principles of object-oriented design. It is easy to use, and it allows developers to adapt, modify and extend the framework according to their needs. The library offers a variety of active learning methods that have been proposed in the literature. The software is available under the GPL license.
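For readers unfamiliar with the paradigm JCLAL implements, the core pool-based loop looks roughly like this. This is a generic Python sketch, not JCLAL's actual Java API; all names here are illustrative.

```python
def active_learning_loop(pool, oracle, train, query_strategy, budget):
    """Generic pool-based active learning: repeatedly pick the most
    useful unlabeled example, ask the oracle for its label, and retrain.
    `train` builds a model from labeled pairs; `query_strategy` scores
    the pool (the model is None on the first iteration)."""
    labeled = []
    model = None
    for _ in range(budget):
        if not pool:
            break
        x = query_strategy(model, pool)   # select the next query
        pool.remove(x)
        labeled.append((x, oracle(x)))    # annotation step
        model = train(labeled)            # retrain on all labels so far
    return model, labeled
```

Frameworks like JCLAL make each of these roles (oracle, query strategy, stopping criterion) a pluggable component, which is what makes them easy to extend.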
Cost-Quality Trade-Offs in One-Class Active Learning
Active learning is a paradigm to involve users in a machine learning process. The core idea of active learning is to ask a user to annotate a specific observation to improve the classification performance. One important application of active learning is detecting outliers, i.e., unusual observations that deviate from the regular ones in a data set. Applying active learning for outlier detection in practice requires designing a system that consists of several components: the data, the classifier that discerns between inliers and outliers, the query strategy that selects the observations for feedback collection, and an oracle, e.g., the human expert that annotates the queries. Each of these components and their interplay influence the classification quality. Naturally, there are cost budgets limiting certain parts of the system, e.g., the number of queries one can ask a human. Thus, to configure efficient active learning systems, one must decide on several trade-offs between costs and quality. The existing literature on active learning systems provides neither an overview nor a formal description of these cost-quality trade-offs. All this makes the configuration of efficient active learning systems difficult in practice.
In this thesis, we study different cost-quality trade-offs that are pivotal for configuring an active learning system for outlier detection. We first provide an overview of the costs of an active learning system. Then, we analyze three important trade-offs and propose ways to model and quantify them. In our first contribution, we study how one can reduce classifier training costs by training only on a sample of the data set. We formalize the sampling trade-off between classifier training costs and resulting quality as an optimization problem and propose an efficient algorithm to solve it. Compared to the existing sampling methods in the literature, our approach guarantees that a classifier trained on our sample makes the same predictions as if trained on the complete data set. We can therefore reduce the classifier training costs without a loss of classification quality. In our second contribution, we investigate how selecting multiple queries allows trading off costs against quality. So-called batch queries reduce classifier training costs because the system only updates the classifier once per batch. But the annotation of a batch may give redundant information, which reduces the achievable quality with a fixed query budget. We are the first to consider batch queries for outlier detection, a generalization of the more common sequential querying setting. We formalize batch active learning and propose several strategies to construct batches by modeling the expected utility of a batch. In our third contribution, we propose query synthesis for outlier detection. Query synthesis allows queries to be generated artificially at any point in the data space, without being restricted to a pool of query candidates. We propose a framework to efficiently synthesize queries and develop a novel query strategy to improve the generalization of a classifier beyond a biased data set with active learning.
For all contributions, we derive recommendations for the cost-quality trade-offs from formal investigations and empirical studies, to facilitate the configuration of robust and efficient active learning systems for outlier detection.
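A baseline query strategy of the kind such a system might use can be sketched as follows: query the observations whose one-class scores lie closest to the decision boundary, i.e., those the classifier is least certain about. This is a common baseline, not the thesis's proposed strategies, which model batch utility and redundancy more carefully.

```python
def uncertainty_batch(scores, batch_size):
    """Given signed outlier scores (negative = outlier side, positive =
    inlier side, 0 = decision boundary), return the indices of the
    `batch_size` observations closest to the boundary. Annotating these
    is expected to move the boundary the most per query."""
    order = sorted(range(len(scores)), key=lambda i: abs(scores[i]))
    return order[:batch_size]
```

With `batch_size > 1` this directly exhibits the cost-quality trade-off the thesis analyzes: fewer classifier updates, but possibly redundant queries within a batch.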
Combining active learning suggestions
We study the problem of combining active learning suggestions to identify informative training examples by empirically comparing methods on benchmark datasets. Many active learning heuristics for classification problems have been proposed to help us pick which instance to annotate next. But what is the optimal heuristic for a particular source of data? Motivated by the success of methods that combine predictors, we combine active learners with bandit algorithms and rank aggregation methods. We demonstrate that a combination of active learners outperforms passive learning on large benchmark datasets and removes the need to pick a particular active learner a priori. We discuss challenges to finding good rewards for bandit approaches and show that rank aggregation performs well.

The research was supported by the Data to Decisions Cooperative Research Centre, whose activities are funded by the Australian Commonwealth Government's Cooperative Research Centres Programme. This research was supported by the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020. The SDSS dataset was extracted from Data Release 12 of SDSS-III. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
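The bandit combination described above can be sketched with a simple epsilon-greedy rule over the candidate heuristics. This is an illustrative baseline, not necessarily the combiner evaluated in the paper, and the reward definition is left abstract because, as the abstract notes, finding good rewards is itself a challenge.

```python
import random

def epsilon_greedy_heuristic(rewards, counts, epsilon=0.1, rng=random):
    """Choose which active learning heuristic issues the next query:
    usually the one with the best average reward observed so far,
    occasionally a random one to keep exploring. Heuristics that have
    never been tried (count 0) are selected first."""
    if rng.random() < epsilon:
        return rng.randrange(len(rewards))
    averages = [r / c if c else float("inf") for r, c in zip(rewards, counts)]
    return max(range(len(averages)), key=averages.__getitem__)
```

After each query, the chosen heuristic's cumulative reward and count are updated, so the combiner gradually concentrates on whichever heuristic suits the data source, removing the need to pick one a priori.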