Revisiting Data Complexity Metrics Based on Morphology for Overlap and Imbalance: Snapshot, New Overlap Number of Balls Metrics and Singular Problems Prospect
Data Science and Machine Learning have become fundamental assets for companies and research institutions alike. As one of their fields, supervised classification allows the class of new samples to be predicted by learning from given training data. However, some properties can make datasets problematic to classify.
In order to evaluate a dataset a priori, data complexity metrics have been used extensively. They provide information regarding different intrinsic characteristics of the data, which serves to evaluate classifier compatibility and to suggest a course of action that improves performance. However, most complexity metrics focus on just one characteristic of the data, which can be insufficient to properly relate the dataset to classifier performance. In fact, class overlap, a feature that is very detrimental to the classification process (especially when imbalance among class labels is also present), is hard to assess.
This research work focuses on revisiting complexity metrics based on data morphology. Given their nature, the premise is that they provide both good estimates of class overlap and strong correlations with classification performance. For that purpose, a novel family of metrics has been developed. Being based on the coverage of classes by balls, they are named the Overlap Number of Balls metrics. Finally, some prospects for adapting this family of metrics to singular (more complex) problems are discussed.
Comment: 23 pages, 9 figures, preprint
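The abstract does not spell out how the Overlap Number of Balls is computed, so the following Python sketch only illustrates the general idea under simple assumptions: each class is covered greedily with balls that contain no points of other classes, and the number of balls needed relative to the dataset size is reported (more balls per sample suggests more overlap). The function name and the greedy covering rule are illustrative choices, not the authors' definition.

```python
import numpy as np

def overlap_number_of_balls(X, y):
    """Greedy sketch: cover each class with pure balls; return balls/samples.

    A value near 1 means almost every point needs its own ball (heavy
    overlap); a value near 0 means a few large pure balls cover the data.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    covered = np.zeros(len(y), dtype=bool)
    n_balls = 0
    for cls in np.unique(y):
        idx = np.where(y == cls)[0]
        enemy = np.where(y != cls)[0]
        while not covered[idx].all():
            center = idx[~covered[idx]][0]
            # Largest radius that keeps the ball free of other-class points.
            radius = dist[center, enemy].min() if len(enemy) else np.inf
            covered[idx[dist[center, idx] < radius]] = True
            covered[center] = True  # always cover at least the center itself
            n_balls += 1
    return n_balls / len(y)

# Example: two Gaussian blobs; more separation -> fewer balls needed.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(overlap_number_of_balls(X, y))
```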
Machine Learning Predicts Reach-Scale Channel Types From Coarse-Scale Geospatial Data in a Large River Basin
Hydrologic and geomorphic classifications have gained traction in response to the increasing need for basin-wide water resources management. Regardless of the selected classification scheme, an open scientific challenge is how to extend information from limited field sites to classify tens of thousands to millions of channel reaches across a basin. To address this spatial scaling challenge, this study leverages machine learning to predict reach-scale geomorphic channel types using publicly available geospatial data. A bottom-up machine learning approach selects the most accurate and stable model among ∼20,000 combinations of 287 coarse geospatial predictors, preprocessing methods, and algorithms in a three-tiered framework to (i) define a tractable problem and reduce predictor noise, (ii) assess model performance in statistical learning, and (iii) assess model performance in prediction. This study also addresses key issues related to the design, interpretation, and diagnosis of machine learning models in hydrologic sciences. In an application to the Sacramento River basin (California, USA), the developed framework selects a Random Forest model to predict 10 channel types previously determined from 290 field surveys over 108,943 two-hundred-meter reaches. Performance in statistical learning is reasonable, with a 61% median cross-validation accuracy, a sixfold increase over the 10% accuracy of the baseline random model, and the predictions coherently capture the large-scale geomorphic organization of the landscape. Interestingly, in the study area, the persistent roughness of the topography partially controls channel types, and the variation in the entropy-based predictive performance is explained by imperfect training information and scale mismatch between labels and predictors.
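The full framework evaluates on the order of 20,000 combinations; the sketch below is only a minimal, hedged illustration of the underlying model-selection idea with scikit-learn: score a few preprocessing/algorithm combinations by cross-validation and keep the one that is both accurate and stable. The synthetic data and the accuracy-minus-variance ranking rule are stand-ins, not the study's actual predictors or selection criterion.

```python
from itertools import product

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, QuantileTransformer

# Stand-in for reach-scale labels (10 channel types) and coarse geospatial predictors.
X, y = make_classification(n_samples=290, n_features=30, n_informative=12,
                           n_classes=10, n_clusters_per_class=1, random_state=0)

scalers = {"standard": StandardScaler(), "quantile": QuantileTransformer(n_quantiles=50)}
models = {"rf": RandomForestClassifier(n_estimators=300, random_state=0),
          "logreg": LogisticRegression(max_iter=2000)}

results = []
for (s_name, scaler), (m_name, model) in product(scalers.items(), models.items()):
    scores = cross_val_score(make_pipeline(scaler, model), X, y, cv=5)
    # "Accurate and stable": rank by mean accuracy, penalising high variance.
    results.append((scores.mean() - scores.std(), s_name, m_name, scores.mean()))

best = max(results)
print(f"best combo: {best[1]} + {best[2]}, mean CV accuracy {best[3]:.2f}")
```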
Extracting Features from Textual Data in Class Imbalance Problems
We address class imbalance problems. These are classification problems where the target variable is binary, and one class dominates over the other. A central objective in these problems is to identify features that yield models with high precision/recall values, the standard yardsticks for assessing such models. Our features are extracted from the textual data inherent in such problems. We use n-gram frequencies as features and introduce a discrepancy score that measures the efficacy of an n-gram in highlighting the minority class. The frequency counts of n-grams with the highest discrepancy scores are used as features to construct models with the desired metrics. According to the best practices followed by the services industry, many customer support tickets will get audited and tagged as contract-compliant, whereas some will be tagged as over-delivered. Based on in-field data, we use a random forest classifier and perform a randomized grid search over the model hyperparameters. The model scoring is performed using a scoring function. Our objective is to minimize the follow-up costs by optimizing the recall score while maintaining a base-level precision score. The final optimized model achieves an acceptable recall score while staying above the target precision. We validate our feature selection method by comparing our model with one constructed using frequency counts of n-grams chosen randomly. We propose extensions of our feature extraction method to general classification (binary and multi-class) and regression problems. The discrepancy score is one measure of dissimilarity of distributions, and other (more general) measures that we formulate could potentially yield more effective models.
Aravamuthan, S.; Jogalekar, P.; Lee, J. (2022). Extracting Features from Textual Data in Class Imbalance Problems. Journal of Computer-Assisted Linguistic Research. 6:42-58. https://doi.org/10.4995/jclr.2022.182004258
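The paper's exact discrepancy score is not reproduced in the abstract; the sketch below assumes a simple stand-in (the absolute difference in per-class document rates of an n-gram) to illustrate how n-grams that highlight the minority class can be ranked and kept as features. The example texts and labels are invented.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

def top_discrepancy_ngrams(texts, labels, minority_label, k=20, ngram_range=(1, 2)):
    """Rank n-grams by how differently they occur in the minority vs majority class."""
    vec = CountVectorizer(ngram_range=ngram_range, binary=True)
    X = vec.fit_transform(texts).toarray()
    labels = np.asarray(labels)
    minority = labels == minority_label
    # Fraction of documents in each class that contain the n-gram.
    rate_min = X[minority].mean(axis=0)
    rate_maj = X[~minority].mean(axis=0)
    discrepancy = np.abs(rate_min - rate_maj)   # illustrative stand-in score
    top = np.argsort(discrepancy)[::-1][:k]
    return [vec.get_feature_names_out()[i] for i in top]

texts = ["delivered extra onsite visit", "contract terms met", "standard maintenance done",
         "extra parts shipped free", "routine check completed", "went beyond contract scope"]
labels = [1, 0, 0, 1, 0, 1]  # 1 = over-delivered (the minority class in real data)
print(top_discrepancy_ngrams(texts, labels, minority_label=1, k=5))
```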
Introspective knowledge acquisition for case retrieval networks in textual case base reasoning.
Textual Case Based Reasoning (TCBR) aims at effective reuse of information contained in unstructured documents. The key advantage of TCBR over traditional Information Retrieval systems is its ability to incorporate domain-specific knowledge to facilitate case comparison beyond simple keyword matching. However, substantial human intervention is needed to acquire and transform this knowledge into a form suitable for a TCBR system. In this research, we present automated approaches that exploit statistical properties of document collections to alleviate this knowledge acquisition bottleneck. We focus on two important knowledge containers: relevance knowledge, which shows the relatedness of features to cases, and similarity knowledge, which captures the relatedness of features to each other. The terminology is derived from the Case Retrieval Network (CRN) retrieval architecture in TCBR, which is used as the underlying formalism in this thesis, applied to text classification. Concepts generated by Latent Semantic Indexing (LSI) are a useful resource for relevance knowledge acquisition for CRNs. This thesis introduces a supervised LSI technique called sprinkling that exploits class knowledge to bias LSI's concept generation. An extension of this idea, called Adaptive Sprinkling (AS), has been proposed to handle inter-class relationships in complex domains like hierarchical (e.g. Yahoo directory) and ordinal (e.g. product ranking) classification tasks. Experimental evaluation results show the superiority of CRNs created with sprinkling and AS, not only over LSI on its own, but also over state-of-the-art classifiers like Support Vector Machines (SVM). Current statistical approaches based on feature co-occurrences can be utilized to mine similarity knowledge for CRNs. However, related words often do not co-occur in the same document, though they co-occur with similar words. We introduce an algorithm to efficiently mine such indirect associations, called higher-order associations. Empirical results show that CRNs created with the acquired similarity knowledge outperform both LSI and SVM. Incorporating acquired knowledge into the CRN transforms it into a densely connected network. While improving retrieval effectiveness, this has the unintended effect of slowing down retrieval. We propose a novel retrieval formalism called the Fast Case Retrieval Network (FCRN), which eliminates redundant run-time computations to improve retrieval speed. Experimental results show FCRN's ability to scale up over high-dimensional textual casebases. Finally, we investigate novel ways of visualizing and estimating the complexity of textual casebases that can help explain performance differences across casebases. Visualization provides a qualitative insight into the casebase, while complexity is a quantitative measure that characterizes the classification or retrieval hardness intrinsic to a dataset. We study correlations of experimental results from the proposed approaches against complexity measures over diverse casebases.
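As a rough illustration of the sprinkling idea, the sketch below assumes it amounts to appending artificial class-label terms to training documents before LSI, so that the generated concepts are biased towards class structure; the CRN itself, the term weighting, and the number of sprinkled terms are all simplified here.

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

train_docs = ["refund not received for order", "parcel arrived damaged",
              "how do I reset my password", "account locked after login attempts"]
train_labels = ["shipping", "shipping", "account", "account"]

# Sprinkle: append artificial class-specific tokens to each training document.
sprinkled = [f"{doc} CLASSTOKEN_{lab} CLASSTOKEN_{lab}"
             for doc, lab in zip(train_docs, train_labels)]

lsi = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2, random_state=0))
concepts = lsi.fit_transform(sprinkled)          # class-biased concept space
print(concepts.round(2))

# New (unsprinkled) cases are projected into the same concept space for retrieval.
print(lsi.transform(["password reset link expired"]).round(2))
```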
Simple but Not Simplistic: Reducing the Complexity of Machine Learning Methods
Programa Oficial de Doutoramento en Computación. 5009V01
[Abstract]
The advent of Big Data and the explosion of the Internet of Things have brought unprecedented challenges to Machine Learning researchers, making the learning task more complex. Real-world machine learning problems usually have inherent complexities, such as the intrinsic characteristics of the data, a large number of instances, high input dimensionality, dataset shift, etc. All these aspects matter, and call for new models that can confront these situations. Thus, in this thesis, we have addressed all these issues, simplifying the machine learning process in the current scenario. First, we carry out a complexity analysis to see how it influences classification models, and whether feature selection might result in a decrease of that complexity. Then, we address the process of simplifying learning through the divide-and-conquer philosophy, using a distributed approach. Later, we aim to reduce the complexity of the feature selection preprocessing through the same philosophy. Finally, we opt for a different approach following the current philosophy of Edge Computing, which allows the data produced by Internet of Things devices to be processed closer to where they were created. The proposed approaches have demonstrated their capability to reduce the complexity of traditional machine learning algorithms, and thus it is expected that the contribution of this thesis will open the doors to the development of new machine learning methods that are simpler, more robust, and more computationally efficient.
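As a rough illustration of the divide-and-conquer idea described above, the sketch below partitions the training data, ranks features independently on each partition (the step that could run on separate nodes), and merges the partial rankings by average rank. This is a generic sketch under those assumptions, not the thesis's actual distributed feature selection method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

def distributed_feature_ranking(X, y, n_partitions=4, seed=0):
    """Divide-and-conquer sketch: rank features on each data partition
    independently, then merge the partial rankings by average rank."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    ranks = []
    for part in np.array_split(idx, n_partitions):
        scores = mutual_info_classif(X[part], y[part], random_state=seed)
        # Higher score -> better rank (0 is best).
        ranks.append(np.argsort(np.argsort(-scores)))
    mean_rank = np.mean(ranks, axis=0)
    return np.argsort(mean_rank)        # features ordered from best to worst

X, y = make_classification(n_samples=2000, n_features=50, n_informative=5, random_state=0)
print(distributed_feature_ranking(X, y)[:5])    # indices of the top-5 features
```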
How We Choose One over Another: Predicting Trial-by-Trial Preference Decision
Preference formation is a complex problem, as it is subjective, involves emotion, is driven by implicit processes, and changes depending on the context even within the same individual. Scientific attempts to predict preference are therefore challenging, yet quite important for a basic understanding of human decision-making mechanisms; prediction in a group-average sense, however, has only limited significance. In this study, we predicted preferential decisions on a trial-by-trial basis from brain responses occurring before the individuals made their decisions explicit. Participants made a binary preference decision of approachability based on faces while their electrophysiological responses were recorded. An artificial neural network-based pattern classifier was used, with time-frequency resolved patterns of a functional connectivity measure as features. We were able to predict preference decisions with a mean accuracy of 74.3±2.79% at the participant-independent level and of 91.4±3.8% at the participant-dependent level. Further, we revealed a causal role of the first impression on the final decision and demonstrated the temporal trajectory of preference decision formation.
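The sketch below illustrates the two evaluation schemes mentioned in the abstract, assuming per-trial connectivity features are already available (random numbers stand in for them here): participant-independent evaluation holds out whole participants, while participant-dependent evaluation cross-validates within each participant's own trials. The classifier and feature sizes are placeholders, not the study's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_trials, n_features, n_participants = 600, 40, 10
X = rng.normal(size=(n_trials, n_features))        # connectivity features (placeholder)
y = rng.integers(0, 2, n_trials)                    # binary preference decision
groups = np.repeat(np.arange(n_participants), n_trials // n_participants)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)

# Participant-independent: folds never mix trials from the same participant.
indep = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits=5))
print("participant-independent accuracy:", indep.mean().round(3))

# Participant-dependent: train and test within each participant's own trials.
dep = [cross_val_score(clf, X[groups == p], y[groups == p], cv=5).mean()
       for p in range(n_participants)]
print("participant-dependent accuracy:", np.mean(dep).round(3))
```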
Metalearning
This open access book covers metalearning, one of the fastest-growing areas of research in machine learning. Metalearning studies principled methods to obtain efficient models and solutions by adapting machine learning and data mining processes. This adaptation usually exploits information from past experience on other tasks, and the adaptive processes can involve machine learning approaches. A related area and currently a hot topic, automated machine learning (AutoML) is concerned with automating machine learning processes. Metalearning and AutoML can help AI learn to control the application of different learning methods and acquire new solutions faster without unnecessary interventions from the user. This book offers a comprehensive and thorough introduction to almost all aspects of metalearning and AutoML, covering the basic concepts and architecture, evaluation, datasets, hyperparameter optimization, ensembles and workflows, and also how this knowledge can be used to select, combine, compose, adapt and configure both algorithms and models to yield faster and better solutions to data mining and data science problems. It can thus help developers to develop systems that can improve themselves through experience. This book is a substantial update of the first edition published in 2009. It includes 18 chapters, more than twice as many as the previous edition, which enabled the authors to cover the most relevant topics in more depth and to incorporate an overview of recent research in the respective areas. The book will be of interest to researchers and graduate students in the areas of machine learning, data mining, data science and artificial intelligence.
Metalearning is the study of principled methods that exploit metaknowledge to obtain efficient models and solutions by adapting machine learning and data mining processes. While the variety of machine learning and data mining techniques now available can, in principle, provide good model solutions, a methodology is still needed to guide the search for the most appropriate model in an efficient way. Metalearning provides one such methodology that allows systems to become more effective through experience. This book discusses several approaches to obtaining knowledge concerning the performance of machine learning and data mining algorithms. It shows how this knowledge can be reused to select, combine, compose and adapt both algorithms and models to yield faster, more effective solutions to data mining problems. It can thus help developers improve their algorithms and also develop learning systems that can improve themselves. The book will be of interest to researchers and graduate students in the areas of machine learning, data mining and artificial intelligence.
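As a toy illustration of the metaknowledge idea described above (not taken from the book), the sketch below describes datasets by a few simple meta-features and recommends, for a new dataset, the algorithm that worked best on the most similar previously seen dataset; the meta-features and the nearest-neighbour rule are illustrative choices.

```python
import numpy as np

def meta_features(X, y):
    """A few simple dataset meta-features (illustrative choices only)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    return np.array([X.shape[0],                     # number of instances
                     X.shape[1],                     # number of features
                     len(np.unique(y)),              # number of classes
                     np.mean(np.std(X, axis=0))])    # average feature spread

# Metaknowledge from past experience: (meta-feature vector, best-performing algorithm).
past_experience = [
    (np.array([100, 5, 2, 0.9]), "decision_tree"),
    (np.array([50000, 300, 10, 1.2]), "random_forest"),
    (np.array([2000, 20, 3, 1.0]), "svm"),
]

def recommend(X_new, y_new):
    """1-NN over meta-features: reuse the algorithm from the most similar past task."""
    query = meta_features(X_new, y_new)
    dists = [np.linalg.norm(query - mf) for mf, _ in past_experience]
    return past_experience[int(np.argmin(dists))][1]

rng = np.random.default_rng(0)
print(recommend(rng.normal(size=(120, 6)), rng.integers(0, 2, 120)))
```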