635 research outputs found

    Financial predictions using cost sensitive neural networks for multi-class learning

    Get PDF
    Interest in the localisation of wireless sensor networks has grown in recent years, and a variety of machine-learning methods have been proposed to improve the optimisation of the complex behaviour of wireless networks. Network administrators have found that traditional classification algorithms can perform poorly on imbalanced datasets, and the problem of imbalanced data learning has therefore received particular interest. The purpose of this study was to examine design modifications to neural networks in order to address the problem of cost optimisation decisions and financial predictions. The goal was to compare four learning-based techniques using a cost-sensitive neural network ensemble for multi-class imbalanced data learning. The problem is formulated as a combinatorial cost optimisation, minimising the cost using meta-learning classification rules for Naïve Bayes, J48, Multilayer Perceptron, and Radial Basis Function models. With these models, optimisation faults and cost evaluations for network training are considered.
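    For illustration, the sketch below shows one common way to make multi-class learning cost-sensitive in scikit-learn: derive per-sample weights from a cost matrix and evaluate total misclassification cost. The 3-class cost matrix and toy imbalanced data are hypothetical, only the Naïve Bayes and J48-style decision tree models from the abstract are included, and the weighting scheme is an assumption rather than the paper's formulation.
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier  # C4.5/J48-style surrogate

# Hypothetical cost matrix: cost[i, j] = cost of predicting class j when the true class is i
cost = np.array([[0, 1, 5],
                 [1, 0, 2],
                 [10, 2, 0]], dtype=float)

# Toy imbalanced 3-class dataset
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=6,
                           weights=[0.8, 0.15, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Weight each training example by the total cost of mislabelling its class
w = cost.sum(axis=1)[y_tr]

for model in (GaussianNB(), DecisionTreeClassifier(random_state=0)):
    model.fit(X_tr, y_tr, sample_weight=w)
    pred = model.predict(X_te)
    total_cost = cost[y_te, pred].sum()
    print(type(model).__name__, "total misclassification cost:", total_cost)
```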

    Can Tabular Generative Models Generate Realistic Synthetic Near Infrared Spectroscopic Data?

    Get PDF
    In this thesis, we evaluated the performance of two generative models, the Conditional Tabular Generative Adversarial Network (CTGAN) and the Tabular Variational Autoencoder (TVAE), from the open-source library Synthetic Data Vault (SDV), for generating synthetic Near Infrared (NIR) spectral data. The aim was to assess the viability of these models in synthetic data generation for predicting Dry Matter Content (DMC) in the field of NIR spectroscopy. The fidelity and utility of the synthetic data were examined through a series of benchmarks, including statistical comparisons, dimensionality reduction, and machine learning tasks. The results showed that while both CTGAN and TVAE could generate synthetic data with statistical properties similar to real data, TVAE outperformed CTGAN in preserving the correlation structure of the data and the relationship between the features and the target variable, DMC. However, the synthetic data fell short of fooling machine learning classifiers, indicating a persistent challenge in synthetic data generation. With respect to utility, neither the synthetic data produced by CTGAN nor that produced by TVAE could serve as a satisfactory substitute for real data in training machine learning models to predict DMC. Although TVAE-generated synthetic data showed some potential when used with Random Forest (RF) and K-Nearest Neighbors (KNN) classifiers, the performance was still inadequate for practical use. This study offers valuable insights into the use of generative models for synthetic NIR spectral data generation, highlighting their current limitations and potential areas for future research.
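    For context, a minimal sketch of the synthetic-data generation step is shown below, assuming the SDV 1.x single-table API. The file path and DataFrame of NIR spectra with a "DMC" column are placeholders, and the thesis's benchmarking pipeline (statistical comparisons, dimensionality reduction, downstream DMC models) is not reproduced.
```python
import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import CTGANSynthesizer, TVAESynthesizer

# Placeholder: a table of NIR spectra plus a numeric "DMC" target column
nir_df = pd.read_csv("nir_spectra.csv")

# Describe the table so the synthesizers know column types
metadata = SingleTableMetadata()
metadata.detect_from_dataframe(data=nir_df)

synthetic = {}
for name, cls in [("CTGAN", CTGANSynthesizer), ("TVAE", TVAESynthesizer)]:
    synthesizer = cls(metadata)
    synthesizer.fit(nir_df)                               # learn the joint distribution
    synthetic[name] = synthesizer.sample(num_rows=len(nir_df))

# Downstream, real and synthetic datasets would each be used to train
# DMC predictors and their test performance compared.
```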

    Predicting Flavonoid UGT Regioselectivity with Graphical Residue Models and Machine Learning.

    Get PDF
    Machine learning is applied to a challenging and biologically significant protein classification problem: the prediction of flavonoid UGT acceptor regioselectivity from primary protein sequence. Novel indices characterizing graphical models of protein residues are introduced. The indices are compared with existing amino acid indices and found to cluster residues appropriately. A variety of models employing the indices are then investigated by examining their performance when analyzed using nearest neighbor, support vector machine, and Bayesian neural network classifiers. Improvements over nearest neighbor classifications relying on standard alignment similarity scores are reported.
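    A toy sketch of the general setup (sequence → per-residue index features → classifier) follows. The per-residue index, sequences, and labels are made up for illustration; the paper's novel graphical indices are not reproduced here.
```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical amino-acid index: one scalar value per residue type
INDEX = {aa: i / 20.0 for i, aa in enumerate("ACDEFGHIKLMNPQRSTVWY")}

def featurize(seq, length=50):
    """Map a sequence to a fixed-length vector of index values (pad/truncate)."""
    vals = [INDEX.get(aa, 0.0) for aa in seq[:length]]
    return np.array(vals + [0.0] * (length - len(vals)))

# Toy sequences and regioselectivity class labels
sequences = ["MKTAYIAKQR", "GAVLIPFMWS", "MDEKRHQNST", "CCGGPPAAVV"] * 10
labels = np.array([0, 1, 0, 1] * 10)
X = np.vstack([featurize(s) for s in sequences])

for clf in (SVC(kernel="rbf"), KNeighborsClassifier(n_neighbors=3)):
    scores = cross_val_score(clf, X, labels, cv=5)
    print(type(clf).__name__, "CV accuracy:", scores.mean())
```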

    Automatic data cleaning

    Get PDF

    Topological Feature Selection: A Graph-Based Filter Feature Selection Approach

    Full text link
    In this paper, we introduce a novel unsupervised, graph-based filter feature selection technique which exploits the power of topologically constrained network representations. We model dependency structures among features using a family of chordal graphs (the Triangulated Maximally Filtered Graph), and we maximise the likelihood of features' relevance by studying their relative position inside the network. Such an approach has three aspects that are particularly satisfactory compared to its alternatives: (i) it is highly tunable and easily adaptable to the nature of input data; (ii) it is fully explainable while maintaining a remarkable level of simplicity; (iii) it is computationally cheaper than its alternatives. We test our algorithm on 16 benchmark datasets from different applicative domains, showing that it outperforms or matches the current state-of-the-art under heterogeneous evaluation conditions.
    Comment: 23 pages, 2 figures, 13 tables
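    A rough sketch of the graph-based filter idea is shown below. It uses a maximum spanning tree of absolute feature correlations as a simple stand-in for the Triangulated Maximally Filtered Graph (which is not implemented here) and degree centrality as the relevance score; both choices are assumptions made for brevity.
```python
import numpy as np
import networkx as nx
from sklearn.datasets import load_breast_cancer

X, _ = load_breast_cancer(return_X_y=True)
corr = np.abs(np.corrcoef(X, rowvar=False))   # feature-by-feature dependency strength

# Build the dense dependency graph over features
G = nx.Graph()
n = corr.shape[0]
for i in range(n):
    for j in range(i + 1, n):
        G.add_edge(i, j, weight=corr[i, j])

# Constrain the dense graph to a sparse backbone (MST stand-in for the TMFG),
# then score features by their position (centrality) inside the filtered network.
backbone = nx.maximum_spanning_tree(G, weight="weight")
centrality = nx.degree_centrality(backbone)
top_k = sorted(centrality, key=centrality.get, reverse=True)[:10]
print("Selected feature indices:", top_k)
```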

    Learning to Rank for Active Learning via Multi-Task Bilevel Optimization

    Full text link
    Active learning is a promising paradigm for reducing labeling cost by strategically requesting labels to improve model performance. However, existing active learning methods often rely on acquisition functions that are expensive to compute, extensive model retraining, and multiple rounds of interaction with annotators. To address these limitations, we propose a novel approach for active learning, which aims to select batches of unlabeled instances through a learned surrogate model for data acquisition. A key challenge in this approach is developing an acquisition function that generalizes well, as the history of data, which forms part of the utility function's input, grows over time. Our novel algorithmic contribution is a multi-task bilevel optimization framework that predicts the relative utility -- measured by the validation accuracy -- of different training sets, and ensures the learned acquisition function generalizes effectively. For cases where validation accuracy is expensive to evaluate, we introduce efficient interpolation-based surrogate models to estimate the utility function, reducing the evaluation cost. We demonstrate the performance of our approach through extensive experiments on standard active classification benchmarks. By employing our learned utility function, we show significant improvements over traditional techniques, paving the way for more efficient and effective utility maximization in active learning applications.
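    A greatly simplified sketch of surrogate-based batch selection in a pool setting follows. A random-forest regressor over hypothetical batch summary features stands in for the paper's learned multi-task bilevel utility function; the data, history, and batch featurization are all toy assumptions.
```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
pool = rng.normal(size=(500, 8))                    # unlabeled pool (toy data)

# Hypothetical history: batch summary features -> observed validation-accuracy gain
history_feats = rng.normal(size=(40, 8))            # e.g. mean feature vector of past batches
history_gain = rng.uniform(0, 0.05, size=40)

# Surrogate utility model trained on the acquisition history
surrogate = RandomForestRegressor(random_state=0).fit(history_feats, history_gain)

# Score random candidate batches with the surrogate and pick the best one
batch_size, n_candidates = 16, 200
candidates = [rng.choice(len(pool), batch_size, replace=False) for _ in range(n_candidates)]
scores = surrogate.predict(np.stack([pool[idx].mean(axis=0) for idx in candidates]))
best_batch = candidates[int(np.argmax(scores))]
print("Indices to send to the annotator:", best_batch)
```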

    GLocalX - From Local to Global Explanations of Black Box AI Models

    Get PDF
    Artificial Intelligence (AI) has come to prominence as one of the major components of our society, with applications in most aspects of our lives. In this field, complex and highly nonlinear machine learning models such as ensemble models, deep neural networks, and Support Vector Machines have consistently shown remarkable accuracy in solving complex tasks. Although accurate, AI models are often “black boxes” which we are not able to understand. Relying on these models has a multifaceted impact and raises significant concerns about their transparency. Applications in sensitive and critical domains are a strong motivational factor in trying to understand the behavior of black boxes. We propose to address this issue by providing an interpretable layer on top of black box models by aggregating “local” explanations. We present GLOCALX, a “local-first” model-agnostic explanation method. Starting from local explanations expressed in the form of local decision rules, GLOCALX iteratively generalizes them into global explanations by hierarchically aggregating them. Our goal is to learn accurate yet simple interpretable models to emulate the given black box and, if possible, replace it entirely. We validate GLOCALX in a set of experiments in standard and constrained settings with limited or no access to either data or local explanations. Experiments show that GLOCALX is able to accurately emulate several models with simple and small models, reaching state-of-the-art performance against natively global solutions. Our findings show how it is often possible to achieve a high level of both accuracy and comprehensibility of classification models, even in complex domains with high-dimensional data, without necessarily trading one property for the other. This is a key requirement for trustworthy AI, necessary for adoption in high-stakes decision-making applications.
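    A toy sketch of the local-to-global idea, not the GLOCALX algorithm itself: local rules are modelled as per-feature intervals, and two rules with the same predicted class are merged (interval union) only if the merged rule does not reduce fidelity to the black box on reference data. The rule representation, merge operator, and acceptance test are simplified assumptions.
```python
import numpy as np

def rule_covers(rule, X):
    """Boolean mask of rows satisfying all (low, high) bounds of the rule."""
    mask = np.ones(len(X), dtype=bool)
    for f, (lo, hi) in rule["bounds"].items():
        mask &= (X[:, f] >= lo) & (X[:, f] <= hi)
    return mask

def fidelity(rules, X, black_box_pred):
    """Fraction of covered rows where a rule's class matches the black box."""
    hits = total = 0
    for r in rules:
        m = rule_covers(r, X)
        total += m.sum()
        hits += (black_box_pred[m] == r["label"]).sum()
    return hits / max(total, 1)

def merge(r1, r2):
    """Generalize two same-class rules by taking the union of their intervals."""
    bounds = {}
    for f in set(r1["bounds"]) | set(r2["bounds"]):
        lo1, hi1 = r1["bounds"].get(f, (-np.inf, np.inf))
        lo2, hi2 = r2["bounds"].get(f, (-np.inf, np.inf))
        bounds[f] = (min(lo1, lo2), max(hi1, hi2))
    return {"bounds": bounds, "label": r1["label"]}

# Toy reference data, a stand-in black box, and two local rules for class 1
X = np.random.default_rng(0).normal(size=(200, 3))
bb_pred = (X[:, 0] > 0).astype(int)
rules = [{"bounds": {0: (0.0, 1.0)}, "label": 1},
         {"bounds": {0: (1.0, 3.0)}, "label": 1}]

candidate = merge(rules[0], rules[1])
if fidelity([candidate], X, bb_pred) >= fidelity(rules, X, bb_pred):
    rules = [candidate]                              # accept the more general rule
print(rules)
```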