8,362 research outputs found

    Can multilayer perceptron ensembles model the ecological niche of freshwater fish species?

    The potential of Multilayer Perceptron (MLP) ensembles to explore the ecology of freshwater fish species was tested by applying the technique to the redfin barbel (Barbus haasi Mertens, 1925), an endemic, montane species that inhabits the north-eastern quadrant of the Iberian Peninsula. Two different MLP ensembles were developed. The physical habitat model considered only abiotic variables, whereas the biotic model also included the density of the accompanying fish species and several invertebrate predictors. The results showed that MLP ensembles may outperform single MLPs. Moreover, active selection of MLP candidates to create an optimal subset of MLPs can further improve model performance. The physical habitat model confirmed the redfin barbel's preference for middle-to-upper river segments, whereas the importance of depth confirms that the redfin barbel prefers pool-type habitats. Although the biotic model showed higher uncertainty, it suggested that the redfin barbel, the European eel and the considered cyprinid species have similar habitat requirements. Owing to its high predictive performance and its ability to deal with model uncertainty, the MLP ensemble is a promising tool for ecological modelling and for habitat suitability prediction in environmental flow assessment.
    This study was funded by the Spanish Ministry of Economy and Competitiveness through the project SCARCE (Consolider-Ingenio 2010 CSD2009-00065) and by the Universitat Politecnica de Valencia through the project UPPTE/2012/294 (PAID-06-12). Additionally, the authors would like to thank the Conselleria de Territori i Vivenda (Generalitat Valenciana) and the Confederacion Hidrografica del Jucar (Spanish government), which provided environmental data. The authors are indebted to all the colleagues who collaborated in the field data collection and in improving the text; without their help this paper would not have been possible. Last but not least, the authors would like to specifically thank E. Aparicio and A.J. Cannon, the former because he selflessly provided the bibliography about the redfin barbel and the latter because he patiently explained the 'ins and outs' of the monmlp package.
    Muñoz Mas, R.; Martinez-Capel, F.; Alcaraz-Hernández, J.D.; Mouton, A.M. (2015). Can multilayer perceptron ensembles model the ecological niche of freshwater fish species? Ecological Modelling, 309-310: 72-81. https://doi.org/10.1016/j.ecolmodel.2015.04.025
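    As a rough illustration of the ensemble idea described above (the study itself used the R package monmlp), the sketch below trains several MLP candidates on bootstrap resamples, keeps a best-performing subset, and averages their predictions. The predictors, the synthetic suitability values and the selection rule are hypothetical and only stand in for the general technique.

```python
# Minimal sketch (not the authors' code): an ensemble of MLPs whose averaged
# output approximates a habitat-suitability score. Variable names and the
# synthetic data are hypothetical; the paper itself used the R package monmlp.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                       # e.g. depth, velocity, substrate, cover
y = 1 / (1 + np.exp(-(1.5 * X[:, 0] - X[:, 1])))    # synthetic suitability in [0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train several MLP candidates on bootstrap resamples of the training data.
candidates = []
for seed in range(20):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=seed)
    mlp.fit(X_tr[idx], y_tr[idx])
    candidates.append(mlp)

# "Active selection": keep only the candidates that perform best on held-out data,
# then average their predictions to form the ensemble output.
scores = [m.score(X_te, y_te) for m in candidates]
selected = [m for m, s in sorted(zip(candidates, scores), key=lambda p: -p[1])[:10]]
ensemble_pred = np.mean([m.predict(X_te) for m in selected], axis=0)
ensemble_r2 = 1 - np.sum((y_te - ensemble_pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print("ensemble R^2:", ensemble_r2, "best single MLP R^2:", max(scores))
```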

    Impact of the learners diversity and combination method on the generation of heterogeneous classifier ensembles

    Ensembles of classifiers are a proven approach in machine learning, with a wide body of research behind them. The main issue in ensembles of classifiers is not only the selection of the base classifiers, but also the combination of their outputs. The literature has established that much is to be gained from combining classifiers if those classifiers are accurate and diverse. However, how to define the relation between accuracy and diversity in order to build the best possible ensemble of classifiers is still an open issue. In this paper, we propose a novel approach to evaluate the impact of the diversity of the learners on the generation of heterogeneous ensembles. We present an exhaustive study of this approach using 27 different multiclass datasets and analyse the results in detail. In addition, to assess the robustness of the results, the presence of labelling noise is also considered.
    This work has been supported under projects PEAVAUTO-CM-UC3M–2020/00036/001, PID2019-104793RB-C31, and RTI2018-096036-B-C22, and by the Region of Madrid's Excellence Program, Spain (EPUC3M17).
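    The following sketch is a minimal, illustrative example of a heterogeneous ensemble and one simple notion of diversity (pairwise disagreement). The dataset, base learners and majority-vote combination below are assumptions chosen for demonstration and do not reproduce the paper's experimental setup.

```python
# Illustrative sketch only: a heterogeneous ensemble (different base-learner
# families) combined by majority vote, plus a simple pairwise disagreement
# measure as one possible notion of diversity. The paper's 27 datasets and
# exact combination methods are not reproduced here.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import VotingClassifier

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

base = [("tree", DecisionTreeClassifier(random_state=0)),
        ("nb", GaussianNB()),
        ("knn", KNeighborsClassifier(n_neighbors=5))]

ensemble = VotingClassifier(estimators=base, voting="hard").fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))

# Pairwise disagreement: fraction of test points on which two learners differ.
preds = {name: clf.fit(X_tr, y_tr).predict(X_te) for name, clf in base}
for a, pa in preds.items():
    for b, pb in preds.items():
        if a < b:
            print(f"disagreement({a}, {b}) = {np.mean(pa != pb):.3f}")
```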

    Generative Adversarial Networks for Financial Trading Strategies Fine-Tuning and Combination

    Systematic trading strategies are algorithmic procedures that allocate assets with the aim of optimizing a certain performance criterion. To obtain an edge in a highly competitive environment, the analyst needs to properly fine-tune the strategy, or discover how to combine weak signals in novel alpha-creating ways. Both aspects, namely fine-tuning and combination, have been extensively researched using several methods, but emerging techniques such as Generative Adversarial Networks can have an impact on them. Therefore, our work proposes the use of Conditional Generative Adversarial Networks (cGANs) for trading strategy calibration and aggregation. To this end, we provide a full methodology on: (i) the training and selection of a cGAN for time series data; (ii) how each generated sample is used for strategy calibration; and (iii) how all generated samples can be used for ensemble modelling. To provide evidence that our approach is well grounded, we designed an experiment with multiple trading strategies, encompassing 579 assets. We compared cGAN with an ensemble scheme and model validation methods, both suited for time series. Our results suggest that cGANs are a suitable alternative for strategy calibration and combination, providing outperformance when the traditional techniques fail to generate any alpha.
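    The sketch below is a hedged illustration of how generated scenarios could feed strategy calibration and aggregation. The trained cGAN generator is replaced by a placeholder stub, and the moving-average strategy, Sharpe-based weighting and all parameter names are hypothetical choices, not the paper's methodology.

```python
# Hedged sketch, not the paper's method: assume a trained conditional generator
# `sample_paths(condition, n_paths)` (here a stand-in stub) produces plausible
# return paths. Generated paths are used (i) to calibrate a moving-average
# strategy's lookback and (ii) to weight strategies by average Sharpe ratio.
import numpy as np

rng = np.random.default_rng(1)

def sample_paths(condition, n_paths, horizon=252):
    """Placeholder for a trained cGAN generator conditioned on recent returns."""
    drift = condition.mean()
    return drift + 0.01 * rng.standard_normal((n_paths, horizon))

def ma_strategy_returns(returns, lookback):
    """Long when the trailing mean return is positive, flat otherwise."""
    pnl = np.zeros_like(returns)
    for t in range(lookback, returns.shape[-1]):
        signal = returns[..., t - lookback:t].mean(axis=-1) > 0
        pnl[..., t] = signal * returns[..., t]
    return pnl

def sharpe(pnl):
    return pnl.mean(axis=-1) / (pnl.std(axis=-1) + 1e-9) * np.sqrt(252)

recent = 0.0005 + 0.01 * rng.standard_normal(252)      # observed conditioning window
paths = sample_paths(recent, n_paths=200)

# (i) Calibration: pick the lookback with the best average Sharpe on generated paths.
lookbacks = [5, 20, 60]
avg_sharpe = {lb: sharpe(ma_strategy_returns(paths, lb)).mean() for lb in lookbacks}
best_lb = max(avg_sharpe, key=avg_sharpe.get)
print("calibrated lookback:", best_lb, avg_sharpe)

# (ii) Aggregation: weight each strategy variant by the positive part of its Sharpe.
weights = np.maximum([avg_sharpe[lb] for lb in lookbacks], 0)
weights = weights / weights.sum() if weights.sum() > 0 else np.ones(len(lookbacks)) / len(lookbacks)
print("ensemble weights over lookbacks:", dict(zip(lookbacks, np.round(weights, 3))))
```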

    Inheritance-Based Diversity Measures for Explicit Convergence Control in Evolutionary Algorithms

    Diversity is an important factor in evolutionary algorithms to prevent premature convergence towards a single local optimum. Various means of maintaining diversity throughout the process of evolution exist in the literature. We analyze approaches to diversity that (a) have an explicit and quantifiable influence on fitness at the individual level and (b) require no (or very little) additional domain knowledge such as domain-specific distance functions. We also introduce the concept of genealogical diversity within a broader study. We show that employing these approaches can help evolutionary algorithms for global optimization in many cases.
    Comment: GECCO '18: Genetic and Evolutionary Computation Conference, 2018, Kyoto, Japan
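    As a toy illustration of diversity that is explicit in fitness and needs no domain-specific distance function, the sketch below adds a small genealogical-diversity bonus (based on ancestor-set overlap) to a simple genetic algorithm. The bonus weight, ancestry depth and objective are arbitrary assumptions rather than the paper's exact measures.

```python
# Toy sketch (not the paper's algorithm): a GA on a multimodal function where
# each individual carries a set of ancestor IDs, and fitness gets a small bonus
# proportional to how genealogically distinct the individual is from the rest
# of the population. The bonus weight and ancestry depth are arbitrary choices.
import numpy as np

rng = np.random.default_rng(2)

def raw_fitness(x):
    return np.sin(5 * x) * (1 - np.tanh(x ** 2))     # multimodal toy objective

POP, GENS, DEPTH, BONUS = 30, 50, 3, 0.1
pop = [{"x": rng.uniform(-2, 2), "anc": {i}} for i in range(POP)]
next_id = POP

for gen in range(GENS):
    # Genealogical diversity: 1 minus average ancestor overlap with the population.
    def geneo_div(ind):
        overlaps = [len(ind["anc"] & o["anc"]) / max(len(ind["anc"] | o["anc"]), 1)
                    for o in pop if o is not ind]
        return 1 - np.mean(overlaps)

    scored = [(raw_fitness(ind["x"]) + BONUS * geneo_div(ind), ind) for ind in pop]
    scored.sort(key=lambda s: -s[0])
    parents = [ind for _, ind in scored[:POP // 2]]

    children = []
    for _ in range(POP - len(parents)):
        a, b = rng.choice(parents, 2, replace=False)
        child_x = (a["x"] + b["x"]) / 2 + 0.1 * rng.standard_normal()
        anc = set(list(a["anc"]) + list(b["anc"]))           # inherit ancestry
        anc = set(sorted(anc)[-DEPTH:]) | {next_id}          # truncate + add own ID
        children.append({"x": child_x, "anc": anc})
        next_id += 1
    pop = parents + children

best = max(pop, key=lambda ind: raw_fitness(ind["x"]))
print("best x:", round(best["x"], 3), "raw fitness:", round(float(raw_fitness(best["x"])), 3))
```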

    Study and implementation of quantum-inspired boosting algorithms for AI powered Financial Asset Management.

    Ensemble Learning (EL) is a machine learning technique that involves combining multiple individual models, called weak learners, in order to produce more accurate predictions.
The idea behind EL is that by aggregating the predictions of multiple models, the final prediction can be more robust, accurate, and generalizable than that of any single weak learner alone. Boosting is a powerful EL method in which the ensemble of models is constructed iteratively, so that at each iteration the training of new learners focuses on the training examples for which the previously selected models perform poorly. Boosting algorithms have been successfully applied to various domains, including image and object recognition, text mining, finance and a number of other fields. They are particularly effective in scenarios where high accuracy and stability are crucial, making them a valuable tool in the field of machine learning. Qboost is a boosting algorithm, first introduced by Neven et al. in 2008, that casts EL as a hard combinatorial optimization problem taking the form of a QUBO (Quadratic Unconstrained Binary Optimization) problem or, equivalently, an Ising model optimization. This kind of optimization problem is NP-complete and therefore difficult to tackle with classical digital computing methods and algorithms such as simulated annealing (SA). Hence, alternative computational methods, like the ones developed within the framework of quantum computing, are of high interest for this class of problems. In particular, adiabatic quantum annealing (AQA) has recently been used for multiple demonstrations in fields such as particle detection, aerial imaging and financial applications. Its implementation on neutral atom processors, a type of adiabatic quantum hardware, has yielded promising results in terms of practical usefulness and scalability. This thesis aims to develop, test and benchmark a Qboost-based algorithm in the context of multi-label classification problems. The study and the implementation take into account several quantum-hybrid, quantum-inspired and traditional optimization algorithms, as well as different hardware solutions, including quantum computers with neutral atom processors. The project matured during an internship at Axyon AI, a FinTech company that serves quantitative asset managers through its proprietary machine learning software platform. Axyon AI exploits ensemble learning and boosting in its machine learning pipeline. The aim of this project is to build a proof of concept for improving the performance of the ensemble-building step of the pipeline with respect to the currently employed EL algorithm. The proposed techniques facilitate a broader exploration of the weak learners' configuration space, aiming to maximise performance and capture previously untapped potential.
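    For orientation, the sketch below builds the kind of QUBO that Qboost-style approaches minimize (binary weights over weak learners, a quadratic correlation term plus a sparsity penalty) and solves it by brute force because the ensemble is tiny; a real implementation would hand the Q matrix to a simulated or quantum annealer. The decision-stump weak learners, penalty value and data are illustrative assumptions, not Axyon AI's pipeline.

```python
# Hedged sketch of a Qboost-style QUBO (after Neven et al., 2008): choose a
# binary weight vector w over weak learners minimizing
#   sum_s ( (1/N) * sum_i w_i h_i(x_s) - y_s )^2  +  lam * sum_i w_i .
# Expanding the square yields a QUBO matrix Q; here it is minimized by brute
# force since N is tiny, whereas an annealer would be used at realistic sizes.
import itertools
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=200))   # labels in {-1, +1}

# Train N weak learners (stumps) on random feature pairs and record their +-1 outputs.
N, lam = 8, 0.05
H = np.empty((N, len(y)))
for i in range(N):
    feats = rng.choice(X.shape[1], 2, replace=False)
    stump = DecisionTreeClassifier(max_depth=1, random_state=i).fit(X[:, feats], y)
    H[i] = stump.predict(X[:, feats])

# Build the QUBO: quadratic term Q[i, j] = (1/N^2) * sum_s H[i, s] H[j, s];
# the diagonal additionally gets the linear term -2/N * sum_s H[i, s] y_s + lam
# (valid because w_i^2 = w_i for binary variables).
Q = (H @ H.T) / N ** 2
Q[np.diag_indices(N)] += -2.0 / N * (H @ y) + lam

def qubo_energy(w):
    return w @ Q @ w

best_w = min((np.array(bits) for bits in itertools.product([0, 1], repeat=N)),
             key=qubo_energy)
strong = np.sign(best_w @ H)          # vote of the selected weak learners
print("selected learners:", best_w, "train accuracy:", np.mean(strong == y))
```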