5 research outputs found

    Managing extreme cryptocurrency volatility in algorithmic trading: EGARCH via genetic algorithms and neural networks.

    Open access policy taken from: https://www.aimspress.com/index/news/solo-detail/openaccesspolicy
    The blockchain ecosystem has grown substantially since 2009, beginning with the introduction of Bitcoin, driven by conceptual and algorithmic innovations and the emergence of numerous new cryptocurrencies. While significant attention has been devoted to established cryptocurrencies such as Bitcoin and Ethereum, the continuous introduction of new tokens requires a nuanced examination. In this article, we contribute a comparative analysis of deep learning and quantum methods within neural networks and genetic algorithms, incorporating the innovative integration of EGARCH (Exponential Generalized Autoregressive Conditional Heteroscedasticity) into these methodologies. In this study, we evaluated how well neural networks and genetic algorithms predict "buy" or "sell" decisions for different cryptocurrencies, using F1 score, precision, and recall as key metrics. Our findings identified the adaptive genetic algorithm with fuzzy logic as the most accurate and precise among the genetic algorithms. Among the neural network methods, the quantum neural network demonstrated noteworthy accuracy. Importantly, the X2Y2 cryptocurrency consistently attained the highest accuracy levels under both methodologies, emphasizing its predictive strength. Beyond aiding in the selection of optimal trading methodologies, we introduced the potential of EGARCH integration to enhance predictive capabilities, offering valuable insights for reducing the risks of investing in nascent cryptocurrencies with limited historical market data. This research provides insights for investors, regulators, and developers in the cryptocurrency market.
Investors can utilize accurate predictions to optimize investment decisions, regulators may consider implementing guidelines to ensure fairness, and developers play a pivotal role in refining neural network models for enhanced analysis. This research was funded by the Universitat de Barcelona under grant UB-AE-AS017634.
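
    The abstract evaluates buy/sell predictions with F1 score, precision, and recall. As an illustrative sketch (not the paper's code, and with invented example labels), these metrics can be computed for binary signals as follows:

```python
# Illustrative computation of precision, recall, and F1 for binary
# "buy" (1) / "sell" (0) predictions. The labels below are made up.

def classification_metrics(y_true, y_pred):
    """Return (precision, recall, f1), treating 1 ("buy") as the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical trading signals: 1 = buy, 0 = sell
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
precision, recall, f1 = classification_metrics(y_true, y_pred)
```

    F1 is the harmonic mean of precision and recall, which is why the abstract reports all three: a strategy that rarely signals "buy" can have high precision but poor recall, and F1 penalizes that imbalance.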

    Adversarial Machine Learning in the Wild

    Deep neural networks are making their way into everyday life at an increasing rate. While the adoption of these models has brought great benefits, it has also opened the door to new vulnerabilities in real-world systems. More specifically, within the scope of this work we are interested in one class of vulnerabilities: adversarial attacks. Given the importance and sensitivity of some of the tasks these models are responsible for, it is crucial to study such vulnerabilities in real-world systems. In this work, we look at examples of deep neural network-based real-world systems, vulnerabilities of such systems, and approaches for making such systems more robust. First, we study an example of leveraging a deep neural network in a business-critical real-world system. We discuss how deep neural networks improve the quality of smart voice assistants. More specifically, we introduce how collaborative filtering models can automatically detect and resolve the errors of a voice assistant, and we discuss the success of this approach in improving the quality of a real-world voice assistant. Second, we demonstrate a proof of concept for an adversarial attack against content-based recommendation systems, which are commonly used in real-world settings. We discuss how malicious actors can add unnoticeable perturbations to the content they upload to achieve their preferred outcomes, and we show how adversarial training can render such attacks useless. Third, we discuss another example of how adversarial attacks can be leveraged to manipulate a real-world system. We study how adversarial attacks can successfully manipulate YouTube's copyright detection model and the financial implications of this vulnerability. In particular, we show how adversarial examples created for a copyright detection model that we implemented transfer to another black-box model. Finally, we study the problem of transfer learning in an adversarially robust setting. We discuss how robust models contain robust feature extractors and how we can leverage them to train new classifiers that preserve the robustness of the original model. We then study the case of fine-tuning in the target domain while preserving robustness, and we show the success of our proposed solutions in preserving robustness in the target domain.
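
    The adversarial perturbations described above can be illustrated with a minimal gradient-sign attack on a toy logistic-regression classifier: nudge each input feature by a small epsilon in the direction that increases the loss. This is a hedged sketch of the general technique, not the models or systems studied in this work; all weights and inputs below are invented.

```python
import math

# Gradient-sign (FGSM-style) perturbation against a toy logistic-regression
# "model": move each feature by eps in the direction that increases the loss.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b):
    """Probability of class 1 under the toy linear model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(x, w, b, y, eps):
    """Return x + eps * sign(d loss / d x) for cross-entropy loss on label y."""
    p = predict(x, w, b)
    # For logistic regression, d loss / d x_i = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Hypothetical weights and a correctly classified clean input
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.6)
p_clean = predict(x, w, b)   # above 0.5: correct "class 1" prediction
p_adv = predict(x_adv, w, b) # below 0.5: the perturbation flips the decision
```

    A per-feature shift of 0.6 is enough to flip this toy model's decision; the point of the real attacks discussed above is that, for deep networks on high-dimensional inputs such as images or audio, the required perturbation can be far too small for a human to notice.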

    Adversarial Robustness and Robust Meta-Learning for Neural Networks

    Despite the overwhelming success of neural networks for pattern recognition, these models behave categorically differently from humans. Adversarial examples, small perturbations that are often undetectable to the human eye, easily fool neural networks, demonstrating that neural networks lack the robustness of human classifiers. This thesis comprises three parts. First, we motivate the study of defenses against adversarial examples with a case study on algorithmic trading, in which robustness may be critical for security reasons. Second, we develop methods for hardening neural networks against an adversary, especially in the low-data regime, where meta-learning methods achieve state-of-the-art results. Finally, we discuss several properties of the neural network models we use. These properties are of interest beyond robustness to adversarial examples, and they extend to the broad setting of deep learning.
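
    A common recipe for preserving robustness when transferring to a new task, touched on in the preceding abstracts, is to freeze a robustly trained feature extractor and train only a fresh classifier head on its outputs. The following is a minimal illustrative sketch of that recipe with a fixed stand-in extractor and an invented toy dataset, not the thesis's models:

```python
import math

# Transfer-learning sketch: keep a (stand-in) "robust" feature extractor
# frozen and train only a new linear classifier head on its features.

def robust_features(x):
    """Frozen stand-in extractor: a fixed nonlinear map that is never updated."""
    return [math.tanh(x[0] + x[1]), math.tanh(x[0] - x[1])]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_head(data, lr=0.5, steps=200):
    """Fit only the head parameters (w, b); gradients never touch the extractor."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(steps):
        for x, y in data:
            f = robust_features(x)
            p = sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b)
            w = [wi - lr * (p - y) * fi for wi, fi in zip(w, f)]
            b -= lr * (p - y)
    return w, b

# Toy dataset: label 1 when the coordinates sum to a positive value
data = [([1.0, 1.0], 1), ([2.0, 0.0], 1), ([-1.0, -1.0], 0), ([-2.0, 0.0], 0)]
w, b = train_head(data)
accuracy = sum(
    (sigmoid(sum(wi * fi for wi, fi in zip(w, robust_features(x))) + b) > 0.5) == bool(y)
    for x, y in data
) / len(data)
```

    Because only the head is updated, whatever robustness the frozen extractor provides is carried over to the new classifier; the fine-tuning variant mentioned above must instead take extra care, since updating the extractor itself can erode that robustness.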