2,917 research outputs found

    Advances in machine learning algorithms for financial risk management

    In this thesis, three novel machine learning techniques are introduced to address distinct yet interrelated challenges in financial risk management. These approaches collectively offer a comprehensive strategy, beginning with the precise classification of credit risks, advancing through the nuanced forecasting of financial asset volatility, and ending with the strategic optimisation of financial asset portfolios. Firstly, a Hybrid Dual-Resampling and Cost-Sensitive technique is proposed to combat the prevalent issue of class imbalance in financial datasets, particularly in credit risk assessment. The key process involves the creation of heuristically balanced datasets. It uses a resampling technique based on Gaussian mixture modelling to generate a synthetic minority class from the minority class data and concurrently applies k-means clustering to the majority class. Feature selection is then performed using the Extra Tree Ensemble technique. A cost-sensitive logistic regression model is subsequently applied to predict the probability of default using the heuristically balanced datasets. The results underscore the effectiveness of our proposed technique, with superior performance observed in comparison to other imbalanced preprocessing approaches. This advancement in credit risk classification lays a solid foundation for understanding individual financial behaviours, a crucial first step in the broader context of financial risk management. Building on this foundation, the thesis then explores the forecasting of financial asset volatility, a critical aspect of understanding market dynamics. A novel model that combines a Triple Discriminator Generative Adversarial Network with a continuous wavelet transform is proposed.
The proposed model has the ability to decompose volatility time series into signal-like and noise-like frequency components, allowing the separate detection and monitoring of non-stationary volatility data. The network comprises a wavelet transform component consisting of continuous wavelet transform and inverse wavelet transform components, an auto-encoder component made up of encoder and decoder networks, and a Generative Adversarial Network consisting of triple Discriminator and Generator networks. The proposed Generative Adversarial Network employs an ensemble of losses during training: an unsupervised loss derived from the Generative Adversarial Network component, a supervised loss, and a reconstruction loss. Data from nine financial assets are employed to demonstrate the effectiveness of the proposed model. This approach not only enhances our understanding of market fluctuations but also bridges the gap between individual credit risk assessment and macro-level market analysis. Finally, the thesis concludes with a novel technique for portfolio optimisation. This involves the use of a model-free reinforcement learning strategy for portfolio optimisation, using historical Low, High, and Close prices of assets as input and the weights of assets as output. A deep Capsules Network is employed to simulate the investment strategy, which involves the reallocation of the different assets to maximise the expected return on investment based on deep reinforcement learning. To provide more learning stability in an online training process, a Markov Differential Sharpe Ratio reward function is proposed as the reinforcement learning objective function. Additionally, a Multi-Memory Weight Reservoir is introduced to facilitate the learning process and the optimisation of computed asset weights, helping to sequentially re-balance the portfolio throughout a specified trading period.
Incorporating the insights gained from volatility forecasting into this strategy shows the interconnected nature of financial markets. Comparative experiments with other models demonstrated that our proposed technique is capable of achieving superior results on risk-adjusted reward performance measures. In a nutshell, this thesis not only addresses individual challenges in financial risk management but also integrates them into a comprehensive framework: from enhancing the accuracy of credit risk classification, through the improved understanding of market volatility, to the optimisation of investment strategies. These methodologies collectively show the potential of machine learning to improve financial risk management.
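The abstract does not specify the Markov Differential Sharpe Ratio reward in detail. As a rough sketch, the classic differential Sharpe ratio (Moody and Saffell's incremental formulation), which such a reward function presumably builds on, can be computed as below; the function name, the smoothing rate `eta`, and the toy return stream are all illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def differential_sharpe_ratio(returns, eta=0.01):
    """Incremental (differential) Sharpe ratio rewards for a return stream.

    A and B are exponential moving estimates of the first and second
    moments of returns; each step's reward is the sensitivity of the
    Sharpe ratio to the newest return.
    """
    A, B = 0.0, 0.0              # moving first and second moments
    rewards = []
    for r in returns:
        dA = r - A               # innovation in the first moment
        dB = r * r - B           # innovation in the second moment
        var = B - A * A          # moving variance estimate
        if var <= 0.0:           # undefined until variance is positive
            d_t = 0.0
        else:
            d_t = (B * dA - 0.5 * A * dB) / var ** 1.5
        rewards.append(d_t)
        A += eta * dA            # update moving estimates
        B += eta * dB
    return np.array(rewards)

# Toy usage: a noisy positive-drift return stream
rng = np.random.default_rng(0)
rets = rng.normal(0.001, 0.01, size=500)
dsr = differential_sharpe_ratio(rets)
```

Because each reward depends only on the newest return and two running moments, it suits exactly the online, sequential re-balancing setting the abstract describes.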

    Unleashing the power of artificial intelligence for climate action in industrial markets

    Artificial Intelligence (AI) is a game-changing capability in industrial markets that can accelerate humanity's race against climate change. Focusing on a resource-hungry and pollution-intensive industry, this study explores AI-powered climate service innovation capabilities and their overall effects. The study develops and validates an AI model, identifying three primary dimensions and nine subdimensions. Based on a dataset from the fast fashion industry, the findings show that AI-powered climate service innovation capabilities significantly influence both environmental and market performance, with environmental performance acting as a partial mediator. Specifically, the results identify the key elements of an AI-informed framework for climate action and show how it can be used to develop a range of mitigation, adaptation, and resilience initiatives in response to climate change.

    Distributed Ledger Technology (DLT) Applications in Payment, Clearing, and Settlement Systems: A Study of Blockchain-Based Payment Barriers and Potential Solutions, and DLT Application in Central Bank Payment System Functions

    Payment, clearing, and settlement systems are essential components of the financial markets and exert considerable influence on the overall economy. While there have been considerable technological advancements in payment systems, the conventional systems still depend on centralized architecture, with inherent limitations and risks. The emergence of distributed ledger technology (DLT) is regarded as a potential solution to transform payment and settlement processes and address certain challenges posed by the centralized architecture of traditional payment systems (Bank for International Settlements, 2017). While proof-of-concept projects have demonstrated the technical feasibility of DLT, significant barriers still hinder its adoption and implementation. The overarching objective of this thesis is to contribute to the developing area of DLT application in payment, clearing, and settlement systems, which is still in its initial stages of application development and lacks a substantial body of scholarly literature and empirical research. This is achieved by identifying the socio-technical barriers to the adoption and diffusion of blockchain-based payment systems and the solutions proposed to address them. Furthermore, the thesis examines and classifies various applications of DLT in central bank payment system functions, offering valuable insights into the motivations, DLT platforms used, and consensus algorithms for applicable use cases. To achieve these objectives, the methodology employed involved a systematic literature review (SLR) of academic literature on blockchain-based payment systems. Furthermore, we utilized a thematic analysis approach to examine data collected from various sources, including central bank white papers, industry reports, and policy documents, regarding the use of DLT applications in central bank payment system functions.
The study's findings on the barriers to blockchain-based payment systems and their proposed solutions challenge the prevailing emphasis on technological and regulatory barriers in the literature and industry discourse regarding the adoption and implementation of blockchain-based payment systems. It highlights the importance of considering the broader socio-technical context and identifying barriers across all five dimensions of the socio-technical framework: technological, infrastructural, user practices/market, regulatory, and cultural. Furthermore, the research identified seven DLT applications in central bank payment system functions. These are grouped into three overarching themes: central banks' operational responsibilities in payment and settlement systems, issuance of central bank digital money, and regulatory oversight/supervisory functions, along with other ancillary functions. Each of these applications has a unique motivation or value proposition, which is the underlying reason for utilizing DLT in that particular use case.

    Deep learning-based multimodality classification of chronic mild traumatic brain injury using resting-state functional MRI and PET imaging

    Mild traumatic brain injury (mTBI) is a public health concern. The present study aimed to develop an automatic classifier to distinguish between patients with chronic mTBI (n = 83) and healthy controls (HCs) (n = 40). Resting-state functional MRI (rs-fMRI) and positron emission tomography (PET) imaging were acquired from the subjects. We proposed a novel deep-learning-based framework, including an autoencoder (AE), to extract high-level latent features, using rectified linear unit (ReLU) and sigmoid activation functions. Single and multimodality algorithms integrating multiple rs-fMRI metrics and PET data were developed. We hypothesized that combining different imaging modalities provides complementary information and improves classification performance. Additionally, a novel data interpretation approach was utilized to identify the top-performing features learned by the AEs. Our method delivered a classification accuracy within the range of 79–91.67% for single neuroimaging modalities; however, classification performance improved to 95.83% when employing the multimodality model. The models identified several brain regions located in the default mode network, sensorimotor network, visual cortex, cerebellum, and limbic system as the most discriminative features. We suggest that this approach could be extended to provide objective biomarkers for predicting mTBI in clinical settings.
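The abstract leaves the network architecture unspecified. A minimal structural sketch of the multimodality idea, one ReLU autoencoder latent per modality concatenated before a sigmoid classifier head, is shown below with untrained random weights; all layer sizes, feature counts, and names are illustrative assumptions, not the study's actual model:

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_autoencoder(n_in, n_latent):
    """Random (untrained) weights for a one-hidden-layer autoencoder."""
    W_enc = rng.normal(0, 0.1, (n_in, n_latent))
    W_dec = rng.normal(0, 0.1, (n_latent, n_in))
    return W_enc, W_dec

def encode(x, W_enc):
    return relu(x @ W_enc)        # ReLU latent representation

def decode(z, W_dec):
    return sigmoid(z @ W_dec)     # sigmoid reconstruction

# One AE per modality: rs-fMRI metrics and PET features
n_fmri, n_pet, n_latent = 120, 90, 16
W_enc_f, W_dec_f = make_autoencoder(n_fmri, n_latent)
W_enc_p, W_dec_p = make_autoencoder(n_pet, n_latent)

# A batch of 8 subjects with features scaled to [0, 1]
x_fmri = rng.random((8, n_fmri))
x_pet = rng.random((8, n_pet))

z_fmri = encode(x_fmri, W_enc_f)
z_pet = encode(x_pet, W_enc_p)

# Multimodality fusion: concatenate latents, then a linear classifier head
z_multi = np.concatenate([z_fmri, z_pet], axis=1)   # shape (8, 32)
w_clf = rng.normal(0, 0.1, (z_multi.shape[1],))
p_mtbi = sigmoid(z_multi @ w_clf)                   # P(mTBI) per subject
```

The concatenated latent is where the hypothesized complementarity of the modalities would enter: the classifier head sees both representations at once.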

    Advancing aviation safety through machine learning and psychophysiological data: a systematic review

    In the aviation industry, safety remains paramount, and it is often compromised by pilot errors attributed to factors such as workload, fatigue, stress, and emotional disturbances. To address these challenges, recent research has increasingly leveraged psychophysiological data and machine learning techniques, offering the potential to enhance safety by understanding pilot behavior. This systematic literature review rigorously follows a widely accepted methodology, scrutinizing 80 peer-reviewed studies out of 3352 studies from five key electronic databases. The paper focuses on behavioral aspects, data types, preprocessing techniques, machine learning models, and performance metrics used in existing studies. It reveals that the majority of research disproportionately concentrates on workload and fatigue, leaving behavioral aspects like emotional responses and attention dynamics less explored. Machine learning models such as tree-based models and support vector machines are most commonly employed, but the utilization of advanced techniques like deep learning remains limited. Traditional preprocessing techniques dominate the landscape, underscoring the need for advanced methods. Data imbalance and its impact on model performance are identified as a critical, under-researched area. The review uncovers significant methodological gaps, including the unexplored influence of preprocessing on model efficacy, a lack of diversification in data collection environments, and a limited focus on model explainability. The paper concludes by advocating for targeted future research to address these gaps, thereby promoting both methodological innovation and a more comprehensive understanding of pilot behavior.

    On the Control of Microgrids Against Cyber-Attacks: A Review of Methods and Applications

    Nowadays, the use of renewable generation, energy storage systems (ESSs), and microgrids (MGs) has grown owing to the better controllability of distributed energy resources (DERs) as well as their cost-effective and emission-aware operation. The development of MGs and the use of hierarchical control have led to data transmission over the communication platform. As a result, the expansion of communication infrastructure has made MGs, as cyber-physical systems (CPSs), vulnerable to cyber-attacks (CAs). Accordingly, the prevention, detection, and isolation of CAs during proper control of MGs is essential. In this paper, a comprehensive review of the control strategies of microgrids against CAs and their defense mechanisms is presented. The general structure of the paper is as follows: firstly, the operational conditions of MGs, i.e., the secure or insecure modes of the physical and cyber layers, are investigated, and the appropriate controls to return to a safer mode are presented. Then, the common MG communication system, which is generally used for multi-agent systems (MASs), is described. Also, the classification of CAs in MGs is reviewed. Afterwards, a comprehensive survey of available research in the field of prevention, detection, and isolation of CAs and MG control against CAs is summarized. Finally, future trends in this context are clarified.

    SimFLE: Simple Facial Landmark Encoding for Self-Supervised Facial Expression Recognition in the Wild

    One of the key issues in facial expression recognition in the wild (FER-W) is that curating large-scale labeled facial images is challenging due to the inherent complexity and ambiguity of facial images. Therefore, in this paper, we propose a self-supervised simple facial landmark encoding (SimFLE) method that can learn effective encoding of facial landmarks, which are important features for improving the performance of FER-W, without expensive labels. Specifically, we introduce a novel FaceMAE module for this purpose. FaceMAE reconstructs masked facial images with elaborately designed semantic masking. Unlike previous random masking, semantic masking is conducted based on channel information processed in the backbone, so the rich semantics of channels can be explored. Additionally, the semantic masking process is fully trainable, enabling FaceMAE to guide the backbone to learn the spatial details and contextual properties of fine-grained facial landmarks. Experimental results on several FER-W benchmarks prove that the proposed SimFLE is superior in facial landmark localization and achieves noticeably improved performance compared to the supervised baseline and other self-supervised methods.
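To make the masking idea concrete, here is a minimal sketch of the *random* patch masking that SimFLE's semantic masking replaces, as used in a standard masked autoencoder; patch size, mask ratio, and function names are illustrative assumptions, and the learned, channel-guided selection that distinguishes SimFLE is deliberately not modeled:

```python
import numpy as np

def random_patch_mask(image, patch=16, mask_ratio=0.75, seed=0):
    """Split an image into non-overlapping patches and zero out a random
    subset; a masked autoencoder is then trained to reconstruct the
    hidden patches from the visible ones.
    """
    rng = np.random.default_rng(seed)
    H, W = image.shape[:2]
    ph, pw = H // patch, W // patch          # patch grid dimensions
    n_patches = ph * pw
    n_masked = int(mask_ratio * n_patches)
    masked_idx = rng.choice(n_patches, size=n_masked, replace=False)
    out = image.copy()
    mask = np.zeros(n_patches, dtype=bool)
    mask[masked_idx] = True
    for idx in masked_idx:
        r, c = divmod(idx, pw)               # patch row/column in the grid
        out[r*patch:(r+1)*patch, c*patch:(c+1)*patch] = 0.0
    return out, mask

# Toy usage on a 224x224 single-channel "image"
img = np.ones((224, 224))
masked_img, mask = random_patch_mask(img)
```

In SimFLE, the choice of which patches to hide is driven by backbone channel activations rather than by `rng.choice`, steering reconstruction effort toward landmark-rich regions.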

    Using Machine Learning in Forestry

    Advanced technology has increased the demand for innovative approaches that apply traditional methods more economically, effectively, quickly, and easily in forestry, as in other disciplines. Recently emerging terms such as forestry informatics, precision forestry, smart forestry, Forestry 4.0, climate-intelligent forestry, digital forestry, and forestry big data have started to appear on the agenda of the forestry discipline. As a result, significant increases are observed in the number of academic studies in which modern approaches such as machine learning and the recently emerged automated machine learning (AutoML) are integrated into decision-making processes in forestry. This study aims to further increase the comprehensibility of machine learning algorithms in the Turkish language, to make them widespread, and to serve as a resource for researchers interested in their use in forestry. Thus, the aim was to contribute a review article to the national literature that reveals both how machine learning has been used in various forestry activities from the past to the present and its potential for future use.

    Decoding Medical Dramas: Identifying Isotopies through Multimodal Classification

    The rise in processing power, combined with advancements in machine learning, has resulted in an increase in the use of computational methods for automated content analysis. Although human coding is more effective for handling complex variables at the core of media studies, audiovisual content is often understudied because analyzing it is difficult and time-consuming. The present work sets out to address this issue by experimenting with unimodal and multimodal transformer-based models in an attempt to automatically classify segments from the popular medical TV drama Grey's Anatomy into three narrative categories, also referred to as isotopies. To approach the task, this study explores two different classification approaches: the first is to employ a single multiclass classifier, while the second uses the one-vs-the-rest approach to decompose the multiclass task. We investigate both approaches in unimodal and multimodal settings, with the aim of identifying the most effective combination of the two. The results of the experiments are promising: the multiclass multimodal approach achieves an F1 score of 0.723, a noticeable improvement over the F1 of 0.684 obtained by the best unimodal approach, a one-vs-the-rest model based on text. This supports the hypothesis that visual and textual modalities can complement each other and result in a better-performing model, which highlights the potential of multimodal approaches for narrative classification in the context of medical dramas.
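The two classification strategies being compared, a single multiclass classifier versus a one-vs-the-rest decomposition, can be illustrated with scikit-learn on synthetic data; the toy features below merely stand in for the paper's transformer embeddings, and the choice of logistic regression is an assumption for the sketch:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

# Toy 3-class data standing in for fused text+vision segment embeddings
X, y = make_classification(n_samples=600, n_features=32, n_informative=12,
                           n_classes=3, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

# Approach 1: a single multiclass classifier over all three isotopies
multi = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
f1_multi = f1_score(y_te, multi.predict(X_te), average='macro')

# Approach 2: decompose into three one-vs-the-rest binary classifiers
ovr = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)
f1_ovr = f1_score(y_te, ovr.predict(X_te), average='macro')
```

Which strategy wins is an empirical question, as the paper's mixed results (multiclass best multimodally, one-vs-the-rest best on text alone) illustrate.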

    Unveiling the frontiers of deep learning: innovations shaping diverse domains

    Deep learning (DL) enables the development of computer models that are capable of learning, visualizing, optimizing, refining, and predicting data. In recent years, DL has been applied in a range of fields, including audio-visual data processing, agriculture, transportation prediction, natural language, biomedicine, disaster management, bioinformatics, drug design, genomics, face recognition, and ecology. To explore the current state of deep learning, it is necessary to investigate its latest developments and applications in these disciplines. However, the literature is lacking in exploring the applications of deep learning across all potential sectors. This paper thus extensively investigates the potential applications of deep learning across all major fields of study, as well as the associated benefits and challenges. As evidenced in the literature, DL exhibits accuracy in prediction and analysis, making it a powerful computational tool, and has the ability to articulate and optimize itself, making it effective in processing data with no prior training. At the same time, deep learning necessitates massive amounts of data for effective analysis and processing. To handle the challenge of compiling huge amounts of medical, scientific, healthcare, and environmental data for use in deep learning, gated architectures like LSTMs and GRUs can be utilized. For multimodal learning, shared neurons in the neural network for all activities and specialized neurons for particular tasks are necessary.
    Comment: 64 pages, 3 figures, 3 tables
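The gated architectures mentioned above can be made concrete with a minimal GRU cell step in NumPy; weight initialization, sizes, and names are illustrative assumptions (real use would rely on a DL framework rather than hand-rolled cells):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One step of a GRU cell: the update gate z and reset gate r control
    how much of the previous hidden state h is kept versus overwritten,
    which is what lets gated architectures track long sequences."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(x @ Wz + h @ Uz + bz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)              # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh + bh)  # candidate state
    return (1.0 - z) * h + z * h_tilde             # gated blend

rng = np.random.default_rng(1)
n_in, n_hid = 8, 16
# Nine parameter arrays in order: Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh
shapes = [(n_in, n_hid), (n_hid, n_hid), (n_hid,)] * 3
params = tuple(rng.normal(0, 0.1, s) for s in shapes)

# Run a short random input sequence through the cell
h = np.zeros(n_hid)
for t in range(20):
    h = gru_step(rng.normal(size=n_in), h, params)
```

Because the new state is a convex blend of the old state and a tanh-bounded candidate, the hidden activations stay bounded over arbitrarily long sequences, one reason gated cells train stably on large data volumes.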