
    Advances in machine learning algorithms for financial risk management

    In this thesis, three novel machine learning techniques are introduced to address distinct yet interrelated challenges in financial risk management. These approaches collectively offer a comprehensive strategy, beginning with the precise classification of credit risks, advancing through the nuanced forecasting of financial asset volatility, and ending with the strategic optimisation of financial asset portfolios.

    Firstly, a Hybrid Dual-Resampling and Cost-Sensitive technique is proposed to combat the prevalent issue of class imbalance in financial datasets, particularly in credit risk assessment. The key process involves the creation of heuristically balanced datasets. A resampling technique based on Gaussian mixture modelling generates a synthetic minority class from the minority class data, while k-means clustering is concurrently applied to the majority class. Feature selection is then performed using the Extra Trees ensemble technique, and a cost-sensitive logistic regression model is applied to predict the probability of default using the heuristically balanced datasets. The results underscore the effectiveness of our proposed technique, with superior performance observed in comparison to other imbalanced preprocessing approaches. This advancement in credit risk classification lays a solid foundation for understanding individual financial behaviours, a crucial first step in the broader context of financial risk management.

    Building on this foundation, the thesis then explores the forecasting of financial asset volatility, a critical aspect of understanding market dynamics. A novel model that combines a Triple Discriminator Generative Adversarial Network with a continuous wavelet transform is proposed. The proposed model can decompose volatility time series into signal-like and noise-like frequency components, allowing the separate detection and monitoring of non-stationary volatility data. The network comprises a wavelet transform component consisting of continuous and inverse wavelet transforms, an auto-encoder component made up of encoder and decoder networks, and a Generative Adversarial Network consisting of triple Discriminator and Generator networks. During training, the proposed Generative Adversarial Network employs an ensemble of the unsupervised loss derived from its adversarial component, a supervised loss, and a reconstruction loss. Data from nine financial assets are employed to demonstrate the effectiveness of the proposed model. This approach not only enhances our understanding of market fluctuations but also bridges the gap between individual credit risk assessment and macro-level market analysis.

    Finally, the thesis ends with a novel technique for portfolio optimisation. This involves a model-free reinforcement learning strategy that takes historical Low, High, and Close prices of assets as input and produces asset weights as output. A deep Capsule Network is employed to simulate the investment strategy, which involves reallocating the different assets to maximise the expected return on investment based on deep reinforcement learning. To provide more learning stability in an online training process, a Markov Differential Sharpe Ratio reward function is proposed as the reinforcement learning objective function. Additionally, a Multi-Memory Weight Reservoir is introduced to facilitate the learning process and the optimisation of computed asset weights, helping to sequentially re-balance the portfolio throughout a specified trading period. Incorporating the insights gained from volatility forecasting into this strategy reflects the interconnected nature of financial markets. Comparative experiments with other models demonstrated that our proposed technique is capable of achieving superior results on risk-adjusted reward performance measures.

    In a nutshell, this thesis not only addresses individual challenges in financial risk management but also incorporates them into a comprehensive framework: from enhancing the accuracy of credit risk classification, through an improved understanding of market volatility, to the optimisation of investment strategies. These methodologies collectively show the potential of machine learning to improve financial risk management.
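
    A minimal sketch of the dual-resampling and cost-sensitive pipeline described above, using scikit-learn; the component counts, cluster counts, class-cost weights, and the toy data standing in for a credit dataset are illustrative assumptions, not the thesis's tuned settings.

        # Minimal sketch of the hybrid dual-resampling pipeline: GMM-based
        # minority oversampling, k-means majority undersampling, Extra-Trees
        # feature selection, then cost-sensitive logistic regression.
        # All parameter values below are illustrative assumptions.
        import numpy as np
        from sklearn.mixture import GaussianMixture
        from sklearn.cluster import KMeans
        from sklearn.ensemble import ExtraTreesClassifier
        from sklearn.feature_selection import SelectFromModel
        from sklearn.linear_model import LogisticRegression

        def heuristically_balance(X_min, X_maj, n_target):
            """Oversample the minority class with a GMM and undersample the
            majority class with k-means centroids, yielding n_target of each."""
            gmm = GaussianMixture(n_components=3, random_state=0).fit(X_min)
            synth, _ = gmm.sample(n_target - len(X_min))   # synthetic minority points
            X_min_bal = np.vstack([X_min, synth])
            km = KMeans(n_clusters=n_target, n_init=10, random_state=0).fit(X_maj)
            X_maj_bal = km.cluster_centers_                # majority prototypes
            X = np.vstack([X_min_bal, X_maj_bal])
            y = np.hstack([np.ones(len(X_min_bal)), np.zeros(len(X_maj_bal))])
            return X, y

        # Toy imbalanced data standing in for a credit-risk dataset.
        rng = np.random.default_rng(0)
        X_maj = rng.normal(0.0, 1.0, size=(900, 10))       # non-defaulters
        X_min = rng.normal(1.5, 1.0, size=(100, 10))       # defaulters
        X, y = heuristically_balance(X_min, X_maj, n_target=500)

        # Extra-Trees feature selection, then cost-sensitive logistic regression.
        selector = SelectFromModel(
            ExtraTreesClassifier(n_estimators=200, random_state=0)).fit(X, y)
        X_sel = selector.transform(X)
        clf = LogisticRegression(class_weight={0: 1.0, 1: 2.0},
                                 max_iter=1000).fit(X_sel, y)
        print("default probabilities:", clf.predict_proba(X_sel[:3])[:, 1])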

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This ļ¬fth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different ļ¬elds of applications and in mathematics, and is available in open-access. The collected contributions of this volume have either been published or presented after disseminating the fourth volume in 2015 in international conferences, seminars, workshops and journals, or they are new. The contributions of each part of this volume are chronologically ordered. First Part of this book presents some theoretical advances on DSmT, dealing mainly with modiļ¬ed Proportional Conļ¬‚ict Redistribution Rules (PCR) of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classiļ¬ers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence with their Matlab codes. Because more applications of DSmT have emerged in the past years since the apparition of the fourth book of DSmT in 2015, the second part of this volume is about selected applications of DSmT mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender system, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identiļ¬cation of maritime vessels, fusion of support vector machines (SVM), Silx-Furtif RUST code library for information fusion including PCR rules, and network for ship classiļ¬cation. Finally, the third part presents interesting contributions related to belief functions in general published or presented along the years since 2015. These contributions are related with decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classiļ¬cation, and hybrid techniques mixing deep learning with belief functions as well

    The role of artificial intelligence-driven soft sensors in advanced sustainable process industries: a critical review

    With the predicted depletion of natural resources and alarming environmental issues, sustainable development has become a popular as well as a much-needed concept in modern process industries. Hence, manufacturers are quite keen on adopting novel process monitoring techniques to enhance product quality and process efficiency while minimizing possible adverse environmental impacts. Hardware sensors are employed in process industries to aid process monitoring and control, but they are associated with many limitations, such as disturbances to the process flow, measurement delays, frequent need for maintenance, and high capital costs. As a result, soft sensors have become an attractive alternative for predicting quality-related parameters that are 'hard to measure' using hardware sensors. Due to their promising features over their hardware counterparts, they have been employed across different process industries. This article attempts to explore the state-of-the-art artificial intelligence (AI)-driven soft sensors designed for process industries and their role in achieving the goal of sustainable development. First, a general introduction is given to soft sensors, their applications in different process industries, and their significance in achieving sustainable development goals. AI-based soft sensing algorithms are then introduced. Next, a discussion is provided on how AI-driven soft sensors contribute toward different sustainable manufacturing strategies of process industries. This is followed by a critical review of the most recent state-of-the-art AI-based soft sensors reported in the literature. Here, the use of powerful AI-based algorithms to address the limitations of traditional algorithms that restrict soft sensor performance is discussed. Finally, the challenges and limitations associated with current soft sensor design, application, and maintenance are discussed, with possible future directions for designing more intelligent and smart soft sensing technologies to cater to future industrial needs.
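
    To make the soft-sensing idea concrete, here is a minimal sketch (not drawn from the review) of a data-driven soft sensor: a machine learning model trained to predict a hard-to-measure quality variable from routinely measured process variables. The process relation, variable names, and synthetic data are invented for illustration.

        # Minimal soft-sensor sketch: predict a hard-to-measure quality
        # variable from easy-to-measure process variables.
        # The data and process relation are synthetic stand-ins.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(1)
        n = 2000
        temp = rng.normal(350, 5, n)        # reactor temperature (K)
        press = rng.normal(2.0, 0.1, n)     # pressure (bar)
        flow = rng.normal(12.0, 1.0, n)     # feed flow (m^3/h)
        X = np.column_stack([temp, press, flow])
        # Assumed nonlinear relation standing in for a lab-measured quality variable.
        quality = (0.02 * (temp - 350) ** 2 + 5 * press - 0.3 * flow
                   + rng.normal(0, 0.5, n))

        X_tr, X_te, y_tr, y_te = train_test_split(X, quality, random_state=0)
        sensor = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
        print("soft-sensor R^2 on held-out data:",
              round(r2_score(y_te, sensor.predict(X_te)), 3))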

    Sensing Collectives: Aesthetic and Political Practices Intertwined

    Are aesthetics and politics really two different things? The book takes a new look at how they intertwine by turning from theory to practice. Case studies trace how sensory experiences are created and how collective interests are shaped. They investigate how aesthetics and politics are entangled, both in building and in disrupting collective orders, in governance and innovation. This ranges from populist rallies and artistic activism, through alternative lifestyles and consumer culture, to corporate PR and governmental policies. The authors are academics and artists. The result is a new mapping of the intermingling and co-constitution of aesthetics and politics in engagements with collective orders.

    Analog Photonics Computing for Information Processing, Inference and Optimisation

    This review presents an overview of the current state of the art in photonics computing, which leverages photons, photons coupled with matter, and optics-related technologies for effective and efficient computational purposes. It covers the history and development of photonics computing and modern analogue computing platforms and architectures, focusing on optimization tasks and neural network implementations. The authors examine special-purpose optimizers, mathematical descriptions of photonics optimizers, and their various interconnections. Disparate applications are discussed, including direct encoding, logistics, finance, phase retrieval, machine learning, neural networks, probabilistic graphical models, and image processing, among many others. The main directions of technological advancement and associated challenges in photonics computing are explored, along with an assessment of its efficiency. Finally, the paper discusses the prospects of the field and optical quantum computing, providing insights into the potential applications of this technology.
    Comment: Invited submission by Journal of Advanced Quantum Technologies; accepted version 5/06/202
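
    As a purely illustrative aside (not drawn from the review), a core analog primitive behind many photonic neural-network accelerators is the optical matrix-vector multiply, with inputs encoded in light intensities, weights in element transmissions, and noise contributed at photodetection; a toy numerical model:

        # Toy model of an analog optical matrix-vector multiply: the input
        # vector is encoded in light intensities, weights in element
        # transmissions, and photodetection adds noise. Purely illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.uniform(0, 1, size=(4, 8))   # transmission matrix (weights)
        x = rng.uniform(0, 1, size=8)        # input light intensities

        def optical_mvm(W, x, noise_std=0.01):
            y = W @ x                        # ideal incoherent summation at detectors
            return y + rng.normal(0, noise_std, size=y.shape)  # detection noise

        print("analog result :", np.round(optical_mvm(W, x), 3))
        print("digital result:", np.round(W @ x, 3))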

    Harnessing Evolution in-Materio as an Unconventional Computing Resource

    This thesis illustrates the use and development of physical conductive analogue systems for unconventional computing using the Evolution in-Materio (EiM) paradigm. EiM uses an evolutionary algorithm to configure and exploit a physical material (or medium) for computation. While EiM processors show promise, fundamental questions and scaling issues remain. Additionally, their development is hindered by slow manufacturing and physical experimentation. This work addressed these issues by implementing simulated models to speed up research efforts, followed by investigations of physically implemented novel in-materio devices. Initial work leveraged simulated conductive networks as single-substrate 'monolithic' EiM processors, performing classification by formulating the system as an optimisation problem solved using Differential Evolution. Different material properties and algorithm parameters were isolated and investigated, which explained the capabilities of the configurable parameters and showed that the ideal nanomaterial choice depends upon problem complexity. Subsequently, drawing from concepts in the wider machine learning field, several enhancements to monolithic EiM processors were proposed and investigated. These ensured more efficient use of training data, better placement of classification decision boundaries, an independently optimised readout layer, and a smoother search space. Finally, scalability and performance issues were addressed by constructing in-Materio Neural Networks (iM-NNs), where several EiM processors were stacked in parallel and operated as physical realisations of hidden-layer neurons. Greater flexibility in system implementation was achieved by re-using a single physical substrate recursively as several virtual neurons, although this sacrificed faster parallelised execution. These novel iM-NNs were first implemented using simulated in-materio neurons and trained for classification as Extreme Learning Machines, which were found to outperform artificial networks of a similar size. Physical iM-NNs were then implemented using a Raspberry Pi, a custom hardware interface, and Lambda Diode based physical in-materio neurons, which were trained successfully with neuroevolution. A more complex AutoEncoder structure was then proposed and implemented physically to perform dimensionality reduction on a handwritten digits dataset, outperforming both Principal Component Analysis and artificial AutoEncoders. This work presents an approach to exploit systems with interesting physical dynamics and leverage them as a computational resource. Such systems could become low-power, high-speed, unconventional computing assets in the future.
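
    A minimal sketch of the EiM loop described above: Differential Evolution searches the configuration space of a toy simulated 'material' so that a simple threshold readout of its response separates two classes. The tanh-mixing material model is an invented stand-in for a real conductive network.

        # Minimal EiM-style loop: Differential Evolution configures a toy
        # simulated "material" so its output separates two classes.
        # The material model below is an invented stand-in.
        import numpy as np
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(-1, 0.4, (50, 2)), rng.normal(1, 0.4, (50, 2))])
        y = np.hstack([np.zeros(50), np.ones(50)])

        def material_response(inputs, config):
            """Toy nonlinear 'material': mixes data inputs and config voltages."""
            a, b, c, d, bias = config
            return np.tanh(a * inputs[:, 0] + b * inputs[:, 1]
                           + c * inputs[:, 0] * inputs[:, 1] + d) + bias

        def fitness(config):
            out = material_response(X, config)
            preds = (out > 0).astype(float)   # simple threshold readout
            return np.mean(preds != y)        # classification error to minimise

        result = differential_evolution(fitness, bounds=[(-5, 5)] * 5,
                                        seed=0, maxiter=50)
        print("best error rate:", result.fun)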

    GAC-MAC-SGA 2023 Sudbury Meeting: Abstracts, Volume 46


    Bio-inspired optimization in integrated river basin management

    Water resources worldwide are facing severe challenges in terms of quality and quantity. It is essential to conserve, manage, and optimize water resources and their quality through integrated water resources management (IWRM). IWRM is an interdisciplinary field that works on multiple levels to maximize the socio-economic and ecological benefits of water resources. Since these benefits are directly influenced by a river's ecological health, the point of interest should start at the basin level. The main objective of this study is to evaluate the application of bio-inspired optimization techniques in integrated river basin management (IRBM). This study demonstrates the application of versatile, flexible, and yet simple metaheuristic bio-inspired algorithms in IRBM. In a novel approach, the bio-inspired optimization algorithms Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) are used to spatially distribute mitigation measures within a basin to reduce the long-term annual mean total nitrogen (TN) concentration at the outlet of the basin. The Upper Fuhse river basin, developed in the hydrological model Hydrological Predictions for the Environment (HYPE), is used as a case study. ACO and PSO are coupled with the HYPE model to distribute a set of measures and compute the resulting TN reduction. The algorithms spatially distribute nine crop- and subbasin-level mitigation measures under four categories. Both algorithms successfully yield a discrete combination of measures to reduce the long-term annual mean TN concentration. They achieved an 18.65% reduction, and their performances were on par with each other. This study has established the applicability of these bio-inspired optimization algorithms in successfully distributing the TN mitigation measures within the river basin.

    Stakeholder involvement is a crucial aspect of IRBM. It ensures that researchers and policymakers are aware of the ground reality through the large amounts of information collected from stakeholders. Including stakeholders in policy planning and decision-making legitimizes the decisions and eases their implementation. Therefore, a socio-hydrological framework is developed and tested in the Larqui river basin, Chile, based on a field survey, to explore the conditions under which farmers would implement or extend the width of vegetative filter strips (VFS) to prevent soil erosion. The framework consists of a behavioral, social model (extended Theory of Planned Behavior, TPB) and an agent-based model (ABM, developed in NetLogo) coupled with the results from the vegetative filter model (Vegetative Filter Strip Modeling System, VFSMOD-W). The results showed that the ABM corroborates the survey results and that the farmers are willing to extend the width of VFS as long as their utility stays positive. This framework can be used to develop tailor-made policies for river basins, based on the conditions of the river basins and the stakeholders' requirements, to motivate stakeholders to adopt sustainable practices.

    It is vital to assess whether the proposed management plans achieve the expected results for the river basin and whether the stakeholders will accept and implement them. Assessment via simulation tools ensures effective implementation and realization of the targets stipulated by the decision-makers. In this regard, this dissertation introduces the application of bio-inspired optimization techniques in the field of IRBM. The successful discrete combinatorial optimization of the spatial distribution of mitigation measures by ACO and PSO, and the novel socio-hydrological framework using the ABM, demonstrate the strength and diverse applicability of bio-inspired optimization algorithms.
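
    A minimal sketch of the spatial-allocation step with PSO; the cheap per-subbasin effectiveness objective below is an invented stand-in for the coupled HYPE model, which in the study computes the resulting TN concentration for each candidate distribution of measures.

        # Minimal PSO sketch for distributing mitigation measures across
        # subbasins. The quadratic-cost objective is a stand-in for HYPE.
        import numpy as np

        rng = np.random.default_rng(0)
        n_sub, n_measures, n_particles = 12, 9, 30

        # Invented per-subbasin effectiveness of each measure (stand-in data).
        effect = rng.uniform(0, 1, size=(n_sub, n_measures))

        def tn_concentration(assignment):
            """Stand-in objective: mean TN at the basin outlet (lower is better)."""
            return 10.0 - effect[np.arange(n_sub), assignment].mean() * 5.0

        def evaluate(p):
            return tn_concentration(np.rint(p).astype(int))

        pos = rng.uniform(0, n_measures - 1, (n_particles, n_sub))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = np.array([evaluate(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(100):
            r1 = rng.uniform(size=pos.shape)
            r2 = rng.uniform(size=pos.shape)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0, n_measures - 1)
            vals = np.array([evaluate(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()

        print("best measure per subbasin:", np.rint(gbest).astype(int))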

    Achieving High Renewable Energy Integration in Smart Grids with Machine Learning

    The integration of high levels of renewable energy into smart grids is crucial for achieving a sustainable and efficient energy infrastructure. However, this integration presents significant technical and operational challenges due to the intermittent nature and inherent uncertainty of renewable energy sources (RES). Therefore, the energy storage system (ESS) has always been bound to renewable energy, and its charge and discharge control has become an important part of the integration. The addition of RES and ESS brings complex control, communication, and monitoring capabilities, which also make the grid more vulnerable to attacks and pose new challenges for cybersecurity. A large number of works have been devoted to the optimal integration of RES and ESS into the traditional grid, along with combining ESS scheduling control with traditional Optimal Power Flow (OPF) control. Cybersecurity problems in RES-integrated grids have also gradually attracted researchers' interest.

    In recent years, machine learning techniques have emerged across different research fields, including the optimization of renewable energy integration in smart grids. Reinforcement learning (RL), which trains an agent to interact with the environment by making sequential decisions that maximize the expected future reward, is used here as an optimization tool. This dissertation explores the application of RL algorithms and models to achieve high renewable energy integration in smart grids. The research questions focus on the effectiveness and benefits of renewable energy integration for individual consumers and electricity utilities, applying machine learning techniques to optimize the behaviors of the ESS, the generators, and other components in the grid. The objectives of this research are to investigate current algorithms for renewable energy integration in smart grids, explore RL algorithms, develop novel RL-based models and algorithms for optimization control and cybersecurity, evaluate their performance through simulations on real-world data sets, and provide practical recommendations for implementation.

    The research approach includes a comprehensive literature review to understand the challenges and opportunities associated with renewable energy integration. Various optimization algorithms, such as linear programming (LP) and dynamic programming (DP), and various RL algorithms, such as Deep Q-Networks (DQN) and Deep Deterministic Policy Gradient (DDPG), are applied to solve problems arising during renewable energy integration in smart grids. Simulation studies on real-world data, including different types of loads and solar and wind energy profiles, are used to evaluate the performance and effectiveness of the proposed machine learning techniques. The results provide insights into the capabilities and limitations of machine learning in solving optimization problems in the power system. Compared with traditional optimization tools, the RL approach has the advantage of real-time implementation, at the cost of training time and a lack of guaranteed model performance. Recommendations and guidelines for the practical implementation of RL algorithms on power systems are provided in the appendix.
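
    A minimal tabular Q-learning sketch of the ESS charge/discharge scheduling problem the dissertation addresses with deep RL (DQN and DDPG replace the table with neural networks); the price profile, battery discretisation, and reward are illustrative assumptions.

        # Minimal tabular Q-learning sketch of ESS charge/discharge scheduling.
        # Prices and the battery model are illustrative assumptions; deep RL
        # variants (DQN, DDPG) replace the Q-table with neural networks.
        import numpy as np

        rng = np.random.default_rng(0)
        price = np.array([2, 2, 1, 1, 3, 5, 5, 3])   # toy hourly price profile
        n_soc, n_hours = 5, len(price)               # discretised state of charge
        actions = [-1, 0, 1]                         # discharge / idle / charge
        Q = np.zeros((n_hours, n_soc, len(actions)))
        alpha, gamma, eps = 0.1, 0.95, 0.1

        for episode in range(5000):
            soc = 2                                  # start half charged
            for t in range(n_hours):
                a = rng.integers(3) if rng.random() < eps else int(Q[t, soc].argmax())
                nxt = min(max(soc + actions[a], 0), n_soc - 1)
                # Reward: revenue for energy sold (discharge), cost if bought.
                reward = -price[t] * (nxt - soc)
                future = Q[t + 1, nxt].max() if t + 1 < n_hours else 0.0
                Q[t, soc, a] += alpha * (reward + gamma * future - Q[t, soc, a])
                soc = nxt

        # Roll out the learned greedy policy from the initial state of charge.
        soc, plan = 2, []
        for t in range(n_hours):
            a = int(Q[t, soc].argmax())
            plan.append(actions[a])
            soc = min(max(soc + actions[a], 0), n_soc - 1)
        print("learned charge/discharge schedule:", plan)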