
    Advances in machine learning algorithms for financial risk management

    In this thesis, three novel machine learning techniques are introduced to address distinct yet interrelated challenges in financial risk management. These approaches collectively offer a comprehensive strategy, beginning with the precise classification of credit risks, advancing through the nuanced forecasting of financial asset volatility, and ending with the strategic optimisation of financial asset portfolios. Firstly, a Hybrid Dual-Resampling and Cost-Sensitive technique is proposed to combat the prevalent issue of class imbalance in financial datasets, particularly in credit risk assessment. The key process involves the creation of heuristically balanced datasets. It uses a resampling technique based on Gaussian mixture modelling to generate a synthetic minority class from the minority class data and concurrently applies k-means clustering to the majority class. Feature selection is then performed using the Extra Trees ensemble technique. A cost-sensitive logistic regression model is subsequently applied to predict the probability of default using the heuristically balanced datasets. The results underscore the effectiveness of our proposed technique, with superior performance observed in comparison to other imbalanced preprocessing approaches. This advancement in credit risk classification lays a solid foundation for understanding individual financial behaviours, a crucial first step in the broader context of financial risk management. Building on this foundation, the thesis then explores the forecasting of financial asset volatility, a critical aspect of understanding market dynamics. A novel model that combines a Triple Discriminator Generative Adversarial Network with a continuous wavelet transform is proposed.
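The hybrid resampling pipeline described above can be sketched roughly as follows. This is an illustrative reconstruction with scikit-learn on synthetic data, not the thesis code; the class names, cluster counts, and parameters are assumptions.

```python
# Illustrative sketch of the hybrid dual-resampling idea: a Gaussian mixture
# model fitted to the minority class generates synthetic minority samples,
# k-means centroids summarise (undersample) the majority class, and a
# cost-sensitive logistic regression is trained on the rebalanced data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_maj = rng.normal(0.0, 1.0, size=(500, 4))   # majority class (label 0)
X_min = rng.normal(2.0, 1.0, size=(50, 4))    # minority class (label 1)

# 1. Oversample the minority class by sampling from a fitted GMM.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_min)
X_min_syn, _ = gmm.sample(200)

# 2. Undersample the majority class to its k-means cluster centroids.
kmeans = KMeans(n_clusters=200, n_init=10, random_state=0).fit(X_maj)
X_maj_red = kmeans.cluster_centers_

# 3. Train a cost-sensitive logistic regression on the balanced data.
X = np.vstack([X_maj_red, X_min, X_min_syn])
y = np.array([0] * len(X_maj_red) + [1] * (len(X_min) + len(X_min_syn)))
clf = LogisticRegression(class_weight="balanced").fit(X, y)
proba_default = clf.predict_proba(X[:1])  # predicted probability per class
```

The feature-selection step with Extra Trees would slot in between steps 2 and 3; it is omitted here for brevity.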
The proposed model can decompose volatility time series into signal-like and noise-like frequency components, allowing the separate detection and monitoring of non-stationary volatility data. The network comprises a wavelet transform component consisting of continuous wavelet transform and inverse wavelet transform sub-components, an auto-encoder component made up of encoder and decoder networks, and a Generative Adversarial Network consisting of triple Discriminator and Generator networks. During training, the proposed Generative Adversarial Network employs an ensemble of losses: an unsupervised loss derived from the Generative Adversarial Network component, a supervised loss, and a reconstruction loss. Data from nine financial assets are employed to demonstrate the effectiveness of the proposed model. This approach not only enhances our understanding of market fluctuations but also bridges the gap between individual credit risk assessment and macro-level market analysis. Finally, the thesis concludes with a novel technique for portfolio optimisation. This involves a model-free reinforcement learning strategy that takes historical Low, High, and Close prices of assets as input and produces asset weights as output. A deep Capsule Network is employed to simulate the investment strategy, which involves reallocating the different assets to maximise the expected return on investment based on deep reinforcement learning. To provide more learning stability in an online training process, a Markov Differential Sharpe Ratio reward function is proposed as the reinforcement learning objective function. Additionally, a Multi-Memory Weight Reservoir is introduced to facilitate the learning process and the optimisation of computed asset weights, helping to sequentially re-balance the portfolio throughout a specified trading period.
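As an illustration of the kind of incremental reward involved, the sketch below implements the classic differential Sharpe ratio of Moody and Saffell, on which a Markov Differential Sharpe Ratio reward could plausibly build; the thesis variant may differ in detail, and the decay rate here is an assumption.

```python
# Minimal sketch of a differential Sharpe ratio reward: exponential moving
# estimates A (first moment) and B (second moment) of returns yield an
# incremental, per-step reward suitable for online reinforcement learning.
def differential_sharpe_ratio(returns, eta=0.01):
    """Return one incremental reward per observed return."""
    A, B = 0.0, 0.0
    rewards = []
    for r in returns:
        dA = r - A
        dB = r * r - B
        denom = (B - A * A) ** 1.5
        # First step (or degenerate variance) gives no usable gradient.
        rewards.append((B * dA - 0.5 * A * dB) / denom if denom > 1e-12 else 0.0)
        A += eta * dA  # update moving estimate of the mean return
        B += eta * dB  # update moving estimate of the second moment
    return rewards

rewards = differential_sharpe_ratio([0.01, -0.02, 0.015, 0.005])
```

Because each step yields its own reward rather than waiting for an end-of-episode Sharpe ratio, the agent can be trained online, which is the stability property the abstract emphasises.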
Incorporating the insights gained from volatility forecasting into this strategy shows the interconnected nature of the financial markets. Comparative experiments with other models demonstrated that our proposed technique is capable of achieving superior results based on risk-adjusted reward performance measures. In a nutshell, this thesis not only addresses individual challenges in financial risk management but also integrates them into a comprehensive framework: from enhancing the accuracy of credit risk classification, through the improved understanding of market volatility, to the optimisation of investment strategies. These methodologies collectively show the potential of machine learning to improve financial risk management.

    A Trust Management Framework for Vehicular Ad Hoc Networks

    The inception of Vehicular Ad Hoc Networks (VANETs) provides an opportunity for road users and public infrastructure to share information that improves the operation of roads and the driver experience. However, such systems can be vulnerable to malicious external entities and legitimate users. Trust management is used to address attacks from legitimate users in accordance with a user’s trust score. Trust models evaluate messages to assign rewards or punishments. This can be used to influence a driver’s future behaviour or, in extremis, block the driver. With receiver-side schemes, various methods are used to evaluate trust, including reputation computation, neighbour recommendations, and storing historical information. However, they incur overhead and add delay when deciding whether to accept or reject messages. In this thesis, we propose a novel Tamper-Proof Device (TPD) based trust framework for managing the trust of multiple drivers at the sender-side vehicle that updates trust scores, stores information, and protects it from malicious tampering. The TPD also regulates, rewards, and punishes each specific driver, as required. Furthermore, the trust score determines the classes of message that a driver can access. Dissemination of feedback is only required when there is an attack (conflicting information). A Road-Side Unit (RSU) rules on a dispute, using either the sum of products of trust and feedback or official vehicle data if available. These “untrue attacks” are resolved by an RSU using collaboration, which then provides a fixed amount of reward and punishment, as appropriate. Repeated attacks are addressed by incremental punishments and, potentially, driver access-blocking when conditions are met. The lack of sophistication in this fixed RSU assessment scheme is then addressed by a novel fuzzy logic-based RSU approach. This determines a fairer level of reward and punishment based on the severity of the incident, the driver’s past behaviour, and RSU confidence.
The fuzzy RSU controller assesses judgements in such a way as to encourage drivers to improve their behaviour. Although any driver can lie in any situation, we believe that trustworthy drivers are more likely to remain so, and vice versa. We capture this behaviour in a Markov chain model for the sender and reporter driver behaviours, where a driver’s truthfulness is influenced by their trust score and trust state. For each trust state, the driver’s likelihood of lying or honesty is set by a probability distribution which is different for each state. This framework is analysed in Veins using various classes of vehicles under different traffic conditions. Results confirm that the framework operates effectively in the presence of untrue and inconsistent attacks. Correct functioning is confirmed by the system appropriately classifying incidents when clarifier vehicles send truthful feedback. The framework is also evaluated against a centralized reputation scheme, and the results demonstrate that it outperforms the reputation approach in terms of reduced communication overhead and shorter response time. Next, we perform a set of experiments to evaluate the performance of the fuzzy assessment in Veins. The fuzzy and fixed RSU assessment schemes are compared, and the results show that the fuzzy scheme produces better overall driver behaviour. The Markov chain driver behaviour model is also examined when changing the initial trust score of all drivers.
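The Markov chain behaviour model described above, in which each trust state carries its own probability of lying, can be sketched as follows. The state names, lying probabilities, and transition rule are illustrative assumptions, not the thesis parameters.

```python
# Illustrative sketch of a per-trust-state driver behaviour Markov chain:
# trusted drivers lie rarely; a lie demotes the trust state (punishment)
# and honesty promotes it (reward).
import random

LIE_PROB = {"low": 0.6, "medium": 0.3, "high": 0.05}  # assumed values
ORDER = ["low", "medium", "high"]

def step(state, rng):
    """Simulate one message: return (lied, next_trust_state)."""
    lied = rng.random() < LIE_PROB[state]
    i = ORDER.index(state)
    i = max(i - 1, 0) if lied else min(i + 1, len(ORDER) - 1)
    return lied, ORDER[i]

rng = random.Random(42)
state, lies = "medium", 0
for _ in range(1000):
    lied, state = step(state, rng)
    lies += lied
# After many messages the driver settles near a trust state determined by
# the interplay of the per-state lying probabilities and the transitions.
```

In the thesis framework the transitions would be driven by trust-score updates and RSU rulings rather than this simple promote/demote rule; the sketch only shows the state-dependent truthfulness mechanism.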

    Structuring the State’s Voice of Contention in Harmonious Society: How Party Newspapers Cover Social Protests in China

    During the Chinese Communist Party’s (CCP) campaign of building a ‘harmonious society’, how do the official newspapers cover instances of social contention on the ground? Answering this question will shed light not only on how the party press works but also on how the state and society interact in today’s China. This thesis conceptualises this phenomenon with a multi-faceted and multi-levelled notion of ‘state-initiated contentious public sphere’ to capture the complexity of mediated relations between the state and social contention in the party press. Adopting a relational approach, this thesis analyses 1758 news reports of ‘mass incidents’ in the People’s Daily and the Guangming Daily between 2004 and 2020, employing cluster analysis, qualitative comparative analysis, and social network analysis. The thesis finds significant differences in the patterns of contentious coverage in the party press at the event and province levels, and an uneven distribution of attention to social contention across incidents and regions. For ‘reported regions’, the thesis distinguishes four types of coverage and presents how the party press responds differently to social contention in different scenarios at the provincial level. For ‘identified incidents’, the thesis distinguishes a cumulative type of visibility based on the quantity of coverage from a relational visibility based on the structure emerging from coverage, and explains how different news-making rationales determine whether instances receive similar amounts of coverage or occupy similar positions within coverage. Eventually, by demonstrating how the Chinese state strategically uses the party press to respond to social contention and how social contention is journalistically placed in different positions in the state’s eyes, this thesis argues that what social contention leads to is the establishment of complex state-contention relations channelled through the party press.

    Pathways of development of dynamic capabilities for servitization transformation: a longitudinal multi-case study

    Servitization is a transformation process requiring manufacturers to develop dynamic capabilities to support the change process and overcome emerging challenges over time. In this paper, we study pathways of development of dynamic capabilities for servitization transformation (the sequence of the development of capabilities and how they work together over time) and how they relate to servitization transformation outcomes. We do so based on six longitudinal in-depth case studies of manufacturing firms which, having departed from similar servitization maturity starting points, followed different capability development pathways in their transformation processes and achieved different outcomes. We found that successful pathways of development of capabilities for servitization transformation are associated with (1) developing (first-order) dynamic service provision capabilities sequentially, following a specific order over time, and (2) developing (second-order) dynamic reconfiguring capabilities to overcome challenges and sustain the development of service provision capabilities and the transformation process. Our study contributes to the literature by providing an in-depth understanding of how the pathways of development of dynamic capabilities over time influence the outcomes of the servitization transformation process. It is one of the first studies to unveil in detail the mechanisms by which different reconfiguring and service provision capabilities work together over time to facilitate the servitization transformation process.

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    INTEGRATED COMPUTER-AIDED DESIGN, EXPERIMENTATION, AND OPTIMIZATION APPROACH FOR PEROVSKITES AND PETROLEUM PACKAGING PROCESSES

    According to the World Economic Forum report, the U.S. currently has an energy efficiency of just 30%, illustrating the potential scope and need for efficiency enhancement and waste minimization. In the U.S. energy sector, petroleum and solar energy are the two key pillars that have the potential to create research opportunities for a transition to a cleaner, greener, and sustainable future. In this research endeavor, the focus is on two pivotal areas: (i) computer-aided perovskite solar cell synthesis; and (ii) optimization of flow processes through multiproduct petroleum pipelines. In the area of perovskite synthesis, the emphasis is on the enhancement of structural stability, lower costs, and sustainability. Utilizing modeling and optimization methods for computer-aided molecular design (CAMD), efficient, sustainable, less toxic, and economically viable alternatives to conventional lead-based perovskites are obtained. In the second area, the optimization of flow processes through multiproduct petroleum pipelines, an actual industrial-scale operation for packaging multiple lube-oil blends is studied. Through an integrated approach of experimental characterization, process design, procedural improvements, testing protocols, control mechanisms, mathematical modeling, and optimization, the limitations of traditional packaging operations are identified, and innovative operational paradigms and strategies are developed by incorporating methods from process systems engineering and data-driven approaches.

    Fairness-aware Machine Learning in Educational Data Mining

    Fairness is an essential requirement of every educational system, which is reflected in a variety of educational activities. With the extensive use of Artificial Intelligence (AI) and Machine Learning (ML) techniques in education, researchers and educators can analyze educational (big) data and propose new (technical) methods in order to support teachers, students, or administrators of (online) learning systems in the organization of teaching and learning. Educational data mining (EDM) is the result of the application and development of data mining (DM), and ML techniques to deal with educational problems, such as student performance prediction and student grouping. However, ML-based decisions in education can be based on protected attributes, such as race or gender, leading to discrimination of individual students or subgroups of students. Therefore, ensuring fairness in ML models also contributes to equity in educational systems. On the other hand, bias can also appear in the data obtained from learning environments. Hence, bias-aware exploratory educational data analysis is important to support unbiased decision-making in EDM. In this thesis, we address the aforementioned issues and propose methods that mitigate discriminatory outcomes of ML algorithms in EDM tasks. Specifically, we make the following contributions: We perform bias-aware exploratory analysis of educational datasets using Bayesian networks to identify the relationships among attributes in order to understand bias in the datasets. We focus the exploratory data analysis on features having a direct or indirect relationship with the protected attributes w.r.t. prediction outcomes. We perform a comprehensive evaluation of the sufficiency of various group fairness measures in predictive models for student performance prediction problems. 
A variety of experiments on several educational datasets with different fairness measures are performed to provide users with a broad view of unfairness from diverse aspects. We also deal with the student grouping problem in collaborative learning. We introduce the fair-capacitated clustering problem, which takes into account cluster fairness and cluster cardinalities, and propose two approaches, namely hierarchical clustering and partitioning-based clustering, to obtain fair-capacitated clusterings. We further introduce the multi-fair capacitated (MFC) students-topics grouping problem, which satisfies students' preferences while ensuring balanced group cardinalities and maximizing the diversity of members regarding the protected attribute. We propose three approaches: a greedy heuristic approach, a knapsack-based approach using a vanilla maximal 0-1 knapsack formulation, and an MFC knapsack approach based on a group fairness knapsack formulation. In short, the findings described in this thesis demonstrate the importance of fairness-aware ML in educational settings. We show that bias-aware data analysis, fairness measures, and fairness-aware ML models are essential aspects of ensuring fairness in EDM and the educational environment.
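Two group fairness measures of the kind evaluated in such studies can be computed as in the sketch below, on toy data; the variable names are illustrative, and the thesis evaluates a broader set of measures.

```python
# Minimal sketch of two common group fairness measures: statistical parity
# difference (difference in positive-prediction rates between groups) and
# equal opportunity difference (difference in true positive rates).
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # e.g. actual pass/fail
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])   # model predictions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute value

def statistical_parity_difference(y_pred, group):
    """P(y_hat = 1 | group = 0) - P(y_hat = 1 | group = 1)."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(0) - tpr(1)

spd = statistical_parity_difference(y_pred, group)
eod = equal_opportunity_difference(y_true, y_pred, group)
```

A value near zero on either measure indicates the model treats the two groups similarly by that criterion; the measures can disagree, which is why the thesis compares several of them.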

    Combined Nutrition and Exercise Interventions in Community Groups

    Diet and physical activity are two key modifiable lifestyle factors that influence health across the lifespan, supporting the prevention and management of chronic diseases and reducing the risk of premature death through several biological mechanisms. Community-based interventions contribute to public health, as they have the potential to reach high population-level impact through their focus on groups that share a common culture or identity in their natural living environment. While the health benefits of a balanced diet and regular physical activity are commonly studied separately, interventions that combine these two lifestyle factors have the potential to induce greater benefits in community groups than strategies focusing on only one or the other. Thus, this Special Issue, entitled “Combined Nutrition and Exercise Interventions in Community Groups”, comprises manuscripts that highlight this combined approach (balanced diet and regular physical activity) in community settings. The contributors to this Special Issue are well-recognized professionals in complementary fields such as education, public health, nutrition, and exercise. This Special Issue highlights the latest research regarding combined nutrition and exercise interventions among different community groups and includes research articles conducted across five continents (Africa, Asia, America, Europe and Oceania), as well as reviews and systematic reviews.

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Ecological and confined domain ontology construction scheme using concept clustering for knowledge management

    Knowledge management in a structured system is a complicated task that requires common, standardized methods that are acceptable to all actors in a system. Ontology, in this regard, is a primary element and plays a central role in knowledge management, interoperability between various departments, and better decision making. The ontology construction for structured systems involves logical and structural complications. Researchers have already proposed a variety of domain ontology construction schemes. However, these schemes do not include some important phases of ontology construction that make ontologies more collaborative. Furthermore, these schemes do not provide details of the activities and methods involved in the construction of an ontology, which may cause difficulty in implementing the ontology. The major objectives of this research were to provide a comparison between some existing ontology construction schemes and to propose an enhanced ecological and confined domain ontology construction (EC-DOC) scheme for structured knowledge management. The proposed scheme introduces five important phases to construct an ontology, with a major focus on the conceptualization and clustering of domain concepts. In the conceptualization phase, a glossary of domain-related concepts and their properties is maintained, and a Fuzzy C-Means soft clustering mechanism is used to form clusters of these concepts. In addition, the localization of concepts is performed immediately after the conceptualization phase, and a translation file of localized concepts is created. The EC-DOC scheme can provide accurate concepts regarding the terms for a specific domain, and these concepts can be made available in a preferred local language.
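The Fuzzy C-Means soft clustering step of the conceptualization phase can be sketched as follows. This is a minimal NumPy implementation on toy vectors, not the paper's implementation; in the EC-DOC scheme the inputs would be domain concepts first represented as feature vectors.

```python
# Minimal fuzzy C-means: unlike k-means, each point gets a soft membership
# degree in every cluster (rows of U sum to 1), so a domain concept can
# partially belong to several concept clusters.
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Return (memberships U of shape (n, c), centroids of shape (c, d))."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # normalise memberships
    for _ in range(iters):
        W = U ** m                                 # fuzzified weights
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None] - centroids[None], axis=2) + 1e-12
        U = 1.0 / dist ** (2.0 / (m - 1.0))        # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U, centroids

# Two obvious groups of toy "concept vectors":
X = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 4.9]])
U, centroids = fuzzy_c_means(X)
```

The fuzzifier `m` controls how soft the memberships are: values near 1 approach hard k-means assignments, while larger values spread membership more evenly across clusters.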