
    Social Networks, Asset Allocation and Portfolio Diversification

    In this thesis we consider the problem of choosing financial assets from equity markets for portfolio construction purposes. We adapt various measures to model the dependence structure among financial assets, taking both linear and non-linear relationships into consideration. The dependence structure is represented as social networks. We apply the data clustering technique of Frey and Dueck (2007) to these networks and study equity selections based on different dependence measures. A regime switching model (Perlin, 2014) is also considered in order to identify changes in market phases. The performance of the equity selections is evaluated within the mean-variance framework. In addition, we present a diversification analysis of the equity selections using the methodology proposed by Meucci (2009). The numerical tests are applied to three major Chinese equity markets. By varying the market environment, we gain a good understanding of the factors that influence the choice of financial assets.
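
    The following is a minimal sketch of the selection step described above, under stated assumptions: synthetic returns, a plain correlation matrix as the dependence measure, and scikit-learn's AffinityPropagation standing in for the Frey and Dueck (2007) clustering procedure. It is not the thesis's code.

```python
# Minimal sketch (assumptions noted above): cluster equities by return
# correlation with affinity propagation and keep one exemplar per cluster
# as the selected asset for that group.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=(250, 20))   # 250 days x 20 synthetic assets

corr = np.corrcoef(returns, rowvar=False)         # linear dependence measure
similarity = corr                                 # higher correlation = more similar

ap = AffinityPropagation(affinity="precomputed", random_state=0)
labels = ap.fit_predict(similarity)

selected = ap.cluster_centers_indices_            # exemplar equities, one per cluster
print("cluster labels:", labels)
print("selected asset indices:", selected)
```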

    Affinity-Based Reinforcement Learning : A New Paradigm for Agent Interpretability

    The steady increase in complexity of reinforcement learning (RL) algorithms is accompanied by a corresponding increase in opacity that obfuscates insights into their devised strategies. Methods in explainable artificial intelligence seek to mitigate this opacity by either creating transparent algorithms or extracting explanations post hoc. A third category exists that allows the developer to affect what agents learn: constrained RL has been used in safety-critical applications and prohibits agents from visiting certain states; preference-based RL agents have been used in robotics applications and learn state-action preferences instead of traditional reward functions. We propose a new affinity-based RL paradigm in which agents learn strategies that are partially decoupled from reward functions. Unlike entropy regularisation, we regularise the objective function with a distinct action distribution that represents a desired behaviour; we encourage the agent to act according to a prior while learning to maximise rewards. The result is an inherently interpretable agent that solves problems with an intrinsic affinity for certain actions. We demonstrate the utility of our method in a financial application: we learn continuous time-variant compositions of prototypical policies, each interpretable by its action affinities, that are globally interpretable according to customers’ financial personalities. Our method combines advantages from both constrained RL and preference-based RL: it retains the reward function but generalises the policy to match a defined behaviour, thus avoiding problems such as reward shaping and reward hacking. Unlike Boolean task composition, our method is a fuzzy superposition of different prototypical strategies, arriving at a more complex, yet interpretable, strategy.
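
    Below is a minimal sketch of the regularisation idea described in this abstract, not the authors' implementation: a REINFORCE-style policy loss with an added KL term that pulls the policy's action distribution towards a fixed "affinity" prior. The prior, the coefficient beta, and the toy data are illustrative assumptions.

```python
# Minimal sketch: policy-gradient loss regularised towards a desired
# action distribution (the "affinity") rather than towards maximum entropy.
import torch
import torch.nn.functional as F

def affinity_regularised_loss(logits, actions, returns, affinity_prior, beta=0.1):
    """REINFORCE-style loss plus KL(policy || prior), which encourages the
    agent to act according to the prior while maximising reward."""
    log_probs = F.log_softmax(logits, dim=-1)                  # (batch, n_actions)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    pg_loss = -(chosen * returns).mean()                       # maximise expected return

    kl = (log_probs.exp() * (log_probs - affinity_prior.log())).sum(-1).mean()
    return pg_loss + beta * kl

# Toy usage: 4 discrete actions, batch of 8, a prior that favours action 0.
logits = torch.randn(8, 4, requires_grad=True)
actions = torch.randint(0, 4, (8,))
returns = torch.randn(8)
prior = torch.tensor([0.7, 0.1, 0.1, 0.1])
loss = affinity_regularised_loss(logits, actions, returns, prior)
loss.backward()
```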

    Representation learning in finance

    Finance studies often employ heterogeneous datasets from different sources with different structures and frequencies. Some data are noisy, sparse, and unbalanced with missing values; some are unstructured, containing text or networks. Traditional techniques often struggle to combine these datasets and effectively extract information from them. This work explores representation learning as a proven machine learning technique for learning informative embeddings from complex, noisy, and dynamic financial data. This dissertation proposes novel factorization algorithms and network modeling techniques to learn the local and global representation of data in two specific financial applications: analysts’ earnings forecasts and asset pricing. Financial analysts’ earnings forecasts are among the most critical inputs for security valuation and investment decisions. However, it is challenging to fully utilize this type of data because of missing values. This work proposes a matrix-based algorithm, “Coupled Matrix Factorization,” and a tensor-based algorithm, the “Nonlinear Tensor Coupling and Completion Framework,” to impute missing values in analysts’ earnings forecasts and then use the imputed data to predict firms’ future earnings. Experimental analysis shows that missing value imputation and representation learning by coupled matrix/tensor factorization from the observed entries improve the accuracy of firm earnings prediction. The results confirm that representing financial time series in their natural third-order tensor form improves the latent representation of the data: it learns high-quality embeddings by avoiding the information loss of flattening data along spatial or temporal dimensions. Traditional asset pricing models focus on linear relationships among asset pricing factors and often ignore nonlinear interactions among firms and factors. This dissertation formulates novel methods to identify nonlinear asset pricing factors and develops asset pricing models that capture global and local properties of data. First, this work proposes an artificial neural network “autoencoder” based model to capture the latent asset pricing factors from the global representation of an equity index. It also shows that the autoencoder effectively identifies communal and non-communal assets in an index to facilitate portfolio optimization. Second, the global representation is augmented by propagating information from local communities, where the network determines the strength of this information propagation. Based on the Laplacian spectrum of the equity market network, a network factor “Z-score” is proposed to facilitate pertinent information propagation and capture dynamic changes in network structures. Finally, a “Dynamic Graph Learning Framework for Asset Pricing” is proposed to combine both global and local representations of data into one end-to-end asset pricing model. Using a graph attention mechanism and an information diffusion function, the proposed model learns new connections for implicit networks and refines connections of explicit networks. Experimental analysis shows that the proposed model incorporates information from negative and positive connections, captures the network evolution of the equity market over time, and outperforms other state-of-the-art asset pricing and predictive machine learning models in stock return prediction.
    In a broader context, this is a pioneering work in FinTech, particularly in understanding complex financial market structures and developing explainable artificial intelligence models for finance applications. This work effectively demonstrates the application of machine learning to model financial networks, capture nonlinear interactions in data, and provide investors with powerful data-driven techniques for informed decision-making.
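
    A minimal sketch of a coupled matrix factorisation of the kind named in this abstract follows, under assumptions of convenience (synthetic data, a shared firm-factor matrix, plain gradient descent); it is not the dissertation's algorithm.

```python
# Minimal sketch: factorise two matrices that share a firm dimension --
# analyst forecasts (firms x analysts, partially observed) and firm
# characteristics (firms x features) -- with a shared firm embedding U,
# then use the learned factors to impute the missing forecasts.
import numpy as np

rng = np.random.default_rng(1)
n_firms, n_analysts, n_feats, k = 50, 30, 10, 5

forecasts = rng.normal(size=(n_firms, n_analysts))
mask = rng.random((n_firms, n_analysts)) > 0.4        # observed entries only
characteristics = rng.normal(size=(n_firms, n_feats))

U = rng.normal(scale=0.1, size=(n_firms, k))          # shared firm factors
A = rng.normal(scale=0.1, size=(n_analysts, k))       # analyst factors
C = rng.normal(scale=0.1, size=(n_feats, k))          # characteristic factors

lr, lam = 0.01, 0.1
for _ in range(500):                                  # gradient descent on coupled loss
    err_f = mask * (U @ A.T - forecasts)              # error on observed cells only
    err_c = U @ C.T - characteristics
    U -= lr * (err_f @ A + err_c @ C + lam * U)
    A -= lr * (err_f.T @ U + lam * A)
    C -= lr * (err_c.T @ U + lam * C)

imputed = U @ A.T                                     # fills the missing forecasts
```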

    A Framework for Strategic Project Analysis and Prioritization

    Projects that support an organization’s long-term strategic intent and alignment are considered strategic projects. These projects must therefore be assessed for alignment with the organization’s current strategy, with attention to risk, organizational capability, resource availability, political influence, and socio-cultural factors. Quantitative and qualitative methods exist for prioritizing projects; however, they are usually tailored to specific industries. Although prioritization models are used in the private sector, comparable models for the public sector are not widely reported in the literature. This gap arises from the social implications of public projects and the differing value perceptions attached to different types of projects across public-sector decision-making environments. This thesis proposes a generic framework for developing a priority list from the available basket of projects and deciding which projects to undertake next, with a focus on public projects. The analysis in the framework considers critical prioritization factors obtained from the literature and clustered with an agglomerative text clustering technique. In the proposed framework, 13 critical clusters are identified and weighted using the Criteria Importance Through Intercriteria Correlation (CRITIC) method, and projects are ranked using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). In addition, the framework uses vector weighting to prioritize projects across industries. Its applicability is demonstrated on Qatar’s real estate and transportation projects. The outcomes obtained from the framework are compared with those obtained from experts using the System Usability Scale (SUS); the comparison shows that the framework provides good predictability of the projects selected for implementation.
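
    A minimal sketch of the CRITIC-plus-TOPSIS scoring step named above follows, under stated assumptions: an illustrative 4-project-by-5-criterion matrix and all criteria treated as benefit criteria, which need not match the thesis's setup.

```python
# Minimal sketch: CRITIC derives criterion weights from the decision matrix
# itself; TOPSIS then ranks projects by relative closeness to the ideal solution.
import numpy as np

X = np.array([[7, 3, 8, 2, 5],
              [6, 5, 6, 4, 7],
              [8, 2, 7, 3, 6],
              [5, 6, 5, 5, 4]], dtype=float)      # projects x criteria (illustrative)

# CRITIC: weight ~ contrast (std dev) times conflict with the other criteria.
norm = (X - X.min(0)) / (X.max(0) - X.min(0))
contrast = norm.std(0, ddof=1)
conflict = (1 - np.corrcoef(norm, rowvar=False)).sum(0)
info = contrast * conflict
w = info / info.sum()

# TOPSIS: weighted vector normalisation, then distances to ideal/anti-ideal.
V = X / np.sqrt((X ** 2).sum(0)) * w
ideal, anti = V.max(0), V.min(0)                  # benefit criteria assumed
d_plus = np.sqrt(((V - ideal) ** 2).sum(1))
d_minus = np.sqrt(((V - anti) ** 2).sum(1))
closeness = d_minus / (d_plus + d_minus)

print("criterion weights:", w.round(3))
print("priority order (best first):", np.argsort(-closeness))
```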

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning, among others. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, on the contrary, guides the course of low-level heuristics to search beyond the local optimality that limits traditional computation methods. This topic series has collected quality papers proposing cutting-edge methodology and innovative applications that drive the advances of AMC.
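
    As a generic illustration of the principle mentioned above, and not an example drawn from the collection itself, the sketch below uses simulated annealing: worse moves are occasionally accepted so the search can escape local optima. The toy objective and cooling schedule are assumptions.

```python
# Minimal sketch: simulated annealing on a one-dimensional objective with
# many local minima; acceptance of uphill moves decays as the temperature cools.
import math
import random

def objective(x):
    return x ** 2 + 10 * math.sin(x)              # multimodal toy objective

random.seed(0)
x = best = 5.0
temp = 10.0
while temp > 1e-3:
    cand = x + random.uniform(-1, 1)              # low-level neighbourhood move
    delta = objective(cand) - objective(x)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = cand                                  # accept, possibly uphill
    if objective(x) < objective(best):
        best = x
    temp *= 0.995                                 # cooling schedule
print("approximate minimiser:", round(best, 3))
```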

    An evolutionary theory of systemic risk and its mitigation for the global financial system

    This thesis is the outcome of theory development research into an identified gap in knowledge about systemic risk of the global financial system. It takes a systems-theoretic approach, incorporating a simulation-constructivist orientation towards the meaning of theory and theory development, within a realist constructivism epistemology for knowledge generation about complex social phenomena. Its specific purpose is to describe systemic risk of failure and explain how it occurs in the global financial system, in order to diagnose and understand the circumstances in which it arises and offer insights into how that risk may be mitigated. An outline theory is developed, introducing a new operational definition of systemic risk of failure in which notions from evolutionary economics, finance, and complexity science are combined with a general interpretation of entropy to explain how catastrophic phenomena arise in that system. A conceptual model incorporating the Icelandic financial system failure over the years 2003–2008 is constructed from this theory; when the results of simulation experiments using a verified computational representation of the model are validated with empirical data from that event and corroborated by theoretical triangulation, a null hypothesis about the theory is refuted. Furthermore, the results show that interplay between a lack of diversity in system participation strategies and shared exposure to potential losses may be a key operational mechanism of catastrophic tensions arising in the supply and demand of financial services. These findings suggest that new policy guidance for pre-emptive intervention should call for improved operational transparency from system participants and prompt access to data about their operational behaviour, in order to prevent positive feedback from inducing a failure of the system to operate within required parameters. The theory is then revised to reflect new insights exposed by simulation, and finally submitted as a new theory capable of unifying existing knowledge in this problem domain.
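
    One quantity highlighted by these findings, the diversity of system participation strategies, can be summarised with a normalised Shannon entropy; the measure and the example shares below are an illustration of that idea only, not the thesis's simulation model.

```python
# Minimal sketch: normalised Shannon entropy of strategy market shares as a
# diversity indicator (1 = maximally diverse, 0 = everyone uses one strategy).
import numpy as np

def strategy_diversity(shares):
    p = np.asarray(shares, dtype=float)
    p = p / p.sum()
    h = -(p[p > 0] * np.log(p[p > 0])).sum()
    return h / np.log(len(p))

print(strategy_diversity([0.25, 0.25, 0.25, 0.25]))   # diverse market -> 1.0
print(strategy_diversity([0.94, 0.02, 0.02, 0.02]))   # herding market -> low value
```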

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to allow a better discernment of the domain at hand, their representation becomes increasingly demanding in computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.
