Analysing behavioural factors that impact financial stock returns: the case of the COVID-19 pandemic in the financial markets.
This thesis represents a pivotal advancement in behavioural finance, integrating both classical and state-of-the-art models. It examines the performance and applicability of the Irrational Fractional Brownian Motion (IFBM) model, while also delving into the propagation of investor sentiment, emphasizing the indispensable role of hands-on experience in understanding, applying, and refining complex financial models.
Financial markets, characterized by "fat tails" in price change distributions, often challenge traditional models such as the Geometric Brownian Motion (GBM). Addressing this, the research pivots towards the Irrational Fractional Brownian Motion (IFBM) model, a groundbreaking model initially proposed by Dhesi and Ausloos (2016) and further enriched in Dhesi et al. (2019). This model, tailored to encapsulate the "fat tail" behaviour in asset returns, serves as the linchpin for the first chapter of this thesis.
Under the insightful guidance of Gurjeet Dhesi, a co-author of the IFBM model, we delved into its intricacies and practical applications. The first chapter aspires to evaluate the IFBM's performance in real-world scenarios, enhancing its methodological robustness. To achieve this, a tailored algorithm was crafted for its rigorous testing, alongside the application of a modified Chi-square test for stability assessment. Furthermore, the deployment of Shannon's entropy, from an information theory perspective, offers a nuanced understanding of the model. S&P500 data serves as an empirical testing bed, reflecting real-world financial market dynamics. Upon confirming the model's robustness, the IFBM is then applied to FTSE data during the tumultuous COVID-19 phase. This period, marked by extraordinary market oscillations, serves as an ideal backdrop to assess the IFBM's capability in tracking extreme market shifts.
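The entropy diagnostic described above can be sketched as follows. This is a minimal illustration, not the thesis's actual algorithm: it bins a return series into a histogram and computes Shannon entropy over the bin frequencies; the return series here are synthetic stand-ins (a thin-tailed Gaussian versus a heavier-tailed Student-t), not the actual S&P500 or FTSE data.

```python
import numpy as np

def shannon_entropy(returns, bins=50):
    """Shannon entropy (in bits) of the empirical return distribution.

    Returns are binned into a histogram; entropy is computed over the
    resulting relative frequencies, ignoring empty bins.
    """
    counts, _ = np.histogram(returns, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                    # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))

# Illustrative stand-ins for daily index returns (not real market data)
rng = np.random.default_rng(42)
gaussian_like = rng.normal(0.0, 0.01, 5000)           # thin-tailed benchmark
fat_tailed = rng.standard_t(df=3, size=5000) * 0.01   # heavier tails

print(shannon_entropy(gaussian_like))
print(shannon_entropy(fat_tailed))
```

With a fixed binning, comparing the entropy of model-simulated returns against the empirical series gives one scalar summary of how well a model reproduces the shape of the distribution.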
Transitioning to the second chapter, the focus shifts to the potentially influential realm of investor sentiment, seen as one of the many factors contributing to the presence of fat tails in return distributions. Building on insights from Baker and Wurgler (2007), we examine the potential impact of political speeches and daily briefings from 10 Downing Street during the COVID-19 crisis on market sentiment. Recognizing the profound market impact of such communications, the chapter seeks correlations between these briefings and market fluctuations.
Employing advanced Natural Language Processing (NLP) techniques, this chapter harnesses the power of the Bidirectional Encoder Representations from Transformers (BERT) algorithm (Devlin et al., 2018) to extract sentiment from governmental communications. By comparing the derived sentiment scores with stock market indices' performance metrics, potential relationships between public communications and market trajectories are unveiled. This approach represents a melding of traditional finance theory with state-of-the-art machine learning techniques, offering a fresh lens through which the dynamics of market behaviour can be understood in the context of external communications.
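The comparison step can be sketched in a few lines. Both series below are invented for illustration (neither the real BERT sentiment scores nor actual FTSE returns); the point is only the mechanics of relating a per-briefing sentiment score to a same-day market move via a correlation coefficient.

```python
import numpy as np

# Hypothetical daily sentiment scores for briefings (roughly in [-1, 1])
# paired with same-day index returns; both series are illustrative only.
sentiment = np.array([0.31, -0.52, 0.10, -0.75, 0.44, -0.20, 0.05, -0.61])
returns = np.array([0.004, -0.012, 0.001, -0.021, 0.009, -0.003, 0.000, -0.015])

# Pearson correlation between briefing sentiment and market moves
corr = np.corrcoef(sentiment, returns)[0, 1]
print(f"sentiment/return correlation: {corr:.3f}")
```

In practice one would also test lagged relationships (sentiment today versus returns tomorrow) and check significance before drawing any conclusion from a single correlation.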
In conclusion, this thesis provides an intricate examination of the IFBM model's performance and the influence of investor sentiment, especially under crisis conditions. This exploration not only advances the discourse in behavioural finance but also underscores the pivotal role of sophisticated models in understanding and predicting market trajectories.
Applications of Deep Learning Models in Financial Forecasting
In financial markets, deep learning techniques have sparked a revolution, reshaping conventional approaches and amplifying predictive capabilities. This thesis explored the applications of deep learning models to unravel insights and methodologies aimed at advancing financial forecasting.
The crux of the research problem lies in the applications of predictive models within financial domains, characterised by high volatility and uncertainty. This thesis investigated the application of advanced deep-learning methodologies in the context of financial forecasting, addressing the challenges posed by the dynamic nature of financial markets. These challenges were tackled by exploring a range of techniques, including convolutional neural networks (CNNs), long short-term memory networks (LSTMs), autoencoders (AEs), and variational autoencoders (VAEs), along with
approaches such as encoding financial time series into images. Through analysis, methodologies such as transfer learning, convolutional neural networks, long short-term memory networks, generative modelling, and image encoding of time series data were examined. These methodologies collectively offered a comprehensive toolkit for extracting meaningful insights from financial data.
The present work investigated the practicality of a deep learning CNN-LSTM model within the Directional Change framework to predict significant DC events, a task crucial for timely decision-making in financial markets. Furthermore, the potential of autoencoders and variational autoencoders to enhance financial forecasting accuracy and remove noise from financial time series data was explored. Leveraging their capacity within financial time series, these models offered promising avenues for improved data representation and subsequent forecasting. To further contribute to
financial prediction capabilities, a deep multi-model was developed that harnessed the power of pre-trained computer vision models. This innovative approach aimed to predict the VVIX, utilising the cross-disciplinary synergy between computer vision and financial forecasting. By integrating knowledge from these domains, novel insights into the prediction of market volatility were provided.
Sentiment analysis of financial Twitter posts with machine learning classifiers
This paper presents a sentiment analysis combining the lexicon-based and machine learning (ML)-based approaches in Turkish to investigate the public mood for the prediction of stock market behavior in BIST30, Borsa Istanbul. Our main motivation behind this study is to apply sentiment analysis to financial-related tweets in Turkish. We import 17,189 tweets posted under "#Borsaistanbul, #Bist, #Bist30, #Bist100" on Twitter between November 7, 2022, and November 15, 2022, via MAXQDA 2020, a qualitative data analysis program. For the lexicon-based side, we use the multilingual sentiment module offered by the Orange program to label the polarities of the 17,189 samples as positive, negative, or neutral. Neutral labels are discarded for the machine learning experiments. For the machine learning side, we select 9,076 samples labelled positive or negative to implement the classification problem with six different supervised machine learning classifiers, conducted in Python 3.6 with the sklearn library. In the experiments, 80% of the selected data is used for the training phase and the rest for the testing and validation phase. Results of the experiments show that the Support Vector Machine and Multilayer Perceptron classifiers perform better than the others, with accuracies of 0.89 and 0.88 and AUC values of 0.8729 and 0.8647 respectively. The other classifiers obtain approximately a 78.5% accuracy rate. It is possible to increase sentiment analysis accuracy with parameter optimization on a larger, cleaner, and more balanced dataset by changing the pre-processing steps. This work can be expanded in the future to develop better sentiment analysis using deep learning approaches.
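The classification setup described above can be sketched with sklearn. This is a toy reconstruction under stated assumptions: the texts and labels below are invented English stand-ins for the labelled Turkish tweets, and TF-IDF features are an assumption (the paper does not specify its feature extraction here); only the 80/20 split and the SVM classifier mirror the paper's setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Tiny illustrative corpus standing in for the labelled tweets
texts = ["great earnings, buying more", "strong rally expected today",
         "record profits announced", "index climbs on good news",
         "heavy losses, selling now", "market crash fears grow",
         "weak results, price falls", "index drops on bad news"] * 5
labels = ([1] * 4 + [0] * 4) * 5   # 1 = positive, 0 = negative

# 80/20 train/test split, as in the paper
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0, stratify=labels)

# TF-IDF features + support vector classifier
model = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
model.fit(X_train, y_train)
acc = model.score(X_test, y_test)
print(f"test accuracy: {acc:.2f}")
```

Swapping `SVC` for `MLPClassifier` (or the other four classifiers) while keeping the pipeline fixed is what makes the reported accuracy comparison meaningful.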
Advances in machine learning algorithms for financial risk management
In this thesis, three novel machine learning techniques are introduced to address distinct
yet interrelated challenges involved in financial risk management tasks. These approaches
collectively offer a comprehensive strategy, beginning with the precise classification of credit
risks, advancing through the nuanced forecasting of financial asset volatility, and ending
with the strategic optimisation of financial asset portfolios.
Firstly, a Hybrid Dual-Resampling and Cost-Sensitive technique has been proposed to combat the prevalent issue of class imbalance in financial datasets, particularly in credit risk
assessment. The key process involves the creation of heuristically balanced datasets to effectively address the problem. It uses a resampling technique based on Gaussian mixture
modelling to generate a synthetic minority class from the minority class data and concurrently uses k-means clustering on the majority class. Feature selection is then performed
using the Extra Tree Ensemble technique. Subsequently, a cost-sensitive logistic regression
model is applied to predict the probability of default using the heuristically balanced
datasets. The results underscore the effectiveness of our proposed technique, with superior
performance observed in comparison to other imbalanced preprocessing approaches. This
advancement in credit risk classification lays a solid foundation for understanding individual
financial behaviours, a crucial first step in the broader context of financial risk management.
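The pipeline above can be sketched end to end. This is a minimal sketch on synthetic data, not the thesis's method: the class sizes, mixture components, cluster count, and cost weights are all illustrative choices, and the feature-selection step (Extra Tree Ensemble) is omitted for brevity.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic imbalanced credit data: 500 non-defaults vs 40 defaults
X_maj = rng.normal(0.0, 1.0, size=(500, 4))
X_min = rng.normal(2.0, 1.0, size=(40, 4))

# Oversample the minority class by sampling from a fitted Gaussian mixture
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_min)
X_min_syn, _ = gmm.sample(200)

# Undersample the majority class via k-means cluster centroids
km = KMeans(n_clusters=200, n_init=5, random_state=0).fit(X_maj)
X_maj_red = km.cluster_centers_

X = np.vstack([X_maj_red, X_min, X_min_syn])
y = np.hstack([np.zeros(len(X_maj_red)), np.ones(len(X_min) + len(X_min_syn))])

# Cost-sensitive logistic regression: errors on defaults weighted more heavily
clf = LogisticRegression(class_weight={0: 1.0, 1: 2.0}).fit(X, y)
pd_scores = clf.predict_proba(X)[:, 1]   # estimated probabilities of default
print(pd_scores[:3])
```

The two resampling steps pull the class ratio towards balance from both sides before any cost weighting is applied, which is what distinguishes a dual-resampling scheme from plain oversampling.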
Building on this foundation, the thesis then explores the forecasting of financial asset volatility, a critical aspect of understanding market dynamics. A novel model that combines a
Triple Discriminator Generative Adversarial Network with a continuous wavelet transform
is proposed. The proposed model has the ability to decompose volatility time series into
signal-like and noise-like frequency components, allowing the separate detection and monitoring of non-stationary volatility data. The network comprises a wavelet transform
component consisting of continuous wavelet transforms and inverse wavelet transform components, an auto-encoder component made up of encoder and decoder networks, and a
Generative Adversarial Network consisting of triple Discriminator and Generator networks.
The proposed network employs an ensemble of losses during training: an unsupervised
loss derived from the Generative Adversarial Network component, a supervised loss, and a reconstruction loss. Data from nine financial assets are
employed to demonstrate the effectiveness of the proposed model. This approach not only
enhances our understanding of market fluctuations but also bridges the gap between individual credit risk assessment and macro-level market analysis.
Finally, the thesis ends with a novel technique for portfolio optimisation. This involves the use of a model-free reinforcement learning strategy for portfolio
optimisation using historical Low, High, and Close prices of assets as input with weights of
assets as output. A deep Capsule Network is employed to simulate the investment strategy, which involves the reallocation of the different assets to maximise the expected return
on investment based on deep reinforcement learning. To provide more learning stability in
an online training process, a Markov Differential Sharpe Ratio reward function has been
proposed as the reinforcement learning objective function. Additionally, a Multi-Memory
Weight Reservoir has also been introduced to facilitate the learning process and optimisation of computed asset weights, helping to sequentially re-balance the portfolio throughout
a specified trading period. Incorporating the insights gained from volatility forecasting into this strategy reflects the interconnected nature of the financial markets. Comparative experiments with other models demonstrated that our proposed technique is capable of achieving
superior results based on risk-adjusted reward performance measures.
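The reward function described above can be illustrated with the differential Sharpe ratio of Moody and Saffell (1998), on which such rewards are commonly based; the thesis's Markov Differential Sharpe Ratio may differ in detail, so this is a sketch of the underlying idea rather than the proposed function itself.

```python
def differential_sharpe_ratio(returns, eta=0.01):
    """Online differential Sharpe ratio rewards (after Moody & Saffell, 1998).

    A and B are exponential moving estimates of the first and second
    moments of returns; the reward at each step is the instantaneous
    sensitivity of the Sharpe ratio to the latest return.
    """
    A, B = 0.0, 0.0
    rewards = []
    for r in returns:
        dA, dB = r - A, r * r - B
        var = B - A * A
        # The reward is defined once the variance estimate is positive
        d = (B * dA - 0.5 * A * dB) / var ** 1.5 if var > 1e-12 else 0.0
        rewards.append(d)
        A += eta * dA
        B += eta * dB
    return rewards

rewards = differential_sharpe_ratio([0.01, -0.005, 0.02, 0.003, -0.01])
print(rewards)
```

Because each reward depends only on the current return and the running moment estimates, the agent receives a risk-adjusted signal at every step rather than waiting for an episode-level Sharpe ratio, which is what makes online training stable.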
In a nutshell, this thesis not only addresses individual challenges in financial risk management but also incorporates them into a comprehensive framework: from enhancing the accuracy of credit risk classification, through the improvement and understanding of market volatility, to the optimisation of investment strategies. These methodologies collectively show the potential of machine learning to improve financial risk management.
Robustness, Heterogeneity and Structure Capturing for Graph Representation Learning and its Application
Graph neural networks (GNNs) are potent methods for graph representation learning (GRL), which extract knowledge from complicated (graph) structured data in various real-world scenarios. However, GRL still faces many challenges. Firstly, GNN-based node classification may deteriorate substantially by overlooking the possibility of noisy data in graph structures, as models wrongly process the relations among nodes in the input graphs as the ground truth. Secondly, nodes and edges have different types in the real world, and it is essential to capture this heterogeneity in graph representation learning. Next, relations among nodes are not restricted to pairwise relations, and it is necessary to capture these complex relations accordingly. Finally, the absence of structural encodings, such as positional information, deteriorates the performance of GNNs. This thesis proposes novel methods to address the aforementioned problems:
1. Bayesian Graph Attention Network (BGAT): Developed for situations with scarce data, this method addresses the influence of spurious edges. Incorporating Bayesian principles into the graph attention mechanism enhances robustness, leading to competitive performance against benchmarks (Chapter 3).
2. Neighbour Contrastive Heterogeneous Graph Attention Network (NC-HGAT): By enhancing a cutting-edge self-supervised heterogeneous graph neural network model (HGAT) with neighbour contrastive learning, this method addresses heterogeneity and uncertainty simultaneously. Extra attention to edge relations in heterogeneous graphs also aids in subsequent classification tasks (Chapter 4).
3. A novel ensemble learning framework is introduced for predicting stock price movements. It adeptly captures both group-level and pairwise relations, leading to notable advancements over the existing state-of-the-art. The integration of hypergraph and graph models, coupled with the utilisation of auxiliary data via GNNs before a recurrent neural network (RNN), provides a deeper understanding of long-term dependencies between similar entities in multivariate time series analysis (Chapter 5).
4. A novel framework for graph structure learning is introduced, segmenting graphs into distinct patches. By harnessing the capabilities of transformers and integrating other position encoding techniques, this approach robustly captures intricate structural information within a graph. This results in a more comprehensive understanding of its underlying patterns (Chapter 6).
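The graph attention mechanism underlying methods 1 and 2 can be sketched in a few lines of NumPy. This is a generic single-node illustration with invented features and parameters, not the thesis's models: a BGAT would additionally place a distribution over the attention parameters rather than use point estimates as here.

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # stabilise before exponentiating
    e = np.exp(z)
    return e / e.sum()

# Tiny 3-node graph: node 0 attends over itself and its neighbours 1 and 2.
H = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])  # node features
a = np.array([0.8, -0.3, 0.1, 0.4])                 # attention parameters

neighbours = [0, 1, 2]
# Score each edge from the concatenated (source, neighbour) features
scores = np.array([a @ np.concatenate([H[0], H[j]]) for j in neighbours])
alpha = softmax(scores)               # attention coefficients, sum to 1
h0_new = alpha @ H[neighbours]        # attention-weighted aggregation
print(alpha, h0_new)
```

Spurious edges matter precisely because they receive some attention weight in this aggregation; down-weighting them under uncertainty is the robustness problem Chapters 3 and 4 address.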
Digitalization and Development
This book examines the diffusion of digitalization and Industry 4.0 technologies in Malaysia by focusing on the ecosystem critical for its expansion. The chapters examine the digital proliferation in major sectors of agriculture, manufacturing, e-commerce and services, as well as the intermediary organizations essential for the orderly performance of socioeconomic agents.
The book incisively reviews policy instruments critical for the effective and orderly development of the embedding organizations, and the regulatory framework needed to quicken the appropriation of socioeconomic synergies from digitalization and Industry 4.0 technologies. It highlights the importance of collaboration between government, academic and industry partners, as well as makes key recommendations on how to encourage adoption of IR4.0 technologies in the short- and long-term.
This book bridges the concepts and applications of digitalization and Industry 4.0 and will be a must-read for policy makers seeking to quicken the adoption of its technologies.
The development of an international model for technology adoption: the case of Hong Kong
The purpose of this study is to examine the causal relationships between the internal beliefs formation of a decision-maker on technology adoption and the extent of the development of a technology adoptive behaviour. In particular, this study aims to develop an International Model For Technology Adoption (IMTA), which builds upon the Theory of Planned Behaviour (Ajzen 1992) and improves on the framework of the Technology Acceptance Model (Davis 1986).
The development of such a model requires an understanding of the environmental factors which shape the cognitive processes of the decision maker. Hence, this is a behavioural model which investigates the constructs influencing the adoption behaviour and how the interaction between these constructs and the external variables can impact on the decision making process at the level of the firm.
Previous research on technology transfer and innovation diffusion has classified factors affecting the diffusion process into two dimensions: 1) external-influence and 2) internal-influence. Hence, in this research, the International Model For Technology Adoption looks at how the endogenous and exogenous factors enter into the cognitive process of a technology adoption decision through which attitudes and behavioural intentions are shaped.
Under the IMTA, the behavioural intention to adopt is a function of two exogenous variables, 1) Strategic Choice, and 2) Environmental Control. The Environmental Control factor is further categorised by two exogenous factors, namely, 1) Government Influence, and 2) Competitive Influence. In addition, the Competitive Influence factor is, in turn, classified into five forces: namely, 1) Industry Structure, 2) Price Intensity, 3) Demand Uncertainty, 4) Information Exposure, 5) Domestic Availability.
Regarding the cognitive process which forms the attitude to adopt, it is hypothesised to be affected by six other endogenous beliefs: 1) Compatibility; 2) Enhanced Value; 3) Perceived Benefits; 4) Adaptive Experiences; 5) Perceived Difficulty; and 6) Suppliers' Commitment.
A survey research method was utilised in this study, and the research instrument was developed after a comprehensive review of the relevant literature and an expert interview. A total of 298 completed questionnaires were returned, giving a response rate of 13.56%. Of the 298 questionnaires, 39 of the responses were unusable due to missing data. This gives a total of 259 usable questionnaires and an effective response rate of 11.78%.
The results of the analysis suggested that the fit of the International Model For Technology Adoption was good and that the data of this study supported the overall structure of the IMTA. When compared with the null model, which was used by EQS as a baseline model to judge the overall fitness of the IMTA, the IMTA yielded a value of 0.914 on the Comparative Fit Index, indicating a good model fit.
In addition, the results of the principal component analysis also illustrated that the 16-factor International Model For Technology Adoption was an adequate model to capture the information collected during the survey. The results showed that this 16-factor structure represented nearly 77% of the total variance of all items. A further analysis of the factor structure again revealed a perfect match between the conceptual dimensionality of the International Model For Technology Adoption and the empirical data collected in the survey.
However, the results of the hypothesis testing on the individual constructs were mixed. While not all of the ten hypothesised effects were statistically significant, almost all pointed in the direction conceptualised by the IMTA.
From these results, it can be interpreted that while the structural equation modelling analysis provided overall support for the International Model For Technology Adoption, the results for individual constructs revealed that some constructs had a larger impact than others on the decision to adopt foreign technology. In particular, the intention to adopt was greatly affected by the attitude of the prospective adopters, the influence of the government, and the degree of industry rivalry. However, the impact of the overall competitive influence factor on the intention to adopt was not supported by the results. Likewise, the existence of investment alternatives was not a serious concern for the prospective adopters.
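The reported fit statistic can be made concrete. The Comparative Fit Index compares the model's chi-square against that of the null (baseline) model; the formula below is the standard one, but the chi-square and degrees-of-freedom inputs here are invented for illustration (the study reports only the resulting CFI of 0.914, not these values).

```python
def comparative_fit_index(chi2_model, df_model, chi2_null, df_null):
    """CFI = 1 - max(chi2_M - df_M, 0) / max(chi2_M - df_M, chi2_0 - df_0, 0)."""
    num = max(chi2_model - df_model, 0.0)
    den = max(chi2_model - df_model, chi2_null - df_null, 0.0)
    return 1.0 - num / den if den > 0 else 1.0

# Illustrative inputs only; a CFI above roughly 0.90 is conventionally
# read as acceptable fit, which is how the study interprets its 0.914.
cfi = comparative_fit_index(250.0, 100.0, 1800.0, 120.0)
print(round(cfi, 3))
```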
TM-vector: A Novel Forecasting Approach for Stock Market Movement with a Rich Representation of Twitter and Market Data
Stock market forecasting has been a challenging part for many analysts and
researchers. Trend analysis, statistical techniques, and movement indicators
have traditionally been used to predict stock price movements, but text
extraction has emerged as a promising method in recent years. The use of neural
networks, especially recurrent neural networks, is abundant in the literature.
In most studies, the impact of different users was considered equal or ignored,
whereas users can in fact have different effects. In the current study, we introduce
TM-vector and then use this vector to train an IndRNN and ultimately model the
TM-vector and then use this vector to train an IndRNN and ultimately model the
market users' behaviour. In the proposed model, TM-vector is simultaneously
trained with both the extracted Twitter features and market information.
Various factors are used to improve the effectiveness of the proposed
forecasting approach, including the characteristics of each individual user,
their impact on each other, and their impact on the market, in order to predict
market direction more accurately. The Dow Jones 30 index is used in the current work.
The accuracy obtained for predicting daily stock changes of Apple is close to
95% or higher across various models, and the results for the other stocks are
also significant. Our results indicate the effectiveness of TM-vector in
predicting stock market direction.
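The IndRNN cell used above can be sketched in NumPy. This is a generic single-layer illustration of the cell from Li et al. (2018), not the paper's trained model: the dimensions, weights, and the random inputs standing in for daily TM-vector features are all invented.

```python
import numpy as np

def indrnn_step(x, h_prev, W, u, b):
    """One IndRNN step: each unit keeps an independent recurrent weight.

    h_t = relu(W @ x_t + u * h_{t-1} + b), where u is a vector, so each
    unit is recurrently connected only to itself (Li et al., 2018).
    """
    return np.maximum(0.0, W @ x + u * h_prev + b)

rng = np.random.default_rng(1)
n_in, n_hidden = 8, 4                  # e.g. TM-vector features -> hidden state
W = rng.normal(0, 0.1, (n_hidden, n_in))
u = rng.uniform(0.5, 0.9, n_hidden)    # per-unit recurrent weights
b = np.zeros(n_hidden)

h = np.zeros(n_hidden)
for t in range(5):                     # roll the cell over a short sequence
    x_t = rng.normal(0, 1, n_in)       # stand-in for one day's TM-vector input
    h = indrnn_step(x_t, h, W, u, b)
print(h)
```

The elementwise recurrence (`u * h_prev` instead of a full matrix) is what lets IndRNNs stack deeply and keep long-range dependencies without the gradient problems of a standard RNN.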