359 research outputs found
Analysis and forecasting of asset quality, risk management and financial stability for the Greek banking system
The increase in non-performing loans (NPLs) during the financial crisis of 2008, which evolved into a fiscal crisis, together with the risk of a further medium-term increase due to the COVID-19 pandemic, has called into question the robustness of many banks and the financial stability of the entire sector. For the banking sector, the management of non-performing loans represents the most significant challenge, as their stock reached unprecedented levels and the deterioration in asset quality was widespread. Addressing the problem of non-performing loans with the assistance of credit risk modelling is important from both a micro- and a macro-prudential perspective: it would not only improve the financial soundness and capital adequacy of the banking sector, but also free up funds to be directed to other, more productive sectors of the economy.
This Thesis extends earlier research by employing a short-term monitoring system aimed at forecasting "failures", i.e. NPL creation. Such a monitoring system allows the risk of "failure" to change over time, measuring the likelihood of "failure" given the survival time and a set of explanatory variables. Cox proportional hazards models and survival trees can thus be usefully employed to forecast NPLs in Greek corporate sectors.
The research aim of this thesis spans two domains. The first aim is the investigation of the determinants that contribute to NPL formation. Two GAMLSS models are tested: a linear GAMLSS model and a nonlinear semi-parametric GAMLSS model, which includes smoothing functions that capture potential nonlinear relationships between the explanatory variables and the modelled parameters. The explanatory variables consist of credit risk variables, macroeconomic variables, bank-specific variables, and supervisory and market variables, while the response variable is non-performing loans.
The second aim is to determine whether Cox proportional hazards models and survival tree models can forecast NPLs of loans extended to specific corporate sectors in Greece, using the most granular data set of corporate borrowers. By evaluating a series of Cox models, a short-term monitoring system has been created with the aim of forecasting "failures", i.e. NPL creation. The Cox proportional hazards regression models incorporate time-to-event information along a timeline described by the survival function, which gives the probability that a loan survives (has not become an NPL) up to time t. The time period runs from the origination of the loan until the "death" of the loan, i.e. its termination, incorporating an "in-between" observation point. The event occurs when the loan first becomes "infected", i.e. becomes an NPL. Regarding survival trees, the data set was divided into multiple subsets, which are easier to model separately and hence yield improved overall performance. Such models can then usefully be combined with different machine learning techniques. Predictors (or covariates) are defined as the sectors of the Greek economy, and the model is fitted both for the whole sample and for the sample of early-terminated loans.
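The survival-analysis framing above rests on estimating a survival function for loans. As a minimal, self-contained sketch (not the thesis's actual Cox models), the Kaplan-Meier estimator below computes the probability that a loan has not yet become an NPL by time t; the durations and event flags are illustrative, not thesis data.

```python
# Minimal Kaplan-Meier estimate of S(t): the probability that a loan
# has not yet "failed" (become an NPL) by time t. Pure-stdlib sketch;
# the sample durations/events are illustrative only.

def kaplan_meier(durations, events):
    """Return [(t, S(t))] at each distinct event time t.

    durations: observed time for each loan (e.g. months since origination)
    events:    1 if the loan became an NPL at that time, 0 if censored
    """
    obs = sorted(zip(durations, events))   # order observations by time
    surv = 1.0
    at_risk = len(obs)
    curve = []
    i = 0
    while i < len(obs):
        t = obs[i][0]
        deaths = removed = 0
        # Group all observations sharing the same time t
        while i < len(obs) and obs[i][0] == t:
            deaths += obs[i][1]
            removed += 1
            i += 1
        if deaths:
            surv *= 1 - deaths / at_risk   # KM product-limit step
            curve.append((t, surv))
        at_risk -= removed                 # drop both events and censored
    return curve

# Example: 6 loans, durations in months; 1 = became NPL, 0 = censored
durations = [6, 6, 12, 18, 24, 24]
events    = [1, 0, 1,  0,  1,  1]
print(kaplan_meier(durations, events))
```

A full Cox model additionally relates the hazard to covariates (here, economic sectors), but the survival function above is the quantity both approaches ultimately estimate.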
The Thesis is organized as follows: Chapter 1 - Introduction addresses the role of banks in financial intermediation, the evolution of credit risk, and some issues regarding the Greek banking sector. Chapter 2 constitutes a literature review of research focused on improving the predictive performance of different credit risk assessment methods. Chapter 3 outlines the competitive conditions in the banking sector to determine whether the increase in concentration affected the competitive conditions in the Greek banking system. In Chapter 4, the funding and liquidity conditions in the Greek banking sector are addressed. Chapter 5 contains the selection of the aggregate sample, and the results and analysis of the GAMLSS models used for determining NPLs. Chapter 6 provides an introduction to the granular database on Large Exposures, which is used for deriving the panel sample of corporate borrowers on which the forecasting and prediction models are employed. Chapter 7 contains the application of Cox models and decision trees, the estimation procedure, parameters, model fit, estimation results, and empirical findings. Chapter 8 provides an evaluation of the models' applicability as well as the implications for further research. Finally, a conclusion summarizes my contribution to the research community and my recommendations to the banking industry.
Economic and Social Consequences of the COVID-19 Pandemic in Energy Sector
The purpose of the Special Issue was to collect research results and experience on the consequences of the COVID-19 pandemic for the broadly understood energy sector and energy market, as they had become visible after one year. In particular, the impact of COVID-19 on the energy sector in the EU, including Poland, and in the US was examined. The topics concerned various issues, e.g., the situation of energy companies, including those listed on the stock exchange, mining companies, and those dealing with renewable energy. Topics related to the development of electromobility, managerial competences, the energy expenditure of local government units, the sustainable development of energy, and energy poverty during the pandemic were also discussed.
Multi-Asset Factor Investing Strategies and Controversy Screening using Natural Language Processing
Factor investing strategies have revolutionized the landscape of equity investing and continue to be heavily researched by academics and practitioners, leading to the documentation of more than 450 factors. However, from a practical investment perspective, much of the factor evidence documented by academics may be more apparent than real. The performance of many factors has been found to depend on the inclusion of small- and micro-cap stocks in academic studies, although such stocks would likely be excluded from a real investment universe due to illiquidity and transaction costs. We take the perspective of an institutional investor and navigate this zoo of factors by focusing on the evidence relevant to the practicalities of factor-based investment strategies. Establishing a sound theoretical rationale is key to identifying "true" factors, and we emphasize the need to recognize data-mining concerns that may cast doubt on the relevance of many factors. Nevertheless, a parsimonious set of factors emerges in equities and other asset classes, including currencies, fixed income, and commodities. Since these factors can serve as meaningful ingredients of factor-based portfolio construction, we build currency factor strategies using the G10 currencies. We show that parametric portfolio policies can help guide an optimal currency strategy when tilting towards cross-sectional factor characteristics. While currency carry serves as the main return generator in this tilting strategy, momentum and value are implicit diversifiers that can potentially balance the downside of carry investing during flight-to-quality shifts of foreign exchange investors. Drawing insights from a currency timing strategy based on time-series predictors, we further examine the parametric portfolio policy's ability to mitigate the downside of the carry trade by incorporating an explicit currency factor timing element.
This integrated approach to currency factor investing outperforms a naive equally weighted benchmark as well as univariate and multivariate parametric portfolio policies. Whilst factor investing continues to grow in popularity, investors have expressed interest in aligning their investments with social values in order to maximize positive social impact. Hence, for any company, involvement in socially unethical practices leads not only to reputational damage but also, anecdotally, to financial consequences. To quantify the consequences of such controversial behaviour, we investigate the price impact of involvement in social controversies and find that returns drop, on average, by over 200 basis points in the days around the outbreak of news on social violations. We identify companies following socially unethical practices from news headlines with the help of state-of-the-art language modelling approaches. Using a large sample of 1 million news headlines, we train and fine-tune a DistilRoBERTa model to identify reports of controversial incidents in daily news feeds. We map the price reaction to such controversial events using an event study approach and document a negative price impact for companies with poor social practices, measured via increased controversial behaviour, largely driven by small- to medium-market-capitalization companies. Amongst the eight social dimensions we examine, controversies surrounding violations of product safety standards, online scams, and data privacy breaches significantly impact firm returns. Dissecting this result by geography, the U.S., Australia, Europe, and emerging markets react very negatively to social controversies.
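The tilting idea behind a parametric portfolio policy can be sketched compactly: weights deviate from an equal-weighted benchmark in proportion to standardized factor characteristics. The sketch below follows the general form popularized by Brandt, Santa-Clara, and Valkanov; the theta coefficients and characteristic values are illustrative assumptions, not estimates from the thesis data.

```python
# Sketch of a parametric portfolio policy: currency weights tilt away
# from an equal-weighted benchmark in proportion to cross-sectionally
# standardized characteristics (e.g. carry, momentum). The theta
# coefficients and raw characteristics below are illustrative only.

def standardize(xs):
    """Cross-sectionally demean and scale a characteristic to unit variance."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5 or 1.0
    return [(x - mean) / sd for x in xs]

def policy_weights(characteristics, theta):
    """w_i = 1/N + (1/N) * sum_k theta_k * x_{i,k} for each asset i."""
    n = len(characteristics)
    weights = []
    for row in characteristics:
        tilt = sum(t * x for t, x in zip(theta, row))
        weights.append(1.0 / n + tilt / n)
    return weights

# Illustrative: 4 currencies, two characteristics (carry, momentum)
carry = standardize([2.1, 0.3, -0.5, 1.0])
mom = standardize([0.04, -0.01, 0.02, -0.03])
chars = list(zip(carry, mom))
theta = [1.5, 0.5]   # hypothetical tilt coefficients
w = policy_weights(chars, theta)
print([round(x, 3) for x in w])
```

Because each standardized characteristic sums to zero across assets, the tilts are zero-sum and the weights still sum to one; in the full policy, theta would be chosen to maximize expected utility of the portfolio return.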
Applying Blockchain Technology to Financial Market’s Infrastructure
The utilization of blockchain technology has gained widespread acceptance across various domains in recent years. Among them, blockchain integration in the financial sector is particularly noteworthy. Blockchain technology offers a range of features that can address various challenges in the financial industry, including decentralization, transparency, enhanced security, and tamper-proofing. Therefore, this thesis aims to investigate the issues that persist in academia and industry and address them through blockchain technology.
The research for this thesis was divided into three major stages. The first stage involved conducting an academic survey through a comprehensive literature review. The aim was to surface the pain points raised in the literature and to narrow down the problems that concern the academic community.
The second stage involved collecting requirements from industry experts. This helped to identify the real-world issues that currently exist in the financial industry. Based on these issues, the research moved on to the next stage.
The third stage involved an experimental study, further divided into two parts. Part 1 involved designing and developing a blockchain-based issuance and trading system for financial products. This system aimed to enhance participant trust, reduce costs, and increase efficiency. Part 2 involved the development of a risk monitoring system for blockchain-based financial products. This system aimed to assist participants in monitoring market risks, providing them with risk warning coefficients, and reducing the probability of systemic risks in the market.
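The trust and tamper-proofing properties that motivate the systems in Parts 1 and 2 come from hash-chaining records. As a minimal illustration (not the thesis's actual system; the field names are invented for the example), the sketch below shows how altering any earlier record breaks every later link:

```python
# Minimal sketch of blockchain tamper-evidence: each block commits to
# the previous block's hash, so changing an earlier record invalidates
# the chain. Record fields are illustrative, not the thesis system.
import hashlib
import json

def block_hash(block):
    # Canonical JSON so the hash is deterministic
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, record):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})
    return chain

def verify(chain):
    """Return True iff every block's prev_hash matches the prior block."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, {"issue": "BOND-1", "amount": 100})
append_block(chain, {"trade": "BOND-1", "qty": 10, "price": 99.5})
print(verify(chain))                        # chain is consistent
chain[0]["record"]["amount"] = 1_000_000    # tamper with the issuance
print(verify(chain))                        # tampering is detected
```

A production system adds consensus, signatures, and permissioning on top of this core integrity mechanism.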
The results of this thesis demonstrate, from an experimental perspective, that blockchain integration is feasible and can positively impact financial markets, and that adopting blockchain technology can benefit the financial and FinTech industries.
Business Analytics Using Predictive Algorithms
In today's data-driven business landscape, organizations strive to extract actionable insights and make informed decisions from their vast data. Business analytics, combining data analysis, statistical modeling, and predictive algorithms, is crucial for transforming raw data into meaningful information. However, there are gaps in the field, such as limited industry focus, limited algorithm comparison, and data quality challenges. This work aims to address these gaps by demonstrating how predictive algorithms can be applied across business domains for pattern identification, trend forecasting, and accurate predictions. The report focuses on sales forecasting and topic modeling, comparing the performance of various algorithms including Linear Regression, Random Forest Regression, XGBoost, LSTMs, and ARIMA. It emphasizes the importance of data preprocessing, feature selection, and model evaluation for reliable sales forecasts, while utilizing the unsupervised algorithms S-BERT, UMAP, and HDBSCAN to extract valuable insights from unstructured textual data.
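The simplest member of the algorithm comparison above, linear regression on a time trend, can be sketched in a few lines: fit a least-squares line to a sales series and extrapolate it. The monthly figures are illustrative, not from the report.

```python
# Baseline sales forecast: ordinary least-squares linear trend fitted
# to a series, then extrapolated. Pure-stdlib sketch with illustrative data.

def fit_linear_trend(y):
    """Fit y_t = a + b*t by least squares over t = 0..n-1; return (a, b)."""
    n = len(y)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(y) / n
    cov = sum((t - t_mean) * (v - y_mean) for t, v in zip(ts, y))
    var = sum((t - t_mean) ** 2 for t in ts)
    b = cov / var              # slope: average change per period
    a = y_mean - b * t_mean    # intercept
    return a, b

def forecast(y, horizon):
    """Extrapolate the fitted trend `horizon` periods beyond the sample."""
    a, b = fit_linear_trend(y)
    return [a + b * (len(y) + h) for h in range(horizon)]

sales = [100, 104, 107, 111, 115, 118]   # monthly units, illustrative
print([round(f, 1) for f in forecast(sales, 3)])
```

The more elaborate models in the report (Random Forest, XGBoost, LSTMs, ARIMA) replace this trend extrapolation with richer dynamics, but they are evaluated against exactly this kind of simple baseline.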
Success Strategies for Information Technology Project Leaders
A growing failure rate of information technology (IT) projects has the potential to create economic loss and low revenue. IT project leaders in the financial industry are concerned that IT project failure could negatively impact organizational expenses. Grounded in Bass's transformational leadership theory and the Project Management Body of Knowledge, the purpose of this qualitative multiple case study was to explore the strategies financial industry project leaders used to ensure IT projects succeed. The participants were five project leaders with a minimum of 5 years of experience managing successful IT projects. Data were collected using semistructured interviews and company documentation. Thematic analysis identified five themes: (a) communication of project requirements, (b) planning and analysis, (c) leadership and collaboration, (d) risk management, and (e) governance and continuous improvement. A key recommendation is for skilled project managers to work closely with stakeholders to balance the projects' scope, schedule, and cost. The implications for positive social change include the potential for a cascading effect of project success on organizational growth, sustaining high-paying jobs and increasing philanthropic giving in communities.
Data Analytics for Credit Risk Models in Retail Banking: a new era for the banking system
Given the nature of the lending industry and its importance for global economic stability, financial institutions have always been keen on estimating the risk profile of their clients. For this reason, in the last few years several sophisticated techniques for modelling credit risk have been developed and implemented. After the financial crisis of 2007-2008, credit risk management was further expanded and acquired significant regulatory importance. Specifically, the Basel II and III Accords strengthened the conditions that banks must fulfil to develop their own internal models for estimating regulatory capital and expected losses. After motivating the importance of credit risk modelling in the banking sector, in this contribution we review the traditional statistical methods used for credit risk management. We then focus on more recent approaches based on machine learning, and critically compare tradition and innovation in credit risk modelling. Finally, we present a case study addressing the main steps to practically develop and validate a Probability of Default model for risk prediction via machine learning techniques.
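A Probability of Default (PD) model of the kind described is, at its simplest, a classifier mapping borrower features to a default probability. As a hedged stand-in for the machine learning pipeline in the case study, the sketch below trains a logistic regression by stochastic gradient descent on two invented features (debt-to-income ratio and delinquency count); the data and feature names are illustrative only.

```python
# Minimal PD model sketch: logistic regression trained by SGD on two
# illustrative borrower features. Pure stdlib; not the case-study model.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Return weights (bias first) fitted by per-sample gradient steps."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = sigmoid(z) - yi          # gradient of the log-loss
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

def predict_pd(w, xi):
    """Estimated probability of default for one borrower."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

# Illustrative training set: [debt_to_income, delinquencies] -> default flag
X = [[0.1, 0], [0.2, 0], [0.3, 1], [0.6, 2], [0.8, 3], [0.9, 4]]
y = [0, 0, 0, 1, 1, 1]
w = train_logistic(X, y)
print(round(predict_pd(w, [0.15, 0]), 3), round(predict_pd(w, [0.85, 3]), 3))
```

A validated production PD model adds the steps the case study covers, such as feature selection, out-of-sample discrimination and calibration testing, and regulatory documentation.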
Improving Demand Forecasting: The Challenge of Forecasting Studies Comparability and a Novel Approach to Hierarchical Time Series Forecasting
Demand forecasts are essential in business. Based on expected customer demand, firms decide, for example, which products to develop, how many factories to build, how much staff to hire, or how much raw material to order. Errors in demand forecasts can have serious consequences, lead to poor decisions, and in the worst case drive a firm into bankruptcy.
In many cases, however, anticipating actual future demand is complex. The influencing factors can be manifold, for example macroeconomic developments, competitor behaviour, or technological change. Even when all influencing factors are known, the relationships and interactions among them are often difficult to quantify.
This dissertation contributes to improving the accuracy of demand forecasts.
The first part of the thesis, within a comprehensive overview of the entire spectrum of application fields for demand forecasting, introduces a novel approach for systematically comparing demand-forecasting studies and applies it to 116 recent studies. Improving the comparability of studies is a substantial contribution to current research: unlike, for example, medical research, demand forecasting has no major comparative quantitative meta-studies. The reason is that empirical demand-forecasting studies do not use a unified scheme to describe their data, methods, and results. If studies can instead be compared directly through systematic description, other researchers can better analyse how variations in approach affect forecast accuracy, without the costly need to re-run empirical experiments that have already been described in published studies. This thesis is the first to introduce such a descriptive scheme.
The remainder of the thesis deals with forecasting methods for intermittent time series, i.e. time series with a substantial share of zero-demand periods. Such series violate the continuity assumptions of most forecasting methods, so standard methods often achieve insufficient accuracy. Nevertheless, intermittent time series are highly relevant; spare parts in particular typically exhibit this demand pattern. In three studies, this thesis first shows that even the tested state-of-the-art machine learning approaches bring no general improvement on several well-known data sets. As a key contribution to research, the thesis then presents a novel method: the Similarity-based Time Series Forecasting (STSF) approach uses an aggregation-disaggregation procedure based on a self-generated hierarchy of statistical properties of the time series. Any available forecasting algorithm can be used in combination with the STSF approach, since the aggregation satisfies the continuity condition. In experiments on seven publicly available data sets and one proprietary data set, the thesis shows that forecast accuracy (measured by the root mean square error, RMSE) improves by a statistically significant 1-5% on average compared with the same method applied without STSF. The method thus delivers a substantial improvement in forecast accuracy.
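The aggregation-disaggregation mechanism can be illustrated with a deliberately simplified sketch: intermittent series are summed into one smoother aggregate, the aggregate is forecast with any standard method (a plain mean here as a stand-in), and the forecast is split back using historical demand shares. This shows only the mechanism; the actual STSF approach builds its hierarchy from statistical properties of the series rather than a single flat sum.

```python
# Simplified aggregate-forecast-disaggregate sketch for intermittent
# demand series (many zero periods). Illustrative data and a naive
# mean forecaster stand in for the method's pluggable algorithms.

def forecast_via_aggregation(series_list):
    """One-step-ahead forecast for each intermittent series."""
    totals = [sum(s) for s in series_list]
    grand = sum(totals)
    # Aggregate series: per-period sum across all series (smoother,
    # so it satisfies the continuity needs of standard forecasters)
    agg = [sum(vals) for vals in zip(*series_list)]
    # Any forecasting algorithm could be plugged in here; use the mean
    agg_forecast = sum(agg) / len(agg)
    # Disaggregate by each series' historical share of total demand
    return [agg_forecast * (t / grand) for t in totals]

# Two spare-part demand series with many zero periods (illustrative)
a = [0, 0, 3, 0, 0, 2, 0, 1]
b = [1, 0, 0, 2, 0, 0, 3, 0]
print([round(f, 3) for f in forecast_via_aggregation([a, b])])
```

The benefit reported in the thesis comes from applying stronger forecasters than the mean at the aggregate level, where the series is no longer intermittent.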
In summary, this dissertation contributes substantially to the current state of research through the methods described above. The proposed scheme for standardizing empirical studies accelerates research progress by enabling comparative studies. And the STSF method provides an approach that reliably improves forecast accuracy while remaining flexible enough to be used with different types of forecasting algorithms. Based on the comprehensive literature review, no comparable approaches have been described to date.