
    On the Improvement of Default Forecast Through Textual Analysis

    Textual analysis is a widely used methodology in several research areas. In this paper we apply textual analysis to augment the conventional set of account default drivers with new text-based variables. Through the use of ad hoc dictionaries and distance measures, we classify each account transaction into qualitative macro-categories. The aim is to group bank account users into different client profiles and to verify whether these profiles can act as effective predictors of default in supervised classification models.
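
    Below is a minimal sketch, in Python, of how such a pipeline could look: transaction descriptions are matched against hypothetical ad hoc dictionaries through a string-distance measure, aggregated into client profiles, and then handed to a supervised classifier. The dictionaries, column names and classifier choice are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (not the authors' code): dictionary-based categorisation of
# transaction descriptions followed by client-profile construction.
from difflib import SequenceMatcher
import pandas as pd

CATEGORIES = {                       # hypothetical ad hoc dictionaries
    "gambling":  ["casino", "bet", "poker"],
    "salary":    ["salary", "payroll", "wage"],
    "utilities": ["electricity", "gas bill", "water bill"],
}

def similarity(a: str, b: str) -> float:
    """String-distance proxy (1 = identical strings)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def categorise(description: str, threshold: float = 0.6) -> str:
    """Assign the macro-category whose dictionary term is closest to the text."""
    best_cat, best_score = "other", 0.0
    for cat, terms in CATEGORIES.items():
        score = max(similarity(description, t) for t in terms)
        if score > best_score:
            best_cat, best_score = cat, score
    return best_cat if best_score >= threshold else "other"

def build_profiles(transactions: pd.DataFrame) -> pd.DataFrame:
    """Share of spending per macro-category for each account (the client profile)."""
    transactions["category"] = transactions["description"].map(categorise)
    profile = transactions.pivot_table(index="account_id", columns="category",
                                       values="amount", aggfunc="sum", fill_value=0.0)
    return profile.div(profile.sum(axis=1), axis=0)

# The resulting profiles, joined with the conventional default drivers, would then
# feed a supervised classifier (e.g. logistic regression or a random forest).
```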

    Big data analysis for financial risk management

    A very important area of financial risk management is systemic risk modelling, which concerns the estimation of the interrelationships between financial institutions, with the aim of establishing which of them are more central and, therefore, more contagious or subject to contagion. The aim of this paper is to develop a novel systemic risk model that, unlike existing ones, employs not only the information contained in financial market prices but also big data coming from financial tweets. From a methodological viewpoint, the novelty of our paper is the estimation of systemic risk models using two different data sources, financial markets and financial tweets, and a proposal to combine them using a Bayesian approach. From an applied viewpoint, we present the first systemic risk model based on big data and show that such a model can shed further light on the interrelationships between financial institutions.
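
    As a rough illustration of the idea of merging the two data sources, the sketch below estimates a partial-correlation network from market returns and another from tweet-sentiment series and blends them with a convex weight. This simple pooling merely stands in for the Bayesian combination proposed in the paper; all function names and the weighting scheme are assumptions.

```python
# Minimal sketch (not the paper's model): one dependence network per data source,
# blended into a single network used to rank institutions by a crude centrality score.
import numpy as np
import pandas as pd

def partial_correlation(df: pd.DataFrame) -> pd.DataFrame:
    """Partial correlations from the inverse covariance (precision) matrix."""
    prec = np.linalg.pinv(df.cov().values)
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pd.DataFrame(pcorr, index=df.columns, columns=df.columns)

def combined_network(returns: pd.DataFrame, sentiment: pd.DataFrame,
                     w: float = 0.5) -> pd.DataFrame:
    """Blend the two sources; w is the (assumed) weight on market information."""
    return w * partial_correlation(returns) + (1 - w) * partial_correlation(sentiment)

def centrality(net: pd.DataFrame) -> pd.Series:
    """Sum of absolute off-diagonal links as a simple systemic-importance score."""
    return net.abs().sum(axis=1) - 1.0   # subtract the unit diagonal
```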

    Information theoretic causality detection between financial and sentiment data

    The interaction between the flow of sentiment expressed on blogs and media and the dynamics of stock market prices is analyzed through an information-theoretic measure, the transfer entropy, to quantify causality relations. We analyzed daily stock prices and daily social media sentiment for the top 50 companies in the Standard & Poor's (S&P) index during the period from November 2018 to November 2020. We also analyzed news mentioning these companies during the same period. We found that there is a causal flux of information linking those companies. The largest fraction of significant causal links is between prices and between sentiments, but there is also significant causal information flowing both ways, from sentiment to prices and from prices to sentiment. We observe that the strongest causal signal between sentiment and prices is associated with the Tech sector.
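
    A minimal sketch of a transfer-entropy estimator of the kind used for such causality tests is shown below: both series are discretised into quantile bins and the one-lag transfer entropy is computed from joint frequencies. This is an illustrative estimator under simplifying assumptions, not the authors' exact pipeline.

```python
# Transfer entropy TE(source -> target) with one lag, estimated from quantile-binned
# daily series (e.g. sentiment and returns of one company).
import numpy as np

def discretise(x: np.ndarray, bins: int = 3) -> np.ndarray:
    """Map a series to integer quantile bins (0 .. bins-1)."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    return np.digitize(x, edges)

def transfer_entropy(source: np.ndarray, target: np.ndarray, bins: int = 3) -> float:
    """TE(source -> target) in nats, using a single lag."""
    s = discretise(source, bins)[:-1]        # x_t
    y_past = discretise(target, bins)[:-1]   # y_t
    y_next = discretise(target, bins)[1:]    # y_{t+1}
    n = len(y_next)

    def prob(*cols):
        keys, counts = np.unique(np.vstack(cols), axis=1, return_counts=True)
        return {tuple(k): c / n for k, c in zip(keys.T, counts)}

    p_xyz = prob(y_next, y_past, s)
    p_yz = prob(y_past, s)
    p_xy = prob(y_next, y_past)
    p_y = prob(y_past)

    # TE = sum p(y',y,x) * log[ p(y',y,x) p(y) / (p(y,x) p(y',y)) ]
    te = 0.0
    for (yn, yp, x), p in p_xyz.items():
        te += p * np.log(p * p_y[(yp,)] / (p_yz[(yp, x)] * p_xy[(yn, yp)]))
    return te
```

    In practice the estimate would be compared against a null distribution obtained by shuffling the source series, so that only statistically significant links are kept.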

    Information network modeling for U.S. banking systemic risk

    In this work we investigate whether information-theoretic measures such as mutual information and transfer entropy, extracted from a bank network, Granger-cause financial stress indexes such as the LIBOR-OIS (London Interbank Offered Rate-Overnight Index Swap) spread, the STLFSI (St. Louis Fed Financial Stress Index) and the USD/CHF (US Dollar/Swiss Franc) exchange rate. The information-theoretic measures are extracted from a Gaussian Graphical Model constructed from daily stock time series of the top 74 listed US banks. The graphical model is estimated with a recently developed algorithm (LoGo), which provides a very fast inference model and allows us to update the graphical model each market day. We can therefore generate daily time series of mutual information and transfer entropy for each bank in the network. The Granger causality between the bank-related measures and the financial stress indexes is investigated with both standard Granger causality and partial Granger causality conditioned on control measures representative of general economic conditions.
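
    The sketch below illustrates, under simplifying assumptions, two of the building blocks mentioned above: mutual information for a pair of returns under a bivariate Gaussian model, and a standard Granger-causality test of a bank-level measure against a stress index using statsmodels. The LoGo estimation and the partial Granger test are not reproduced here.

```python
# Illustrative building blocks: Gaussian mutual information and a standard
# Granger-causality test (bank-level measure -> stress index).
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def gaussian_mutual_information(x: pd.Series, y: pd.Series) -> float:
    """MI (in nats) of a bivariate Gaussian: -0.5 * ln(1 - rho^2)."""
    rho = x.corr(y)
    return -0.5 * np.log(1.0 - rho ** 2)

def granger_pvalue(cause: pd.Series, effect: pd.Series, maxlag: int = 5) -> float:
    """p-value of the F-test that `cause` Granger-causes `effect` at lag `maxlag`."""
    data = pd.concat([effect, cause], axis=1).dropna()   # statsmodels expects [y, x]
    results = grangercausalitytests(data, maxlag=maxlag)
    return results[maxlag][0]["ssr_ftest"][1]
```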

    Network Based Evidence of the Financial Impact of Covid-19 Pandemic

    How much are the largest worldwide companies, belonging to different sectors of the economy, suffering from the pandemic? Are the economic relations among them changing? In this paper we address these issues by analyzing the top 50 S&P companies by means of market and textual data. Our work proposes a network analysis model that combines these two types of information to highlight the connections among companies, with the purpose of investigating the relationships before and during the pandemic crisis. In doing so, we leverage a large amount of textual data through a sentiment score which is coupled with standard market data. Our results show that the COVID-19 pandemic has largely affected the US productive system, although differently sector by sector and with a stronger impact during the second wave than during the first.
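
    As a simplified illustration of combining the two types of information, the sketch below builds a company network in which an edge is kept only when both the return correlation and the sentiment-score correlation exceed (assumed) thresholds; comparing graphs built on pre-pandemic and pandemic windows would then reveal changes in connectivity. This is a stand-in, not the paper's actual model.

```python
# Network built from two aligned DataFrames (columns = companies): daily returns and
# daily sentiment scores for the same set of firms.
import networkx as nx
import pandas as pd

def build_network(returns: pd.DataFrame, sentiment: pd.DataFrame,
                  r_min: float = 0.4, s_min: float = 0.4) -> nx.Graph:
    ret_corr, sent_corr = returns.corr(), sentiment.corr()
    g = nx.Graph()
    g.add_nodes_from(returns.columns)
    for i, a in enumerate(returns.columns):
        for b in returns.columns[i + 1:]:
            # keep an edge only if both sources agree the firms are connected
            if ret_corr.loc[a, b] >= r_min and sent_corr.loc[a, b] >= s_min:
                g.add_edge(a, b, ret=float(ret_corr.loc[a, b]),
                           sent=float(sent_corr.loc[a, b]))
    return g

# g_pre = build_network(returns_pre, sentiment_pre)
# g_cov = build_network(returns_covid, sentiment_covid)
# Comparing the two graphs (density, degree per sector) shows how relations changed.
```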

    Initial Coin Offerings: Risk or Opportunity?

    Initial coin offerings (ICOs) are one of the several by-products of the world of cryptocurrencies. Start-ups and existing businesses are turning to alternative sources of capital as opposed to classical channels like banks or venture capitalists. They can offer the inner value of their business by selling "tokens," i.e., units of the chosen cryptocurrency, much as a regular firm would do by means of an IPO. The investors, of course, hope for an increase in the value of the token in the short term, provided a solid and valid business idea, typically described by the ICO issuers in a white paper. However, fraudulent activities perpetrated by unscrupulous actors are frequent, and it would be crucial to highlight clear signs of illegal money raising in advance. In this paper, we employ statistical approaches to detect which characteristics of ICOs are significantly related to fraudulent behavior. We leverage a number of different variables, such as entrepreneurial skills, Telegram chats and the related sentiment for each ICO, type of business, issuing country, and team characteristics. Through logistic regression, multinomial logistic regression, and text analysis, we are able to shed light on the riskiest ICOs.
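
    A minimal sketch of the logistic-regression step is given below, with a fraud indicator regressed on a few illustrative ICO characteristics. The dataset, column names and evaluation metric are assumptions for illustration only.

```python
# Logistic regression of a fraud flag on illustrative ICO features.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("icos.csv")                       # hypothetical dataset
features = ["team_size", "telegram_sentiment", "has_whitepaper", "country_risk"]
X, y = df[features], df["is_fraud"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Coefficient signs/sizes indicate which characteristics are associated with fraud.
print(dict(zip(features, model.coef_[0])))
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```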

    Assessing Banks' Distress Using News and Regular Financial Data

    In this paper, we focus our attention on leveraging the information contained in financial news to enhance the performance of a bank distress classifier. The news information should be analyzed and inserted into the predictive model in the most efficient way, a task that deals with the issues related to natural language interpretation and to the analysis of news media. Among the different models proposed for this purpose, we investigate a deep learning approach. The methodology is based on a distributed representation of textual data obtained from a model (Doc2Vec) that maps the documents, and the words contained within a text, onto a reduced latent semantic space. Afterwards, a second supervised feed-forward, fully connected neural network is trained, combining the distributed representations of news data with standard financial figures in input. The goal of the model is to classify the corresponding banks as being in a distressed or tranquil state. The final aim is both to measure the improvement in the predictive performance of the classifier and to assess the importance of news data in the classification process, that is, to understand whether news data really bring useful information not contained in standard financial variables.
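
    The Doc2Vec step could look roughly like the following sketch, where each news article is mapped to a dense vector in a reduced semantic space using gensim; the toy corpus and hyper-parameters are illustrative assumptions, not the paper's settings.

```python
# Doc2Vec: learn document embeddings for news articles, then infer a vector for a
# new piece of text.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.utils import simple_preprocess

articles = ["Bank X reports heavy quarterly losses",
            "Bank Y completes a capital raise"]          # placeholder corpus
corpus = [TaggedDocument(simple_preprocess(text), [i])
          for i, text in enumerate(articles)]

model = Doc2Vec(vector_size=100, min_count=1, epochs=40)  # assumed hyper-parameters
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

# Dense representation of an unseen article, to be used as classifier input.
news_vector = model.infer_vector(simple_preprocess("Bank X faces liquidity problems"))
```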

    Deep Learning for Assessing Banks’ Distress from News and Numerical Financial Data

    In this paper we focus our attention on exploiting the information contained in financial news to enhance the performance of a classifier of bank distress. Such information should be analyzed and inserted into the predictive model in the most efficient way, a task that deals with the issues related to text analysis and, specifically, to the analysis of news media. Among the different models proposed for this purpose, we investigate one of the possible deep learning approaches, based on a doc2vec representation of the textual data, a kind of neural network able to map the sequence of words contained within a text onto a reduced latent semantic space. Afterwards, a second supervised neural network is trained, combining news data with standard financial figures, to classify banks as being in a distressed or tranquil state. Indeed, the final aim is not only to improve the predictive performance of the classifier but also to assess the importance of news data in the classification process. Do news data really bring useful information not contained in standard financial variables? Our results seem to confirm this hypothesis.
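
    The second stage can be sketched as follows: the news embeddings are concatenated with standard financial figures and passed to a feed-forward classifier of distressed versus tranquil banks. Here scikit-learn's MLPClassifier stands in for the fully connected network, and all data shapes and values are placeholder assumptions.

```python
# Combine news embeddings with financial figures and train a feed-forward classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

n_banks, doc_dim, n_ratios = 500, 100, 12            # assumed dataset dimensions
rng = np.random.default_rng(0)
news_vecs = rng.normal(size=(n_banks, doc_dim))       # Doc2Vec embeddings (placeholder)
ratios = rng.normal(size=(n_banks, n_ratios))         # standard financial figures
labels = rng.integers(0, 2, size=n_banks)             # 1 = distressed, 0 = tranquil

X = np.hstack([news_vecs, ratios])                    # combined input
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))

# Refitting with `ratios` alone and comparing against the combined input gives a rough
# measure of how much the news embeddings add beyond standard financial variables.
```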