
    Financial Data Transparency, International Institutions, and Sovereign Borrowing Costs

    Recent events in international finance illustrate the close connection between the viability of a country's major private financial institutions and the sustainability of its sovereign debt. We explore the precise nature of this connection and the ways in which it shapes investors' expectations of sovereign creditworthiness. We consider how investors use the overall level of information available about the private financial sector, and the potential risks it poses to government finances, when making decisions about investing in sovereign debt. We expect that governments providing more information about the private financial sector will have lower, and less volatile, borrowing costs. To test this argument, we create a new Financial Data Transparency (FDT) Index measuring governments' willingness to release credible financial system data. Using the FDT Index and a sample of high-income OECD countries, we find that such transparency reduces sovereign borrowing costs. The effects are conditional on the level of public indebtedness: transparent countries with low debt enjoy lower and less volatile borrowing costs.

    Copula estimation for nonsynchronous financial data

    Copulas are a powerful tool for modelling multivariate data. We propose modelling the intraday financial returns of multiple assets through copulas. The problem arises from the asynchronous nature of intraday financial data. We propose a consistent estimator of the correlation coefficient in the case of elliptical copulas and show that the plug-in copula estimator is uniformly convergent. For non-elliptical copulas, we capture the dependence through Kendall's tau. We demonstrate underestimation of the copula parameter and use a quadratic model to propose an improved estimator. In simulations, the proposed estimator reduces the bias significantly for a general class of copulas. We apply the proposed methods to real data on several stock prices.
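The abstract does not give the estimator's formulas. As a hedged sketch of the standard machinery it builds on: for elliptical copulas, Kendall's tau and the correlation parameter are linked by tau = (2/pi)*arcsin(rho), so a tau computed from paired returns can be inverted to rho. Function names and the toy data below are illustrative, not from the paper:

```python
import math
from itertools import combinations

def kendall_tau(x, y):
    """Sample Kendall's tau: (concordant - discordant) pairs over all pairs."""
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        prod = (x[i] - x[j]) * (y[i] - y[j])
        if prod > 0:
            concordant += 1
        elif prod < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def elliptical_rho(tau):
    """Invert tau = (2/pi) * arcsin(rho); valid for elliptical copulas."""
    return math.sin(math.pi * tau / 2)

# Toy synchronized return series for two assets (illustrative only).
x = [0.011, -0.004, 0.007, -0.012, 0.003]
y = [0.009, -0.002, 0.005, -0.010, 0.001]
tau = kendall_tau(x, y)
rho = elliptical_rho(tau)
```

The asynchronous-data problem the paper addresses is precisely that such cleanly paired observations are unavailable intraday; the sketch assumes the pairing has already been resolved.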

    Small scale behavior of financial data

    A new approach is presented to describe the change in the statistics of the log-return distribution of financial data as a function of the timescale. To this purpose a measure is introduced which quantifies the distance of a considered distribution from a reference distribution. The existence of a small-timescale regime is demonstrated, which exhibits different properties compared to the normal-timescale regime. This regime seems to be universal for individual stocks. It is shown that the existence of this small-timescale regime does not depend on the particular choice of the distance measure or the reference distribution. These findings have important implications for risk analysis, in particular for the probability of extreme events.
    Comment: 4 pages, 6 figures. Calculations for the turbulence data sets were redone using the log return as the increment definition in order to provide better comparison to the results for financial asset
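The paper's specific distance measure is not reproduced here; as a minimal sketch of the general procedure, one can compute log returns at a chosen timescale, standardize them, and measure a Kullback-Leibler divergence to a standard normal reference. All names and the choice of divergence are assumptions for illustration:

```python
import math
import random

def log_returns(prices, lag):
    """Log returns at a given timescale (lag in sampling steps)."""
    return [math.log(prices[i + lag] / prices[i]) for i in range(len(prices) - lag)]

def kl_distance_to_gaussian(returns, bins=20):
    """Histogram the standardized returns on [-4, 4] and compare to a
    standard normal reference via a discretized KL divergence."""
    n = len(returns)
    mu = sum(returns) / n
    sd = math.sqrt(sum((r - mu) ** 2 for r in returns) / n)
    z = [(r - mu) / sd for r in returns]
    lo, hi = -4.0, 4.0
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in z:
        if lo <= v < hi:
            counts[int((v - lo) / width)] += 1
    total = sum(counts)
    kl = 0.0
    for k, c in enumerate(counts):
        if c == 0:
            continue
        p = c / total                                   # empirical bin probability
        center = lo + (k + 0.5) * width
        q = width * math.exp(-center ** 2 / 2) / math.sqrt(2 * math.pi)  # Gaussian bin mass
        kl += p * math.log(p / q)
    return kl

# Demo on a synthetic geometric random walk (illustrative only).
random.seed(0)
prices = [100.0]
for _ in range(2000):
    prices.append(prices[-1] * math.exp(random.gauss(0, 0.01)))
d = kl_distance_to_gaussian(log_returns(prices, 1))
```

Repeating the computation over a range of lags traces out how the distance to the reference evolves with timescale, which is the kind of curve the paper uses to identify the small-timescale regime.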

    Analysis of Binarized High Frequency Financial Data

    A non-trivial probability structure is evident in the binary data extracted from the up/down price movements of very high frequency data, such as tick-by-tick data for USD/JPY. In this paper, we analyze the Sony bank USD/JPY rates, ignoring small deviations from the market price. We then show that there is a similar non-trivial probability structure in the Sony bank rate, even though the Sony bank rate has less frequent and larger deviations than tick-by-tick data. However, this probability structure is not found in data sampled from tick-by-tick data at the same rate as the Sony bank rate. Therefore, the method of generating the Sony bank rate from the market rate has potential for practical use, since it retains the probability structure as the sampling frequency decreases.
    Comment: 8 pages, 4 figures, contribution to the 3rd International Conference NEXT-SigmaPh
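The paper's statistics are not reproduced here, but the binarization step and one simple probe for non-trivial structure, a first-order conditional probability (0.5 would indicate no memory), can be sketched as follows; names and data are illustrative:

```python
def binarize(prices):
    """Map consecutive price moves to +1 (up) / -1 (down), skipping ties."""
    moves = []
    for prev, cur in zip(prices, prices[1:]):
        if cur != prev:
            moves.append(1 if cur > prev else -1)
    return moves

def conditional_up_probability(moves):
    """P(next move is up | current move is up): a first-order check for
    structure in the binarized series."""
    ups_after_up = total_after_up = 0
    for cur, nxt in zip(moves, moves[1:]):
        if cur == 1:
            total_after_up += 1
            if nxt == 1:
                ups_after_up += 1
    return ups_after_up / total_after_up

# Illustrative: a strictly alternating price path is maximally "structured".
p_up = conditional_up_probability(binarize([1, 2, 1, 2, 1, 2]))  # 0.0, never up after up
```

Comparing such conditional probabilities between the Sony bank rate and tick data sampled at the same frequency is, in spirit, how the presence or absence of the structure would be tested.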

    Building an Effective Data Warehousing for Financial Sector

    This article presents the implementation process of a data warehouse and a multidimensional analysis of business data for a holding company in the financial sector. The goal is to create a business intelligence system that, in a simple, quick but also versatile way, allows access to updated, aggregated, real and/or projected information regarding bank account balances. The established system extracts and processes the operational database information which supports cash management, using the Integration Services and Analysis Services tools from Microsoft SQL Server. The end-user interface is a pivot table, properly arranged to explore the information made available by the produced cube. The results show that the adoption of online analytical processing (OLAP) cubes offers better performance and provides a more automated and robust process for analyzing current and provisional aggregated financial balances than the existing process based on static reports built from transactional databases.
    Comment: 10 page
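The article's implementation uses Microsoft SQL Server; as a language-agnostic sketch of the roll-up idea behind an OLAP cube serving a pivot table, the hypothetical fact rows and dimension names below are illustrative, not the article's schema:

```python
from collections import defaultdict

# Hypothetical fact rows: (company, account_type, month, balance).
facts = [
    ("HoldCo A", "current", "2024-01", 1200.0),
    ("HoldCo A", "deposit", "2024-01", 5000.0),
    ("HoldCo A", "current", "2024-02", 900.0),
    ("HoldCo B", "current", "2024-01", 300.0),
]

def cube(facts, dims):
    """Aggregate balances over the chosen dimensions: the kind of roll-up
    an OLAP cube precomputes for a pivot-table front end."""
    out = defaultdict(float)
    for company, acct, month, balance in facts:
        row = {"company": company, "account": acct, "month": month}
        key = tuple(row[d] for d in dims)
        out[key] += balance
    return dict(out)

by_company = cube(facts, ["company"])            # coarse roll-up
by_company_month = cube(facts, ["company", "month"])  # drill-down
```

The performance claim in the article rests on such aggregates being precomputed in the cube rather than recomputed from transactional tables on every report.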

    Long range financial data and model choice

    Long range financial data, as typified by the daily returns of the Standard and Poor's index, exhibit common features such as heavy tails, long range memory of the absolute values, and clustering of periods of high and low volatility. These and other features are often referred to as stylized facts, and parametric models for such data are required to reproduce them in some sense. Typically this is done by simulating some data sets under the model and demonstrating that the simulations also exhibit the stylized facts. Nevertheless, when the parameters of such models are to be estimated, recourse is very often taken to likelihood, either in the form of maximum likelihood or Bayes. In this paper we expound a method of determining parameter values which depends solely on the ability of the model to reproduce the relevant features of the data set. We introduce a new measure of the volatility of the volatility and show how it can be combined with the distribution of the returns and the autocorrelation of the absolute returns to determine parameter values. We also give a parametric model for such data and show that it can reproduce the required features.
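One ingredient the abstract names, the autocorrelation of absolute returns, has a standard sample form; the paper's volatility-of-volatility measure itself is not reproduced here. A minimal sketch, with an illustrative deterministic series:

```python
def autocorr_abs_returns(returns, lag):
    """Sample autocorrelation of |r_t| at the given lag: the long-memory
    stylized fact a candidate model is asked to reproduce."""
    a = [abs(r) for r in returns]
    n = len(a)
    mu = sum(a) / n
    var = sum((v - mu) ** 2 for v in a) / n
    cov = sum((a[t] - mu) * (a[t + lag] - mu) for t in range(n - lag)) / n
    return cov / var

# Illustrative series whose |r_t| alternates between two magnitudes:
# strongly negative at lag 1, strongly positive at lag 2.
r = [0.01, -0.02] * 50
rho1 = autocorr_abs_returns(r, 1)
rho2 = autocorr_abs_returns(r, 2)
```

In the matching-based estimation the abstract describes, a parameter value is judged by how closely such model-implied curves track the ones computed from the data, rather than by a likelihood.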