Properties and Performance of the ABCDe Random Graph Model with Community Structure
In this paper, we investigate properties and performance of synthetic random
graph models with a built-in community structure. Such models are important for
evaluating and tuning community detection algorithms that are unsupervised by
nature. We propose ABCDe, a multi-threaded implementation of the ABCD
(Artificial Benchmark for Community Detection) graph generator. We discuss the
implementation details of the algorithm and compare it with both the previously
available sequential version of the ABCD model and with the parallel
implementation of the standard and extensively used LFR
(Lancichinetti--Fortunato--Radicchi) generator. We show that ABCDe is more than
ten times faster and scales better than the parallel implementation of LFR
provided in NetworKit. Moreover, the algorithm is not only faster but random
graphs generated by ABCD have similar properties to the ones generated by the
original LFR algorithm, while the parallelized NetworKit implementation of LFR
produces graphs that have noticeably different characteristics.

Comment: 15 pages, 10 figures, 1 table
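The ABCDe generator itself is implemented outside Python, but the LFR benchmark it is compared against is widely available; as a rough illustration of the kind of benchmark graph with planted communities discussed in the abstract, here is a sketch using the LFR generator from networkx (note this is the networkx implementation, not the parallel NetworKit one evaluated in the paper; the parameter values are just an example):

```python
import networkx as nx

# LFR benchmark graph: power-law degree distribution (exponent tau1),
# power-law community-size distribution (exponent tau2), and mixing
# parameter mu (fraction of each node's edges leaving its community).
n = 250
tau1 = 3
tau2 = 1.5
mu = 0.1

G = nx.LFR_benchmark_graph(
    n, tau1, tau2, mu,
    average_degree=5, min_community=20, seed=10,
)

# Each node carries its ground-truth community as a node attribute,
# which is what makes such graphs useful for evaluating unsupervised
# community detection algorithms.
communities = {frozenset(G.nodes[v]["community"]) for v in G}
```

A community detection algorithm can then be run on `G` and scored against `communities`, e.g. with normalized mutual information.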
Assessment of the size of VaR backtests for small samples
The market risk management process includes the quantification of the risk connected with defined portfolios of assets and the diagnostics of the risk model. Value at Risk (VaR) is one of the most common market risk measures. Since the distributions of the daily P&L of financial instruments are unobservable, the literature presents a broad range of backtests for VaR diagnostics. In this paper, we propose a new methodological approach to the assessment of the size of VaR backtests, and use it to evaluate the size of the most distinctive and popular backtests. The focus of the paper is directed towards the evaluation of the size of the backtests for small-sample cases, a typical situation faced during VaR backtesting in banking practice. The results indicate significant differences between tests in terms of the p-value distribution. In particular, frequency-based tests exhibit significantly greater discretisation effects than duration-based tests. This difference is especially apparent in the case of small samples. Our findings show that, among the considered tests, the Kupiec TUFF and the Haas Discrete Weibull have the best properties. On the other hand, backtests which are very popular in banking practice, that is, the Kupiec POF and Christoffersen's Conditional Coverage, show significant discretisation and hence deviations from the theoretical size.
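To make the frequency-based family concrete, here is a minimal Python sketch of the Kupiec proportion-of-failures (POF) test mentioned in the abstract, using scipy; the function name and interface are my own, and the asymptotic chi-square reference distribution shown here is exactly what breaks down in the small-sample, discretised setting the paper studies:

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(x, T, p):
    """Kupiec proportion-of-failures (POF) likelihood-ratio test.

    x: number of VaR exceedances observed over T days
    T: sample size in days
    p: nominal breach probability (e.g. 0.01 for a 99% VaR)
    Returns (LR statistic, asymptotic chi-square(1) p-value).
    """
    pi = x / T  # observed breach frequency (the MLE under the alternative)

    def loglik(q):
        # Binomial log-likelihood of x breaches in T days with breach
        # probability q; terms with a zero coefficient are skipped so the
        # 0 * log(0) boundary cases are handled as 0.
        ll = 0.0
        if x < T:
            ll += (T - x) * np.log(1.0 - q)
        if x > 0:
            ll += x * np.log(q)
        return ll

    lr = -2.0 * (loglik(p) - loglik(pi))
    return lr, chi2.sf(lr, df=1)

# Example: 8 breaches in 250 trading days for a 99% VaR.
lr, pval = kupiec_pof(8, 250, 0.01)
```

Because x can only take integer values in {0, ..., T}, the LR statistic (and hence the p-value) takes only finitely many values for a given T, which is the discretisation effect the paper quantifies.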