
    Kernel alternatives to approximate operational severity distribution: an empirical application

    The estimation of the severity loss distribution is one of the main topics in operational risk estimation. Numerous parametric estimators have been suggested, although very few work well for both high-frequency small losses and low-frequency large losses. In this paper several estimators are explored. The good performance of the double transformation kernel estimator in the context of operational risk severity deserves special mention. This method, based on the work of Bolancé and Guillén (2009), was initially proposed in the context of insurance claim costs, and it represents an advance in operational risk research.
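
    A minimal sketch of a double transformation kernel density estimate for a heavy-tailed loss sample may clarify the idea. The published method uses a modified Champernowne transformation; here a fitted log-logistic CDF stands in as the first transformation and the normal quantile as the second, and the sample is synthetic, so everything below is an illustrative assumption rather than the authors' exact procedure.

        # Transformation KDE sketch: transform losses, run an ordinary KDE,
        # and map the density back with the change-of-variable Jacobian.
        import numpy as np
        from scipy import stats

        def transformation_kde(losses, eval_points):
            # First transformation: map losses to (0, 1) with a fitted heavy-tailed CDF
            # (log-logistic here; the published method uses a modified Champernowne).
            c, loc, scale = stats.fisk.fit(losses, floc=0)
            u = stats.fisk.cdf(losses, c, loc=loc, scale=scale)
            # Second transformation: map (0, 1) to the real line via the normal quantile.
            z = stats.norm.ppf(u)
            kde = stats.gaussian_kde(z)  # ordinary Gaussian KDE on the transformed sample
            # Back-transform the density to the loss scale with the Jacobian.
            u_eval = stats.fisk.cdf(eval_points, c, loc=loc, scale=scale)
            z_eval = stats.norm.ppf(u_eval)
            jac = stats.fisk.pdf(eval_points, c, loc=loc, scale=scale) / stats.norm.pdf(z_eval)
            return kde(z_eval) * jac

        rng = np.random.default_rng(0)
        sample = stats.lognorm.rvs(1.2, scale=5.0, size=500, random_state=rng)
        grid = np.linspace(0.5, 50.0, 200)
        density = transformation_kde(sample, grid)  # estimated severity density on the grid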

    A multiple factor model for European stocks

    We present an empirical study focusing on the estimation of a fundamental multi-factor model for a universe of European stocks. Following the approach of the BARRA model, we have adopted a cross-sectional methodology. The proportion of explained variance ranges from 7.3% to 66.3% in the weekly regressions, with a mean of 32.9%. For the individual factors we give the percentage of weeks in which they had a statistically significant influence on stock returns. The best explanatory power, apart from the dominant country factors, was found for the statistical constructs "success" and "variability in markets".
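
    A hedged sketch of the BARRA-style cross-sectional estimation described above: for each week, stock returns are regressed on firm-level factor exposures, and the weekly R-squared and per-factor significance counts are collected. The data shapes, synthetic returns, and 5% significance cutoff are illustrative assumptions, not the study's data.

        # Weekly cross-sectional regressions of returns on factor exposures.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n_stocks, n_weeks, n_factors = 200, 52, 5
        exposures = rng.standard_normal((n_stocks, n_factors))      # fixed exposures
        returns = exposures @ rng.standard_normal((n_factors, n_weeks)) * 0.02 \
                  + rng.standard_normal((n_stocks, n_weeks)) * 0.05

        r2 = np.empty(n_weeks)
        significant = np.zeros(n_factors)
        for t in range(n_weeks):
            model = sm.OLS(returns[:, t], sm.add_constant(exposures)).fit()
            r2[t] = model.rsquared
            significant += (model.pvalues[1:] < 0.05)               # skip the constant

        print(f"R^2: min {r2.min():.1%}, mean {r2.mean():.1%}, max {r2.max():.1%}")
        print("share of weeks each factor was significant:", significant / n_weeks)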

    Influence of Context on Item Parameters in Forced-Choice Personality Assessments

    A fundamental assumption in computerized adaptive testing (CAT) is that item parameters are invariant with respect to context, that is, the items surrounding the administered item. This assumption, however, may not hold in forced-choice (FC) assessments, where explicit comparisons are made between items included in the same block. We empirically examined the influence of context on item parameters by comparing parameter estimates from two FC instruments. The first instrument was composed of blocks of three items, whereas in the second, the context was manipulated by adding one item to each block, resulting in blocks of four. The item parameter estimates were highly similar. However, a small number of significant deviations were observed, confirming the importance of context when designing adaptive FC assessments. Two patterns of such deviations were identified, and methods to reduce their occurrence in an FC CAT setting were proposed. It was shown that, with a small proportion of violations of the parameter invariance assumption, score estimation remained stable.
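
    One simple way to examine parameter invariance of the kind described above is a z-test on the difference between each item's estimates from the two calibrations. The estimates and standard errors below are hypothetical placeholders, not the study's data; the 5% flagging threshold is likewise an assumption.

        # Flag items whose parameters deviate between the two calibrations.
        import numpy as np
        from scipy import stats

        est_3block = np.array([1.10, -0.45, 0.80, 0.05, -1.20])  # e.g. item utilities
        se_3block  = np.array([0.08,  0.07, 0.09, 0.06,  0.10])
        est_4block = np.array([1.05, -0.40, 0.55, 0.07, -1.18])
        se_4block  = np.array([0.07,  0.08, 0.08, 0.06,  0.09])

        z = (est_3block - est_4block) / np.sqrt(se_3block**2 + se_4block**2)
        p = 2 * stats.norm.sf(np.abs(z))                          # two-sided p-values
        for i, (zi, pi) in enumerate(zip(z, p)):
            flag = "DEVIATES" if pi < 0.05 else "invariant"
            print(f"item {i}: z = {zi:+.2f}, p = {pi:.3f} -> {flag}")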

    Exchange Rate Market Expectations and Central Bank Policy: The Case of the Mexican Peso-US Dollar, 2005-2009

    We examine two approaches, characterized by different tail features, to extract market expectations on the Mexican peso-US dollar exchange rate. Expectations are gauged by risk-neutral densities. The methods used to estimate these densities are the Volatility Function Technique (VFT) and the Generalized Extreme Value (GEV) approach. We compare these methods in the context of monetary policy announcements in Mexico and the US. Once the surprise component of the announcements is considered, our results indicate that, although both VFT and GEV suggest similar dynamics at the center of the distribution, the two methods show significantly different patterns in the tails. Our empirical evidence shows that the GEV model better captures the extreme values.
    Keywords: exchange rates, monetary policy, risk-neutral densities
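
    Both approaches ultimately recover a risk-neutral density from option prices, for which the Breeden-Litzenberger relation f(K) = e^(rT) d^2C/dK^2 is the common starting point. The sketch below applies it with finite differences to a synthetic grid of Black-Scholes call prices; all inputs are illustrative assumptions, not the paper's data.

        # Risk-neutral density via second finite differences of call prices.
        import numpy as np
        from scipy import stats

        def bs_call(spot, strike, rate, vol, tau):
            d1 = (np.log(spot / strike) + (rate + 0.5 * vol**2) * tau) / (vol * np.sqrt(tau))
            d2 = d1 - vol * np.sqrt(tau)
            return spot * stats.norm.cdf(d1) - strike * np.exp(-rate * tau) * stats.norm.cdf(d2)

        spot, rate, vol, tau = 13.0, 0.05, 0.15, 0.25   # illustrative MXN/USD-like inputs
        strikes = np.linspace(10.0, 16.0, 241)
        calls = bs_call(spot, strikes, rate, vol, tau)

        dk = strikes[1] - strikes[0]
        second_deriv = (calls[2:] - 2 * calls[1:-1] + calls[:-2]) / dk**2
        rnd = np.exp(rate * tau) * second_deriv         # density on strikes[1:-1]
        print("density mass covered by the strike grid:", round(float((rnd * dk).sum()), 3))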

    Value at risk models in finance

    The main objective of this paper is to survey and evaluate the performance of the most popular univariate VaR methodologies, paying particular attention to their underlying assumptions and to their logical flaws. In the process, we show that the Historical Simulation method and its variants can be considered special cases of the CAViaR framework developed by Engle and Manganelli (1999). We also provide two original methodological contributions. The first one introduces extreme value theory into the CAViaR model. The second one concerns the estimation of the expected shortfall (the expected loss, given that the return exceeded the VaR) using a regression technique. The performance of the models surveyed in the paper is evaluated using a Monte Carlo simulation. We generate data using GARCH processes with different distributions and compare the estimated quantiles to the true ones. The results show that CAViaR models perform best with heavy-tailed DGPs.
    JEL Classification: C22, G22. Keywords: CAViaR, extreme value theory, Value at Risk
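
    A minimal sketch of this kind of Monte Carlo evaluation, under assumed parameter values: returns are generated from a GARCH(1,1) with Student-t innovations, a rolling historical-simulation VaR (itself a special case of a CAViaR specification, as the paper notes) is computed, and its violation rate is compared with the nominal level.

        # Monte Carlo backtest of historical-simulation VaR on a GARCH-t DGP.
        import numpy as np

        rng = np.random.default_rng(2)
        n, window, level = 3000, 250, 0.01
        omega, alpha, beta, nu = 0.05, 0.10, 0.85, 5.0   # assumed GARCH(1,1) parameters

        # Simulate GARCH(1,1) returns with unit-variance Student-t shocks.
        z = rng.standard_t(nu, size=n) * np.sqrt((nu - 2) / nu)
        sigma2 = np.empty(n)
        returns = np.empty(n)
        sigma2[0] = omega / (1 - alpha - beta)           # unconditional variance
        returns[0] = np.sqrt(sigma2[0]) * z[0]
        for t in range(1, n):
            sigma2[t] = omega + alpha * returns[t - 1]**2 + beta * sigma2[t - 1]
            returns[t] = np.sqrt(sigma2[t]) * z[t]

        # Rolling historical-simulation VaR: empirical quantile of the last `window` returns.
        violations = 0
        for t in range(window, n):
            var_t = np.quantile(returns[t - window:t], level)
            violations += returns[t] < var_t
        print(f"violation rate: {violations / (n - window):.3%} (nominal {level:.0%})")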

    Time-varying conditional Johnson SU density in value-at-risk (VaR) methodology

    A stylized fact of financial time series is that the volatility of returns exhibits non-normal features, such as leverage effects and heavier tails, which lead returns to show larger magnitudes of extreme losses. Value-at-risk is a standard method of forecasting possible future losses in investments. A procedure for estimating value-at-risk using a time-varying conditional Johnson SU distribution is introduced and assessed alongside standard econometric models. The Johnson distribution offers the ability to model higher parameters with a time-varying structure using maximum likelihood estimation techniques. Two procedures for modeling with the Johnson distribution are introduced: joint estimation of the volatility and higher parameters, and a two-step procedure in which estimation of the volatility is separate from the estimation of the higher parameters. The procedures were demonstrated on Philippine foreign exchange rates and the Philippine stock exchange index, and assessed with forecast evaluation measures in comparison to different value-at-risk methodologies. The research opens up modeling procedures in which the manipulation of higher parameters can be integrated into the value-at-risk methodology.
    Keywords: time-varying parameters; GARCH models; non-normal distributions; risk management
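
    A hedged sketch of the two-step procedure described above: volatility is estimated first (here with a simple EWMA filter as a stand-in for a GARCH model), then a Johnson SU distribution is fitted to the standardized residuals by maximum likelihood, and its lower quantile gives the VaR. The synthetic data and the EWMA decay are illustrative assumptions.

        # Two-step VaR: EWMA volatility, then Johnson SU fit on standardized residuals.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        returns = stats.johnsonsu.rvs(-0.5, 1.5, size=2000, random_state=rng) * 0.01

        # Step 1: time-varying volatility via EWMA (RiskMetrics-style decay).
        lam = 0.94
        sigma2 = np.empty_like(returns)
        sigma2[0] = returns.var()
        for t in range(1, len(returns)):
            sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns[t - 1]**2
        std_resid = returns / np.sqrt(sigma2)

        # Step 2: fit Johnson SU to the standardized residuals (maximum likelihood).
        a, b, loc, scale = stats.johnsonsu.fit(std_resid)

        # One-day 99% VaR forecast: conditional volatility times the SU quantile.
        q01 = stats.johnsonsu.ppf(0.01, a, b, loc=loc, scale=scale)
        print(f"99% one-day VaR forecast: {np.sqrt(sigma2[-1]) * q01:.4f}")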

    A quantitative evaluation of physical and digital approaches to centre of mass estimation

    Centre of mass is a fundamental anatomical and biomechanical parameter. Knowledge of centre of mass is essential to inform studies investigating locomotion and other behaviours, through its implications for segment movements and for whole-body factors such as posture. Previous studies have estimated centre of mass position for a range of organisms using various methodologies. However, few studies assess the accuracy of the methods they employ, and many provide only brief details of their methodologies. As such, no rigorous, detailed comparisons of accuracy and repeatability within and between methods currently exist. This paper therefore applies three methods common in the literature (suspension, scales and digital modelling) to three 'calibration objects' in the form of bricks, as well as to three birds, to determine centre of mass position. Application to bricks enables conclusions to be drawn on the absolute accuracy of each method, in addition to comparing these results to assess the relative value of the methodologies. Application to birds provided insights into the logistical challenges of applying these methods to biological specimens. For bricks we found that, provided appropriate repeats were conducted, the scales method yielded the most accurate predictions of centre of mass (within 1.49 mm), closely followed by digital modelling (within 2.39 mm), with results from suspension being the most distant (within 38.5 mm). The scales and digital methods also displayed low variability between centre of mass estimates, suggesting that they can accurately and consistently predict centre of mass position. Our suspension method resulted not only in high margins of error but also in substantial variability, highlighting problems with this method.
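
    For the digital-modelling approach mentioned above, a common formulation is the density-weighted mean of voxel coordinates in a voxelized model of the object. The sketch below assumes a uniform-density brick, so the estimate should land on the geometric centre; it is an illustration of the principle, not the paper's pipeline.

        # Centre of mass of a voxelized model as a weighted mean of voxel centres.
        import numpy as np

        # Voxel grid of a 210 x 100 x 70 mm brick at 1 mm resolution (uniform density).
        nx, ny, nz = 210, 100, 70
        density = np.ones((nx, ny, nz))

        coords = np.indices(density.shape).reshape(3, -1).T + 0.5  # voxel centres (mm)
        weights = density.ravel()
        com = (coords * weights[:, None]).sum(axis=0) / weights.sum()
        print("centre of mass (mm):", com)   # expect [105.  50.  35.] for the uniform brick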

    Assessment of the Accuracy of a Multi-Beam LED Scanner Sensor for Measuring Olive Canopies

    Canopy characterization has become important when trying to optimize any kind of agricultural operation in tall-growing crops such as olive. Many sensors and techniques have reported satisfactory results in these approaches, and in this work a 2D laser scanner was explored for measuring tree canopies in real-time conditions. The sensor was tested in both laboratory and field conditions to check its accuracy, its cone width, and its ability to characterize olive canopies in situ. The sensor was mounted on a mast and tested in laboratory conditions to check: (i) its accuracy at different measurement distances; (ii) its measurement cone width with different reflectivity targets; and (iii) the influence of the target's density on its accuracy. The field tests involved both isolated and hedgerow orchards, in which the measurements were taken manually and with the sensor. The canopy volume was estimated with a methodology consisting of revolving or extruding the canopy contour. The sensor showed high accuracy in the laboratory tests, except for the measurements performed at a distance of 1.0 m, which had a 60 mm error (6%); otherwise, error remained below 20 mm (1% relative error). The cone width depended on the target reflectivity, and the accuracy decreased with the target density.
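
    The revolve-or-extrude volume estimation mentioned above can be sketched as follows: for an isolated tree, the measured contour (radius as a function of height) is revolved about the trunk axis (disc method); for a hedgerow, a cross-sectional profile is extruded along the row. The radius profile and row length below are illustrative assumptions, not sensor data.

        # Canopy volume by revolving or extruding a measured contour profile.
        import numpy as np

        heights = np.linspace(0.0, 3.0, 31)                  # m above crown base
        radii = 1.2 * np.sin(np.pi * heights / 3.0)          # assumed radius profile (m)
        dz = heights[1] - heights[0]

        # Solid of revolution: V = integral of pi * r(h)^2 dh (trapezoidal rule).
        v_revolved = np.pi * ((radii**2)[:-1] + (radii**2)[1:]).sum() / 2 * dz
        print(f"isolated-tree canopy volume: {v_revolved:.2f} m^3")

        # Hedgerow: extrude one cross-section's area along a 10 m row segment.
        widths = 2 * radii                                   # canopy width profile (m)
        area = ((widths[:-1] + widths[1:]) / 2 * dz).sum()   # cross-section area (m^2)
        print(f"hedgerow canopy volume over 10 m: {area * 10.0:.2f} m^3")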