
    Stochastic cosmic ray sources and the TeV break in the all-electron spectrum

    Despite significant progress over more than 100 years, no accelerator has been unambiguously identified as the source of the locally measured flux of cosmic rays. High-energy electrons and positrons are of particular importance in the search for nearby sources, as radiative energy losses constrain their propagation to distances of about 1 kpc around 1 TeV. At the highest energies, the spectrum is therefore dominated and shaped by only a few sources whose properties can be inferred from the fine structure of the spectrum at energies currently accessed by experiments like AMS-02, CALET, DAMPE, Fermi-LAT, H.E.S.S. and ISS-CREAM. We present a stochastic model of the Galactic all-electron flux and evaluate its compatibility with the measurement recently presented by the H.E.S.S. collaboration. To this end, we have generated by Monte Carlo a large sample of the all-electron flux from an ensemble of random distributions of sources. We confirm the non-Gaussian nature of the probability density of fluxes at individual energies previously reported in analytical computations. For the first time, we also consider the correlations between the fluxes at different energies, treating the binned spectrum as a random vector and parametrising its joint distribution with the help of a pair-copula construction. We show that the spectral break observed in the all-electron spectrum by H.E.S.S. and DAMPE is statistically compatible with a distribution of astrophysical sources like supernova remnants or pulsars, but requires a rate smaller than the canonical supernova rate. This important result provides an astrophysical interpretation of the spectrum at TeV energies and allows differentiating astrophysical source models from exotic explanations, like dark matter annihilation. We also critically assess the reliability of using catalogues of known sources to model the electron-positron flux.
    Comment: 30 pages, 12 figures; extended discussion; accepted for publication in JCA
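    The pair-copula construction mentioned above assembles a joint distribution of the binned fluxes from bivariate copulas. As a minimal illustration (with made-up parameters, not the paper's fit), the sketch below samples from the elementary Gaussian building block of such a construction and checks its rank correlation:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def std_normal_cdf(x):
    """Standard-normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gaussian_pair_copula_sample(rho, n):
    """Draw n samples (u1, u2) from a bivariate Gaussian copula with
    correlation rho -- the elementary building block of a pair-copula
    (vine) construction."""
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    # Probability integral transform: normal margins -> uniform margins
    u = np.vectorize(std_normal_cdf)(z)
    return u[:, 0], u[:, 1]

u1, u2 = gaussian_pair_copula_sample(0.8, 50_000)
# Pearson correlation of the uniforms equals Spearman's rho of the normals,
# (6/pi)*arcsin(rho/2), approximately 0.786 for rho = 0.8
rho_grades = np.corrcoef(u1, u2)[0, 1]
```

    In a full pair-copula construction, bivariate blocks like this one are chained over conditional distributions to build the joint law of the whole binned spectrum.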

    A General Framework for Observation Driven Time-Varying Parameter Models

    We propose a new class of observation driven time series models that we refer to as Generalized Autoregressive Score (GAS) models. The driving mechanism of the GAS model is the scaled likelihood score. This provides a unified and consistent framework for introducing time-varying parameters in a wide class of non-linear models. The GAS model encompasses other well-known models such as the generalized autoregressive conditional heteroskedasticity, autoregressive conditional duration, autoregressive conditional intensity and single source of error models. In addition, the GAS specification gives rise to a wide range of new observation driven models. Examples include non-linear regression models with time-varying parameters, observation driven analogues of unobserved components time series models, multivariate point process models with time-varying parameters and pooling restrictions, new models for time-varying copula functions and models for time-varying higher order moments. We study the properties of GAS models and provide several non-trivial examples of their application.
    Keywords: dynamic models, time-varying parameters, non-linearity, exponential family, marked point processes, copulas

    Risk capital allocation and risk quantification in insurance companies

    The objective of this thesis is to investigate risk capital allocation methods in detail for both non-life and life insurance business. In non-life insurance business, loss models are generally linear with respect to the losses of business lines. In life insurance, however, loss models are generally not a linear function of the factor risks, e.g. the interest-rate factor, the mortality-rate factor, etc. In the first part of the thesis, we present the existing allocation methods and discuss their advantages and disadvantages. In a comprehensive simulation study, we examine the sensitivity of allocations to different allocation methods, different risk measures and different risk models in a non-life insurance business. We also show the possible usage of the Euclidean distance measure and rank correlation coefficients for the comparison of allocation methods. In the second part, we investigate the factor risk contribution theory and examine its application in a life annuity business. We provide two approximations that enable us to apply risk capital allocation methods directly to annuity values in order to measure factor risk contributions. We examine factor risk contributions for annuities with different terms to maturity and annuities payable at different times in the future. We also analyse the factor risk contributions under extreme scenarios for the factor risks.
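    A standard allocation method of the kind the thesis compares is the Euler (gradient) allocation. Under the standard-deviation risk measure it has a closed form, contribution_i = Cov(X_i, S)/sd(S), and the contributions add up to the portfolio risk. A minimal sketch with two hypothetical business lines (illustrative covariances, not the thesis's data):

```python
import numpy as np

def euler_sd_allocation(losses):
    """Euler capital allocation under the standard-deviation risk measure.

    losses: (n_samples, n_lines) array of business-line losses.
    Line i's contribution is Cov(X_i, S)/sd(S); by linearity of covariance
    the contributions sum exactly to sd(S) (full-allocation property)."""
    S = losses.sum(axis=1)
    sd_S = S.std(ddof=1)
    contrib = np.array([np.cov(losses[:, i], S, ddof=1)[0, 1] / sd_S
                        for i in range(losses.shape[1])])
    return contrib, sd_S

rng = np.random.default_rng(2)
# Two correlated lines: variances 4 and 1, covariance 1
L = rng.multivariate_normal([0.0, 0.0], [[4.0, 1.0], [1.0, 1.0]], size=100_000)
contrib, total = euler_sd_allocation(L)
# Theoretical contributions: 5/sqrt(7) and 2/sqrt(7)
```

    Other risk measures (VaR, expected shortfall) change the contribution formula, which is precisely the sensitivity the simulation study in the thesis measures.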

    A dynamic copula approach to recovering the index implied volatility skew

    Equity index implied volatility functions are known to be excessively skewed in comparison with implied volatility at the single-stock level. We study this stylized fact for the case of a major German stock index, the DAX, by recovering index implied volatility from simulating the 30-dimensional return system of all DAX constituents. Option prices are computed after risk neutralization of the multivariate process, which is estimated under the physical probability measure. The multivariate models belong to the class of copula asymmetric dynamic conditional correlation models. We show that moderate tail-dependence coupled with asymmetric correlation response to negative news is essential to explain the index implied volatility skew. Standard dynamic correlation models with zero tail-dependence fail to generate a sufficiently steep implied volatility skew.
    Keywords: Copula Dynamic Conditional Correlation, Basket Options, Multivariate GARCH Models, Change of Measure, Esscher Transform
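    The tail-dependence the abstract emphasizes can be made concrete with a copula that has it. As an illustrative stand-in (the paper's own specification is a copula-DCC model), the sketch below samples a Clayton copula, whose lower-tail dependence coefficient is lambda_L = 2^(-1/theta), and recovers it empirically:

```python
import numpy as np

def clayton_sample(theta, n, rng):
    """Sample a bivariate Clayton copula by inverting the conditional
    distribution C(v | u) = w (theta > 0)."""
    u = rng.random(n)
    w = rng.random(n)
    v = (u ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u, v

rng = np.random.default_rng(3)
theta = 2.0
u, v = clayton_sample(theta, 200_000, rng)

# Lower-tail dependence: P(V < q | U < q) -> 2**(-1/theta) as q -> 0
q = 0.01
lam_hat = np.mean((u < q) & (v < q)) / q
lam_true = 2.0 ** (-1.0 / theta)  # about 0.707 for theta = 2
```

    A Gaussian copula by contrast has lambda_L = 0: joint crashes are asymptotically no more likely than independence suggests, which is why zero-tail-dependence models understate the index skew.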

    Dynamic hedging of portfolio credit derivatives

    We compare the performance of various hedging strategies for index collateralized debt obligation (CDO) tranches across a variety of models and hedging methods during the recent credit crisis. Our empirical analysis shows evidence for market incompleteness: a large proportion of risk in the CDO tranches appears to be unhedgeable. We also show that, unlike what is commonly assumed, dynamic models do not necessarily perform better than static models, nor do high-dimensional bottom-up models perform better than simpler top-down models. When it comes to hedging, top-down and regression-based hedging with the index provide significantly better results during the credit crisis than bottom-up hedging with single-name credit default swap (CDS) contracts. Our empirical study also reveals that while significantly large moves ("jumps") do occur in CDS, index, and tranche spreads, these jumps do not necessarily occur on the default dates of index constituents, an observation which shows the insufficiency of some recently proposed portfolio credit risk models.
    Keywords: hedging, credit default swaps, portfolio credit derivatives, index default swaps, collateralized debt obligations, portfolio credit risk models, default contagion, spread risk, sensitivity-based hedging, variance minimization
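    The regression-based hedging with the index mentioned above reduces to a variance-minimizing hedge ratio. A minimal sketch on synthetic spread changes (the factor loading 0.6 and noise level are made up for illustration):

```python
import numpy as np

def var_min_hedge_ratio(d_tranche, d_index):
    """Variance-minimizing (regression-based) hedge ratio.

    Minimizing Var(d_tranche - h * d_index) over h gives
    h* = Cov(d_tranche, d_index) / Var(d_index)."""
    return np.cov(d_tranche, d_index, ddof=1)[0, 1] / np.var(d_index, ddof=1)

rng = np.random.default_rng(4)
d_index = rng.standard_normal(10_000)
# Hypothetical tranche spread changes: loading 0.6 on the index plus noise
d_tranche = 0.6 * d_index + 0.3 * rng.standard_normal(10_000)

h = var_min_hedge_ratio(d_tranche, d_index)
residual_var = np.var(d_tranche - h * d_index, ddof=1)
```

    The residual variance that no choice of h can remove is one simple diagnostic of the market incompleteness the study documents: with a single index hedge, the idiosyncratic component of the tranche risk remains unhedged.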

    Variance-based reliability sensitivity with dependent inputs using failure samples

    Reliability sensitivity analysis is concerned with measuring the influence of a system's uncertain input parameters on its probability of failure. Statistically dependent inputs present a challenge in both computing and interpreting these sensitivity indices; such dependencies require discerning between variable interactions produced by the probabilistic model describing the system inputs and those produced by the computational model describing the system itself. To accomplish such a separation of effects in the context of reliability sensitivity analysis, we extend an idea originally proposed by Mara and Tarantola (2012) for model outputs unrelated to rare events. We compute the independent (influence via the computational model) and full (influence via both the computational and the probabilistic model) contributions of all inputs to the variance of the indicator function of the rare event. The full set of variance-based sensitivity indices is computed from a single set of failure samples by considering d different hierarchically structured isoprobabilistic transformations of these samples from the original d-dimensional space of dependent inputs to standard-normal space. The failure samples can be obtained as the byproduct of a single run of a sample-based rare event estimation method, so that no additional evaluations of the computational model are required. We demonstrate the approach on a test function and two engineering problems.
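    The quantity being allocated here is the variance of the failure indicator, Var(1_F) = p_f(1 - p_f). As a minimal sketch of the underlying idea, for independent inputs (the paper's contribution is the dependent case via hierarchical isoprobabilistic transformations), a first-order index Var(E[1_F | x_i]) / Var(1_F) can be estimated by binning one input; the limit state g(x) = x1 + 0.5*x2 > 3 below is a hypothetical example, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical limit state: failure when x1 + 0.5*x2 > 3,
# with independent standard-normal inputs
n = 400_000
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
fail = (x1 + 0.5 * x2 > 3.0).astype(float)

pf = fail.mean()
var_fail = pf * (1.0 - pf)  # variance of the Bernoulli failure indicator

def first_order_index(x, indicator, var_total, n_bins=50):
    """Estimate S_i = Var(E[1_F | x]) / Var(1_F) by averaging the indicator
    within equal-probability bins of x (so the bin variance is unweighted)."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    which = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    cond = np.array([indicator[which == b].mean() for b in range(n_bins)])
    return np.var(cond) / var_total

S1 = first_order_index(x1, fail, var_fail)
S2 = first_order_index(x2, fail, var_fail)
# x1, with the larger coefficient, should carry the larger index
```

    With dependent inputs this direct conditioning conflates probabilistic and computational effects, which is exactly what the d transformed coordinate systems in the paper disentangle.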