
    Hot new directions for quasi-Monte Carlo research in step with applications

    This article provides an overview of some interfaces between the theory of quasi-Monte Carlo (QMC) methods and applications. We summarize three QMC theoretical settings: first order QMC methods in the unit cube $[0,1]^s$ and in $\mathbb{R}^s$, and higher order QMC methods in the unit cube. One important feature is that their error bounds can be independent of the dimension $s$ under appropriate conditions on the function spaces. Another important feature is that good parameters for these QMC methods can be obtained by fast, efficient algorithms even when $s$ is large. We outline three different applications and explain how they can tap into the different QMC theoretical settings. We also discuss three cost-saving strategies that can be combined with QMC in these applications. Much of this recent QMC theory and many of these methods were developed not in isolation, but in close connection with applications.
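    The abstract does not tie itself to one construction, but a rough feel for first order QMC in $[0,1]^s$ can be given by a small sketch. The Python snippet below (an illustration, not the article's method) compares plain Monte Carlo with a randomly shifted rank-1 lattice rule on a toy weighted integrand; the generating vector is an ad-hoc choice standing in for one produced by a fast component-by-component construction.

```python
import numpy as np

def f(x):
    # Toy integrand on [0,1]^s with decaying coordinate weights; its exact integral is 1.
    s = x.shape[1]
    weights = 1.0 / np.arange(1, s + 1) ** 2
    return np.prod(1.0 + weights * (x - 0.5), axis=1)

def mc_estimate(n, s, rng):
    # Plain Monte Carlo with n i.i.d. uniform points.
    return f(rng.random((n, s))).mean()

def lattice_qmc_estimate(n, s, rng):
    # Randomly shifted rank-1 lattice rule; z is an ad-hoc generating vector
    # (an assumption for illustration, not one from a CBC construction).
    z = np.array([1, 182667, 469891, 498753, 110745, 446247,
                  250185, 118627, 245333, 283199])[:s]
    k = np.arange(n).reshape(-1, 1)
    shift = rng.random(s)                      # random shift, e.g. for error estimation
    points = np.mod(k * z / n + shift, 1.0)    # n shifted lattice points in [0,1)^s
    return f(points).mean()

rng = np.random.default_rng(0)
s, n = 10, 4096
print("exact integral = 1")
print("MC estimate :", mc_estimate(n, s, rng))
print("QMC estimate:", lattice_qmc_estimate(n, s, rng))
```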

    Computing Tails of Compound Distributions Using Direct Numerical Integration

    An efficient adaptive direct numerical integration (DNI) algorithm is developed for computing high quantiles and conditional Value at Risk (CVaR) of compound distributions using characteristic functions. A key innovation of the numerical scheme is an effective tail integration approximation that reduces the truncation errors significantly with little extra effort. High-precision results for the 0.999 quantile and CVaR were obtained for compound losses with heavy tails and a very wide range of loss frequencies using the DNI, Fast Fourier Transform (FFT) and Monte Carlo (MC) methods. These results, particularly relevant to operational risk modelling, can serve as benchmarks for comparing different numerical methods. We found that the adaptive DNI can achieve high accuracy with relatively coarse grids. It is much faster than MC and competitive with FFT in computing high quantiles and CVaR of compound distributions in the case of moderate to high frequencies and heavy tails.
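    As a point of reference for the methods compared above, here is a minimal Monte Carlo baseline (one of the benchmark methods mentioned, not the adaptive DNI scheme itself) that estimates the 0.999 quantile and CVaR of a compound Poisson-lognormal loss; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, mu, sigma = 10.0, 0.0, 2.0      # Poisson frequency and lognormal severity (assumed values)
n_sims, alpha = 1_000_000, 0.999

counts = rng.poisson(lam, n_sims)                      # number of losses in each simulated period
severities = rng.lognormal(mu, sigma, counts.sum())    # all individual losses, drawn at once
total = np.zeros(n_sims)
np.add.at(total, np.repeat(np.arange(n_sims), counts), severities)  # sum losses per period

var = np.quantile(total, alpha)              # 0.999 quantile (VaR)
cvar = total[total >= var].mean()            # conditional VaR / expected shortfall
print(f"VaR_{alpha}: {var:,.1f}   CVaR_{alpha}: {cvar:,.1f}")
```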

    Calculation of aggregate loss distributions

    Estimation of the operational risk capital under the Loss Distribution Approach requires evaluation of aggregate (compound) loss distributions, which is one of the classic problems in risk theory. Closed-form solutions are not available for the distributions typically used in operational risk. However, with modern computer processing power, these distributions can be calculated virtually exactly using numerical methods. This paper reviews numerical algorithms that can be successfully used to calculate aggregate loss distributions. In particular, Monte Carlo, Panjer recursion and Fourier transformation methods are presented and compared. Also, several closed-form approximations based on moment matching and asymptotic results for heavy-tailed distributions are reviewed.
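    For concreteness, the sketch below implements one of the reviewed algorithms, Panjer recursion, for a compound Poisson distribution; the severity distribution is assumed to have been discretized onto an equally spaced grid beforehand, and the frequency and severity values are illustrative only.

```python
import numpy as np

def panjer_poisson(lam, f, n):
    """Aggregate-loss pmf g[0..n-1] from Poisson(lam) frequency and discretized severity pmf f."""
    g = np.zeros(n)
    g[0] = np.exp(lam * (f[0] - 1.0))            # P(aggregate loss = 0)
    for k in range(1, n):
        j = np.arange(1, min(k, len(f) - 1) + 1)
        g[k] = (lam / k) * np.sum(j * f[j] * g[k - j])
    return g

# Example: Poisson frequency with mean 3, severity uniform on grid points 1..10 (illustrative).
sev = np.zeros(11)
sev[1:] = 0.1
agg = panjer_poisson(3.0, sev, 200)
print("P(S <= 30) ≈", agg[:31].sum())
print("total probability mass captured:", agg.sum())
```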

    How Problematic are Internal Euro Area Differences?

    Keywords: currency; economic integration; EMU; Euro; European Central Bank; political economy

    Transfer Functions for Protein Signal Transduction: Application to a Model of Striatal Neural Plasticity

    We present a novel formulation for biochemical reaction networks in the context of signal transduction. The model consists of input-output transfer functions, which are derived from differential equations, using stable equilibria. We select a set of 'source' species, which receive input signals. Signals are transmitted to all other species in the system (the 'target' species) with a specific delay and transmission strength. The delay is computed as the maximal reaction time until a stable equilibrium for the target species is reached, in the context of all other reactions in the system. The transmission strength is the concentration change of the target species. The computed input-output transfer functions can be stored in a matrix, fitted with parameters, and recalled to build discrete dynamical models. By separating reaction time and concentration we can greatly simplify the model, circumventing typical problems of complex dynamical systems. The transfer function transformation can be applied to mass-action kinetic models of signal transduction. The paper shows that this approach yields significant insight, while remaining an executable dynamical model for signal transduction. In particular, we can deconstruct the complex system into local transfer functions between individual species. As an example, we examine modularity and signal integration using a published model of striatal neural plasticity. The modules that emerge correspond to a known biological distinction between calcium-dependent and cAMP-dependent pathways. We also found that overall interconnectedness depends on the magnitude of input, with high connectivity at low input and less connectivity at moderate to high input. This general result, which directly follows from the properties of individual transfer functions, contradicts notions of ubiquitous complexity by showing input-dependent signal transmission inactivation.
    Comment: 13 pages, 5 tables, 15 figures
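    One reading of the transfer-function construction described above can be sketched on a hypothetical two-species mass-action system (not the published striatal model): the "delay" is taken as the time for the target species to come within a tolerance of its new equilibrium after a step input to the source species, and the "transmission strength" as its concentration change. Rate constants and tolerance are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_on, k_off = 1.0, 0.2          # illustrative mass-action rate constants (assumptions)

def rhs(t, y):
    a, b = y                    # source species A, target species B; A <-> B
    return [-k_on * a + k_off * b, k_on * a - k_off * b]

def transfer_function(input_conc, t_max=50.0, tol=1e-3):
    """Return (delay, strength) of the A -> B transfer for a step input to A."""
    sol = solve_ivp(rhs, (0.0, t_max), [input_conc, 0.0], max_step=0.05)
    b = sol.y[1]
    b_eq = b[-1]                                      # approximate stable equilibrium of B
    within = np.abs(b - b_eq) < tol * max(b_eq, 1e-12)
    delay = sol.t[np.argmax(within)]                  # first time within tolerance of equilibrium
    strength = b_eq                                   # concentration change of B (starts at 0)
    return delay, strength

for u in (0.1, 1.0, 10.0):
    d, s = transfer_function(u)
    print(f"input={u:5.1f}   delay={d:6.2f}   strength={s:8.4f}")
```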

    Particle Density Estimation with Grid-Projected Adaptive Kernels

    The reconstruction of smooth density fields from scattered data points is a procedure that has multiple applications in a variety of disciplines, including Lagrangian (particle-based) models of solute transport in fluids. In random walk particle tracking (RWPT) simulations, particle density is directly linked to solute concentrations, which is normally the main variable of interest, not just for visualization and post-processing of the results, but also for the computation of non-linear processes, such as chemical reactions. Previous works have shown the superiority of kernel density estimation (KDE) over other methods such as binning, in terms of its ability to accurately estimate the "true" particle density relying on a limited amount of information. Here, we develop a grid-projected KDE methodology to determine particle densities by applying kernel smoothing on a pilot binning; this may be seen as a "hybrid" approach between binning and KDE. The kernel bandwidth is optimized locally. Through simple implementation examples, we elucidate several appealing aspects of the proposed approach, including its computational efficiency and the possibility of accounting for typical boundary conditions, which would otherwise be cumbersome in conventional KDE.
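    As an illustration of the "hybrid" binning-plus-KDE idea, the one-dimensional sketch below projects synthetic particles onto a pilot binning and then smooths the binned field with a Gaussian kernel on the grid. Unlike the locally optimized bandwidth described above, a single global bandwidth is used here, and the particle positions are a stand-in for RWPT output.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(2)
particles = rng.normal(0.0, 1.0, 5_000)      # synthetic particle positions (illustrative)

# Pilot binning onto a regular grid.
edges = np.linspace(-5.0, 5.0, 201)
dx = edges[1] - edges[0]
counts, _ = np.histogram(particles, bins=edges)
pilot = counts / (particles.size * dx)       # histogram (binned) density estimate

# Kernel smoothing applied to the binned field (Gaussian kernel, bandwidth in physical units).
bandwidth = 0.2                              # illustrative global value, not locally optimized
smoothed = gaussian_filter1d(pilot, sigma=bandwidth / dx, mode="constant")

centers = 0.5 * (edges[:-1] + edges[1:])
print("integral of smoothed density ≈", np.trapz(smoothed, centers))
```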