Resistance scaling at the Kosterlitz-Thouless transition
We study the linear resistance at the Kosterlitz-Thouless transition by Monte
Carlo simulation of vortex dynamics. Finite-size scaling analysis of our data
shows excellent agreement with the scaling properties of the Kosterlitz-Thouless
transition. We also compare our results for the linear resistance with
experiments. By adjusting the vortex chemical potential to an optimum value,
the resistance at temperatures above the transition temperature agrees well
with experiments over many decades.

Comment: 7 pages, 4 PostScript figures included, LaTeX, KTH-CMT-94-00
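The vortex dynamics studied in the abstract lives on top of configurations of the 2D XY model, which hosts the Kosterlitz-Thouless transition. A minimal sketch of a Metropolis sweep for that model (an illustrative building block, not the authors' vortex-dynamics or resistance algorithm) might look like:

```python
import math
import random

def metropolis_sweep(theta, L, T, rng=random):
    """One Metropolis sweep of the 2D XY model on an L x L lattice
    with periodic boundaries. theta holds spin angles in [0, 2*pi)."""
    beta = 1.0 / T
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        old = theta[i][j]
        new = old + rng.uniform(-math.pi, math.pi)
        # Energy change from the four nearest-neighbour bonds,
        # H = -sum_<ij> cos(theta_i - theta_j).
        dE = 0.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = theta[(i + di) % L][(j + dj) % L]
            dE += math.cos(old - nb) - math.cos(new - nb)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            theta[i][j] = new % (2 * math.pi)
    return theta
```

Vortex observables (and the linear resistance via vortex motion) would be measured on configurations equilibrated by sweeps like this.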
Asian Sovereign Debt and Country Risk
This paper analyzes the systematic risk of sovereign bonds in four East Asian countries: China, Malaysia, the Philippines, and Thailand. A bivariate stochastic volatility model that allows for time-varying correlation is estimated with Markov Chain Monte Carlo simulation. The volatilities and correlation are then used to calculate the time-varying betas. The results show that country-specific systematic risk in Asian sovereign bonds varies over time. When adjusting for inherent exchange rate risk, the pattern of systematic risk is similar, even though the level is generally lower. The findings have important implications for international portfolio managers who invest in emerging sovereign bonds and for those who need benchmark instruments to analyze risk in assets such as corporate bonds in the emerging Asian financial markets.

Keywords: Asia; sovereign bonds; systematic risk; stochastic volatility; Markov Chain Monte Carlo
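The step from fitted volatilities and correlation to time-varying betas is, in the standard definition, beta_t = rho_t * sigma_bond,t / sigma_market,t. A minimal sketch under that assumption (the paper's exact specification may differ; here the series are just inputs rather than MCMC output):

```python
def time_varying_beta(rho, sigma_bond, sigma_market):
    """Time-varying beta of a bond against a market factor:
    beta_t = rho_t * sigma_bond_t / sigma_market_t.
    In the paper's setting, rho and the two volatility series would
    come from a fitted bivariate stochastic volatility model."""
    return [r * sb / sm for r, sb, sm in zip(rho, sigma_bond, sigma_market)]
```

A beta series above 1 then flags periods in which the bond's country-specific systematic risk exceeds that of the benchmark.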
Proxy simulation schemes using likelihood ratio weighted Monte Carlo for generic robust Monte-Carlo sensitivities and high accuracy drift approximation (with applications to the LIBOR Market Model)
We consider a generic framework for generating likelihood ratio weighted Monte Carlo simulation paths: we use one simulation scheme K° (the proxy scheme) to generate realizations and then reinterpret them as realizations of another scheme K* (the target scheme) by adjusting the measure (via a likelihood ratio) to match the distribution of K*, such that E( f(K*) | F_t ) = E( f(K°) w | F_t ). This is done numerically in every time step, on every path. This makes the approach independent of the product (the function f) and even of the model; it only depends on the numerical scheme. The approach is essentially a numerical version of the likelihood ratio method [Broadie & Glasserman, 1996] and of Malliavin calculus [Fournie et al., 1999; Malliavin, 1997], reconsidered at the level of the discrete numerical simulation scheme. Since the numerical scheme represents a time-discrete stochastic process sampled on a discrete probability space, the essence of the method may be motivated without a deeper mathematical understanding of the time-continuous theory (e.g. Malliavin calculus). The framework is completely generic and may be used for high-accuracy drift approximations and for the robust calculation of partial derivatives of expectations w.r.t. model parameters (i.e. sensitivities, a.k.a. Greeks) by applying finite differences, i.e. by reevaluating the expectation with a model with shifted parameters. We present numerical results using a Monte Carlo simulation of the LIBOR Market Model for benchmarking.

Keywords: Monte-Carlo, Likelihood Ratio, Malliavin Calculus, Sensitivities, Greeks
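The per-step reweighting can be illustrated in the simplest setting: two one-dimensional Euler schemes that share the diffusion coefficient but differ in drift, so each step's weight factor is a ratio of Gaussian transition densities. This is a sketch of the idea only (scalar state, drift-only difference, illustrative names), not the paper's general scheme:

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def proxy_scheme_expectation(f, x0, mu_proxy, mu_target, sigma, dt,
                             n_steps, n_paths, rng=random):
    """Simulate paths under the proxy Euler scheme and reweight each
    path by the product of target/proxy one-step transition-density
    ratios, so the weighted average estimates E[f(X_T)] under the
    target scheme: E[f(K*)] = E[f(K^o) w]."""
    total = 0.0
    for _ in range(n_paths):
        x, w = x0, 1.0
        for _ in range(n_steps):
            z = rng.gauss(0.0, 1.0)
            x_new = x + mu_proxy(x) * dt + sigma * math.sqrt(dt) * z
            # One-step likelihood ratio: target density over proxy
            # density, both evaluated at the realized point x_new.
            num = normal_pdf(x_new, x + mu_target(x) * dt, sigma * math.sqrt(dt))
            den = normal_pdf(x_new, x + mu_proxy(x) * dt, sigma * math.sqrt(dt))
            w *= num / den
            x = x_new
        total += f(x) * w
    return total / n_paths
```

Because the weight depends only on the scheme, differentiating the weighted expectation with respect to a model parameter (by finite differences on the target drift) gives Greeks without resimulating the paths.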
Radiative equilibrium in Monte Carlo radiative transfer using frequency distribution adjustment
The Monte Carlo method is a powerful tool for performing radiative
equilibrium calculations, even in complex geometries. The main drawback of the
standard Monte Carlo radiative equilibrium methods is that they require
iteration, which makes them numerically very demanding. Bjorkman & Wood
recently proposed a frequency distribution adjustment scheme, which allows
radiative equilibrium Monte Carlo calculations to be performed without
iteration, by choosing the frequency of each re-emitted photon such that it
corrects for the incorrect spectrum of the previously re-emitted photons.
Although the method appears to yield correct results, we argue that its
theoretical basis is not completely transparent, and that it is not completely
clear whether this technique is an exact rigorous method, or whether it is just
a good and convenient approximation. We critically study the general problem of
how an already sampled distribution can be adjusted to a new distribution by
adding data points sampled from an adjustment distribution. We show that this
adjustment is not always possible, and that it depends on the shape of the
original and desired distributions, as well as on the relative number of data
points that can be added. Applying this theorem to radiative equilibrium Monte
Carlo calculations, we provide a firm theoretical basis for the frequency
distribution adjustment method of Bjorkman & Wood, and we demonstrate that this
method provides the correct frequency distribution through the additional
requirement of radiative equilibrium. We discuss the advantages and limitations
of this approach, and show that it can easily be combined with the presence of
additional heating sources and the concept of photon weighting. However, the
method may fail if small dust grains are included... (abridged)

Comment: 17 pages, 2 figures, accepted for publication in New Astronomy
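The distribution-adjustment problem studied in the abstract can be made concrete: if n_old points were drawn from a density p_old and n_add points are still to be drawn, the mixture identity (n_old*p_old + n_add*p_adj)/(n_old + n_add) = p_new determines the adjustment density. A sketch (illustrative names, one-dimensional densities as plain functions):

```python
def adjustment_density(p_old, p_new, n_old, n_add):
    """Density from which the n_add extra points must be drawn so that
    the combined sample of n_old + n_add points follows p_new, given
    that n_old points were already drawn from p_old. Wherever the
    returned function is negative, no valid adjustment distribution
    exists -- the failure mode the abstract refers to, which depends
    on the two shapes and on the ratio n_add/n_old."""
    total = n_old + n_add
    def p_adj(x):
        return (total * p_new(x) - n_old * p_old(x)) / n_add
    return p_adj
```

In the radiative-equilibrium application, p_old is the spectrum of photons already re-emitted at the previous temperature and p_new the spectrum required at the updated temperature; the energy balance supplies the extra constraint that keeps p_adj non-negative.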
A Study of the Water Cherenkov Calorimeter
The novel idea of a water Cherenkov calorimeter made of water tanks, as a
next-generation neutrino detector for neutrino factories and neutrino beams, is
investigated. A water tank prototype with dimensions of 1 x 1 x 13 m^3 is
constructed; its performance is studied and compared with a GEANT4-based Monte
Carlo simulation. Using measured parameters of the water tank, including the
light collection efficiency, attenuation length, angular-dependent response,
etc., a detailed Monte Carlo simulation demonstrates that the detector
performance is excellent for identifying neutrino charged-current events while
rejecting neutral-current and wrong-flavor backgrounds.

Comment: 19 pages, 14 figures, submitted to NI
Empirical and Simulated Adjustments of Composite Likelihood Ratio Statistics
Composite likelihood inference has gained much popularity thanks to its
computational manageability and its theoretical properties. Unfortunately,
performing composite likelihood ratio tests is inconvenient because of their
awkward asymptotic distribution. There are many proposals for adjusting
composite likelihood ratio tests in order to recover an asymptotic chi square
distribution, but they all depend on the sensitivity and variability matrices.
The same is true for Wald-type and score-type counterparts. In realistic
applications sensitivity and variability matrices usually need to be estimated,
but there are no comparisons of the performance of composite likelihood based
statistics in such an instance. A comparison of the accuracy of inference based
on the statistics considering two methods typically employed for estimation of
sensitivity and variability matrices, namely an empirical method that exploits
independent observations, and Monte Carlo simulation, is performed. The results
in two examples involving the pairwise likelihood show that a very large number
of independent observations should be available in order to obtain accurate
coverages using empirical estimation, while limited simulation from the full
model provides accurate results regardless of the availability of independent
observations.Comment: 15 page
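For a scalar parameter the adjustment the abstract alludes to reduces to a rescaling: the composite LRT statistic W converges to (J/H) * chi-square(1), where H is the sensitivity and J the variability, so (H/J) * W recovers the chi-square limit. A sketch under that scalar simplification (the general case involves the full matrices; the empirical estimates below use independent score and Hessian contributions):

```python
def empirical_H_J(scores, neg_hessians):
    """Empirical estimates from independent observations: J as the
    mean squared score contribution (the score has mean zero at the
    true parameter), H as the mean negative Hessian contribution."""
    n = len(scores)
    J = sum(s * s for s in scores) / n
    H = sum(neg_hessians) / n
    return H, J

def adjusted_clrt(w, H, J):
    """Rescale a scalar-parameter composite likelihood ratio statistic:
    W ~ (J/H) * chi2_1 asymptotically, so (H/J) * W ~ chi2_1."""
    return (H / J) * w
```

The paper's Monte Carlo alternative would replace the empirical averages by averages over datasets simulated from the fitted full model, which is what makes it usable when few independent observations exist.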
Monte Carlo Methods for Equilibrium and Nonequilibrium Problems in Interfacial Electrochemistry
We present a tutorial discussion of Monte Carlo methods for equilibrium and
nonequilibrium problems in interfacial electrochemistry. The discussion is
illustrated with results from simulations of three specific systems: bromine
adsorption on silver (100), underpotential deposition of copper on gold (111),
and electrodeposition of urea on platinum (100).

Comment: RevTeX, 14 pages, 8 figures. To appear in the book _Interfacial Electrochemistry_
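Equilibrium simulations of electrosorption systems like those listed above are commonly built on lattice-gas Monte Carlo, where each adsorption site flips between empty and occupied under an electrochemical potential. A minimal illustrative sweep (a generic lattice-gas model with nearest-neighbour coupling, not the specific Hamiltonians used for the three systems in the tutorial):

```python
import math
import random

def gcmc_sweep(occ, L, mu_bar, eps_nn, T, rng=random):
    """One grand-canonical Metropolis sweep of a lattice gas on an
    L x L square lattice with periodic boundaries. Site occupations
    occ[i][j] are 0 or 1; the model Hamiltonian is
    H = -eps_nn * sum_<ij> c_i c_j - mu_bar * sum_i c_i,
    with mu_bar an effective electrochemical potential."""
    beta = 1.0 / T
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        n_nb = sum(occ[(i + di) % L][(j + dj) % L]
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
        c = occ[i][j]
        d = 1 - 2 * c          # +1 for adsorption, -1 for desorption
        dE = -d * (eps_nn * n_nb + mu_bar)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            occ[i][j] = 1 - c
    return occ
```

Sweeping while ramping mu_bar (i.e. the electrode potential) traces out adsorption isotherms; nonequilibrium quantities follow from interpreting the sweeps as dynamics.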