    Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms

    We present a mathematical framework for constructing and analyzing parallel algorithms for lattice Kinetic Monte Carlo (KMC) simulations. The resulting algorithms have the capacity to simulate a wide range of spatio-temporal scales in spatially distributed, non-equilibrium physicochemical processes with complex chemistry and transport micro-mechanisms. The algorithms can be tailored to specific hierarchical parallel architectures such as multi-core processors or clusters of Graphical Processing Units (GPUs). The proposed parallel algorithms are controlled-error approximations of kinetic Monte Carlo algorithms, departing from the predominant paradigm of creating parallel KMC algorithms with exactly the same master equation as the serial one. Our methodology relies on a spatial decomposition of the Markov operator underlying the KMC algorithm into a hierarchy of operators corresponding to the processors' structure in the parallel architecture. Based on this operator decomposition, we formulate Fractional Step Approximation schemes by employing the Trotter Theorem and its random variants; these schemes (a) determine the communication schedule between processors, and (b) are run independently on each processor through a serial KMC simulation, called a kernel, on each fractional-step time window. Furthermore, the proposed mathematical framework allows us to rigorously justify the numerical and statistical consistency of the proposed algorithms, showing the convergence of our approximating schemes to the original serial KMC. The approach also provides a systematic evaluation of different processor communication schedules.
    Comment: 34 pages, 9 figures
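
    As a rough illustration of the fractional-step construction (generic notation, not necessarily the paper's): if the lattice is partitioned among p processors so that the Markov generator of the KMC process decomposes as L = L_1 + ... + L_p, the Lie-Trotter product formula justifies replacing the full semigroup on each time window \Delta t by a product of sub-lattice semigroups, each realized by an independent serial KMC kernel on one processor:

        e^{\Delta t\, L} \;=\; e^{\Delta t (L_1 + \cdots + L_p)} \;\approx\; \prod_{k=1}^{p} e^{\Delta t\, L_k},
        \qquad
        e^{T L} \;=\; \lim_{n \to \infty} \Bigl( \prod_{k=1}^{p} e^{\frac{T}{n} L_k} \Bigr)^{n}.

    Different (possibly randomized) orderings of the factors correspond to different processor communication schedules, which is the sense in which the splitting itself determines the schedule.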

    Fused kernel-spline smoothing for repeatedly measured outcomes in a generalized partially linear model with functional single index

    We propose a generalized partially linear functional single-index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and the regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use local kernel smoothing to estimate the unspecified coefficient functions of time, and B-splines to estimate the unspecified function of the single-index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large-sample properties of the estimation procedure and show a different convergence rate for each component of the model. The asymptotic properties of the kernel and regression spline methods combined in a nested fashion had not been studied prior to this work, even in the independent-data case.
    Comment: Published at http://dx.doi.org/10.1214/15-AOS1330 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
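
    As a hedged illustration of this model class (generic symbols, not necessarily the paper's notation): for subject i measured repeatedly at times t_{ij}, a generalized partially linear functional single-index structure can be written as

        g\bigl(\mathbb{E}[\,Y_{ij} \mid X_{ij}, Z_{ij}\,]\bigr)
        \;=\; X_{ij}^{\top}\,\beta(t_{ij}) \;+\; \psi\bigl(Z_{ij}^{\top}\,\alpha(t_{ij})\bigr),

    where g is a known link (covering continuous and discrete outcomes), the coefficient functions of time \beta(\cdot) and \alpha(\cdot) are the components estimated by local kernel smoothing, and the unspecified function \psi(\cdot) of the single-index component is the one estimated with B-splines.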

    Panel Data Econometrics in R: The plm Package

    Panel data econometrics is one of the main fields in the profession, but most of the models used are difficult to estimate with R. plm is an R package that aims to make the estimation of linear panel models straightforward. plm provides functions to estimate a wide variety of models and to make (robust) inference.
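
    plm itself is an R package; purely as an illustration of the simplest member of the model class it targets, the sketch below (Python/NumPy, synthetic data, not plm's implementation) computes the within (fixed-effects) estimator of a linear panel model by demeaning each variable within entities and running OLS on the transformed data.

        # Within (fixed-effects) estimator for y_it = x_it' beta + a_i + e_it.
        # Minimal sketch on synthetic data; the entity effects a_i are swept out
        # by the within transformation, so OLS on the demeaned data recovers beta.
        import numpy as np

        rng = np.random.default_rng(0)
        n_entities, n_periods, n_vars = 50, 10, 2

        entity = np.repeat(np.arange(n_entities), n_periods)
        X = rng.normal(size=(n_entities * n_periods, n_vars))
        alpha = rng.normal(size=n_entities)[entity]          # entity fixed effects
        beta_true = np.array([1.5, -0.7])
        y = X @ beta_true + alpha + rng.normal(scale=0.5, size=entity.size)

        def within_demean(a, groups):
            """Subtract each entity's own mean from its observations."""
            out = np.asarray(a, dtype=float).copy()
            for g in np.unique(groups):
                idx = groups == g
                out[idx] -= out[idx].mean(axis=0)
            return out

        Xw, yw = within_demean(X, entity), within_demean(y, entity)
        beta_hat, *_ = np.linalg.lstsq(Xw, yw, rcond=None)   # OLS on demeaned data
        print(beta_hat)                                      # close to beta_true

    plm's one-way fixed-effects ("within") model follows the same logic; the package also covers random-effects, first-difference and other estimators, together with robust covariance estimation.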

    Residual Risk Revisited

    The Capital Asset Pricing Model in conjunction with the usual market model assumptions implies that well-diversified portfolios should be mean-variance efficient and, hence, that betas computed with respect to such indices should completely explain expected returns on individual assets. In fact, there is now a large body of evidence indicating that the market proxies usually employed in empirical tests are not mean-variance efficient. Moreover, there is considerable evidence suggesting that these rejections are in part a consequence of the presence of omitted risk factors which are associated with nonzero risk premia in the residuals from the single-index market model. Consequently, the idiosyncratic variances from the one-factor model should partially reflect exposure to these omitted sources of systematic risk and, hence, should help explain expected returns. There are two plausible explanations for the inability to obtain statistically reliable estimates of a linear residual risk effect in the previous literature: (1) nonlinearity of the residual risk effect, and (2) the inadequacy of the statistical procedures employed to measure it. The results presented below indicate that the econometric methods employed previously are the culprits. Pronounced residual risk effects are found in the whole fifty-four-year sample, and in numerous five-year subperiods as well, when weighted least squares estimation is coupled with the appropriate corrections for sampling error in the betas and residual variances of individual security returns. In addition, the evidence suggests that it is important to take account of the nonnormality and heteroskedasticity of security returns when making the appropriate measurement error corrections in cross-sectional regressions. Finally, the results are sensitive to the specification of the model for expected returns.
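
    To fix ideas with standard notation (a generic sketch, not necessarily the paper's exact specification): the single-index market model and the cross-sectional residual-risk regression take the form

        R_{it} \;=\; \alpha_i + \beta_i R_{mt} + \varepsilon_{it},
        \qquad
        \bar{R}_i \;=\; \gamma_0 + \gamma_1 \hat{\beta}_i + \gamma_2 \hat{\sigma}^2_{\varepsilon_i} + u_i .

    If the market proxy were mean-variance efficient, \gamma_2 = 0; omitted priced factors that load on the residuals imply \gamma_2 \neq 0. Because \hat{\beta}_i and \hat{\sigma}^2_{\varepsilon_i} are estimated with error, the second-pass regression calls for weighted least squares together with corrections for that sampling error, which is the econometric issue the abstract points to.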

    Glosarium Matematika

    273 p.; 24 cm

    Once Again, is Openness Good for Growth?

    Rodriguez and Rodrik (2000) argue that the relation between openness and growth is still an open question. One of the main problems in assessing the effect is the endogeneity of the relation. To address this issue, this paper applies the identification-through-heteroskedasticity methodology to estimate the effect of openness on growth while properly controlling for the effect of growth on openness. The results suggest that openness has a positive, although small, effect on growth. This result stands despite the equally robust effect running from growth to openness.
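
    A compact sketch of the identification-through-heteroskedasticity argument (generic bivariate notation; the paper's system also includes controls): let growth g and openness o be determined simultaneously,

        g_i = \beta\, o_i + \varepsilon_i, \qquad
        o_i = \gamma\, g_i + \eta_i, \qquad
        \operatorname{cov}(\varepsilon_i, \eta_i) = 0 .

    With a single variance regime, the covariance matrix of (g, o) supplies three moments for four unknowns (\beta, \gamma, \sigma^2_\varepsilon, \sigma^2_\eta), so \beta is not identified. If the sample splits into two regimes across which the shock variances shift while \beta and \gamma stay constant, the two covariance matrices supply six moments for six unknowns, identifying the effect of openness on growth while controlling for the reverse effect of growth on openness.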

    Decorrelation of Neutral Vector Variables: Theory and Applications

    In this paper, we propose novel strategies for decorrelating neutral vector variables. Two fundamental invertible transformations, namely the serial nonlinear transformation and the parallel nonlinear transformation, are proposed to carry out the decorrelation. For a neutral vector variable, which is not multivariate Gaussian distributed, conventional principal component analysis (PCA) cannot yield mutually independent scalar variables. With the two proposed transformations, a highly negatively correlated neutral vector can be transformed into a set of mutually independent scalar variables with the same degrees of freedom. We also evaluate the decorrelation performance for vectors generated from a single Dirichlet distribution and from a mixture of Dirichlet distributions. Mutual independence is verified with the distance correlation measure. The advantages of the proposed decorrelation strategies are studied in depth and demonstrated with synthesized data and practical application evaluations.
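
    The serial transformation is illustrated below for the Dirichlet case using the classic stick-breaking map implied by neutrality (a hedged sketch: the paper's exact construction may differ, and plain Pearson correlation is used here in place of distance correlation).

        # Serial nonlinear transformation of a Dirichlet (neutral) vector:
        #   u_1 = x_1,   u_k = x_k / (1 - x_1 - ... - x_{k-1}),  k = 2, ..., K-1.
        # For Dirichlet samples the u_k are mutually independent Beta variables,
        # so the strong negative correlation of the original components vanishes.
        import numpy as np

        rng = np.random.default_rng(1)
        alpha = np.array([2.0, 5.0, 3.0, 4.0])
        x = rng.dirichlet(alpha, size=20_000)            # rows sum to 1, K = 4

        remaining = 1.0 - np.cumsum(x[:, :-1], axis=1)   # 1 - x_1 - ... - x_k
        u = x[:, 1:-1] / remaining[:, :-1]               # u_k for k = 2 .. K-1
        u = np.column_stack([x[:, 0], u])                # prepend u_1 = x_1

        print(np.corrcoef(x[:, :-1], rowvar=False).round(2))  # negative off-diagonals
        print(np.corrcoef(u, rowvar=False).round(2))          # approximately identity

    The map is invertible (each x_k is recovered by re-multiplying the remaining stick lengths), which is what makes it usable as a decorrelation step rather than a lossy projection.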