
    An Unbiased Itô Type Stochastic Representation for Transport PDEs: A Toy Example


    Importance sampling for McKean-Vlasov SDEs

    This paper deals with Monte Carlo methods for evaluating expectations of functionals of solutions to McKean-Vlasov Stochastic Differential Equations (MV-SDEs) with drifts of super-linear growth. We assume that the MV-SDE is approximated in the standard manner by an interacting particle system and propose two importance sampling (IS) techniques to reduce the variance of the resulting Monte Carlo estimator. In the \emph{complete measure change} approach, the IS measure change is applied simultaneously in the coefficients and in the expectation to be evaluated. In the \emph{decoupling} approach, we first estimate the law of the solution in a set of simulations without measure change, and then perform a second set of simulations under the importance sampling measure, using the approximate solution law computed in the first step. For both approaches, we use large deviations techniques to identify an optimisation problem for the candidate measure change. The decoupling approach yields a far simpler optimisation problem than the complete measure change; however, we can reduce the complexity of the complete measure change through some symmetry arguments. We implement both algorithms for two examples coming from the Kuramoto model from statistical physics and show that the variance of the importance sampling schemes is up to three orders of magnitude smaller than that of standard Monte Carlo. The computational cost is approximately the same as for standard Monte Carlo for the complete measure change and only increases by a factor of 2--3 for the decoupled approach. We also estimate the propagation of chaos error and find that it is dominated by the statistical error by one order of magnitude. Comment: 29 pages, 2 tables
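The interacting particle approximation mentioned above can be sketched in a few lines. The following is a minimal baseline simulation (standard Monte Carlo only, with no importance sampling, which is the paper's actual contribution) for a Kuramoto-type MV-SDE; all parameter values are illustrative:

```python
import numpy as np

def kuramoto_particles(n_particles=200, n_steps=500, T=1.0,
                       coupling=1.0, sigma=0.3, seed=0):
    """Euler-Maruyama scheme for the interacting particle system
    approximating a Kuramoto-type McKean-Vlasov SDE:
        dX^i_t = (K/N) * sum_j sin(X^j_t - X^i_t) dt + sigma dW^i_t,
    where the empirical measure of the N particles stands in for
    the (unknown) law of the solution."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = rng.uniform(-np.pi, np.pi, n_particles)  # initial phases
    for _ in range(n_steps):
        # mean-field drift: average pairwise interaction over all particles
        drift = coupling * np.mean(np.sin(x[None, :] - x[:, None]), axis=1)
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
    return x

phases = kuramoto_particles()
# order parameter r in [0, 1]: r near 1 indicates synchronisation
r = np.abs(np.mean(np.exp(1j * phases)))
```

A functional of `phases` would then be averaged over many such runs; the paper's measure changes are applied on top of exactly this kind of particle system.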

    Properties and advances of probabilistic and statistical algorithms with applications in finance

    This thesis is concerned with the construction and enhancement of algorithms involving probability and statistics. The main motivation comes from problems that appear in finance and, more generally, in applied science. We consider three distinct areas, namely credit risk modelling, numerics for McKean-Vlasov stochastic differential equations, and stochastic representations of Partial Differential Equations (PDEs); the thesis is therefore split into three parts. Firstly, we consider the problem of estimating a continuous time Markov chain (CTMC) generator from discrete time observations, which is essentially a missing data problem in statistics. These generators give rise to transition probabilities (in particular, probabilities of default) over any time horizon, hence the estimation of such generators is a key problem in banking, where the regulator requires banks to calculate risk over different time horizons. Several algorithms have been proposed for this problem; however, through a combination of theoretical and numerical results, we show the Expectation Maximisation (EM) algorithm to be the superior choice. Furthermore, we derive closed-form expressions for the Wald confidence intervals (errors) associated with the EM estimates. Previous attempts to calculate such intervals relied on numerical schemes which were slower and less stable. We further provide a closed-form expression (via the Delta method) to transfer these errors to the level of the transition probabilities, which are more intuitive. Although one can establish more precise mathematical results under the Markov assumption, there is empirical evidence suggesting this assumption is not valid. We finish this part by carrying out empirical research on non-Markov phenomena and propose a model to capture the so-called rating momentum. This model has many appealing features and is a natural extension of the Markov set-up.
The second part is based on McKean-Vlasov Stochastic Differential Equations (MV-SDEs). These Stochastic Differential Equations (SDEs) arise from taking the limit as the number of weakly interacting particles (e.g. gas particles) tends to infinity; the resulting SDE has coefficients which can depend on its own law, making it theoretically more involved. Although MV-SDEs originate in statistical physics, there has recently been an explosion of interest in using them in models for economics. We first derive an explicit approximation scheme for MV-SDEs with one-sided Lipschitz growth in the drift. Such a condition was observed to be an issue for standard SDEs and required more sophisticated schemes; there are implicit and explicit schemes one can use, and we develop both types in the setting of MV-SDEs. Another main issue for MV-SDEs is that, due to the dependency on their own law, they are extremely expensive to simulate compared to standard SDEs, hence techniques to improve the computational cost are in demand. The final result of this part is an importance sampling algorithm for MV-SDEs, where our measure change is obtained through the theory of large deviation principles. Although importance sampling results for standard SDEs are reasonably well understood, there are several difficulties one must overcome to apply a good importance sampling change of measure in this setting. Importance sampling is used here as a variance reduction technique, although our results hint that one may also be able to use it to reduce the propagation of chaos error. Finally, we consider stochastic algorithms to solve PDEs. It is known that one can achieve numerical advantages by using probabilistic methods to solve PDEs, through the so-called probabilistic domain decomposition method. The main result of this part is an unbiased stochastic representation for a first order PDE, based on the theory of branching diffusions and regime switching.
This is a very interesting result, since previously (Itô-based) stochastic representations only applied to second order PDEs. There are multiple issues one must overcome in order to obtain an algorithm that is numerically stable and solves such a PDE. We conclude by showing the algorithm's potential on a more general first order PDE.

    Robust and Consistent Estimation of Generators in Credit Risk

    Bond rating Transition Probability Matrices (TPMs) are built over a one-year time-frame, yet for many practical purposes, such as the assessment of risk in portfolios or the computation of banking capital requirements (e.g. the new IFRS 9 regulation), one needs to compute the TPM and probabilities of default over a smaller time interval. In the context of continuous time Markov chains (CTMCs), several deterministic and statistical algorithms have been proposed to estimate the generator matrix. We focus on the Expectation-Maximization (EM) algorithm by Bladt and Sorensen (2005) for a CTMC with an absorbing state. This work's contribution is threefold. Firstly, we provide directly computable closed-form expressions for quantities appearing in the EM algorithm and the associated information matrix, allowing confidence intervals to be approximated easily; previously, these quantities had to be estimated numerically, and the closed forms yield considerable computational speedups. Secondly, we prove convergence to a single set of parameters under very weak conditions (for the TPM problem). Finally, we provide a numerical benchmark of our results against other known algorithms, in particular on several problems related to credit risk. The EM algorithm we propose, equipped with the new formulas (and error criteria), outperforms other known algorithms in several metrics, in particular with much less overestimation of probabilities of default in higher ratings than other statistical algorithms. Comment: 29 pages, 7 figures, 9 tables
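The horizon-shortening the abstract refers to reduces, once a generator has been estimated, to a matrix exponential. A minimal sketch (with a purely hypothetical 3-state generator, omitting the EM estimation step itself):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical annualised 3-state generator Q
# (states: investment grade, speculative grade, default).
# Off-diagonal entries are non-negative rates, rows sum to zero,
# and the last (default) state is absorbing.
Q = np.array([[-0.10,  0.08,  0.02],
              [ 0.05, -0.25,  0.20],
              [ 0.00,  0.00,  0.00]])

def tpm(Q, t):
    """Transition probability matrix over horizon t: P(t) = exp(Q t)."""
    return expm(Q * t)

P_quarter = tpm(Q, 0.25)  # quarterly TPM from the annualised generator
P_year = tpm(Q, 1.0)      # consistent with composing P_quarter four times
```

The last column of `P_quarter` gives the probabilities of default over a quarter, which is exactly the kind of sub-annual quantity IFRS 9-style calculations require.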

    The alpha-synuclein 5'untranslated region targeted translation blockers: anti-alpha synuclein efficacy of cardiac glycosides and Posiphen

    Increased brain α-synuclein (SNCA) protein expression resulting from gene duplication and triplication can cause a familial form of Parkinson's disease (PD). Dopaminergic neurons exhibit elevated iron levels that can accelerate toxic SNCA fibril formation. Examinations of human post mortem brain have shown that, while mRNA levels for SNCA in PD are either unchanged or decreased with respect to healthy controls, higher levels of insoluble protein occur during PD progression. We show evidence that SNCA can be regulated via the 5'untranslated region (5'UTR) of its transcript, which we modeled to fold into a unique RNA stem loop with a CAGUGN apical loop similar to that encoded in the canonical iron-responsive element (IRE) of L- and H-ferritin mRNAs. The SNCA IRE-like stem loop spans the two exons that encode its 5'UTR, whereas, by contrast, the H-ferritin 5'UTR is encoded by a single first exon. We screened a library of 720 natural products (NPs) for their capacity to inhibit SNCA 5'UTR-driven luciferase expression. This screen identified several classes of NPs, including the plant cardiac glycosides and mycophenolic acid (an immunosuppressant and Fe chelator); additionally, Posiphen was identified to repress SNCA 5'UTR-conferred translation. Western blotting confirmed that Posiphen and the cardiac glycoside strophanthidine selectively blocked SNCA expression (~1 μM IC(50)) in neural cells. For Posiphen this inhibition was accelerated in the presence of iron, thus providing a known APP-directed lead with potential for use as a SNCA blocker for PD therapy. These are candidate drugs with the potential to limit toxic SNCA expression in the brains of PD patients and animal models in vivo.

    Right Here Right Now (RHRN) pilot study: testing a method of near-real-time data collection on the social determinants of health

    Background: Informing policy and practice with up-to-date evidence on the social determinants of health is an ongoing challenge. One limitation of traditional approaches is the time-lag between identification of a policy or practice need and availability of results. The Right Here Right Now (RHRN) study piloted a near-real-time data-collection process to investigate whether this gap could be bridged. Methods: A website was developed to facilitate the issue of questions, data capture and presentation of findings. Respondents were recruited using two distinct methods – a clustered random probability sample and a quota sample from street stalls. Weekly four-part questions were issued by email, Short Messaging Service (SMS or text) or post. Quantitative data were descriptively summarised, qualitative data thematically analysed, and a summary report circulated two weeks after each question was issued. The pilot spanned 26 weeks. Results: It proved possible to recruit and retain a panel of respondents providing quantitative and qualitative data on a range of issues. The samples were subject to similar recruitment and response biases as more traditional data-collection approaches. Participants valued the potential to influence change, and stakeholders were enthusiastic about the findings generated, despite reservations about the lack of sample representativeness. Stakeholders acknowledged that decision-making processes are not flexible enough to respond to weekly evidence. Conclusion: RHRN produced a process for collecting near-real-time data on policy-relevant topics, although obtaining and maintaining representative samples was problematic. Adaptations were identified to inform a more sustainable model of near-real-time data collection and dissemination in the future.

    Hybrid PDE solver for data-driven problems and modern Branching

    The numerical solution of large-scale PDEs, such as those occurring in data-driven applications, unavoidably requires powerful parallel computers and tailored parallel algorithms to make the best possible use of them. In fact, considerations about the parallelization and scalability of realistic problems are often critical enough to warrant acknowledgement in the modelling phase. The purpose of this paper is to spread awareness of the Probabilistic Domain Decomposition (PDD) method, a fresh approach to the parallelization of PDEs with excellent scalability properties. The idea exploits the stochastic representation of the PDE and its approximation via Monte Carlo, in combination with deterministic high-performance PDE solvers. We describe the ingredients of PDD and its applicability in the scope of data science. In particular, we highlight recent advances in stochastic representations for nonlinear PDEs using branching diffusions, which have significantly broadened the scope of PDD. We envision this work as a dictionary giving large-scale PDE practitioners references on the very latest algorithms and techniques of a non-standard, yet highly parallelizable, methodology at the interface of deterministic and probabilistic numerical methods. We close this work with an invitation to the fully nonlinear case and open research questions. Comment: 23 pages, 7 figures; final SMUR version; to appear in the European Journal of Applied Mathematics (EJAM).
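The stochastic representation that PDD builds on is easiest to see in the linear second-order case, where the Feynman-Kac formula gives the PDE solution at a single point as an expectation over Brownian paths; this pointwise, embarrassingly parallel evaluation is what decouples the subdomains. A toy sketch (the branching-diffusion machinery for nonlinear PDEs is substantially more involved):

```python
import numpy as np

def heat_mc(g, x, t, n_paths=200_000, seed=0):
    """Monte Carlo evaluation of the Feynman-Kac representation
    u(t, x) = E[g(x + W_t)] for the heat equation
    u_t = (1/2) u_xx with initial condition u(0, .) = g."""
    rng = np.random.default_rng(seed)
    w = np.sqrt(t) * rng.standard_normal(n_paths)  # W_t ~ N(0, t)
    return np.mean(g(x + w))

# For g(x) = x^2 the exact solution is u(t, x) = x^2 + t,
# so u below should be close to 1.5.
u = heat_mc(lambda y: y ** 2, x=1.0, t=0.5)
```

In PDD, values like `u` computed on artificial interfaces supply boundary data, after which each subdomain is solved independently by a deterministic high-performance solver.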

    The geometrical pattern of the evolution of cooperation in the Spatial Prisoner's Dilemma: an intra-group model

    The Prisoner's Dilemma (PD) deals with the cooperation/defection conflict between two agents. Each agent is represented by a cell of an $L \times L$ square lattice. The agents are initially randomly distributed according to a certain proportion $\rho_c(0)$ of cooperators. Agents have no memory of previous behaviors; each plays the PD with its eight nearest neighbors and then copies, for the next generation, the behavior of whoever obtained the greatest payoff. This system shows that, once the conflict is established, cooperation among agents may emerge even for reasonably high defection temptation values. Contrary to previous studies, which treat mean inter-group interaction, here a model where agents are not allowed to self-interact, representing intra-group interaction, is proposed. This leads to short-time and asymptotic behaviors similar to those found when self-interaction is considered. Nevertheless, the intermediate behavior is different, with no possible data collapse since oscillations are present. Also, the fluctuations are much smaller in the intra-group model. The geometrical configurations of cooperative clusters are distinct and explain the $\rho_c(t)$ differences between the inter- and intra-group models. The boundary conditions do not affect the results. Comment: 4 pages, 4 figures
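A minimal synchronous-update sketch of the spatial PD without self-interaction, as in the intra-group model above (the payoff convention, temptation value and lattice size are illustrative choices, not the paper's exact setup):

```python
import numpy as np

# the 8 nearest-neighbour offsets on the lattice (torus boundary)
SHIFTS = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]

def payoffs(S, b):
    """Payoff of every cell from playing the PD with its 8 neighbours.
    S is a 0/1 lattice (1 = cooperator).  A cooperator earns 1 per
    cooperating neighbour; a defector earns the temptation b per
    cooperating neighbour; there is no self-interaction."""
    coop_nbrs = sum(np.roll(np.roll(S, di, 0), dj, 1) for di, dj in SHIFTS)
    return np.where(S == 1, coop_nbrs, b * coop_nbrs)

def update(S, b):
    """Each cell copies the strategy of its best-scoring neighbour,
    keeping its own strategy on ties."""
    P = payoffs(S, b)
    best_p, best_s = P.copy(), S.copy()
    for di, dj in SHIFTS:
        nbr_p = np.roll(np.roll(P, di, 0), dj, 1)
        nbr_s = np.roll(np.roll(S, di, 0), dj, 1)
        take = nbr_p > best_p
        best_p = np.where(take, nbr_p, best_p)
        best_s = np.where(take, nbr_s, best_s)
    return best_s

rng = np.random.default_rng(1)
S = (rng.random((50, 50)) < 0.6).astype(int)  # rho_c(0) = 0.6
for _ in range(20):
    S = update(S, b=1.6)
rho_c = S.mean()  # cooperator density after 20 generations
```

Tracking `rho_c` over generations gives the $\rho_c(t)$ curves whose inter- vs intra-group differences the paper analyses.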

    Diversity and distribution of epiphytic bryophytes on Bramley's Seedling trees in East of England apple orchards.

    © 2015 The Authors. Published by Elsevier B.V. This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Epiphytic bryophytes on apple trees were investigated in relation to a selection of tree characteristics. Management of orchard trees for fruit production affects the habitats available for colonisation and growth of epiphytes on trunks and branches. Bryophytes recorded on Bramley's Seedling apple trees in orchards in Hertfordshire and Cambridgeshire showed a high level of similarity in species composition between the orchards. The similarity between orchards was, however, much reduced when relative species cover on the trees was taken into account. Twenty-three species were recorded on the 71 trees sampled for detailed investigation. Tree structure, as determined by management, explained about 10% of the observed variation in bryophyte cover. Within that, trunk girth and distance to nearest neighbouring orchard trees were the most important factors. This information is of value to orchard managers aiming to become more proactive in managing their habitats for the benefit of biodiversity.