
    Fast stochastic simulation of biochemical reaction systems by alternative formulations of the Chemical Langevin Equation

    The Chemical Langevin Equation (CLE), which is a stochastic differential equation (SDE) driven by a multidimensional Wiener process, acts as a bridge between the discrete Stochastic Simulation Algorithm and the deterministic reaction rate equation when simulating (bio)chemical kinetics. The CLE model is valid in the regime where molecular populations are abundant enough to assume their concentrations change continuously, but stochastic fluctuations still play a major role. The contribution of this work is the observation, which we explore in detail, that the CLE is not a single equation but a parametric family of equations, all of which give the same finite-dimensional distribution of the variables. On the theoretical side, we prove that as many Wiener processes are sufficient to formulate the CLE as there are independent variables in the equation. On the practical side, we show that when there are m1 pairs of reversible reactions and m2 irreversible reactions, only m1 + m2 Wiener processes are required in the formulation of the CLE, whereas the standard approach uses 2m1 + m2. We illustrate our findings by considering alternative formulations of the CLE for a HERG ion channel model and the Goldbeter–Koshland switch. We show that there are considerable computational savings when using our insights.
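
    As a rough, hypothetical illustration of the reversible-pair reduction described above (not the authors' code), the sketch below simulates a single reversible reaction A <-> B with assumed rate constants via Euler-Maruyama stepping, and shows how the pair's two Wiener increments can be replaced by one with the same distribution.

```python
# Minimal sketch: Euler-Maruyama steps for the CLE of a reversible pair A <-> B.
# Rate constants, copy numbers, and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
k_f, k_r = 1.0, 0.5             # assumed forward / reverse rate constants
x = np.array([1000.0, 500.0])   # copy numbers of A and B
nu = np.array([-1.0, 1.0])      # stoichiometry of the forward reaction A -> B
dt = 1e-3

def cle_step_standard(x):
    """Standard CLE: one Wiener increment per reaction channel (two here)."""
    a_f, a_r = k_f * x[0], k_r * x[1]                 # propensities
    dW = rng.normal(0.0, np.sqrt(dt), size=2)
    return x + nu * (a_f - a_r) * dt + nu * (np.sqrt(a_f) * dW[0] - np.sqrt(a_r) * dW[1])

def cle_step_reduced(x):
    """Alternative formulation: the reversible pair shares one Wiener increment,
    since sqrt(a_f)*dW1 - sqrt(a_r)*dW2 and sqrt(a_f + a_r)*dW have the same
    (Gaussian, zero-mean) distribution."""
    a_f, a_r = k_f * x[0], k_r * x[1]
    dW = rng.normal(0.0, np.sqrt(dt))
    return x + nu * (a_f - a_r) * dt + nu * np.sqrt(a_f + a_r) * dW

for _ in range(1000):
    x = cle_step_reduced(x)
print(x)
```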

    Tailoring a coherent control solution landscape by linear transforms of spectral phase basis

    Finding an optimal phase pattern in a multidimensional solution landscape becomes easier and faster if local optima are suppressed and contour lines are tailored towards closed convex patterns. Using wideband second harmonic generation as a coherent control test case, we show that a linear combination of spectral phase basis functions can result in such improvements and also in separable phase terms, each of which can be found independently. The improved shapes are attributed to a suppressed nonlinear shear, which changes the relative orientation of contour lines. The first-order approximation of the process shows a simple relation between input and output phase profiles, useful for pulse shaping at ultraviolet wavelengths.
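
    A purely illustrative sketch of the underlying idea, with assumed ingredients that are not taken from the paper (a polynomial phase basis, an arbitrary transform T, and a toy wideband-SHG objective): the spectral phase is a linear combination of basis functions, and a linear transform of that basis re-parametrizes the same solution landscape.

```python
# Toy sketch: spectral phase as a linear combination of basis functions, and the
# same objective evaluated through a linearly transformed basis. All quantities
# (basis choice, T, flat spectrum) are illustrative assumptions.
import numpy as np

omega = np.linspace(-1.0, 1.0, 256)                    # normalized detuning
basis = np.vstack([omega**k for k in range(2, 5)])     # assumed polynomial phase basis
T = np.array([[1.0, 0.3, 0.0],
              [0.0, 1.0, 0.2],
              [0.0, 0.0, 1.0]])                        # example linear basis transform

def phase(coeffs, B):
    """Spectral phase as a linear combination of the rows of B."""
    return coeffs @ B

def shg_yield(phi):
    """Toy wideband-SHG objective: the SHG field is the autoconvolution of E(omega)."""
    E = np.exp(1j * phi)                               # flat spectral amplitude assumed
    return float(np.sum(np.abs(np.convolve(E, E, mode="same"))**2))

c = np.array([0.5, -0.2, 0.1])
print(shg_yield(phase(c, basis)))        # landscape value in the original coefficients
print(shg_yield(phase(c, T @ basis)))    # same coefficients seen through the new basis
```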

    Bayesian Learning of Coupled Biogeochemical-Physical Models

    Predictive dynamical models for marine ecosystems are used for a variety of needs. Due to sparse measurements and limited understanding of the myriad of ocean processes, there is, however, significant uncertainty: in the parameter values, in the functional forms with their diverse parameterizations, in the level of complexity needed, and thus in the state fields. We develop a Bayesian model learning methodology that allows interpolation in the space of candidate models and discovery of new models from noisy, sparse, and indirect observations, all while estimating state fields and parameter values, as well as the joint PDFs of all learned quantities. We address the challenges of high-dimensional and multidisciplinary dynamics governed by PDEs by using state augmentation and the computationally efficient GMM-DO filter. Our innovations include stochastic formulation and complexity parameters that unify candidate models into a single general model, as well as stochastic expansion parameters within piecewise function approximations that generate dense candidate model spaces. These innovations allow handling many compatible and embedded candidate models, possibly none of which are accurate, and learning elusive unknown functional forms. Our new methodology is generalizable, interpretable, and extrapolates out of the space of models to discover new ones. We perform a series of twin experiments based on flows past a ridge coupled with three-to-five component ecosystem models, including flows with chaotic advection. The probabilities of known, uncertain, and unknown model formulations, and of state fields and parameters, are updated jointly using Bayes' law. Non-Gaussian statistics, ambiguity, and biases are captured. The parameter values and model formulations that best explain the data are identified. When observations are sufficiently informative, model complexity and functions are discovered. Comment: 45 pages; 18 figures; 2 tables.
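
    A minimal sketch of the Bayes'-law update over candidate model formulations that the abstract describes, stripped of the PDE dynamics and the GMM-DO filter; the Gaussian observation model and all numbers below are illustrative assumptions.

```python
# Minimal sketch: discrete Bayesian update of candidate-model probabilities from a
# single noisy observation. Forecasts, noise level, and priors are illustrative.
import numpy as np

prior = np.array([0.4, 0.3, 0.3])        # prior P(model k)
forecasts = np.array([1.0, 1.8, 2.5])    # each candidate model's predicted observable
obs, obs_std = 2.0, 0.5                  # noisy, indirect observation (assumed)

# Gaussian likelihood of the observation under each candidate model
lik = np.exp(-0.5 * ((obs - forecasts) / obs_std) ** 2) / (obs_std * np.sqrt(2 * np.pi))

posterior = prior * lik
posterior /= posterior.sum()             # Bayes' law: P(model | obs)
print(posterior)                         # formulations that best explain the data gain weight
```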

    Parametric and Nonparametric Volatility Measurement

    Volatility has been one of the most active areas of research in empirical finance and time series econometrics during the past decade. This chapter provides a unified continuous-time, frictionless, no-arbitrage framework for systematically categorizing the various volatility concepts, measurement procedures, and modeling procedures. We define three different volatility concepts: (i) the notional volatility corresponding to the ex-post sample-path return variability over a fixed time interval, (ii) the ex-ante expected volatility over a fixed time interval, and (iii) the instantaneous volatility corresponding to the strength of the volatility process at a point in time. The parametric procedures rely on explicit functional form assumptions regarding the expected and/or instantaneous volatility. In the discrete-time ARCH class of models, the expectations are formulated in terms of directly observable variables, while the discrete- and continuous-time stochastic volatility models involve latent state variable(s). The nonparametric procedures are generally free from such functional form assumptions and hence afford estimates of notional volatility that are flexible yet consistent (as the sampling frequency of the underlying returns increases). The nonparametric procedures include ARCH filters and smoothers designed to measure the volatility over infinitesimally short horizons, as well as the recently-popularized realized volatility measures for (non-trivial) fixed-length time intervals.
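
    A minimal sketch of the nonparametric realized volatility measure mentioned above, using a simulated intraday return series (the sampling scheme and numbers are illustrative): squared high-frequency returns are summed over a fixed interval to estimate the notional volatility.

```python
# Minimal sketch: realized variance as the sum of squared high-frequency returns
# over a fixed interval. The intraday returns are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 390                                                # e.g. one-minute returns in a trading day
sigma_daily = 0.01                                     # assumed "true" daily volatility
r = rng.normal(0.0, sigma_daily / np.sqrt(n), size=n)  # intraday log returns

realized_variance = np.sum(r**2)                       # model-free estimate of notional variance
realized_volatility = np.sqrt(realized_variance)
print(realized_volatility)                             # approaches sigma_daily as n grows
```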

    Portfolio efficiency and discount factor bounds with conditioning information: a unified approach

    In this paper, we develop a unified framework for the study of mean-variance efficiency and discount factor bounds in the presence of conditioning information. We extend the framework of Hansen and Richard (1987) to obtain new characterizations of the efficient portfolio frontier and variance bounds on discount factors, as functions of the conditioning information. We introduce a covariance-orthogonal representation of the asset return space, which allows us to derive several new results and provide a portfolio-based interpretation of existing results. Our analysis is inspired by, and extends, the recent work of Ferson and Siegel (2001, 2002) and Bekaert and Liu (2004). Our results have several important applications in empirical asset pricing, such as the construction of portfolio-based tests of asset pricing models, conditional measures of portfolio performance, and tests of return predictability.
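
    For orientation, a sketch of the classical unconditional variance bound on discount factors in the spirit of Hansen and Jagannathan; the conditioning-information refinements developed in the paper are omitted, and the return data below are simulated for illustration.

```python
# Minimal sketch: lower bound on Var(m) for any discount factor m with E[m] = v
# that prices a set of gross returns, E[m R] = 1. Returns are simulated.
import numpy as np

rng = np.random.default_rng(2)
T_obs, n_assets = 1000, 3
R = 1.02 + 0.01 * rng.standard_normal((T_obs, n_assets))   # illustrative gross returns

mu = R.mean(axis=0)
Sigma = np.cov(R, rowvar=False)

def variance_bound(v):
    """The minimum-variance discount factor with mean v is
    m* = v + (1 - v*mu)' Sigma^{-1} (R - mu), so Var(m) >= (1 - v*mu)' Sigma^{-1} (1 - v*mu)."""
    e = np.ones(n_assets) - v * mu
    return float(e @ np.linalg.solve(Sigma, e))

for v in (0.97, 0.98, 0.99):
    print(v, variance_bound(v))
```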

    Macroeconomics and the reality of mixed frequency data

    Many time series are sampled at different frequencies. When we study co-movements between such series, we usually analyze the joint process sampled at a common low frequency. This risks mis-specifying the co-movements and hence the analysis of impulse response functions, a commonly used tool for economic policy analysis. We introduce a class of mixed frequency VAR models that allows us to measure the impact of high frequency data on low frequency data and vice versa. Our approach does not rely on latent process or shock representations; as a consequence, the mixed frequency VAR is an alternative to the state space models commonly used for mixed frequency data. State space models are parameter-driven, whereas mixed frequency VAR models are observation-driven: they are formulated exclusively in terms of observable data, involve no latent processes or shocks, and thus avoid the need for measurement equations, filtering, and so on. We also propose various parsimonious parameterizations, in part inspired by recent work on MIDAS regressions, and explicitly characterize the mis-specification of a traditional common low frequency VAR and its implied mis-specified impulse response functions. The class of mixed frequency VAR models can also characterize the timing of information releases for a mixture of sampling frequencies and the real-time updating of predictions caused by the flow of high frequency information. Various estimation procedures for mixed frequency VAR models are proposed, both classical and Bayesian. Numerical and empirical examples quantify the consequences of ignoring mixed frequency data.
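
    A minimal sketch of the observation-driven stacking idea, with assumed data (one monthly and one quarterly series) and a plain least-squares VAR(1); the parsimonious MIDAS-inspired parameterizations discussed in the paper are not shown.

```python
# Minimal sketch: stack the three monthly observations of each quarter together with
# the quarterly observation, then fit a VAR(1) at the quarterly frequency by OLS.
# No latent states or measurement equations are involved. Data are simulated.
import numpy as np

rng = np.random.default_rng(3)
n_quarters = 200
x_monthly = rng.standard_normal(3 * n_quarters)    # high-frequency (monthly) observable
y_quarterly = rng.standard_normal(n_quarters)      # low-frequency (quarterly) observable

# Stacked observation vector per quarter: [x_m1, x_m2, x_m3, y_q]
Z = np.column_stack([x_monthly[0::3], x_monthly[1::3], x_monthly[2::3], y_quarterly])

# VAR(1): Z_t = c + A Z_{t-1} + e_t, estimated equation by equation with OLS
Y, X = Z[1:], np.column_stack([np.ones(n_quarters - 1), Z[:-1]])
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
c, A = B[0], B[1:].T
print(A.shape)    # 4x4 coefficient matrix of the mixed frequency VAR
```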

    An analysis of parton distributions in a pion with Bézier parametrizations

    We explore the role of parametrizations for nonperturbative QCD functions in global analyses, with a specific application to extending a phenomenological analysis of the parton distribution functions (PDFs) in the charged pion realized in the xFitter fitting framework. The parametrization dependence of PDFs in our pion fits substantially enlarges the uncertainties from the experimental sources estimated in previous analyses. We systematically explore the parametrization dependence by employing a novel technique that automates the generation of polynomial parametrizations for PDFs using Bézier curves. This technique is implemented in a C++ module, Fantômas, that is included in the xFitter program. Our analysis reveals that the sea and gluon distributions in the pion are not well disentangled, even when considering measurements in leading-neutron deep inelastic scattering. For example, pion PDF solutions with a vanishing gluon and a large quark sea are still experimentally allowed, which elevates the importance of ongoing lattice and nonperturbative QCD calculations, together with the planned pion scattering experiments, for conclusive studies of the pion structure. Comment: 22 pages, 15 figures.
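
    A minimal sketch of a Bézier-style PDF parametrization in the spirit of the Fantômas approach described above, written in Python rather than the C++ of the actual module: x f(x) is expressed as endpoint factors times a Bernstein-polynomial (Bézier) curve. The degree, exponents, and control-point values below are illustrative assumptions, not fit results.

```python
# Minimal sketch: x*f(x) parametrized as x^a * (1-x)^b times a Bezier curve,
# i.e. a polynomial written in the Bernstein basis with free control points.
# Exponents and control points are illustrative, not fitted values.
import numpy as np
from math import comb

def bernstein(n, k, x):
    """k-th Bernstein basis polynomial of degree n on [0, 1]."""
    return comb(n, k) * x**k * (1.0 - x)**(n - k)

def xf(x, a, b, control_points):
    """x*f(x) = x^a * (1-x)^b * Bezier(x), with the Bezier curve set by its control points."""
    n = len(control_points) - 1
    bez = sum(c * bernstein(n, k, x) for k, c in enumerate(control_points))
    return x**a * (1.0 - x)**b * bez

x = np.linspace(1e-3, 1.0 - 1e-3, 200)
print(xf(x, a=0.7, b=1.1, control_points=[1.0, 0.8, 1.2, 0.9])[:5])
```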

    Volatility forecasting

    Volatility has been one of the most active and successful areas of research in time series econometrics and economic forecasting in recent decades. This chapter provides a selective survey of the most important theoretical developments and empirical insights to emerge from this burgeoning literature, with a distinct focus on forecasting applications. Volatility is inherently latent, and Section 1 begins with a brief intuitive account of various key volatility concepts. Section 2 then discusses a series of different economic situations in which volatility plays a crucial role, ranging from the use of volatility forecasts in portfolio allocation to density forecasting in risk management. Sections 3, 4 and 5 present a variety of alternative procedures for univariate volatility modeling and forecasting based on the GARCH, stochastic volatility and realized volatility paradigms, respectively. Section 6 extends the discussion to the multivariate problem of forecasting conditional covariances and correlations, and Section 7 discusses volatility forecast evaluation methods in both univariate and multivariate cases. Section 8 concludes briefly. JEL Classification: C10, C53, G1.
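
    As a concrete anchor for one of the univariate paradigms surveyed in the chapter, a minimal sketch of the GARCH(1,1) recursion and its multi-step-ahead variance forecast; the parameter and state values are illustrative, not estimates.

```python
# Minimal sketch: GARCH(1,1) one-step update and its h-step-ahead variance forecasts,
# which mean-revert to the unconditional variance. All numbers are illustrative.
omega, alpha, beta = 0.05, 0.08, 0.90   # assumed GARCH(1,1) parameters
r_t, sigma2_t = -1.2, 1.0               # today's return and conditional variance

# One-step-ahead conditional variance
sigma2_next = omega + alpha * r_t**2 + beta * sigma2_t

# h-step-ahead forecasts: uncond + (alpha + beta)^(h-1) * (sigma2_next - uncond)
uncond = omega / (1.0 - alpha - beta)
forecasts = [uncond + (alpha + beta) ** (h - 1) * (sigma2_next - uncond) for h in range(1, 11)]
print(sigma2_next, forecasts[-1])
```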