
    A Flexible Joint Longitudinal-Survival Model for Analysis of End-Stage Renal Disease Data

    We propose a flexible joint longitudinal-survival framework to examine the association between longitudinally collected biomarkers and a time-to-event endpoint. More specifically, we use our method to analyze the survival outcomes of end-stage renal disease patients with time-varying serum albumin measurements. Our proposed method is robust to common parametric assumptions in that it avoids explicit distributional assumptions on the longitudinal measures and allows for a subject-specific baseline hazard in the survival component. Fully joint estimation is performed to account for the uncertainty in the estimated longitudinal biomarkers included in the survival model.
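
    As a rough illustration of the kind of model described, a generic joint longitudinal-survival specification (a sketch in standard joint-modelling notation of our own, not the authors' exact formulation) links the two components through the current biomarker value:

        A_i(t) = m_i(t) + \varepsilon_{it}, \qquad
        h_i(t) = h_{0i}(t)\,\exp\{\beta^\top x_i + \alpha\, m_i(t)\},

    where $A_i(t)$ is subject $i$'s serum albumin at time $t$, $m_i(\cdot)$ its underlying trajectory (left free of explicit distributional assumptions), $h_{0i}$ a subject-specific baseline hazard, $x_i$ baseline covariates, and $\alpha$ the longitudinal-survival association of interest; fitting both parts jointly lets the uncertainty in $m_i(t)$ propagate into the survival estimates.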

    Stochastic modelling of regional archaeomagnetic series

    We report a new method to infer continuous time series of the declination, inclination and intensity of the magnetic field from archaeomagnetic data. Adopting a Bayesian perspective, we need to specify a priori knowledge about the time evolution of the magnetic field. It consists of a time correlation function that we choose to be compatible with present knowledge about the geomagnetic time spectra. The results are presented as distributions of possible values for the declination, inclination or intensity. We find that the methodology can be adapted to account for the age uncertainties of archaeological artefacts, and we use Markov Chain Monte Carlo to explore the possible dates of observations. We apply the method to intensity datasets from Mari, Syria, and to intensity and directional datasets from Paris, France. Our reconstructions display more rapid variations than previous studies, and we find that the possible values of geomagnetic field elements are not necessarily normally distributed. Another output of the model is better age estimates of archaeological artefacts.
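
    Schematically, and only as a sketch of the hierarchical posterior such a Bayesian reconstruction targets (the notation is ours, not the authors'), the field time series and the artefact ages are inferred jointly:

        p(\mathbf{f}, \boldsymbol{\tau} \mid \mathbf{d}) \;\propto\; p(\mathbf{d} \mid \mathbf{f}, \boldsymbol{\tau})\, p(\mathbf{f})\, p(\boldsymbol{\tau}),

    where $\mathbf{f}$ is the continuous declination/inclination/intensity series, whose prior $p(\mathbf{f})$ carries the time correlation function chosen to match geomagnetic spectra, $\boldsymbol{\tau}$ the uncertain artefact dates with their archaeological priors, and $\mathbf{d}$ the measurements; Markov Chain Monte Carlo then explores dates and field values together, which is why the reconstructed field elements need not come out normally distributed and the dates themselves are refined.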

    A Bayesian Heteroscedastic GLM with Application to fMRI Data with Motion Spikes

    We propose a voxel-wise general linear model with autoregressive noise and heteroscedastic noise innovations (GLMH) for analyzing functional magnetic resonance imaging (fMRI) data. The model is analyzed from a Bayesian perspective and has the benefit of automatically down-weighting time points close to motion spikes in a data-driven manner. We develop a highly efficient Markov Chain Monte Carlo (MCMC) algorithm that allows for Bayesian variable selection among the regressors to model both the mean (i.e., the design matrix) and variance. This makes it possible to include a broad range of explanatory variables in both the mean and variance (e.g., time trends, activation stimuli, head motion parameters and their temporal derivatives), and to compute the posterior probability of inclusion from the MCMC output. Variable selection is also applied to the lags in the autoregressive noise process, making it possible to infer the lag order from the data simultaneously with all other model parameters. We use both simulated data and real fMRI data from OpenfMRI to illustrate the importance of proper modeling of heteroscedasticity in fMRI data analysis. Our results show that the GLMH tends to detect more brain activity, compared to its homoscedastic counterpart, by allowing the variance to change over time depending on the degree of head motion.
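
    In outline, and in generic notation rather than the paper's exact parameterization, a heteroscedastic GLM with autoregressive noise of the kind described can be written per voxel as

        y_t = \mathbf{x}_t^\top \boldsymbol{\beta} + u_t, \qquad
        u_t = \sum_{l=1}^{L} \rho_l\, u_{t-l} + \varepsilon_t, \qquad
        \varepsilon_t \sim \mathcal{N}\big(0,\ \exp(\mathbf{z}_t^\top \boldsymbol{\gamma})\big),

    where $\mathbf{x}_t$ collects the mean regressors (stimuli, trends, motion parameters and their derivatives), $\mathbf{z}_t$ the variance regressors, and variable-selection indicators on $\boldsymbol{\beta}$, $\boldsymbol{\gamma}$ and the lags $\rho_l$ yield the posterior inclusion probabilities computed from the MCMC output; time points where $\exp(\mathbf{z}_t^\top \boldsymbol{\gamma})$ is large, e.g. around motion spikes, are automatically down-weighted.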

    Lévy flights in inhomogeneous environments

    We study the long-time asymptotics of probability density functions (pdfs) of Lévy flights in different confining potentials. For that we use two models: Langevin-driven and (Lévy-Schrödinger) semigroup-driven dynamics. It turns out that the semigroup modeling provides much stronger confining properties than the standard Langevin one. Since contractive semigroups set a link between Lévy flights and fractional (pseudo-differential) Hamiltonian systems, we can use the latter to control the long-time asymptotics of the pertinent pdfs. To do so, we need to impose suitable restrictions upon the Hamiltonian and its potential. That provides verifiable criteria for an invariant pdf to be actually an asymptotic pdf of the semigroup-driven jump-type process. For computational and visualization purposes our observations are exemplified for the Cauchy driver and its response to external polynomial potentials (referring to Lévy oscillators), with respect to both dynamical mechanisms.
    Comment: Major revision
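
    For orientation, the two mechanisms being contrasted can be sketched, in generic form rather than the paper's exact equations, as

        \text{Langevin: } dX_t = -V'(X_t)\,dt + dL_t^{(\alpha)}, \qquad
        \text{semigroup: } \partial_t \Psi = -\hat{H}\,\Psi, \quad \hat{H} = |\Delta|^{\alpha/2} + \mathcal{V}(x),

    where $L_t^{(\alpha)}$ is an $\alpha$-stable driver (the Cauchy case is $\alpha = 1$) and $\hat{H}$ a fractional (pseudo-differential) Hamiltonian; the restrictions mentioned above concern $\hat{H}$ and its potential $\mathcal{V}$, and the two dynamics generally lead to different invariant and asymptotic pdfs.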

    Modeling interest rate dynamics: an infinite-dimensional approach

    We present a family of models for the term structure of interest rates which describe the interest rate curve as a stochastic process in a Hilbert space. We start by decomposing the deformations of the term structure into the variations of the short rate, the long rate and the fluctuations of the curve around its average shape. This fluctuation is then described as a solution of a stochastic evolution equation in an infinite-dimensional space. In the case where deformations are local in maturity, this equation reduces to a stochastic PDE, of which we give the simplest example. We discuss the properties of the solutions and show that they capture in a parsimonious manner the essential features of yield curve dynamics: imperfect correlation between maturities, mean reversion of interest rates and the structure of principal components of term structure deformations. Finally, we discuss calibration issues and show that the model parameters have a natural interpretation in terms of empirically observed quantities.
    Comment: Keywords: interest rates, stochastic PDE, term structure models, stochastic processes in Hilbert space. Other related works may be retrieved on http://www.eleves.ens.fr:8080/home/cont/papers.htm
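
    Concretely, though only as a schematic of the construction described (the notation is ours), the curve is split into level variables plus a fluctuation term that solves a Hilbert-space evolution equation:

        r_t(x) = s_t + (l_t - s_t)\,\big[\bar{Y}(x) + X_t(x)\big], \qquad dX_t = A X_t\,dt + \sigma\, dW_t,

    where $x$ is the time to maturity, $s_t$ and $l_t$ the short and long rates, $\bar{Y}$ the average curve shape, $X_t$ the Hilbert-space-valued fluctuation, and $A$ an operator acting in the maturity variable; when $A$ is local in $x$ (a differential operator), the second equation becomes a stochastic PDE.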

    Truncation effects in superdiffusive front propagation with Lévy flights

    A numerical and analytical study of the role of exponentially truncated Lévy flights in the superdiffusive propagation of fronts in reaction-diffusion systems is presented. The study is based on a variation of the Fisher-Kolmogorov equation where the diffusion operator is replaced by a $\lambda$-truncated fractional derivative of order $\alpha$, where $1/\lambda$ is the characteristic truncation length scale. For $\lambda = 0$ there is no truncation, and fronts exhibit exponential acceleration and algebraically decaying tails. It is shown that for $\lambda \neq 0$ this phenomenology prevails in the intermediate asymptotic regime $(\chi t)^{1/\alpha} \ll x \ll 1/\lambda$, where $\chi$ is the diffusion constant. Outside the intermediate asymptotic regime, i.e. for $x > 1/\lambda$, the tail of the front exhibits the tempered decay $\phi \sim e^{-\lambda x}/x^{(1+\alpha)}$, the acceleration is transient, and the front velocity, $v_L$, approaches the terminal speed $v_* = (\gamma - \lambda^\alpha \chi)/\lambda$ as $t \to \infty$, where it is assumed that $\gamma > \lambda^\alpha \chi$, with $\gamma$ denoting the growth rate of the reaction kinetics. However, the convergence of this process is algebraic, $v_L \sim v_* - \alpha/(\lambda t)$, which is very slow compared to the exponential convergence observed in the diffusive (Gaussian) case. An over-truncated regime, in which the characteristic truncation length scale is shorter than the length scale of the decay of the initial condition, $1/\nu$, is also identified. In this extreme regime, fronts exhibit exponential tails, $\phi \sim e^{-\nu x}$, and move at the constant velocity $v = (\gamma - \lambda^\alpha \chi)/\nu$.
    Comment: Accepted for publication in Phys. Rev. E (Feb. 2009)
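
    The governing equation is, schematically and up to the paper's precise definition of the truncated operator, a Fisher-Kolmogorov equation with tempered fractional diffusion,

        \partial_t \phi = \chi\, D_x^{\alpha,\lambda} \phi + \gamma\, \phi\,(1 - \phi),

    where $D_x^{\alpha,\lambda}$ reduces to the ordinary fractional derivative of order $\alpha$ when $\lambda = 0$; the regimes described above correspond to whether the relevant part of the front tail lies inside ($(\chi t)^{1/\alpha} \ll x \ll 1/\lambda$) or beyond ($x > 1/\lambda$) the truncation scale $1/\lambda$.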

    Extension of Wirtinger's Calculus to Reproducing Kernel Hilbert Spaces and the Complex Kernel LMS

    Over the last decade, kernel methods for nonlinear processing have been used successfully in the machine learning community. The primary mathematical tool employed in these methods is the notion of the Reproducing Kernel Hilbert Space (RKHS). However, so far the emphasis has been on batch techniques; it is only recently that online techniques have been considered in the context of adaptive signal processing tasks. Moreover, these efforts have focused only on real-valued data sequences. To the best of our knowledge, no adaptive kernel-based strategy has so far been developed for complex-valued signals. Furthermore, although real reproducing kernels are used in an increasing number of machine learning problems, complex kernels have not yet been used, despite their potential interest in applications that deal with complex signals, with communications being a typical example. In this paper, we present a general framework to attack the problem of adaptive filtering of complex signals, using either real reproducing kernels, taking advantage of a technique called complexification of real RKHSs, or complex reproducing kernels, highlighting the use of the complex Gaussian kernel. In order to derive gradients of operators that need to be defined on the associated complex RKHSs, we employ the powerful tool of Wirtinger's calculus, which has recently attracted attention in the signal processing community. To this end, the notion of Wirtinger's calculus is extended, for the first time, to include complex RKHSs and is used to derive several realizations of the Complex Kernel Least-Mean-Square (CKLMS) algorithm. Experiments verify that the CKLMS offers significant performance improvements over several linear and nonlinear algorithms when dealing with nonlinearities.
    Comment: 15 pages (double column), preprint of article accepted in IEEE Trans. Sig. Proc.
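
    As a toy illustration of the kernel LMS idea applied to complex data, the Python sketch below runs a naive kernel LMS loop with a real Gaussian kernel evaluated on complex inputs; it is a simplification for intuition only, not the paper's CKLMS realizations, which are derived with Wirtinger's calculus on complexified real RKHSs or the complex Gaussian kernel.

        import numpy as np

        def gaussian_kernel(z, w, sigma=1.0):
            # Real Gaussian kernel evaluated on complex vectors via the norm of z - w.
            return np.exp(-np.linalg.norm(z - w) ** 2 / sigma ** 2)

        def kernel_lms(inputs, desired, mu=0.5, sigma=1.0):
            # Naive kernel LMS for complex-valued data (illustrative sketch only).
            centers, coeffs, errors = [], [], []
            for x, d in zip(inputs, desired):
                # Predict with the current expansion f(x) = sum_j a_j k(c_j, x).
                y = sum(a * gaussian_kernel(c, x, sigma) for c, a in zip(centers, coeffs))
                e = d - y                 # complex a priori error
                centers.append(x)         # grow the dictionary with the new input
                coeffs.append(mu * e)     # LMS-style coefficient for the new center
                errors.append(e)
            return np.array(errors), centers, coeffs

        # Usage: identify a toy nonlinearity acting on complex inputs.
        rng = np.random.default_rng(0)
        x = rng.standard_normal((200, 2)) + 1j * rng.standard_normal((200, 2))
        d = np.tanh(x[:, 0].real) + 1j * np.tanh(x[:, 0].imag)
        err, _, _ = kernel_lms(x, d, mu=0.5, sigma=2.0)
        print("mean |error| over last 50 samples:", np.abs(err[-50:]).mean())

    In a practical adaptive-filtering setting one would also bound the dictionary size (e.g. with a novelty or coherence criterion), since this naive version adds one kernel center per sample.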