
    Integral curves of noisy vector fields and statistical problems in diffusion tensor imaging: nonparametric kernel estimation and hypotheses testing

    Let $v$ be a vector field in a bounded open set $G\subset\mathbb{R}^d$. Suppose that $v$ is observed with a random noise at random points $X_i,\ i=1,\dots,n$, that are independent and uniformly distributed in $G$. The problem is to estimate the integral curve of the differential equation $\frac{dx(t)}{dt}=v(x(t)),\ t\geq 0$, starting at a given point $x(0)=x_0\in G$, and to develop statistical tests for the hypothesis that the integral curve reaches a specified set $\Gamma\subset G$. We develop an estimation procedure based on a Nadaraya--Watson type kernel regression estimator, show the asymptotic normality of the estimated integral curve, and derive differential and integral equations for the mean and covariance function of the limit Gaussian process. This provides a method of tracking not only the integral curve but also the covariance matrix of its estimate. We also study the asymptotic distribution of the squared minimal distance from the integral curve to a smooth enough surface $\Gamma\subset G$. Building upon this, we develop testing procedures for the hypothesis that the integral curve reaches $\Gamma$. Problems of this nature are of interest in diffusion tensor imaging, a brain imaging technique based on measuring the diffusion tensor at discrete locations in the cerebral white matter, where the diffusion of water molecules is typically anisotropic. The diffusion tensor data are used to estimate the dominant orientations of the diffusion and to track white matter fibers from the initial location following these orientations. Our approach brings more rigorous statistical tools to the analysis of this problem, providing, in particular, hypothesis testing procedures that might be useful in the study of axonal connectivity of the white matter. Comment: Published at http://dx.doi.org/10.1214/009053607000000073 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
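
    For intuition, the plug-in approach the abstract describes can be sketched in a few lines of NumPy: estimate the field at any point with a Nadaraya--Watson kernel regression on the noisy observations, then integrate the estimated field from the starting point. This is a minimal sketch under assumed choices (a Gaussian kernel, a forward Euler integrator); the function names and the synthetic example are illustrative, not from the paper.

    ```python
    import numpy as np

    def nw_vector_field(x, X, V, h):
        """Nadaraya-Watson kernel estimate of the vector field at point x.

        X : (n, d) array of observation points, uniform in G
        V : (n, d) array of noisy field observations v(X_i) + noise
        h : kernel bandwidth (Gaussian kernel assumed here)
        """
        w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2.0 * h ** 2))
        return w @ V / w.sum()

    def track_curve(x0, X, V, h, dt=0.01, n_steps=500):
        """Plug-in estimate of the curve dx/dt = v(x(t)), x(0) = x0,
        by forward Euler on the estimated field."""
        x = np.asarray(x0, dtype=float)
        path = [x.copy()]
        for _ in range(n_steps):
            x = x + dt * nw_vector_field(x, X, V, h)
            path.append(x.copy())
        return np.array(path)

    # Synthetic example: a noisy rotation field on the unit square
    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(2000, 2))
    V = np.stack([-(X[:, 1] - 0.5), X[:, 0] - 0.5], axis=1)
    V += 0.05 * rng.normal(size=V.shape)
    path = track_curve([0.8, 0.5], X, V, h=0.1)
    ```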

    H-matrix accelerated second moment analysis for potentials with rough correlation

    We consider the efficient solution of partial differential equations for strongly elliptic operators with constant coefficients and stochastic Dirichlet data by the boundary integral equation method. The computation of the solution's two-point correlation is well understood if the two-point correlation of the Dirichlet data is known and sufficiently smooth. Unfortunately, the problem becomes much more involved in the case of rough data. We show that the concept of H-matrix arithmetic provides a powerful tool to cope with this problem. By employing a parametric surface representation, we end up with an H-matrix arithmetic based on balanced cluster trees. This considerably simplifies the implementation and improves the performance of the H-matrix arithmetic. Numerical experiments are provided to validate and quantify the presented methods and algorithms.
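
    To fix ideas, the discrete second-moment problem can be written schematically: if the Galerkin-discretized boundary integral equation reads $A\mathbf{u}=\mathbf{g}$ with stochastic data $\mathbf{g}$, the two-point correlation of the solution satisfies $C_u = A^{-1} C_g A^{-\top}$, up to discretization-dependent mass-matrix factors that this sketch omits. The dense NumPy stand-in below is only meant to make that algebra concrete; the paper's contribution is carrying out exactly these products and inversions in H-matrix arithmetic at essentially log-linear cost.

    ```python
    import numpy as np

    def second_moment_dense(A, C_g):
        """Dense stand-in for the H-matrix second-moment computation:
        C_u = A^{-1} C_g A^{-T}.

        A   : Galerkin matrix of the boundary integral operator
        C_g : correlation matrix of the (possibly rough) Dirichlet data

        Mass-matrix factors of a concrete discretization are omitted; the
        actual method replaces these O(n^3) dense solves with H-matrix
        arithmetic on balanced cluster trees.
        """
        Y = np.linalg.solve(A, C_g)       # A^{-1} C_g
        return np.linalg.solve(A, Y.T).T  # (A^{-1} C_g) A^{-T}
    ```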

    An error indicator-based adaptive reduced order model for nonlinear structural mechanics -- application to high-pressure turbine blades

    The industrial application motivating this work is the fatigue computation of aircraft engines' high-pressure turbine blades. The material model involves nonlinear elastoviscoplastic behavior laws whose parameters depend on the temperature. For this application, the temperature loading is not accurately known and can reach values relatively close to the creep temperature: important nonlinear effects occur, and the solution strongly depends on the thermal loading used. We consider a nonlinear reduced order model able to compute, in the exploitation phase, the behavior of the blade for a new temperature field loading. The sensitivity of the solution to the temperature makes the classical unenriched proper orthogonal decomposition method fail. In this work, we propose a new error indicator that quantifies the error made by the reduced order model, at a computational complexity independent of the size of the high-fidelity reference model. In our framework, when the error indicator becomes larger than a given tolerance, the reduced order model is updated using a one-time-step solution of the high-fidelity reference model. The approach is illustrated on a series of academic test cases and applied to a setting of industrial complexity involving 5 million degrees of freedom, where the whole procedure is computed in parallel with distributed memory.
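
    Schematically, the strategy described above is an online loop: advance with the reduced model, evaluate the error indicator, and fall back to one high-fidelity time step (followed by a basis update) whenever the indicator exceeds the tolerance. The toy NumPy sketch below illustrates this loop on a linear model problem; the implicit Euler model, the residual-based indicator (whose cost here does grow with the high-fidelity size, unlike the paper's indicator), and all names are assumptions for illustration only.

    ```python
    import numpy as np

    def adaptive_rom(A, f, u0, dt, n_steps, tol):
        """Toy adaptive ROM for u' = A u + f(t), integrated by implicit Euler.

        The reduced model is a Galerkin projection onto an orthonormal basis
        Phi, enriched with a high-fidelity snapshot whenever the relative
        residual of the reduced state exceeds `tol`.
        """
        n = len(u0)
        M = np.eye(n) - dt * A              # implicit Euler: M u_{k+1} = u_k + dt f
        u = np.asarray(u0, dtype=float)
        Phi = np.linalg.qr(u[:, None])[0]   # one-column basis seeded by u0
        for k in range(n_steps):
            rhs = u + dt * f((k + 1) * dt)
            # Reduced solve on the current basis
            a = np.linalg.solve(Phi.T @ M @ Phi, Phi.T @ rhs)
            u_rom = Phi @ a
            # Error indicator: relative high-fidelity residual of the ROM state
            if np.linalg.norm(M @ u_rom - rhs) <= tol * np.linalg.norm(rhs):
                u = u_rom                    # reduced step accepted
            else:
                u = np.linalg.solve(M, rhs)  # one high-fidelity correction step
                Phi = np.linalg.qr(np.column_stack([Phi, u]))[0]  # enrich basis
        return u, Phi
    ```

    Here `f` is a callable returning the forcing vector at time t, and `u0` should be a nonzero vector so that the initial basis is well defined.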