    A Distal Model of Congenital Nystagmus as Nonlinear Adaptive Oscillations

    Congenital nystagmus (CN) is an incurable pathological spontaneous oscillation of the eyes with an onset in the first few months of life. The pathophysiology of CN is poorly understood: there is no consistent neurological abnormality, yet the majority of patients have a wide range of unrelated congenital visual abnormalities affecting the cornea, lens, retina, or optic nerve. In this theoretical study, we show that these eye oscillations could develop as an adaptive response that maximizes visual contrast in an infant visuomotor system with poor foveal function, at a time of peak neural plasticity. We argue that in a visual system with abnormally poor high-spatial-frequency sensitivity, image contrast is maintained not only by keeping the image on the fovea (or its remnant) but also by some degree of image motion. Using the calculus of variations, we show that the optimal trade-off between these conflicting goals is to generate oscillatory eye movements with increasing-velocity waveforms, as seen in real CN. When we add a stochastic component to the start of each epoch (quick-phase inaccuracy), various observed waveforms (including pseudo-cycloid) emerge as optimal strategies. Using the delay-embedding technique, we find a low fractional dimension, as reported in real data. We further show that, if velocity-command-based pre-motor circuitry (the neural integrator) is harnessed to generate these waveforms, the emergence of a null region is inevitable. We conclude that CN could emerge paradoxically as an ‘optimal’ adaptive response in the infant visual system during an early critical period. This can explain why CN does not emerge later in life and why it is so refractory to treatment. It also implies that any therapeutic intervention would need to occur very early in life.
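
    The abstract's two key ingredients, a slow phase with exponentially increasing velocity and quick phases whose landing positions carry stochastic error, can be caricatured in a few lines. The sketch below is a minimal toy simulation under assumed parameter values (tau, threshold, and reset_sd are all hypothetical), not the authors' variational model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_cn(duration=5.0, dt=1e-3, tau=0.15, threshold=2.0, reset_sd=0.3):
    """Toy congenital-nystagmus trace: the eye drifts away from the target
    with exponentially increasing velocity (x' = x / tau), and a quick phase
    resets it near zero with Gaussian inaccuracy once it exceeds a threshold."""
    n = int(duration / dt)
    x = np.empty(n)
    x[0] = reset_sd * rng.standard_normal()   # initial quick-phase error
    for i in range(1, n):
        x[i] = x[i - 1] * np.exp(dt / tau)    # increasing-velocity slow phase
        if abs(x[i]) > threshold:             # quick phase triggered
            x[i] = reset_sd * rng.standard_normal()
    return x

trace = simulate_cn()
print("samples:", trace.size, "peak excursion:", float(np.max(np.abs(trace))))
```

    The stochastic reset is what lets different waveform shapes appear from epoch to epoch in the paper's account; a delay embedding of such a trace could then be used to estimate its fractional dimension.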

    Recovering edges in ill-posed inverse problems: optimality of curvelet frames

    We consider a model problem of recovering a function $f(x_1,x_2)$ from noisy Radon data. The function $f$ to be recovered is assumed smooth apart from a discontinuity along a $C^2$ curve, that is, an edge. We use the continuum white-noise model, with noise level $\varepsilon$. Traditional linear methods for solving such inverse problems behave poorly in the presence of edges. Qualitatively, the reconstructions are blurred near the edges; quantitatively, they give in our model mean squared errors (MSEs) that tend to zero with noise level $\varepsilon$ only as $O(\varepsilon^{1/2})$ as $\varepsilon \to 0$. A recent innovation, nonlinear shrinkage in the wavelet domain, visually improves edge sharpness and improves MSE convergence to $O(\varepsilon^{2/3})$. However, as we show here, this rate is not optimal. In fact, essentially optimal performance is obtained by deploying the recently introduced tight frames of curvelets in this setting. Curvelets are smooth, highly anisotropic elements ideally suited for detecting and synthesizing curved edges. To deploy them in the Radon setting, we construct a curvelet-based biorthogonal decomposition of the Radon operator and build "curvelet shrinkage" estimators based on thresholding of the noisy curvelet coefficients. In effect, the estimator detects edges at certain locations and orientations in the Radon domain and automatically synthesizes edges at corresponding locations and directions in the original domain. We prove that the curvelet shrinkage can be tuned so that the estimator attains, within logarithmic factors, the MSE $O(\varepsilon^{4/5})$ as the noise level $\varepsilon \to 0$. This rate of convergence holds uniformly over a class of functions that are $C^2$ except for discontinuities along $C^2$ curves, and (except for log terms) is the minimax rate for that class. Our approach is an instance of a general strategy which should apply in other inverse problems; we sketch a deconvolution example.
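
    The essential shrinkage step, hard-thresholding noisy transform-domain coefficients at a multiple of the noise level, is easy to illustrate in isolation. The sketch below is a generic stand-in (a sparse coefficient vector plus white noise, with an assumed threshold constant c), not the paper's curvelet decomposition of the Radon operator.

```python
import numpy as np

def hard_threshold(coeffs, sigma, c=3.0):
    """Keep coefficients whose magnitude exceeds c * sigma; zero the rest.
    In curvelet shrinkage this rule is applied to the noisy curvelet
    coefficients before the decomposition is inverted."""
    return np.where(np.abs(coeffs) > c * sigma, coeffs, 0.0)

# Toy demonstration: a few large (edge-carrying) coefficients plus white noise.
rng = np.random.default_rng(1)
true = np.zeros(1000)
true[:10] = 5.0
sigma = 0.5
noisy = true + sigma * rng.standard_normal(true.size)
est = hard_threshold(noisy, sigma)
print("MSE noisy:      ", np.mean((noisy - true) ** 2))
print("MSE thresholded:", np.mean((est - true) ** 2))
```

    Because most coefficients of an edge-dominated image are near zero in a sparsifying frame, zeroing everything below the threshold removes almost pure noise while the few large coefficients survive, which is why the MSE drops.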

    Multi-index Stochastic Collocation convergence rates for random PDEs with parametric regularity

    We analyze the recent Multi-index Stochastic Collocation (MISC) method for computing statistics of the solution of a partial differential equation (PDE) with random data, where the random coefficient is parametrized by means of a countable sequence of terms in a suitable expansion. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data; naturally, the error analysis uses the joint regularity of the solution with respect to both the variables in the physical domain and the parametric variables. In MISC, the number of problem solutions performed at each discretization level is not determined by balancing the spatial and stochastic components of the error, but rather by suitably extending the knapsack-problem approach employed in the construction of quasi-optimal sparse grids and Multi-index Monte Carlo methods. We use a greedy optimization procedure to select the most effective mixed differences to include in the MISC estimator. We apply our theoretical estimates to a linear elliptic PDE in which the log-diffusion coefficient is modeled as a random field, with a covariance similar to a Matérn model, whose realizations have spatial regularity determined by a scalar parameter. We conduct a complexity analysis based on a summability argument, showing algebraic rates of convergence with respect to the overall computational work. The rate of convergence depends on the smoothness parameter, the physical dimensionality, and the efficiency of the linear solver. Numerical experiments show the effectiveness of MISC in this infinite-dimensional setting compared with the Multi-index Monte Carlo method, and compare the observed convergence rate against the rates predicted by our theoretical analysis.
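
    The knapsack-style selection the abstract mentions can be sketched as a greedy ranking of candidate mixed differences by estimated profit (error reduction) per unit work. The numbers below are placeholders: in MISC the profit and work of each multi-index come from the regularity analysis, not from a hand-written list.

```python
def greedy_misc_selection(candidates, work_budget):
    """Select mixed-difference operators by decreasing profit/work ratio
    until the work budget is exhausted (greedy knapsack heuristic).

    candidates: list of (multi_index, profit, work) tuples.
    """
    ranked = sorted(candidates, key=lambda t: t[1] / t[2], reverse=True)
    chosen, spent = [], 0.0
    for idx, profit, work in ranked:
        if spent + work <= work_budget:
            chosen.append(idx)
            spent += work
    return chosen, spent

# Hypothetical candidates: (multi-index, estimated profit, estimated work).
cands = [((1, 1), 0.9, 1.0), ((2, 1), 0.5, 2.0),
         ((1, 2), 0.4, 2.0), ((3, 1), 0.1, 4.0)]
print(greedy_misc_selection(cands, work_budget=4.0))
```

    The same ratio-based rule underlies quasi-optimal sparse grids; MISC extends it to indices that mix spatial and stochastic resolution levels.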

    On the electrophysiology of the atrial fast conduction system: an uncertainty quantification study

    Cardiac modeling entails epistemic uncertainty in the input parameters, such as bundle and chamber geometry, electrical conductivities, and cell parameters, thus calling for an uncertainty quantification (UQ) analysis. Since cardiac activation and the subsequent muscular contraction are produced by a complex electrophysiological system of interconnected conductive media, we focus here on the fast-conducting structures of the atria (the internodal pathways) with the aim of identifying which of the uncertain inputs most influence the propagation of the depolarization front. First, the distributions of the input parameters are calibrated using data available from the literature, taking gender differences into account. Output quantities of interest (QoIs) of medical relevance are defined, and a set of metamodels (one for each QoI) is then trained by polynomial chaos expansion (PCE) in order to run a global sensitivity analysis with nonlinear variance-based Sobol’ indices, with confidence intervals evaluated through the bootstrap method. The parameters to which each QoI is most sensitive are then identified for both genders, showing the same ranking of model-input importance for the electrical activation. Lastly, the probability distributions of the QoIs are obtained through a forward uncertainty analysis using the same trained metamodels. It turns out that several input parameters, including the position of the internodal pathways and the electrical impulse applied at the sinoatrial node, have little influence on the QoIs studied. Conversely, the electrical activation of the atrial fast conduction system is sensitive to the bundle geometry and electrical conductivities, which need to be carefully measured or calibrated for the electrophysiology model to be accurate and predictive.
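
    A first-order Sobol' index measures the fraction of output variance explained by one input alone. Once a cheap metamodel is available, these indices can be estimated by plain pick-and-freeze Monte Carlo, as in the minimal sketch below; the three-input surrogate is a hypothetical stand-in for the trained PCE metamodels, and no bootstrap confidence intervals are computed here.

```python
import numpy as np

rng = np.random.default_rng(2)

def surrogate(x):
    """Hypothetical stand-in for a trained metamodel of one QoI."""
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

def first_order_sobol(model, dim, n=100_000):
    """Pick-and-freeze Monte Carlo estimate of first-order Sobol' indices."""
    a = rng.uniform(size=(n, dim))
    b = rng.uniform(size=(n, dim))
    fa, fb = model(a), model(b)
    var = np.var(np.concatenate([fa, fb]))
    s = np.empty(dim)
    for i in range(dim):
        ab = a.copy()
        ab[:, i] = b[:, i]               # replace only the i-th input
        s[i] = np.mean(fb * (model(ab) - fa)) / var
    return s

print(first_order_sobol(surrogate, dim=3))   # one index per input parameter
```

    In the study's workflow, repeating such an estimate on bootstrap resamples would yield the confidence intervals mentioned in the abstract.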

    A locally adaptive kernel regression method for facies delineation

    Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology that uses kernel regression methods as an effective tool for facies delineation. The method uses both the spatial locations and the actual sampled values to produce, for each individual hard-data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest-neighbor classification method in a number of synthetic aquifers whenever the available hard data are few and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run on a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method improves significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough-curve performance.
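
    The core of the steering-kernel idea is an anisotropic Gaussian whose shape follows the local spread of the data rather than being fixed and isotropic. The sketch below is a simplified version that steers one kernel per facies class from that class's sample covariance (the paper adapts a kernel per hard-data point); the bandwidth h and regularization are assumed values.

```python
import numpy as np

def steering_kernel_classify(x, pts, labels, h=1.0, reg=1e-2):
    """Classify location x from scattered facies samples using anisotropic
    Gaussian kernels aligned with each class's spatial covariance."""
    scores = {}
    for lab in np.unique(labels):
        p = pts[labels == lab]
        cov = np.cov(p.T) + reg * np.eye(2)   # steer along the class's spread
        inv = np.linalg.inv(cov) / h ** 2
        d = x - p                             # displacements to each sample
        w = np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, inv, d))
        scores[lab] = w.sum()                 # kernel-weighted vote
    return max(scores, key=scores.get)

# Hypothetical scattered hard data: two facies with different anisotropy.
rng = np.random.default_rng(3)
a = rng.normal([0.0, 0.0], [2.0, 0.3], size=(20, 2))   # elongated along x
b = rng.normal([3.0, 3.0], [0.3, 2.0], size=(20, 2))   # elongated along y
pts, labels = np.vstack([a, b]), np.array([0] * 20 + [1] * 20)
print(steering_kernel_classify(np.array([0.5, 0.2]), pts, labels))
```

    Because the kernel for the x-elongated facies reaches farther along x, a query point displaced along that axis still votes for the correct facies where an isotropic nearest-neighbor rule could misclassify it.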