4,696 research outputs found

    Image Segmentation with Eigenfunctions of an Anisotropic Diffusion Operator

    We propose the eigenvalue problem of an anisotropic diffusion operator for image segmentation. The diffusion matrix is defined based on the input image. The eigenfunctions, and the projection of the input image onto some eigenspace, capture key features of the input image. An important property of the model is that for many input images the first few eigenfunctions are close to piecewise constant, which makes them useful as a basis for applications such as image segmentation and edge detection. The eigenvalue problem is shown to be related to the algebraic eigenvalue problems arising from several commonly used discrete spectral clustering models. This relation provides a better understanding of, and helps in developing more efficient numerical implementations and rigorous numerical analysis for, discrete spectral segmentation methods. The new continuous model also differs from energy-minimization methods such as geodesic active contours in that no initial guess is required. The multi-scale feature is a natural consequence of the anisotropic diffusion operator, so there is no need to solve the eigenvalue problem at multiple levels. A numerical implementation based on a finite element method with an anisotropic mesh adaptation strategy is presented; it is shown to give much more accurate eigenfunctions than uniform meshes. Several interesting features of the model are examined in numerical examples and possible applications are discussed.
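    The discrete spectral-clustering connection the abstract mentions can be sketched in a few lines. This is a hedged illustration, not the paper's continuous FEM model: it builds a graph Laplacian whose edge weights decay across image edges (an image-dependent, hence anisotropic, diffusion) and extracts the near-piecewise-constant leading eigenvectors; `beta` and the 4-neighbour grid are illustrative choices.

    ```python
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import eigsh

    def spectral_eigenfunctions(img, beta=10.0, k=4):
        """Leading eigenpairs of an image-weighted graph Laplacian: a
        discrete analogue of an image-dependent (anisotropic) diffusion
        operator. Weights are small across strong intensity edges, so the
        low eigenvectors are close to piecewise constant on homogeneous
        regions."""
        h, w = img.shape
        n = h * w
        idx = np.arange(n).reshape(h, w)
        rows, cols, vals = [], [], []
        for di, dj in [(0, 1), (1, 0)]:          # 4-neighbour grid graph
            a = idx[: h - di, : w - dj].ravel()
            b = idx[di:, dj:].ravel()
            diff = (img[: h - di, : w - dj] - img[di:, dj:]).ravel()
            wgt = np.exp(-beta * diff ** 2)      # suppress diffusion across edges
            rows += [a, b]; cols += [b, a]; vals += [wgt, wgt]
        W = sparse.csr_matrix((np.concatenate(vals),
                               (np.concatenate(rows), np.concatenate(cols))),
                              shape=(n, n))
        L = (sparse.diags(np.asarray(W.sum(axis=1)).ravel()) - W).tocsc()
        # shift-invert about a small negative shift: the factorized matrix
        # L - sigma*I stays positive definite even though L is singular
        evals, evecs = eigsh(L, k=k, sigma=-0.01, which="LM")
        order = np.argsort(evals)
        return evals[order], evecs[:, order].reshape(h, w, k)

    # toy image: two constant regions separated by a vertical edge
    img = np.zeros((16, 16)); img[:, 8:] = 1.0
    evals, evecs = spectral_eigenfunctions(img)
    # segment by thresholding the second ("Fiedler") eigenvector
    labels = evecs[:, :, 1] > evecs[:, :, 1].mean()
    ```

    On this toy image the Fiedler vector is essentially a two-valued indicator of the two regions, so a simple threshold recovers the segmentation.
    
    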

    Monotonicity-preserving finite element schemes based on differentiable nonlinear stabilization

    In this work, we propose a nonlinear stabilization technique for scalar conservation laws with implicit time stepping. The method relies on artificial diffusion based on a graph-Laplacian operator. It is nonlinear, since it depends on a shock detector. Further, the resulting method is linearity preserving. The same shock detector is used to gradually lump the mass matrix. The resulting method is local extremum diminishing (LED), positivity preserving, and satisfies a global discrete maximum principle (DMP); Lipschitz continuity has also been proved. However, the resulting scheme is highly nonlinear, leading to very poor nonlinear convergence rates. We therefore propose a smooth version of the scheme, which leads to twice-differentiable nonlinear stabilization. This allows one to use Newton's method directly and obtain quadratic convergence. In the numerical experiments, steady and transient linear transport and the transient Burgers equation are considered in 2D. Using Newton's method with the smooth version of the scheme, the number of nonlinear iterations is reduced by a factor of 10 to 20 compared with Anderson acceleration applied to the original non-smooth scheme. In any case, these properties hold only for the converged solution, not for the iterates. We therefore also propose the concept of projected nonlinear solvers, in which a projection step is performed at the end of every nonlinear iteration onto a finite element space of admissible solutions, i.e. the space that satisfies the desired monotonicity properties (maximum principle or positivity). Peer reviewed. Postprint (author's final draft).
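    Two of the ingredients above can be illustrated on a scalar model problem (a hedged sketch, not the paper's FE scheme): a twice-differentiable surrogate for the absolute value, of the kind used to smooth a shock detector, and a projection of every Newton iterate onto an admissible set. The model equation and `alpha`, `g`, `eps` are illustrative.

    ```python
    import numpy as np

    def abs_smooth(x, eps=1e-3):
        """Twice-differentiable surrogate for |x|: the smoothing that makes
        a non-smooth stabilization term amenable to Newton's method."""
        return np.sqrt(x * x + eps * eps)

    def projected_newton(f, df, u0, bounds=None, tol=1e-12, maxit=50):
        """Newton iteration with an optional projection of every iterate
        onto an admissible interval (a scalar stand-in for projecting onto
        a FE space of solutions satisfying the monotonicity constraints)."""
        u, hist = u0, []
        for _ in range(maxit):
            r = f(u)
            hist.append(abs(r))
            if abs(r) < tol:
                break
            u = u - r / df(u)
            if bounds is not None:
                u = min(max(u, bounds[0]), bounds[1])  # projection step
        return u, hist

    # model scalar problem u + alpha*|u|*u = g, with smoothed |u|
    alpha, g = 2.0, 1.0
    f = lambda u: u + alpha * abs_smooth(u) * u - g
    df = lambda u: 1.0 + alpha * (abs_smooth(u) + u * u / abs_smooth(u))
    u, hist = projected_newton(f, df, u0=1.0, bounds=(0.0, 1.0))
    ```

    Because the smoothed residual is differentiable, the residual history `hist` drops quadratically; with the raw `abs` the derivative would be discontinuous at the kink and convergence would stall.
    
    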

    Graph- and finite element-based total variation models for the inverse problem in diffuse optical tomography

    Total variation (TV) is a powerful regularization method that has been widely applied in imaging, but it is difficult to apply to diffuse optical tomography (DOT) image reconstruction (the inverse problem) due to complex and unstructured geometries, the non-linearity of the data-fitting and regularization terms, and the non-differentiability of the regularization term. We develop several approaches to overcome these difficulties by: i) defining discrete differential operators for unstructured geometries using both finite element and graph representations; ii) developing an optimization algorithm based on the alternating direction method of multipliers (ADMM) for the non-differentiable and non-linear minimization problem; iii) investigating isotropic and anisotropic variants of TV regularization, and comparing their finite element- and graph-based implementations. These approaches are evaluated in experiments on simulated data and on real data acquired from a tissue phantom. Our results show that both FEM- and graph-based TV regularization are able to accurately reconstruct both sparse and non-sparse distributions without the over-smoothing effect of Tikhonov regularization or the over-sparsifying effect of L1 regularization. The graph representation was found to outperform the FEM method for low-resolution meshes, while the FEM method was more accurate for high-resolution meshes. Comment: 24 pages, 11 figures. Revised version includes revised figures and improved clarity.
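    The graph-based TV + ADMM combination can be sketched on a small denoising problem (a minimal stand-in for the DOT inverse problem, which has a non-linear forward model): minimise ½‖x − y‖² + λ‖Dx‖₁, where D is the incidence matrix of the graph, i.e. anisotropic graph TV. The splitting, penalty `rho`, and toy signal are illustrative.

    ```python
    import numpy as np

    def soft(v, t):
        """Soft-thresholding: the proximal operator of t*||.||_1."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def graph_tv_denoise(y, edges, lam=1.0, rho=1.0, iters=200):
        """ADMM for min_x 0.5*||x - y||^2 + lam*||D x||_1, where D is the
        graph incidence (difference) matrix -- anisotropic graph TV."""
        n, m = len(y), len(edges)
        D = np.zeros((m, n))
        for k, (i, j) in enumerate(edges):
            D[k, i], D[k, j] = 1.0, -1.0
        A = np.eye(n) + rho * D.T @ D        # x-update normal equations
        z = np.zeros(m); u = np.zeros(m)     # split variable, scaled dual
        for _ in range(iters):
            x = np.linalg.solve(A, y + rho * D.T @ (z - u))
            z = soft(D @ x + u, lam / rho)   # prox of the non-smooth term
            u += D @ x - z                   # scaled dual update
        return x

    # 1-D chain graph, piecewise-constant signal plus noise
    rng = np.random.default_rng(0)
    y = np.concatenate([np.zeros(20), np.ones(20)]) + 0.05 * rng.standard_normal(40)
    edges = [(i, i + 1) for i in range(39)]
    x = graph_tv_denoise(y, edges, lam=0.5)
    ```

    The edge list can come from any unstructured mesh or graph, which is the point of the graph representation: the same solver applies once D is assembled, regardless of geometry.
    
    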

    Thermal Radiation from a Fluctuating Event Horizon

    We consider a pointlike two-level system undergoing uniformly accelerated motion. We evaluate the transition probability, for a finite time interval, of this system coupled to a massless scalar field near a fluctuating event horizon. Horizon fluctuations are modeled using a random noise which generates light-cone fluctuations. We study the case of centered, stationary, Gaussian random processes. The transition probability of the system is obtained from the positive-frequency Wightman function calculated to one-loop order in the noise averaging. Our results show that the fluctuating horizon modifies the thermal radiation but leaves unchanged the temperature associated with the acceleration. Comment: 8 pages, 3 figures.
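    For reference, the unperturbed benchmark that the abstract's result is compared against is the standard thermal response of a uniformly accelerated two-level system. This is textbook background, not taken from the paper: in natural units ($\hbar = c = k_B = 1$) the response satisfies the KMS detailed-balance condition at the Unruh temperature,

    ```latex
    % Unruh temperature for proper acceleration a (natural units)
    T_U = \frac{a}{2\pi},
    \qquad
    \frac{\mathcal{F}(\omega)}{\mathcal{F}(-\omega)} = e^{-\omega/T_U}
    % KMS detailed balance between excitation and de-excitation rates
    ```

    so the paper's statement that the temperature is unchanged means the noise-averaged Wightman function still satisfies this detailed-balance relation with the same $T_U$, even though the spectrum itself is modified.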

    Noncommutative fields and the short-scale structure of spacetime

    There is growing evidence that, due to quantum gravity effects, the effective spacetime dimensionality might change in the UV. In this letter we investigate this hypothesis by using quantum fields to derive the UV behaviour of the static potential between two point sources. We mimic quantum gravity effects by using non-commutative fields associated with a Lie-group momentum space with a Planck-mass curvature scale. We find that the static potential becomes finite in the short-distance limit. This indicates that quantum gravity effects lead to a dimensional reduction in the UV or, alternatively, that point-like sources are effectively smoothed out by the Planck-scale features of the non-commutative quantum fields. Comment: 12 pages, 2 figures.
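    The commutative baseline being deformed here is a standard result (background fact, not from the paper): two static point sources exchanging a massless scalar with coupling $g$ feel a Coulomb-like potential that diverges at the origin,

    ```latex
    % Commutative massless-scalar exchange between two static point sources
    V(r) = -\frac{g^2}{4\pi r} \;\xrightarrow{\; r \to 0 \;}\; -\infty
    ```

    and the paper's claim is that the Lie-group momentum space renders $V(r)$ finite as $r \to 0$, which can be read either as a UV dimensional reduction or as a Planck-scale smearing of the sources.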

    Reliable inference of exoplanet light curve parameters using deterministic and stochastic systematics models

    Time-series photometry and spectroscopy of transiting exoplanets allow us to study their atmospheres. Unfortunately, the precision required to extract atmospheric information surpasses the design specifications of most general-purpose instrumentation, resulting in instrumental systematics in the light curves that are typically larger than the target precision. Systematics must therefore be modelled, leaving the inference of light curve parameters conditioned on the subjective choice of models and model selection criteria. This paper aims to test the reliability of the most commonly used systematics models and model selection criteria. As we are primarily interested in recovering light curve parameters rather than the favoured systematics model, marginalisation over systematics models is introduced as a more robust alternative to simple model selection. This can incorporate uncertainties in the choice of systematics model into the error budget, as well as the model parameters. Its use is demonstrated using a series of simulated transit light curves. Stochastic models, specifically Gaussian processes, are also discussed in the context of marginalisation over systematics models, and are found to reliably recover the transit parameters for a wide range of systematics functions. None of the tested model selection criteria, including the BIC, routinely recovered the correct model. This means that commonly used methods based on simple model selection may underestimate the uncertainties when extracting transmission and eclipse spectra from real data, and low-significance claims using such techniques should be treated with caution.
In general, no systematics modelling technique is perfect; however, marginalisation over many systematics models helps to mitigate poor model selection, and stochastic processes provide an even more flexible approach to modelling instrumental systematics. Comment: 15 pages, 2 figures, published in MNRAS, typo in footnote equation corrected.
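    The marginalisation idea can be sketched with approximate evidence weights. This is a hedged illustration of the general recipe, not the paper's exact implementation: each systematics model's BIC is converted to a weight ∝ exp(−BIC/2), and the transit depth is averaged over models with the between-model scatter folded into the error budget. The depth, error, and BIC values are hypothetical.

    ```python
    import numpy as np

    def bic_weights(bics):
        """Approximate posterior model weights w_i ∝ exp(-BIC_i / 2),
        shifted by the minimum BIC for numerical stability."""
        b = np.asarray(bics, float)
        w = np.exp(-0.5 * (b - b.min()))
        return w / w.sum()

    def marginalise_depth(depths, depth_errs, bics):
        """Marginalise a transit depth over systematics models: weighted
        mean, with between-model scatter added to the per-model errors."""
        w = bic_weights(bics)
        d_arr = np.asarray(depths, float)
        e_arr = np.asarray(depth_errs, float)
        d = np.average(d_arr, weights=w)
        var = np.average(e_arr ** 2 + (d_arr - d) ** 2, weights=w)
        return d, np.sqrt(var)

    # hypothetical fits of one light curve under three systematics models
    depths = [0.0102, 0.0098, 0.0110]
    errs   = [0.0003, 0.0004, 0.0005]
    bics   = [1012.3, 1010.1, 1025.8]
    d, err = marginalise_depth(depths, errs, bics)
    ```

    The marginalised error is never smaller than the disagreement between plausible models allows, which is exactly how model-choice uncertainty enters the error budget instead of being hidden by picking a single "best" model.
    
    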