
    DTI denoising for data with low signal to noise ratios

    Low signal to noise ratio (SNR) experiments in diffusion tensor imaging (DTI) give key information about tracking and anisotropy, e.g., by measurements with small voxel sizes or with high b values. However, due to the complicated and dominating impact of thermal noise, such data are still seldom analysed. In this paper, Monte Carlo simulations are presented which investigate the distributions of noise for different DTI variables in low SNR situations. Based on this study, a strategy for the application of spatial smoothing is derived. Optimal prerequisites for spatial filters are unbiased, bell-shaped distributions with uniform variance, but only few variables have statistics close to that. To construct a convenient filter, a chain of nonlinear Gaussian filters is adapted to the peculiarities of DTI and a bias correction is introduced. This edge-preserving three-dimensional filter is then validated via a quasi-realistic model. Further, it is shown that for small sample sizes the filter is as effective as a maximum likelihood estimator and produces reliable results down to a local SNR of approximately 1. The filter is finally applied to very recent data with isotropic voxels of size 1Ɨ1Ɨ1 mm³, which corresponds to a spatial mean SNR of 2.5. This application demonstrates the statistical robustness of the filter method. Though the Rician noise model is only approximately realized in the data, the gain of information by spatial smoothing is considerable.
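    The Rician bias that makes low-SNR magnitude data hard to analyse can be illustrated with a short Monte Carlo sketch, in the spirit of the paper's simulations; the function name and sample counts here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def rician_sample(A, sigma, n=100_000):
    # magnitude of a complex signal with true amplitude A and Gaussian
    # noise of standard deviation sigma in both channels -> Rician
    re = A + rng.normal(0.0, sigma, n)
    im = rng.normal(0.0, sigma, n)
    return np.hypot(re, im)

# at low SNR = A/sigma the mean magnitude overestimates the true signal,
# which is the kind of bias the paper's correction addresses
for snr in (1.0, 2.5, 10.0):
    m = rician_sample(snr, 1.0)
    print(f"SNR {snr:4.1f}: mean magnitude {m.mean():.2f}, true signal {snr}")
```

    At an SNR around 1 the bias is substantial, while at high SNR the Rician distribution approaches a Gaussian centred on the true amplitude.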

    A superior edge preserving filter with a systematic analysis

    A new, adaptive, edge-preserving filter for use in image processing is presented. It shows superior performance when compared to other filters. Termed the contiguous K-average, it aggregates pixels by examining all pixels contiguous to an existing cluster and adding the pixel closest to the mean of the existing cluster. The process is iterated until K pixels have been accumulated. Rather than simply comparing the visual results of processing with this operator to those of other filters, approaches were developed which allow quantitative evaluation of how well a filter performs. Particular attention is given to the standard deviation of noise within a feature and to the stability of imagery under iterative processing. Demonstrations illustrate the ability of several filters to suppress noise and retain edges, the effect of filtering as a preprocessing step, and the utility of the contiguous K-average filter when used with remote sensing data.
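    The description above can be sketched directly: grow a connected cluster from each pixel, always adding the frontier pixel whose value is closest to the running cluster mean, and output that mean. This is a minimal, unoptimized reading of the algorithm; the 4-connected neighbourhood and tie-breaking are our assumptions:

```python
import numpy as np

def contiguous_k_average(img, K=5):
    # for each pixel, grow a 4-connected cluster of K pixels by repeatedly
    # adding the contiguous pixel closest to the current cluster mean,
    # then replace the pixel with that mean
    H, W = img.shape
    out = np.empty((H, W), dtype=float)
    for y in range(H):
        for x in range(W):
            cluster = {(y, x)}
            total = float(img[y, x])
            while len(cluster) < K:
                mean = total / len(cluster)
                best, best_d = None, np.inf
                for (cy, cx) in cluster:
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < H and 0 <= nx < W and (ny, nx) not in cluster:
                            d = abs(float(img[ny, nx]) - mean)
                            if d < best_d:
                                best, best_d = (ny, nx), d
                if best is None:
                    break
                cluster.add(best)
                total += float(img[best])
            out[y, x] = total / len(cluster)
    return out
```

    Because the cluster preferentially absorbs pixels on its own side of an intensity edge, averaging rarely crosses the edge, which is the source of the filter's edge preservation.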

    BEMDEC: An Adaptive and Robust Methodology for Digital Image Feature Extraction

    The intriguing study of feature extraction, and edge detection in particular, has, as a result of the increased use of imagery, drawn even more attention, not just from the field of computer science but also from a variety of scientific fields. However, various challenges persist surrounding the formulation of a feature extraction operator, particularly for edges, that satisfies the necessary properties of low probability of error (i.e., failure to mark true edges), accuracy, and a consistent response to a single edge. Moreover, most of the work in the area of feature extraction has focused on improving existing approaches rather than devising or adopting new ones. In the image processing subfield, where the needs constantly change, we must equally change the way we think. In this digital world, where the use of images for a variety of purposes continues to increase, researchers who are serious about addressing the aforementioned limitations must be able to think outside the box and step away from the usual in order to overcome these challenges. In this dissertation, we propose an adaptive and robust, yet simple, digital image feature detection methodology using bidimensional empirical mode decomposition (BEMD), a sifting process that decomposes a two-dimensional (2D) signal into its bidimensional intrinsic mode functions (BIMFs). The method is further extended to detect corners and curves, and is thus dubbed BEMDEC, indicating its ability to detect edges, corners and curves. In addition to the application of BEMD, a unique combination of a flexible envelope estimation algorithm, stopping criteria and boundary adjustment made the realization of this multi-feature detector possible. Further application of two morphological operators, binarization and thinning, adds to the quality of the operator.
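    A very crude sketch of one BEMD sifting pass conveys the idea: estimate upper and lower envelopes of the image, subtract their mean, and repeat; the residual oscillatory component (the first BIMF) concentrates at fine-scale structure such as edges. Here the envelopes are approximated with 3Ɨ3 local max/min windows, a deliberate simplification of the dissertation's flexible envelope estimation:

```python
import numpy as np

def local_env(img, op):
    # 3x3 local max or min via edge-padded shifts -- a crude stand-in
    # for a proper extrema-interpolation envelope
    H, W = img.shape
    p = np.pad(img, 1, mode='edge')
    shifts = [p[1+dy:1+dy+H, 1+dx:1+dx+W]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return op(np.stack(shifts), axis=0)

def first_bimf(img, n_sift=8):
    # sifting: repeatedly subtract the mean of the upper and lower
    # envelopes; what remains is the fastest-oscillating component
    h = img.astype(float)
    for _ in range(n_sift):
        mean_env = 0.5 * (local_env(h, np.max) + local_env(h, np.min))
        h = h - mean_env
    return h
```

    A constant image yields an identically zero BIMF, while step edges leave a strong residual that a subsequent binarization step could mark as an edge map.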

    Spatial Smoothing for Diffusion Tensor Imaging with low Signal to Noise Ratios

    Though low signal to noise ratio (SNR) experiments in DTI give key information about tracking and anisotropy, e.g., by measurements with very small voxel sizes, such experiments have, due to the complicated impact of thermal noise, up to now seldom been analysed. In this paper, Monte Carlo simulations are presented which investigate the random fields of noise for different DTI variables in low SNR situations. Based on this study, a strategy for spatial smoothing, which demands essentially uniform noise, is derived. To construct a convenient filter, the weights of the nonlinear Aurich chain are adapted to DTI. This edge-preserving three-dimensional filter is then validated in different variants via a quasi-realistic model and is applied to very recent data with isotropic voxels of size 1Ɨ1Ɨ1 mm³, which corresponds to a spatial mean SNR of approximately 3.
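    The nonlinear Gaussian filters referred to here (the Aurich chain) weight each neighbour by both its spatial distance and its intensity difference, so averaging is suppressed across edges. A single stage can be sketched as follows; the parameter names are ours, and the periodic boundary handling via np.roll is a simplification:

```python
import numpy as np

def nonlinear_gauss(img, sigma_s=1.0, sigma_r=5.0, radius=2):
    # one stage of a nonlinear (bilateral-type) Gaussian filter:
    # spatial Gaussian weights are damped where intensities differ,
    # so smoothing stops at edges; the paper chains several such
    # stages with adapted parameters
    img = img.astype(float)
    acc = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s**2)
                       - (shifted - img)**2 / (2 * sigma_r**2))
            acc += w * shifted
            norm += w
    return acc / norm
```

    With sigma_r small relative to the edge contrast, pixels on opposite sides of an edge get negligible mutual weight, which is the edge-preserving behaviour the abstract describes.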

    Contour extraction from HVEM image of microvessel using active contour models

    This thesis reports research results on automatic contour extraction from high voltage electron microscope (HVEM) images of thick cross-section montages of small blood vessels. Previous work on this subject, based on conventional edge detection operations combined with edge linking, has proven inadequate for describing the inner structural compartments of microvessels. In this thesis, an active contour model (commonly referred to as a ā€œsnakeā€) is applied to advance the previous work. Active contour models have proven to be a powerful and flexible paradigm for many problems in image understanding, especially contour extraction from medical images. With the developed energy functions, the active contour is attracted towards the edges under the action of internal forces (describing elasticity properties of the contour), image forces and external forces, by means of minimization of the energy functions. Based on this active contour model, an effective algorithm is implemented as a powerful tool for 2-D contour extraction, applied to our problem for the first time. The results thus obtained are encouraging.
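    The energy minimization described above can be sketched with a greedy snake iteration: each contour point moves to whichever neighbouring pixel minimizes the sum of an internal elasticity term and an image energy (e.g., negative gradient magnitude). This is a minimal illustration under our own assumptions, not the thesis's exact formulation:

```python
import numpy as np

def greedy_snake_step(pts, energy_img, alpha=1.0):
    # one greedy iteration over a closed contour: each point moves to
    # the 8-neighbour (or stays put) that minimises internal elasticity
    # plus image energy; energy_img would typically be the negative
    # gradient magnitude so the contour is drawn toward edges
    n = len(pts)
    new = pts.copy()
    H, W = energy_img.shape
    for i in range(n):
        prev_p, next_p = new[(i - 1) % n], pts[(i + 1) % n]
        best, best_e = pts[i], np.inf
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                c = pts[i] + np.array([dy, dx])
                y, x = int(c[0]), int(c[1])
                if not (0 <= y < H and 0 <= x < W):
                    continue
                # elasticity: penalise deviation from the neighbours' midpoint
                e_int = alpha * np.sum((c - 0.5 * (prev_p + next_p))**2)
                e = e_int + energy_img[y, x]
                if e < best_e:
                    best, best_e = c, e
        new[i] = best
    return new
```

    Iterating this step until no point moves gives a local minimum of the discrete energy; practical snakes add a curvature (rigidity) term and external constraint forces as well.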

    Smooth representation of thin shells and volume structures for isogeometric analysis

    The purpose of this study is to develop self-contained methods for obtaining smooth meshes which are compatible with isogeometric analysis (IGA). The study contains three main parts. We start by developing a better understanding of shapes and splines through the study of an image-related problem. Then we proceed towards obtaining smooth volumetric meshes of given voxel-based images. Finally, we treat the smoothness issue on multi-patch domains with C1 coupling. The highlights of each part follow. First, we present a B-spline convolution method for boundary representation of voxel-based images. We adopt a filtering technique to compute the B-spline coefficients and gradients of the images efficiently. We then employ the B-spline convolution to develop a non-rigid image registration method. The proposed method is in some sense ā€œisoparametricā€, in that all the computation is done within the B-spline framework. In particular, updating the images by B-spline composition promotes a smooth transformation map between the images. We show possible medical applications of our method by applying it to the registration of brain images. Secondly, we develop a self-contained volumetric parametrization method based on the B-spline boundary representation. We aim to convert given voxel-based data to a matching C1 representation with hierarchical cubic splines. The concept of the osculating circle is employed to enhance the geometric approximation; this is achieved with a single template and linear transformations (scaling, translations, and rotations), without the need to solve an optimization problem. Moreover, we use Laplacian smoothing and refinement techniques to avoid irregular meshes and to improve mesh quality. We show with several examples that the method is capable of handling complex 2D and 3D configurations. In particular, we parametrize the 3D Stanford bunny, which contains irregular shapes and voids.
    Finally, we propose a BĆ©zier ordinates approach and a splines approach for C1 coupling. In the first approach, the new basis functions are defined in terms of the Bernstein–BĆ©zier polynomials. In the second approach, the new basis is defined as a linear combination of C0 basis functions. The methods are not limited to planar or bilinear mappings. They allow the modeling of solutions to fourth-order partial differential equations (PDEs) on complex geometric domains, provided that the given patches are G1 continuous. Both methods have their advantages. In particular, the BĆ©zier approach offers more degrees of freedom, while the spline approach is more computationally efficient. In addition, we propose partial degree elevation to overcome the C1-locking issue caused by over-constraining of the solution space. We demonstrate the potential of the resulting C1 basis functions for applications in IGA involving fourth-order PDEs, such as those appearing in Kirchhoff–Love shell models, Cahn–Hilliard phase field applications, and biharmonic problems.
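    The B-spline machinery underlying the first part can be illustrated with the uniform cubic B-spline kernel, whose integer shifts form a partition of unity; this combination of C2 smoothness, compact support, and reproduction of constants is what B-spline convolution and composition exploit. A generic sketch, not code from the thesis:

```python
import numpy as np

def cubic_bspline(t):
    # uniform cubic B-spline kernel: piecewise cubic, C2-continuous,
    # supported on |t| < 2
    t = np.abs(np.atleast_1d(np.asarray(t, dtype=float)))
    out = np.zeros_like(t)
    m = t < 1
    out[m] = (4 - 6 * t[m]**2 + 3 * t[m]**3) / 6
    m = (t >= 1) & (t < 2)
    out[m] = (2 - t[m])**3 / 6
    return out

# partition of unity: integer shifts of the kernel sum to exactly 1,
# so a spline with constant coefficients reproduces that constant
x = np.linspace(0.0, 1.0, 5)
partition = sum(cubic_bspline(x - k) for k in range(-2, 3))
```

    In B-spline convolution, image values play the role of coefficients against these shifted kernels, giving a smooth, analytically differentiable representation of the voxel data.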