
    Addressing Partial Volume Artifacts with Quantitative Computed Tomography-Based Finite Element Modeling of the Human Proximal Tibia

    Quantitative computed tomography (QCT) based finite element (FE) modeling has the potential to clarify the role of subchondral bone stiffness in osteoarthritis. The limited spatial resolution of clinical CT systems, however, results in partial volume (PV) artifacts and low contrast between cortical and trabecular bone, which adversely affect the accuracy of QCT-FE models. The overall aim of this research was to improve the accuracy of QCT-FE predictions of stiffness at the proximal tibial subchondral surface using different cortical modeling and PV correction algorithms. For Study #1, QCT-FE models of the human proximal tibia were developed by (1) separate modeling of cortical and trabecular bone (SM) and (2) continuum modeling (CM). QCT-FE models with SM and CM explained 76%-81% of the experimental stiffness variance, with error ranging between 11.2% and 20.2%. SM did not offer any improvement relative to CM. The segmented cortical region indicated densities below the range reported for cortical bone, suggesting that cortical voxels were corrupted by PV artifacts. For Study #2, we corrected PV layers at the cortical bone using four different methods: (1) image deblurring of the entire proximal tibia (IDA); (2) image deblurring of the cortical region only (IDC); (3) image remapping (IR); and (4) voxel exclusion (VE). IDA resulted in low predictive accuracy, with R2=50% and error of 76.4%. IDC explained 70% of the measured stiffness variance with 23.3% error. IR resulted in R2=81% with 10.6% error. VE resulted in the highest predictive accuracy, with R2=84% and 9.8% error. For Study #3, we investigated whether PV effects could be addressed by mapping bone’s elastic modulus (E) to the mesh’s Gauss integration points. FE models using this Gauss-point method converged with larger elements than the conventional method, which assigned a single elastic modulus to each element (constant-E). The error at the converged mesh was similar for constant-E and Gauss-point models, though the Gauss-point method reached this error with larger elements and less computation time (30 min vs 180 min). This research indicated that separate modeling of cortical and trabecular bone did not improve predictions of stiffness at the subchondral surface, but that PV correction has the potential to improve QCT-FE models of subchondral bone. These models may help to clarify the role of subchondral bone stiffness in knee OA pathogenesis in living people.
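    To make the density-to-modulus mapping and the two assignment strategies concrete, the Python/NumPy sketch below contrasts assigning one modulus per element (constant-E) with sampling the modulus at Gauss integration points. The power-law coefficients, function names, and interpolant interface are illustrative assumptions, not the calibration used in the studies above.

```python
import numpy as np

# Hypothetical density-to-modulus power law; the coefficients below are
# placeholders, not the relationship used in the thesis.
def modulus_from_density(rho_mg_cm3, a=6850.0, b=1.49):
    """Map QCT-equivalent density (mg/cm^3) to an elastic modulus E (MPa)."""
    return a * (np.asarray(rho_mg_cm3) / 1000.0) ** b

def element_constant_E(nodal_densities):
    """Constant-E approach: one modulus per element from the mean element density."""
    return modulus_from_density(np.mean(nodal_densities))

def gauss_point_E(density_interp, gauss_points):
    """Gauss-point approach: sample density at each integration point and map
    each sample to its own modulus, so heterogeneity (including PV-affected
    voxels) is represented inside a single, larger element."""
    return np.array([modulus_from_density(density_interp(p)) for p in gauss_points])

# Example with a fabricated density field and the 2x2x2 Gauss points of one element.
density_interp = lambda p: 400.0 + 200.0 * p[2]            # density varies with depth
gauss_points = [(x, y, z) for x in (-0.577, 0.577)
                          for y in (-0.577, 0.577)
                          for z in (-0.577, 0.577)]
print(element_constant_E([300.0, 400.0, 500.0, 600.0]))     # one E for the element
print(gauss_point_E(density_interp, gauss_points))          # eight E values
```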

    Medical image enhancement using threshold decomposition driven adaptive morphological filter

    One of the most common degradations in medical images is their poor contrast. This motivates contrast enhancement methods that modify the intensity distribution of the image. In this paper, a new edge-detection-driven morphological filter is proposed to sharpen digital medical images. This is done by detecting the positions of the edges and then applying a class of morphological filters to them. Motivated by the success of threshold decomposition, gradient-based operators are used to detect the locations of the edges, and a morphological filter is used to sharpen these detected edges. Experimental results demonstrate that the detected-edge deblurring filter improves the visibility and perceptibility of various embedded structures in digital medical images. Moreover, the performance of the proposed filter is superior to that of other sharpener-type filters.
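    As a rough illustration of the general idea (gradient-based edge detection followed by morphological sharpening at the detected edges), here is a minimal Python/SciPy sketch using a Sobel edge map and a morphological "toggle" sharpener. The threshold, structuring-element size, and toggle rule are assumptions for illustration, not the paper's threshold-decomposition algorithm.

```python
import numpy as np
from scipy import ndimage

def morphological_edge_sharpen(img, edge_thresh=0.1, size=3):
    """Sharpen an image only where edges are detected (illustrative parameters)."""
    img = img.astype(float)

    # Gradient-based edge map: Sobel magnitude normalised to [0, 1].
    gx, gy = ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1)
    grad = np.hypot(gx, gy)
    grad /= grad.max() + 1e-12
    edges = grad > edge_thresh

    # Morphological "toggle" sharpening: push each edge pixel toward whichever
    # of its local dilation/erosion it is already closer to.
    dil = ndimage.grey_dilation(img, size=(size, size))
    ero = ndimage.grey_erosion(img, size=(size, size))
    sharp = np.where(dil - img < img - ero, dil, ero)

    out = img.copy()
    out[edges] = sharp[edges]
    return out

# Example usage on a synthetic blurry step edge.
demo = ndimage.gaussian_filter(np.tile(np.linspace(0, 1, 64), (64, 1)), sigma=2)
sharpened = morphological_edge_sharpen(demo)
```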

    Recent Progress in Image Deblurring

    This paper comprehensively reviews recent developments in image deblurring, covering non-blind/blind and spatially invariant/variant techniques. These techniques share the objective of inferring a latent sharp image from one or several blurry observations, while blind deblurring techniques must additionally estimate an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must provide high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how the ill-posedness of the problem is handled, which is a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite this progress, image deblurring, especially the blind case, is limited by complex application conditions that make the blur kernel spatially variant and hard to estimate. We provide a holistic understanding of and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods and practical issues, as well as a discussion of promising future directions, is also presented. (Comment: 53 pages, 17 figures)
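    For a concrete sense of the non-blind setting (where the blur kernel is known and only the ill-posed inversion remains), the self-contained Python sketch below implements the classic Richardson-Lucy iteration. It is one simple baseline from the Bayesian-inference family discussed in the review, not a method proposed there; the kernel and iteration count are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30):
    """Minimal non-blind Richardson-Lucy deconvolution (illustrative only;
    assumes a known, non-negative blur kernel `psf` and non-negative data)."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1, ::-1]                      # adjoint of convolution
    estimate = np.full_like(blurred, blurred.mean(), dtype=float)
    eps = 1e-12
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate

# Example usage with a simple 5x5 box blur kernel.
psf = np.ones((5, 5)) / 25.0
blurred = fftconvolve(np.random.rand(64, 64), psf, mode="same")
restored = richardson_lucy(blurred, psf)
```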

    QuaSI: Quantile Sparse Image Prior for Spatio-Temporal Denoising of Retinal OCT Data

    Optical coherence tomography (OCT) enables high-resolution, non-invasive 3D imaging of the human retina but is inherently impaired by speckle noise. This paper introduces a spatio-temporal denoising algorithm for OCT data at the B-scan level using a novel quantile sparse image (QuaSI) prior. To remove speckle noise while preserving image structures of diagnostic relevance, we implement our QuaSI prior via median filter regularization coupled with a Huber data fidelity model in a variational approach. For efficient energy minimization, we develop an alternating direction method of multipliers (ADMM) scheme using a linearization of median filtering. Our spatio-temporal method can handle both denoising of single B-scans and of temporally consecutive B-scans, yielding volumetric OCT data with an enhanced signal-to-noise ratio. Using only 4 B-scans, our algorithm achieved performance comparable to averaging 13 B-scans and outperformed other current denoising methods. (Comment: submitted to MICCAI)
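    To convey the flavor of combining a robust Huber data term with median filtering as the regularizer, the heavily simplified Python sketch below alternates a Huber gradient step with a median-filtering step. It is an illustrative stand-in, not the paper's ADMM scheme with linearized median filtering, and all parameters and the alternation rule are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def huber_grad(r, delta):
    """Gradient of the Huber penalty, giving a speckle-robust data-fidelity step."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def quasi_like_denoise(noisy, n_iter=20, step=0.5, delta=0.05, size=3):
    """Simplified alternation inspired by the QuaSI idea: Huber data-fidelity
    gradient steps interleaved with median filtering as the prior."""
    x = noisy.astype(float).copy()
    for _ in range(n_iter):
        x -= step * huber_grad(x - noisy, delta)          # data-fidelity step
        x = 0.5 * (x + median_filter(x, size=size))       # prior (median) step
    return x

# Example usage on a synthetic noisy B-scan-like image.
clean = np.tile(np.linspace(0, 1, 128), (128, 1))
noisy = clean + 0.1 * np.random.randn(128, 128)
denoised = quasi_like_denoise(noisy)
```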

    Deep learning in computational microscopy

    We propose to use deep convolutional neural networks (DCNNs) to perform 2D and 3D computational imaging, and we investigate three different applications. We first address the 3D inverse scattering problem by learning from a large number of target and speckle training pairs. We also demonstrate a new DCNN architecture for Fourier ptychographic microscopy (FPM) reconstruction, which achieves high-resolution phase recovery with considerably less data than standard FPM. Finally, we employ DCNN models that predict focused 2D fluorescence microscopy images from blurred images captured at overfocused or underfocused planes. (Published version)
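    As a minimal illustration of the third application (learning an image-to-image mapping from defocused to focused micrographs), the PyTorch sketch below defines a tiny convolutional network and one training step on placeholder tensors. The architecture, layer sizes, and loss are assumptions for illustration; the papers' DCNNs are larger and task-specific.

```python
import torch
import torch.nn as nn

class RefocusNet(nn.Module):
    """Tiny CNN mapping a defocused image to an in-focus estimate (illustrative)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# One training step on placeholder (defocused, in-focus) patch pairs.
model = RefocusNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

defocused = torch.rand(8, 1, 64, 64)   # placeholder input batch
in_focus = torch.rand(8, 1, 64, 64)    # placeholder targets

optimizer.zero_grad()
loss = loss_fn(model(defocused), in_focus)
loss.backward()
optimizer.step()
```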