Monte Carlo-based Noise Compensation in Coil Intensity Corrected Endorectal MRI
Background: Prostate cancer is one of the most common forms of cancer in
males, making early diagnosis important. Magnetic resonance imaging (MRI) has
been useful in visualizing and localizing tumor candidates and with the use of
endorectal coils (ERC), the signal-to-noise ratio (SNR) can be improved. The
coils introduce intensity inhomogeneities and the surface coil intensity
correction built into MRI scanners is used to reduce these inhomogeneities.
However, the correction typically performed at the MRI scanner level leads to
noise amplification and noise level variations. Methods: In this study, we
introduce a new Monte Carlo-based noise compensation approach for coil
intensity corrected endorectal MRI which allows for effective noise
compensation and preservation of details within the prostate. The approach
accounts for the ERC SNR profile via a spatially-adaptive noise model for
correcting non-stationary noise variations. Such a method is useful
particularly for improving the image quality of coil intensity corrected
endorectal MRI data performed at the MRI scanner level and when the original
raw data is not available. Results: SNR and contrast-to-noise ratio (CNR)
analysis in patient experiments demonstrates average improvements of 11.7 dB
and 11.2 dB, respectively, over uncorrected endorectal MRI, and shows strong
performance compared to existing approaches. Conclusions: A new noise
compensation method was developed for the purpose of improving the quality of
coil intensity corrected endorectal MRI data performed at the MRI scanner
level. We illustrate that the proposed approach achieves promising noise
compensation performance, which is particularly important for processing coil
intensity corrected endorectal MRI data performed at the MRI scanner level and
when the original raw data are not available.
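The spatially-adaptive noise model can be illustrated with a small sketch. The coil-distance noise profile, the parameter values, and the likelihood-weighted Monte Carlo averaging below are illustrative assumptions for exposition, not the paper's actual model:

```python
import numpy as np

def erc_sigma_map(shape, coil_row, base_sigma=5.0, gain=0.1):
    # Hypothetical non-stationary noise model: noise amplified with distance
    # from the endorectal coil (rows farther from `coil_row` get a larger
    # sigma), mimicking the noise left behind by coil intensity correction.
    rows = np.arange(shape[0])[:, None]
    dist = np.abs(rows - coil_row)
    return base_sigma * (1.0 + gain * dist) * np.ones(shape)

def mc_compensate(img, sigma, radius=4, n_samples=64, seed=0):
    # Monte Carlo sketch: for each pixel, draw random neighbour intensities
    # as candidate clean values, weight each draw by its likelihood under
    # the spatially-adaptive Gaussian noise model, and take the weighted mean.
    rng = np.random.default_rng(seed)
    h, w = img.shape
    pad = np.pad(img, radius, mode="reflect")
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            di = rng.integers(-radius, radius + 1, n_samples)
            dj = rng.integers(-radius, radius + 1, n_samples)
            samples = pad[i + radius + di, j + radius + dj]
            s = sigma[i, j]                      # local noise level
            wts = np.exp(-0.5 * ((samples - img[i, j]) / s) ** 2)
            out[i, j] = np.sum(wts * samples) / np.sum(wts)
    return out
```

Because the likelihood weights use the local sigma, smoothing adapts automatically to the coil's SNR profile: strongly amplified-noise regions average more aggressively than regions near the coil.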
Adaptive Edge-guided Block-matching and 3D filtering (BM3D) Image Denoising Algorithm
Image denoising is a well-studied field, yet reducing noise in images remains a challenge. The recently proposed Block-matching and 3D filtering (BM3D) algorithm is the current state of the art for denoising images corrupted by additive white Gaussian noise (AWGN). Though BM3D outperforms all existing methods for AWGN denoising, its performance nevertheless degrades as the noise level increases, since it becomes harder to find proper matches for reference blocks among highly corrupted pixel values. It also blurs sharp edges and textures. To overcome these problems, we propose an edge-guided BM3D with selective pixel restoration. At higher noise levels, noisy pixels can be detected from the gray-level statistics of their neighborhoods; we exploit this property to reduce noise as much as possible by applying a pre-filter. We also introduce an edge-guided pixel restoration process in the hard-thresholding step of BM3D to restore the sharpness of edges and textures. Experimental results confirm that the proposed method is competitive and outperforms the state-of-the-art BM3D in all considered subjective and objective quality measures, particularly in preserving edges, textures and image contrast.
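The pre-filtering idea — flagging noisy pixels from the gray-level statistics of their neighborhoods — can be sketched as below. The window size, the deviation threshold k, and the median replacement are illustrative choices, not the paper's exact pre-filter:

```python
import numpy as np

def prefilter_outliers(img, radius=1, k=2.5):
    # Sketch of a neighbourhood-statistics pre-filter: a pixel deviating
    # from its local mean by more than k local standard deviations is
    # treated as heavily corrupted and replaced by the local median.
    # (radius and k are illustrative parameters.)
    pad = np.pad(img, radius, mode="reflect")
    h, w = img.shape
    win = 2 * radius + 1
    out = img.astype(float).copy()
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + win, j:j + win]
            mu, sd = patch.mean(), patch.std()
            if abs(img[i, j] - mu) > k * sd:
                out[i, j] = np.median(patch)
    return out
```

Running such a pre-filter before block matching removes the gross outliers that would otherwise corrupt the distance computation between reference and candidate blocks.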
Depth Superresolution using Motion Adaptive Regularization
Spatial resolution of depth sensors is often significantly lower compared to
that of conventional optical cameras. Recent work has explored the idea of
improving the resolution of depth using higher resolution intensity as a side
information. In this paper, we demonstrate that further incorporating temporal
information in videos can significantly improve the results. In particular, we
propose a novel approach that improves depth resolution, exploiting the
space-time redundancy in the depth and intensity using motion-adaptive low-rank
regularization. Experiments confirm that the proposed approach substantially
improves the quality of the estimated high-resolution depth. Our approach can
be a first component in systems using vision techniques that rely on
high-resolution depth information.
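The low-rank regularizer at the heart of such motion-adaptive schemes is commonly realized by singular-value thresholding applied to stacks of motion-aligned patches. A minimal sketch of that step — the stacking convention and threshold are assumptions, not the paper's exact formulation:

```python
import numpy as np

def svt(patch_stack, tau):
    # Singular-value thresholding: each row is a vectorised depth patch
    # collected along a motion trajectory. Soft-thresholding the singular
    # values enforces the space-time redundancy (low-rank) assumption.
    U, s, Vt = np.linalg.svd(patch_stack, full_matrices=False)
    s = np.maximum(s - tau, 0.0)   # soft-threshold the spectrum
    return (U * s) @ Vt
```

In an iterative reconstruction, this proximal step alternates with a data-fidelity update against the low-resolution depth and the high-resolution intensity guide.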
Weighted Mean Curvature
In image processing tasks, spatial priors are essential for robust
computations, regularization, algorithmic design and Bayesian inference. In
this paper, we introduce weighted mean curvature (WMC) as a novel image prior
and present an efficient computation scheme for its discretization in practical
image processing applications. We first demonstrate the favorable properties of
WMC, such as sampling invariance, scale invariance, and contrast invariance
under a Gaussian noise model, and we show the relation of WMC to area
regularization. We further propose an efficient computation scheme for
discretized WMC, which is demonstrated herein to process over 33.2
giga-pixels/second on a GPU. The scheme lends itself to a convolutional neural
network representation. Finally, WMC is evaluated on synthetic and real images,
showing its quantitative superiority to total-variation and mean-curvature
regularization.
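A generic finite-difference discretization of gradient-weighted mean curvature illustrates the quantity involved; this is a standard central-difference scheme for |∇u| times the level-set curvature div(∇u/|∇u|), not necessarily the paper's efficient computation scheme:

```python
import numpy as np

def weighted_mean_curvature(u, eps=1e-8):
    # Central-difference sketch of gradient-weighted mean curvature:
    #   WMC ~ (u_xx u_y^2 - 2 u_x u_y u_xy + u_yy u_x^2) / (u_x^2 + u_y^2),
    # i.e. |grad u| times the level-set curvature. Boundaries wrap (np.roll);
    # a practical scheme would use proper boundary handling.
    ux = 0.5 * (np.roll(u, -1, 1) - np.roll(u, 1, 1))
    uy = 0.5 * (np.roll(u, -1, 0) - np.roll(u, 1, 0))
    uxx = np.roll(u, -1, 1) - 2.0 * u + np.roll(u, 1, 1)
    uyy = np.roll(u, -1, 0) - 2.0 * u + np.roll(u, 1, 0)
    uxy = 0.25 * (np.roll(np.roll(u, -1, 0), -1, 1)
                  - np.roll(np.roll(u, -1, 0), 1, 1)
                  - np.roll(np.roll(u, 1, 0), -1, 1)
                  + np.roll(np.roll(u, 1, 0), 1, 1))
    return (uxx * uy**2 - 2.0 * ux * uy * uxy + uyy * ux**2) / (ux**2 + uy**2 + eps)
```

Because every term is a small fixed stencil, the whole computation maps naturally onto convolutions, which is consistent with the claimed GPU throughput and CNN representation.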
Geometric nonlinear diffusion filter and its application to X-ray imaging
Background: Denoising with edge preservation is very important in digital X-ray imaging, since it may allow us to reduce the X-ray dose delivered to human subjects without noticeable degradation of image quality. In denoising filter design for X-ray imaging, edge preservation as well as noise reduction is of great concern, so that the detailed spatial information needed for accurate diagnosis is not lost. In addition, fast computation is important since digital X-ray images are mostly large matrices. Methods: We have developed a new denoising filter based on the nonlinear diffusion filter model. Rather than employing four directional gradients around the pixel of interest, we use geometric parameters derived from the local pixel intensity distribution to calculate the diffusion coefficients in the horizontal and vertical directions. We have tested the filter performance, including edge preservation and noise reduction, using low-dose digital radiography and micro-CT images. Results: The proposed denoising filter shows edge preservation and noise reduction similar to those of nonlinear anisotropic diffusion filters (ADFs), namely the Perona-Malik ADF and Weickert's ADF, while the computation time is greatly reduced. Conclusions: We expect the proposed denoising filter to be useful for fast noise reduction, particularly in low-dose X-ray imaging.
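For reference, the classic four-directional-gradient scheme that the proposed filter replaces is Perona-Malik diffusion; a minimal sketch (parameter values are illustrative, and boundaries wrap via np.roll):

```python
import numpy as np

def perona_malik(u, n_iter=20, kappa=15.0, dt=0.2):
    # Classic Perona-Malik anisotropic diffusion, one of the baselines the
    # abstract compares against. The conductance c(g) = exp(-(g/kappa)^2)
    # shrinks the diffusion flux across strong gradients, preserving edges.
    u = u.astype(float).copy()
    for _ in range(n_iter):
        dn = np.roll(u, 1, 0) - u    # north difference
        ds = np.roll(u, -1, 0) - u   # south
        de = np.roll(u, -1, 1) - u   # east
        dw = np.roll(u, 1, 1) - u    # west
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

The proposed geometric filter replaces the four per-direction conductances with coefficients derived from the local intensity distribution, computed only in the horizontal and vertical directions, which is where the reported speedup comes from.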
Real-Time Quantum Noise Suppression In Very Low-Dose Fluoroscopy
Fluoroscopy provides real-time X-ray screening of a patient's organs and of various radiopaque objects, which makes it an invaluable tool for many interventional procedures. For this reason, the number of fluoroscopy screenings has grown steadily in recent decades. This trend, however, has raised concerns about the increase in X-ray exposure, as even low-dose procedures have turned out to be less safe than once assumed, demanding rigorous monitoring of the X-ray dose delivered to patients and to the exposed medical staff. In this context, very low-dose protocols would be extremely beneficial. Nonetheless, they produce very noisy images, which must be suitably denoised in real time to support interventional procedures. Simple smoothing filters tend to produce blurring that undermines the visibility of object boundaries, which is essential for the human eye to understand the imaged scene. Therefore, some denoising strategies embed criteria based on noise statistics to improve their performance. This dissertation focuses on the Noise Variance Conditioned Average (NVCA) algorithm, which exploits a priori knowledge of quantum noise statistics to reduce noise while preserving edges; it has already outperformed many state-of-the-art methods in denoising images corrupted by quantum noise, while also being suitable for real-time hardware implementation. Several issues that currently limit the use of very low-dose protocols in clinical practice are addressed, e.g. the evaluation of denoising algorithms under very low-dose conditions, the optimization of tuning parameters for the best denoising performance, the design of an index that properly measures X-ray image quality, and the assessment of an a priori noise characterization approach that accounts for time-varying noise statistics due to changes in X-ray tube settings.
An improved NVCA algorithm is also presented, along with its real-time hardware implementation on a Field-Programmable Gate Array (FPGA). The novel algorithm provides more effective noise reduction, also for low-contrast moving objects, thus relaxing the trade-off between noise reduction and edge preservation, while further reducing hardware complexity, which allows low usage of logic resources even on small FPGA platforms. The results presented in this dissertation provide the means for future studies aimed at embedding the NVCA algorithm in commercial fluoroscopic devices to accomplish real-time denoising of very low-dose X-ray images, which would foster their adoption in clinical practice.
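The conditioned-average idea behind NVCA can be sketched compactly. The window size, the k·sigma acceptance test, and the Gaussian stand-in for the Poisson quantum-noise statistics below are illustrative assumptions, not the dissertation's algorithm or its hardware formulation:

```python
import numpy as np

def nvca_sketch(frame, noise_sigma, radius=1, k=2.0):
    # Sketch of a noise-variance-conditioned average: average only those
    # neighbours whose difference from the centre pixel is explainable by
    # the known noise level (|diff| <= k*sigma). Pixels across an edge fail
    # the test and are excluded, so edges are preserved while noise in
    # homogeneous regions is averaged out.
    pad = np.pad(frame.astype(float), radius, mode="reflect")
    h, w = frame.shape
    win = 2 * radius + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + win, j:j + win]
            mask = np.abs(patch - frame[i, j]) <= k * noise_sigma
            out[i, j] = patch[mask].mean()   # centre always passes the test
    return out
```

Because the per-pixel work is a fixed-size window with a compare-and-accumulate structure, this style of filter maps well onto a pipelined FPGA datapath, which is what makes real-time operation feasible.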