1,215 research outputs found

    Locally adaptive image denoising by a statistical multiresolution criterion

    Full text link
    We demonstrate how one can choose the smoothing parameter in image denoising by a statistical multiresolution criterion, both globally and locally. Using inhomogeneous diffusion and total variation regularization as examples of localized regularization schemes, we present an efficient method for locally adaptive image denoising. As expected, the smoothing parameter serves as an edge detector in this framework. Numerical examples illustrate the usefulness of our approach. We also present an application in confocal microscopy.
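
    The key idea above is that the smoothing parameter varies across the image and acts as an edge detector. The Python/NumPy sketch below illustrates that idea in a generic way; it is not the paper's statistical multiresolution criterion, and the gradient-based weight heuristic as well as the function names are illustrative assumptions.

    import numpy as np

    def local_weights(img, base_lam=0.15, edge_scale=10.0):
        """Heuristic per-pixel smoothing weights: small near strong gradients (edges)."""
        gy, gx = np.gradient(img)
        return base_lam / (1.0 + edge_scale * np.hypot(gx, gy))

    def adaptive_tv_denoise(noisy, n_iter=200, step=0.1, eps=1e-8):
        """Gradient descent on 0.5*||u - f||^2 + lam(x)*|grad u| (smoothed TV)."""
        u = noisy.copy()
        lam = local_weights(noisy)
        for _ in range(n_iter):
            gy, gx = np.gradient(u)
            mag = np.sqrt(gx**2 + gy**2 + eps)
            # divergence of the normalized gradient field approximates the TV subgradient
            div = np.gradient(gx / mag, axis=1) + np.gradient(gy / mag, axis=0)
            u -= step * ((u - noisy) - lam * div)
        return u

    # Toy usage: a noisy step edge is smoothed in flat regions but kept sharp at the edge.
    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
    denoised = adaptive_tv_denoise(clean + 0.1 * rng.standard_normal(clean.shape))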

    Research Status and Prospect for CT Imaging

    Get PDF
    Computed tomography (CT) is a valuable imaging method that plays an important role in clinical diagnosis. As attention to radiation dose has grown, decreasing CT radiation dose without degrading image quality has become an active direction in medical imaging research. This chapter reviews the state of low-dose CT research from the following aspects: low-dose scan implementation, reconstruction methods, and image processing methods. Other technologies related to the development of CT, such as automatic tube current modulation, rapid peak kilovoltage (kVp) switching, dual-source CT, and nano-CT, are also summarized. Finally, future research prospects are discussed and analyzed.

    Improving Image Reconstruction for Digital Breast Tomosynthesis

    Full text link
    Digital breast tomosynthesis (DBT) has been developed to reduce the issue of overlapping tissue in conventional 2-D mammography for breast cancer screening and diagnosis. In the DBT procedure, the patient’s breast is compressed with a paddle and a sequence of x-ray projections is taken within a small angular range. Tomographic reconstruction algorithms are then applied to these projections, generating tomosynthesized image slices of the breast, such that radiologists can read the breast slice by slice. Studies have shown that DBT can reduce both false-negative diagnoses of breast cancer and false-positive recalls compared to mammography alone. This dissertation focuses on improving image quality for DBT reconstruction. Chapter I briefly introduces the concept of DBT and the motivation for my study. Chapter II covers the background of my research, including the concept of image reconstruction, the geometry of our experimental DBT system, and figures of merit for image quality. Chapter III introduces our study of the segmented separable footprint (SG) projector. By taking into account the finite size of the detector elements, the SG projector improves the accuracy of forward projections in iterative image reconstruction. Due to more efficient memory access, the SG projector is also faster than the traditional ray-tracing (RT) projector. We applied the SG projector to regular and subpixel reconstructions and demonstrated its effectiveness. Chapter IV introduces a new DBT reconstruction method with detector blur and correlated noise modeling, called the SQS-DBCN algorithm. The SQS-DBCN algorithm is able to significantly enhance microcalcifications (MC) in DBT while preserving the appearance of the soft tissue and mass margin. Comparisons between the SQS-DBCN algorithm and several modified versions of it indicate the importance of modeling different components of the system physics at the same time. Chapter V investigates truncated projection artifact (TPA) removal algorithms. Among the three algorithms we proposed, the pre-reconstruction-based projection view (PV) extrapolation method provides the best performance. Possible improvements of the other two TPA removal algorithms are discussed. Chapter VI examines the effect of source blur on DBT reconstruction. Our analytical calculation demonstrates that the point spread function (PSF) of source blur is highly shift-variant. We used CatSim to simulate digital phantoms. Analysis of the reconstructed images demonstrates that a typical finite-sized focal spot (~ 0.3 mm) will not affect the image quality if the x-ray tube is stationary during the data acquisition. For DBT systems with continuous-motion data acquisition, the motion of the x-ray tube is the main cause of the effective source blur and will cause a loss of object contrast. Therefore, modeling the source blur for these DBT systems could potentially improve the reconstructed image quality. The final chapter of this dissertation discusses a few future studies that are inspired by my PhD research.
    PhD dissertation, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/144059/1/jiabei_1.pd
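
    As context for the iterative reconstruction framework the dissertation builds on, the sketch below solves a toy least-squares reconstruction, min_x ||A x - y||^2, by projected gradient descent with a hypothetical system matrix A. It is not the SQS-DBCN algorithm or the SG projector; all names and sizes are illustrative.

    import numpy as np

    def reconstruct(A, y, n_iter=500):
        """Projected gradient descent: x <- max(0, x - t * A^T (A x - y))."""
        x = np.zeros(A.shape[1])
        t = 1.0 / (np.linalg.norm(A, 2) ** 2)   # step bounded by the largest singular value
        for _ in range(n_iter):
            x -= t * (A.T @ (A @ x - y))
            np.maximum(x, 0.0, out=x)           # attenuation coefficients are nonnegative
        return x

    # Toy usage: 40 simulated "projection rays" through 16 voxels.
    rng = np.random.default_rng(1)
    A = rng.random((40, 16))
    x_true = rng.random(16)
    y = A @ x_true + 0.01 * rng.standard_normal(40)
    x_hat = reconstruct(A, y)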

    GPU-based Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization

    Full text link
    High radiation dose in CT scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with Total Variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, TV regularization may lead to over-smoothed images and loss of edge information. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term defined by the x-ray projections. The edge-preserving TV term preferentially performs smoothing only on the non-edge parts of the image in order to avoid over-smoothing, which is realized by introducing a penalty weight to the original total variation norm. Our iterative algorithm is implemented on a GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison purposes. The experimental results illustrate that both the TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing streaking artifacts and image noise in the low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it preserves more information about fine structures and therefore maintains acceptable spatial resolution.
    Comment: 21 pages, 6 figures, 2 tables
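
    As a rough sketch of the energy described above (our notation, not taken from the paper), the reconstruction can be written as minimizing a penalty-weighted TV term plus a data-fidelity term, where the per-pixel weights w_j are chosen small near detected edges so that edges are smoothed less:

    % Hedged sketch of an edge-preserving TV reconstruction energy (illustrative notation):
    %   w_j -- per-pixel penalty weight, small near detected edges
    %   A   -- system (projection) matrix, y -- measured low-mAs projection data
    E(x) = \sum_j w_j \,\bigl|(\nabla x)_j\bigr| + \frac{\mu}{2}\,\lVert A x - y \rVert_2^2,
    \qquad \hat{x} = \arg\min_{x \ge 0} E(x)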

    Revealing hidden scenes by photon-efficient occlusion-based opportunistic active imaging

    Full text link
    The ability to see around corners, i.e., recover details of a hidden scene from its reflections in the surrounding environment, is of considerable interest in a wide range of applications. However, the diffuse nature of light reflected from typical surfaces leads to mixing of spatial information in the collected light, precluding useful scene reconstruction. Here, we employ a computational imaging technique that opportunistically exploits the presence of occluding objects, which obstruct probe-light propagation in the hidden scene, to undo the mixing and greatly improve scene recovery. Importantly, our technique obviates the need for the ultrafast time-of-flight measurements employed by most previous approaches to hidden-scene imaging. Moreover, it does so in a photon-efficient manner based on an accurate forward model and a computational algorithm that, together, respect the physics of three-bounce light propagation and single-photon detection. Using our methodology, we demonstrate reconstruction of hidden-surface reflectivity patterns in a meter-scale environment from non-time-resolved measurements. Ultimately, our technique represents an instance of a rich and promising new imaging modality with important potential implications for imaging science.
    Comment: Related theory in arXiv:1711.0629
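
    To make the photon-efficient inversion concrete, the sketch below assumes a generic linear photon-count model, y ~ Poisson(A f), and recovers f with multiplicative (EM / Richardson-Lucy-style) updates. It only illustrates the kind of physics-respecting inverse problem described above, not the authors' forward model or algorithm; the matrix A and all names are toy placeholders.

    import numpy as np

    def poisson_mle(A, y, n_iter=200, eps=1e-12):
        """Multiplicative (EM / Richardson-Lucy-style) updates for y ~ Poisson(A f), f >= 0."""
        f = np.ones(A.shape[1])
        norm = A.sum(axis=0) + eps          # column sums normalize the update
        for _ in range(n_iter):
            ratio = y / (A @ f + eps)       # observed vs. predicted photon counts
            f *= (A.T @ ratio) / norm
        return f

    # Toy usage: recover a 25-element hidden reflectivity vector from photon counts.
    rng = np.random.default_rng(2)
    A = 40.0 * rng.random((200, 25))        # toy occluder-induced light-transport matrix
    f_true = rng.random(25)
    y = rng.poisson(A @ f_true)             # simulated single-photon detections
    f_hat = poisson_mle(A, y)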

    Traction force microscopy with optimized regularization and automated Bayesian parameter selection for comparing cells

    Full text link
    Adherent cells exert traction forces onto their environment, which allows them to migrate, to maintain tissue integrity, and to form complex multicellular structures. This traction can be measured in a perturbation-free manner with traction force microscopy (TFM). In TFM, traction is usually calculated via the solution of a linear system, which is complicated by undersampled input data, acquisition noise, and large condition numbers for some methods. Therefore, standard TFM algorithms either employ data filtering or regularization. However, these approaches require a manual selection of filter or regularization parameters and consequently exhibit a substantial degree of subjectivity. This shortcoming is particularly serious when cells in different conditions are to be compared, because optimal noise suppression needs to be adapted for every situation, which invariably results in systematic errors. Here, we systematically test the performance of new methods from computer vision and Bayesian inference for solving the inverse problem in TFM. We compare two classical schemes, L1- and L2-regularization, with three previously untested schemes, namely Elastic Net regularization, Proximal Gradient Lasso, and Proximal Gradient Elastic Net. Overall, we find that Elastic Net regularization, which combines L1 and L2 regularization, outperforms all other methods with regard to accuracy of traction reconstruction. Next, we develop two methods, Bayesian L2 regularization and Advanced Bayesian L2 regularization, for automatic, optimal L2 regularization. Using artificial and experimental data, we show that these methods enable robust reconstruction of traction without requiring a difficult selection of regularization parameters specifically for each data set. Thus, Bayesian methods can mitigate the considerable uncertainty inherent in comparing cellular traction forces.
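
    To make the regularized linear system concrete: traction recovery is typically posed as G f = u, with measured displacements u, unknown traction f, and a Green's-function matrix G. The sketch below shows plain L2 (Tikhonov) regularization with a hand-picked lambda, i.e., exactly the manual parameter choice that the paper's Bayesian L2 methods automate; G, u, and the value of lam are toy placeholders.

    import numpy as np

    def tikhonov_traction(G, u, lam):
        """L2-regularized solve: f = argmin ||G f - u||^2 + lam * ||f||^2."""
        return np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ u)

    # Toy usage: a localized traction patch reconstructed from noisy displacements.
    rng = np.random.default_rng(3)
    G = rng.standard_normal((120, 60))              # toy Green's-function matrix
    f_true = np.zeros(60); f_true[10:20] = 1.0      # localized traction patch
    u = G @ f_true + 0.05 * rng.standard_normal(120)
    f_l2 = tikhonov_traction(G, u, lam=1.0)         # lam chosen by hand here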

    Denoising of Fluorescence Image on the Surface of Quantum Dot/Nanoporous Silicon Biosensors

    Get PDF
    In the process of biological detection with quantum-dot-based porous silicon photonic crystals, the concentration of target organisms can be indirectly measured via the change in the gray value of the fluorescence emitted from the quantum dots in the porous silicon pores before and after the biological reaction on the surface of the device. However, due to the disordered nanostructures in porous silicon and the roughness of the surface, the fluorescence images of the surface contain noise. This paper analyzes the type of noise and its influence on the gray value of fluorescence images. The change in the gray value caused by noise greatly reduces the detection sensitivity. To reduce the influence of noise on the gray value of quantum dot fluorescence images, this paper proposes a denoising method based on gray compression and nonlocal anisotropic diffusion filtering. We used the proposed method to denoise the quantum dot fluorescence image after DNA hybridization in a Bragg-structure porous silicon device. The experimental results show that the sensitivity of digital image detection improved significantly after denoising.
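
    The denoising step above uses nonlocal anisotropic diffusion. The sketch below shows the classic local Perona-Malik anisotropic diffusion instead, as a minimal illustration of edge-preserving diffusion filtering; it is not the paper's nonlocal variant and omits the gray-compression step. Intensities are assumed normalized to [0, 1], and boundaries are handled periodically via np.roll for brevity.

    import numpy as np

    def perona_malik(img, n_iter=30, kappa=0.05, step=0.2):
        """Local Perona-Malik diffusion: smooths flat regions while preserving edges."""
        u = img.astype(float).copy()
        g = lambda d: np.exp(-(d / kappa) ** 2)     # edge-stopping conductance
        for _ in range(n_iter):
            dn = np.roll(u, -1, axis=0) - u         # differences to the four neighbours
            ds = np.roll(u,  1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u,  1, axis=1) - u
            u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u

    # Toy usage on a noisy fluorescence-like image normalized to [0, 1]:
    rng = np.random.default_rng(4)
    noisy = np.clip(0.5 + 0.1 * rng.standard_normal((64, 64)), 0.0, 1.0)
    smoothed = perona_malik(noisy)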