
    Full Motion and Flow Field Recovery from Echo Doppler Data

    We present a new computational method for reconstructing a vector velocity field from scattered, pulsed-wave ultrasound Doppler data. The main difficulty is that the Doppler measurements are incomplete: they capture only the velocity component along the beam direction. We therefore propose to combine measurements from different beam directions. However, this is not yet sufficient to make the problem well posed, because 1) the angle between the directions is typically small and 2) the data is noisy and nonuniformly sampled. We propose to solve this reconstruction problem in the continuous domain using regularization. The reconstruction is formulated as the minimizer of a cost that is a weighted sum of two terms: 1) the sum of squared differences between the Doppler data and the projected velocities, and 2) a quadratic regularization functional that imposes smoothness on the velocity field. We express the solution of this minimization problem in a B-spline basis, obtaining a sparse system of equations that can be solved efficiently. Using synthetic phantom data, we demonstrate the importance of tuning the regularization according to a priori knowledge about the physical properties of the motion. Next, we validate our method using real phantom data for which the ground truth is known. We then present reconstruction results obtained from clinical data of 1) blood flow in the carotid bifurcation and 2) cardiac wall motion.
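
    A minimal sketch of the regularized reconstruction idea, using a small regular grid and two beam directions instead of the paper's B-spline formulation; the grid size, beam angles, noise level, and regularization weight below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: beam-projected Doppler samples form a sparse linear system,
# solved with a quadratic (smoothness) penalty on the velocity components.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 32                                    # grid is n x n; unknowns are vx, vy at each node
N = n * n
yy, xx = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")

# Synthetic ground-truth field (a smooth rotational flow)
vx_true, vy_true = -yy, xx

# Forward model: each Doppler sample is the velocity projected onto a beam direction
angles = np.deg2rad([75.0, 105.0])        # two beam steering directions, 30 degrees apart
rows, cols, vals, data = [], [], [], []
for k, th in enumerate(angles):
    c, s = np.cos(th), np.sin(th)
    for i in range(N):
        rows += [k * N + i, k * N + i]
        cols += [i, N + i]
        vals += [c, s]
    proj = c * vx_true.ravel() + s * vy_true.ravel()
    data.append(proj + 0.05 * np.random.randn(N))    # noisy beam-aligned samples
A = sp.csr_matrix((vals, (rows, cols)), shape=(2 * N, 2 * N))
d = np.concatenate(data)

# Quadratic smoothness regularizer: finite-difference gradients of vx and vy
D = sp.diags([-1.0, 1.0], [0, 1], shape=(n - 1, n))
Dx, Dy = sp.kron(sp.eye(n), D), sp.kron(D, sp.eye(n))
G = sp.block_diag([sp.vstack([Dx, Dy])] * 2)

lam = 0.1                                 # regularization weight
w = spla.spsolve((A.T @ A + lam * G.T @ G).tocsc(), A.T @ d)
vx_rec, vy_rec = w[:N].reshape(n, n), w[N:].reshape(n, n)
print("RMS error:", np.sqrt(np.mean((vx_rec - vx_true) ** 2 + (vy_rec - vy_true) ** 2)))
```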

    A Survey of Signal Processing Problems and Tools in Holographic Three-Dimensional Television

    Diffraction and holography are fertile areas for the application of signal theory and processing. Recent work on 3DTV displays has posed particularly challenging signal processing problems. Various procedures to compute Rayleigh-Sommerfeld, Fresnel, and Fraunhofer diffraction exist in the literature. Diffraction between parallel planes and tilted planes can be computed efficiently. Discretization and quantization of diffraction fields yield interesting theoretical and practical results, and allow efficient schemes compared to commonly used Nyquist sampling. The literature on computer-generated holography provides a good resource for holographic 3DTV related issues. Fast algorithms to compute Fourier, Walsh-Hadamard, fractional Fourier, linear canonical, Fresnel, and wavelet transforms, as well as optimization-based techniques such as best orthogonal basis, matching pursuit, and basis pursuit, are especially relevant signal processing techniques for wave propagation, diffraction, holography, and related problems. Atomic decompositions, multiresolution techniques, Gabor functions, and Wigner distributions are among the signal processing techniques which have been or may be applied to problems in optics. Research aimed at solving such problems at the intersection of wave optics and signal processing promises not only to facilitate the development of 3DTV systems, but also to contribute to fundamental advances in optics and signal processing theory. © 2007 IEEE.
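
    As a concrete example of one such computation, the sketch below propagates a field between parallel planes with the FFT-based Fresnel transfer-function method; the wavelength, pixel pitch, and propagation distance are assumed example values.

```python
# Hedged sketch: Fresnel diffraction between parallel planes via the transfer
# function H(fx, fy) = exp(j k z) exp(-j pi lambda z (fx^2 + fy^2)).
import numpy as np

wavelength = 633e-9                 # metres (He-Ne line, assumed)
pitch = 8e-6                        # sampling interval on both planes, metres
z = 0.05                            # propagation distance, metres
n = 512                             # samples per side

# Source plane: a square aperture illuminated by a unit-amplitude plane wave
x = (np.arange(n) - n // 2) * pitch
X, Y = np.meshgrid(x, x)
u0 = ((np.abs(X) < 0.4e-3) & (np.abs(Y) < 0.4e-3)).astype(complex)

# Fresnel transfer function on the FFT frequency grid
fx = np.fft.fftfreq(n, d=pitch)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(1j * 2 * np.pi / wavelength * z) * \
    np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))

# Propagate: multiply the angular spectrum of the field by H and transform back
u1 = np.fft.ifft2(np.fft.fft2(u0) * H)
print("peak diffracted intensity:", (np.abs(u1) ** 2).max())
```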

    Image segmentation and reconstruction of 3D surfaces from carotid ultrasound images

    Doctoral thesis. Electrical and Computer Engineering. Faculdade de Engenharia, Universidade do Porto. 200

    Source Detection and Image Reconstruction with Position-Sensitive Gamma-Ray Detectors.

    Gamma-ray detectors have important applications in security, medicine, and nuclear non-proliferation. This thesis investigates the use of regularization to improve image reconstruction and efficient methods for predicting source detection performance with position-sensitive gamma-ray detectors. Position-sensitive detectors have the ability to measure the spatial gamma emission density around the detector. An image of the spatial gamma emission density where a spatially small source is present will be sparse in the canonical basis, meaning that the emission density is zero for most directions but large for a small number of directions. This work uses regularization to enforce sparsity in the reconstructed image, and proposes a regularizer that effectively enforces sparsity in the reconstructed images. This work also proposes a method for predicting detection performance. Position-sensitive gamma-ray imaging systems are complex and difficult to model both accurately and efficiently. This work investigates the asymptotic properties of tests based on maximum likelihood (ML) estimates under model mismatch, meaning that the statistical model used for detection differs from the true distribution. We propose general expressions for the asymptotic distribution of likelihood-based test statistics when the number of measurements is Poisson distributed. We use the general expressions to derive expressions specific to gamma-ray source detection that one can evaluate using a modest amount of data from a real system or Monte Carlo simulation. We show empirically with simulated data that the proposed expressions yield more accurate detection performance predictions than expressions that ignore model mismatch. We also use data recorded with a 3D position-sensitive CdZnTe system with a Cs-137 source in a natural background to show that the proposed method is reasonably accurate with real data. These expressions require less data and computation than conventional empirical methods. To quantify the benefit of position sensitivity, we state and prove a theorem affirming that, asymptotically as scan time becomes large, position sensitivity increases the area under the receiver operating characteristic curve (AUC) when the background intensity is known, detector sensitivity is spatially uniform, and the system model is correctly specified.
    Ph.D. Electrical Engineering: Systems. University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/91441/1/danling_1.pd
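
    A toy illustration of the likelihood-based detection setting (not the dissertation's detector model): a generalized likelihood ratio test on binned Poisson counts, with assumed background rates and an assumed source response per bin.

```python
# Hedged sketch: GLRT for a gamma-ray source in Poisson-distributed counts;
# only the source strength alpha is estimated by maximum likelihood.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(0)
n_bins = 64
b = rng.uniform(5.0, 15.0, n_bins)                          # expected background counts per bin
s = np.exp(-0.5 * ((np.arange(n_bins) - 20) / 3.0) ** 2)    # unit-strength source response

def poisson_loglik(alpha, y):
    lam = b + alpha * s
    return np.sum(y * np.log(lam) - lam - gammaln(y + 1))

def glrt_statistic(y):
    # H0: alpha = 0 (background only); H1: alpha >= 0 estimated by maximum likelihood
    res = minimize_scalar(lambda a: -poisson_loglik(a, y),
                          bounds=(0.0, 1e3), method="bounded")
    return 2.0 * (poisson_loglik(res.x, y) - poisson_loglik(0.0, y))

# One background-only measurement and one containing a source of strength 30
y_h0 = rng.poisson(b)
y_h1 = rng.poisson(b + 30.0 * s)
print("GLRT statistic under H0:", glrt_statistic(y_h0))
print("GLRT statistic under H1:", glrt_statistic(y_h1))
```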

    Joint Image and Depth Estimation With Mask-Based Lensless Cameras

    Mask-based lensless cameras replace the lens of a conventional camera with a custom mask. These cameras can potentially be very thin and even flexible. Recently, it has been demonstrated that such mask-based cameras can recover light intensity and depth information of a scene. Existing depth recovery algorithms either assume that the scene consists of a small number of depth planes or solve a sparse recovery problem over a large 3D volume. Both approaches fail to recover scenes with large depth variations. In this paper, we propose a new approach for depth estimation based on an alternating gradient descent algorithm that jointly estimates a continuous depth map and the light distribution of the unknown scene from its lensless measurements. We present simulation results on image and depth reconstruction for a variety of 3D test scenes. A comparison between the proposed algorithm and other methods shows that our algorithm is more robust for natural scenes with a large range of depths. We built a prototype lensless camera and present experimental results for the reconstruction of intensity and depth maps of different real objects.
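
    The alternating scheme can be illustrated on a toy 1-D analogue (not the paper's lensless forward model): gradient steps on the signal alternate with gradient steps on a scalar depth parameter that controls the width of a Gaussian blur kernel. Step sizes and the kernel shape are illustrative assumptions.

```python
# Hedged sketch: alternating gradient descent over a signal x and a depth scalar.
import numpy as np

def kernel(depth, radius=10):
    t = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (t / depth) ** 2)
    return k / k.sum()

def forward(x, depth):
    return np.convolve(x, kernel(depth), mode="same")

def loss(x, depth, y):
    r = forward(x, depth) - y
    return 0.5 * np.dot(r, r)

rng = np.random.default_rng(1)
x_true = np.zeros(128)
x_true[40:60], x_true[80:90] = 1.0, 0.5
depth_true = 3.0
y = forward(x_true, depth_true) + 0.01 * rng.standard_normal(128)

x, depth = np.zeros(128), 1.5                         # initial guesses
for _ in range(400):
    # Signal step: analytic gradient of the data term w.r.t. x (adjoint = flipped kernel)
    r = forward(x, depth) - y
    x -= 0.5 * np.convolve(r, kernel(depth)[::-1], mode="same")
    # Depth step: one-sided finite-difference gradient w.r.t. the scalar depth
    eps = 1e-3
    depth -= 0.05 * (loss(x, depth + eps, y) - loss(x, depth, y)) / eps
print("estimated depth:", round(depth, 3), "(true value:", depth_true, ")")
```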

    Super-resolution imaging of fluorescent dipoles via polarized structured illumination microscopy

    © 2019, The Author(s). Fluorescence polarization microscopy images both the intensity and orientation of fluorescent dipoles and plays a vital role in studying molecular structures and dynamics of bio-complexes. However, it remains difficult for current techniques to resolve dipole assemblies on subcellular structures, and their dynamics in living cells, at the super-resolution level. Here we report polarized structured illumination microscopy (pSIM), which achieves super-resolution imaging of dipoles by interpreting the dipoles in spatio-angular hyperspace. We demonstrate the application of pSIM on a series of biological filamentous systems, such as cytoskeleton networks and λ-DNA, and report the dynamics of short actin sliding across a myosin-coated surface. Further, pSIM reveals the side-by-side organization of the actin ring structures in the membrane-associated periodic skeleton of hippocampal neurons and images the dipole dynamics of green fluorescent protein-labeled microtubules in live U2OS cells. pSIM applies directly to a large variety of commercial and home-built SIM systems with various imaging modalities.

    Current Approaches for Image Fusion of Histological Data with Computed Tomography and Magnetic Resonance Imaging

    Classical analysis of biological samples requires destroying the tissue's integrity by cutting or grinding it down to thin slices for (immuno)histochemical staining and microscopic analysis. Despite the high specificity encoded in the stained 2D section of the whole tissue, the structural information, especially 3D information, is limited. Computed tomography (CT) or magnetic resonance imaging (MRI) scans performed prior to sectioning, in combination with image registration algorithms, provide an opportunity to regain access to morphological characteristics as well as to relate histological findings to the 3D structure of the local tissue environment. This review provides a summary of prevalent literature addressing the problem of multimodal coregistration of hard and soft tissue in microscopy and tomography. Grouped according to dimensional complexity, including image-to-volume (2D ⟶ 3D), image-to-image (2D ⟶ 2D), and volume-to-volume (3D ⟶ 3D) registration, selected currently applied approaches are examined by comparing their accuracy with respect to the limiting resolution of the tomography. Correlation of multimodal imaging could become a useful tool enabling precise histological diagnostics and a priori planning of tissue extraction such as biopsies.
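
    A minimal sketch of the simplest (2D ⟶ 2D) case, assuming synthetic images and a plain sum-of-squared-differences cost; this is an illustration of intensity-based rigid registration, not a method taken from the review.

```python
# Hedged sketch: rigid 2D-2D registration by minimizing SSD over rotation + translation.
import numpy as np
from scipy import ndimage, optimize

def apply_rigid(img, angle_deg, tx, ty):
    # Rotate img about its centre by angle_deg and translate by (tx, ty) pixels
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    c = (np.array(img.shape) - 1) / 2.0
    return ndimage.affine_transform(img, R, offset=c - R @ c + np.array([ty, tx]), order=1)

def ssd(params, moving, fixed):
    diff = apply_rigid(moving, *params) - fixed
    return float(np.sum(diff ** 2))

# Synthetic example: the "moving" slice is a rotated, shifted copy of the "fixed" one
fixed = np.zeros((128, 128))
fixed[40:90, 50:100] = 1.0
fixed = ndimage.gaussian_filter(fixed, 2.0)
moving = apply_rigid(fixed, 8.0, 5.0, -3.0)

res = optimize.minimize(ssd, x0=[0.0, 0.0, 0.0], args=(moving, fixed), method="Powell")
print("recovered (angle, tx, ty):", res.x)   # approximately inverts the applied (8, 5, -3)
```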

    Light field image processing: an overview

    Light field imaging has emerged as a technology that allows capturing richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, and material classification. On the other hand, the high dimensionality of light fields also brings new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
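
    As a small example of one capability mentioned above, the following sketch performs post-capture refocusing by the classic shift-and-sum of sub-aperture views; the (u, v, y, x) light field layout and the synthetic data are assumptions made for the example.

```python
# Hedged sketch: shift-and-sum refocusing of a 4D light field.
import numpy as np
from scipy import ndimage

def refocus(lf, alpha):
    # lf: light field of shape (U, V, H, W), one sub-aperture image per (u, v) view.
    # Each view is shifted in proportion to its angular offset from the central
    # view, then all views are averaged; alpha selects the refocused depth plane.
    U, V, H, W = lf.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            out += ndimage.shift(lf[u, v], (alpha * (u - cu), alpha * (v - cv)),
                                 order=1, mode="nearest")
    return out / (U * V)

# Tiny synthetic light field: a square with a disparity of one pixel per view step,
# i.e. an object lying off the nominal focal plane
U = V = 5
lf = np.zeros((U, V, 64, 64))
for u in range(U):
    for v in range(V):
        lf[u, v, 20 + u:40 + u, 20 + v:40 + v] = 1.0

sharp = refocus(lf, alpha=-1.0)    # shifts cancel the disparity: square back in focus
blurry = refocus(lf, alpha=0.0)    # plain average of views: square edges are smeared
print("in-focus edge contrast  :", np.abs(np.diff(sharp, axis=0)).max())
print("defocused edge contrast :", np.abs(np.diff(blurry, axis=0)).max())
```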

    Doctor of Philosophy

    Dissertation. Microwave/millimeter-wave imaging systems have become ubiquitous and have found applications in areas such as astronomy, biomedical diagnostics, remote sensing, and security surveillance. These areas have so far relied on conventional imaging devices (empl