
    A Multi-GPU Programming Library for Real-Time Applications

    We present MGPU, a C++ programming library targeted at single-node multi-GPU systems. Such systems combine disproportionate floating point performance with high data locality and are thus well suited to implement real-time algorithms. We describe the library design, programming interface and implementation details in light of this specific problem domain. The core concepts of this work are a novel kind of container abstraction and MPI-like communication methods for intra-system communication. We further demonstrate how MGPU is used as a framework for porting existing GPU libraries to multi-device architectures. Putting our library to the test, we accelerate an iterative non-linear image reconstruction algorithm for real-time magnetic resonance imaging using multiple GPUs. We achieve a speed-up of about 1.7 using 2 GPUs and reach a final speed-up of 2.1 with 4 GPUs. These promising results lead us to conclude that multi-GPU systems are a viable solution for real-time MRI reconstruction as well as signal-processing applications in general. Comment: 15 pages, 10 figures.
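
    MGPU's C++ interface is not reproduced in this listing; as a rough illustration of the scatter/compute/gather pattern behind the container abstraction and MPI-like intra-system communication the abstract mentions, the Python/CuPy sketch below splits a host array across GPUs and gathers results back. The names `scatter` and `gather`, and the use of CuPy, are assumptions made for illustration and are not part of the MGPU API.

```python
import numpy as np
import cupy as cp  # assumption: CuPy and at least two CUDA devices are available


def scatter(host_array, n_devices):
    """Split a host array row-wise and copy one block to each GPU (MPI-like scatter)."""
    blocks = np.array_split(host_array, n_devices)
    device_blocks = []
    for dev_id, block in enumerate(blocks):
        with cp.cuda.Device(dev_id):
            device_blocks.append(cp.asarray(block))
    return device_blocks


def gather(device_blocks):
    """Copy per-device blocks back to the host and concatenate (MPI-like gather)."""
    return np.concatenate([cp.asnumpy(block) for block in device_blocks])
```

    A real multi-GPU port would keep the data resident on the devices between iterations and exchange only partial results, which is the role of the library's communication primitives.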

    Theoretical and Numerical Approaches to Co-/Sparse Recovery in Discrete Tomography

    We investigate theoretical and numerical results that guarantee the exact reconstruction of piecewise constant images from insufficient projections in Discrete Tomography. This is often the case in non-destructive quality inspection of industrial objects, made of few homogeneous materials, where fast scanning times do not allow for full sampling. As a consequence, this low number of projections presents us with an underdetermined linear system of equations. We restrict the solution space by requiring that solutions (a) possess a sparse image gradient and (b) have constrained pixel values. To that end, we develop a lower bound, using compressed sensing theory, on the number of measurements required to uniquely recover, by convex programming, an image in our constrained setting. We also develop a second bound, in the non-convex setting, whose novelty is to use the number of connected components when bounding the number of linear measurements for unique reconstruction. Having established theoretical lower bounds on the number of required measurements, we then examine several optimization models that enforce sparse gradients or restrict the image domain. We provide a novel convex relaxation that is provably tighter than existing models, assuming the target image to be gradient sparse and integer-valued. Given that the number of connected components in an image is critical for unique reconstruction, we provide an integer program model that restricts the maximum number of connected components in the reconstructed image. When solving the convex models, we view the image domain as a manifold and use tools from differential geometry and optimization on manifolds to develop a first-order multilevel optimization algorithm. The developed multilevel algorithm exhibits fast convergence and enables us to recover images of higher resolution.
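
    As an illustration of the convex setting sketched above (sparse image gradient plus constrained pixel values), the following CVXPY snippet minimizes a total-variation surrogate subject to the underdetermined projection equations and a [0, 1] box constraint. The function name, the isotropic `tv` atom, and the [0, 1] pixel range are assumptions made for this sketch, not the authors' models or code.

```python
import cvxpy as cp


def recover_gradient_sparse(A, b, shape):
    """Recover an image X of the given (rows, cols) shape from underdetermined
    projections A @ vec(X) = b by minimizing a total-variation surrogate for
    gradient sparsity, with pixel values constrained to [0, 1]."""
    X = cp.Variable(shape)
    # A is assumed to act on the column-major vectorization produced by cp.vec.
    constraints = [A @ cp.vec(X) == b, X >= 0, X <= 1]
    problem = cp.Problem(cp.Minimize(cp.tv(X)), constraints)
    problem.solve()
    return X.value
```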

    Discrete Signal Reconstruction by Sum of Absolute Values

    In this letter, we consider the problem of reconstructing an unknown discrete signal taking values in a finite alphabet from incomplete linear measurements. The difficulty of this problem is that the computational complexity of the reconstruction is exponential if tackled directly. To overcome this difficulty, we extend the idea of compressed sensing and propose to solve the problem by minimizing the sum of weighted absolute values. We assume that the probability distribution defined on the alphabet is known, and formulate the reconstruction problem as a linear program. Examples are shown to illustrate that the proposed method is effective. Comment: IEEE Signal Processing Letters (to appear).
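
    A minimal sketch of the approach as described, assuming the weights are taken to be the alphabet probabilities and the measurements are noiseless; the function name and this exact weighting are illustrative rather than the letter's final formulation. CVXPY is used for brevity, and the problem it builds is representable as a linear program.

```python
import cvxpy as cp


def soav_reconstruct(A, y, alphabet, probs):
    """Reconstruct a finite-alphabet signal x from y = A @ x by minimizing a
    sum of weighted absolute values: sum_j p_j * ||x - r_j||_1."""
    x = cp.Variable(A.shape[1])
    objective = cp.Minimize(
        sum(p * cp.norm1(x - r) for r, p in zip(alphabet, probs))
    )
    problem = cp.Problem(objective, [A @ x == y])
    problem.solve()
    return x.value  # in practice, round each entry to the nearest alphabet value
```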

    Image formation in synthetic aperture radio telescopes

    Next-generation radio telescopes will be much larger and more sensitive, have much larger observation bandwidths, and be capable of pointing multiple beams simultaneously. Obtaining the sensitivity, resolution and dynamic range supported by the receivers requires the development of new signal processing techniques for array and atmospheric calibration, as well as new imaging techniques that are both more accurate and more computationally efficient, since data volumes will be much larger. This paper provides a tutorial overview of existing image formation techniques and outlines some of the future directions needed for information extraction from future radio telescopes. We describe the imaging process from the measurement equation to deconvolution, both as a Fourier inversion problem and as an array processing estimation problem. The latter formulation enables the development of more advanced techniques based on state-of-the-art array processing. We demonstrate the techniques on simulated and measured radio telescope data. Comment: 12 pages.
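
    As a concrete, deliberately slow illustration of the Fourier-inversion view of imaging, the sketch below forms a dirty image by directly evaluating the inverse Fourier sum over the measured visibilities; deconvolution of the point-spread function would follow as a separate step. The function name and the gridless direct summation are choices made here for clarity, not the techniques advocated in the paper.

```python
import numpy as np


def dirty_image(u, v, vis, l_grid, m_grid):
    """Direct Fourier inversion of complex visibilities vis measured at baseline
    coordinates (u, v), evaluated on a grid of direction cosines (l, m).
    Cost is O(N_vis * N_pixels); real imagers grid the data and use FFTs."""
    img = np.zeros((len(m_grid), len(l_grid)))
    for i, m in enumerate(m_grid):
        for j, l in enumerate(l_grid):
            img[i, j] = np.real(np.mean(vis * np.exp(2j * np.pi * (u * l + v * m))))
    return img
```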

    Reconstruction of Binary Functions and Shapes from Incomplete Frequency Information

    The characterization of a binary function by partial frequency information is considered. We show that it is possible to reconstruct binary signals from incomplete frequency measurements via the solution of a simple linear optimization problem. We further prove that if a binary function is spatially structured (e.g. a general black-white image or an indicator function of a shape), then it can be recovered from very few low-frequency measurements in general. These results lead to efficient methods for sensing, characterizing and recovering a binary signal or a shape, as well as to other applications such as deconvolution of binary functions blurred by a low-pass filter. Numerical results are provided to demonstrate the theoretical arguments. Comment: IEEE Transactions on Information Theory, 201
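
    A hedged sketch of reconstruction "via the solution of a simple linear optimization problem": match the measured frequency samples (split into real and imaginary parts) over the box relaxation [0, 1]^n and solve the resulting LP with SciPy. The objective vector, the real/imaginary splitting, and the use of `linprog` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog


def recover_binary(F, b, n):
    """Recover a {0, 1}-valued signal x of length n from partial frequency data
    F @ x = b (F and b complex) via an LP over the box relaxation [0, 1]^n."""
    A_eq = np.vstack([F.real, F.imag])
    b_eq = np.concatenate([b.real, b.imag])
    c = np.ones(n)  # any linear objective; the constraints carry the recovery
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * n, method="highs")
    # Exact recovery would make rounding a no-op; it guards against solver tolerance.
    return None if res.x is None else np.round(res.x)
```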

    Optical tomography system using charge-coupled device

    This research presents an application of a Charge-Coupled Device (CCD) linear sensor and a laser diode in an optical tomography system. Optical tomography is a non-invasive and non-intrusive method of capturing a cross-sectional image of multiphase flow. The measurements are based on the final light intensity received by the sensor, and this approach is limited to detecting solid objects only. The aim of this research was to analyse and demonstrate the capability of a laser with a CCD in an optical tomography system for detecting different types of opaque objects in crystal clear water. The image reconstruction algorithm used in this research was the filtered Linear Back Projection algorithm, implemented in LabVIEW. Experiments in detecting solid and transparent objects were conducted, including analysis of rising air bubbles. Based on the results, statistical analysis was performed to verify that the captured data were consistent with the actual object data. The diameter and image of static solid and transparent objects were captured by this system, with 320 image views giving a smaller area error than 160 views. This suggests that a higher number of image views results in a higher-resolution image reconstruction. A moving object's characteristics such as diameter, path and velocity can also be observed. The accuracy of this system in detecting object acceleration was 82%, while the average velocity of rising air bubbles captured was 0.2328 m/s. In conclusion, this research has successfully developed a non-intrusive and non-invasive optical tomography system that can detect static and moving objects in crystal clear water.
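
    The system itself was programmed in LabVIEW; as a language-neutral sketch of the linear back-projection step it describes, the Python snippet below weights each sensor reading by a per-sensor sensitivity map and sums the contributions, with an optional threshold standing in for the "filtered" variant. The array shapes, the thresholding rule, and the function name are assumptions made for illustration.

```python
import numpy as np


def linear_back_projection(sensitivity_maps, measurements, threshold=None):
    """Back-project sensor readings onto the image plane.
    sensitivity_maps: array of shape (n_sensors, height, width)
    measurements:     array of shape (n_sensors,)
    Each reading is weighted by its sensor's sensitivity map and summed."""
    image = np.tensordot(measurements, sensitivity_maps, axes=1)  # sum_k s_k * M_k
    if threshold is not None:
        image = np.where(image >= threshold, image, 0.0)  # crude stand-in for filtering
    return image
```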