
    Cascade Decoders-Based Autoencoders for Image Reconstruction

    Autoencoders are composed of coding and decoding units, and hence hold inherent potential for high-performance data compression and compressed sensing of signals. The main shortcomings of current autoencoders are the following: the research objective is feature representation rather than data reconstruction; the evaluation of data-recovery performance is neglected; and lossless data reconstruction is hard to achieve with pure autoencoders, or indeed with pure deep learning. This paper targets image reconstruction with autoencoders: it employs cascade decoders-based autoencoders, improves the performance of image reconstruction, gradually approaches lossless image recovery, and provides a theoretical and practical basis for autoencoder-based image compression and compressed sensing. The proposed serial decoders-based autoencoders comprise multi-level decoder architectures and the related optimization algorithms. The cascade decoders consist of general decoders, residual decoders, adversarial decoders, and their combinations. Experimental results show that the proposed autoencoders outperform classical autoencoders in image reconstruction.
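A minimal PyTorch sketch of the cascade idea described above: an encoder followed by a first-stage ("general") decoder and a second-stage decoder that refines the first reconstruction, with both stages supervised. The layer sizes, the residual refinement form, and the loss weighting are illustrative assumptions, not the architecture from the paper.

```python
# Sketch of a cascade-decoders autoencoder (illustrative, not the paper's model).
import torch
import torch.nn as nn

class CascadeAE(nn.Module):
    def __init__(self, dim=784, code=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                     nn.Linear(256, code))
        # First-stage ("general") decoder
        self.decoder1 = nn.Sequential(nn.Linear(code, 256), nn.ReLU(),
                                      nn.Linear(256, dim))
        # Second-stage ("residual") decoder refines the first reconstruction
        self.decoder2 = nn.Sequential(nn.Linear(code + dim, 256), nn.ReLU(),
                                      nn.Linear(256, dim))

    def forward(self, x):
        z = self.encoder(x)
        x1 = self.decoder1(z)                                # coarse reconstruction
        x2 = x1 + self.decoder2(torch.cat([z, x1], dim=1))   # refined reconstruction
        return x1, x2

model = CascadeAE()
x = torch.rand(8, 784)
x1, x2 = model(x)
# Supervise both stages so the cascade is trained end to end.
loss = nn.functional.mse_loss(x1, x) + nn.functional.mse_loss(x2, x)
loss.backward()
```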

    Distributed compressed sensing for photo-acoustic imaging

    Photo-Acoustic Tomography (PAT) combines ultrasound resolution and penetration with the endogenous optical contrast of tissue. Real-time PAT imaging is limited by the number of parallel data-acquisition channels and the pulse repetition rate of the laser. Typical photoacoustic signals afford sparse representation, and PAT transducer configurations exhibit significant intra- and inter-signal correlation. In this work, we formulate photoacoustic signal recovery in the Distributed Compressed Sensing (DCS) framework to exploit this correlation. Reconstruction using the proposed method achieves better image quality than compressed sensing while using significantly fewer samples. Through our results, we demonstrate that DCS has the potential to achieve real-time PAT imaging.
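A minimal sketch of the joint-sparsity recovery that Distributed Compressed Sensing relies on, in the multiple-measurement-vector form: several channels share a common sparse support, so whole rows are thresholded rather than individual entries. The Gaussian measurement matrix and the row-sparse iterative hard thresholding solver below are stand-ins for the paper's PAT forward model and reconstruction algorithm, which the abstract does not specify.

```python
# Joint (row-sparse) recovery sketch for correlated channels (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n, m, c, k = 256, 96, 8, 10                  # signal length, measurements, channels, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Ground truth: c channels sharing one k-sparse support (the joint structure DCS exploits)
support = rng.choice(n, size=k, replace=False)
X = np.zeros((n, c))
X[support] = rng.standard_normal((k, c))
Y = A @ X                                    # per-channel measurements

def row_sparse_iht(A, Y, k, iters=500):
    """Iterative hard thresholding that keeps the k rows with largest l2 norm."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe gradient step size
    X_hat = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(iters):
        X_hat = X_hat + step * (A.T @ (Y - A @ X_hat))
        keep = np.argsort(np.linalg.norm(X_hat, axis=1))[-k:]
        mask = np.zeros(A.shape[1], dtype=bool)
        mask[keep] = True
        X_hat[~mask] = 0.0                   # zero out all but the k strongest rows
    return X_hat

X_hat = row_sparse_iht(A, Y, k)
print("relative error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```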

    Compressed Sensing Based Computed Tomography Image Reconstruction

    In computed tomography (CT), an important objective is to reduce radiation dose without degrading image quality. Radiation exposure from CT scans can cause severe health problems, with the risk being especially high for children and women; higher exposure can lead to leukemia, cancer, and other diseases. Low-dose CT image reconstruction is therefore a major concern today: an image of good quality must be reconstructed from a limited number of projections. Compressed sensing enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. According to compressed sensing theory, a signal or image can be reconstructed from few samples, with sparse representation as the key requirement. Since images are not sparse in themselves, a sparsifying transform is used to make them sparse. The object to be reconstructed is scanned by sensors and several forward projections are taken; in low-dose CT only a small number of projections is considered, and the image is reconstructed from these projections. CT image reconstruction is an ill-posed problem, i.e., it requires solving an underdetermined system of equations. The proposed system solves this reconstruction problem using compressed sensing, choosing noiselets as the measurement matrix and the Haar wavelet as the representation basis. Incoherence between the measurement matrix and the representation basis is a key property of compressive sensing, and it is this incoherence that makes the image reconstruction possible.
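A minimal sketch of the reconstruction principle the abstract describes: solve the underdetermined system y = Ax by exploiting sparsity in an orthonormal basis (here a 1-D Haar transform) via iterative soft thresholding (ISTA). A Gaussian random matrix stands in for the noiselet measurement matrix and the CT projection geometry, which this sketch does not model.

```python
# Sparse recovery from few measurements with a Haar sparsifying basis (illustrative).
import numpy as np

def haar_matrix(n):
    """Orthonormal 1-D Haar transform matrix, n a power of two."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])
    return np.vstack([top, bottom]) / np.sqrt(2.0)

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
n, m = 128, 64
H = haar_matrix(n)                       # sparsifying transform: s = H x
s_true = np.zeros(n)
s_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
x_true = H.T @ s_true                    # piecewise-constant-like signal
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for noiselet measurements
y = A @ x_true                           # "few projections"

B = A @ H.T                              # measurements as a function of the sparse coefficients
L = np.linalg.norm(B, 2) ** 2            # Lipschitz constant of the data-fit gradient
lam = 0.01
s = np.zeros(n)
for _ in range(500):                     # ISTA: gradient step + soft thresholding
    s = soft(s + (B.T @ (y - B @ s)) / L, lam / L)
x_rec = H.T @ s
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```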

    Compressed sensing electron tomography using adaptive dictionaries: a simulation study

    Electron tomography (ET) is an increasingly important technique for examining the three-dimensional morphologies of nanostructures. ET involves the acquisition of a set of 2D projection images that are reconstructed into a volumetric image by solving an inverse problem. However, due to limitations in the acquisition process, this inverse problem is considered ill-posed (i.e., no unique solution exists). Furthermore, the reconstruction usually suffers from missing-wedge artifacts (e.g., star, fan, blurring, and elongation artifacts). Compressed sensing (CS) has recently been applied to ET and has shown promising results for reducing the missing-wedge artifacts caused by limited-angle sampling. CS uses a nonlinear reconstruction algorithm that employs image sparsity as a priori knowledge to improve the accuracy of density reconstruction from a relatively small number of projections compared to other reconstruction techniques. However, the performance of CS recovery depends heavily on the degree of sparsity of the reconstructed image in the selected transform domain. Prespecified transformations such as spatial gradients provide sparse image representations, while synthesising the sparsifying transform based on the properties of the particular specimen may give even sparser results and can extend the application of CS to specimens that cannot be sparsely represented with other transforms such as total variation (TV). In this work, we show that CS reconstruction in ET can be significantly improved by tailoring the sparsity representation using a sparse dictionary learning principle.
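A minimal sketch of the adaptive-dictionary idea: learn a sparsifying dictionary from image patches and sparse-code them, rather than relying on a fixed transform such as total variation. scikit-learn's dictionary learner is used purely for illustration on synthetic data; the paper's ET reconstruction pipeline, training data, and parameter choices are not reproduced here.

```python
# Learn a patch dictionary and sparse-code patches with it (illustrative).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(2)
image = rng.random((64, 64))                     # stand-in for a reconstructed slice
patches = extract_patches_2d(image, (8, 8), max_patches=500, random_state=0)
patches = patches.reshape(len(patches), -1)
patches -= patches.mean(axis=1, keepdims=True)   # remove the DC component of each patch

dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                   transform_algorithm='omp',
                                   transform_n_nonzero_coefs=5,
                                   random_state=0)
codes = dico.fit(patches).transform(patches)     # sparse code for each patch
D = dico.components_                             # learned dictionary atoms
print("mean nonzeros per patch:", (codes != 0).sum(axis=1).mean())
```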

    Incremental refinement of image salient-point detection

    Low-level image analysis systems typically detect "points of interest", i.e., areas of natural images that contain corners or edges. Most of the robust and computationally efficient detectors proposed for this task use the autocorrelation matrix of the localized image derivatives. Although the performance of such detectors and their suitability for particular applications have been studied in the relevant literature, their behavior under limited input source (image) precision or limited computational or energy resources is largely unknown. All existing frameworks assume that the input image is readily available for processing and that sufficient computational and energy resources exist for the completion of the result. Nevertheless, recent advances in incremental image sensors and compressed sensing, as well as the demand for low-complexity scene analysis in sensor networks, now challenge these assumptions. In this paper, we investigate an approach to compute salient points of images incrementally, i.e., the salient point detector can operate with a coarsely quantized input image representation and successively refine the result (the derived salient points) as the image precision is successively refined by the sensor. This has the advantage that the image sensing and the salient point detection can be terminated at any input image precision (e.g., a bound set by the sensory equipment, by computation, or by the salient point accuracy required by the application), and the salient points obtained at this precision are readily available. We focus on the popular detector proposed by Harris and Stephens and demonstrate how such an approach can operate when the image samples are refined in a bitwise manner, i.e., the image bitplanes are received one by one from the image sensor. We estimate the required energy for image sensing as well as the computation required for the salient point detection based on stochastic source modeling. The computation and energy required by the proposed incremental refinement approach are compared against a conventional salient-point detector realization that operates directly on each source precision and cannot refine the result. Our experiments demonstrate the feasibility of incremental approaches for salient point detection in various classes of natural images. In addition, a first comparison between the results obtained by the intermediate detectors is presented, along with a novel application for adaptive low-energy image sensing based on points of saliency.
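A minimal sketch of the incremental setting described above: an 8-bit image arrives bitplane by bitplane (most significant bit first), and a Harris-type corner response is recomputed on each coarsely quantized approximation. The plain Harris implementation below is generic and recomputes everything at each precision; it does not reuse intermediate results the way the paper's incremental refinement does.

```python
# Bitplane-by-bitplane refinement of a Harris-type response (illustrative).
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma=1.0, k=0.04):
    """Generic Harris corner response from the autocorrelation of image derivatives."""
    iy, ix = np.gradient(img.astype(float))
    ixx = gaussian_filter(ix * ix, sigma)
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)
    det = ixx * iyy - ixy ** 2
    trace = ixx + iyy
    return det - k * trace ** 2

rng = np.random.default_rng(3)
image = (rng.random((64, 64)) * 255).astype(np.uint8)   # stand-in for a natural image

partial = np.zeros_like(image)
for bit in range(7, -1, -1):                      # bitplanes arrive MSB first
    partial |= (image & np.uint8(1 << bit))       # refine the quantized approximation
    r = harris_response(partial)
    n_salient = int((r > 0.01 * r.max()).sum())
    print(f"precision {8 - bit} bit(s): {n_salient} candidate salient points")
```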

    Analysis of Inpainting via Clustered Sparsity and Microlocal Analysis

    Recently, compressed sensing techniques in combination with both wavelet and directional representation systems have been very effectively applied to the problem of image inpainting. However, a mathematical analysis of these techniques which reveals the underlying geometrical content is completely missing. In this paper, we provide the first comprehensive analysis in the continuum domain utilizing the novel concept of clustered sparsity, which, besides leading to asymptotic error bounds, also makes the superior behavior of directional representation systems over wavelets precise. First, we propose an abstract model for problems of data recovery and derive error bounds for two different recovery schemes, namely l_1 minimization and thresholding. Second, we set up a particular microlocal model for an image governed by edges, inspired by seismic data, as well as a particular mask to model the missing data, namely a linear singularity masked by a horizontal strip. Applying the abstract estimate in the case of wavelets and of shearlets, we prove that -- provided the size of the missing part is asymptotic to the size of the analyzing functions -- asymptotically precise inpainting can be obtained for this model. Finally, we show that shearlets can fill strictly larger gaps than wavelets in this model.
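For reference, the two recovery schemes the abstract analyzes can be written in generic notation as follows, with f the image, P_K the projection onto the known part of the data, P_M the projection onto the masked part, Phi the analyzing frame (wavelets or shearlets), and T_lambda thresholding of frame coefficients at level lambda. This is a sketch of the standard analysis-side formulations; the exact operators and parameter choices in the paper may differ.

```latex
% l1 minimization: fit the known data, minimize the l1 norm of the analysis coefficients
\hat{f} = \operatorname*{arg\,min}_{x} \ \|\Phi^{*} x\|_{1}
          \quad \text{subject to} \quad P_K x = P_K f

% Iterative thresholding with a decreasing threshold sequence (\lambda_n):
f^{(n+1)} = \Phi \, T_{\lambda_n}\!\bigl(\Phi^{*}\,(P_K f + P_M f^{(n)})\bigr)
```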