8,640 research outputs found

    Compressive Time-of-Flight 3D Imaging Using Block-Structured Sensing Matrices

    Spatially and temporally highly resolved depth information enables numerous applications, including human-machine interaction in gaming or safety functions in the automotive industry. In this paper, we address this issue using Time-of-Flight (ToF) 3D cameras, which are compact devices providing highly resolved depth information. Practical restrictions often require reducing the amount of data to be read out and transmitted. With standard ToF cameras, this can only be achieved by lowering the spatial or temporal resolution. To overcome this limitation, we propose a compressive ToF camera design using block-structured sensing matrices that allows the amount of data to be reduced while keeping high spatial and temporal resolution. We propose efficient reconstruction algorithms based on l^1-minimization and TV-regularization. The reconstruction methods are applied to data captured by a real ToF camera system and evaluated in terms of reconstruction quality and computational effort. For both l^1-minimization and TV-regularization, we use a local as well as a global reconstruction strategy. For all considered instances, global TV-regularization clearly performs best in terms of evaluation metrics, including the PSNR. Comment: Following a suggestion, we changed the old title "A Framework for Compressive Time-of-Flight 3D Sensing" to "Compressive Time-of-Flight 3D Imaging Using Block-Structured Sensing Matrices".
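    As an illustration of the block-structured sensing idea described in this abstract, the sketch below applies one small sensing matrix to every image block (equivalent to a block-diagonal global matrix) and reconstructs each block locally by l^1-minimization in a DCT basis via ISTA. The block size, Bernoulli sensing matrix, regularization weight, and the use of ISTA/DCT instead of the paper's TV-regularized global reconstruction are all illustrative assumptions, not the authors' exact method.

```python
# Minimal sketch: block-structured compressive sampling + local l1 (ISTA) recovery.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
B, m_per_block = 8, 16                          # 8x8 blocks, 16 measurements each (25% rate)

def sample_blocks(img, Phi):
    """Apply the same m x B^2 sensing matrix to every non-overlapping block."""
    H, W = img.shape
    meas = []
    for i in range(0, H, B):
        for j in range(0, W, B):
            meas.append(Phi @ img[i:i+B, j:j+B].ravel())
    return np.array(meas)

def ista_block(y, Phi, lam=0.05, iters=200):
    """Reconstruct one block by l1-minimization in the DCT domain via ISTA."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(B * B)
    for _ in range(iters):
        z = x - step * (Phi.T @ (Phi @ x - y))          # gradient step on the data fit
        c = dctn(z.reshape(B, B), norm='ortho')
        c = np.sign(c) * np.maximum(np.abs(c) - lam * step, 0.0)   # soft threshold
        x = idctn(c, norm='ortho').ravel()
    return x.reshape(B, B)

# Toy depth map and a random Bernoulli sensing matrix shared by all blocks.
depth = rng.random((32, 32))
Phi = rng.choice([-1.0, 1.0], size=(m_per_block, B * B)) / np.sqrt(m_per_block)
y = sample_blocks(depth, Phi)

blocks = [ista_block(yb, Phi) for yb in y]
recon, k = np.zeros_like(depth), 0
for i in range(0, 32, B):
    for j in range(0, 32, B):
        recon[i:i+B, j:j+B] = blocks[k]; k += 1
```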

    Compressive Sensing of Large-Scale Images: An Assumption-Free Approach

    Cost-efficient compressive sensing of big media data with fast reconstruction of high-quality results is very challenging. In this paper, we propose a new large-scale image compressive sensing method, composed of an operator-based strategy in the context of the fixed-point continuation method and a weighted LASSO with a tree-structured sparsity pattern. The main characteristic of our method is that it is free from any assumptions and restrictions. The feasibility of our method is verified via simulations and comparisons with state-of-the-art algorithms. Comment: 8 pages, 5 figures.
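    The following sketch illustrates the operator-based (matrix-free) strategy this abstract relies on: the sensing operator is stored as a pair of function handles (a subsampled orthonormal transform and its adjoint) rather than an explicit matrix, and a fixed-point-continuation-style shrinkage iteration runs on top of it. The partial-DCT operator, the plain (unweighted) soft threshold, and the assumption that the signal is sparse in the image domain are simplifications; the paper instead uses a weighted LASSO with a tree-structured sparsity pattern.

```python
# Minimal sketch: matrix-free partial-DCT sensing with an FPC-style iteration.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
n = 512 * 512
idx = rng.choice(n, size=n // 8, replace=False)   # keep 12.5% of the coefficients

def A(x):             # forward operator: image -> subsampled DCT coefficients
    return dctn(x.reshape(512, 512), norm='ortho').ravel()[idx]

def At(y):            # adjoint: measurements -> image-domain back-projection
    c = np.zeros(n)
    c[idx] = y
    return idctn(c.reshape(512, 512), norm='ortho').ravel()

def fpc_like(y, lam=0.01, iters=100, tau=1.0):
    """Fixed-point-continuation-style iteration: x <- shrink(x - tau * At(Ax - y))."""
    x = np.zeros(n)
    for _ in range(iters):
        x = x - tau * At(A(x) - y)
        x = np.sign(x) * np.maximum(np.abs(x) - lam * tau, 0.0)
    return x.reshape(512, 512)

# Toy sparse image; memory stays O(n) because no sensing matrix is ever formed.
img = np.zeros((512, 512))
img[rng.integers(0, 512, 2000), rng.integers(0, 512, 2000)] = 1.0
x_hat = fpc_like(A(img.ravel()))
```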

    Robust Coding of Encrypted Images via Structural Matrix

    The robust coding of natural images and the effective compression of encrypted images have been studied individually in recent years. However, little work has been done on the robust coding of encrypted images, and the existing results in these two individual research areas cannot be combined directly. This is because the robust coding of natural images relies on the elimination of spatial correlations using sparse transforms such as the discrete wavelet transform (DWT), which is ineffective for encrypted images due to the weak correlation between encrypted pixels. Moreover, the compression of encrypted images always generates code streams with different significance. If one or more such streams are lost, the quality of the reconstructed images may drop substantially or decoding errors may occur, which violates the goal of robust coding of encrypted images. In this work, we design a robust coder, based on compressive sensing with a structurally random matrix, for encrypted images over packet transmission networks. The proposed coder can be applied in the scenario where Alice needs a semi-trusted channel provider, Charlie, to encode and transmit the encrypted image to Bob. In particular, Alice first encrypts an image using a global random permutation and then sends the encrypted image to Charlie, who samples the encrypted image using a structural matrix. Through an imperfect channel with packet loss, Bob receives the compressive measurements and reconstructs the original image by joint decryption and decoding. Experimental results show that the proposed coder can be considered an efficient multiple description coder with a number of descriptions against packet loss. Comment: 10 pages, 11 figures.
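    The sketch below traces the Alice/Charlie part of the pipeline described in this abstract: encryption by a global random pixel permutation, followed by sampling with a structurally random matrix applied as an operator (random sign flips, a fast orthonormal transform, then random row subsampling). Bob's joint decryption/decoding step is omitted, and the image size, measurement rate, and choice of DCT as the fast transform are illustrative assumptions.

```python
# Minimal sketch: permutation encryption + structurally random matrix measurement.
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(2)
n, m = 256 * 256, 256 * 256 // 4            # 25% measurement rate

# Alice: encryption by a global random permutation of pixel positions.
perm = rng.permutation(n)
def encrypt(img):
    return img.ravel()[perm]

# Charlie: structurally random matrix Phi = S * F * D applied as an operator.
signs = rng.choice([-1.0, 1.0], size=n)      # D: random sign flips
rows = rng.choice(n, size=m, replace=False)  # S: random row subsampling
def srm_measure(x):
    return dct(signs * x, norm='ortho')[rows]   # F: fast orthonormal transform

img = rng.random((256, 256))
y = srm_measure(encrypt(img))                # compressive measurements sent to Bob
```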

    Dictionary and Image Recovery from Incomplete and Random Measurements

    This paper tackles algorithmic and theoretical aspects of dictionary learning from incomplete and random block-wise image measurements and the performance of the adaptive dictionary for sparse image recovery. This problem is related to blind compressed sensing, in which the sparsifying dictionary or basis is viewed as an unknown variable and subject to estimation during sparse recovery. However, unlike existing guarantees for successful blind compressed sensing, our results do not rely on additional structural constraints on the learned dictionary or the measured signal. In particular, we rely on the spatial diversity of compressive measurements to guarantee that the solution is unique with high probability. Moreover, our distinguishing goal is to measure and reduce the estimation error with respect to the ideal dictionary that is based on the complete image. Using recent results from random matrix theory, we show that applying a slightly modified dictionary learning algorithm over compressive measurements results in accurate estimation of the ideal dictionary for large-scale images. Empirically, we experiment with both space-invariant and space-varying sensing matrices and demonstrate the critical role of spatial diversity in measurements. Simulation results confirm that the presented algorithm outperforms typical non-adaptive sparse recovery based on offline-learned universal dictionaries.
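    To make the setting concrete, the sketch below alternates sparse coding and a dictionary update when each image block is seen only through its own (space-varying) sensing matrix: the sparse codes are computed against the compressed dictionary Phi_i D via OMP, and the dictionary is then nudged by a plain gradient step on the measurement fit. The sizes, sparsity level, step size, and the gradient-based update are illustrative assumptions, not the paper's modified dictionary learning algorithm.

```python
# Minimal sketch: dictionary learning from block-wise compressive measurements.
import numpy as np

rng = np.random.default_rng(3)
n, k, m, n_blocks, s = 64, 128, 24, 500, 4    # block dim, atoms, measurements, blocks, sparsity

def omp(A, y, s):
    """Basic orthogonal matching pursuit with a fixed sparsity level s."""
    idx, r = [], y.copy()
    for _ in range(s):
        idx.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef
    z = np.zeros(A.shape[1]); z[idx] = coef
    return z

D = rng.standard_normal((n, k)); D /= np.linalg.norm(D, axis=0)
X = rng.standard_normal((n, n_blocks))                       # vectorized image blocks
Phis = rng.standard_normal((n_blocks, m, n)) / np.sqrt(m)    # space-varying sensing matrices
Y = [Phis[i] @ X[:, i] for i in range(n_blocks)]

for _ in range(10):                                          # alternate coding / update
    Z = np.stack([omp(Phis[i] @ D, Y[i], s) for i in range(n_blocks)], axis=1)
    # gradient of 0.5 * sum_i ||Phi_i D z_i - y_i||^2 with respect to D
    G = sum(np.outer(Phis[i].T @ (Phis[i] @ D @ Z[:, i] - Y[i]), Z[:, i])
            for i in range(n_blocks))
    D -= (0.01 / n_blocks) * G                               # small illustrative step size
    D /= np.linalg.norm(D, axis=0)                           # renormalize atoms
```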

    ReconNet: Non-Iterative Reconstruction of Images from Compressively Sensed Random Measurements

    The goal of this paper is to present a non-iterative and, more importantly, extremely fast algorithm to reconstruct images from compressively sensed (CS) random measurements. To this end, we propose a novel convolutional neural network (CNN) architecture which takes CS measurements of an image as input and outputs an intermediate reconstruction. We call this network ReconNet. The intermediate reconstruction is fed into an off-the-shelf denoiser to obtain the final reconstructed image. On a standard dataset of images, we show significant improvements in reconstruction results (in terms of both PSNR and time complexity) over state-of-the-art iterative CS reconstruction algorithms at various measurement rates. Further, through qualitative experiments on real data collected using our block single pixel camera (SPC), we show that our network is highly robust to sensor noise and can recover visually better quality images than competing algorithms at extremely low sensing rates of 0.1 and 0.04. To demonstrate that our algorithm can recover semantically informative images even at a low measurement rate of 0.01, we present a very robust proof-of-concept real-time visual tracking application. Comment: Accepted at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
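    The PyTorch sketch below shows a simplified ReconNet-style network: a fully connected layer lifts a block's CS measurements to image-block size, and a few convolutional layers refine the result. The layer widths, kernel sizes, and measurement count are illustrative, not the exact architecture or training setup from the paper, and the off-the-shelf denoising stage is omitted.

```python
# Minimal sketch: a ReconNet-style measurements-to-block reconstruction network.
import torch
import torch.nn as nn

class ReconNetSketch(nn.Module):
    def __init__(self, n_measurements, block_size=33):
        super().__init__()
        self.block_size = block_size
        self.fc = nn.Linear(n_measurements, block_size * block_size)   # lift to block size
        self.refine = nn.Sequential(                                   # convolutional refinement
            nn.Conv2d(1, 64, kernel_size=11, padding=5), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=7, padding=3),
        )

    def forward(self, y):                       # y: (batch, n_measurements)
        x = self.fc(y).view(-1, 1, self.block_size, self.block_size)
        return self.refine(x)

# One 33x33 block at roughly a 10% measurement rate: ~109 measurements.
net = ReconNetSketch(n_measurements=109)
y = torch.randn(4, 109)
blocks = net(y)                                 # (4, 1, 33, 33) reconstructed blocks
```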

    LAMP: A Locally Adapting Matching Pursuit Framework for Group Sparse Signatures in Ultra-Wide Band Radar Imaging

    It has been found that radar returns of extended targets are not only sparse but also tend to cluster into randomly located, variable-sized groups. However, the standard compressive sensing techniques applied in radar imaging hardly take this clustering tendency into account when reconstructing the image from the compressed measurements. If the group sparsity is taken into account, it is intuitive that one might obtain better results, in terms of both accuracy and time complexity, compared to conventional recovery techniques like Orthogonal Matching Pursuit (OMP). To remedy this, techniques like Block OMP have been used in the existing literature. An alternative approach is to reconstruct the signal after transforming it into the Hough transform domain, where it becomes point-wise sparse. However, these techniques essentially assume a specific size and structure of the groups and are not always effective if the exact characteristics of the groups are not known prior to reconstruction. In this manuscript, a novel framework that we call locally adapting matching pursuit (LAMP) is proposed for efficient reconstruction of group sparse signals from compressed measurements without assuming any specific size, location, or structure of the groups. The recovery guarantee of LAMP and its superiority over existing algorithms are established with respect to accuracy, time complexity, and flexibility in group size. LAMP has also been successfully applied to a real-world experimental data set. Comment: 14 pages, 22 figures; draft to be submitted to a journal.
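    The sketch below illustrates the locally-adapting idea in the simplest possible form: a matching-pursuit loop that, after picking the strongest atom, also admits contiguous neighbours whose correlation exceeds a fraction of the peak, so clusters of unknown size emerge adaptively. It is a conceptual illustration only, not the paper's LAMP algorithm; the neighbourhood rule, threshold fraction, and problem sizes are assumptions.

```python
# Minimal sketch: matching pursuit with a locally adapting group-selection rule.
import numpy as np

def locally_adapting_mp(A, y, n_iter=10, frac=0.5):
    m, n = A.shape
    support, r = set(), y.copy()
    for _ in range(n_iter):
        corr = np.abs(A.T @ r)
        peak = int(np.argmax(corr))
        group = {peak}
        j = peak - 1                             # grow the group to the left
        while j >= 0 and corr[j] >= frac * corr[peak]:
            group.add(j); j -= 1
        j = peak + 1                             # grow the group to the right
        while j < n and corr[j] >= frac * corr[peak]:
            group.add(j); j += 1
        support |= group
        idx = sorted(support)
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef
    x = np.zeros(n); x[sorted(support)] = coef
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((128, 512)) / np.sqrt(128)
x_true = np.zeros(512); x_true[100:108] = rng.standard_normal(8)   # one cluster of returns
y = A @ x_true
x_hat = locally_adapting_mp(A, y)
```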

    Fast Greedy Approaches for Compressive Sensing of Large-Scale Signals

    Cost-efficient compressive sensing is challenging when facing large-scale data, i.e., data of large size. Conventional compressive sensing methods for large-scale data suffer from low computational efficiency and massive memory storage. In this paper, we revisit well-known solvers called greedy algorithms, including Orthogonal Matching Pursuit (OMP), Subspace Pursuit (SP), and Orthogonal Matching Pursuit with Replacement (OMPR). Generally, these approaches iteratively execute two main steps: 1) support detection and 2) solving a least-squares problem. To reduce the cost of Step 1, one can employ a sensing matrix that is implemented by an operator-based strategy instead of a matrix-based one and can be accelerated by the fast Fourier transform (FFT). Step 2, however, requires maintaining and calculating the pseudo-inverse of a sub-matrix, which is random rather than structured, so the operator-based approach does not work. To overcome this difficulty, instead of solving Step 2 with a closed-form solution, we propose a fast and cost-effective least-squares solver, which combines a Conjugate Gradient (CG) method with our proposed weighted least-squares problem to iteratively approximate the ground truth yielded by a greedy algorithm. Extensive simulations and theoretical analysis validate that the proposed method is cost-efficient and is readily incorporated into existing greedy algorithms to remarkably improve performance on large-scale problems. Comment: 10 pages, 3 figures, 4 tables.
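    The following sketch shows the core trick the abstract describes: inside a greedy solver, the explicit pseudo-inverse of the support sub-matrix is replaced by a matrix-free conjugate-gradient solve of the normal equations, so only applications of A and A^T are needed. A generic dense matrix stands in for the fast operator, and the weighted least-squares reformulation from the paper is not reproduced; both are simplifying assumptions.

```python
# Minimal sketch: OMP with a CG-based, matrix-free least-squares step.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def cg_least_squares(A_op, At_op, y, support, n, maxiter=50):
    """Solve min ||A x - y|| restricted to `support` via CG on the normal equations."""
    sup = np.asarray(sorted(support))

    def restricted_normal(v):                 # v -> (A_S^T A_S) v using only operators
        x = np.zeros(n); x[sup] = v
        return At_op(A_op(x))[sup]

    M = LinearOperator((len(sup), len(sup)), matvec=restricted_normal)
    coef, _ = cg(M, At_op(y)[sup], maxiter=maxiter)
    x = np.zeros(n); x[sup] = coef
    return x

rng = np.random.default_rng(5)
m, n, s = 64, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true

A_op, At_op = (lambda x: A @ x), (lambda r: A.T @ r)   # operator-style access to A
support, r = set(), y.copy()
for _ in range(s):                                     # greedy OMP iterations
    support.add(int(np.argmax(np.abs(At_op(r)))))      # Step 1: support detection
    x_hat = cg_least_squares(A_op, At_op, y, support, n)   # Step 2: CG least squares
    r = y - A_op(x_hat)
```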

    Convolutional Neural Networks for Non-iterative Reconstruction of Compressively Sensed Images

    Traditional algorithms for compressive sensing recovery are computationally expensive and ineffective at low measurement rates. In this work, we propose a data-driven, non-iterative algorithm to overcome the shortcomings of earlier iterative algorithms. Our solution, ReconNet, is a deep neural network whose parameters are learned end-to-end to map block-wise compressive measurements of the scene to the desired image blocks. Reconstruction of an image becomes a simple forward pass through the network and can be done in real time. We show empirically that our algorithm yields reconstructions with higher PSNRs than iterative algorithms at low measurement rates and in the presence of measurement noise. We also propose a variant of ReconNet which uses an adversarial loss to further improve reconstruction quality. We discuss how adding a fully connected layer to the existing ReconNet architecture allows for jointly learning the measurement matrix and the reconstruction algorithm in a single network. Experiments on real data obtained from a block compressive imager show that our networks are robust to unseen sensor noise. Finally, through an experiment in object tracking, we show that even at very low measurement rates, reconstructions using our algorithm possess rich semantic content that can be used for high-level inference.
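    The sketch below focuses on the joint-learning idea mentioned in this abstract: a bias-free linear layer plays the role of the measurement matrix and is trained together with the reconstruction network. The decoder here is a toy fully connected stack, the adversarial loss term is omitted (only a plain MSE objective is shown), and all sizes are illustrative assumptions rather than the paper's architecture.

```python
# Minimal sketch: jointly learning a measurement matrix and a reconstruction net.
import torch
import torch.nn as nn

block, m = 33, 109                             # 33x33 blocks, ~10% measurement rate

class JointCSNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.measure = nn.Linear(block * block, m, bias=False)   # learned "Phi"
        self.recon = nn.Sequential(                              # toy decoder
            nn.Linear(m, block * block), nn.ReLU(),
            nn.Linear(block * block, block * block),
        )

    def forward(self, x_blocks):               # x_blocks: (batch, 33*33)
        y = self.measure(x_blocks)             # simulated compressive measurements
        return self.recon(y)

net = JointCSNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.rand(16, block * block)              # a batch of vectorized image blocks
opt.zero_grad()
loss = nn.functional.mse_loss(net(x), x)       # reconstruction loss only (no GAN term)
loss.backward(); opt.step()
```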

    Multi-Structural Signal Recovery for Biomedical Compressive Sensing

    Compressive sensing has shown significant promise in biomedical fields. It reconstructs a signal from sub-Nyquist random linear measurements. Classical methods only exploit sparsity in one domain, yet many biomedical signals have additional structure, such as multi-sparsity in different domains, piecewise smoothness, low rank, etc. We propose a framework to exploit all the available structure information. A new convex programming problem is formulated with multiple convex structure-inducing constraints and the linear measurement fitting constraint. With this additional a priori information for solving the underdetermined system, the signal recovery performance can be improved. In numerical experiments, we compare the proposed method with classical methods on both simulated data and real-life biomedical data. Results show that the newly proposed method achieves better reconstruction accuracy in terms of both L1 and L2 errors. Comment: 29 pages, 20 figures; accepted by IEEE Transactions on Biomedical Engineering. Online first version: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6519288&tag=
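    As a concrete illustration of combining several convex structure-inducing terms with a measurement-fitting constraint, the sketch below mixes an l1 sparsity term with a total-variation term (for piecewise smoothness) and solves the resulting convex program with cvxpy. The choice of structures, their weights, the noise level, and the sizes are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch: multi-structure convex recovery (l1 + TV) with a data-fit constraint.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(6)
n, m = 128, 48
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Piecewise-constant toy signal: sparse and smooth in pieces, so both terms help.
x_true = np.zeros(n); x_true[30:60] = 1.0; x_true[80:90] = -0.5
y = A @ x_true + 0.01 * rng.standard_normal(m)

x = cp.Variable(n)
objective = cp.Minimize(cp.norm1(x) + 2.0 * cp.tv(x))        # two structure-inducing terms
constraints = [cp.norm(A @ x - y, 2) <= 0.02 * np.sqrt(m)]   # linear measurement fitting
cp.Problem(objective, constraints).solve()
x_hat = x.value
```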

    Scan-based Compressed Terahertz Imaging and Real-Time Reconstruction via the Complex-valued Fast Block Sparse Bayesian Learning Algorithm

    Compressed-sensing-based terahertz imaging (CS-THz) is a computational imaging technique. It uses only one THz receiver to accumulate randomly modulated image measurements, from which the original THz image is reconstructed using compressed sensing solvers. The advantage of CS-THz is its reduced acquisition time compared with the raster-scan mode. However, when applied to large-scale two-dimensional (2D) imaging, the increased dimensionality results in both high computational complexity and excessive memory usage. In this paper, we introduce a novel CS-based THz imaging system that progressively compresses the THz image column by column. The CS-THz system can therefore be simplified with a much smaller modulator and reduced dimensionality. In order to exploit the block structure and the correlation of adjacent columns of the THz image, a complex-valued block sparse Bayesian learning algorithm is proposed. We conducted a systematic evaluation of state-of-the-art CS algorithms under the scan-based CS-THz architecture. The compression ratios and the choices of sensing matrices were analyzed in detail using both synthetic and real-life THz images. Simulation results show that both the scan-based architecture and the proposed recovery algorithm are superior and efficient for large-scale CS-THz applications.
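    The sketch below illustrates the scan-based measurement architecture: the image is compressed column by column with one small modulation matrix, so the modulator only needs as many elements as the image height rather than the full image size. The per-column recovery shown here is a plain l1 (ISTA) placeholder; the paper's complex-valued block sparse Bayesian learning solver, and the modulator dimensions and patterns, are not reproduced and should be read as assumptions.

```python
# Minimal sketch: column-by-column (scan-based) compressive measurement of an image.
import numpy as np

rng = np.random.default_rng(7)
H, W, m = 64, 80, 24                          # image height/width, measurements per column

Phi = rng.choice([0.0, 1.0], size=(m, H))     # small binary modulation patterns
image = np.zeros((H, W)); image[28:36, 10:60] = 1.0   # sparse toy THz scene

Y = Phi @ image                               # one m-vector of measurements per column

def ista_column(y, lam=0.05, iters=300):
    """Placeholder per-column l1 recovery (stand-in for the complex-valued block SBL)."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(H)
    for _ in range(iters):
        x = x - step * (Phi.T @ (Phi @ x - y))
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)
    return x

recon = np.stack([ista_column(Y[:, j]) for j in range(W)], axis=1)
```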