98 research outputs found

    Green compressive sampling reconstruction in IoT networks

    In this paper, we address the problem of green Compressed Sensing (CS) reconstruction within Internet of Things (IoT) networks, in terms of both computing architecture and reconstruction algorithms. The approach is novel since, unlike most of the literature dealing with energy-efficient gathering of CS measurements, we focus on the energy efficiency of the signal reconstruction stage given the CS measurements. As a first novel contribution, we present an analysis of the energy consumption within the IoT network under two computing architectures. In the first, reconstruction takes place within the IoT network and the reconstructed data are encoded and transmitted out of it; in the second, all CS measurements are forwarded to off-network devices for reconstruction and storage, i.e., reconstruction is off-loaded. Our analysis shows that the two architectures differ significantly in the energy they consume, and it yields a theoretically motivated criterion for selecting a green CS reconstruction computing architecture. Specifically, we present a decision function that determines which architecture is the more energy efficient; it depends on a few IoT network features, such as the network size, the sink connectivity, and other system parameters. As a second novel contribution, we move beyond the classical comparison of CS reconstruction algorithms, usually carried out w.r.t. the achieved accuracy: we also consider the consumed energy and analyze the energy vs. accuracy trade-off. The presented approach, jointly considering signal processing and IoT networking issues, is a relevant contribution to the design of green compressive sampling architectures in IoT networks.
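    The abstract does not give the decision function in closed form; a minimal sketch of the underlying idea, under an assumed linear energy model (all names and costs below are illustrative assumptions, not the paper's actual model), could look like this:

```python
# Hypothetical energy model for choosing between the two computing
# architectures: reconstruct in-network vs. off-load raw CS measurements.
# The linear cost model and all parameter names are illustrative assumptions.

def in_network_energy(n, e_rec_per_sample, e_tx_per_bit, bits_per_sample):
    """Reconstruct a length-n signal inside the IoT network, then encode
    and transmit the reconstructed data out of the network."""
    return n * e_rec_per_sample + n * bits_per_sample * e_tx_per_bit

def offload_energy(m, e_tx_per_bit, bits_per_measurement, sink_capacity):
    """Forward all m raw CS measurements to off-network devices; the sink
    connectivity (sink_capacity) scales the per-bit forwarding cost."""
    return m * bits_per_measurement * e_tx_per_bit / sink_capacity

def greener_architecture(n, m, e_rec, e_tx, b_rec, b_meas, sinks):
    """The sign of the decision function selects the greener architecture."""
    delta = in_network_energy(n, e_rec, e_tx, b_rec) - offload_energy(
        m, e_tx, b_meas, sinks)
    return "off-load reconstruction" if delta > 0 else "reconstruct in-network"
```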

    Compressively Sensed Image Recognition

    Compressive Sensing (CS) theory asserts that sparse signals can be reconstructed from a small number of linear measurements. Although CS enables low-cost linear sampling, it requires non-linear and costly reconstruction. Recent work shows that compressive image classification is possible in the CS domain without reconstructing the signal. In this work, we introduce a DCT-based method that extracts binary discriminative features directly from CS measurements. These CS measurements can be obtained using (i) a random or pseudo-random measurement matrix, or (ii) a measurement matrix whose elements are learned from the training data to optimize the given classification task. We further introduce feature fusion by concatenating a Bag of Words (BoW) representation of our binary features with one of two state-of-the-art CNN-based feature vectors. We show that our fused feature outperforms the state of the art in both cases. Comment: 6 pages, submitted/accepted, EUVIP 2018
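    As a rough illustration of the pipeline just described, the sketch below draws a random Gaussian measurement matrix (case (i)), takes CS measurements of a vectorized image patch, and binarizes the DCT coefficients of the measurement vector by sign. The exact feature construction and the learned-matrix variant in the paper may differ; this is an assumed minimal version:

```python
# Sketch: binary DCT-domain features extracted directly from CS measurements,
# with no signal reconstruction. Dimensions and the sign-binarization rule
# are illustrative assumptions.
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)

n, m = 1024, 256                                # patch length, measurement count
x = rng.standard_normal(n)                      # stand-in for a vectorized image patch
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix

y = Phi @ x                                     # linear CS measurements
coeffs = dct(y, norm='ortho')                   # DCT of the measurement vector
features = (coeffs > 0).astype(np.uint8)        # binarize by sign -> binary descriptor
```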

    ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing

    With the aim of developing a fast yet accurate algorithm for compressive sensing (CS) reconstruction of natural images, we combine in this paper the merits of two existing categories of CS methods: the structural insights of traditional optimization-based methods and the speed of recent network-based ones. Specifically, we propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general ℓ1-norm CS reconstruction model. To cast ISTA into deep network form, we develop an effective strategy to solve the proximal mapping associated with the sparsity-inducing regularizer using nonlinear transforms. All the parameters in ISTA-Net (e.g., nonlinear transforms, shrinkage thresholds, step sizes) are learned end-to-end rather than hand-crafted. Moreover, considering that the residuals of natural images are more compressible, an enhanced version in the residual domain, dubbed ISTA-Net+, is derived to further improve CS reconstruction. Extensive CS experiments demonstrate that the proposed ISTA-Nets outperform existing state-of-the-art optimization-based and network-based CS methods by large margins, while maintaining fast computational speed. Our source code is available at http://jianzhang.tech/projects/ISTA-Net. Comment: 10 pages, 6 figures, 4 tables. To appear in CVPR 2018
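    For context, the classical ISTA iteration that ISTA-Net unfolds into learnable layers alternates a gradient step on the data-fidelity term with soft-thresholding, the proximal map of the ℓ1 regularizer. A plain NumPy version with a fixed step size and threshold (which ISTA-Net instead learns end-to-end):

```python
# Classical ISTA for min_x 0.5*||y - Phi x||^2 + lam*||x||_1.
# ISTA-Net replaces the fixed transform/threshold/step with learned ones.
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1 (element-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(Phi, y, lam, n_iter=200):
    L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad_step = x + Phi.T @ (y - Phi @ x) / L   # gradient descent step
        x = soft_threshold(grad_step, lam / L)      # proximal (shrinkage) step
    return x
```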

    Reconstruction of ultrasound RF echoes modelled as stable random variables

    This paper introduces a new technique for reconstructing biomedical ultrasound images from simulated compressive measurements, based on modelling the data with stable distributions. The proposed algorithm exploits two types of prior information: on the one hand, it builds on the observation that ultrasound RF echoes are best characterized statistically by alpha-stable distributions; on the other hand, knowledge of the acquisition process makes the support of the RF echoes in the Fourier domain easy to infer. Together, these two facts form the basis of an ℓp-minimization approach that employs the iteratively reweighted least squares (IRLS) algorithm, in which the parameter p is judiciously chosen by relating it to the characteristic exponent of the underlying alpha-stable distributed data. We demonstrate, through Monte Carlo simulations, that the optimal value of p lies just below the characteristic exponent α, which we estimate from the data. Our reconstruction results show that the proposed algorithm outperforms previously proposed reconstruction techniques, both visually and in terms of two objective evaluation measures.
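    A bare-bones sketch of the IRLS step for ℓp minimization under equality constraints follows; the epsilon smoothing and minimum-norm update are standard IRLS choices assumed here, and the paper additionally exploits the known Fourier-domain support, which this sketch omits:

```python
# IRLS for approximately solving min_x ||x||_p^p subject to Phi x = y,
# with p chosen just below the estimated characteristic exponent alpha.
import numpy as np

def irls_lp(Phi, y, p, n_iter=30, eps=1e-6):
    x = np.linalg.lstsq(Phi, y, rcond=None)[0]   # minimum-l2 starting point
    for _ in range(n_iter):
        w_inv = (x**2 + eps) ** (1 - p / 2)      # inverse IRLS weights
        PhiW = Phi * w_inv                       # Phi @ diag(w_inv), by broadcasting
        # weighted minimum-norm solution: x = W^{-1} Phi^T (Phi W^{-1} Phi^T)^{-1} y
        x = w_inv * (Phi.T @ np.linalg.solve(PhiW @ Phi.T, y))
    return x
```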

    Compressively characterizing high-dimensional entangled states with complementary, random filtering

    The resources needed to conventionally characterize a quantum system are overwhelmingly large for high-dimensional systems. This obstacle may be overcome by abandoning traditional cornerstones of quantum measurement, such as general quantum states, strong projective measurement, and assumption-free characterization. Following this reasoning, we demonstrate an efficient technique for characterizing high-dimensional spatial entanglement with one set of measurements. We recover sharp distributions with local, random filtering of the same ensemble in momentum followed by position, something the uncertainty principle forbids for projective measurements. Exploiting the expectation that entangled signals are highly correlated, we use fewer than 5,000 measurements to characterize a 65,536-dimensional state. Finally, we use entropic inequalities to witness entanglement without a density matrix. Our method represents the sea change unfolding in quantum measurement, where methods influenced by the information-theory and signal-processing communities replace unscalable, brute-force techniques, a progression previously followed by classical sensing. Comment: 13 pages, 7 figures
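    Stripped of the optics, the measurement model is simple: each random filter acts as a binary mask, and each measurement is one inner product with the (sparse, highly correlated) joint distribution, so far fewer than d projections can suffice. A toy simulation at a reduced dimension so it runs quickly (the experiment itself scales this idea to 65,536 dimensions; all sizes below are illustrative):

```python
# Toy model of compressive characterization with random binary filters:
# k << d mask measurements of a sparse d-dimensional distribution.
import numpy as np

rng = np.random.default_rng(1)

d, k, s = 4096, 300, 20                    # dimension, measurements, nonzeros
p = np.zeros(d)                            # sparse stand-in for the joint distribution
p[rng.choice(d, s, replace=False)] = rng.random(s)
p /= p.sum()

masks = rng.integers(0, 2, size=(k, d)).astype(float)  # random 0/1 filters
y = masks @ p                              # filtered intensities (the k measurements)
# y then feeds a sparse-recovery solver (e.g., l1 minimization) to estimate p.
```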