556 research outputs found

    Adaptive-Rate Compressive Sensing Using Side Information

    We provide two novel adaptive-rate compressive sensing (CS) strategies for sparse, time-varying signals using side information. Our first method uses extra cross-validation measurements, and the second exploits extra low-resolution measurements. Unlike most current CS techniques, we do not assume a known upper bound on the number of significant coefficients that comprise the images in the video sequence. Instead, we use the side information to predict the number of significant coefficients in the signal at the next time instant. For each image in the video sequence, our techniques specify a fixed number of spatially-multiplexed CS measurements to acquire, and adjust this quantity from image to image. Our strategies are developed in the specific context of background subtraction for surveillance video, and we experimentally validate the proposed methods on real video sequences.
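    The rate-adaptation loop described in this abstract can be sketched as follows. The m ≈ k·log(n/k) sizing rule, the problem dimensions, and the hold-out residual test are illustrative assumptions, not the paper's exact design:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def predict_measurement_count(k_prev, n, oversample=2.0):
        """Heuristic rate rule m ~ O(k log(n/k)) for a signal with roughly
        k_prev significant coefficients (an assumed form, not the paper's)."""
        k = max(k_prev, 1)
        return int(np.ceil(oversample * k * np.log(n / k)))

    def cross_validation_error(A_cv, y_cv, x_hat):
        """Relative residual on held-out cross-validation measurements; a large
        value signals that the rate should be raised for the next frame."""
        return np.linalg.norm(y_cv - A_cv @ x_hat) / np.linalg.norm(y_cv)

    n = 256
    x = np.zeros(n)
    x[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
    k_est = np.count_nonzero(x)        # in practice: predicted from side information
    m = predict_measurement_count(k_est, n)

    A_cv = rng.standard_normal((16, n)) / np.sqrt(16)   # extra CV measurements
    err_exact = cross_validation_error(A_cv, A_cv @ x, x)            # good recovery
    err_bad = cross_validation_error(A_cv, A_cv @ x, np.zeros(n))    # failed recovery
    ```

    A reconstruction whose hold-out residual stays near zero keeps the current rate, while a residual near one triggers acquiring more measurements for the next image.
    
    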

    Compressive sensing for 3D microwave imaging systems

    Compressed sensing (CS) image reconstruction techniques are developed and experimentally implemented for wideband microwave synthetic aperture radar (SAR) imaging systems with applications to nondestructive testing and evaluation. These techniques significantly reduce the number of spatial measurement points and, consequently, the acquisition time by sampling below the Nyquist-Shannon rate. Benefiting from the reduced number of samples, this work successfully implemented two scanning procedures: the nonuniform raster and the optimum path. Three CS reconstruction approaches are also proposed for wideband microwave SAR-based imaging systems. The first approach reconstructs a full set of raw data from undersampled measurements via L1-norm optimization and then applies 3D forward SAR to the reconstructed raw data. The second approach employs forward SAR and reverse SAR (R-SAR) transforms in each L1-norm optimization iteration, reconstructing images directly. This dissertation proposes a simple, elegant truncation repair method to combat the truncation error, which is a critical obstacle to the convergence of the CS iterative algorithm. The third proposed CS reconstruction algorithm is adaptive basis selection (ABS) compressed sensing. Rather than using a fixed sparsifying basis, the ABS method adaptively selects the best basis from a set of bases in each iteration of the L1-norm optimization, according to a decision metric derived from the sparsity of the image and the coherence between the measurement and sparsifying matrices. The results of several experiments indicate that the proposed algorithms recover 2D and 3D SAR images with only 20% of the spatial points and reduce the acquisition time by up to 66% relative to conventional methods, while maintaining or improving the quality of the SAR images. (Abstract, page iv)
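    The core step shared by these approaches, recovering a sparse signal from undersampled measurements via L1-norm optimization, can be illustrated with a generic iterative shrinkage-thresholding (ISTA) solver. The solver choice, dimensions, and regularization weight are stand-ins for the dissertation's SAR-specific pipeline:

    ```python
    import numpy as np

    def ista(A, y, lam=0.05, n_iter=300):
        """ISTA for min ||Ax - y||^2 / 2 + lam * ||x||_1: a gradient step on
        the data term followed by soft thresholding."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = x - (A.T @ (A @ x - y)) / L    # gradient step
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(1)
    n, m, k = 128, 48, 4                       # 48/128 ≈ 37% of the full samples
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = 1.0
    x_hat = ista(A, A @ x_true, lam=0.01, n_iter=500)
    ```

    With a random Gaussian measurement matrix and a sufficiently sparse signal, the L1 solution closely matches the true signal despite the heavy undersampling.
    
    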

    Compressed sensing in fluorescence microscopy.

    Compressed sensing (CS) is a signal processing approach that solves ill-posed inverse problems from data that is under-sampled with respect to the Nyquist criterion. CS exploits sparsity constraints based on prior information about the structure of the object in the spatial or other domains. It is commonly used in image and video compression as well as in scientific and medical applications, including computed tomography and magnetic resonance imaging. In the field of fluorescence microscopy, it has been demonstrated to be valuable for fast and high-resolution imaging, from single-molecule localization and super-resolution to light-sheet microscopy. Furthermore, CS has found remarkable applications in the field of mesoscopic imaging, facilitating the study of small animals' organs and entire organisms. This review article illustrates the working principles of CS and its implementations in optical imaging, and discusses several relevant uses of CS in the field of fluorescence imaging, from super-resolution microscopy to mesoscopy.

    Signal processing for microwave imaging systems with very sparse array

    This dissertation investigates image reconstruction algorithms for near-field, two-dimensional (2D) synthetic aperture radar (SAR) using compressed sensing (CS) based methods. In conventional SAR imaging systems, acquiring higher-quality images requires longer measuring time and/or more elements in an antenna array. Millimeter-wave imaging systems using evenly-spaced antenna arrays also face spatial resolution constraints due to the large size of the antennas. This dissertation applies the CS principle to a bistatic antenna array that consists of separate transmitter and receiver subarrays very sparsely and non-uniformly distributed on a 2D plane. One pair of transmitter and receiver elements is turned on at a time, and different pairs are turned on in series to achieve a synthetic aperture and controlled random measurements. This dissertation contributes to CS-hardware co-design by proposing several signal-processing methods, including monostatic approximation, re-gridding, adaptive interpolation, CS-based reconstruction, and image denoising. The proposed algorithms enable the successful implementation of CS-SAR hardware cameras, improve the resolution and image quality, and reduce hardware cost and experiment time. This dissertation also describes and analyzes the results for each independent method. The algorithms proposed in this dissertation break the limitations of the hardware configuration. Using 16 x 16 transmit and receive elements with an average spacing of 16 mm, the sparse-array camera achieves an image resolution of 2 mm with only six percent of the elements of a λ/4 evenly-spaced array. The reconstructed images achieve quality similar to that of a fully-sampled array, with structural similarity (SSIM) larger than 0.8 and peak signal-to-noise ratio (PSNR) greater than 25 dB. (Abstract, page iv)
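    The image-quality figures quoted in this abstract can be computed as follows. The images here are synthetic placeholders, and only PSNR is shown; SSIM involves windowed local statistics and is best taken from a library such as scikit-image:

    ```python
    import numpy as np

    def psnr(ref, img, peak=1.0):
        """Peak signal-to-noise ratio in dB between a reference image and a
        reconstruction, for images with values in [0, peak]."""
        mse = np.mean((ref - img) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    rng = np.random.default_rng(2)
    ref = rng.random((64, 64))                               # placeholder "image"
    noisy = np.clip(ref + 0.01 * rng.standard_normal(ref.shape), 0.0, 1.0)
    quality = psnr(ref, noisy)                               # well above 25 dB here
    ```

    A PSNR above 25 dB, as reported for the sparse-array reconstructions, corresponds to a mean squared error below about 0.3% of the peak intensity squared.
    
    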

    Object reconstruction from adaptive compressive measurements in feature-specific imaging

    Static feature-specific imaging (SFSI), where the measurement basis remains fixed/static during the data measurement process, has been shown to be superior to conventional imaging for reconstruction tasks. Here, we describe an adaptive approach that uses past measurements to inform the choice of measurement basis for future measurements in a feature-specific imaging (FSI) system, with the goal of maximizing reconstruction fidelity while employing the fewest measurements. An algorithm implementing this adaptive approach is developed for FSI systems, and the resulting systems are referred to as adaptive FSI (AFSI) systems. A simulation study is used to analyze the performance of the AFSI system for two choices of measurement basis: principal component (PC) and Hadamard. The root mean squared error (RMSE) metric is employed to quantify reconstruction fidelity. We observe that an AFSI system achieves as much as 30% lower RMSE than an SFSI system. The performance improvement of AFSI systems is verified using an experimental setup based on a digital micromirror device (DMD) array.
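    The benefit of a feature-matched principal-component measurement basis over a non-adapted one can be sketched on toy data. The low-dimensional image model, the linear reconstruction, and all dimensions are assumptions for illustration; a full AFSI system would additionally re-rank the basis after each measurement:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, n_train, d, m = 64, 200, 5, 5
    # Toy model: "images" (as vectors) lying in a d-dimensional subspace.
    subspace = np.linalg.qr(rng.standard_normal((n, d)))[0]
    train = (subspace @ rng.standard_normal((d, n_train))).T

    # PC measurement basis learned from training data (the static-FSI baseline).
    pcs = np.linalg.svd(train - train.mean(0), full_matrices=False)[2]

    x = subspace @ rng.standard_normal(d)      # new object to image
    x_pc = pcs[:m].T @ (pcs[:m] @ x)           # reconstruction from m PC measurements
    rand = np.linalg.qr(rng.standard_normal((n, m)))[0].T
    x_rand = rand.T @ (rand @ x)               # same budget, non-adapted random basis

    rmse_pc = np.sqrt(np.mean((x - x_pc) ** 2))
    rmse_rand = np.sqrt(np.mean((x - x_rand) ** 2))
    ```

    Because the PC basis is matched to the data's principal subspace, the same measurement budget yields a far lower RMSE than an unmatched basis, which is the effect the AFSI scheme exploits and improves on adaptively.
    
    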

    Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

    Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a point spread function (PSF). Additionally, the image acquisition process is contaminated by other sources of noise (read-out, photon counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a quality similar to that of parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods. (31 pages, 47 figures)
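    The forward model underlying this work (observed image = ideal image convolved with the PSF) and the black-disk transit constraint can be sketched as follows. The Gaussian PSF is a stand-in for illustration only, since the paper estimates a non-parametric PSF:

    ```python
    import numpy as np

    n = 64
    # Toy occulted scene: a black disk (the transit body) on a uniform bright
    # background; this known shape is what constrains the PSF estimate.
    yy, xx = np.mgrid[:n, :n]
    scene = np.where((yy - n / 2) ** 2 + (xx - n / 2) ** 2 < (n / 6) ** 2, 0.0, 1.0)

    # Stand-in Gaussian PSF core, normalized to unit flux.
    g = np.exp(-((np.arange(n) - n / 2) ** 2) / (2 * 2.0 ** 2))
    psf = np.outer(g, g)
    psf /= psf.sum()

    # Forward model: circular convolution of the scene with the PSF via FFTs
    # (ifftshift moves the PSF center to the origin before transforming).
    observed = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))))
    ```

    Blind deconvolution inverts this model for both `scene` and `psf` simultaneously; the disk's known geometry rules out the trivial, translated, and interchanged solutions mentioned in the abstract.
    
    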

    Roadmap on optical security

    Information security and authentication are important challenges facing society. Recent attacks by hackers on the databases of large commercial and financial companies have demonstrated that more research and development of advanced approaches are necessary to deny unauthorized access to critical data. Free-space optical technology has been investigated by many researchers for information security, encryption, and authentication. The main motivation for using optics and photonics for information security is that optical waveforms possess many complex degrees of freedom, such as amplitude, phase, polarization, large bandwidth, nonlinear transformations, quantum properties of photons, and multiplexing, that can be combined in many ways to make information encryption more secure and more difficult to attack. This roadmap article presents an overview of the potential, recent advances, and challenges of optical security and encryption using free-space optics. The roadmap on optical security comprises six categories that together include 16 short sections written by authors who have made relevant contributions to this field. The first category describes novel encryption approaches, including secure optical sensing, which summarizes double random phase encryption applications and flaws [Yamaguchi]; digital holographic encryption in free space, which describes encryption using multidimensional digital holography [Nomura]; simultaneous encryption of multiple signals [Pérez-Cabré]; asymmetric methods based on information truncation [Nishchal]; and dynamic encryption of video sequences [Torroba]. Asymmetric and one-way cryptosystems are analyzed by Peng. The second category is on compression for encryption. In their respective contributions, Alfalou and Stern propose similar goals involving compressed data and compressive sensing encryption.
    The very important area of cryptanalysis is the topic of the third category, with two sections: Sheridan reviews phase retrieval algorithms used to perform different attacks, whereas Situ discusses nonlinear optical encryption techniques and the development of a rigorous optical information security theory. The fourth category, with two contributions, reports how encryption could be implemented at the nano- or micro-scale. Naruse discusses the use of nanostructures in security applications, and Carnicer proposes encoding information in a tightly focused beam. In the fifth category, encryption based on ghost imaging using single-pixel detectors is considered. In particular, the authors [Chen, Tajahuerce] emphasize the need for more specialized hardware and image processing algorithms. Finally, in the sixth category, Mosk and Javidi analyze in their corresponding papers how quantum imaging can benefit optical encryption systems. Sources that use few photons make encryption systems much more difficult to attack, providing a secure method for authentication.

    Tensor Decompositions for Signal Processing Applications From Two-way to Multiway Component Analysis

    The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike that of matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction, and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used to address the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real-world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization. Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train.
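    The basic Canonical Polyadic model discussed above can be demonstrated with a minimal alternating least squares (ALS) fit on an exact low-rank tensor. The unfolding conventions and update rules are a textbook sketch, not tied to any particular toolbox; mature libraries provide robust, regularized implementations:

    ```python
    import numpy as np

    def khatri_rao(X, Y):
        """Column-wise Khatri-Rao product, matching the reshape-based
        unfoldings used below."""
        return np.einsum("ir,jr->ijr", X, Y).reshape(-1, X.shape[1])

    def cp_als(T, rank, n_iter=200, seed=0):
        """Minimal CP decomposition by ALS: T ≈ sum_r a_r ∘ b_r ∘ c_r.
        Each step solves a linear least-squares problem for one factor."""
        rng = np.random.default_rng(seed)
        I, J, K = T.shape
        A, B, C = (rng.standard_normal((d, rank)) for d in (I, J, K))
        T1 = T.reshape(I, J * K)                           # mode-1 unfolding
        T2 = np.transpose(T, (1, 0, 2)).reshape(J, I * K)  # mode-2 unfolding
        T3 = np.transpose(T, (2, 0, 1)).reshape(K, I * J)  # mode-3 unfolding
        for _ in range(n_iter):
            A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
            B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
            C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
        return A, B, C

    rng = np.random.default_rng(42)
    A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (6, 7, 8))
    T = np.einsum("ir,jr,kr->ijk", A0, B0, C0)             # exact rank-2 tensor
    A, B, C = cp_als(T, rank=2)
    T_hat = np.einsum("ir,jr,kr->ijk", A, B, C)
    ```

    Unlike a matrix factorization, the recovered rank-one components are essentially unique (up to scaling and permutation) under the mild conditions the article describes.
    
    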