
    High-Dimensional Information Detection based on Correlation Imaging Theory

    Radar is a device that uses electromagnetic (EM) waves to detect targets: by analyzing the signal reflected from a target, it can measure position and motion parameters and extract target characteristic information. From the perspective of the physical foundations of radar, the more than 70 years of radar development have been based on the fluctuation theory of the EM field, and many of its theories have been developed for one-dimensional signal processing. For example, a variety of threshold-filtering methods have been widely used to resist interference during detection; optimal state estimation describes how the statistical characteristics of a target propagate over time in the probability domain; and compressed sensing greatly improves the efficiency of reconstructing sparse signals. These are all one-dimensional information-processing theories, and the information they obtain is a deterministic description of the EM field. The correlated imaging technique, by contrast, starts from the high-order coherence property of the EM field and uses its fluctuation characteristics to realize non-local imaging. Correlated imaging radar, a combination of correlated imaging techniques and modern information theory, provides a novel remote sensing detection and imaging method. More importantly, correlated imaging radar is a new research field, so a complete theoretical framework and application system urgently need to be built up and improved. Based on the coherence theory of the EM field, the work in this thesis explores methods for determining the statistical characteristics of the EM field so that high-dimensional target information can be detected, covering theoretical analysis, principle design, imaging modes, target detection models, image reconstruction algorithms, visibility enhancement, and system design. Simulations and real experiments are set up to prove the theory's validity and the systems' feasibility.
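
    The non-local reconstruction mentioned above can be illustrated with the standard second-order correlation used in ghost imaging. The sketch below is illustrative only (it is not code from the thesis): a set of random illumination patterns I_k(x) is correlated with the corresponding single-pixel "bucket" intensities B_k, and the image emerges as G(x) = <I(x)B> - <I(x)><B>. The scene, pattern statistics, and pattern count are all assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical ground-truth scene: a bright square on a dark background.
        N = 32
        scene = np.zeros((N, N))
        scene[10:22, 10:22] = 1.0

        # Random illumination patterns standing in for the fluctuating EM field.
        K = 10000                                  # number of realizations (assumed)
        patterns = rng.random((K, N, N))

        # Bucket detector: total reflected intensity for each pattern.
        bucket = np.tensordot(patterns, scene, axes=([1, 2], [0, 1]))

        # Second-order correlation: G(x) = <I(x) B> - <I(x)> <B>.
        ghost = (np.tensordot(bucket, patterns, axes=(0, 0)) / K
                 - patterns.mean(axis=0) * bucket.mean())

        # Normalize for display; the correlation recovers the scene non-locally.
        ghost = (ghost - ghost.min()) / (ghost.max() - ghost.min())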

    Comparative Analysis of Quantum Compressive Imaging in Low Photon Regime by Considering Photon Statistics

    Compressive imaging has been an extensively researched area in optical imaging, object tracking, satellite applications, etc. There are many signal recovery methods and comparative analyses of different algorithms in the presence of Gaussian noise. However, certain applications, such as optical imaging at low photon intensity, involve counts of discrete events that cannot be modelled using a Gaussian noise model; instead, a noise model that incorporates photon statistics is needed. Researchers have worked on the Poisson noise model and derived a different compressive sensing reconstruction for it. In this thesis, we considered a more general scenario of compressive imaging using non-classical photon states as light sources. We assumed that the compressive imaging system, which consists of a digital micromirror device (DMD), lenses, and detectors, is perfect, so that all noise comes from the photons themselves. Fock states and squeezed light, which possess non-Poissonian statistics, play an important role in quantum imaging. The image reconstruction was performed using several common compressive sensing signal reconstruction algorithms that assume Gaussian noise. This thesis showed the behavior of the root mean square error (RMSE) with respect to the signal-to-noise ratio (SNR) for different photon statistics. In particular, the study showed that all the noise models perform similarly across the different algorithms. Based on the performance results for the different light sources, this research can be helpful in designing a generalized compressive sensing model incorporating photon statistics that is applicable in the field of quantum optics.
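
    As a concrete illustration of the comparison described above, the sketch below (an assumption-laden toy, not the thesis code) generates compressive measurements corrupted by Poisson shot noise and reconstructs them with ISTA, an l1 solver derived under a Gaussian-noise assumption; swapping the rng.poisson draw for sub-Poissonian count statistics would emulate Fock-state or squeezed-light sources. All sizes and parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical sparse scene: n pixels, s of them bright.
        n, m, s = 256, 128, 8
        x = np.zeros(n)
        x[rng.choice(n, s, replace=False)] = 1.0

        # DMD-style 0/1 measurement patterns (rows of the sensing matrix).
        A = rng.integers(0, 2, (m, n)).astype(float)

        # Shot noise: measurements are discrete photon counts, not Gaussian readings.
        photons = 50.0                        # mean photons per bright pixel (assumed)
        y = rng.poisson(photons * (A @ x)) / photons

        # ISTA: a Gaussian-noise l1 solver applied to the Poisson data.
        L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
        alpha = 0.1                           # l1 weight (illustrative)
        xhat = np.zeros(n)
        for _ in range(500):
            z = xhat - (A.T @ (A @ xhat - y)) / L
            xhat = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)

        rmse = np.sqrt(np.mean((xhat - x) ** 2))
        print(f"RMSE under Poisson shot noise: {rmse:.4f}")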

    Hyperspectral Data Acquisition and Its Application for Face Recognition

    Current face recognition systems face serious challenges in uncontrolled conditions: e.g., unrestrained lighting, pose variations, accessories, etc. Hyperspectral imaging (HI) is typically employed to counter many of those challenges by incorporating the spectral information within different bands. Although numerous methods based on hyperspectral imaging have been developed for face recognition with promising results, three fundamental challenges remain: 1) low signal-to-noise ratios and low intensity values in the bands of the hyperspectral image, specifically near the blue bands; 2) the high dimensionality of hyperspectral data; and 3) inter-band misalignment (IBM) correlated with subject motion during data acquisition. This dissertation concentrates mainly on addressing these challenges in HI. First, to address the low quality of the bands of the hyperspectral image, we utilize a custom light source that has more radiant power at shorter wavelengths and properly adjust camera exposure times to compensate for the lower transmittance of the filter and the lower radiant power of our light source. Second, the high dimensionality of spectral data imposes limitations on numerical analysis, so there is an emerging demand for robust data compression techniques that discard less relevant information to manage real spectral data. To cope with this problem, we describe a reduced-order data modeling technique based on local proper orthogonal decomposition, which computes low-dimensional models by projecting high-dimensional clusters onto subspaces spanned by local reduced-order bases. Third, we investigate 11 leading alignment approaches to address IBM correlated with subject motion during data acquisition. To overcome the limitations of the considered alignment approaches, we propose an accurate alignment approach (A3) that incorporates the strengths of point correspondence and a low-rank model. In addition, we develop two qualitative prediction models to assess the alignment quality of hyperspectral images and to determine the best alignment among the conducted approaches. Finally, we show that the proposed alignment approach leads to promising improvement in the face recognition performance of a probabilistic linear discriminant analysis approach.
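
    The local proper orthogonal decomposition step described above can be sketched in a few lines: band snapshots are grouped into clusters, a truncated SVD gives a local basis per cluster, and each band is stored as a handful of projection coefficients instead of a full image. Everything below (cube size, the contiguous-band "clustering", the basis size r) is an illustrative assumption, not the dissertation's pipeline.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical hyperspectral cube: 33 bands of 64x64 pixels,
        # flattened into one column ("snapshot") per band.
        bands, h, w = 33, 64, 64
        cube = rng.random((bands, h * w))

        # Crude stand-in for clustering: contiguous groups of bands
        # (a real pipeline would cluster by spectral similarity).
        clusters = np.array_split(np.arange(bands), 3)

        r = 5                                   # local reduced-order basis size
        models = {}
        for c, idx in enumerate(clusters):
            X = cube[idx].T                     # pixels x snapshots for cluster c
            mean = X.mean(axis=1, keepdims=True)
            U, _, _ = np.linalg.svd(X - mean, full_matrices=False)
            basis = U[:, :r]                    # local POD basis
            coeffs = basis.T @ (X - mean)       # r numbers per band, not h*w
            models[c] = (basis, mean, coeffs)

        # Low-dimensional reconstruction of the first cluster's bands.
        basis, mean, coeffs = models[0]
        approx = basis @ coeffs + mean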

    Exploring information retrieval using image sparse representations: from circuit designs and acquisition processes to specific reconstruction algorithms

    New advances in the field of image sensors (especially in CMOS technology) tend to call into question the conventional methods used to acquire images. Compressive Sensing (CS) plays a major role here, especially in unclogging the analog-to-digital converters (ADCs), which generally represent the bottleneck of this type of sensor. In addition, CS eliminates the traditional compression stages performed by embedded digital signal processors dedicated to this purpose. The interest is twofold: it consistently reduces the amount of data to be converted and also suppresses digital processing performed outside the sensor chip. For the moment, regarding the use of CS in image sensors, the main route of exploration, as well as the intended applications, aims at reducing the power consumption related to these components (the ADC and DSP represent 99% of the total power consumption). More broadly, the paradigm of CS allows one to question, or at least extend, the Nyquist-Shannon sampling theory. This thesis presents developments in the field of image sensors demonstrating that it is possible to consider alternative applications linked to CS. Indeed, advances are presented in the fields of hyperspectral imaging, super-resolution, high dynamic range, high speed, and non-uniform sampling. In particular, three research axes have been deepened, aiming to design proper architectures and acquisition processes, with their associated reconstruction techniques, that take advantage of image sparse representations. How can the on-chip implementation of compressive sensing relax sensor constraints and improve the acquisition characteristics (speed, dynamic range, power consumption)? How can CS be combined with simple analysis to provide useful image features for high-level applications (adding semantic information) and improve the reconstructed image quality at a given compression ratio? Finally, how can CS overcome physical limitations (i.e., spectral sensitivity and pixel pitch) of imaging systems without a major impact on either the sensing strategy or the optical elements involved? A CMOS image sensor has been developed and manufactured during this Ph.D. to validate concepts such as high-dynamic-range CS. A new design approach was employed, resulting in innovative solutions for pixel addressing and conversion to perform specific acquisition in a compressed mode. In addition, the principle of adaptive CS combined with non-uniform sampling has been developed, and possible implementations of this type of acquisition are proposed. Finally, preliminary work is presented on the use of liquid crystal devices to combine hyperspectral imaging with spatial super-resolution. The conclusion of this study can be summarized as follows: CS must now be considered as a toolbox for more easily defining compromises between the different characteristics of a sensor: integration time, converter speed, dynamic range, resolution, and digital processing resources. However, while CS relaxes some material constraints at the sensor level, the collected data may be difficult to interpret and process at the decoder side, requiring massive computational resources compared to so-called conventional techniques. The application field is wide, implying that for a targeted application, an accurate characterization of the constraints concerning both the sensor (encoder) and the decoder needs to be defined.
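
    The ADC-relaxation argument above can be made concrete with a toy model (assumed, not from the thesis): instead of digitizing all n pixels, the sensor digitizes m < n analog combinations of pixels, and a standard CS decoder such as orthogonal matching pursuit recovers the sparse image off-chip. Sizes, the +/-1 mixing patterns, and the sparsity level are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothetical 16x16 pixel array whose image is sparse (a few hot pixels).
        n, m, s = 256, 64, 5                 # pixels, ADC conversions, sparsity
        x = np.zeros(n)
        x[rng.choice(n, s, replace=False)] = rng.random(s) + 0.5

        # On-chip analog mixing: each conversion digitizes one random
        # +/-1 combination of pixels instead of a single pixel.
        A = rng.choice([-1.0, 1.0], (m, n)) / np.sqrt(m)
        y = A @ x                            # m conversions instead of n

        # Off-chip decoder: orthogonal matching pursuit.
        residual, support = y.copy(), []
        for _ in range(s):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        xhat = np.zeros(n)
        xhat[support] = coef

        print(f"compression ratio m/n = {m / n:.2f}, "
              f"max abs error = {np.abs(xhat - x).max():.2e}")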

    Randomness as a computational strategy : on matrix and tensor decompositions

    Matrix and tensor decompositions are fundamental tools for finding structure in data and for data processing. In particular, the efficient computation of low-rank matrix approximations is a ubiquitous problem in machine learning and elsewhere. However, massive data arrays pose a computational challenge for these techniques, placing significant constraints on both memory and processing power. Recently, the fascinating and powerful concept of randomness has been introduced as a strategy to ease the computational load of deterministic matrix and data algorithms. The basic idea of these algorithms is to employ a degree of randomness as part of the logic in order to derive, from a high-dimensional input matrix, a smaller matrix that captures the essential information of the original data. The smaller matrix is then used to efficiently compute a near-optimal low-rank approximation. Randomized algorithms have been shown to be robust, highly reliable, and computationally efficient, yet simple to implement. In particular, the development of the randomized singular value decomposition can be seen as a milestone in the era of ‘big data’. Building on the great success of this probabilistic strategy for computing low-rank matrix decompositions, this thesis introduces a set of new randomized algorithms. Specifically, we present a randomized algorithm to compute the dynamic mode decomposition, a modern dimension reduction technique designed to extract dynamic information from dynamical systems. We then advocate the randomized dynamic mode decomposition for background modeling of surveillance video feeds. Further, we show that randomized algorithms are embarrassingly parallel by design and that graphics processing units (GPUs) can be utilized to substantially accelerate the computations. Finally, the concept of randomized algorithms is generalized to tensors in order to compute the canonical CANDECOMP/PARAFAC (CP) decomposition.
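
    The two-stage pattern the abstract describes (randomly sketch the big matrix, then factorize the small one deterministically) is the standard randomized SVD recipe; a minimal sketch follows, with the test matrix, target rank, and oversampling parameter chosen purely for illustration.

        import numpy as np

        def rsvd(A, rank, oversample=10, seed=None):
            """Randomized SVD: sketch A to a thin matrix, then factorize that."""
            rng = np.random.default_rng(seed)
            # 1) A random projection captures the dominant column space of A.
            Omega = rng.standard_normal((A.shape[1], rank + oversample))
            Q, _ = np.linalg.qr(A @ Omega)          # orthonormal basis of the sketch
            # 2) Deterministic SVD of the small projected matrix B = Q^T A.
            B = Q.T @ A
            Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
            return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

        # Illustrative check on a synthetic low-rank matrix.
        rng = np.random.default_rng(4)
        A = rng.standard_normal((2000, 50)) @ rng.standard_normal((50, 1000))
        U, s, Vt = rsvd(A, rank=50, seed=0)
        err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
        print(f"relative approximation error: {err:.2e}")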

    Texture and Colour in Image Analysis

    Research in colour and texture has experienced major changes in the last few years. This book presents some recent advances in the field, specifically in the theory and applications of colour texture analysis. This volume also features benchmarks, comparative evaluations, and reviews.

    Roadmap on optical security

    Information security and authentication are important challenges facing society. Recent attacks by hackers on the databases of large commercial and financial companies have demonstrated that more research and development of advanced approaches are necessary to deny unauthorized access to critical data. Free space optical technology has been investigated by many researchers in information security, encryption, and authentication. The main motivation for using optics and photonics for information security is that optical waveforms possess many complex degrees of freedom, such as amplitude, phase, polarization, large bandwidth, nonlinear transformations, quantum properties of photons, and multiplexing, that can be combined in many ways to make information encryption more secure and more difficult to attack. This roadmap article presents an overview of the potential, recent advances, and challenges of optical security and encryption using free space optics. The roadmap on optical security comprises six categories that together include 16 short sections written by authors who have made relevant contributions to this field. The first category describes novel encryption approaches, including secure optical sensing, which summarizes double random phase encryption applications and flaws [Yamaguchi]; digital holographic encryption in free space, which describes encryption using multidimensional digital holography [Nomura]; simultaneous encryption of multiple signals [Pérez-Cabré]; asymmetric methods based on information truncation [Nishchal]; and dynamic encryption of video sequences [Torroba]. Asymmetric and one-way cryptosystems are analyzed by Peng. The second category is on compression for encryption; in their respective contributions, Alfalou and Stern pursue similar goals involving compressed data and compressive sensing encryption. The very important area of cryptanalysis is the topic of the third category, with two sections: Sheridan reviews phase retrieval algorithms used to perform different attacks, whereas Situ discusses nonlinear optical encryption techniques and the development of a rigorous optical information security theory. The fourth category, with two contributions, reports how encryption could be implemented at the nano- or micro-scale: Naruse discusses the use of nanostructures in security applications, and Carnicer proposes encoding information in a tightly focused beam. The fifth category considers encryption based on ghost imaging using single-pixel detectors; in particular, the authors [Chen, Tajahuerce] emphasize the need for more specialized hardware and image processing algorithms. Finally, in the sixth category, Mosk and Javidi analyze in their corresponding papers how quantum imaging can benefit optical encryption systems: sources that use few photons make encryption systems much more difficult to attack, providing a secure method for authentication.
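
    The double random phase encryption reviewed in the first category has a compact textbook form (the classical 4f scheme): multiply the image by a random phase mask, Fourier transform, multiply by a second mask, and inverse transform; decryption applies the conjugate keys in reverse. The sketch below is that generic scheme under assumed sizes and seeds, not any contributor's specific variant.

        import numpy as np

        rng = np.random.default_rng(5)

        # Hypothetical real-valued plaintext image.
        N = 64
        img = rng.random((N, N))

        # Two statistically independent random phase masks (the keys).
        m1 = np.exp(2j * np.pi * rng.random((N, N)))   # input-plane mask
        m2 = np.exp(2j * np.pi * rng.random((N, N)))   # Fourier-plane mask

        # Encryption (4f correlator): mask, FFT, mask, inverse FFT.
        cipher = np.fft.ifft2(np.fft.fft2(img * m1) * m2)

        # Decryption with the conjugate keys applied in reverse order.
        plain = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2)) * np.conj(m1)

        print("max reconstruction error:", np.abs(plain.real - img).max())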

    Top-Push Constrained Modality-Adaptive Dictionary Learning for Cross-Modality Person Re-Identification
