    Microwave quantum illumination using a digital receiver

    Quantum illumination is a powerful sensing technique that employs entangled signal-idler photon pairs to boost the detection efficiency of low-reflectivity objects in environments with bright thermal noise. The promised advantage over classical strategies is particularly evident at low signal powers, a feature which could make the protocol an ideal prototype for non-invasive biomedical scanning or low-power short-range radar. In this work we experimentally investigate the concept of quantum illumination at microwave frequencies. We generate entangled fields using a Josephson parametric converter to illuminate a room-temperature object at a distance of 1 meter in a free-space detection setup. We implement a digital phase-conjugate receiver based on linear quadrature measurements that outperforms a symmetric classical noise radar under the same conditions, despite the entanglement-breaking signal path. Starting from experimental data, we also simulate the case of perfect idler photon number detection, which yields a quantum advantage over the corresponding classical benchmark. Our results highlight the opportunities and challenges on the way towards a first room-temperature application of microwave quantum circuits.
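The phase-conjugate receiver described above correlates the measured return quadratures with the stored idler quadratures. A minimal classical-statistics sketch of that idea (not the authors' implementation; all parameter values are illustrative assumptions) simulates Gaussian quadratures of a two-mode squeezed source, buries the reflected signal in thermal noise, and accumulates the phase-conjugate correlation Re[a_R a_I] ~ X_R X_I - P_R P_I as the detection statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000            # independent signal-idler mode pairs
r = 1.0                # squeezing parameter of the entangled source (assumed)
eta = 0.01             # round-trip target reflectivity (assumed)
nb = 20.0              # mean thermal background photons (assumed)

a, b = np.exp(r), np.exp(-r)

def tmsv_pair():
    """Positively correlated quadrature pair of a two-mode squeezed vacuum."""
    u = rng.normal(scale=np.sqrt(0.5), size=n)
    v = rng.normal(scale=np.sqrt(0.5), size=n)
    return (a * u + b * v) / np.sqrt(2), (a * u - b * v) / np.sqrt(2)

def pc_statistic(target_present):
    xs, xi = tmsv_pair()           # X quadratures: positively correlated
    ps, pi_ = tmsv_pair()
    pi_ = -pi_                     # P quadratures: anti-correlated
    noise = np.sqrt((2 * nb + 1) / 2)
    xb = rng.normal(scale=noise, size=n)   # bright thermal background
    pb = rng.normal(scale=noise, size=n)
    k = np.sqrt(eta) if target_present else 0.0
    xr = k * xs + np.sqrt(1 - k**2) * xb   # return mode, mostly noise
    pr = k * ps + np.sqrt(1 - k**2) * pb
    # phase-conjugate correlation: Re[a_R a_I] ~ X_R X_I - P_R P_I
    return np.sum(xr * xi - pr * pi_)
```

With the target present the statistic acquires a positive mean proportional to sqrt(eta)·sinh(2r), well separated from the zero-mean target-absent case even though each return mode is dominated by thermal noise.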

    Using Lidar to geometrically-constrain signature spaces for physics-based target detection

    A fundamental task when performing target detection on spectral imagery is ensuring that a target signature is in the same metric domain as the measured spectral data set. Remotely sensed data are typically collected in digital counts and calibrated to radiance. That is, calibrated data have units of spectral radiance, while target signatures in the visible regime are commonly characterized in units of reflectance. A necessary precursor to running a target detection algorithm is converting the measured scene data and target signature to the same domain. Atmospheric inversion or compensation is a well-known method for transforming measured scene radiance values into the reflectance domain. While this method may be mathematically trivial, it is computationally attractive and is most effective when illumination conditions are constant across a scene. However, when illumination conditions are not constant for a given scene, significant error may be introduced when applying the same linear inversion globally. In contrast to the inversion methodology, physics-based forward modeling approaches aim to predict the possible ways that a target might appear in a scene using atmospheric and radiometric models. To fully encompass possible target variability due to changing illumination levels, a target vector space is created. In addition to accounting for varying illumination, physics-based model approaches have a distinct advantage in that they can also incorporate target variability due to a variety of other sources, including adjacency effects, target orientation, and mixed pixels. Increasing the variability of the target vector space may be beneficial in a global sense in that it may allow for the detection of difficult targets, such as shadowed or partially concealed targets. However, it should also be noted that expansion of the target space may introduce unnecessary confusion for a given pixel.
Furthermore, traditional physics-based approaches make certain assumptions which may be prudent only when passive, spectral data for a scene are available. Common examples include the assumption of a flat ground plane and pure target pixels. Many of these assumptions may be attributed to the lack of three-dimensional (3D) spatial information for the scene. In the event that 3D spatial information were available, certain assumptions could be relaxed, allowing accurate geometric information to be fed to the physics-based model on a pixel-by-pixel basis. Doing so may effectively constrain the physics-based model, resulting in a pixel-specific target space with optimized variability and minimized confusion. This body of work explores using spatial information from a topographic Light Detection and Ranging (Lidar) system as a means to enhance the fidelity of physics-based models for spectral target detection. The incorporation of subpixel spatial information, relative to a hyperspectral image (HSI) pixel, provides valuable insight about plausible geometric configurations of a target, background, and illumination sources within a scene. Methods for estimating local geometry on a per-pixel basis are introduced; this spatial information is then fed into a physics-based model for the forward prediction of a target in radiance space. The target detection performance based on this spatially-enhanced, spectral target space is assessed relative to current state-of-the-art spectral algorithms.
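The global linear inversion the abstract contrasts against can be illustrated with the empirical line method: fit a gain and offset mapping reflectance to measured radiance from in-scene calibration panels, then invert it for every pixel. This is a generic sketch under assumed panel values, not the dissertation's specific model:

```python
import numpy as np

# Assumed calibration data: two panels of known reflectance observed in-scene.
panel_refl = np.array([0.05, 0.50])   # known panel reflectances
panel_rad  = np.array([12.0, 75.0])   # measured at-sensor radiance (illustrative units)

# Fit the linear model: radiance = gain * reflectance + offset
gain, offset = np.polyfit(panel_refl, panel_rad, 1)

def radiance_to_reflectance(L):
    """Invert the global linear model; valid only under constant illumination."""
    return (L - offset) / gain
```

Applying this single (gain, offset) pair scene-wide is exactly the step that breaks down under spatially varying illumination, which motivates the pixel-specific, Lidar-constrained forward modeling pursued in this work.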

    2D Face Recognition System Based on Selected Gabor Filters and Linear Discriminant Analysis (LDA)

    We present a new approach to face recognition. The method extracts 2D face image features using a subset of non-correlated, orthogonal Gabor filters instead of the whole Gabor filter bank, then compresses the output feature vector using Linear Discriminant Analysis (LDA). The face image is first enhanced using a multi-stage image processing technique to normalize it and compensate for illumination variation. Experimental results show that the proposed system achieves both effective dimension reduction and good recognition performance compared to the complete Gabor filter bank. The system has been tested on the CASIA, ORL, and Cropped YaleB 2D face image databases, achieving an average recognition rate of 98.9%.
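The core idea of keeping only non-correlated filters from a Gabor bank can be sketched with a greedy selection over kernel correlations. The kernel parameterization and the 0.5 correlation threshold below are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def gabor_kernel(ksize, theta, lam, sigma, gamma=0.5):
    """Real part of a 2D Gabor filter (assumed parameterization)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr =  x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

# Full bank: 8 orientations x 3 wavelengths.
bank = [gabor_kernel(21, t, lam, 4.0)
        for t in np.linspace(0, np.pi, 8, endpoint=False)
        for lam in (4.0, 8.0, 16.0)]

def select_uncorrelated(bank, thresh=0.5):
    """Greedily keep kernels whose pairwise correlation stays below thresh."""
    kept = []
    for k in bank:
        v = (k - k.mean()).ravel()
        v /= np.linalg.norm(v)
        if all(abs(v @ u) < thresh for u in kept):
            kept.append(v)
    return kept

subset = select_uncorrelated(bank)
```

Filtering with only the retained subset shrinks the feature vector before the LDA projection, which is where the reported dimension-reduction benefit comes from.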

    ORGB: Offset Correction in RGB Color Space for Illumination-Robust Image Processing

    Pixels of a single material form straight lines in RGB color space. However, in severe shadow cases those lines do not intersect the origin, which is inconsistent with the description in most of the literature. This paper concerns the detection and correction of the offset between that intersection and the origin. First, we analyze the cause of the offset via an optical imaging model. Second, we present a simple and effective way to detect and remove it. The resulting images, named ORGB, have almost the same appearance as the original RGB images while being more illumination-robust for color space conversion. Moreover, image processing in ORGB instead of RGB is free from the interference of shadows. Finally, the proposed offset correction is applied to a road detection task, improving performance in both quantitative and qualitative evaluations. Project website: https://baidut.github.io/ORGB
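The geometric picture above suggests a simple estimate of the offset: fit the line through one material's pixel cloud (here via SVD/PCA) and take the point on that line closest to the origin. This is a minimal sketch of the line-offset geometry, not the paper's detection algorithm:

```python
import numpy as np

def rgb_offset(pixels):
    """Offset of a material's RGB line from the origin.

    pixels: (N, 3) array of one material's RGB values, assumed to lie
    near a line mean + t * d. Returns the point on the fitted line
    closest to the origin; subtracting it makes the line pass through 0.
    """
    mean = pixels.mean(axis=0)
    d = np.linalg.svd(pixels - mean)[2][0]   # principal direction of the cloud
    return mean - (mean @ d) * d             # remove the along-line component

# Synthetic check: pixels on a line that misses the origin by a known offset.
rng = np.random.default_rng(1)
d = np.array([0.6, 0.5, 0.62]); d /= np.linalg.norm(d)
true_offset = np.array([10.0, -4.0, 6.0])
true_offset -= (true_offset @ d) * d         # make the offset orthogonal to d
pts = true_offset + np.outer(rng.uniform(20, 200, 500), d)
est = rgb_offset(pts)
```

Subtracting the recovered offset from every pixel restores the origin-intersecting line model that shadow-invariant color processing assumes.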