86 research outputs found

    Photon counting compressive depth mapping

    We demonstrate a compressed sensing, photon counting lidar system based on the single-pixel camera. Our technique recovers both depth and intensity maps from a single under-sampled set of incoherent, linear projections of a scene of interest at ultra-low light levels around 0.5 picowatts. Only two-dimensional reconstructions are required to image a three-dimensional scene. We demonstrate intensity imaging and depth mapping at 256 × 256 pixel transverse resolution with acquisition times as short as 3 seconds. We also show novelty filtering, reconstructing only the difference between two instances of a scene. Finally, we acquire 32 × 32 pixel real-time video for three-dimensional object tracking at 14 frames per second. (16 pages, 8 figures)
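As a rough illustration of the reconstruction step above (not the authors' solver; the pattern matrix, sparsity level, and parameters are all hypothetical), a single-pixel camera records m < n random projections y = Φx of a scene and recovers a sparse x by iterative soft thresholding:

```python
import numpy as np

def ista(Phi, y, lam=0.01, n_iter=500):
    """Recover a sparse signal x from y = Phi @ x via iterative
    soft-thresholding (a basic compressed-sensing solver)."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2          # 1/Lipschitz step size
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)                  # gradient of 0.5||Phi x - y||^2
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                                  # scene size, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)        # random projection patterns
y = Phi @ x_true                                      # under-sampled measurements (m < n)
x_hat = ista(Phi, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small relative error
```

In the actual system each row of Φ corresponds to a spatial light modulator pattern, and the "signal" is the vectorized image; the same solver shape applies per depth/intensity map.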

    Efficient high-dimensional entanglement imaging with a compressive sensing, double-pixel camera

    We implement a double-pixel, compressive sensing camera to efficiently characterize, at high resolution, the spatially entangled fields produced by spontaneous parametric downconversion. This technique leverages sparsity in spatial correlations between entangled photons to improve acquisition times over raster-scanning by a scaling factor up to n^2/log(n) for n-dimensional images. We image at resolutions up to 1024 dimensions per detector and demonstrate a channel capacity of 8.4 bits per photon. By comparing the classical mutual information in conjugate bases, we violate an entropic Einstein-Podolsky-Rosen separability criterion for all measured resolutions. More broadly, our result indicates compressive sensing can be especially effective for higher-order measurements on correlated systems. (10 pages, 7 figures)
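The bits-per-photon figure above is the classical mutual information of the joint detection distribution across the two detectors. A minimal sketch of that computation (the perfectly correlated toy joint table is hypothetical, not the measured data):

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(X;Y) in bits from a joint count table."""
    p = joint / joint.sum()                  # normalize counts to probabilities
    px = p.sum(axis=1, keepdims=True)        # marginal of detector 1
    py = p.sum(axis=0, keepdims=True)        # marginal of detector 2
    nz = p > 0                               # skip zero-probability cells
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

# Idealized toy case: photon pairs perfectly correlated over d = 16 positions
d = 16
perfect = np.eye(d)
print(mutual_information(perfect))           # log2(16) = 4.0 bits per pair
```

Real coincidence data would replace the identity matrix; cross-talk and accidentals spread probability off the diagonal and lower the mutual information.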

    Compressive Point Cloud Super Resolution

    Automatic target recognition (ATR) is the ability of a computer to discriminate between different objects in a scene. ATR is often performed on point-cloud data from a sensor known as a LADAR. Increasing the resolution of this point cloud, to obtain a clearer view of the object in a scene, would be of significant interest in an ATR application. A technique for increasing the resolution of a scene is known as super resolution. This technique traditionally requires many low-resolution images that can be combined; in recent years, however, it has become possible to perform super resolution on a single image. This thesis applies Gabor wavelets and compressive sensing to single-image super resolution of digital images of natural scenes. The technique is then extended to allow super resolution of a point cloud.
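A 2-D Gabor wavelet of the kind used in such pipelines can be built in a few lines. This is the generic textbook kernel with hypothetical parameter choices, not the thesis code:

```python
import numpy as np

def gabor_kernel(size=15, wavelength=4.0, theta=0.0, sigma=3.0, gamma=0.5):
    """Real part of a 2-D Gabor wavelet: a Gaussian-windowed sinusoid
    oriented at angle theta, commonly used as an edge/texture filter."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

k = gabor_kernel()
print(k.shape)   # (15, 15); a bank at several thetas/wavelengths forms the dictionary
```

Convolving an image with a bank of these kernels at multiple orientations and scales yields the sparse feature representation that the compressive-sensing step operates on.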

    Real-time computational photon-counting LiDAR

    The availability of compact, low-cost, and high-speed MEMS-based spatial light modulators has generated widespread interest in alternative sampling strategies for imaging systems utilizing single-pixel detectors. The development of compressed sensing schemes for real-time computational imaging may have promising commercial applications for high-performance detectors, where focal-plane arrays are expensive or otherwise limited in availability. We discuss the research and development of a prototype light detection and ranging (LiDAR) system based on direct time of flight, which utilizes a single high-sensitivity photon-counting detector and fast-timing electronics to recover millimeter-accuracy three-dimensional images in real time. The development of low-cost, real-time computational LiDAR systems could be important for applications in security, defense, and autonomous vehicles.
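The direct time-of-flight principle behind the millimeter-accuracy claim reduces to depth = c·Δt/2 (the pulse travels out and back), so millimeter precision requires timing resolution of a few picoseconds. A minimal sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_depth(dt_s):
    """Direct time-of-flight range: the pulse travels out and back,
    so the one-way depth is c * dt / 2."""
    return C * dt_s / 2.0

# Millimetre accuracy needs timing resolution of about
# 2 * 1e-3 / c ≈ 6.7 picoseconds.
print(tof_depth(6.671e-12))   # ≈ 0.001 m
```

This is why the system above pairs a photon-counting detector with fast-timing electronics: the depth resolution is set entirely by the timing jitter of that chain.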

    Quantitative thermal imaging using single-pixel Si APD and MEMS mirror

    Accurate quantitative temperature measurements are difficult to achieve using focal-plane array sensors. This is due to reflections inside the instrument and the difficulty of calibrating a matrix of pixels as identical radiation thermometers. Size-of-source effect (SSE), which is the dependence of an infrared temperature measurement on the area surrounding the target area, is a major contributor to this problem and cannot be reduced using glare stops. Measurements are affected by power received from outside the field-of-view (FOV), leading to increased measurement uncertainty. In this work, we present a microelectromechanical systems (MEMS) mirror based scanning thermal imaging camera with reduced measurement uncertainty compared to focal-plane array based systems. We demonstrate our flexible imaging approach using a Si avalanche photodiode (APD), which utilises high internal gain to enable the measurement of lower target temperatures at an effective wavelength of 1 ”m, and compare results with a Si photodiode. We compare measurements from our APD thermal imaging instrument against a commercial bolometer-based focal-plane array camera. Our scanning approach reduces the SSE-related temperature error by 66 °C for the measurement of a spatially uniform 800 °C target when the target aperture diameter is increased from 10 to 20 mm. We also find that our APD instrument is capable of measuring target temperatures below 700 °C, over these near-infrared wavelengths, with a D*-related measurement uncertainty of ± 0.5 °C.
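The advantage of a short (1 ”m) effective wavelength comes from the steepness of Planck's law there: radiance changes severalfold between 700 °C and 800 °C, so small radiance errors map to small temperature errors. A minimal sketch of that sensitivity (generic physics, not the paper's calibration procedure):

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 299792458.0      # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance (Planck's law), W / (m^2 sr m)."""
    a = 2 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / math.expm1(b)

# At a 1 um effective wavelength the signal is extremely steep in T:
lo = planck_radiance(1e-6, 700 + 273.15)
hi = planck_radiance(1e-6, 800 + 273.15)
print(hi / lo)   # roughly a 4x radiance change over 100 degrees C
```

The same 100 °C interval at a bolometer's long-wave band would change the radiance far less, which is one reason the short-wavelength APD instrument can achieve tighter temperature uncertainty.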


    Real applications of quantum imaging

    In recent years, the ability to create and manipulate quantum states of light has paved the way for new technologies that exploit distinctive quantum properties, such as quantum information, quantum metrology and sensing, and quantum imaging. In particular, quantum imaging addresses the possibility of overcoming the limits of classical optics by using quantum resources such as entanglement or sub-Poissonian statistics. Although quantum imaging is a more recent field than other quantum technologies, e.g. quantum information, it is now substantially mature for application. Several different protocols have been proposed, some of them only theoretically, others with an experimental implementation, and a few of them pointing to a clear application. Here we present a few of the most mature protocols, ranging from ghost imaging to sub-shot-noise imaging and sub-Rayleigh imaging. (Review paper)

    DEEP INFERENCE ON MULTI-SENSOR DATA

    Computer vision-based intelligent autonomous systems engage various types of sensors to perceive the world they navigate in. Vision systems perceive their environments through inferences on entities (structures, humans) and their attributes (pose, shape, materials) that are sensed using RGB and Near-InfraRed (NIR) cameras, LAser Detection And Ranging (LADAR), radar and so on. This leads to challenging and interesting problems in efficient data-capture, feature extraction, and attribute estimation, not only for RGB but for various other sensors. In some cases, we encounter very limited amounts of labeled training data. In certain other scenarios we have sufficient data, but annotations are unavailable for supervised learning. This dissertation explores two approaches to learning under conditions of minimal to no ground truth. The first approach applies projections to training data that make learning efficient by improving training dynamics. The first and second topics in this dissertation belong to this category. The second approach makes learning without ground truth possible via knowledge transfer from a labeled source domain to an unlabeled target domain through projections to domain-invariant shared latent spaces. The third and fourth topics in this dissertation belong to this category. For the first topic we study the feasibility and efficacy of identifying shapes in LADAR data in several measurement modes. We present results on efficient parameter learning with less data (for both traditional machine learning as well as deep models) on LADAR images. We use a LADAR apparatus to obtain range information from a 3-D scene by emitting laser beams and collecting the reflected rays from target objects in the region of interest. The Agile Beam LADAR concept makes the measurement and interpretation process more efficient using a software-defined architecture that leverages computational imaging principles.
Using these techniques, we show that object identification and scene understanding can be accurately performed in the LADAR measurement domain, thereby rendering pixel-based scene reconstruction superfluous. Next, we explore the effectiveness of deep features extracted by Convolutional Neural Networks (CNNs) in the Discrete Cosine Transform (DCT) domain for various image classification tasks such as pedestrian and face detection, material identification and object recognition. We perform the DCT operation on the feature maps generated by convolutional layers in CNNs. We compare the performance of the same network, with the same hyper-parameters, with and without the DCT step. Our results indicate that a DCT operation incorporated into the network after the first convolution layer can have certain advantages, such as convergence over fewer training epochs and sparser weight matrices that are more conducive to pruning and hashing techniques. Next, we present an adversarial deep domain adaptation (ADA)-based approach for training deep neural networks that fit 3D meshes on humans in monocular RGB input images. Estimating a 3D mesh from a 2D image is helpful in harvesting complete 3D information about body pose and shape. However, learning such an estimation task in a supervised way is challenging owing to the fact that ground truth 3D mesh parameters for real humans do not exist. We propose a domain adaptation based single-shot (no re-projection, no iterative refinement), end-to-end training approach with joint optimization on real and synthetic images on a shared common task. Through joint inference on real and synthetic data, the network extracts domain-invariant features that are further used to estimate the 3D mesh parameters in a single shot with no supervision on real samples.
While we compute regression loss on synthetic samples with ground truth mesh parameters, knowledge is transferred from synthetic to real data through ADA without direct ground truth for supervision. Finally, we propose a partially supervised method for satellite image super-resolution by learning a unified representation of samples from different domains (captured by different sensors) in a shared latent space. The training samples are drawn from two datasets which we refer to as source and target domains. The source domain consists of fewer samples, which are of higher resolution and contain very detailed and accurate annotations. In contrast, samples from the target domain are low-resolution and available ground truth is sparse. The pipeline consists of a feature extractor and a super-resolving module which are trained end-to-end. Using a deep feature extractor, we jointly learn (on two datasets) a common embedding space for all samples. Partial supervision is available for the samples in the source domain, which have high-resolution ground truth. Adversarial supervision is used to successfully super-resolve low-resolution RGB satellite imagery from the target domain without direct paired supervision from high-resolution counterparts.
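The DCT-on-feature-maps idea above can be sketched as a drop-in transform after a convolution layer — a minimal toy illustration, not the dissertation's implementation; the function names and shapes are hypothetical:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)                 # DC row scaling for orthonormality
    return m

def dct2_feature_maps(feats):
    """Apply a 2-D DCT to each channel of a (C, H, W) feature tensor,
    i.e. D_h @ X_c @ D_w.T per channel."""
    c, h, w = feats.shape
    dh, dw = dct_matrix(h), dct_matrix(w)
    return np.einsum('ij,cjk,lk->cil', dh, feats, dw)

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 8))   # toy post-convolution activations
coeffs = dct2_feature_maps(feats)
# The transform is orthonormal, so it preserves energy (Parseval) while
# concentrating it into few coefficients - the sparsity the text refers to:
print(np.allclose((feats**2).sum(), (coeffs**2).sum()))
```

In a real network this layer would sit between the first convolution and the next layer; because the transform is linear and invertible, no information is lost, only re-expressed in a sparser basis.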

    Quantum Communication, Sensing and Measurement in Space

    The main theme of the conclusions drawn for classical communication systems operating at optical or higher frequencies is that there is a well-understood performance gain in photon efficiency (bits/photon) and spectral efficiency (bits/s/Hz) by pursuing coherent-state transmitters (classical ideal laser light) coupled with novel quantum receiver systems operating near the Holevo limit (e.g., joint detection receivers). However, recent research indicates that these receivers will require nonlinear and nonclassical optical processes and components at the receiver. Consequently, the implementation complexity of Holevo-capacity-approaching receivers is not yet fully ascertained. Nonetheless, because the potential gain is significant (e.g., the projected photon efficiency and data rate of MIT Lincoln Laboratory's Lunar Lasercom Demonstration (LLCD) could be achieved with a factor-of-20 reduction in the modulation bandwidth requirement), focused research activities on ground-receiver architectures that approach the Holevo limit in space-communication links would be beneficial. The potential gains resulting from quantum-enhanced sensing systems in space applications have not been laid out as concretely as some of the other areas addressed in our study. In particular, while the study period has produced several interesting high-risk and high-payoff avenues of research, more detailed seedling-level investigations are required to fully delineate the potential return relative to the state of the art. Two prominent examples are (1) improvements to pointing, acquisition and tracking systems (e.g., for optical communication systems) by way of quantum measurements, and (2) possible weak-valued measurement techniques to attain high-accuracy sensing systems for in situ or remote-sensing instruments.
While these concepts are technically sound and have very promising bench-top demonstrations in a lab environment, they are not mature enough to realistically evaluate their performance in a space-based application. Therefore, it is recommended that future work pursue small, focused efforts toward incorporating the practical constraints imposed by a space environment. The space platform has been well recognized as a nearly ideal environment for some of the most precise tests of fundamental physics, and the ensuing potential of scientific advances enabled by quantum technologies is evident in our report. For example, an exciting concept that has emerged for gravitational-wave detection is that the intermediate frequency band spanning 0.01 to 10 Hz—which is inaccessible from the ground—could be accessed at unprecedented sensitivity with a space-based interferometer that uses shorter arms relative to the state of the art to keep the diffraction losses low, and employs frequency-dependent squeezed light to surpass the standard quantum limit sensitivity. This offers the potential to open up a new window into the universe, revealing the behavior of compact astrophysical objects and pulsars. As another set of examples, research accomplishments in the atomic and optics fields in recent years have ushered in a number of novel clocks and sensors that can achieve unprecedented measurement precisions. These emerging technologies promise new possibilities in fundamental physics, examples of which are tests of relativistic gravity theory, universality of free fall, frame-dragging precession, the gravitational inverse-square law at micron scale, and new ways of gravitational wave detection with atomic inertial sensors. While the relevant technologies and their discovery potentials have been well demonstrated on the ground, there exists a large gap to space-based systems.
To bridge this gap and to advance fundamental-physics exploration in space, focused investments that further mature promising technologies, such as space-based atomic clocks and quantum sensors based on atom-wave interferometers, are recommended. Bringing a group of experts from diverse technical backgrounds together in a productive interactive environment spurred some unanticipated innovative concepts. One promising concept is the possibility of utilizing a space-based interferometer as a frequency reference for terrestrial precision measurements. Space-based gravitational wave detectors depend on extraordinarily low noise in the separation between spacecraft, resulting in an ultra-stable frequency reference that is several orders of magnitude better than state-of-the-art terrestrial frequency references. The next steps in developing this promising new concept are simulations and measurement of atmospheric effects that may limit performance due to non-reciprocal phase fluctuations. In summary, this report covers a broad spectrum of possible new opportunities in space science, as well as enhancements in the performance of communication and sensing technologies, based on observing, manipulating and exploiting the quantum-mechanical nature of our universe. In our study we identified a range of exciting new opportunities to capture the revolutionary capabilities resulting from quantum enhancements. We believe that pursuing these opportunities has the potential to positively impact the NASA mission in both the near term and the long term. In this report we lay out the research and development paths that we believe are necessary to realize these opportunities and capitalize on the gains quantum technologies can offer.
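The photon-efficiency gain behind the Holevo-limit discussion above can be illustrated with the standard textbook formulas: the Holevo capacity of an ideal coherent-state link is g(N̄) = (N̄+1)·log₂(N̄+1) − N̄·log₂N̄ bits per mode, versus log₂(1+N̄) for an ideal heterodyne receiver. A minimal sketch under an idealized lossless-channel assumption (not taken from the report itself):

```python
import math

def g(x):
    """Von Neumann entropy of a thermal state with mean photon number x, in bits."""
    if x <= 0:
        return 0.0
    return (x + 1) * math.log2(x + 1) - x * math.log2(x)

def holevo_bits_per_photon(n_mean):
    """Holevo limit of an ideal coherent-state link, normalized per photon."""
    return g(n_mean) / n_mean

def heterodyne_bits_per_photon(n_mean):
    """Shannon capacity of an ideal heterodyne receiver, per photon."""
    return math.log2(1 + n_mean) / n_mean

# The gap widens dramatically in the photon-starved regime typical of
# deep-space links (mean photon number per mode << 1):
for n in (0.01, 0.1, 1.0):
    print(n, holevo_bits_per_photon(n), heterodyne_bits_per_photon(n))
```

At N̄ = 0.01 the Holevo limit is roughly 8 bits/photon while heterodyne gives about 1.4, which is the kind of multi-fold efficiency gain that motivates joint-detection receiver research.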

    Event-based processing of single photon avalanche diode sensors

    Single Photon Avalanche Diode (SPAD) sensor arrays operating in direct time-of-flight mode can perform 3D imaging using pulsed lasers. Operating at high frame rates, SPAD imagers typically generate large volumes of noisy and largely redundant spatio-temporal data, resulting in communication bottlenecks and unnecessary data processing. In this work, we propose a neuromorphic processing solution to this problem. By processing the spatio-temporal patterns generated by the SPADs in a local, event-based manner, the proposed 128 × 128 pixel sensor-processor system reduces the size of output data from the sensor by orders of magnitude while increasing the utility of the output data in the context of challenging recognition tasks. To test the proposed system, the first large-scale complex SPAD imaging dataset is captured using an existing 32 × 32 pixel sensor. The generated dataset consists of 24,000 recordings and involves high-speed view-invariant recognition of airplanes with background clutter. The frame-based SPAD imaging dataset is converted via several alternative methods into event-based data streams and processed using the proposed 125 × 125 receptive field neuromorphic processor as well as a range of feature extractor networks and pooling methods. The output of the proposed event generation methods is then processed by an event-based feature extraction and classification system implemented in FPGA hardware. The event-based processing methods are compared to processing the original frame-based dataset via frame-based but otherwise identical architectures. The results show that the event-based methods are superior to the frame-based approach in terms of both classification accuracy and output data rate.
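The frame-to-event conversion step described above can be sketched as a simple temporal-contrast rule: emit a sparse event only where a pixel's photon count changes between frames. This is a generic illustration with a hypothetical threshold, not one of the paper's conversion methods:

```python
import numpy as np

def frames_to_events(frames, threshold=2):
    """Convert a stack of photon-count frames (T, H, W) into sparse
    events (t, y, x, polarity), emitted whenever a pixel's count changes
    by at least `threshold` relative to the previous frame."""
    events = []
    prev = frames[0].astype(np.int32)
    for t in range(1, len(frames)):
        cur = frames[t].astype(np.int32)
        diff = cur - prev
        ys, xs = np.nonzero(np.abs(diff) >= threshold)   # changed pixels only
        for y, x in zip(ys, xs):
            events.append((t, int(y), int(x), 1 if diff[y, x] > 0 else -1))
        prev = cur
    return events

# Toy 3-frame sequence: one pixel brightens at t=1, everything else is static
frames = np.zeros((3, 4, 4), dtype=np.int32)
frames[1, 1, 1] = 5
frames[2, 1, 1] = 5
print(frames_to_events(frames))   # [(1, 1, 1, 1)] - static pixels emit nothing
```

Because only changed pixels produce output, a mostly static scene yields orders of magnitude fewer events than raw frames, which is the data-rate reduction the abstract reports.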
    • 

    corecore