
    High-resolution adaptive imaging with a single photodiode

    During the past few years, the emergence of spatial light modulators operating at tens of kHz has enabled new imaging modalities based on single-pixel photodetectors. The nature of single-pixel imaging enforces a reciprocal relationship between frame rate and image size. Compressive imaging methods allow images to be reconstructed from a number of projections that is only a fraction of the number of pixels. In microscopy, single-pixel imaging can produce images with a moderate size of 128 × 128 pixels at frame rates under one Hz. Recently, there has been considerable interest in developing advanced techniques for high-resolution real-time operation in applications such as biological microscopy. Here, we introduce an adaptive compressive technique based on wavelet trees within this framework. In our adaptive approach, the resolution of the projected patterns remains deliberately small, which is crucial to avoid the demanding memory requirements of compressive sensing algorithms. At pattern projection rates of 22.7 kHz, our technique would make it possible to obtain 128 × 128 pixel images at frame rates around 3 Hz. In our experiments, we demonstrate a cost-effective solution employing a commercial projection display.
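    The quoted figures can be sanity-checked with a quick back-of-the-envelope computation, under the assumption that every projected pattern yields one single-pixel measurement:

        # Back-of-the-envelope check of the frame-rate / image-size trade-off
        # quoted above: 22.7 kHz pattern rate, 128 x 128 pixels, ~3 Hz frames.
        pattern_rate_hz = 22_700          # SLM/DMD pattern projection rate
        frame_rate_hz = 3                 # target frame rate
        n_pixels = 128 * 128              # image size

        patterns_per_frame = pattern_rate_hz / frame_rate_hz   # ~7567 projections
        compression_ratio = patterns_per_frame / n_pixels      # ~0.46

        print(f"{patterns_per_frame:.0f} patterns/frame, "
              f"{compression_ratio:.0%} of full sampling")

    At roughly 46% of full sampling, the numbers are consistent with reconstructing each frame from a fraction of the number of pixels, as stated above.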

    Route to intelligent imaging reconstruction via terahertz nonlinear ghost imaging

    Terahertz (THz) imaging is a rapidly emerging field, thanks to many potential applications in diagnostics, manufacturing, medicine and material characterisation. However, the relatively coarse resolution stemming from the large wavelength limits the deployment of THz imaging in micro- and nano-technologies, keeping its potential benefits out of reach in many practical scenarios and devices. In this context, single-pixel techniques are a promising alternative to imaging arrays, in particular when targeting subwavelength resolutions. In this work, we discuss the key advantages and practical challenges in the implementation of time-resolved nonlinear ghost imaging (TIMING), an imaging technique combining nonlinear THz generation with time-resolved time-domain spectroscopy detection. We numerically demonstrate the high-resolution reconstruction of semi-transparent samples, and we show how the Walsh–Hadamard reconstruction scheme can be optimised to significantly reduce the reconstruction time. We also discuss how, in sharp contrast with traditional intensity-based ghost imaging, the field detection at the heart of TIMING enables high-fidelity image reconstruction via low numerical-aperture detection. Even more striking, and to the best of our knowledge an issue never tackled before, the general concept of “resolution” of the imaging system as the “smallest feature discernible” appears not to be well suited to describing the fidelity limits of nonlinear ghost-imaging systems. Our results suggest that the drop in reconstruction accuracy stemming from non-ideal detection conditions is complex and not driven by the attenuation of high-frequency spatial components (i.e., blurring) as in standard imaging. On the technological side, we further show how achieving efficient optical-to-terahertz conversion in extremely short propagation lengths is crucial to imaging performance, and we propose low-bandgap semiconductors as a practical framework to obtain THz emission from quasi-2D structures, i.e., structures in which the interaction occurs on a deeply subwavelength scale. Our results establish a comprehensive theoretical and experimental framework for the development of a new generation of terahertz hyperspectral imaging devices.
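    For readers unfamiliar with the underlying measurement model, the following is a minimal intensity-based Walsh–Hadamard single-pixel sketch in NumPy/SciPy. It does not model TIMING's nonlinear THz generation or time-resolved field detection, and the scene is an invented stand-in:

        import numpy as np
        from scipy.linalg import hadamard

        n = 32                              # image is n x n pixels
        H = hadamard(n * n)                 # Walsh-Hadamard patterns, +/-1 entries

        rng = np.random.default_rng(0)
        scene = rng.random((n, n))          # stand-in for the sample transmission
        x = scene.ravel()

        # Each row of H is one projected pattern; the single-pixel detector
        # records its inner product with the scene.
        measurements = H @ x

        # Hadamard matrices satisfy H @ H.T = (n*n) * I, so inversion is just
        # a transpose and a rescale.
        recovered = (H.T @ measurements) / (n * n)
        assert np.allclose(recovered, x)

    Because the transform is orthogonal, reconstruction can also be carried out with a fast Walsh–Hadamard transform in O(N log N) rather than O(N^2) operations, which is one standard route to cutting reconstruction time.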

    Phase imaging by spatial wavefront sampling

    Phase-imaging techniques extract the optical path length information of a scene, whereas wavefront sensors provide the shape of an optical wavefront. Since these two applications have different technical requirements, they have developed their own specific technologies. Here we show how to perform phase imaging by combining wavefront sampling with a reconfigurable spatial light modulator and a beam position detector. The result is a time-multiplexed detection scheme whose acquisition time can be shortened considerably by compressive sensing. This robust, referenceless method does not require the phase-unwrapping algorithms demanded by conventional interferometry, and its lenslet-free nature removes trade-offs usually found in Shack–Hartmann sensors.
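    The slope-to-phase principle behind such measurements can be illustrated with a toy simulation; the grid, toy wavefront and naive path integrator below are illustrative assumptions, not the authors' reconstruction procedure:

        import numpy as np

        n = 32
        yy, xx = np.mgrid[0:n, 0:n] / n
        phase = 3.0 * xx**2 + 1.5 * yy        # smooth toy wavefront (radians)

        # "Measurements": the beam-position detector reads a deflection
        # proportional to the local phase slope at each sampled aperture.
        gx = np.gradient(phase, axis=1)       # slope along x
        gy = np.gradient(phase, axis=0)       # slope along y

        # Naive path integration: down the first column, then along each row.
        # Practical systems use noise-robust least-squares reconstructors.
        est = np.zeros_like(phase)
        est[:, 0] = np.cumsum(gy[:, 0]) - gy[0, 0]
        est += np.cumsum(gx, axis=1) - gx[:, :1]
        est += phase[0, 0] - est[0, 0]        # remove the arbitrary piston term

        print(f"max reconstruction error: {np.abs(est - phase).max():.3f} rad")

    For a smooth wavefront the residual stays small; handling measurement noise is where least-squares reconstructors become necessary.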

    Advanced Optical Technologies in Food Quality and Waste Management

    Food waste is a global problem caused in large part by premature food spoilage. Seafood is especially prone to food waste because it spoils easily. Of the annual 4.7 billion pounds of seafood destined for U.S. markets between 2009 and 2013, 40 to 47 percent ended up as waste. This problem is due in large part to a lack of available technologies to enable rapid, accurate, and reliable valorization of food products from boat or farm to table. Fortunately, recent advancements in spectral sensing technologies and spectroscopic analyses show promise for addressing this problem. Not only could these advancements help to solve hunger issues in impoverished regions of the globe, but they could also benefit the average consumer by enabling intelligent pricing of food products based on projected shelf life. Additional technologies that enforce trust and compliance (e.g., blockchain) could further serve to prevent food fraud by maintaining records of spoilage conditions and other quality validation at all points along the food supply chain, and could provide improved transparency regarding contract performance and attribution of liability. In this chapter we discuss technologies that have enabled the development of hand-held spectroscopic devices for detecting food spoilage. We also discuss some of the analytical methods used to classify and quantify spoilage based on spectral measurements.
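    As a hypothetical example of the kind of chemometric analysis such chapters typically survey, one common pattern is dimensionality reduction followed by a classifier on measured spectra. The "fresh vs. spoiled" spectra below are synthetic stand-ins, not data from the chapter:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        wavelengths = np.linspace(400, 1000, 120)    # nm, visible to NIR

        def spectra(n, peak_nm):
            # Toy Gaussian absorbance band plus measurement noise.
            band = np.exp(-((wavelengths - peak_nm) / 60.0) ** 2)
            return band + 0.05 * rng.standard_normal((n, wavelengths.size))

        X = np.vstack([spectra(100, 550), spectra(100, 620)])  # fresh / spoiled
        y = np.repeat([0, 1], 100)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = make_pipeline(PCA(n_components=5), LogisticRegression())
        model.fit(X_tr, y_tr)
        print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")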

    Exploring information retrieval using image sparse representations: from circuit designs and acquisition processes to specific reconstruction algorithms

    New advances in the field of image sensors (especially in CMOS technology) tend to question the conventional methods used to acquire images. Compressive Sensing (CS) plays a major role here, especially in unclogging the analog-to-digital converters that generally represent the bottleneck of this type of sensor. In addition, CS eliminates the traditional compression stages performed by embedded digital signal processors dedicated to this purpose. The interest is twofold: it consistently reduces the amount of data to be converted, and it also suppresses digital processing performed outside the sensor chip. For the moment, regarding the use of CS in image sensors, the main route of exploration as well as the intended applications aim at reducing the power consumption related to these components (ADC and DSP represent 99% of the total power consumption). More broadly, the paradigm of CS makes it possible to question, or at least to extend, the Nyquist–Shannon sampling theory. This thesis presents developments in the field of image sensors demonstrating that it is possible to consider alternative applications linked to CS. Indeed, advances are presented in the fields of hyperspectral imaging, super-resolution, high dynamic range, high speed and non-uniform sampling. In particular, three research axes have been deepened, aiming to design proper architectures and acquisition processes, with their associated reconstruction techniques, taking advantage of image sparse representations. How can the on-chip implementation of compressive sensing relax sensor constraints and improve the acquisition characteristics (speed, dynamic range, power consumption)? How can CS be combined with simple analysis to provide useful image features for high-level applications (adding semantic information) and improve the reconstructed image quality at a given compression ratio? Finally, how can CS overcome physical limitations (i.e., spectral sensitivity and pixel pitch) of imaging systems without a major impact on either the sensing strategy or the optical elements involved? A CMOS image sensor was developed and manufactured during this Ph.D. to validate concepts such as high-dynamic-range CS. A new design approach was employed, resulting in innovative solutions for pixel addressing and conversion to perform specific acquisition in a compressed mode. In addition, the principle of adaptive CS combined with non-uniform sampling has been developed, and possible implementations of this type of acquisition are proposed. Finally, preliminary work is presented on the use of liquid crystal devices to enable hyperspectral imaging combined with spatial super-resolution. The conclusion of this study can be summarized as follows: CS must now be considered as a toolbox for more easily defining compromises between the different characteristics of a sensor: integration time, converter speed, dynamic range, resolution and digital processing resources. However, while CS relaxes some material constraints at the sensor level, the collected data may be difficult to interpret and process at the decoder side, involving massive computational resources compared to so-called conventional techniques. The application field is wide, implying that for a targeted application, an accurate characterization of the constraints concerning both the sensor (encoder) and the decoder needs to be defined.
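    As background for readers new to CS, here is a minimal recovery sketch in plain NumPy, using ISTA (iterative soft thresholding) as a generic solver; the sizes and the toy signal are illustrative assumptions, unrelated to the manufactured sensor's actual readout:

        import numpy as np

        rng = np.random.default_rng(1)
        n, m, k = 256, 96, 8            # ambient size, measurements, sparsity

        x = np.zeros(n)                 # k-sparse ground truth
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

        A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
        y = A @ x                                      # compressed acquisition

        # ISTA: gradient step on ||Ax - y||^2, then soft thresholding.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        lam = 0.01
        x_hat = np.zeros(n)
        for _ in range(500):
            r = x_hat - step * (A.T @ (A @ x_hat - y))
            x_hat = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)

        print(f"relative error: {np.linalg.norm(x_hat - x) / np.linalg.norm(x):.3f}")

    Acquiring m << n measurements on-chip is precisely what lets a CS sensor trade converter throughput and off-chip data volume against decoder-side computation, the compromise the thesis highlights.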

    Arrayed LiDAR signal analysis for automotive applications

    Light detection and ranging (LiDAR) is one of the enabling technologies for advanced driver assistance and autonomy. Advances in solid-state photon detector arrays offer the potential of high-performance LiDAR systems but require novel signal processing approaches to fully exploit the dramatic increase in data volume an arrayed detector can provide. This thesis presents two approaches applicable to arrayed solid-state LiDAR. First, a novel block-independent sparse depth reconstruction framework is developed, which utilises a random and very sparse illumination scheme to reduce illumination density while improving sampling times, which remain constant for any array size. Compressive sensing (CS) principles are used to reconstruct depth information from small measurement subsets. The smaller problem size of blocks reduces reconstruction complexity, improves compressive depth reconstruction performance and enables fast concurrent processing. A feasibility study of a system proposal for this approach demonstrates that the required logic could be practically implemented within detector size constraints. Second, a novel deep learning architecture called LiDARNet is presented to localise surface returns from LiDAR waveforms with high throughput. This single data-driven processing approach can unify a wide range of scenarios through a training-by-simulation methodology, which augments real datasets with challenging simulated conditions such as multiple returns and high noise variance, while enabling rapid prototyping of fast data-driven processing approaches for arrayed LiDAR systems. Both approaches are fast and practical processing methodologies for arrayed LiDAR systems. They retrieve depth information with excellent depth resolution over wide operating ranges, and are demonstrated on real and simulated data. LiDARNet is a rapid approach to determining surface locations from LiDAR waveforms for efficient point cloud generation, while block sparse depth reconstruction is an efficient method to facilitate high-resolution depth maps at high frame rates with reduced power and memory requirements. Engineering and Physical Sciences Research Council (EPSRC).
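    As a point of reference for the waveform-processing problem LiDARNet addresses, a classical matched-filter baseline can localise a return when the pulse shape is known; the pulse shape, noise level and bin width below are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(2)
        n_bins = 1024
        bin_width_ns = 0.1                      # ~1.5 cm of range per bin

        pulse = np.exp(-0.5 * (np.arange(-20, 21) / 4.0) ** 2)  # Gaussian pulse
        waveform = np.zeros(n_bins)
        true_bin = 400
        waveform[true_bin - 20:true_bin + 21] += 5.0 * pulse    # single return
        waveform += rng.poisson(0.5, n_bins)                    # background counts

        # Matched filtering maximises SNR for a known pulse shape; the surface
        # return sits at the correlation peak.
        score = np.correlate(waveform, pulse, mode="same")
        est_bin = int(np.argmax(score))
        depth_m = est_bin * bin_width_ns * 1e-9 * 3e8 / 2       # time of flight
        print(f"estimated bin {est_bin} (true {true_bin}), depth {depth_m:.2f} m")

    A learned model such as LiDARNet aims to keep this localisation accurate when the pulse shape, noise statistics or number of returns vary, the cases where a fixed matched filter degrades.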

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and in a wide variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand, ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.