
    Towards Large Scale CMOS Single-Photon Detector Arrays for Lab-on-Chip Applications

    Single-photon detection is useful in many domains requiring time-resolved imaging, high sensitivity and high dynamic range. In this paper the miniaturization and performance potential of solid-state single-photon detectors are discussed in the context of lab-on-chip applications to which high accuracy and/or high levels of parallelism are suited. Technological and design trade-offs are discussed in view of recent advances in integrated LED matrix technology and the emergence of new multiplication-based architectures.

    Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second

    Recent advances in imaging sensors and digital light projection technology have facilitated rapid progress in 3D optical sensing, enabling 3D surfaces of complex-shaped objects to be captured with improved resolution and accuracy. However, due to the large number of projection patterns required for phase recovery and disambiguation, the maximum frame rates of current 3D shape measurement techniques are still limited to the range of hundreds of frames per second (fps). Here, we demonstrate a new 3D dynamic imaging technique, Micro Fourier Transform Profilometry (μFTP), which can capture 3D surfaces of transient events at up to 10,000 fps based on our newly developed high-speed fringe projection system. Compared with existing techniques, μFTP has the prominent advantage of recovering an accurate, unambiguous, and dense 3D point cloud with only two projected patterns. Furthermore, the phase information is encoded within a single high-frequency fringe image, thereby allowing motion-artifact-free reconstruction of transient events with a temporal resolution of 50 microseconds. To show μFTP's broad utility, we use it to reconstruct 3D videos of four transient scenes: vibrating cantilevers, rotating fan blades, a bullet fired from a toy gun, and a balloon's explosion triggered by a flying dart, which were previously difficult or even impossible to capture with conventional approaches. Comment: This manuscript was originally submitted on 30th January 1
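
    For readers unfamiliar with the underlying phase-retrieval step, classical Fourier transform profilometry extracts a wrapped phase map from a single high-frequency fringe image by isolating the carrier lobe in the Fourier domain. The Python sketch below is a toy illustration of that principle only; the carrier frequency, filter width and synthetic surface are assumptions, and the unwrapping and disambiguation steps that μFTP addresses are omitted.

    import numpy as np

    # Synthetic fringe image: carrier frequency f0 (cycles/pixel) modulated by a smooth phase.
    h, w, f0 = 256, 256, 0.1
    yy, xx = np.mgrid[0:h, 0:w]
    phase_true = 3.0 * np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2) / 4000.0)   # toy surface
    fringe = 0.5 + 0.5 * np.cos(2 * np.pi * f0 * xx + phase_true)

    # Fourier transform profilometry: isolate the +f0 carrier lobe and demodulate.
    spectrum = np.fft.fft2(fringe)
    fx = np.fft.fftfreq(w)                                  # horizontal spatial frequencies
    band = np.abs(fx - f0) < f0 / 2                         # keep only the +f0 lobe
    analytic = np.fft.ifft2(np.where(band, spectrum, 0))

    # Wrapped phase relative to the carrier (unwrapping/disambiguation omitted).
    phase_wrapped = np.angle(analytic * np.exp(-2j * np.pi * f0 * xx))
    err = np.abs(np.angle(np.exp(1j * (phase_wrapped - phase_true)))).mean()
    print(f"mean wrapped-phase error: {err:.3f} rad")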

    Optical Technologies for UV Remote Sensing Instruments

    Over the last decade, significant advances in technology have made possible the development of instruments with substantially improved efficiency in the UV spectral region. In the area of optical coatings and materials, the importance of recent developments in chemical vapor deposited (CVD) silicon carbide (SiC) mirrors, SiC films, and multilayer coatings is discussed in the context of ultraviolet instrumentation design. For example, the development of CVD SiC mirrors with high ultraviolet (UV) reflectance and low-scatter surfaces provides the opportunity to extend higher spectral/spatial resolution capability into the 50-nm region. Optical coatings for normal-incidence diffraction gratings are particularly important for the evolution of efficient extreme ultraviolet (EUV) spectrographs, and SiC films are important for optimizing spectrograph performance in the 90-nm spectral region. The performance evaluation of the flight optical components for the Solar Ultraviolet Measurements of Emitted Radiation (SUMER) instrument, a spectroscopic instrument to fly aboard the Solar and Heliospheric Observatory (SOHO) mission and designed to study dynamic processes, temperatures, and densities in the plasma of the upper atmosphere of the Sun in the wavelength range from 50 nm to 160 nm, is discussed; the optical components were evaluated for imaging and scatter in the UV. The performance evaluation of SOHO/CDS (Coronal Diagnostic Spectrometer) flight gratings tested for spectral resolution and scatter in the DGEF is reviewed, and preliminary results on resolution and scatter testing of Space Telescope Imaging Spectrograph (STIS) technology-development diffraction gratings are presented.

    Algorithms for compression of high dynamic range images and video

    Recent advances in sensor and display technologies have brought about High Dynamic Range (HDR) imaging capability. Modern multiple-exposure HDR sensors can achieve a dynamic range of 100-120 dB, and LED and OLED display devices have contrast ratios of 10^5:1 to 10^6:1. Despite these advances, image/video compression algorithms and the associated hardware are still based on Standard Dynamic Range (SDR) technology, i.e. they operate within an effective dynamic range of up to 70 dB for 8-bit gamma-corrected images. Furthermore, the existing infrastructure for content distribution is also designed for SDR, which creates interoperability problems with true HDR capture and display equipment. Current solutions to this problem include tone mapping the HDR content to fit SDR; however, this approach leads to image-quality problems when strong dynamic range compression is applied. Even though some HDR-only solutions have been proposed in the literature, they are not interoperable with current SDR infrastructure and are thus typically used in closed systems. Given the above observations, a research gap was identified in the need for efficient algorithms for the compression of still images and video that are capable of storing the full dynamic range and colour gamut of HDR images while remaining backward compatible with existing SDR infrastructure. To improve the usability of the SDR content, it is vital that any such algorithms accommodate different tone mapping operators, including those that are spatially non-uniform. In the course of the research presented in this thesis, a novel two-layer CODEC architecture is introduced for both HDR image and video coding. Further, a universal and computationally efficient approximation of the tone mapping operator is developed and presented. It is shown that the use of perceptually uniform colourspaces for the internal representation of pixel data enables improved compression efficiency of the algorithms. Novel approaches to the compression of metadata for the tone mapping operator are also proposed and shown to improve compression performance for low-bitrate video content. Multiple compression algorithms are designed, implemented and compared, and quality-complexity trade-offs are identified. Finally, practical aspects of implementing the developed algorithms are explored by automating the design space exploration flow and integrating the high-level systems design framework with domain-specific tools for synthesis and simulation of multiprocessor systems. Directions for further work are also presented.
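
    A minimal sketch of the two-layer idea (not the thesis's actual CODEC): the HDR frame is tone mapped to a backward-compatible SDR base layer, and an enhancement layer carries the information needed to reconstruct the full dynamic range. The function names, the log-ratio residual and the example tone mapping operator below are illustrative assumptions.

    import numpy as np

    def encode_two_layer(hdr, tone_map):
        """Split an HDR frame into a backward-compatible SDR base layer and a residual layer."""
        base = np.clip(tone_map(hdr), 0.0, 1.0)            # SDR layer usable by legacy decoders
        eps = 1e-6                                         # avoids division by zero in dark pixels
        residual = np.log2((hdr + eps) / (base + eps))     # enhancement layer (illustrative choice)
        return base, residual

    def decode_two_layer(base, residual):
        """Reconstruct the HDR frame from the two layers."""
        eps = 1e-6
        return (base + eps) * np.exp2(residual) - eps

    # Example with a global (spatially uniform) tone mapping operator; the thesis targets
    # arbitrary, possibly spatially non-uniform operators.
    gamma_tmo = lambda x: (x / x.max()) ** (1.0 / 2.2)

    hdr = np.abs(np.random.randn(64, 64)) * 1000.0         # synthetic HDR luminance
    base, resid = encode_two_layer(hdr, gamma_tmo)
    rec = decode_two_layer(base, resid)
    assert np.allclose(rec, hdr, rtol=1e-4)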

    Random on-board pixel sampling (ROPS) X-ray Camera

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, a signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a) the number of pixels with true X-ray hits is much smaller than the total number of pixels; (b) the X-ray information is redundant; or (c) some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion of signal-to-noise ratio as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational imaging techniques is expected to facilitate the development and applications of high-speed X-ray camera technology. Comment: 9 pages, 6 figures, Presented at the 19th iWoRI
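
    As a rough illustration of the compressed-sensing reconstruction that sparse sampling enables (not the authors' actual pipeline), the sketch below reads out a random 30% of the pixels of a smooth, redundant test frame and recovers the full frame by iterative soft thresholding under an assumed DCT-domain sparsity prior; the sampling fraction, sparsity basis and solver parameters are all assumptions.

    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(0)

    # Smooth, redundant test frame (compressible in the DCT domain).
    yy, xx = np.mgrid[0:64, 0:64]
    img = np.exp(-((xx - 20) ** 2 + (yy - 30) ** 2) / 80.0) \
        + 0.6 * np.exp(-((xx - 45) ** 2 + (yy - 15) ** 2) / 150.0)

    # Random on-board pixel sampling: read out only 30% of the pixels.
    mask = rng.random(img.shape) < 0.30
    measured = img * mask

    def ista(y, mask, n_iter=300, lam=0.005):
        """Iterative soft thresholding with DCT-domain sparsity (illustrative solver)."""
        x = np.zeros_like(y)
        for _ in range(n_iter):
            x = x + mask * (y - x)                                   # data-consistency step
            c = dctn(x, norm="ortho")
            c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)        # shrink DCT coefficients
            x = idctn(c, norm="ortho")
        return x

    recon = ista(measured, mask)
    print(f"relative reconstruction error: {np.linalg.norm(recon - img) / np.linalg.norm(img):.3f}")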

    Applications of AFM in pharmaceutical sciences

    Atomic force microscopy (AFM) is a high-resolution imaging technique that uses a small probe (tip and cantilever) to provide topographical information on surfaces in air or in liquid media. By pushing the tip into the surface or by pulling it away, nanomechanical data such as compliance (stiffness, Young's modulus) or adhesion, respectively, may be obtained and can also be presented visually in the form of maps displayed alongside topography images. This chapter outlines the principles of operation of AFM, describing some of the important imaging modes, and then focuses on the use of the technique in pharmaceutical research. Areas covered include tablet coating and dissolution, crystal growth and polymorphism, particles and fibres, nanomedicine, nanotoxicology, drug-protein and protein-protein interactions, live cells, bacterial biofilms and viruses. Specific examples include mapping of ligand-receptor binding on cell surfaces, studies of protein-protein interactions to provide kinetic information, and the potential of AFM to be used as an early diagnostic tool for cancer and other diseases. Many of the investigations reported are from 2011-2014, drawn both from the literature and from a few selected studies from the authors’ laboratories.
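
    To make the nanomechanics point concrete, the sketch below (not taken from the chapter) shows how Young's modulus is commonly estimated from an AFM force-indentation curve by fitting the Hertz model for a spherical tip, F = (4/3)·E/(1-ν²)·√R·δ^{3/2}; the tip radius, Poisson ratio and synthetic data are assumptions for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    R = 20e-9           # assumed tip radius (m)
    nu = 0.5            # assumed Poisson ratio for a soft sample

    def hertz(delta, E):
        """Hertz force-indentation relation for a spherical tip of radius R."""
        return (4.0 / 3.0) * (E / (1.0 - nu ** 2)) * np.sqrt(R) * delta ** 1.5

    # Synthetic approach curve: 0-200 nm indentation of a ~5 kPa sample plus noise.
    delta = np.linspace(0, 200e-9, 100)
    force = hertz(delta, 5e3) + np.random.normal(0, 2e-12, delta.size)

    E_fit, _ = curve_fit(hertz, delta, force, p0=[1e3])
    print(f"fitted Young's modulus: {E_fit[0] / 1e3:.1f} kPa")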

    Soliton microcomb based spectral domain optical coherence tomography

    Spectral domain optical coherence tomography (SD-OCT) is a widely used and minimally invasive technique for biomedical imaging [1]. SD-OCT typically relies on the use of superluminescent diodes (SLDs), which provide a low-noise and broadband optical spectrum. Recent advances in photonic chip-scale frequency combs [2, 3] based on soliton formation in photonic integrated microresonators provide a chip-scale alternative illumination scheme for SD-OCT. To date, however, the use of such soliton microcombs in OCT has not been analyzed. Here we explore the use of soliton microcombs in spectral domain OCT and show that, by using photonic chip-scale Si3N4 resonators in conjunction with 1300 nm pump lasers, spectral bandwidths exceeding those of commercial SLDs are possible. We demonstrate that the soliton states in microresonators exhibit a noise floor that is ca. 3 dB lower than that of the SLD at identical power, but can exhibit significantly lower noise for powers at the milliwatt level. We perform SD-OCT imaging on ex vivo fixed mouse brain tissue using the soliton microcomb, alongside an SLD for comparison, and demonstrate the in-principle viability of soliton-based SD-OCT. Importantly, we demonstrate that the classical amplitude noise of all soliton comb teeth is correlated, i.e. common mode, in contrast to SLD or incoherent microcomb states [4], which should, in theory, improve the image quality. Moreover, we demonstrate the potential for circular ranging, i.e. optical sub-sampling [5, 6], due to the high coherence and temporal periodicity of the soliton state. Taken together, our work indicates the promising properties of soliton microcombs for SD-OCT.
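
    For context on the imaging principle, SD-OCT recovers a depth profile (A-scan) by Fourier transforming the spectral interferogram recorded across the source spectrum, whether that spectrum comes from an SLD or from comb teeth. The toy sketch below simulates a two-reflector interferogram on a uniform wavenumber grid and locates the dominant depth peak with an FFT; the wavelengths, reflector depths and windowing are assumptions, and this is not the paper's processing chain.

    import numpy as np

    # Spectral interferogram sampled on a uniform wavenumber grid around 1300 nm (assumed values).
    lam_min, lam_max, n = 1250e-9, 1350e-9, 2048
    k = np.linspace(2 * np.pi / lam_max, 2 * np.pi / lam_min, n)

    # Two reflectors at depths z1 and z2 below the reference plane.
    z1, z2 = 150e-6, 400e-6
    source = np.exp(-((k - k.mean()) / (0.3 * (k.max() - k.min()))) ** 2)    # source envelope
    interferogram = source * (1 + 0.5 * np.cos(2 * k * z1) + 0.2 * np.cos(2 * k * z2))

    # A-scan: Fourier transform of the background-subtracted, windowed interferogram.
    ascan = np.abs(np.fft.rfft((interferogram - source) * np.hanning(n)))
    dz = np.pi / (k.max() - k.min())                  # depth increment per FFT bin
    depth_um = np.arange(ascan.size) * dz * 1e6
    print(f"strongest peak at {depth_um[np.argmax(ascan)]:.0f} um (reflector placed at {z1 * 1e6:.0f} um)")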

    Deep Thermal Imaging: Proximate Material Type Recognition in the Wild through Deep Learning of Spatial Surface Temperature Patterns

    We introduce Deep Thermal Imaging, a new approach for close-range automatic recognition of materials, intended to enhance the understanding that people and ubiquitous technologies have of their proximal environment. Our approach uses a low-cost mobile thermal camera integrated into a smartphone to capture thermal textures. A deep neural network classifies these textures into material types. This approach works effectively without the need for ambient light sources or direct contact with materials. Furthermore, the use of a deep learning network removes the need to handcraft the set of features for different materials. We evaluated the performance of the system by training it to recognise 32 material types in both indoor and outdoor environments. Our approach produced recognition accuracies above 98% on 14,860 images of 15 indoor materials and above 89% on 26,584 images of 17 outdoor materials. We conclude by discussing its potential for real-time use in HCI applications and future directions. Comment: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
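
    As a rough sketch of the kind of classifier described above (not the authors' actual network, training data or hyperparameters), a small convolutional network could map single-channel thermal texture patches to one of the 32 material classes; the patch size, layer widths and choice of PyTorch are assumptions.

    import torch
    import torch.nn as nn

    class ThermalTextureNet(nn.Module):
        """Tiny CNN mapping a single-channel thermal patch to one of 32 material classes."""
        def __init__(self, n_classes: int = 32):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x).flatten(1))

    # Example forward pass on a batch of 8 assumed 64x64 thermal patches.
    model = ThermalTextureNet()
    logits = model(torch.randn(8, 1, 64, 64))
    print(logits.shape)        # torch.Size([8, 32])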