
    Low complexity object detection with background subtraction for intelligent remote monitoring

    Logarithmic intensity and speckle-based motion contrast methods for human retinal vasculature visualization using swept source optical coherence tomography

    We formulate a theory showing that the statistics of OCT signal amplitude and intensity depend strongly on the sample reflectivity strength, motion, and noise power. Our theoretical and experimental results show that speckle amplitude and intensity contrasts cannot reliably differentiate regions of motion from static areas. Two logarithmic intensity-based contrasts, logarithmic intensity variance (LOGIV) and differential logarithmic intensity variance (DLOGIV), are proposed to serve as surrogate markers for motion with enhanced sensitivity. Our findings demonstrate good agreement between the theoretical and experimental results for logarithmic intensity-based contrasts. Logarithmic intensity-based and speckle-based motion contrast methods are validated and compared for in vivo human retinal vasculature visualization using high-speed swept-source optical coherence tomography (SS-OCT) at 1060 nm. The vasculature was identified as regions of motion by creating LOGIV and DLOGIV tomograms: multiple B-scans of individual slices through the retina were collected, and the variance of logarithmic intensities and of differences of logarithmic intensities was calculated. Both methods captured the small vessels and the meshwork of capillaries associated with the inner retina in en face images over 4 mm^2 in a normal subject.
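    The tomogram construction described above lends itself to a short numerical illustration. The sketch below computes per-pixel LOGIV and DLOGIV maps from a stack of repeated B-scans using NumPy; it is an assumption-laden reading of the abstract, and the exact estimator definitions and normalizations used in the paper may differ.

```python
import numpy as np

def logiv_dlogiv(bscans, eps=1e-12):
    """Per-pixel motion-contrast maps from N repeated B-scans of one slice.

    bscans : ndarray of shape (N, H, W) holding OCT intensities.
    Returns (LOGIV, DLOGIV), each of shape (H, W).

    Assumed forms (not taken from the paper): LOGIV is the variance of the
    log-intensity across repeats; DLOGIV is the variance of consecutive
    log-intensity differences.
    """
    log_i = np.log(bscans + eps)       # log-intensity; eps avoids log(0)
    logiv = np.var(log_i, axis=0)      # variance over repeated B-scans
    diffs = np.diff(log_i, axis=0)     # frame-to-frame log-intensity differences
    dlogiv = np.var(diffs, axis=0)     # variance of those differences
    return logiv, dlogiv
```

    Color-mapping these per-slice maps and projecting them en face would then highlight vasculature as regions of elevated motion contrast.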

    Computational illumination for high-speed in vitro Fourier ptychographic microscopy

    We demonstrate a new computational illumination technique that achieves large space-bandwidth-time product, for quantitative phase imaging of unstained live samples in vitro. Microscope lenses can have either large field of view (FOV) or high resolution, not both. Fourier ptychographic microscopy (FPM) is a new computational imaging technique that circumvents this limit by fusing information from multiple images taken with different illumination angles. The result is a gigapixel-scale image having both wide FOV and high resolution, i.e. large space-bandwidth product (SBP). FPM has enormous potential for revolutionizing microscopy and has already found application in digital pathology. However, it suffers from long acquisition times (on the order of minutes), limiting throughput. Faster capture times would not only improve imaging speed, but also allow studies of live samples, where motion artifacts degrade results. In contrast to fixed (e.g. pathology) slides, live samples are continuously evolving at various spatial and temporal scales. Here, we present a new source coding scheme, along with real-time hardware control, to achieve 0.8 NA resolution across a 4x FOV with sub-second capture times. We propose an improved algorithm and new initialization scheme, which allow robust phase reconstruction over long time-lapse experiments. We present the first FPM results for both growing and confluent in vitro cell cultures, capturing videos of subcellular dynamical phenomena in popular cell lines undergoing division and migration. Our method opens up FPM to applications with live samples, for observing rare events in both space and time.
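    For orientation only, the sketch below implements the basic sequential FPM reconstruction loop (amplitude replacement in the low-resolution plane, spectrum update within the pupil support), not the improved algorithm, initialization, or source-coding scheme reported in this work; the LED k-space offsets, the pupil, and the array shapes are illustrative assumptions.

```python
import numpy as np

def fpm_reconstruct(low_res_imgs, k_offsets, pupil, upsample, n_iters=10):
    """Basic sequential FPM reconstruction (sketch; FFT scaling factors ignored).

    low_res_imgs : (N, h, w) measured intensities, one per LED angle.
    k_offsets    : (N, 2) integer k-space centre shifts, in pixels, per LED.
    pupil        : (h, w) pupil function of the objective (e.g. a binary disc).
    upsample     : integer ratio of the high-res grid to the low-res grid.
    """
    n, h, w = low_res_imgs.shape
    H, W = h * upsample, w * upsample

    # Initialize the high-res spectrum from the on-axis (first) image.
    init = np.kron(np.sqrt(low_res_imgs[0]), np.ones((upsample, upsample)))
    obj_spec = np.fft.fftshift(np.fft.fft2(init))

    for _ in range(n_iters):
        for i in range(n):
            cy = H // 2 + k_offsets[i, 0] - h // 2
            cx = W // 2 + k_offsets[i, 1] - w // 2
            patch = obj_spec[cy:cy + h, cx:cx + w]          # view into obj_spec
            # Forward model: band-limited low-res field through the pupil.
            lowres_field = np.fft.ifft2(np.fft.ifftshift(patch * pupil))
            # Replace the amplitude with the measurement, keep the phase.
            updated = np.sqrt(low_res_imgs[i]) * np.exp(1j * np.angle(lowres_field))
            new_patch = np.fft.fftshift(np.fft.fft2(updated))
            # Write the update back inside the pupil support only.
            mask = np.abs(pupil) > 0
            patch[mask] = new_patch[mask]

    return np.fft.ifft2(np.fft.ifftshift(obj_spec))  # complex high-res field
```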

    Gemini Planet Imager Observational Calibrations III: Empirical Measurement Methods and Applications of High-Resolution Microlens PSFs

    The newly commissioned Gemini Planet Imager (GPI) combines extreme adaptive optics, an advanced coronagraph, precision wavefront control and a lenslet-based integral field spectrograph (IFS) to measure the spectra of young extrasolar giant planets between 0.9 and 2.5 um. Each GPI detector image, when in spectral mode, consists of ~37,000 microspectra which are undersampled or critically sampled in the spatial direction. This paper demonstrates how to obtain high-resolution microlens PSFs and discusses their use in enhancing the wavelength calibration, flexure compensation and spectral extraction. This method is generally applicable to any lenslet-based integral field spectrograph, including proposed future instrument concepts for space missions.

    Rate-Accuracy Trade-Off In Video Classification With Deep Convolutional Neural Networks

    Advanced video classification systems decode video frames to derive the texture and motion representations needed for ingestion and analysis by spatio-temporal deep convolutional neural networks (CNNs). However, in visual Internet-of-Things applications, surveillance systems and semantic crawlers of large video repositories, the video capture and the CNN-based semantic analysis tend not to be co-located. This necessitates the transport of compressed video over networks and incurs significant overhead in bandwidth and energy consumption, thereby significantly undermining the deployment potential of such systems. In this paper, we investigate the trade-off between the encoding bitrate and the achievable accuracy of CNN-based video classification models that directly ingest AVC/H.264 and HEVC encoded videos. Instead of retaining entire compressed video bitstreams and applying complex optical flow calculations prior to CNN processing, we retain only motion vectors and selected texture information at significantly reduced bitrates and apply no additional processing prior to CNN ingestion. Based on three CNN architectures and two action recognition datasets, we achieve 11%-94% savings in bitrate with marginal effect on classification accuracy. A model-based selection between multiple CNNs increases these savings further, to the point where, if up to 7% loss of accuracy can be tolerated, video classification can take place with as little as 3 kbps for the transport of the required compressed video information to the system implementing the CNN models.
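    The closing trade-off admits a simple illustration: given measured (bitrate, accuracy) operating points for several candidate CNNs, pick the cheapest one whose accuracy loss stays within a tolerance. The sketch below is a hypothetical selection rule with invented operating points, not the paper's model-based selection procedure or its results.

```python
from typing import List, NamedTuple

class OperatingPoint(NamedTuple):
    model: str
    bitrate_kbps: float   # rate of the retained motion-vector/texture stream
    accuracy: float       # classification accuracy measured at that rate

def select_model(points: List[OperatingPoint], max_acc_loss: float) -> OperatingPoint:
    """Lowest-bitrate model whose accuracy loss vs. the best point is tolerable."""
    best_acc = max(p.accuracy for p in points)
    feasible = [p for p in points if best_acc - p.accuracy <= max_acc_loss]
    return min(feasible, key=lambda p: p.bitrate_kbps)

if __name__ == "__main__":
    # Illustrative numbers only; the accuracies here are invented.
    pts = [
        OperatingPoint("3D-CNN / full texture", 300.0, 0.92),
        OperatingPoint("two-stream / MV only",   30.0, 0.89),
        OperatingPoint("small CNN / MV only",     3.0, 0.86),
    ]
    print(select_model(pts, max_acc_loss=0.07))  # -> the 3 kbps configuration
```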

    Novel image processing algorithms and methods for improving their robustness and operational performance

    Image processing algorithms have developed rapidly in recent years. Imaging functions are becoming more common in electronic devices, demanding better image quality and more robust image capture in challenging conditions. Increasingly complicated algorithms are being developed to achieve better signal-to-noise characteristics, more accurate colours, and wider dynamic range, approaching the performance levels of the human visual system. [Continues.]