
    Temporal shape super-resolution by intra-frame motion encoding using high-fps structured light

    One solution for depth imaging of a moving scene is to project a static pattern onto the object and reconstruct from just a single image. However, if the object moves too fast relative to the exposure time of the image sensor, the patterns in the captured image are blurred and reconstruction fails. In this paper, we embed multiple projection patterns into each single captured image to achieve temporal super-resolution of depth image sequences. With our method, multiple patterns are projected onto the object at a higher fps than the camera can achieve. The observed pattern then varies with the depth and motion of the object, so temporal information about the scene can be extracted from each single image. The decoding process uses a learning-based approach that requires no geometric calibration. Experiments confirm the effectiveness of our method, with sequential shapes reconstructed from a single image. Quantitative evaluations and comparisons with recent techniques are also presented.
    Comment: 9 pages, published at the International Conference on Computer Vision (ICCV 2017)
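    As a toy illustration of the intra-frame encoding idea: one camera exposure integrates several patterns projected at high fps, and the pattern observed at each instant is shifted by the scene's depth at that instant, so the integrated image encodes the depth trajectory. The 1-D model below is an illustrative sketch only (the shift-for-depth mapping, the pattern shapes, and the `capture` helper are assumptions, not the paper's setup):

```python
import numpy as np

def capture(patterns, depths):
    """Simulate one camera exposure that integrates several projected
    patterns. In this toy 1-D model, the pattern seen at each instant is
    circularly shifted by the scene depth (disparity) at that instant."""
    obs = np.zeros_like(patterns[0], dtype=float)
    for pat, d in zip(patterns, depths):
        obs += np.roll(pat, d)  # depth-dependent shift of the projection
    return obs / len(patterns)  # the sensor integrates over the exposure
```

Different depth trajectories produce different mixtures in the single captured image, which is exactly what allows a learned decoder to recover temporal information from one exposure.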

    Video-rate computational super-resolution and integral imaging at longwave-infrared wavelengths

    We report the first computational super-resolved, multi-camera integral imaging at long-wave infrared (LWIR) wavelengths. A synchronized array of FLIR Lepton cameras was assembled, and computational super-resolution and integral-imaging reconstruction were employed to generate video with light-field imaging capabilities, such as 3D imaging and recognition of partially obscured objects, while also providing a four-fold increase in effective pixel count. This approach to high-resolution imaging enables a fundamental reduction in the track length and volume of an imaging system, while also enabling the use of low-cost lens materials.
    Comment: Supplementary multimedia material at http://dx.doi.org/10.6084/m9.figshare.530302
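    The computational super-resolution step described above is, at its core, a shift-and-add fusion of low-resolution frames from cameras with known sub-pixel offsets. The following is a minimal sketch under that assumption (nearest-grid-cell registration; the `shift_and_add` name and the details of the paper's actual reconstruction pipeline are not from the abstract):

```python
import numpy as np

def shift_and_add(frames, offsets, factor=2):
    """Naive shift-and-add super-resolution.

    frames  : list of HxW low-resolution images
    offsets : list of (dy, dx) sub-pixel shifts in low-res pixels,
              assumed known (e.g. from the camera-array geometry)
    factor  : integer upsampling factor of the high-res grid
    """
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        # Map each low-res sample onto its nearest high-res grid cell.
        ys = (np.arange(h)[:, None] * factor + round(dy * factor)) % (h * factor)
        xs = (np.arange(w)[None, :] * factor + round(dx * factor)) % (w * factor)
        acc[ys, xs] += frame
        weight[ys, xs] += 1
    return acc / np.maximum(weight, 1)  # average where cells were hit
```

With four cameras offset by half a pixel in each direction, a 2x upsampled grid is filled completely, which is the "four-fold increase in effective pixel count" the abstract refers to.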

    Multiplane 3D superresolution optical fluctuation imaging

    By switching fluorophores on and off in either a deterministic or a stochastic manner, superresolution microscopy has enabled the imaging of biological structures at resolutions well beyond the diffraction limit. Superresolution optical fluctuation imaging (SOFI) provides an elegant way of overcoming the diffraction limit in all three spatial dimensions by computing higher-order cumulants of image sequences of blinking fluorophores acquired with a conventional widefield microscope. So far, three-dimensional (3D) SOFI has only been demonstrated by sequential imaging of multiple depth positions. Here we introduce a versatile imaging scheme that allows for the simultaneous acquisition of multiple focal planes. Using 3D cross-cumulants, we show that the depth sampling can be increased. Consequently, the simultaneous acquisition of multiple focal planes reduces the acquisition time and hence the photobleaching of fluorescent markers. We demonstrate multiplane 3D SOFI by imaging the mitochondria network in fixed C2C12 cells over a total volume of 65 × 65 × 3.5 μm³ without depth scanning.
    Comment: 7 pages, 3 figures
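    The cumulant computation at the heart of SOFI is easy to state for order two: the second-order auto-cumulant of each pixel's intensity trace is simply its temporal variance. A minimal single-plane sketch (the multiplane 3D cross-cumulant machinery of the paper is not reproduced here; the `sofi2` helper is illustrative):

```python
import numpy as np

def sofi2(stack):
    """Second-order SOFI image from a (T, H, W) stack of blinking frames.

    The second-order auto-cumulant of each pixel's intensity trace is its
    temporal variance; because it scales with the square of the PSF, the
    resulting image is sharpened relative to the mean image.
    """
    mean = stack.mean(axis=0)
    return ((stack - mean) ** 2).mean(axis=0)  # per-pixel temporal variance
```

Pixels whose fluorophores blink strongly light up in the cumulant image, while constant background (which has zero variance) is suppressed.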

    Fibre imaging bundles for full-field optical coherence tomography

    An imaging fibre bundle is incorporated into a full-field imaging OCT system, with the aim of eliminating the mechanical scanning currently required at the probe tip in endoscopic systems. Each fibre within the imaging bundle addresses a Fizeau interferometer formed between the bundle end and the sample, a configuration which ensures down-lead insensitivity of the probe fibres, preventing variations in sensitivity due to polarization changes in the many thousands of constituent fibres. The technique allows acquisition of information across a planar region with a single-shot measurement, in the form of a 2D image detected using a digital CCD camera. Depth-scanning components are confined within a processing interferometer external to the completely passive endoscope probe. The technique has been evaluated in our laboratory on test samples, and images acquired using the bundle-based system are presented. Data are displayed either as en-face scans, parallel to the sample surface, or as slices through the depth of the sample, with a spatial resolution of about 30 μm. The minimum detectable reflectivity at present is estimated to be about 10⁻³, which is satisfactory for many inorganic samples. Methods of improving the signal-to-noise ratio for imaging of lower-reflectivity samples are discussed.

    NIMBUS: The Near-Infrared Multi-Band Ultraprecise Spectroimager for SOFIA

    We present a new and innovative near-infrared multi-band ultraprecise spectroimager (NIMBUS) for SOFIA. This design is capable of characterizing a large sample of extrasolar planet atmospheres by measuring elemental and molecular abundances during primary transit and occultation. This wide-field spectroimager would also provide new insights into Trans-Neptunian Objects (TNOs), Solar System occultations, brown dwarf atmospheres, carbon chemistry in globular clusters, chemical gradients in nearby galaxies, and galaxy photometric redshifts. NIMBUS would be the premier ultraprecise spectroimager by taking advantage of the SOFIA observatory and state-of-the-art infrared technologies. This optical design splits the beam into eight separate spectral bandpasses, centered around key molecular bands from 1 to 4 microns. Each spectral channel has a wide field of view for simultaneous observations of a reference star that can decorrelate time-variable atmospheric and optical-assembly effects, allowing the instrument to achieve ultraprecise calibration for imaging and photometry for a wide variety of astrophysical sources. NIMBUS produces the same data products as a low-resolution integral field spectrograph over a large spectral bandpass, but this design obviates many of the problems that preclude high-precision measurements with traditional slit and integral field spectrographs. This instrument concept is currently not funded for development.
    Comment: 14 pages, 9 figures, SPIE Astronomical Telescopes and Instrumentation 201

    Real-Time Panoramic Tracking for Event Cameras

    Event cameras are a paradigm shift in camera technology. Instead of full frames, the sensor captures a sparse set of events caused by intensity changes. Since only the changes are transferred, these cameras are able to capture quick movements of objects in the scene or of the camera itself. In this work we propose a novel method to perform camera tracking of event cameras in a panoramic setting with three degrees of freedom. We propose a direct camera-tracking formulation, similar to the state of the art in visual odometry. We show that the minimal information needed for simultaneous tracking and mapping is the spatial position of events, without using the appearance of the imaged scene point. We verify robustness to fast camera movements and dynamic objects in the scene on a recently proposed dataset and on self-recorded sequences.
    Comment: Accepted to the International Conference on Computational Photography 201
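    The sensor model underlying such cameras can be sketched directly: a pixel fires an event whenever its log intensity changes by more than a contrast threshold since its last event. The frame-based simulation below is illustrative only (the threshold value and the `events_from_frames` helper are assumptions, not part of the paper, which works on real event streams):

```python
import numpy as np

def events_from_frames(frames, threshold=0.2):
    """Emit (t, y, x, polarity) events wherever log intensity has changed
    by at least `threshold` since the last event at that pixel — the basic
    event-camera model, simulated from a sequence of intensity frames."""
    ref = np.log(frames[0] + 1e-6)  # per-pixel reference log intensity
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        logf = np.log(frame + 1e-6)
        diff = logf - ref
        fired = np.abs(diff) >= threshold
        for y, x in zip(*np.nonzero(fired)):
            events.append((t, y, x, int(np.sign(diff[y, x]))))
        ref[fired] = logf[fired]  # reset reference only where events fired
    return events
```

Note how static regions produce no events at all, which is why only the spatial positions of events (and not scene appearance) are available to the tracker.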

    Investigation of a new method for improving image resolution for camera tracking applications

    Camera-based systems have been a preferred choice in many motion-tracking applications due to their ease of installation and their ability to work in unprepared environments. The concept of these systems is based on extracting image information (colour and shape properties) to detect the object location. However, the resolution of the image and the camera field-of-view (FOV) are two main factors that can restrict the tracking applications for which these systems can be used. Resolution can be addressed partially by using higher-resolution cameras, but this may not always be possible or cost-effective. This paper investigates a new method utilising averaging of offset images to improve the effective resolution of a standard camera. Initial results show that the minimum detectable position change of a tracked object could be improved by up to 4 times.
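    The benefit of averaging can be illustrated with an intensity-weighted centroid, a standard sub-pixel position estimator: averaging N offset frames suppresses pixel noise by roughly √N, which directly tightens the centroid estimate. The sketch below is illustrative only (the paper's actual detection pipeline is not given in the abstract, and the function names are assumptions):

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid: a sub-pixel position estimate."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

def tracked_position(frames):
    """Average several frames of the same scene before locating the target.

    Averaging N frames reduces pixel noise by ~sqrt(N), tightening the
    centroid estimate — a rough analogue of the offset-image averaging
    investigated in the paper."""
    return centroid(np.mean(frames, axis=0))
```

Because the centroid already interpolates between pixels, lowering the noise floor translates directly into a smaller minimum detectable position change.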