
    Logarithmic intensity and speckle-based motion contrast methods for human retinal vasculature visualization using swept source optical coherence tomography

    We formulate a theory showing that the statistics of the OCT signal amplitude and intensity depend strongly on the sample reflectivity, motion, and noise power. Our theoretical and experimental results show that speckle amplitude and intensity contrasts fail to differentiate regions of motion from static areas. Two logarithmic intensity-based contrasts, logarithmic intensity variance (LOGIV) and differential logarithmic intensity variance (DLOGIV), are proposed to serve as surrogate markers for motion with enhanced sensitivity. Our findings demonstrate good agreement between the theoretical and experimental results for the logarithmic intensity-based contrasts. Logarithmic intensity-based motion and speckle-based contrast methods are validated and compared for in vivo human retinal vasculature visualization using high-speed swept-source optical coherence tomography (SS-OCT) at 1060 nm. The vasculature was identified as regions of motion by creating LOGIV and DLOGIV tomograms: multiple B-scans were collected at each slice through the retina, and the variances of the logarithmic intensities and of the differences of logarithmic intensities were calculated. Both methods captured the small vessels and the meshwork of capillaries associated with the inner retina in en face images over 4 mm^2 in a normal subject.
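    The abstract describes both contrasts concretely: collect N repeated B-scans of one slice, then take per-pixel variances of the log-intensities (LOGIV) or of their consecutive differences (DLOGIV). A minimal NumPy sketch of that computation follows; the array layout, epsilon offset, and function names are assumptions, not the authors' code.

    import numpy as np

    def logiv(bscans, eps=1e-12):
        """LOGIV: per-pixel variance of log-intensities over N repeats.

        bscans: (N, Z, X) array of repeated B-scan intensities at one slice.
        """
        log_i = np.log(bscans + eps)    # eps avoids log(0) in dark pixels
        return np.var(log_i, axis=0)    # high variance marks motion

    def dlogiv(bscans, eps=1e-12):
        """DLOGIV: variance of differences of log-intensities between repeats."""
        log_i = np.log(bscans + eps)
        diffs = np.diff(log_i, axis=0)  # consecutive-repeat differences
        return np.var(diffs, axis=0)

    Stacking such per-slice maps over a volume and projecting en face would yield vasculature images of the kind reported above.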

    Differential intensity contrast swept source optical coherence tomography for human retinal vasculature visualization

    We demonstrate an intensity-based, motion-sensitive method, called differential logarithmic intensity variance (DLOGIV), for 3D microvasculature imaging and foveal avascular zone (FAZ) visualization in the in vivo human retina using swept-source optical coherence tomography (SS-OCT) at 1060 nm. A motion-sensitive SS-OCT system operating at 50,000 A-lines/s with 5.9 μm axial resolution was developed and used to collect 3D images over 4 mm^2 in a normal subject eye. Multiple B-scans were acquired at each slice through the retina, and the variance of the differences of logarithmic intensities as well as the differential phase variance (DPV) were calculated to identify regions of motion (microvasculature). En face DLOGIV images captured the microvasculature through depth with performance equal to that of DPV.
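    DPV, the phase-based baseline against which DLOGIV is compared, is commonly computed as the per-pixel variance of phase differences between consecutive repeated B-scans. A hedged sketch under that assumption follows; it omits the bulk-motion phase correction a real pipeline would need, and the names are illustrative.

    import numpy as np

    def dpv(fields):
        """Variance of phase differences between consecutive repeats.

        fields: (N, Z, X) complex-valued OCT field for N repeated B-scans.
        """
        dphi = np.angle(fields[1:] * np.conj(fields[:-1]))  # wrapped phase difference
        return np.var(dphi, axis=0)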

    Artimate: an articulatory animation framework for audiovisual speech synthesis

    We present a modular framework for articulatory animation synthesis using speech motion capture data obtained with electromagnetic articulography (EMA). Adapting a skeletal animation approach, the articulatory motion data are applied to a three-dimensional (3D) model of the vocal tract, creating a portable resource that can be integrated into an audiovisual (AV) speech synthesis platform to provide realistic animation of the tongue and teeth for a virtual character. The framework also provides an interface to articulatory animation synthesis, as well as an example application illustrating its use with a 3D game engine. We rely on cross-platform, open-source software and open standards to provide a lightweight, accessible, and portable workflow.

    Comment: Workshop on Innovation and Applications in Speech Technology (2012)
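    The core idea, EMA coil trajectories driving bones of a vocal-tract rig, can be sketched as below. This is not Artimate's actual API: the rig object, the coil-to-bone mapping, and the frame rate are all hypothetical placeholders.

    def animate_vocal_tract(ema_frames, rig, fps=200):
        """Drive rig bones frame by frame from EMA coil trajectories.

        ema_frames: dict mapping coil names to (T, 3) position arrays.
        rig: hypothetical skeletal rig exposing bone().set_position().
        """
        coil_to_bone = {"tongue_tip": "TT", "tongue_body": "TB",
                        "lower_incisor": "JAW"}  # assumed mapping
        n_frames = len(next(iter(ema_frames.values())))
        for t in range(n_frames):
            for coil, bone in coil_to_bone.items():
                rig.bone(bone).set_position(ema_frames[coil][t])
            rig.advance(1.0 / fps)  # step the animation clock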

    Personalized Cinemagraphs using Semantic Understanding and Collaborative Learning

    Cinemagraphs are a compelling way to convey dynamic aspects of a scene. In these media, dynamic and still elements are juxtaposed to create an artistic and narrative experience. Creating a high-quality, aesthetically pleasing cinemagraph requires isolating objects in a semantically meaningful way and then selecting good start times and looping periods for those objects to minimize visual artifacts (such as tearing). To achieve this, we present a new technique that uses object recognition and semantic segmentation as part of an optimization method to automatically create cinemagraphs from videos that are both visually appealing and semantically meaningful. Given a scene with multiple objects, there are many cinemagraphs one could create. Our method evaluates these multiple candidates and presents the best one, as determined by a model trained to predict human preferences in a collaborative way. We demonstrate the effectiveness of our approach with multiple results and a user study.

    Comment: To appear in ICCV 2017. Total 17 pages including the supplementary material.
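    One concrete ingredient named above, choosing a start time and looping period for a segmented object so the loop seam is least visible, can be sketched as a search that minimizes the mismatch between the loop's endpoints inside the object's mask. The exhaustive search below is illustrative only, not the paper's optimizer, and all names are assumptions.

    import numpy as np

    def best_loop(video, mask, min_period=10):
        """Return (start, period, cost) minimizing the loop-seam mismatch.

        video: (T, H, W, 3) frames; mask: (H, W) boolean object region.
        """
        T = video.shape[0]
        best = (0, min_period, np.inf)
        for start in range(T - min_period):
            a = video[start][mask].astype(float)
            for period in range(min_period, T - start):
                # Seam cost: difference between the loop's last and first
                # frames inside the region; low cost means less visible tearing.
                b = video[start + period][mask].astype(float)
                cost = np.mean((a - b) ** 2)
                if cost < best[2]:
                    best = (start, period, cost)
        return best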