81,777 research outputs found

    3D performance capture for facial animation

    This work describes how a photogrammetry-based 3D capture system can be used as an input device for animation. The 3D Dynamic Capture System is used to capture the motion of a human face, which is extracted from a sequence of 3D models captured at TV frame rate. Initially, the positions of a set of landmarks on the face are extracted. These landmarks are then used to provide motion data in two ways. First, a high-level description of the movements is extracted; this can be used as input to a procedural animation package (e.g. CreaToon). Second, the landmarks can be used as registration points for a conformation process in which the model to be animated is modified to match the captured model. This approach gives a new sequence of models that have the structure of the drawn model but the movement of the captured sequence.
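    The abstract does not specify the warp used in the conformation step. One common choice for landmark-driven mesh deformation is radial basis function interpolation; the sketch below (numpy only, all names hypothetical) deforms the model's vertices so that its landmarks coincide with the captured ones:

```python
import numpy as np

def rbf_conform(model_verts, model_landmarks, captured_landmarks, sigma=0.05):
    """Deform model vertices so the model landmarks match the captured ones.

    Uses Gaussian radial basis functions centred on the model landmarks,
    a common choice for landmark-driven conformation (the paper does not
    state which warp it uses). sigma assumes roughly normalized coordinates.
    """
    # Pairwise RBF kernel between landmarks: (n, n)
    d = np.linalg.norm(model_landmarks[:, None] - model_landmarks[None, :], axis=-1)
    K = np.exp(-(d / sigma) ** 2)
    # Solve for per-landmark weights that reproduce the landmark offsets
    offsets = captured_landmarks - model_landmarks            # (n, 3)
    w = np.linalg.solve(K + 1e-9 * np.eye(len(K)), offsets)   # (n, 3)
    # Apply the interpolated displacement field to every vertex
    dv = np.linalg.norm(model_verts[:, None] - model_landmarks[None, :], axis=-1)
    return model_verts + np.exp(-(dv / sigma) ** 2) @ w
```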

    Logarithmic intensity and speckle-based motion contrast methods for human retinal vasculature visualization using swept source optical coherence tomography

    We formulate a theory showing that the statistics of OCT signal amplitude and intensity depend strongly on the sample reflectivity, motion, and noise power. Our theoretical and experimental results show that speckle amplitude and intensity contrasts lack the sensitivity to differentiate regions of motion from static areas. Two logarithmic intensity-based contrasts, logarithmic intensity variance (LOGIV) and differential logarithmic intensity variance (DLOGIV), are proposed as surrogate markers for motion with enhanced sensitivity. Our findings demonstrate good agreement between the theoretical and experimental results for logarithmic intensity-based contrasts. Logarithmic intensity-based and speckle-based motion contrast methods are validated and compared for in vivo human retinal vasculature visualization using high-speed swept-source optical coherence tomography (SS-OCT) at 1060 nm. The vasculature was identified as regions of motion by creating LOGIV and DLOGIV tomograms: multiple B-scans were collected of individual slices through the retina, and the variance of the logarithmic intensities and of the differences of logarithmic intensities was calculated. Both methods captured the small vessels and the meshwork of capillaries associated with the inner retina in en face images over 4 mm^2 in a normal subject.
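    A minimal sketch of the two per-pixel statistics as described, assuming a registered stack of repeated B-scan intensities (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def logiv_dlogiv(bscans):
    """Motion-contrast maps from N repeated B-scans of one slice.

    bscans : ndarray, shape (N, depth, width) of OCT intensities (> 0).
    Returns (LOGIV, DLOGIV): the per-pixel variance of log-intensities,
    and the per-pixel variance of frame-to-frame log-intensity differences.
    Moving scatterers (blood) give high variance; static tissue stays low.
    """
    log_i = np.log(bscans)                            # logarithmic intensity
    logiv = np.var(log_i, axis=0)                     # variance across repeats
    dlogiv = np.var(np.diff(log_i, axis=0), axis=0)   # variance of differences
    return logiv, dlogiv
```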

    Mapping the spatiotemporal dynamics of calcium signaling in cellular neural networks using optical flow

    An optical flow gradient algorithm was applied to spontaneously forming networks of neurons and glia in culture, imaged by fluorescence optical microscopy, in order to map functional calcium signaling with single-pixel resolution. Optical flow estimates the direction and speed of motion of objects in an image between subsequent frames in a recorded digital sequence of images (i.e. a movie). The vector fields computed by the algorithm were able to track the spatiotemporal dynamics of calcium signaling patterns. We begin by briefly reviewing the mathematics of the optical flow algorithm, and then describe how to solve for the displacement vectors and how to measure their reliability. We then compare computed flow vectors with manually estimated vectors for the progression of a calcium signal recorded from representative astrocyte cultures. Finally, we apply the algorithm to preparations of primary astrocytes and hippocampal neurons and to the rMC-1 Müller glial cell line to illustrate its capability for capturing different types of spatiotemporal calcium activity. We discuss the imaging requirements, parameter selection, and threshold selection for reliable measurements, and offer perspectives on uses of the vector data. (Comment: 23 pages, 5 figures. Peer-reviewed accepted version, in press in Annals of Biomedical Engineering.)
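    The abstract does not give the exact formulation. A standard gradient-based solve of this kind is Lucas-Kanade, where the displacement minimizes a windowed brightness-constancy residual and the smallest eigenvalue of the structure tensor serves as a reliability score; a minimal sketch under those assumptions:

```python
import numpy as np

def lk_flow(f0, f1, y, x, r=7):
    """Gradient-based optical flow at pixel (y, x) between frames f0, f1.

    Solves the least-squares system A v = -b over an r-radius window
    (Lucas-Kanade). Returns the displacement (vy, vx) and the smallest
    eigenvalue of A^T A as a reliability score (small => unreliable,
    e.g. in flat or linearly structured regions).
    """
    fy, fx = np.gradient(f0.astype(float))    # spatial image gradients
    ft = f1.astype(float) - f0                # temporal derivative
    win = np.s_[y - r:y + r + 1, x - r:x + r + 1]
    A = np.stack([fy[win].ravel(), fx[win].ravel()], axis=1)
    b = -ft[win].ravel()
    v = np.linalg.lstsq(A, b, rcond=None)[0]          # flow vector (vy, vx)
    reliability = np.linalg.eigvalsh(A.T @ A)[0]      # structure-tensor check
    return v, reliability
```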

    Construction and Evaluation of an Ultra Low Latency Frameless Renderer for VR.

    © 2016 IEEE. Latency, the delay between a user's action and the response to this action, is known to be detrimental to virtual reality. Latency is typically considered to be a discrete value characterising a delay, constant in time and space, but this characterisation is incomplete. Latency changes across the display during scan-out, and how it does so depends on the rendering approach used. In this study, we present an ultra-low-latency real-time ray-casting renderer for virtual reality, implemented on an FPGA. Our renderer has a latency of 1 ms from tracker to pixel. Its frameless nature means that the region of the display with the lowest latency immediately follows the scan-beam. This is in contrast to frame-based systems such as those using typical GPUs, for which the latency increases as scan-out proceeds. Using a series of high- and low-speed videos of our system in use, we confirm its latency of 1 ms. We examine how the renderer performs when driving a traditional sequential scan-out display on a readily available HMD, the Oculus Rift DK2, and contrast this with an equivalent apparatus built using a GPU. Using captured human head motion and a set of image quality measures, we assess the ability of these systems to faithfully recreate the stimuli of an ideal virtual reality system: one with a zero-latency tracker, renderer, and display running at 1 kHz. Finally, we examine the results of these quality measures and how each rendering approach is affected by velocity of movement and display persistence. We find that our system, with a lower average latency, can more faithfully draw what the ideal virtual reality system would. Further, we find that with low display persistence the sensitivity to velocity of both systems is lowered, but that it is much lower for ours.
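    To make the scan-out claim concrete: in a frame-based pipeline the image is fixed before the beam starts sweeping, so tracker-to-pixel latency grows linearly down the display, whereas a frameless renderer redraws each region just ahead of the beam. A back-of-the-envelope sketch; the display figures are illustrative assumptions, and only the 1 ms value comes from the paper:

```python
# Illustrative numbers: a 1080-row display refreshed at 90 Hz, with an
# assumed 11 ms frame-based pipeline latency at the top row.
ROWS, REFRESH_HZ = 1080, 90.0
scanout_ms = 1000.0 / REFRESH_HZ            # time for one sweep of the display

def frame_based_latency_ms(row, base_ms=11.0):
    # The frame was rendered before scan-out began, so latency grows
    # linearly as the beam moves down the display.
    return base_ms + scanout_ms * row / ROWS

def frameless_latency_ms(row, render_ms=1.0):
    # Each region is redrawn immediately before the beam reaches it, so the
    # tracker-to-pixel delay stays constant (~1 ms reported in the paper).
    return render_ms

print(frame_based_latency_ms(0), frame_based_latency_ms(ROWS - 1))  # ~11 -> ~22 ms
print(frameless_latency_ms(0), frameless_latency_ms(ROWS - 1))      # 1 -> 1 ms
```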

    In vivo human retinal and choroidal vasculature visualization using differential phase contrast swept source optical coherence tomography at 1060 nm

    A differential phase contrast (DPC) method is validated for in vivo human retinal and choroidal vasculature visualization using high-speed swept-source optical coherence tomography (SS-OCT) at 1060 nm. The vasculature was identified as regions of motion by creating differential phase variance (DPV) tomograms: multiple B-scans were collected of individual slices through the retina, and the variance of the phase differences was calculated. DPV captured the small vessels and the meshwork of capillaries associated with the inner retina in en face images over 4 mm^2 in a normal subject. En face DPV images were capable of capturing the microvasculature and regions of motion through the inner retina and choroid.
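    A minimal sketch of the DPV computation as described, assuming complex-valued, registered B-scans; real pipelines also correct for bulk axial motion per A-line, which is omitted here:

```python
import numpy as np

def dpv(bscans_complex):
    """Differential phase variance from N repeated complex B-scans.

    bscans_complex : ndarray, shape (N, depth, width), complex OCT signal.
    Phase differences between consecutive B-scans are wrapped to (-pi, pi]
    before taking the per-pixel variance; flowing blood decorrelates the
    phase and gives high variance, while static tissue stays low.
    """
    phase = np.angle(bscans_complex)
    dphi = np.diff(phase, axis=0)
    dphi = np.angle(np.exp(1j * dphi))   # wrap differences to (-pi, pi]
    return np.var(dphi, axis=0)
```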

    Markerless Motion Capture in the Crowd

    This work uses crowdsourcing to obtain motion capture data from video recordings. The data is obtained by information workers who click repeatedly to indicate body configurations in the frames of a video, resulting in a model of 2D structure over time. We discuss techniques to optimize the tracking task and strategies for maximizing accuracy and efficiency. We show visualizations of a variety of motions captured with our pipeline, then apply reconstruction techniques to derive 3D structure. (Comment: Presented at the Collective Intelligence conference, 2012; arXiv:1204.2991.)

    Differential intensity contrast swept source optical coherence tomography for human retinal vasculature visualization

    We demonstrate an intensity-based motion-sensitive method, called differential logarithmic intensity variance (DLOGIV), for 3D microvasculature imaging and foveal avascular zone (FAZ) visualization in the in vivo human retina using swept-source optical coherence tomography (SS-OCT) at 1060 nm. A motion-sensitive SS-OCT system operating at 50,000 A-lines/s with 5.9 μm axial resolution was developed and used to collect 3D images over 4 mm^2 in a normal subject's eye. Multiple B-scans were acquired at each individual slice through the retina, and the variance of the differences of logarithmic intensities, as well as the differential phase variance (DPV), was calculated to identify regions of motion (microvasculature). En face DLOGIV images were capable of capturing the microvasculature through depth with performance equal to that of DPV.

    Multi-party Interaction in a Virtual Meeting Room

    This paper presents an overview of the work carried out by the HMI group of the University of Twente in the domain of multi-party interaction. The process from automatic observation of behavioral aspects, through interpretation, to recognized behavior is discussed for various modalities and levels. We show how a virtual meeting room can be used both for visualization and evaluation of behavioral models and as a research tool for studying the effect of modified stimuli on the perception of behavior.

    Synthesis of variable dancing styles based on a compact spatiotemporal representation of dance

    Dance, as a complex expressive form of motion, is able to convey emotion, meaning, and social idiosyncrasies; it opens channels for non-verbal communication and promotes rich cross-modal interactions with music and the environment. As such, realistic dancing characters may incorporate cross-modal information and the variability of dance forms through compact representations that describe the movement structure in terms of its spatial and temporal organization. In this paper, we propose a novel method for synthesizing beat-synchronous dancing motions based on a compact topological model of dance styles, previously captured with a motion capture system. The model is based on Topological Gesture Analysis (TGA), which conveys a discrete three-dimensional point-cloud representation of the dance by describing the spatiotemporal variability of its gestural trajectories as uniform spherical distributions, according to classes of the musical meter. The synthesis methodology traces the topological representations, constrained by definable metrical and spatial parameters, back into complete dance instances whose variability is controlled by stochastic processes that consider both the TGA distributions and the kinematic constraints of the body morphology. In order to assess the relevance and flexibility of each parameter in faithfully reproducing the style of the captured dance, we correlated captured and synthesized trajectories of samba dancing sequences in relation to the level of compression of the model used, and report on a subjective evaluation over a set of six tests. The results validate our approach, suggesting that a periodic dancing style, and its musical synchrony, can be feasibly reproduced from a suitably parametrized discrete spatiotemporal representation of the gestural motion trajectories, with a notable degree of compression.
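    The abstract does not spell out the sampling procedure; the following hypothetical sketch (all names invented) illustrates the general idea for a single joint: draw one keypoint per metrical class from its spherical Gaussian, then interpolate a beat-synchronous path, leaving kinematic constraints aside:

```python
import numpy as np

def synthesize_trajectory(beat_means, beat_sigmas, samples_per_beat=16,
                          rng=np.random.default_rng(0)):
    """Hypothetical TGA-style synthesis sketch for one joint.

    beat_means  : (B, 3) centre of the point-cloud class for each metrical beat.
    beat_sigmas : (B,)   spread of each spherical distribution.
    Draws one 3D keypoint per beat from its spherical Gaussian, then linearly
    interpolates between keypoints to form a beat-synchronous trajectory.
    Kinematic limits (bone lengths, joint ranges) are not enforced here.
    """
    keys = beat_means + rng.standard_normal(beat_means.shape) * beat_sigmas[:, None]
    t_key = np.arange(len(keys))
    t_out = np.linspace(0, len(keys) - 1, samples_per_beat * (len(keys) - 1) + 1)
    return np.stack([np.interp(t_out, t_key, keys[:, d]) for d in range(3)], axis=1)
```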