
    Twente Optical Perfusion Camera: system overview and performance for video rate laser Doppler perfusion imaging

    We present the Twente Optical Perfusion Camera (TOPCam), a novel laser Doppler perfusion imager based on CMOS technology. The tissue under investigation is illuminated and the resulting dynamic speckle pattern is recorded with a high-speed CMOS camera. Based on an overall analysis of the signal-to-noise ratio of CMOS cameras, we selected the camera that best fits our requirements. We apply a pixel-by-pixel noise correction to minimize the influence of noise in the perfusion images. We achieve a frame rate of 0.2 fps for a perfusion image of 128×128 pixels (imaged tissue area of 7×7 cm²) if the data are analyzed online. If the analysis is performed offline, we achieve a frame rate of 26 fps for a duration of 3.9 seconds. By reducing the image size to 128×16 pixels, this frame rate can be sustained for up to half a minute. We demonstrate the fast imaging capabilities of the system in order of increasing perfusion frame rate. First, at the fastest frame rate allowed with online analysis, we show the increase in skin perfusion after application of capsicum cream and the perfusion during an occlusion-reperfusion procedure. Then, at the highest frame rate allowed with offline analysis, we present skin perfusion revealing the heartbeat and the perfusion during an occlusion-reperfusion procedure. Hence we have achieved video rate laser Doppler perfusion imaging.
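
    As a rough illustration of the per-pixel processing such a system implies, the sketch below computes a standard laser Doppler perfusion index: the first moment of the Doppler power spectrum of the intensity fluctuations at each pixel, normalised by the squared mean intensity, with a pixel-by-pixel noise-spectrum subtraction. The function name, the use of a separate reference recording to estimate the noise spectrum, and the array layout are illustrative assumptions, not the TOPCam pipeline.

```python
import numpy as np

def perfusion_map(frames, noise_frames, fs):
    """Sketch of a laser Doppler perfusion index computed per pixel.

    frames:       (T, H, W) intensity time series from the high-speed camera
    noise_frames: (T, H, W) reference recording used for noise correction
                  (hypothetical; any per-pixel noise spectrum estimate would do)
    fs:           frame rate in Hz
    """
    T = frames.shape[0]
    freqs = np.fft.rfftfreq(T, d=1.0 / fs)  # one-sided frequency axis (Hz)

    # Per-pixel power spectra of the intensity fluctuations.
    ac = frames - frames.mean(axis=0, keepdims=True)
    power = np.abs(np.fft.rfft(ac, axis=0)) ** 2

    noise_ac = noise_frames - noise_frames.mean(axis=0, keepdims=True)
    noise_power = np.abs(np.fft.rfft(noise_ac, axis=0)) ** 2

    # Pixel-by-pixel noise correction: subtract the reference spectrum.
    corrected = np.clip(power - noise_power, 0.0, None)

    # Perfusion ~ first moment of the Doppler power spectrum,
    # normalised by the squared mean intensity.
    first_moment = (freqs[:, None, None] * corrected).sum(axis=0)
    mean_intensity = frames.mean(axis=0)
    return first_moment / np.maximum(mean_intensity, 1e-12) ** 2
```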

    Delayed inhibition of an anticipatory action during motion extrapolation

    Background: Continuous visual information is important for movement initiation in a variety of motor tasks. However, even in the absence of visual information people are able to initiate their responses by using motion extrapolation processes. Initiation of actions based on these cognitive processes, however, can demand more attentional resources than required in situations in which visual information is uninterrupted. In the experiment reported here we sought to determine whether the absence of visual information would affect the latency to inhibit an anticipatory action. Methods: The participants performed an anticipatory timing task in which they were instructed to move in synchrony with the arrival of a moving object at a predetermined contact point. On 50% of the trials, a stop sign appeared on the screen and served as a signal for the participants to halt their movements. They performed the anticipatory task under two viewing conditions: Full-View (uninterrupted) and Occluded-View (occlusion of the last 500 ms prior to arrival at the contact point). Results: The results indicated that the absence of visual information prolonged the latency to suppress the anticipatory movement. Conclusion: We suggest that the absence of visual information requires additional cortical processing that creates competing demands for neural resources. Reduced neural resources potentially cause an increased reaction time to the inhibitory input or increased time estimation variability, which in combination would account for the prolonged latency.

    Video Frame Interpolation via Adaptive Separable Convolution

    Standard video frame interpolation methods first estimate optical flow between input frames and then synthesize an intermediate frame guided by motion. Recent approaches merge these two steps into a single convolution process by convolving input frames with spatially adaptive kernels that account for motion and re-sampling simultaneously. These methods require large kernels to handle large motion, which limits the number of pixels whose kernels can be estimated at once due to the large memory demand. To address this problem, this paper formulates frame interpolation as local separable convolution over the input frames using pairs of 1D kernels. Compared to regular 2D kernels, the 1D kernels require significantly fewer parameters to be estimated. Our method develops a deep fully convolutional neural network that takes two input frames and estimates pairs of 1D kernels for all pixels simultaneously. Since our method is able to estimate kernels and synthesize the whole video frame at once, it allows for the incorporation of perceptual loss to train the neural network to produce visually pleasing frames. This deep neural network is trained end-to-end using widely available video data without any human annotation. Both qualitative and quantitative experiments show that our method provides a practical solution to high-quality video frame interpolation. Comment: ICCV 2017, http://graphics.cs.pdx.edu/project/sepconv
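
    The core idea, replacing one large 2D kernel per pixel with a pair of 1D kernels whose outer product reconstructs it, can be sketched as follows. The grayscale, loop-based NumPy formulation and the argument shapes are simplifications chosen for illustration; this is not the paper's GPU implementation.

```python
import numpy as np

def sepconv_interpolate(frame1, frame2, kv1, kh1, kv2, kh2):
    """Synthesize an intermediate frame with per-pixel separable kernels.

    frame1, frame2: (H, W) input frames (grayscale for simplicity)
    kv*, kh*:       (H, W, n) vertical / horizontal 1D kernels per pixel,
                    as a kernel-prediction network would estimate them
    """
    H, W = frame1.shape
    n = kv1.shape[-1]
    pad = n // 2
    p1 = np.pad(frame1, pad, mode="edge")
    p2 = np.pad(frame2, pad, mode="edge")

    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            patch1 = p1[y:y + n, x:x + n]
            patch2 = p2[y:y + n, x:x + n]
            # Each 2D kernel is the outer product of a vertical and a
            # horizontal 1D kernel, so 2*n parameters replace n*n per frame.
            k1 = np.outer(kv1[y, x], kh1[y, x])
            k2 = np.outer(kv2[y, x], kh2[y, x])
            out[y, x] = (k1 * patch1).sum() + (k2 * patch2).sum()
    return out
```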

    Seeing Tree Structure from Vibration

    Humans recognize object structure from both their appearance and motion; often, motion helps to resolve ambiguities in object structure that arise when we observe object appearance only. There are particular scenarios, however, where neither appearance nor spatial-temporal motion signals are informative: occluding twigs may look connected and have almost identical movements, though they belong to different, possibly disconnected branches. We propose to tackle this problem through spectrum analysis of motion signals, because vibrations of disconnected branches, though visually similar, often have distinctive natural frequencies. We propose a novel formulation of tree structure based on a physics-based link model, and validate its effectiveness by theoretical analysis, numerical simulation, and empirical experiments. With this formulation, we use nonparametric Bayesian inference to reconstruct tree structure from both spectral vibration signals and appearance cues. Our model performs well in recognizing hierarchical tree structure from real-world videos of trees and vessels. Comment: ECCV 2018. The first two authors contributed equally to this work. Project page: http://tree.csail.mit.edu
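
    To make the spectral cue concrete, the sketch below extracts the dominant vibration frequency of a tracked point's displacement signal and uses a naive frequency-matching threshold as a connectivity hint. The function names and the threshold are hypothetical; the paper combines such spectral signals with appearance cues through a physics-based link model and nonparametric Bayesian inference rather than a hard threshold.

```python
import numpy as np

def dominant_frequency(trajectory, fs):
    """Peak frequency (Hz) of a 1D displacement signal, e.g. a tracked
    twig point's horizontal position over time, sampled at fs Hz."""
    x = trajectory - trajectory.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power[0] = 0.0                      # ignore the DC component
    return freqs[np.argmax(power)]

def same_branch_score(traj_a, traj_b, fs, tol=0.2):
    """Crude connectivity cue: two points sharing a natural frequency
    (within tol Hz) are more likely to lie on the same branch."""
    fa = dominant_frequency(traj_a, fs)
    fb = dominant_frequency(traj_b, fs)
    return float(abs(fa - fb) < tol)
```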