Video Acceleration Magnification
The ability to amplify or reduce subtle image changes over time is useful in
contexts such as video editing, medical video analysis, product quality control
and sports. In these contexts there is often large motion present which
severely distorts current video amplification methods that magnify change
linearly. In this work we propose a method to cope with large motions while
still magnifying small changes. We make the following two observations: i)
large motions are linear on the temporal scale of the small changes; ii) small
changes deviate from this linearity. We therefore ignore linear motion and
magnify acceleration instead. Our method is purely Eulerian and requires no
optical flow, temporal alignment, or region annotations. We link temporal
second-order derivative filtering to spatial acceleration magnification. We
apply our method to moving objects where we show motion magnification and color
magnification. We provide quantitative as well as qualitative evidence for our
method while comparing to the state-of-the-art.
Comment: Accepted paper at CVPR 2017. Project webpage:
http://acceleration-magnification.github.io
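The core observation above can be sketched numerically: a constant-velocity (linear) intensity change has zero temporal second derivative, so a discrete second-derivative filter passes only the small deviations from linearity, which are then amplified. A minimal numpy sketch, where the function name and the amplification factor are illustrative and not the paper's implementation:

```python
import numpy as np

def magnify_acceleration(frames, alpha=8.0):
    """Toy Eulerian acceleration magnification (illustrative sketch).

    frames: array of shape (T, H, W), pixel intensities over time.
    The discrete filter [1, -2, 1] is a temporal second derivative:
    it is zero for linear (large, smooth) motion and non-zero for
    the small accelerating changes we want to magnify.
    """
    frames = np.asarray(frames, dtype=np.float64)
    accel = np.zeros_like(frames)
    # Temporal second derivative per pixel (boundaries left at zero).
    accel[1:-1] = frames[2:] - 2.0 * frames[1:-1] + frames[:-2]
    # Amplify the acceleration component and add it back.
    return frames + alpha * accel
```

A pixel whose intensity ramps linearly over time passes through unchanged, while a brief non-linear deviation is amplified by `alpha`.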
In vivo volumetric imaging of human retinal circulation with phase-variance optical coherence tomography
We present in vivo volumetric images of human retinal micro-circulation using Fourier-domain optical coherence tomography (Fd-OCT) with the phase-variance based motion contrast method. Currently, fundus fluorescein angiography (FA) is the standard technique in clinical settings for visualizing blood circulation of the retina. High contrast imaging of retinal vasculature is achieved by injection of a fluorescein dye into the systemic circulation. We previously reported phase-variance optical coherence tomography (pvOCT) as an alternative and non-invasive technique to image human retinal capillaries. In contrast to FA, pvOCT allows not only noninvasive visualization of a two-dimensional retinal perfusion map but also volumetric morphology of retinal microvasculature with high sensitivity. In this paper we report high-speed acquisition at an A-scan rate of 125 kHz with pvOCT to reduce motion artifacts and increase the scanning area when compared with previous reports. Two scanning schemes with different sampling densities and scanning areas are evaluated to find optimal parameters for high-speed in vivo imaging. In order to evaluate this technique, we compare pvOCT capillary imaging at 3x3 mm^2 and 1.5x1.5 mm^2 with fundus FA for a normal human subject. Additionally, a volumetric view of retinal capillaries and a stitched image acquired with ten 3x3 mm^2 pvOCT sub-volumes are presented. Visualization of retinal vasculature with pvOCT has potential for the diagnosis of retinal vascular diseases.
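The phase-variance contrast idea can be sketched compactly: across repeated B-scans of the same location, static tissue returns nearly identical phases, so the variance of frame-to-frame phase differences is near zero, while flowing blood decorrelates the phase and produces high variance. A hedged numpy sketch, with illustrative names and no claim to match the authors' processing pipeline:

```python
import numpy as np

def phase_variance_contrast(bscans):
    """Toy phase-variance motion contrast (illustrative sketch).

    bscans: complex array of shape (N, Z, X) holding N repeated
    B-scans acquired at the same transverse position.
    Returns a (Z, X) map: ~0 for static tissue, large where
    moving scatterers (blood flow) randomize the phase.
    """
    phase = np.angle(np.asarray(bscans))
    # Frame-to-frame phase differences, rewrapped into (-pi, pi].
    dphi = np.angle(np.exp(1j * np.diff(phase, axis=0)))
    return np.var(dphi, axis=0)
```

In practice, bulk eye motion adds a common phase offset per frame that must be estimated and removed before the variance step; the sketch omits that correction.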
Video Frame Interpolation via Adaptive Separable Convolution
Standard video frame interpolation methods first estimate optical flow
between input frames and then synthesize an intermediate frame guided by
motion. Recent approaches merge these two steps into a single convolution
process by convolving input frames with spatially adaptive kernels that account
for motion and re-sampling simultaneously. These methods require large kernels
to handle large motion, which limits the number of pixels whose kernels can be
estimated at once due to the large memory demand. To address this problem, this
paper formulates frame interpolation as local separable convolution over input
frames using pairs of 1D kernels. Compared to regular 2D kernels, the 1D
kernels require significantly fewer parameters to be estimated. Our method
develops a deep fully convolutional neural network that takes two input frames
and estimates pairs of 1D kernels for all pixels simultaneously. Since our
method estimates kernels and synthesizes the whole video frame at
once, it allows for the incorporation of perceptual loss to train the neural
network to produce visually pleasing frames. This deep neural network is
trained end-to-end using widely available video data without any human
annotation. Both qualitative and quantitative experiments show that our method
provides a practical solution to high-quality video frame interpolation.
Comment: ICCV 2017, http://graphics.cs.pdx.edu/project/sepconv
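The parameter saving from the 1D kernel pairs is easy to see for a single output pixel: the outer product of a vertical and a horizontal length-K kernel forms a K x K 2D kernel, so K*K weights are parameterized by only 2*K numbers per frame. A minimal numpy sketch of the per-pixel synthesis step, with illustrative names and no claim to reproduce the paper's network:

```python
import numpy as np

def separable_synthesis(patch1, patch2, kv1, kh1, kv2, kh2):
    """One interpolated output pixel from two input-frame patches.

    patch1, patch2: (K, K) neighborhoods around the pixel in the
    two input frames.
    kv*, kh*: length-K vertical/horizontal 1D kernels estimated
    for this pixel; their outer product is the effective 2D kernel.
    """
    k1 = np.outer(kv1, kh1)  # (K, K) kernel for frame 1
    k2 = np.outer(kv2, kh2)  # (K, K) kernel for frame 2
    return float(np.sum(k1 * patch1) + np.sum(k2 * patch2))
```

For K = 51 (large enough to cover large motion), a full 2D kernel needs 2601 coefficients per pixel per frame, versus 102 for the separable pair, which is what makes estimating kernels for all pixels at once feasible.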