Wavelet-based denoising for 3D OCT images
Optical coherence tomography produces high-resolution medical images based on the spatial and temporal coherence of the optical waves backscattered from the scanned tissue. However, the same coherence also introduces speckle noise, which degrades the quality of the acquired images.
In this paper we propose a technique for noise reduction in 3D OCT images, where the 3D volume is treated as a sequence of 2D images, i.e., 2D slices in the depth-lateral projection plane. In the proposed method we first perform recursive temporal filtering along the estimated motion trajectory between the 2D slices, using a noise-robust motion estimation/compensation scheme previously proposed for video denoising. The temporal filtering scheme reduces the noise level and adapts the motion compensation to it. Subsequently, we apply a spatial filter for speckle reduction in order to remove the remaining noise in the 2D slices. In this scheme the spatial (2D) speckle nature of OCT noise is modeled and used for spatially adaptive denoising. Both the temporal and the spatial filter are wavelet-based techniques: the temporal filter uses two resolution scales and the spatial filter four.
The proposed denoising approach is evaluated on demodulated 3D OCT images from different sources and of different resolutions. Phantom OCT images were used to optimize the parameters for best denoising performance. The denoising performance of the proposed method was measured in terms of SNR, edge sharpness preservation, and contrast-to-noise ratio. A comparison was made to state-of-the-art methods for noise reduction in 2D OCT images, and the proposed approach proved advantageous in terms of both objective and subjective quality measures.
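The abstract's core building block is wavelet-domain thresholding of each 2D slice. As a minimal illustration only (not the authors' actual filter, which is motion-compensated, speckle-adaptive, and multi-scale), the sketch below applies a single-level 2D Haar transform and soft-thresholds the detail subbands; all function names are assumptions:

```python
import numpy as np

def haar2d(x):
    """Single-level 2D Haar analysis: returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2] + x[1::2]) / 2.0          # row low-pass
    d = (x[0::2] - x[1::2]) / 2.0          # row high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0   # low-low (approximation)
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0   # detail subbands
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction)."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def soft(c, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise_slice(img, thresh):
    """Threshold only the detail subbands, keep the approximation."""
    ll, lh, hl, hh = haar2d(img)
    return ihaar2d(ll, soft(lh, thresh), soft(hl, thresh), soft(hh, thresh))
```

The paper's spatial filter instead adapts the threshold locally to the modeled speckle statistics and works over four resolution scales; this fixed global threshold is the simplest possible stand-in.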
Deep Video Generation, Prediction and Completion of Human Action Sequences
Current deep learning results on video generation are limited, there are only a few first results on video prediction, and there are no significant results on video completion. This is due to the severe ill-posedness inherent in these three problems. In this paper, we focus on human action videos and propose a general, two-stage deep framework that generates human action videos with no constraints or an arbitrary number of constraints, uniformly addressing the three problems: video generation given no input frames, video prediction given the first few frames, and video completion given the first and last frames. To make the problem tractable, in the first stage we train a deep generative model that generates a human pose sequence from random noise. In the second stage, a skeleton-to-image network is trained, which is used to generate a human action video given the complete human pose sequence generated in the first stage. By introducing the two-stage strategy, we sidestep the original ill-posed problems while producing, for the first time, high-quality video generation/prediction/completion results of much longer duration. We present quantitative and qualitative evaluations showing that our two-stage approach outperforms state-of-the-art methods in video generation, prediction, and completion. Our video result demonstration can be viewed at https://iamacewhite.github.io/supp/index.html
Comment: Under review for CVPR 2018. Haoye and Chunyan have equal contribution.
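The two-stage decomposition described above (noise → pose sequence → rendered frames) can be sketched schematically. The toy code below is not the paper's architecture: both learned networks are replaced by trivial placeholders (a random linear map for the pose generator, a dot-plotting renderer for the skeleton-to-image network), and all names, shapes, and sizes are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: T frames, J joints with 2D coordinates, Z-dim noise.
T, J, Z = 16, 15, 8
W_pose = rng.standard_normal((Z, T * J * 2)) * 0.1  # placeholder "weights"

def pose_generator(z):
    """Stage 1 stand-in: map a noise vector to a (T, J, 2) pose sequence."""
    return (z @ W_pose).reshape(T, J, 2)

def skeleton_to_image(pose, size=32):
    """Stage 2 stand-in: render each joint as a bright pixel on a blank frame."""
    frame = np.zeros((size, size))
    xy = np.clip(((pose + 1.0) / 2.0 * (size - 1)).astype(int), 0, size - 1)
    frame[xy[:, 1], xy[:, 0]] = 1.0
    return frame

# Composing the two stages yields a full video from noise alone (generation);
# in the paper, prediction and completion constrain stage 1 with given frames.
z = rng.standard_normal(Z)
poses = pose_generator(z)                                # (T, J, 2)
video = np.stack([skeleton_to_image(p) for p in poses])  # (T, 32, 32)
```

The point of the decomposition is that stage 1 works in the low-dimensional pose space, where generation, prediction, and completion are far better posed than in raw pixel space; stage 2 then only has to solve a deterministic rendering problem.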