Exploiting flow dynamics for super-resolution in contrast-enhanced ultrasound
Ultrasound localization microscopy offers new radiation-free diagnostic tools
for vascular imaging deep within the tissue. Sequential localization of echoes
returned from inert microbubbles at low concentration within the bloodstream
reveals the vasculature with capillary resolution. Despite its high spatial
resolution, low microbubble concentrations dictate the acquisition of tens of
thousands of images, over the course of several seconds to tens of seconds, to
produce a single super-resolved image, since each echo must be well separated
from adjacent microbubbles. Such long acquisition times and stringent
constraints on microbubble concentration are undesirable in many clinical
scenarios. To address these restrictions, sparsity-based approaches have
recently been developed. These methods reduce the total acquisition time
dramatically, while maintaining good spatial resolution in settings with
considerable microbubble overlap. Yet, none of the reported methods exploit the
fact that microbubbles actually flow within the bloodstream to improve
recovery. Here, we further improve sparsity-based super-resolution ultrasound
imaging by exploiting the inherent flow of microbubbles and utilizing their
motion kinematics. In doing so, we also provide quantitative measurements of
microbubble velocities. Our method relies on simultaneous tracking and
super-localization of individual microbubbles in a frame-by-frame manner, and
as such, may be suitable for real-time implementation. We demonstrate the
effectiveness of the proposed approach on both simulations and in-vivo
contrast-enhanced human prostate scans, acquired with a clinically approved
scanner.
Comment: 11 pages, 9 figures
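The frame-by-frame pipeline described above can be illustrated with a minimal sketch: detect isolated echoes as local maxima, then associate detections across consecutive frames to estimate per-bubble velocities. The function names and the nearest-neighbour association are assumptions for illustration; the paper's actual method solves a sparse recovery problem rather than simple peak picking.

```python
import numpy as np

def localize_bubbles(frame, thresh):
    """Detect isolated microbubble echoes as local maxima above a threshold.
    A toy stand-in for the super-localization step (the paper's recovery is
    a sparsity-based inverse problem, not peak picking)."""
    H, W = frame.shape
    peaks = []
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            v = frame[y, x]
            if v > thresh and v == frame[y - 1:y + 2, x - 1:x + 2].max():
                peaks.append((y, x))
    return peaks

def track_velocities(prev_pts, curr_pts, dt, max_disp=3.0):
    """Nearest-neighbour association between consecutive frames yields a
    per-bubble velocity estimate (pixels per unit time), giving the
    quantitative flow measurements mentioned in the abstract."""
    vels = []
    for py, px in prev_pts:
        if not curr_pts:
            break
        d = [np.hypot(cy - py, cx - px) for cy, cx in curr_pts]
        j = int(np.argmin(d))
        if d[j] <= max_disp:  # reject implausibly large displacements
            cy, cx = curr_pts[j]
            vels.append(((cy - py) / dt, (cx - px) / dt))
    return vels
```

Because each frame is processed as it arrives, this structure is what makes a real-time implementation plausible.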
Semantically Guided Depth Upsampling
We present a novel method for accurate and efficient upsampling of sparse
depth data, guided by high-resolution imagery. Our approach goes beyond the use
of intensity cues only and additionally exploits object boundary cues through
structured edge detection and semantic scene labeling for guidance. Both cues
are combined within a geodesic distance measure that allows for
boundary-preserving depth interpolation while utilizing local context. We
model the observed scene structure by locally planar elements and formulate the
upsampling task as a global energy minimization problem. Our method determines
globally consistent solutions and preserves fine details and sharp depth
boundaries. In our experiments on several public datasets at different levels
of application, we demonstrate superior performance of our approach over the
state-of-the-art, even for very sparse measurements.
Comment: German Conference on Pattern Recognition 2016 (Oral)
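The geodesic-distance idea can be sketched with a toy nearest-seed assignment: each pixel receives the depth of the sparse measurement that is closest under a path cost that grows with intensity changes in the guide image, so propagation stops at object boundaries. This is a simplification assumed for illustration; the paper's method additionally fits locally planar elements inside a global energy minimization.

```python
import heapq
import numpy as np

def geodesic_depth_upsample(guide, sparse_depth, edge_weight=10.0):
    """Propagate sparse depths by Dijkstra on a grid whose step cost is
    1 + edge_weight * |intensity difference|, so depth does not leak
    across strong guide-image edges. `sparse_depth` maps (y, x) -> depth."""
    H, W = guide.shape
    dist = np.full((H, W), np.inf)
    out = np.full((H, W), np.nan)
    heap = []
    for (y, x), d in sparse_depth.items():
        dist[y, x] = 0.0
        out[y, x] = d
        heapq.heappush(heap, (0.0, y, x))
    while heap:
        c, y, x = heapq.heappop(heap)
        if c > dist[y, x]:
            continue  # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                step = 1.0 + edge_weight * abs(float(guide[ny, nx]) - float(guide[y, x]))
                nc = c + step
                if nc < dist[ny, nx]:
                    dist[ny, nx] = nc
                    out[ny, nx] = out[y, x]  # inherit nearest seed's depth
                    heapq.heappush(heap, (nc, ny, nx))
    return out
```

On a guide image split into two intensity regions, each region ends up filled with the depth of its own seed, which is the boundary-preserving behaviour the abstract describes.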
An Efficient Algorithm for Video Super-Resolution Based On a Sequential Model
In this work, we propose a novel procedure for video super-resolution, that
is, the recovery of a sequence of high-resolution images from its low-resolution
counterpart. Our approach is based on a "sequential" model (i.e., each
high-resolution frame is supposed to be a displaced version of the preceding
one) and considers the use of sparsity-enforcing priors. Both the recovery of
the high-resolution images and of the motion fields relating them are tackled. This
leads to a large-dimensional, non-convex and non-smooth problem. We propose an
algorithmic framework to address the latter. Our approach relies on fast
gradient evaluation methods and modern optimization techniques for
non-differentiable/non-convex problems. Unlike some other previous works, we
show that there exists a provably-convergent method with a complexity linear in
the problem dimensions. We assess the proposed optimization method on several
video benchmarks and emphasize its good performance with respect to the state
of the art.
Comment: 37 pages, SIAM Journal on Imaging Sciences, 201
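The sparsity-enforcing building block behind such frameworks can be sketched with a proximal-gradient (ISTA) step for a convex l1-regularized least-squares subproblem. This is an assumed simplification: the paper's full problem is non-convex and jointly estimates images and motion fields, whereas this sketch shows only the kind of image-update step such a scheme alternates with.

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Proximal gradient for min_x 0.5*||Ax - y||^2 + lam*||x||_1:
    a gradient step on the smooth data term followed by soft-thresholding,
    the proximal operator of the l1 sparsity prior."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the data-fidelity term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

Each iteration costs one multiplication by A and one by its transpose, which is what makes a complexity linear in the problem dimensions attainable when those products are fast.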
Video Frame Interpolation via Adaptive Separable Convolution
Standard video frame interpolation methods first estimate optical flow
between input frames and then synthesize an intermediate frame guided by
motion. Recent approaches merge these two steps into a single convolution
process by convolving input frames with spatially adaptive kernels that account
for motion and re-sampling simultaneously. These methods require large kernels
to handle large motion, which limits the number of pixels whose kernels can be
estimated at once due to the large memory demand. To address this problem, this
paper formulates frame interpolation as local separable convolution over input
frames using pairs of 1D kernels. Compared to regular 2D kernels, the 1D
kernels require significantly fewer parameters to be estimated. Our method
develops a deep fully convolutional neural network that takes two input frames
and estimates pairs of 1D kernels for all pixels simultaneously. Since our
method is able to estimate kernels and synthesize the whole video frame at
once, it allows for the incorporation of perceptual loss to train the neural
network to produce visually pleasing frames. This deep neural network is
trained end-to-end using widely available video data without any human
annotation. Both qualitative and quantitative experiments show that our method
provides a practical solution to high-quality video frame interpolation.
Comment: ICCV 2017, http://graphics.cs.pdx.edu/project/sepconv
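The separable-kernel synthesis step can be sketched as follows: for each output pixel, the outer product of a vertical and a horizontal 1D kernel stands in for a full 2D adaptive kernel (2k parameters instead of k*k), and the interpolated pixel is the sum of the two filtered input patches. The kernels here are supplied as plain arrays for illustration; in the paper they are predicted per pixel by a deep fully convolutional network.

```python
import numpy as np

def apply_separable_kernels(frame1, frame2, kv1, kh1, kv2, kh2):
    """Synthesize an intermediate frame from two inputs. kv*/kh* have shape
    (H, W, k): per-pixel vertical and horizontal 1D kernels whose outer
    product forms the local adaptive 2D kernel for that input frame."""
    H, W, k = kv1.shape
    r = k // 2
    p1 = np.pad(frame1, r, mode="edge")
    p2 = np.pad(frame2, r, mode="edge")
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            patch1 = p1[y:y + k, x:x + k]
            patch2 = p2[y:y + k, x:x + k]
            K1 = np.outer(kv1[y, x], kh1[y, x])  # k*k kernel from 2k values
            K2 = np.outer(kv2[y, x], kh2[y, x])
            out[y, x] = (K1 * patch1).sum() + (K2 * patch2).sum()
    return out
```

With kernels whose responses on the two frames each carry half the total weight, the output reduces to a blend of the inputs; motion handling comes entirely from how the learned kernels shift their mass within the k x k support.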