High frame-rate cardiac ultrasound imaging with deep learning
Cardiac ultrasound imaging requires a high frame rate in order to capture
rapid motion. This can be achieved by multi-line acquisition (MLA), where
several narrow-focused received lines are obtained from each wide-focused
transmitted line. This shortens the acquisition time at the expense of
introducing block artifacts. In this paper, we propose a data-driven
learning-based approach to improve the MLA image quality. We train an
end-to-end convolutional neural network on pairs of real ultrasound cardiac
data, acquired through MLA and the corresponding single-line acquisition (SLA).
The network achieves a significant improvement in image quality for both tested
MLA settings, resulting in a decorrelation measure similar to that of SLA while
retaining the frame rate of MLA.
Comment: To appear in the Proceedings of MICCAI, 201
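The block artifacts targeted above appear as artificial correlation between laterally adjacent image lines. As a rough illustration (the paper's exact decorrelation metric may differ, and this toy data is not ultrasound), a simple adjacent-line correlation measure separates an idealized SLA image from an MLA-style image built by duplicating every fourth line:

```python
import numpy as np

def lateral_correlation(img):
    """Mean normalized correlation between adjacent columns; a simple
    proxy for the decorrelation measure used to compare MLA and SLA."""
    a, b = img[:, :-1], img[:, 1:]
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    num = np.sum(a * b, axis=0)
    den = np.sqrt(np.sum(a ** 2, axis=0) * np.sum(b ** 2, axis=0))
    return float(np.mean(num / den))

rng = np.random.default_rng(4)
sla = rng.normal(size=(128, 64))          # toy SLA: independent lines
mla = np.repeat(sla[:, ::4], 4, axis=1)   # toy 4-line MLA: blocks of
                                          # duplicated lines (artifact)
```

On this toy data the MLA-style image shows a much higher adjacent-line correlation, which is the signature the learned network is trained to remove.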
Statistical Region Based Segmentation of Ultrasound Images
Segmentation of ultrasound images is a challenging problem due to speckle, which
corrupts the image and can result in weak or missing image boundaries, poor
signal-to-noise ratio, and diminished contrast resolution. Speckle is a random
interference pattern characterized by an asymmetric distribution as well as
significant spatial correlation. These attributes of speckle are challenging to
model in a segmentation approach, so many previous ultrasound segmentation
methods simplify the problem by assuming that the speckle is white and/or
Gaussian distributed. Unlike these methods, in this paper
we present an ultrasound-specific segmentation approach that addresses both the
spatial correlation of the data and its intensity distribution. We first
decorrelate the image and then apply a region-based active contour whose motion
is derived from an appropriate parametric distribution for maximum-likelihood
image segmentation. We consider
zero-mean complex Gaussian, Rayleigh, and Fisher-Tippett flows, which are designed
to model fully formed speckle in the in-phase/quadrature (IQ), envelope detected, and
display (log compressed) images, respectively. We present experimental results
demonstrating the effectiveness of our method, and compare the results to other
parametric and non-parametric active contours.
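As a hedged sketch of the maximum-likelihood region statistic behind such a flow (function names here are illustrative, not the paper's), the Rayleigh model for envelope-detected speckle has the closed-form MLE s2_hat = mean(x^2)/2, and a two-region partition can be scored by its total log-likelihood:

```python
import numpy as np

def rayleigh_loglik(region):
    """Log-likelihood of envelope-detected pixels under a Rayleigh model,
    p(x; s2) = (x / s2) * exp(-x^2 / (2 s2)), evaluated at the
    maximum-likelihood scale estimate s2_hat = mean(x^2) / 2."""
    x = np.asarray(region, dtype=float).ravel()
    s2_hat = np.mean(x ** 2) / 2.0
    return np.sum(np.log(x) - np.log(s2_hat) - x ** 2 / (2.0 * s2_hat))

def region_energy(image, mask):
    """Two-region ML score (hypothetical helper): a region-based active
    contour would evolve `mask` so as to increase this quantity."""
    return rayleigh_loglik(image[mask]) + rayleigh_loglik(image[~mask])

# Demo: two speckle populations with different Rayleigh scales.
rng = np.random.default_rng(0)
img = np.concatenate([rng.rayleigh(scale=1.0, size=5000),
                      rng.rayleigh(scale=3.0, size=5000)])
true_mask = np.arange(img.size) < 5000    # correct partition
wrong_mask = rng.random(img.size) < 0.5   # random partition
```

The correct partition scores a higher total likelihood than a random one, which is exactly the gradient information the ML contour flow exploits.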
Noise in optical synthesis images. II. Sensitivity of an ^nC_2 interferometer with bispectrum imaging
We study the imaging sensitivity of a ground-based optical array of n apertures in which the beams are combined pairwise, as in radio-interferometric arrays, onto n(n - 1)/2 detectors, the so-called ^nC_2 interferometer. Ground-based operation forces the use of the fringe power and the bispectrum phasor as the primary observables rather than the simpler and superior observable, the Michelson fringe phasor. At high photon rates we find that bispectral imaging suffers no loss of sensitivity compared with an ideal (space-based) array that directly uses the Michelson fringe phasor. In the opposite limit, when the number of photons per spatial coherence area per coherence time drops below unity, the sensitivity of the array drops rapidly relative to an ideal array. In this regime the sensitivity is independent of n, and hence it may be efficient to have many smaller arrays, each operating separately and simultaneously.
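The reason the bispectrum phasor survives ground-based operation is that per-aperture atmospheric piston phases cancel around a closed baseline triangle. A minimal numpy check of this cancellation (a textbook identity, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# True complex visibilities on the three baselines of one aperture triangle.
V12, V23, V31 = [np.exp(1j * rng.uniform(-np.pi, np.pi)) for _ in range(3)]

# Atmospheric piston phase over each aperture corrupts the measured
# fringes: V'_ij = V_ij * exp(i (phi_i - phi_j)).
phi = rng.uniform(-np.pi, np.pi, size=3)
M12 = V12 * np.exp(1j * (phi[0] - phi[1]))
M23 = V23 * np.exp(1j * (phi[1] - phi[2]))
M31 = V31 * np.exp(1j * (phi[2] - phi[0]))

# Around the closed triangle the aperture phases sum to zero, so the
# bispectrum phasor is immune to them.
B_true = V12 * V23 * V31
B_meas = M12 * M23 * M31
closure_true = np.angle(B_true)   # closure phase of the source
closure_meas = np.angle(B_meas)   # recovered despite piston errors
```

The Michelson fringe phase of each individual baseline is scrambled by `phi`, but the closure phase is not; this is the trade the abstract describes.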
A new kernel method for hyperspectral image feature extraction
Hyperspectral images provide abundant spectral information for remote discrimination of subtle differences in ground cover. However, the increasing spectral dimensionality, as well as the information redundancy, makes the analysis and interpretation of hyperspectral images a challenge. Feature extraction is a very important step in hyperspectral image processing. Feature extraction methods aim to reduce the dimension of the data while preserving as much information as possible. In particular, nonlinear feature extraction methods (e.g. the kernel minimum noise fraction (KMNF) transformation) have been reported to benefit many applications of hyperspectral remote sensing, due to their good preservation of high-order structure in the original data. However, conventional KMNF and its extensions have limitations in noise-fraction estimation during feature extraction, which leads to poor performance in post-applications. This paper proposes a novel nonlinear feature extraction method for hyperspectral images. Instead of estimating the noise fraction from nearest-neighborhood information (within a sliding window), the proposed method explores the use of image segmentation. The approach benefits both noise-fraction estimation and information preservation, and enables a significant improvement in classification. Experimental results on two real hyperspectral images demonstrate the efficiency of the proposed method. Compared to conventional KMNF, the improvements of the method on the two hyperspectral image classification tasks are 8% and 11%, respectively. This nonlinear feature extraction method can also be applied to other disciplines where high-dimensional data analysis is required.
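The kernel variant is beyond a snippet, but the linear MNF core that KMNF generalizes (estimate noise from adjacent-pixel differences, then order components by noise fraction via a generalized eigenproblem) can be sketched as follows; this is illustrative only, and uses the conventional neighbor-difference noise estimate that the paper's segmentation-based approach improves upon:

```python
import numpy as np

def mnf(cube):
    """Linear minimum-noise-fraction transform of a hyperspectral cube
    shaped (rows, cols, bands); the linear core of KMNF."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b)
    X = X - X.mean(axis=0)
    # Conventional MNF noise estimate: differences of horizontally
    # adjacent pixels (each difference carries sqrt(2) x the noise).
    N = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, b) / np.sqrt(2.0)
    Sn = np.cov(N, rowvar=False)
    Sx = np.cov(X, rowvar=False)
    # Generalized eigenproblem Sn w = lam Sx w, solved by whitening with
    # the Cholesky factor of Sx; small lam means a low noise fraction.
    L = np.linalg.cholesky(Sx)
    Li = np.linalg.inv(L)
    lam, U = np.linalg.eigh(Li @ Sn @ Li.T)   # ascending eigenvalues
    W = Li.T @ U
    return X @ W, lam

# Toy cube: signal constant along rows, plus white noise.
rng = np.random.default_rng(2)
signal = np.repeat(rng.normal(size=(32, 1, 4)), 32, axis=1)
cube = signal + 0.3 * rng.normal(size=(32, 32, 4))
comps, lam = mnf(cube)
```

Components come out ordered by increasing noise fraction, so truncating to the first few performs the dimensionality reduction described above.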
Adaptive image noise filtering using transform domain local statistics
Sunyaev-Zel'dovich clusters reconstruction in multiband bolometer camera surveys
We present a new method for the reconstruction of Sunyaev-Zel'dovich (SZ)
galaxy clusters in future SZ-survey experiments using multiband bolometer
cameras such as Olimpo, APEX, or Planck. Our goal is to optimise SZ-Cluster
extraction from our observed noisy maps. We wish to emphasize that none of the
algorithms used in the detection chain is tuned using prior knowledge of the
SZ-Cluster signal or of other astrophysical sources (optical spectrum, noise
covariance matrix, or covariance of SZ-Cluster wavelet coefficients). First, a
blind separation of the different astrophysical components which contribute to
the observations is conducted using an Independent Component Analysis (ICA)
method. Then, a recent nonlinear filtering technique in the wavelet domain,
based on multiscale entropy and the False Discovery Rate (FDR) method, is used
to detect and reconstruct the galaxy clusters. Finally, we use the Source
Extractor software to identify the detected clusters. The proposed method was
applied to realistic simulations of observations. In terms of global detection
efficiency, the new method performs impressively, providing results comparable
to the method of Pierpaoli et al. while being a blind algorithm. A preprint with
full-resolution figures is available at:
w10-dapnia.saclay.cea.fr/Phocea/Vie_des_labos/Ast/ast_visu.php?id_ast=728
Comment: Submitted to A&A. 32 pages, text only
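The FDR step above is commonly implemented with the Benjamini-Hochberg procedure. Assuming p-values have already been computed for the wavelet coefficients from a noise model, threshold selection looks like this (a generic sketch, not the authors' code):

```python
import numpy as np

def fdr_threshold(pvals, q=0.05):
    """Benjamini-Hochberg procedure: return a boolean detection mask
    that controls the false discovery rate at level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k/m) * q and reject 1..k.
    below = ranked <= (np.arange(1, m + 1) / m) * q
    detect = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # 0-based index of last pass
        detect[order[: k + 1]] = True
    return detect

# Hypothetical wavelet-coefficient p-values: three strong detections,
# two consistent with noise.
p = [0.01, 0.02, 0.03, 0.5, 0.8]
mask = fdr_threshold(p, q=0.1)
```

Because the threshold adapts to the ranked p-values, the procedure needs no prior tuning on the signal, consistent with the blind philosophy of the pipeline.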
Real-time Controllable Denoising for Image and Video
Controllable image denoising aims to generate clean samples with human
perceptual priors and balance sharpness and smoothness. In traditional
filter-based denoising methods, this can be easily achieved by adjusting the
filtering strength. However, for NN (Neural Network)-based models, adjusting
the final denoising strength requires performing network inference each time,
making it almost impossible for real-time user interaction. In this paper, we
introduce Real-time Controllable Denoising (RCD), the first deep image and
video denoising pipeline that provides a fully controllable user interface to
edit arbitrary denoising levels in real-time with only one-time network
inference. Unlike existing controllable denoising methods that require multiple
denoisers and training stages, RCD replaces the last output layer (which
usually outputs a single noise map) of an existing CNN-based model with a
lightweight module that outputs multiple noise maps. We propose a novel Noise
Decorrelation process to enforce the orthogonality of the noise feature maps,
allowing arbitrary noise level control through noise map interpolation. This
process is network-free and does not require network inference. Our experiments
show that RCD can enable real-time editable image and video denoising for
various existing heavy-weight models without sacrificing their original
performance.
Comment: CVPR 202
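A minimal sketch of the noise-decorrelation idea, assuming Gram-Schmidt orthogonalization as the mechanism (the paper's exact procedure may differ): once the predicted noise maps are orthonormal, any interpolation weights mix them with no cross terms, so the resulting noise energy is exactly the sum of squared weights and the denoising level can be dialed continuously without re-running the network.

```python
import numpy as np

def decorrelate(maps):
    """Gram-Schmidt orthonormalization of flattened noise maps
    (a stand-in for the paper's Noise Decorrelation step)."""
    out = []
    for m in maps:
        v = m.astype(float).ravel().copy()
        for u in out:
            v -= (v @ u) * u        # remove components along earlier maps
        v /= np.linalg.norm(v)
        out.append(v)
    return np.stack(out)

rng = np.random.default_rng(3)
raw = rng.normal(size=(3, 64 * 64))   # three predicted noise maps
ortho = decorrelate(raw)

# Interpolating orthogonal maps gives predictable noise energy:
w = np.array([0.2, 0.5, 0.3])         # user-chosen denoising weights
mixed = w @ ortho
energy = np.sum(mixed ** 2)           # equals sum(w**2), no cross terms
```

This is the network-free part of the pipeline: the maps are produced once by a single inference pass, and only the cheap weighted sum changes as the user moves the denoising slider.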