Left-invariant evolutions of wavelet transforms on the Similitude Group
Enhancement of multiple-scale elongated structures in noisy image data is
relevant for many biomedical applications but commonly used PDE-based
enhancement techniques often fail at crossings in an image. To get an overview
of how an image is composed of local multiple-scale elongated structures, we
construct a multiple-scale orientation score, which is a continuous wavelet
transform on the similitude group, SIM(2). Our unitary transform maps the space
of images onto a reproducing kernel space defined on SIM(2), allowing us to
robustly relate Euclidean (and scaling) invariant operators on images to
left-invariant operators on the corresponding continuous wavelet transform.
Rather than relying on commonly used wavelet (soft-)thresholding techniques, we
employ the group structure in the wavelet domain to arrive at left-invariant
evolutions and flows (diffusions) for contextual, crossing-preserving
enhancement of multiple-scale elongated structures in noisy images. We present
experiments that display the benefits of our work compared to recent PDE
techniques acting directly on the images, and to our previous work on
left-invariant diffusions on orientation scores defined on the Euclidean motion
group.
Comment: 40 pages
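As a rough illustration of the orientation-score idea in this abstract, the following sketch stacks responses of rotated anisotropic filters into a function U(x, y, theta). It is a simplified, orientation-only stand-in: it uses plain Gabor filters and omits the scale dimension and the proper "cake" wavelets, so unlike the transform in the paper it is not unitary.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, sigma=3.0, freq=0.25, size=21):
    """Anisotropic Gabor filter elongated along orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # along the orientation
    yr = -x * np.sin(theta) + y * np.cos(theta)  # across the orientation
    envelope = np.exp(-(xr**2 + (2.0 * yr)**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * freq * yr)

def orientation_score(image, n_orientations=8):
    """U(x, y, theta): one filter-response slice per sampled orientation."""
    thetas = np.linspace(0.0, np.pi, n_orientations, endpoint=False)
    return np.stack([convolve(image, gabor_kernel(t)) for t in thetas],
                    axis=-1)

U = orientation_score(np.random.rand(32, 32))   # shape (32, 32, 8)
```

Operators applied per orientation slice of U, rather than to the image directly, are what allows crossing structures to be processed separately.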
Improving Fiber Alignment in HARDI by Combining Contextual PDE Flow with Constrained Spherical Deconvolution
We propose two strategies to improve the quality of tractography results
computed from diffusion weighted magnetic resonance imaging (DW-MRI) data. Both
methods are based on the same PDE framework, defined in the coupled space of
positions and orientations, associated with a stochastic process describing the
enhancement of elongated structures while preserving crossing structures. In
the first method we use the enhancement PDE for contextual regularization of a
fiber orientation distribution (FOD) that is obtained on individual voxels from
high angular resolution diffusion imaging (HARDI) data via constrained
spherical deconvolution (CSD). Thereby we improve the FOD as input for
subsequent tractography. Second, we introduce the fiber-to-bundle coherence
(FBC), a measure quantifying fiber alignment. The FBC is computed
from a tractography result using the same PDE framework and provides a
criterion for removing spurious fibers. We validate the proposed combination of
CSD and enhancement on phantom data and on human data acquired with different
scanning protocols. On the phantom data we find that the PDE enhancements
improve both local and global metrics of tractography results compared to CSD
without enhancements. On the human data we show that
the enhancements allow for a better reconstruction of crossing fiber bundles
and they reduce the variability of the tractography output with respect to the
acquisition parameters. Finally, we show that both the enhancement of the FODs
and the use of the FBC measure on the tractography improve the stability with
respect to different stochastic realizations of probabilistic tractography.
This is shown in a clinical application: the reconstruction of the optic
radiation for epilepsy surgery planning.
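The FBC idea above can be caricatured as a kernel-density score per fiber. The sketch below is a drastic simplification: it uses an isotropic Gaussian kernel on fiber-point positions only, whereas the measure in the paper works in the coupled space of positions and orientations with the kernel of the enhancement PDE.

```python
import numpy as np

def fiber_coherence(fibers, sigma=2.0):
    """Toy coherence score per fiber: the mean Gaussian kernel density of
    its points, estimated from the points of all other fibers. Fibers
    aligned with a bundle score high; spurious outliers score low."""
    scores = []
    for i, f in enumerate(fibers):
        others = np.vstack([g for j, g in enumerate(fibers) if j != i])
        # squared distances between this fiber's points and all other points
        d2 = ((f[:, None, :] - others[None, :, :]) ** 2).sum(axis=-1)
        scores.append(np.exp(-d2 / (2.0 * sigma**2)).mean())
    return np.array(scores)

# three parallel fibers form a bundle; a fourth stray fiber lies far away
bundle = [np.column_stack([np.arange(10.0), np.full(10, y)])
          for y in (0.0, 1.0, 2.0)]
stray = np.column_stack([np.arange(10.0), np.full(10, 50.0)])
scores = fiber_coherence(bundle + [stray])  # stray gets the lowest score
```

Thresholding such a score gives a simple criterion for removing spurious fibers, in the spirit of the FBC.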
A survey on fiber nonlinearity compensation for 400 Gbps and beyond optical communication systems
Optical communication systems represent the backbone of modern communication
networks. Since their deployment, different fiber technologies, such as
dispersion-shifted fibers and dispersion-compensating fibers, have been used to
deal with optical fiber impairments. In recent years, thanks to the introduction of
coherent detection based systems, fiber impairments can be mitigated using
digital signal processing (DSP) algorithms. Coherent systems are used in the
current 100 Gbps wavelength-division multiplexing (WDM) standard technology.
They increase spectral efficiency by using multi-level modulation formats, and
are combined with DSP techniques to combat linear fiber distortions. In
addition to linear impairments, next-generation 400 Gbps/1 Tbps WDM systems are
also more strongly affected by fiber nonlinearity due to the Kerr effect. At
high input power, the nonlinear effects become more pronounced and their
compensation is required to improve transmission
performance. Several approaches have been proposed to deal with the fiber
nonlinearity. In this paper, after a brief description of the Kerr-induced
nonlinear effects, a survey on the fiber nonlinearity compensation (NLC)
techniques is provided. We focus on the well-known NLC techniques and discuss
their performance, as well as their implementation and complexity. An extension
of the inter-subcarrier nonlinear interference canceler approach is also
proposed. A performance evaluation of the well-known NLC techniques and the
proposed approach is provided in the context of Nyquist and super-Nyquist
superchannel systems.
Comment: Accepted in IEEE Communications Surveys and Tutorials
Field Trial of a Flexible Real-time Software-defined GPU-based Optical Receiver
We introduce a flexible, software-defined real-time multi-modulation format
receiver implemented on an off-the-shelf general-purpose graphics processing
unit (GPU). The flexible receiver is able to process 2 GBaud 2-, 4-, 8-, and
16-ary pulse-amplitude modulation (PAM) signals as well as 1 GBaud 4-, 16- and
64-ary quadrature amplitude modulation (QAM) signals, with the latter detected
using a Kramers-Kronig (KK) coherent receiver. Experimental performance
evaluation is shown for the back-to-back configuration. In addition, using the JGN high-speed
R&D network testbed, performance is evaluated after transmission over 91 km
field-deployed optical fiber and reconfigurable optical add-drop multiplexers
(ROADMs).
Comment: Accepted for publication in the Journal of Lightwave Technology;
already available via JLT Early Access (see supplied DOI). This v2 of the
article is improved with respect to v1 after JLT peer review. It is a longer
journal version of the conference paper: S.P. van der Heide et al., "Real-time,
Software-Defined, GPU-Based Receiver Field Trial," ECOC 2020, paper We1E5, also
available via arXiv:2010.1433
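The Kramers-Kronig reconstruction behind the KK coherent receiver mentioned above can be sketched in a few lines: for a minimum-phase optical signal, the phase is recovered from the detected intensity alone via a Hilbert transform. This is a toy single-carrier illustration of the KK relation, not the paper's GPU implementation.

```python
import numpy as np
from scipy.signal import hilbert

def kk_receiver(intensity):
    """Kramers-Kronig field reconstruction: for a minimum-phase signal
    (strong carrier at the band edge plus a weaker single-sideband
    signal), the phase equals the Hilbert transform of half the
    log-intensity, so the full field is recovered from intensity alone."""
    phase = np.imag(hilbert(0.5 * np.log(intensity)))
    return np.sqrt(intensity) * np.exp(1j * phase)

# toy demo: CW carrier plus a weak, band-limited, single-sideband signal
n = 4096
rng = np.random.default_rng(0)
noise = 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
freqs = np.fft.fftfreq(n)
sig = np.fft.ifft(np.fft.fft(noise) * ((freqs >= 0) & (freqs < 0.1)))
field = 1.0 + sig                   # carrier dominates -> minimum phase
rec = kk_receiver(np.abs(field)**2)
err = np.mean(np.abs(rec - field)**2) / np.mean(np.abs(field)**2)
```

The normalized reconstruction error `err` is tiny as long as the carrier keeps the field minimum-phase, which is why a single photodiode plus DSP can stand in for a full coherent front end.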
High-ISO long-exposure image denoising based on quantitative blob characterization
Blob detection and image denoising are fundamental, sometimes related, tasks in computer vision. In this paper, we present a computational method to quantitatively measure blob characteristics using normalized unilateral second-order Gaussian kernels. This method suppresses non-blob structures while yielding a quantitative measurement of the position, prominence, and scale of blobs, which can facilitate the tasks of blob reconstruction and blob reduction. Subsequently, we propose a denoising scheme to address high-ISO long-exposure noise, which sometimes exhibits a blob-like spatial appearance, employing a blob reduction procedure as a cheap preprocessing step for conventional denoising methods. We apply the proposed denoising methods to real-world noisy images as well as standard images corrupted by real noise. The experimental results demonstrate the superiority of the proposed methods over state-of-the-art denoising methods.
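For context, classical scale-normalized Laplacian-of-Gaussian (LoG) blob detection, a standard relative of the unilateral second-order Gaussian kernels used in the paper, can be sketched as follows; the LoG here is an illustrative stand-in, not the paper's kernel.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def detect_blobs(image, sigmas=(1.0, 2.0, 4.0, 8.0), threshold=0.05):
    """Scale-normalized LoG blob detection: dark blobs give positive LoG
    responses; local maxima over space and scale mark blob centers,
    with the winning sigma estimating the blob's scale."""
    stack = np.stack([s**2 * gaussian_laplace(image, s) for s in sigmas])
    peaks = (stack == maximum_filter(stack, size=3)) & (stack > threshold)
    scale_idx, rows, cols = np.nonzero(peaks)
    return [(r, c, sigmas[i]) for i, r, c in zip(scale_idx, rows, cols)]

# toy image: one dark Gaussian blob of scale 4 on a bright background
img = np.ones((64, 64))
y, x = np.mgrid[:64, :64]
img -= 0.8 * np.exp(-((y - 32)**2 + (x - 20)**2) / (2 * 4.0**2))
blobs = detect_blobs(img)   # recovers position and scale of the blob
```

Reducing the detected blobs before running a conventional denoiser is the spirit of the preprocessing step proposed in the paper.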
A Simplified Crossing Fiber Model in Diffusion Weighted Imaging
Diffusion MRI (dMRI) is a vital source of imaging data for identifying the anatomical connections in the living human brain that form the substrate for information transfer between brain regions; dMRI can thus play a central role in our understanding of brain function. The quantitative modeling and analysis of dMRI data deduce the features of neural fibers at the voxel level, such as direction and density. The modeling methods that have been developed range from deterministic to probabilistic approaches. Currently, the Ball-and-Stick model serves as a widely implemented probabilistic approach in the tractography toolboxes of the popular FSL and FreeSurfer/TRACULA software packages. However, estimating the features of neural fibers is complex in the scenario of two crossing neural fibers, which occurs in a sizeable proportion of voxels within the brain. A Bayesian non-linear regression is adopted, composed of a mixture of multiple non-linear components, and such models can pose a computationally difficult statistical estimation problem. To make the Ball-and-Stick approach more feasible and accurate, we propose a simplified version of the model that reduces the dimensionality of the parameter space. The simplified model is vastly more efficient in terms of the computation time required to estimate the parameters of two crossing neural fibers through Bayesian simulation approaches. Moreover, its performance is comparable to or better than that of existing models in terms of bias and estimation variance.
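The standard Ball-and-Stick forward model this abstract builds on combines an isotropic "ball" compartment with one anisotropic "stick" per fiber. A sketch for two crossing fibers follows; the diffusivity, b-values, and volume fractions are illustrative, not taken from the paper.

```python
import numpy as np

def ball_and_stick(bvals, bvecs, s0, d, fractions, sticks):
    """Ball-and-Stick signal for gradient directions g and fibers v_i:
        S = S0 [ (1 - sum f_i) exp(-b d) + sum_i f_i exp(-b d (g.v_i)^2) ]
    The ball attenuates isotropically; each stick attenuates only along
    its own direction, so crossing fibers leave distinct signatures."""
    sticks = np.asarray(sticks, dtype=float)
    sticks /= np.linalg.norm(sticks, axis=1, keepdims=True)
    signal = (1.0 - np.sum(fractions)) * np.exp(-bvals * d)   # ball
    for f, v in zip(fractions, sticks):
        proj2 = (bvecs @ v) ** 2          # squared cosine between g and v
        signal = signal + f * np.exp(-bvals * d * proj2)      # one stick
    return s0 * signal

# two fibers crossing at 90 degrees, probed along the three axes
bvals = np.array([1000.0, 1000.0, 1000.0])          # s/mm^2
bvecs = np.eye(3)
S = ball_and_stick(bvals, bvecs, s0=1.0, d=1.7e-3,
                   fractions=[0.4, 0.3], sticks=[[1, 0, 0], [0, 1, 0]])
```

The gradient perpendicular to both fibers sees the least attenuation, which is exactly the asymmetry that parameter estimation, whether full or simplified, must invert.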