8,022 research outputs found
Spherical deconvolution of multichannel diffusion MRI data with non-Gaussian noise models and spatial regularization
Spherical deconvolution (SD) methods are widely used to estimate the
intra-voxel white-matter fiber orientations from diffusion MRI data. However,
while some of these methods assume a zero-mean Gaussian distribution for the
underlying noise, its real distribution is known to be non-Gaussian and to
depend on the methodology used to combine multichannel signals. Indeed, the two
prevailing methods for multichannel signal combination lead to Rician and
noncentral Chi noise distributions. Here we develop a Robust and Unbiased
Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with
realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to
Rician and noncentral Chi likelihood models. To quantify the benefits of using
proper noise models, RUMBA-SD was compared with dRL-SD, a well-established
method based on the RL algorithm for Gaussian noise. Another aim of the study
was to quantify the impact of including a total variation (TV) spatial
regularization term in the estimation framework. To do this, we developed TV
spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The
evaluation was performed by comparing various quality metrics on 132
three-dimensional synthetic phantoms involving different inter-fiber angles and
volume fractions, which were contaminated with noise mimicking patterns
generated by data processing in multichannel scanners. The results demonstrate
that the inclusion of proper likelihood models leads to an increased ability to
resolve fiber crossings with smaller inter-fiber angles and to better detect
non-dominant fibers. The inclusion of TV regularization dramatically improved
the resolution power of both techniques. The above findings were also verified
in brain data
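The core of both RUMBA-SD and dRL-SD is a Richardson-Lucy-style multiplicative update. As a rough illustration only (not the paper's algorithm: no damping, no TV term, and the classical RL likelihood rather than the Rician or noncentral-Chi models), a minimal nonnegative deconvolution with a hypothetical kernel matrix `H` standing in for the spherical-convolution operator might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.random((40, 20))              # hypothetical convolution kernel matrix
x_true = np.zeros(20)
x_true[[3, 11]] = [0.7, 0.3]          # two fiber compartments (illustrative)
y = H @ x_true                        # noiseless "measurements"

x = np.ones(20) / 20                  # flat nonnegative initialization
for _ in range(200):                  # multiplicative RL updates preserve x >= 0
    ratio = y / (H @ x + 1e-12)
    x *= (H.T @ ratio) / (H.T @ np.ones(40))
```

The multiplicative form is what keeps the estimated fiber-orientation weights nonnegative without an explicit constraint; the paper's contribution is replacing the likelihood behind this update and adding spatial TV coupling across voxels.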
Advanced signal processing methods in dynamic contrast enhanced magnetic resonance imaging
This dissertation describes quantitative dynamic contrast enhanced magnetic resonance imaging (DCE-MRI), which is a powerful tool in diagnostics, mainly in oncology. After a time series of T1-weighted images recording the contrast-agent distribution in the body has been acquired, the data-processing phase follows; it is presented step by step in this dissertation. The theoretical background in physiological and MRI-acquisition modeling is described, together with the estimation process leading to parametric maps describing perfusion and microcirculation properties of the investigated tissue on a voxel-by-voxel basis. The dissertation comprises this theoretical analysis and a set of publications representing the author's particular contributions to DCE-MRI.
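The voxel-wise estimation step described above can be sketched as fitting a pharmacokinetic model to a tissue concentration curve. The standard Tofts model, Ct(t) = Ktrans · (Cp ⊛ e^(−kep·t)), is used here with a synthetic arterial input function and a simple grid search; all numerical values are illustrative assumptions, not the dissertation's data:

```python
import numpy as np

t = np.arange(0.0, 300.0, 1.0)                   # time axis, dt = 1 s
cp = 5.0 * (t / 60.0) * np.exp(-t / 60.0)        # hypothetical arterial input (mM)
ktrans_true, kep_true = 0.25 / 60.0, 0.8 / 60.0  # "true" voxel parameters (1/s)

def tofts(ktrans, kep):
    # Tofts model: Ct = Ktrans * (Cp convolved with exp(-kep * t)), dt = 1 s
    return ktrans * np.convolve(cp, np.exp(-kep * t))[: len(t)]

ct = tofts(ktrans_true, kep_true)                # noiseless tissue curve

# voxel-wise estimation by exhaustive grid search over (Ktrans, kep)
grid_kt = np.linspace(0.05, 0.5, 46) / 60.0      # 1/min values converted to 1/s
grid_ke = np.linspace(0.2, 2.0, 46) / 60.0
best = min((float(np.sum((tofts(kt, ke) - ct) ** 2)), kt, ke)
           for kt in grid_kt for ke in grid_ke)
```

In practice a nonlinear least-squares solver replaces the grid search, and the fit is repeated per voxel to produce the parametric maps the abstract mentions.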
Efficient calculation of sensor utility and sensor removal in wireless sensor networks for adaptive signal estimation and beamforming
Wireless sensor networks are often deployed over a large area of interest, and therefore the quality of the sensor signals may vary significantly across the different sensors. In this case, it is useful to have a measure for the importance, or the so-called "utility", of each sensor, e.g., for sensor subset selection, resource allocation or topology selection. In this paper, we consider the efficient calculation of sensor utility measures for four different signal estimation or beamforming algorithms in an adaptive context. We define the utility of a sensor as the increase in cost (e.g., mean-squared error) when the sensor is removed from the estimation procedure. Since each possible sensor removal corresponds to a new estimation problem (involving fewer sensors), calculating the sensor utilities would require continuously updating as many additional signal estimators as there are sensors, increasing computational complexity and memory usage by a corresponding factor. However, we derive formulas to efficiently calculate all sensor utilities with hardly any increase in memory usage and computational complexity compared to the signal estimation algorithm already in place. When applied in adaptive signal estimation algorithms, this allows for on-line tracking of all the sensor utilities at almost no additional cost. Furthermore, we derive efficient formulas for sensor removal, i.e., for updating the signal estimator coefficients when a sensor is removed, e.g., due to a failure in the wireless link or when its utility is too low. We provide a complexity evaluation of the derived formulas, and demonstrate the significant reduction in computational complexity compared to straightforward implementations
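The utility definition above can be checked numerically: a brute-force loop re-solves the reduced LMMSE problem once per removed sensor, while a closed-form expression of the kind such work derives reads all utilities off the full solution. For the plain LMMSE case the identity is U_k = w_k² / (R⁻¹)_{kk}; the covariance and correlation values below are synthetic, and the paper's actual formulas cover several estimators:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 8                                      # number of sensors (illustrative)
A = rng.standard_normal((M, M))
R = A @ A.T + M * np.eye(M)                # sensor covariance (well-conditioned PSD)
r = rng.standard_normal(M)                 # correlation with the desired signal
w = np.linalg.solve(R, r)                  # LMMSE estimator coefficients

def neg_gain(idx):
    # estimation cost (up to the constant signal variance) using sensors idx
    Ri, ri = R[np.ix_(idx, idx)], r[idx]
    return -ri @ np.linalg.solve(Ri, ri)

P = np.linalg.inv(R)
full = neg_gain(list(range(M)))
brute = np.array([neg_gain([i for i in range(M) if i != k]) - full
                  for k in range(M)])      # utility by re-solving each reduced problem
closed = w ** 2 / np.diag(P)               # same utilities from the full solution only
```

The brute-force loop costs one matrix solve per sensor, whereas the closed form reuses quantities an adaptive estimator already maintains, which is the source of the claimed "almost no additional cost".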
Sub-Nyquist Sampling: Bridging Theory and Practice
Sampling theory encompasses all aspects related to the conversion of
continuous-time signals to discrete streams of numbers. The famous
Shannon-Nyquist theorem has become a landmark in the development of digital
signal processing. In modern applications, an increasing number of functions
is being pushed forward to sophisticated software algorithms, leaving only
those delicate finely-tuned tasks for the circuit level.
In this paper, we review sampling strategies which target reduction of the
ADC rate below Nyquist. Our survey covers classic works from the early 1950s
through recent publications from the past several years.
The prime focus is bridging theory and practice, that is to pinpoint the
potential of sub-Nyquist strategies to emerge from the math to the hardware. In
that spirit, we integrate contemporary theoretical viewpoints, which study
signal modeling in a union of subspaces, together with a taste of practical
aspects, namely how the avant-garde modalities boil down to concrete signal
processing systems. Our hope is that this presentation style will attract the
interest of both researchers and engineers, promoting the
sub-Nyquist premise toward practical applications and encouraging further
research into this exciting new frontier.
Comment: 48 pages, 18 figures, to appear in IEEE Signal Processing Magazine
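A toy illustration of why sampling below Nyquist can still be informative when prior signal structure is known: a tone sampled below twice its frequency aliases to a predictable low frequency, from which the original frequency is recoverable if the occupied band is known a priori. The frequencies here are arbitrary:

```python
import numpy as np

f0, fs, n = 90.0, 100.0, 200        # 90 Hz tone sampled at only 100 Hz (< 2*f0)
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f0 * t)      # sub-Nyquist samples

spec = np.abs(np.fft.rfft(x))
f_alias = np.fft.rfftfreq(n, 1 / fs)[np.argmax(spec)]
# the tone appears at |f0 - fs| = 10 Hz; knowing the band, f0 = fs + 10 Hz
```

This is the simplest instance of the structured-signal idea the survey develops: the union-of-subspaces models it reviews generalize "a single known band" to far richer priors while keeping the ADC rate below Nyquist.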
Estimation of physical variables from multichannel remotely sensed imagery using a neural network: Application to rainfall estimation
Satellite-based remotely sensed data have the potential to provide hydrologically relevant information about spatially and temporally varying physical variables. A methodology for estimating such variables from multichannel remotely sensed data is presented; the approach is based on a modified counterpropagation neural network (MCPN) and is both effective and efficient at building complex nonlinear input-output function mappings from large amounts of data. An application to high-resolution estimation of the spatial and temporal variation of surface rainfall using geostationary satellite infrared and visible imagery is presented. Test results also indicate that spatially and temporally sparse ground-based observations can be assimilated via an adaptive implementation of the MCPN method, thereby allowing on-line improvement of the estimates
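The counterpropagation idea combines a competitive (Kohonen) prototype layer with an outstar (Grossberg) output layer. A minimal sketch of that two-stage structure on synthetic data follows; it is not the paper's modified MCPN, and the two-channel features and target function are invented stand-ins for the satellite imagery and rain rates:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((500, 2))                   # hypothetical 2-channel image features
y = 3.0 * X[:, 0] + X[:, 1] ** 2           # assumed nonlinear target (e.g., rain rate)

K = 25
proto = rng.random((K, 2))                 # competitive (Kohonen) layer prototypes
for _ in range(5):                         # crude winner-take-all training
    for xi in X:
        k = np.argmin(((proto - xi) ** 2).sum(1))
        proto[k] += 0.1 * (xi - proto[k])  # move winning prototype toward sample

# Grossberg (outstar) layer: per-prototype mean of the training targets
win = np.argmin(((X[:, None] - proto) ** 2).sum(2), 1)
out = np.array([y[win == k].mean() if (win == k).any() else 0.0 for k in range(K)])

def predict(xi):
    # piecewise-constant input-output mapping: winner's stored output
    return out[np.argmin(((proto - xi) ** 2).sum(1))]
```

The resulting mapping is piecewise constant over the prototype cells, which is why counterpropagation networks train quickly on large datasets; the paper's modification refines this basic scheme.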
Deep Signal Recovery with One-Bit Quantization
Machine learning, and more specifically deep learning, have shown remarkable
performance in sensing, communications, and inference. In this paper, we
consider the application of the deep unfolding technique in the problem of
signal reconstruction from its one-bit noisy measurements. Namely, we propose a
model-based machine learning method and unfold the iterations of an inference
optimization algorithm into the layers of a deep neural network for one-bit
signal recovery. The resulting network, which we refer to as DeepRec, can
efficiently handle the recovery of high-dimensional signals from acquired
one-bit noisy measurements. The proposed method results in an improvement in
accuracy and computational efficiency with respect to the original framework as
shown through numerical analysis.Comment: This paper has been submitted to the 44th International Conference on
Acoustics, Speech, and Signal Processing (ICASSP 2019
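Deep unfolding turns a fixed number of iterations of an optimization algorithm into the layers of a network with learnable parameters. The sketch below shows only the underlying unrolled iteration (plain gradient ascent on a logistic surrogate for the one-bit likelihood, with a fixed step size where DeepRec would learn per-layer parameters); dimensions and data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, L = 20, 400, 30                      # signal dim, one-bit measurements, "layers"
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)
b = np.sign(A @ x_true)                    # noiseless one-bit observations

x = np.zeros(n)
step = 0.05                                # fixed step; an unfolded net learns these
for _ in range(L):                         # unrolled iterations = network depth
    s = 1.0 / (1.0 + np.exp(-b * (A @ x)))            # sigmoid of margin b_i * a_i^T x
    x = x + step * (A.T @ (b * (1.0 - s))) / m        # logistic log-likelihood ascent
    x /= max(np.linalg.norm(x), 1.0)       # signal scale is unobservable from signs

corr = (x @ x_true) / np.linalg.norm(x)    # direction recovery quality
```

Unfolding replaces the hand-tuned `step` (and possibly the nonlinearity) with parameters trained end-to-end, which is where the reported accuracy and efficiency gains come from.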
Gamma ray fluorescence for in situ evaluation of ore in Witwatersrand gold mines
A Thesis Submitted to the Faculty of Science
University of the Witwatersrand, Johannesburg
for the Degree of Doctor of Philosophy
Johannesburg, 1979
A system for quantitative in situ evaluation of ore in
Witwatersrand gold mines was researched and subsequently
developed.
The principle of measurement is based on the excitation
of gold K X-rays in rock face samples by the 88 keV gamma
radiation from a Cadmium-109 radioisotope source. The X-rays
and scattered radiation from the rock matrix are detected by
a hyperpure germanium detector cooled by liquid nitrogen in
a portable probe. In the fluorescence spectrum the intensity
ratio of the gold Kβ peaks to their immediate scattered
background is evaluated and quantitatively converted in the
portable analyser to area concentration units.
All aspects of the physical and instrumental measurement
had to be investigated to arrive at a system capable of
quantitative evaluation of trace concentrations in stope
face ore samples. The parameters of efficiency of excitation
of the gold K X-rays, and the energy distribution after
scattering from the rock matrix at different angles were
investigated from basic principles to determine an optimum
source-sample-detector geometry which would allow
quantitative evaluation of homogeneous ore concentrations.
For edge-on measurement of rough-surfaced thin-layer
deposits, a method of controlling the measurement geometry
through ratemeter feedback was developed to allow conversion
of mass concentration values to units of area concentration.
The parameters of spectrum evaluation were investigated from
fundamental principles to allow quantitative assessment of
different methods of peak evaluation for optimization of the
method as a whole. The basic concepts of random signal
processing times were developed together with new concepts
of pileup parameters to allow a quantitative description of
the data acquisition rate of a complete analog pulse
processing system.
With this foundation a practical measuring geometry and
optimum values for signal processing time parameters, for
detector size and for discriminator positions for spectrum
evaluation could be determined.
Parallel with the derivation of optimum measurement
parameters went the development of instruments, their field
testing and appraisal of the method. The underground results
obtained with prototype versions of the gamma ray
fluorescence analyser were in all instances found to have a
highly significant correlation with those obtained from the
same locations by conventional chip or bulk sampling and
fire assay.
The development of the gamma ray fluorescence method has
shown the potential of the method to serve as an ore
valuation tool and to assist in the geological identification
of strata in Witwatersrand gold mines