27 research outputs found

    Flexible methods for blind separation of complex signals

    One of the main issues in Blind Source Separation (BSS) performed with a neural network approach is the choice of the nonlinear activation function (AF). In fact, if the shape of the activation function is chosen as the cumulative distribution function (c.d.f.) of the original source, the problem is solved. To this end, this thesis introduces a flexible approach in which the shape of the activation functions is adapted during the learning process using so-called “spline functions”. The problem becomes more complicated in the case of separation of complex sources, where one faces the dichotomy between analyticity and boundedness of complex activation functions. This is solved by introducing the “splitting function” model as the activation function. The “splitting function” is a pair of spline functions that model the real and the imaginary part of the complex activation function, one depending on the real variable and one on the imaginary variable. A more realistic model is the “generalized splitting function”, formed by a pair of bi-dimensional functions (surfaces), one for the real and one for the imaginary part of the complex function, each depending on both the real and the imaginary part of the complex variable. Unfortunately, the linear environment is unrealistic in many practical applications, so the BSS problem must be extended to the nonlinear environment: in this case both the activation function and the nonlinear distorting function are realized by “splitting functions” built from spline functions. Complex, instantaneous separation in linear and nonlinear environments allows a complex-valued extension of the well-known INFOMAX algorithm to be applied in several practical situations, such as convolutive mixtures, fMRI signal analysis and bandpass signal transmission. In addition, advanced characteristics of the proposed approach are introduced and described in depth. First, it is shown that splines are universal nonlinear functions for the BSS problem: they are able to perform separation in any case. It is then shown that the “splitting solution” allows the algorithm to achieve phase recovery, whereas usually there is a phase ambiguity. Finally, a Cramér-Rao lower bound for ICA is discussed. Several experimental results, evaluated with different objective indexes, show the effectiveness of the proposed approaches.
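
    A minimal sketch may help make the splitting model concrete. The Python snippet below (assuming NumPy and SciPy; the knot grid, tanh-shaped control points, learning rate and single update step are illustrative assumptions, not the thesis implementation) builds a pair of cubic splines for the real and imaginary branches and uses the resulting activation in one natural-gradient, INFOMAX-style update on a complex instantaneous mixture.

```python
# Sketch (not the thesis code): a "splitting" activation for complex-valued BSS,
# where the real and imaginary parts of the output come from two independent
# real-valued spline nonlinearities. The spline control points would normally be
# adapted during learning; here they are fixed to a tanh-like shape.
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical knot grid and control points (assumed, not from the thesis)
knots = np.linspace(-3.0, 3.0, 15)
spline_re = CubicSpline(knots, np.tanh(knots))   # real branch
spline_im = CubicSpline(knots, np.tanh(knots))   # imaginary branch

def splitting_activation(z: np.ndarray) -> np.ndarray:
    """Splitting model: g(z) = g_R(Re z) + j * g_I(Im z).

    Each branch is a bounded real spline, which sidesteps the conflict between
    analyticity and boundedness of complex activation functions.
    """
    return spline_re(np.clip(z.real, -3, 3)) + 1j * spline_im(np.clip(z.imag, -3, 3))

# Toy complex instantaneous mixture x = A s
rng = np.random.default_rng(0)
s = rng.laplace(size=(2, 1000)) + 1j * rng.laplace(size=(2, 1000))
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
x = A @ s

W = np.eye(2, dtype=complex)   # separating matrix
mu = 1e-3                      # learning rate (assumed)
y = W @ x
g = splitting_activation(y)
# One natural-gradient, INFOMAX-style batch update (complex form)
W += mu * (np.eye(2) - (g @ y.conj().T) / y.shape[1]) @ W
```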

    Structure Learning in Audio

    Multiresolution image models and estimation techniques

    A New Efficient Expression for the Conditional Expectation of the Blind Adaptive Deconvolution Problem Valid for the Entire Range of Signal-to-Noise Ratio

    In the literature, one can find several blind adaptive deconvolution algorithms based on closed-form approximated expressions for the conditional expectation (the expectation of the source input given the equalized, or deconvolved, output) involving the maximum entropy density approximation technique. The main drawback of these algorithms is the heavy computational burden involved in calculating the expression for the conditional expectation. In addition, none of these techniques is applicable for signal-to-noise ratios lower than 7 dB. In this paper, I propose a new closed-form approximated expression for the conditional expectation, based on a previously obtained expression in which the equalized output probability density function is calculated via the approximated input probability density function, which is itself approximated with the maximum entropy density approximation technique. The newly proposed expression has a reduced computational burden compared with the previously obtained expressions for the conditional expectation based on the maximum entropy approximation technique. The simulation results indicate that the newly proposed algorithm, with the newly proposed Lagrange multipliers, is suitable for signal-to-noise ratio values down to 0 dB and achieves improved equalization performance, in terms of residual inter-symbol interference, compared to the previously obtained algorithms based on the conditional expectation obtained via the maximum entropy technique.
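
    As a point of reference for the structure of such algorithms, the sketch below shows a Bussgang-type blind adaptive equalizer driven by a conditional-expectation nonlinearity. For illustration the source is BPSK and the residual error is assumed Gaussian, which gives the classical E[s|z] = tanh(z/sigma^2); the paper's contribution is a different, maximum-entropy-based closed-form approximation of this conditional expectation. The channel, step size and noise variance below are assumptions, not values from the paper.

```python
# Sketch: blind adaptive equalization with a conditional-expectation nonlinearity.
import numpy as np

rng = np.random.default_rng(1)
N = 20_000
s = rng.choice([-1.0, 1.0], size=N)                 # BPSK source
h = np.array([0.4, 1.0, -0.3, 0.1])                 # unknown channel (assumed)
x = np.convolve(s, h, mode="full")[:N]
x += 0.1 * rng.normal(size=N)                       # channel noise

L = 21                                              # equalizer length
c = np.zeros(L); c[L // 2] = 1.0                    # center-spike initialization
mu = 5e-4                                           # step size (assumed)
sigma2 = 0.5                                        # residual-noise variance (assumed)

for n in range(L, N):
    xv = x[n - L + 1:n + 1][::-1]                   # regressor (most recent first)
    z = c @ xv                                      # equalized output
    s_hat = np.tanh(z / sigma2)                     # conditional expectation E[s | z]
    c += mu * (s_hat - z) * xv                      # Bussgang-type blind update

# Residual ISI of the combined channel-equalizer response (lower is better)
t = np.convolve(h, c)
isi = (np.sum(t**2) - np.max(t**2)) / np.max(t**2)
print(f"residual ISI: {10 * np.log10(isi):.1f} dB")
```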

    A Multi-scale Stochastic Filter Based Approach to Inverse Scattering for 3D Ultrasound Soft Tissue Characterization

    The goal of this research is to achieve accurate characterization of multi-layered soft tissues in three dimensions using focused ultrasound. The characterization of the acoustic parameters of each tissue layer is formulated as recursive processes of forward and inverse scattering. Forward scattering deals with the modeling of focused ultrasound wave propagation in multi-layered tissues and the computation of the focused wave amplitudes in the tissues, based on the acoustic parameters of the tissue as generated by inverse scattering. The model mapping the tissue acoustic parameters to focused waves is highly nonlinear and stochastic. In addition, solving (or inverting) the model to obtain the tissue acoustic parameters is an ill-posed problem. Therefore, a nonlinear stochastic inverse scattering method is proposed such that no linearization or mathematical inversion of the model is required. Inverse scattering aims to estimate the tissue acoustic parameters based on the forward scattering model and ultrasound measurements of the tissues. A multi-scale stochastic filter (MSF) is proposed to perform inverse scattering. MSF generates a set of tissue acoustic parameters, which are then mapped into focused wave amplitudes in the multi-layered tissues by forward scattering. The tissue acoustic parameters are weighted by comparing their focused wave amplitudes to the actual ultrasound measurements. The weighted parameters are used to estimate a weighted Gaussian mixture as the posterior probability density function (PDF) of the parameters. This PDF is optimized to achieve minimum estimation error variance in the sense of the posterior Cramér-Rao bound. The optimized posterior PDF is used to produce minimum mean-square-error estimates of the tissue acoustic parameters. As a result, both the estimation error and the uncertainty of the parameters are minimized. PDF optimization is formulated within a novel multi-scale PDF analysis framework. This framework is founded on the analogy between PDFs and analog (or digital) signals: both represent characteristics of variables in their respective domains, except that constraints are imposed on PDFs. It is therefore reasonable to treat a PDF as a signal subject to amplitude constraints and to apply signal processing techniques to analyze it. The multi-scale PDF analysis framework recursively decomposes an arbitrary PDF from its fine to coarse scales. The recursive decompositions are designed to ensure that requirements such as the PDF constraints, zero-phase shift and non-creation of artifacts are satisfied. The relationship between the PDFs at consecutive scales is derived so that the PDF optimization process can recursively reconstruct the posterior PDF from its coarse to fine scales. At each scale, PDF reconstruction aims to reduce the variances of the posterior PDF Gaussian components, thereby increasing confidence in the estimate. The overall posterior PDF variance reduction is guided by the posterior Cramér-Rao bound. A series of experiments is conducted to investigate the performance of the proposed method on ultrasound multi-layered soft tissue characterization. Multi-layered tissue phantoms that emulate ocular components of the eye are fabricated as test subjects. Experimental results confirm that the proposed MSF inverse scattering approach is well suited for three-dimensional ultrasound tissue characterization. In addition, performance comparisons between MSF and a state-of-the-art nonlinear stochastic filter are conducted. Results show that MSF is more accurate and less computationally intensive than the state-of-the-art filter.
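
    The sample-weighting idea at the core of such a filter can be sketched as follows (Python; the forward model, prior, and noise levels are placeholders, not the thesis model): draw candidate acoustic parameters, map them through a forward scattering model, weight them against the measured wave amplitudes, and fit a weighted Gaussian as one component of the posterior PDF.

```python
# Sketch of the weighting step: candidate parameters -> forward model -> weights
# against the measurement -> weighted Gaussian posterior component.
import numpy as np

rng = np.random.default_rng(2)

def forward_model(theta: np.ndarray) -> np.ndarray:
    """Placeholder forward scattering: maps acoustic parameters (e.g. speed of
    sound, attenuation) to predicted focused-wave amplitudes at 3 depths."""
    c, alpha = theta
    depths = np.array([1.0, 2.0, 3.0])
    return np.exp(-alpha * depths) * np.cos(2 * np.pi * depths / c)

theta_true = np.array([1.54, 0.35])                  # assumed "ground truth"
measurement = forward_model(theta_true) + 0.01 * rng.normal(size=3)

# 1. Draw candidate parameters from the current belief (assumed Gaussian prior)
n_samples = 5_000
samples = rng.normal(loc=[1.5, 0.3], scale=[0.05, 0.1], size=(n_samples, 2))

# 2. Weight each candidate by the likelihood of the measurement
preds = np.array([forward_model(t) for t in samples])
resid = preds - measurement
log_w = -0.5 * np.sum(resid**2, axis=1) / 0.01**2
w = np.exp(log_w - log_w.max())
w /= w.sum()

# 3. Fit a weighted Gaussian to the weighted samples (posterior approximation)
mean = w @ samples
cov = (samples - mean).T @ ((samples - mean) * w[:, None])
print("posterior mean:", mean)
print("posterior std :", np.sqrt(np.diag(cov)))
```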

    On noise, uncertainty and inference for computational diffusion MRI

    Diffusion Magnetic Resonance Imaging (dMRI) has revolutionised the way brain microstructure and connectivity can be studied. Despite its unique potential in mapping the whole brain, biophysical properties are inferred from measurements rather than being directly observed. This indirect mapping from noisy data creates challenges and introduces uncertainty in the estimated properties. Hence, dMRI frameworks capable of dealing with noise and of quantifying uncertainty are of great importance and are the topic of this thesis. First, we look into approaches for reducing uncertainty by denoising the dMRI signal. Thermal noise can have detrimental effects for modalities where the information resides in the signal attenuation, such as dMRI, which has inherently low-SNR data. We highlight the dual effect of noise, which both increases variance and introduces bias. We then design a framework for evaluating denoising approaches in a principled manner. By setting objective criteria based on what a well-behaved denoising algorithm should offer, we provide a bespoke dataset and a set of evaluations. We demonstrate that common magnitude-based denoising approaches usually reduce noise-related variance in the signal, but do not address the bias effects introduced by the noise floor. Our framework also allows us to better characterise scenarios where denoising can be beneficial (e.g. when performed in the complex domain) and can open new opportunities, such as pushing spatio-temporal resolution boundaries. Subsequently, we look into approaches for mapping uncertainty and design two inference frameworks for dMRI models, one using classical Bayesian methods and another using more recent data-driven algorithms. In the first approach, we build upon the univariate random-walk Metropolis-Hastings MCMC, an extensively used method for sampling from the posterior distribution of model parameters given the data. We devise an efficient adaptive multivariate MCMC scheme, relying on the assumption that groups of model parameters can be jointly estimated if a proper covariance matrix is defined. In doing so, our algorithm increases the sampling efficiency while preserving the accuracy and precision of the estimates. We show results using both synthetic and in-vivo dMRI data. In the second approach, we resort to Simulation-Based Inference (SBI), a data-driven approach that avoids the need for iterative model inversions. This is achieved by using neural density estimators to learn the inverse mapping from the forward generative process (simulations) to the parameters of interest that have generated those simulations. Addressing the problem via learning approaches offers the opportunity to achieve inference amortisation, boosting efficiency by avoiding the need to repeat the inference process for each new unseen dataset. It also allows inversion of forward processes (i.e. a series of processing steps) rather than only models. We explore different neural network architectures to perform conditional density estimation of the posterior distribution of parameters. Results and comparisons against MCMC suggest speed-ups of 2-3 orders of magnitude in the inference process while preserving the accuracy of the estimates.
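
    For context, the sketch below implements a plain multivariate random-walk Metropolis-Hastings sampler with a covariance-shaped joint proposal, the kind of scheme the adaptive MCMC work builds on. The toy correlated-Gaussian target and the simple Haario-style adaptation rule are stand-ins, not the thesis's algorithm or a dMRI model.

```python
# Sketch: multivariate random-walk Metropolis-Hastings with an adapted
# proposal covariance, sampling a toy 2D correlated-Gaussian "posterior".
import numpy as np

rng = np.random.default_rng(3)

def log_post(theta: np.ndarray) -> float:
    """Toy correlated Gaussian log-posterior standing in for a dMRI model fit."""
    prec = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
    return -0.5 * theta @ prec @ theta

n_iter, dim = 20_000, 2
chain = np.zeros((n_iter, dim))
theta = np.zeros(dim)
lp = log_post(theta)
cov = 0.1 * np.eye(dim)                          # initial proposal covariance
accepted = 0

for i in range(n_iter):
    prop = rng.multivariate_normal(theta, cov)   # joint (multivariate) proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:      # MH acceptance test
        theta, lp = prop, lp_prop
        accepted += 1
    chain[i] = theta
    # Adapt the proposal covariance from the chain history (burn-in phase only)
    if 500 < i < n_iter // 2 and i % 100 == 0:
        cov = 2.38**2 / dim * np.cov(chain[:i].T) + 1e-6 * np.eye(dim)

print(f"acceptance rate: {accepted / n_iter:.2f}")
print("posterior mean estimate:", chain[n_iter // 2:].mean(axis=0))
```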
