
    Joint Total Variation ESTATICS for Robust Multi-Parameter Mapping

    Quantitative magnetic resonance imaging (qMRI) derives tissue-specific parameters -- such as the apparent transverse relaxation rate R2*, the longitudinal relaxation rate R1 and the magnetisation transfer saturation -- that can be compared across sites and scanners and carry important information about the underlying microstructure. The multi-parameter mapping (MPM) protocol takes advantage of multi-echo acquisitions with variable flip angles to extract these parameters in a clinically acceptable scan time. In this context, ESTATICS performs a joint loglinear fit of multiple echo series to extract R2* and multiple extrapolated intercepts, thereby improving robustness to motion and decreasing the variance of the estimators. In this paper, we extend this model in two ways: (1) by introducing a joint total variation (JTV) prior on the intercepts and decay, and (2) by deriving a nonlinear maximum a posteriori estimate. We evaluated the proposed algorithm by predicting left-out echoes in a rich single-subject dataset. In this validation, we outperformed other state-of-the-art methods and additionally showed that the proposed approach greatly reduces the variance of the estimated maps without introducing bias. Comment: 11 pages, 2 figures, 1 table, conference paper, accepted at MICCAI 2020
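
    For orientation, the underlying ESTATICS model fits all echo trains jointly with a single R2* decay and one extrapolated intercept per contrast: log S_c(TE) = log(intercept_c) - TE * R2*. A minimal, unregularized per-voxel version of that log-linear fit could look like the sketch below; the paper's actual contributions (the JTV prior and the nonlinear MAP estimation) are not shown, and the function and argument names are illustrative.

        import numpy as np

        def estatics_loglinear(echo_times, signals):
            """Joint log-linear fit sharing one R2* decay across contrasts.

            echo_times : list of 1-D arrays of echo times (one per contrast)
            signals    : list of 1-D arrays of echo magnitudes (same layout)
            Returns the per-contrast log-intercepts and the shared R2*.
            """
            n_contrasts = len(signals)
            y = np.concatenate([np.log(s) for s in signals])
            X = np.zeros((len(y), n_contrasts + 1))
            row = 0
            for c, te in enumerate(echo_times):
                for t in te:
                    X[row, c] = 1.0   # contrast-specific intercept column
                    X[row, -1] = -t   # shared decay column: slope gives R2*
                    row += 1
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return beta[:-1], beta[-1]  # log-intercepts, R2* estimate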

    Combined MIMO adaptive and decentralized controllers for broadband active noise and vibration control

    Recent implementations of multiple-input multiple-output adaptive controllers for reduction of broadband noise and vibrations provide considerably improved performance over traditional adaptive algorithms. The most significant performance improvements are in terms of speed of convergence, the amount of reduction, and stability of the algorithm. Nevertheless, if the error in the model of the relevant transfer functions becomes too large then the system may become unstable or lose performance. On-line adaptation of the model is possible in principle but, for rapid changes in the model, necessitates a large amount of additional noise to be injected into the system. It has been known for decades that a combination of high-authority control (HAC) and low-authority control (LAC) could lead to improvements with respect to parametric uncertainties and unmodeled dynamics. In this paper, a full digital implementation of such a control system is presented in which the HAC (adaptive MIMO control) is implemented on a CPU and the LAC (decentralized control) is implemented on a high-speed Field Programmable Gate Array. Experimental results are given which demonstrate that the HAC/LAC combination leads to performance advantages in terms of stabilization under parametric uncertainties and reduction of the error signal.
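
    The HAC layer here is an adaptive MIMO controller; a common adaptive core for broadband noise and vibration control is the filtered-x LMS update, sketched below for a single channel. This is only an illustrative model of that building block (signal names and parameters are hypothetical), not the paper's HAC/LAC implementation, which additionally runs a decentralized LAC loop on an FPGA.

        import numpy as np

        def fxlms(x, d, s_hat, n_taps=64, mu=1e-3):
            """Single-channel filtered-x LMS sketch.
            x: reference signal, d: primary disturbance at the error sensor,
            s_hat: FIR model of the secondary path. Returns the residual."""
            w = np.zeros(n_taps)                  # adaptive controller taps
            x_hist = np.zeros(n_taps)             # reference history (control output)
            xf = np.convolve(x, s_hat)[:len(x)]   # reference filtered through s_hat
            xf_hist = np.zeros(n_taps)            # filtered-reference history (update)
            u_hist = np.zeros(len(s_hat))         # control-output history (plant model)
            e = np.zeros(len(x))
            for n in range(len(x)):
                x_hist = np.roll(x_hist, 1); x_hist[0] = x[n]
                u = w @ x_hist                    # anti-noise output
                u_hist = np.roll(u_hist, 1); u_hist[0] = u
                e[n] = d[n] + s_hat @ u_hist      # residual at the error sensor
                xf_hist = np.roll(xf_hist, 1); xf_hist[0] = xf[n]
                w -= mu * e[n] * xf_hist          # FxLMS gradient step
            return e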

    Stochastic Analysis of LMS Algorithm with Delayed Block Coefficient Adaptation

    In high sample-rate applications of the least-mean-square (LMS) adaptive filtering algorithm, pipelining and/or block processing is required. In this paper, a stochastic analysis of the delayed block LMS algorithm is presented. As opposed to earlier work, pipelining and block processing are considered jointly and examined extensively. Separate analyses of the steady and transient states are presented to estimate the step-size bound, adaptation accuracy and adaptation speed, based on the recursive relation of the delayed block excess mean square error (MSE). The effect of different amounts of pipelining delay and block sizes on the adaptation accuracy and speed of the adaptive filter with different filter taps and speed-ups is studied. It is concluded that for a constant speed-up, a large delay and small block size lead to a slower convergence rate compared to a small delay and large block size, with almost the same steady-state MSE. Monte Carlo simulations indicate fairly good agreement with the proposed estimates for Gaussian inputs. Comment: 13 pages, 8 figures
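
    To make the two modifications concrete: the gradient is accumulated over a block of samples, and the resulting coefficient update is applied only after a pipeline delay of several blocks. A minimal software model of that behaviour (names and defaults are illustrative, not the paper's notation) is sketched below.

        import numpy as np

        def delayed_block_lms(x, d, n_taps=32, block=8, delay=2, mu=1e-3):
            """Delayed block LMS sketch: each block's gradient is applied
            `delay` blocks late, mimicking a pipelined implementation."""
            w = np.zeros(n_taps)
            pending = [np.zeros(n_taps) for _ in range(delay)]  # gradient pipeline
            e = np.zeros(len(x))
            for start in range(0, len(x) - block + 1, block):
                grad = np.zeros(n_taps)
                for n in range(start, start + block):
                    x_vec = x[max(0, n - n_taps + 1):n + 1][::-1]
                    x_vec = np.pad(x_vec, (0, n_taps - len(x_vec)))
                    e[n] = d[n] - w @ x_vec       # error with the (stale) weights
                    grad += e[n] * x_vec          # accumulate block gradient
                pending.append(grad)
                w += mu * pending.pop(0)          # apply gradient from `delay` blocks ago
            return w, e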

    A study on adaptive filtering for noise and echo cancellation.

    The objective of this thesis is to investigate the adaptive filtering technique in the application of noise and echo cancellation. As a relatively new area in Digital Signal Processing (DSP), adaptive filters have gained a lot of popularity in the past several decades because they can deal with time-varying digital systems and do not require a priori knowledge of the statistics of the information to be processed. Adaptive filters have been successfully applied in a great many areas such as communications, speech processing, image processing, and noise/echo cancellation. Since Bernard Widrow and his colleagues introduced the adaptive filter in the 1960s, many researchers have worked on noise/echo cancellation using adaptive filters with different algorithms. Among these algorithms, the normalized least mean square (NLMS) algorithm provides an efficient and robust approach, in which the model parameters are obtained on the basis of the mean square error (MSE). The choice of a structure for the adaptive filter also plays an important role in the performance of the algorithm as a whole. For this purpose, two different filter structures, the finite impulse response (FIR) filter and the infinite impulse response (IIR) filter, have been studied. The adaptive processes with the two filter structures and the aforementioned algorithm have been implemented and simulated using Matlab. Thesis (M.A.Sc.), Dept. of Electrical and Computer Engineering, University of Windsor (Canada), 2005.
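
    Since the thesis centres on the NLMS algorithm with an FIR structure, a compact reference version of that combination is sketched below (variable names are illustrative; the IIR variant studied in the thesis is not shown). The step size is normalized by the input power, which is what makes NLMS robust to the scaling of the reference signal.

        import numpy as np

        def nlms(x, d, n_taps=64, mu=0.5, eps=1e-8):
            """NLMS sketch for noise/echo cancellation.
            x: reference (far-end or noise) signal, d: corrupted signal.
            Returns the error (cleaned) signal and the final filter taps."""
            w = np.zeros(n_taps)
            e = np.zeros(len(x))
            for n in range(n_taps, len(x)):
                x_vec = x[n - n_taps:n][::-1]      # most recent sample first
                y = w @ x_vec                      # echo/noise estimate
                e[n] = d[n] - y                    # error = cleaned signal
                w += (mu / (x_vec @ x_vec + eps)) * e[n] * x_vec  # normalized update
            return e, w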

    Optimal low-rank approximations of Bayesian linear inverse problems

    In the Bayesian approach to inverse problems, data are often informative, relative to the prior, only on a low-dimensional subspace of the parameter space. Significant computational savings can be achieved by using this subspace to characterize and approximate the posterior distribution of the parameters. We first investigate approximation of the posterior covariance matrix as a low-rank update of the prior covariance matrix. We prove optimality of a particular update, based on the leading eigendirections of the matrix pencil defined by the Hessian of the negative log-likelihood and the prior precision, for a broad class of loss functions. This class includes the Förstner metric for symmetric positive definite matrices, as well as the Kullback-Leibler divergence and the Hellinger distance between the associated distributions. We also propose two fast approximations of the posterior mean and prove their optimality with respect to a weighted Bayes risk under squared-error loss. These approximations are deployed in an offline-online manner, where a more costly but data-independent offline calculation is followed by fast online evaluations. As a result, these approximations are particularly useful when repeated posterior mean evaluations are required for multiple data sets. We demonstrate our theoretical results with several numerical examples, including high-dimensional X-ray tomography and an inverse heat conduction problem. In both of these examples, the intrinsic low-dimensional structure of the inference problem can be exploited while producing results that are essentially indistinguishable from solutions computed in the full space.
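
    For the linear-Gaussian case, the optimal low-rank update has a compact form: with prior covariance Gamma_pr = S S^T and H the Hessian of the negative log-likelihood, the leading eigenpairs of S^T H S identify the directions in which the data are most informative, and the posterior covariance is Gamma_pr minus a rank-r correction along those directions. The dense-matrix sketch below illustrates this construction; a large-scale implementation would use matrix-free or Krylov eigensolvers instead, and the function name is ours.

        import numpy as np

        def lowrank_posterior_cov(S_prior, H, rank):
            """Rank-`rank` approximation of the posterior covariance.

            S_prior : square root of the prior covariance, Gamma_pr = S S^T
            H       : Hessian of the negative log-likelihood (e.g. G^T R^{-1} G)
            """
            # eigendecomposition of the prior-preconditioned Hessian S^T H S
            M = S_prior.T @ H @ S_prior
            lam, V = np.linalg.eigh(M)
            idx = np.argsort(lam)[::-1][:rank]     # keep leading eigendirections
            lam, V = lam[idx], V[:, idx]
            W = S_prior @ V                        # directions in parameter space
            Gamma_pr = S_prior @ S_prior.T
            # Gamma_pos ~ Gamma_pr - sum_i lam_i / (1 + lam_i) * w_i w_i^T
            return Gamma_pr - W @ np.diag(lam / (1.0 + lam)) @ W.T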

    Efficient distributed approach for density-based topology optimization using coarsening and h-refinement

    This work presents an efficient parallel implementation of density-based topology optimization using Adaptive Mesh Refinement (AMR) schemes to reduce the computational burden of the bottleneck of the process, the evaluation of the objective function using Finite Element Analysis (FEA). The objective is to obtain a design equivalent to the one generated on a uniformly fine mesh using distributed memory computing, but at a much cheaper computational cost. We propose using a fine mesh for the optimization and a coarse mesh for the analysis, with coarsening and refinement criteria based on the thresholding of design variables. We evaluate the functional on the coarse mesh using a distributed conjugate gradient solver preconditioned by an algebraic multigrid (AMG) method, showing its computational advantages in some cases by comparison with geometric multigrid (GMG) and AMG methods in two- and three-dimensional problems. We use different computational resources with small regularization distances for such comparisons. We also evaluate the performance and scalability of the proposal using different numbers of computing cores and distributed computing hosts. The numerical results show a significant increase in computing performance for the overall computing time of the proposal combining dynamic coarsening, adaptive mesh refinement, and distributed memory computing architectures. This work has been supported by the AEI/FEDER and UE under contract DPI2016-77538-R.
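
    The refinement/coarsening criterion described above is driven purely by the design field: elements with intermediate densities (near the evolving material boundary) are kept fine, while clearly void or clearly solid regions can be represented on coarser cells. A schematic marking rule of that kind is sketched below; the thresholds and the function name are illustrative, not the paper's values.

        import numpy as np

        def mark_elements(rho, low=0.05, high=0.95):
            """Density-threshold marking for AMR in topology optimization.
            rho: per-element design variables in [0, 1].
            Returns boolean masks of elements to refine and to coarsen."""
            refine = (rho > low) & (rho < high)   # intermediate density: boundary region
            coarsen = ~refine                     # almost void or almost solid
            return refine, coarsen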

    The Krylov-proportionate normalized least mean fourth approach: Formulation and performance analysis

    We propose novel adaptive filtering algorithms based on the mean-fourth error objective while providing further improvements in convergence performance through a proportionate update. We exploit the sparsity of the system in the mean-fourth error framework through the proportionate normalized least mean fourth (PNLMF) algorithm. In order to broaden the applicability of the PNLMF algorithm to dispersive (non-sparse) systems, we introduce the Krylov-proportionate normalized least mean fourth (KPNLMF) algorithm using the Krylov subspace projection technique. We propose the Krylov-proportionate normalized least mean mixed norm (KPNLMMN) algorithm, combining the mean-square and mean-fourth error objectives in order to enhance the performance of the constituent filters. Additionally, we propose the stable-PNLMF and stable-KPNLMF algorithms, which overcome the stability issues arising from the use of the mean-fourth error framework. Finally, we provide a complete performance analysis, i.e., the transient and steady-state analyses, for the proportionate-update-based algorithms, e.g., the PNLMF and KPNLMF algorithms and their variants, and analyze their tracking performance in a non-stationary environment. Through numerical examples, we demonstrate the agreement between the theoretical and ensemble-averaged results and show the superior performance of the introduced algorithms in different scenarios.
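
    As a rough illustration of the proportionate idea in the mean-fourth framework: a PNLMS-style gain vector gives larger step sizes to large (active) taps, and the cubed error drives the adaptation. The sketch below uses one plausible normalization; the exact update, the stability safeguards of the stable variants, and the Krylov projection are not reproduced, and all names and constants are illustrative.

        import numpy as np

        def pnlmf(x, d, n_taps=32, mu=0.1, eps=1e-6, delta=0.01):
            """Illustrative proportionate normalized least-mean-fourth update."""
            w = np.zeros(n_taps)
            e = np.zeros(len(x))
            for n in range(n_taps, len(x)):
                x_vec = x[n - n_taps:n][::-1]
                e[n] = d[n] - w @ x_vec
                g = np.abs(w) + delta             # proportionate gains (favor active taps)
                g /= g.sum()
                gx = g * x_vec
                # cubed error (mean-fourth objective), normalized by weighted input power
                w += mu * (e[n] ** 3) * gx / (x_vec @ gx + eps)
            return w, e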

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday, August 27th to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1