270 research outputs found

    A Convex Semi-Definite Positive Framework for DTI Estimation and Regularization

    In this paper we introduce a novel variational method for the joint estimation and regularization of diffusion tensor fields from noisy raw data. To this end, we use the classic quadratic data fidelity term derived from the Stejskal-Tanner equation together with a new smoothness term, leading to a convex objective function. The regularization term is based on the assumption that the signal can be reconstructed using a weighted average of observations on a local neighborhood. The weights measure the similarity between tensors and are computed directly from the diffusion images. We preserve the positive semi-definiteness constraint using a projected gradient descent. Experimental validation and comparisons with a similar method on synthetic data with a known noise model, as well as classification of tensors toward understanding the anatomy of human skeletal muscle, demonstrate the potential of our method.
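
    The positivity-preserving step this abstract describes lends itself to a short illustration. The Python sketch below projects a symmetric tensor onto the positive semi-definite cone by clipping negative eigenvalues and wraps that projection in a generic projected gradient descent; it is a minimal sketch under simplifying assumptions, not the authors' full variational method, and the function names are hypothetical.

    import numpy as np

    def project_psd(M):
        # Symmetrize, then clip negative eigenvalues to zero: the
        # Euclidean projection onto the positive semi-definite cone.
        S = (M + M.T) / 2.0
        w, V = np.linalg.eigh(S)
        return (V * np.clip(w, 0.0, None)) @ V.T

    def projected_gradient_descent(grad, D0, step=1e-2, n_iter=200):
        # Generic projected gradient descent for a convex objective over
        # 3x3 diffusion tensors; `grad` returns the gradient at D.
        D = project_psd(D0)
        for _ in range(n_iter):
            D = project_psd(D - step * grad(D))
        return D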

    Median and related local filters for tensor-valued images

    We develop a concept for the median filtering of tensor data. The main part of this concept is the definition of a median for symmetric matrices. This definition is based on the minimisation of a geometrically motivated objective function which measures the sum of distances of a variable matrix to the given data matrices. This theoretically well-founded concept fits into a context of similarly defined median filters for other multivariate data. Unlike some other approaches, we do not require by definition that the median has to be one of the given data values. Nevertheless, this is the case in many instances, equipping the matrix-valued median even with root signals similar to the scalar-valued situation. Like their scalar-valued counterparts, matrix-valued median filters show excellent capabilities for structure-preserving denoising. Experiments on diffusion tensor imaging, fluid dynamics and orientation estimation data demonstrate this. The orientation estimation examples give rise to a new variant of a robust adaptive structure tensor which can be compared to existing concepts. For the efficient computation of matrix medians, we present a convex programming framework. By generalising the idea of the matrix median filters, we design a variety of other local matrix filters. These include matrix-valued mid-range filters and, more generally, M-smoothers, but also weighted medians and α-quantiles. Mid-range filters and quantiles also allow interesting cross-links to fundamental concepts of matrix morphology.
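
    To make the minimisation idea concrete: under the Frobenius distance, the matrix median is the geometric median of the data matrices, which can be approximated with a Weiszfeld-type iteration. The Python sketch below is a simplified stand-in for the paper's convex programming framework; the names and the choice of distance are assumptions.

    import numpy as np

    def matrix_median(mats, n_iter=100, eps=1e-9):
        # Weiszfeld-type iteration for the minimizer of
        # sum_i ||X - A_i||_F over the given symmetric matrices A_i
        # (a hypothetical stand-in for the paper's convex program).
        mats = np.asarray(mats)
        X = np.mean(mats, axis=0)              # initialize at the mean
        for _ in range(n_iter):
            d = np.array([max(np.linalg.norm(X - A), eps) for A in mats])
            w = 1.0 / d                        # closer matrices weigh more
            X = np.tensordot(w, mats, axes=1) / w.sum()
        return X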

    Hypothesis Testing For Network Data in Functional Neuroimaging

    In recent years, it has become common practice in neuroscience to use networks to summarize relational information in a set of measurements, typically assumed to be reflective of either functional or structural relationships between regions of interest in the brain. One of the most basic tasks of interest in the analysis of such data is the testing of hypotheses, in answer to questions such as "Is there a difference between the networks of these two groups of subjects?" In the classical setting, where the unit of interest is a scalar or a vector, such questions are answered through the use of familiar two-sample testing strategies. Networks, however, are not Euclidean objects, and hence classical methods do not directly apply. We address this challenge by drawing on concepts and techniques from geometry and high-dimensional statistical inference. Our work is based on a precise geometric characterization of the space of graph Laplacian matrices and a nonparametric notion of averaging due to Fréchet. We motivate and illustrate our resulting methodologies for testing in the context of networks derived from functional neuroimaging data on human subjects from the 1000 Functional Connectomes Project. In particular, we show that this global test is more statistically powerful than a mass-univariate approach. In addition, we provide a method for visualizing the individual contribution of each edge to the overall test statistic.
    Comment: 34 pages, 5 figures
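
    As a rough sketch of a Fréchet-mean-based two-sample comparison: under a plain Euclidean (Frobenius) metric the Fréchet mean of a set of matrices reduces to their arithmetic mean, and group means can be compared with a permutation test. The Python below illustrates that simplified setting, not the paper's geometry-aware procedure on the space of graph Laplacians.

    import numpy as np

    def frechet_mean(laplacians):
        # Under the Frobenius metric the Frechet mean is the arithmetic
        # mean; the paper works with a more refined geometry.
        return np.mean(laplacians, axis=0)

    def two_sample_test(group_a, group_b, n_perm=1000, seed=0):
        # Permutation test on the distance between group Frechet means.
        # group_a, group_b: arrays of shape (n_subjects, d, d).
        rng = np.random.default_rng(seed)
        stat = np.linalg.norm(frechet_mean(group_a) - frechet_mean(group_b))
        pooled = np.concatenate([group_a, group_b])
        n_a = len(group_a)
        count = 0
        for _ in range(n_perm):
            perm = rng.permutation(len(pooled))
            s = np.linalg.norm(frechet_mean(pooled[perm[:n_a]])
                               - frechet_mean(pooled[perm[n_a:]]))
            count += s >= stat
        return (count + 1) / (n_perm + 1)      # permutation p-value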

    Total variation regularization for manifold-valued data

    We consider total variation minimization for manifold-valued data. We propose a cyclic proximal point algorithm and a parallel proximal point algorithm to minimize TV functionals with ℓ^p-type data terms in the manifold case. These algorithms are based on iterative geodesic averaging, which makes them easily applicable to a large class of data manifolds. As an application, we consider denoising images which take their values in a manifold. We apply our algorithms to diffusion tensor images, interferometric SAR images, as well as sphere- and cylinder-valued images. For the class of Cartan-Hadamard manifolds (which includes the data space in diffusion tensor imaging) we show the convergence of the proposed TV minimizing algorithms to a global minimizer.
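
    The proximal maps at the heart of such cyclic schemes have a simple geodesic form: for a pairwise TV term λ·d(p, q), both points move toward each other along the connecting geodesic by a distance of min(λ, d/2). The Python below sketches this for unit-sphere-valued data; it is a minimal illustration for one manifold, and the helper names are made up.

    import numpy as np

    def geodesic_point(p, q, t):
        # Point at fraction t along the unit-sphere geodesic from p to q
        # (spherical linear interpolation).
        ang = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
        if ang < 1e-12:
            return p
        return (np.sin((1 - t) * ang) * p + np.sin(t * ang) * q) / np.sin(ang)

    def prox_tv_pair(p, q, lam):
        # Proximal map of lam * d(p, q): each point moves toward the
        # other along the geodesic by the distance min(lam, d / 2).
        d = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
        t = min(lam, d / 2.0) / d if d > 1e-12 else 0.0
        return geodesic_point(p, q, t), geodesic_point(q, p, t)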

    Learning Sparse Adversarial Dictionaries For Multi-Class Audio Classification

    Audio events are quite often overlapping in nature, and more prone to noise than visual signals. There has been increasing evidence for the superior performance of representations learned using sparse dictionaries for applications like audio denoising and speech enhancement. This paper concentrates on modifying traditional reconstructive dictionary learning algorithms by incorporating a discriminative term into the objective function, in order to learn class-specific adversarial dictionaries that are good at representing samples of their own class while being poor at representing samples belonging to any other class. We quantitatively demonstrate the effectiveness of our learned dictionaries as a stand-alone solution for both binary as well as multi-class audio classification problems.
    Comment: Accepted at the Asian Conference on Pattern Recognition (ACPR-2017)
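
    Once per-class dictionaries are learned, classification typically reduces to picking the class whose dictionary reconstructs the sample best. The Python sketch below illustrates that decision rule; for brevity it substitutes least squares for true sparse coding (e.g. OMP or LASSO), so it is an assumption-laden stand-in rather than the paper's method.

    import numpy as np

    def reconstruction_error(x, D):
        # Residual of coding x with dictionary D. Least squares is a
        # simplification; real sparse coding would use OMP or LASSO.
        a, *_ = np.linalg.lstsq(D, x, rcond=None)
        return np.linalg.norm(x - D @ a)

    def classify(x, dictionaries):
        # Assign x to the class whose dictionary represents it best,
        # i.e. the one with the smallest reconstruction error.
        errs = [reconstruction_error(x, D) for D in dictionaries]
        return int(np.argmin(errs))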

    Total Generalized Variation for Manifold-valued Data

    In this paper we introduce the notion of second-order total generalized variation (TGV) regularization for manifold-valued data in a discrete setting. We provide an axiomatic approach to formalize reasonable generalizations of TGV to the manifold setting and present two possible concrete instances that fulfill the proposed axioms. We provide well-posedness results and present algorithms for a numerical realization of these generalizations in the manifold setup. Further, we provide experimental results for synthetic and real data to underpin the proposed generalization numerically and to show its potential for applications with manifold-valued data.
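
    For orientation, the scalar Euclidean case that the paper generalizes is easy to state: discrete second-order TGV infimizes a1·Σ|Du − w| + a0·Σ|Dw| over an auxiliary field w, and the manifold versions replace differences by constructions such as logarithm maps. The Python below merely evaluates this functional in 1-D for given u and w; it is background for the Euclidean case, not the paper's manifold construction.

    import numpy as np

    def tgv2_value(u, w, a1, a0):
        # Scalar 1-D second-order TGV of u, given the auxiliary field w
        # (length len(u) - 1): a1 * sum|Du - w| + a0 * sum|Dw|.
        # TGV(u) itself is the infimum of this value over all w.
        du = np.diff(u)
        return a1 * np.abs(du - w).sum() + a0 * np.abs(np.diff(w)).sum()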

    A Tractable Online Learning Algorithm for the Multinomial Logit Contextual Bandit

    In this paper, we consider the contextual variant of the MNL-Bandit problem. More specifically, we consider a dynamic set optimization problem, where a decision-maker offers a subset (assortment) of products to a consumer and observes their response in every round. Consumers purchase products to maximize their utility. We assume that a set of attributes describes the products, and the mean utility of a product is linear in the values of these attributes. We model consumer choice behavior using the widely used Multinomial Logit (MNL) model and consider the decision-maker's problem of dynamically learning the model parameters while optimizing cumulative revenue over the selling horizon T. Though this problem has attracted considerable attention in recent times, many existing methods often involve solving an intractable non-convex optimization problem, and their theoretical performance guarantees depend on a problem-dependent parameter which could be prohibitively large. In particular, existing algorithms for this problem have regret bounded by O(√(κdT)), where κ is a problem-dependent constant that can have an exponential dependency on the number of attributes. In this paper, we propose an optimistic algorithm and show that the regret is bounded by O(√(dT) + κ), significantly improving the performance over existing methods. Further, we propose a convex relaxation of the optimization step, which allows for tractable decision-making while retaining the favourable regret guarantee.
    Comment: updated version, under review
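
    To fix notation, the MNL model itself is straightforward to write down: a product with mean utility u_i is purchased with probability exp(u_i) / (1 + Σ_j exp(u_j)) when offered in an assortment, with the outside (no-purchase) option normalized to utility 0. The Python below sketches these choice probabilities and the expected revenue of an assortment under the abstract's linear-utility assumption; it is standard background, not the paper's learning algorithm.

    import numpy as np

    def mnl_choice_probs(utilities):
        # MNL purchase probabilities for an offered assortment; the
        # no-purchase option has utility 0, hence the "1 +" term.
        v = np.exp(utilities)
        return v / (1.0 + v.sum())

    def expected_revenue(attributes, theta, prices):
        # Expected revenue of an assortment with linear mean utilities
        # u_i = x_i @ theta, where x_i is row i of `attributes`.
        probs = mnl_choice_probs(attributes @ theta)
        return float(probs @ prices)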