
    Estimates for the spectral condition number of cardinal B-spline collocation matrices

    The famous de Boor conjecture states that the condition number of the polynomial B-spline collocation matrix at the knot averages is bounded independently of the knot sequence, i.e., it depends only on the spline degree. For highly nonuniform knot meshes, such as geometric meshes, the conjecture is known to be false. As a step towards an answer for uniform meshes, we investigate the spectral condition number of cardinal B-spline collocation matrices. Numerical testing strongly suggests that the conjecture is true for cardinal B-splines.
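
    As a quick illustration of the quantity under study (a sketch, not code from the paper): the snippet below builds the B-spline collocation matrix A[i, j] = B_j(tau_i) at the knot averages (Greville abscissae) for a clamped uniform knot vector, used here as a stand-in for the cardinal (uniform-knot) setting, and reports its spectral (2-norm) condition number. It assumes SciPy >= 1.8 for BSpline.design_matrix; degrees and sizes are arbitrary.

```python
import numpy as np
from scipy.interpolate import BSpline

def collocation_cond(degree, n_basis):
    """Spectral condition number of the B-spline collocation matrix
    at the knot averages (Greville abscissae) on clamped uniform knots."""
    k = degree
    t = np.concatenate(([0.0] * k,
                        np.linspace(0.0, 1.0, n_basis - k + 1),
                        [1.0] * k))                    # clamped uniform knot vector
    # Greville abscissae: averages of k consecutive interior knots.
    tau = np.array([t[j + 1:j + k + 1].mean() for j in range(n_basis)])
    A = BSpline.design_matrix(tau, t, k).toarray()     # A[i, j] = B_j(tau_i)
    return np.linalg.cond(A, 2)                        # spectral (2-norm) condition

for k in (2, 3, 4):
    print(k, [round(collocation_cond(k, n), 3) for n in (10, 20, 40)])
```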

    A three domain covariance framework for EEG/MEG data

    In this paper we introduce a covariance framework for the analysis of EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. We perform a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. In addition, we illustrate our method on real EEG and MEG data sets. The proposed covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as a source of noise and realistic noise covariance estimates are needed for accurate dipole localization, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, such as in combined EEG/fMRI experiments in which the correlation between EEG and fMRI signals is investigated. Comment: 25 pages, 8 figures, 1 table.
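
    To make the covariance structure concrete (a minimal sketch under assumed sizes, not the estimation algorithm of the paper): the snippet below forms a covariance that is a Kronecker product of an epoch, a time and a space factor, and draws one block of synthetic data from it without ever forming the full matrix, using the identity chol(A ⊗ B) = chol(A) ⊗ chol(B).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    """Random symmetric positive-definite covariance factor."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

# Illustrative sizes only: 4 epochs, 16 time samples, 8 sensors.
S_epoch, S_time, S_space = random_spd(4), random_spd(16), random_spd(8)

# Full covariance of the vectorized data (epochs x time x space ordering).
Sigma = np.kron(S_epoch, np.kron(S_time, S_space))

# Draw correlated data using only the Cholesky factors of the components,
# exploiting chol(A (x) B) = chol(A) (x) chol(B).
L_e, L_t, L_s = (np.linalg.cholesky(S) for S in (S_epoch, S_time, S_space))
z = rng.standard_normal((4, 16, 8))
x = np.einsum('ai,bj,ck,ijk->abc', L_e, L_t, L_s, z)

print(Sigma.shape, x.shape)   # (512, 512) (4, 16, 8)
```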

    Compressed Sensing and ΣΔ-Quantization


    A System Centric View of Modern Structured and Sparse Inference Tasks

    University of Minnesota Ph.D. dissertation, June 2017. Major: Electrical/Computer Engineering. Advisor: Jarvis Haupt. 1 computer file (PDF); xii, 140 pages.

    We are living in the era of data deluge, in which we collect unprecedented amounts of data from a variety of sources. Modern inference tasks are centered around exploiting structure and sparsity in the data to extract relevant information. This thesis takes an end-to-end, system-centric view of these inference tasks, which consist mainly of two sub-parts: (i) data acquisition and (ii) data processing. In the data-acquisition part of the system, we address issues pertaining to noise, clutter (the unwanted extraneous signals which accompany the desired signal), quantization, and missing observations. In the data-processing part of the system, we investigate the problems that arise in resource-constrained scenarios such as limited computational power and limited battery life.

    The first part of this thesis is centered around computationally efficient approximations of a given linear dimensionality reduction (LDR) operator. In particular, we explore approximations based on partial circulant matrices (matrices whose rows are related by circular shifts), as they allow for computationally efficient implementations. We present several theoretical results that provide insight into the existence of such approximations. We also propose a data-driven approach to numerically obtain such approximations and demonstrate its utility on real-life data.

    The second part of this thesis focuses on the issues of noise, missing observations, and quantization arising in matrix and tensor data. In particular, we propose a sparsity-regularized maximum likelihood approach to completion of matrices following sparse factor models (matrices which can be expressed as a product of two matrices, one of which is sparse). We provide general theoretical error bounds for the proposed approach which can be instantiated for a variety of noise distributions. We also consider the problem of tensor completion and extend the results of matrix completion to the tensor setting. The problem of matrix completion from quantized and noisy observations is also investigated in as general terms as possible. We propose a constrained maximum likelihood approach to quantized matrix completion, provide probabilistic error bounds for this approach, and give numerical algorithms that are used to provide evidence for the proposed error bounds.

    The final part of this thesis focuses on issues related to clutter and limited battery life in signal acquisition. Specifically, we investigate the problem of compressive measurement design under a given sensing energy budget for estimating structured signals in structured clutter. We propose a novel approach that leverages prior information about the signal and clutter to judiciously allocate sensing energy to the compressive measurements. We also investigate the problem of processing electrodermal activity (EDA) signals recorded as the conductance over a user's skin. EDA signals contain information about the user's neuron firing and psychological state; they contain the desired information-carrying signal superimposed with unwanted components which may be considered clutter. We propose a novel compressed-sensing-based approach with provable error guarantees for processing EDA signals to extract relevant information, and demonstrate its efficacy, as compared to existing techniques, via numerical experiments.
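
    The partial circulant approximations mentioned above owe much of their appeal to the FFT; the sketch below (illustrative names and sizes, not code from the thesis) shows that applying an m x n partial circulant operator costs one length-n FFT-based circular convolution plus row subsampling, rather than a dense m x n multiply.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 1024, 128
c = rng.standard_normal(n)                     # first column of the circulant
rows = rng.choice(n, size=m, replace=False)    # rows retained by the partial operator

def partial_circulant_matvec(c, rows, x):
    """y = (C x)[rows], where C[i, k] = c[(i - k) mod n], computed via the FFT."""
    full = np.fft.irfft(np.fft.rfft(c) * np.fft.rfft(x), n=len(c))
    return full[rows]

# Sanity check against the explicit circulant matrix.
C = np.array([np.roll(c, k) for k in range(n)]).T    # column k = roll(c, k)
x = rng.standard_normal(n)
assert np.allclose(partial_circulant_matvec(c, rows, x), (C @ x)[rows])
```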

    On Coherence and the Geometry of Certain Families of Lattices

    The coherence of a lattice is, roughly speaking, a measure of non-orthogonality of its minimal vectors. It was introduced to lattices (by analogy with frame theory) by L. Fukshansky and others as a possible route to gaining insight into packing density, a central problem in lattice theory. In this work, we introduce the related measure of average coherence, explore connections between packing density and coherence, and prove several properties of certain families of lattices, most notably nearly orthogonal lattices, cyclotomic lattices, and cyclic lattices.
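
    A small worked example may help fix ideas; it uses one common definition of coherence (assumed here, not quoted from the thesis): the maximum of |<x, y>| / (|x| |y|) over distinct minimal vectors of the lattice, identified up to sign.

```python
import itertools
import numpy as np

def coherence(min_vectors):
    """max |<x, y>| / (|x||y|) over pairs of distinct minimal vectors (up to sign)."""
    vs = [np.asarray(v, dtype=float) for v in min_vectors]
    return max(abs(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
               for x, y in itertools.combinations(vs, 2))

# Z^2: the minimal vectors e1, e2 are orthogonal, so the coherence is 0.
print(coherence([(1, 0), (0, 1)]))
# Hexagonal lattice A2: minimal vectors meet at 60 degrees, so the coherence is 1/2.
print(coherence([(1, 0), (0.5, np.sqrt(3) / 2), (-0.5, np.sqrt(3) / 2)]))
```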

    Algorithmic advances in learning from large dimensional matrices and scientific data

    University of Minnesota Ph.D. dissertation, May 2018. Major: Computer Science. Advisor: Yousef Saad. 1 computer file (PDF); xi, 196 pages.

    This thesis is devoted to answering a range of questions in machine learning and data analysis related to large dimensional matrices and scientific data. Two key research objectives connect the different parts of the thesis: (a) development of fast, efficient, and scalable machine learning algorithms which handle large matrices and high dimensional data; and (b) design of learning algorithms for scientific data applications. The work combines ideas from multiple, often non-traditional, fields, leading to new algorithms, new theory, and new insights in different applications.

    The first of the three parts of this thesis explores numerical linear algebra tools to develop efficient algorithms for machine learning with reduced computational cost and improved scalability. Here, we first develop inexpensive algorithms, combining various ideas from linear algebra and approximation theory, for matrix-spectrum-related problems such as numerical rank estimation and matrix function trace estimation, including log-determinants, Schatten norms, and other spectral sums. We also propose a new method which simultaneously estimates the dimension of the dominant subspace of a covariance matrix and obtains an approximation to that subspace. Next, we consider matrix approximation problems such as low rank approximation, column subset selection, and graph sparsification. We present a new approach based on multilevel coarsening to compute these approximations for large sparse matrices and graphs. Lastly, on the linear algebra front, we devise a novel algorithm based on rank shrinkage for the dictionary learning problem: learning a small set of dictionary columns which best represent the given data.

    The second part of this thesis explores novel non-traditional applications of information theory and codes, particularly in solving problems related to machine learning and high dimensional data analysis. Here, we first propose new matrix sketching methods using codes for obtaining low rank approximations of matrices and solving least squares regression problems. Next, we demonstrate that codewords from certain coding schemes perform exceptionally well for the group testing problem. Lastly, we present a novel machine learning application for coding theory, namely solving large scale multilabel classification problems. We propose a new algorithm for multilabel classification which is based on group testing and codes. The algorithm has a simple, inexpensive prediction method, and the error-correcting capabilities of codes are exploited for the first time to correct prediction errors.

    The third part of the thesis focuses on devising robust and stable learning algorithms which yield results that are interpretable from the viewpoint of specific scientific applications. We present Union of Intersections (UoI), a flexible, modular, and scalable framework for statistical machine learning problems. We then adapt this framework to develop new algorithms for matrix decomposition problems such as nonnegative matrix factorization (NMF) and CUR decomposition. We apply these new methods to data from neuroscience applications in order to obtain insights into the functionality of the brain. Finally, we consider the application of materials informatics: learning from materials data. Here, we deploy regression techniques on materials data to predict physical properties of materials.
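
    As one concrete example of the spectral-sum estimation mentioned in the first part (a generic sketch, not the thesis algorithms): Hutchinson's stochastic trace estimator tr(f(A)) ≈ (1/m) Σ_i z_i^T f(A) z_i with Rademacher probes z_i is the basic building block behind log-determinant, Schatten-norm and other spectral-sum estimates. Below, f is the identity, so the estimate can be checked against the exact trace.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 500, 200                                 # matrix size, number of probes
B = rng.standard_normal((n, n))
A = B @ B.T                                     # symmetric PSD test matrix

Z = rng.choice([-1.0, 1.0], size=(n, m))        # Rademacher probe vectors
estimate = np.einsum('ip,ip->', Z, A @ Z) / m   # (1/m) * sum_i z_i^T A z_i
print(estimate, np.trace(A))                    # estimate should be close to the trace
```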

    A Link to the Math. Connections Between Number Theory and Other Mathematical Topics

    Number theory is one of the oldest mathematical areas. This is perhaps one of the reasons why there are many connections between number theory and other areas of mathematics. This thesis is devoted to some of those connections. In the first part of this thesis I describe known connections between number theory and twelve other areas, namely analysis, sequences, applied mathematics (i.e., probability theory and numerical mathematics), topology, graph theory, linear algebra, geometry, algebra, differential geometry, complex analysis, physics and computer science, and algebraic geometry. We will see that these concepts not only connect number theory with these areas but also yield connections among themselves. In the second part I present some new results on four topics connecting number theory with computer science, graph theory, algebra, and linear algebra and analysis, respectively. [...] In the next topic I determine the neighbourhood of the neighbourhood of vertices in some special graphs. This problem can be formulated in terms of generators of subgroups of abelian groups and is a direct generalization of a corresponding result for cyclic groups. In the third chapter I determine the number of solutions of some linear equations over factor rings of principal ideal domains R; in the case R = Z this can be used to bound sums appearing in the circle method. Lastly, I investigate the puzzle “Lights Out” as well as variants of it. Of special interest is the question of complete solvability, i.e., those cases in which all starting boards are solvable. I will use various number-theoretic tools to give a criterion for complete solvability depending on the board size modulo 30 and show how this puzzle relates to algebraic number theory.
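
    The classical linear-algebra view of “Lights Out” (sketched below; this is not the thesis's number-theoretic mod-30 criterion) is that an n x n board is completely solvable exactly when the move matrix I + adjacency is invertible over GF(2). The snippet checks this by Gaussian elimination over GF(2).

```python
import numpy as np

def completely_solvable(n):
    """True iff every starting n x n Lights Out board is solvable,
    i.e. the move matrix is invertible over GF(2)."""
    N = n * n
    A = np.zeros((N, N), dtype=np.uint8)
    for r in range(n):
        for c in range(n):
            i = r * n + c
            A[i, i] = 1                                  # a press toggles its own cell...
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < n and 0 <= cc < n:
                    A[i, rr * n + cc] = 1                # ...and its grid neighbours
    # Rank over GF(2) via Gaussian elimination with XOR row operations.
    rank = 0
    for col in range(N):
        pivot = next((r for r in range(rank, N) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]              # swap pivot row into place
        for r in range(N):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]                          # clear column col elsewhere
        rank += 1
    return rank == N

# Board sizes below 12 that are NOT completely solvable: expected [4, 5, 9, 11].
print([n for n in range(1, 12) if not completely_solvable(n)])
```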