
    Bicompletions of distance matrices

    In the practice of information extraction, the input data are usually arranged into pattern matrices and analyzed by the methods of linear algebra and statistics, such as principal component analysis. In some applications, the tacit assumptions of these methods lead to wrong results. The usual reason is that the matrix composition of linear algebra presents information as flowing in waves, whereas it sometimes flows in particles, which seek the shortest paths. This wave-particle duality in computation and information processing was originally observed by Abramsky. In this paper we pursue a particle view of information, formalized in *distance spaces*, which generalize metric spaces but are slightly less general than Lawvere's *generalized metric spaces*. In this framework, the task of extracting the 'principal components' from a given matrix of data boils down to a bicompletion, in the sense of enriched category theory. We describe the bicompletion construction for distance matrices. The practical goal that motivates this research is to develop a method to estimate the hardness of attack constructions in security. Comment: 20 pages, 5 figures; appeared in Springer LNCS vol. 7860 in 2013; v2 fixes an error in Sec. 2.3, noticed by Toshiki Kataok
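The wave/particle contrast in this abstract can be made concrete by comparing two matrix compositions: the ordinary sum-product of linear algebra aggregates contributions along all paths, while the min-plus (tropical) composition natural for distance matrices keeps only the cheapest path. A minimal NumPy sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def matmul_sum_product(A, B):
    """Ordinary linear-algebra composition: information 'flows in waves',
    summing the contributions of all paths i -> k -> j."""
    return A @ B

def matmul_min_plus(A, B):
    """Tropical (min-plus) composition: information 'flows in particles',
    each entry keeping only the cheapest path i -> k -> j."""
    n, p = A.shape[0], B.shape[1]
    C = np.empty((n, p))
    for i in range(n):
        for j in range(p):
            C[i, j] = np.min(A[i, :] + B[:, j])
    return C

# A distance matrix on three points (entries are pairwise costs).
D = np.array([[0.0, 1.0, 5.0],
              [1.0, 0.0, 1.0],
              [5.0, 1.0, 0.0]])

# Iterating min-plus composition enforces the triangle inequality:
# the direct cost D[0, 2] = 5 is beaten by the two-hop path 0 -> 1 -> 2.
D2 = matmul_min_plus(D, D)
print(D2[0, 2])  # 2.0
```

Iterating `matmul_min_plus` to a fixed point is the all-pairs shortest-path closure, the simplest instance of the completion idea for distance matrices.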

    On Matrix KP and Super-KP Hierarchies in the Homogeneous Grading

    Constrained KP and super-KP hierarchies of integrable equations (generalized NLS hierarchies) are systematically produced through a Lie-algebraic AKS matrix framework associated to the homogeneous grading. The role played by different regular elements in defining the corresponding hierarchies is analyzed, as well as the symmetry properties under Weyl group transformations. The coset structure of the higher-order Hamiltonian densities is proven. For a generic Lie algebra the hierarchies considered here are integrable and depend essentially on continuous free parameters. The bosonic hierarchies studied in [FK, AGZ] are obtained as special limit restrictions on hermitian symmetric spaces. In the supersymmetric case the homogeneous grading is introduced consistently by using alternating sums of bosons and fermions in the spectral-parameter power series. The bosonic hierarchies obtained from \hat{sl(3)} and the supersymmetric ones derived from the N=1 affinization of sl(2), sl(3) and osp(1|2) are explicitly constructed. An unexpected result is found: only a restricted subclass of the sl(3) bosonic hierarchies can be supersymmetrically extended while preserving integrability. Comment: 36 pages, LaTeX

    Tensor Decompositions for Signal Processing Applications: From Two-way to Multiway Component Analysis

    The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization. Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
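As a toy illustration of the Tucker model mentioned in this abstract, the sketch below computes an untruncated higher-order SVD (HOSVD) in plain NumPy; the function names and the omission of rank truncation are simplifications for illustration, not the survey's algorithms:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: arrange the mode-n fibers of T as matrix columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Higher-Order SVD: a Tucker decomposition T = G x1 U1 x2 U2 x3 U3,
    with each factor U_n taken from the SVD of the mode-n unfolding."""
    factors = [np.linalg.svd(unfold(T, n), full_matrices=False)[0]
               for n in range(T.ndim)]
    G = T
    for n, U in enumerate(factors):
        # Mode-n product with U^T projects mode n onto the factor basis.
        G = np.moveaxis(np.tensordot(U.T, G, axes=(1, n)), 0, n)
    return G, factors

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 3, 2))
G, Us = hosvd(T)

# Reconstruct by multiplying the core by each factor along its mode.
R = G
for n, U in enumerate(Us):
    R = np.moveaxis(np.tensordot(U, R, axes=(1, n)), 0, n)
print(np.allclose(R, T))  # True: the full, untruncated HOSVD is exact
```

Truncating each `U_n` to its leading columns turns this into the multilinear-rank approximation that generalizes PCA to multiway data.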

    On Weighted Multivariate Sign Functions

    Multivariate sign functions are often used for robust estimation and inference. We propose using data-dependent weights in association with such functions. The proposed weighted sign functions retain desirable robustness properties, while significantly improving efficiency in estimation and inference compared to unweighted multivariate sign-based methods. Using weighted signs, we demonstrate methods of robust location estimation and robust principal component analysis. We extend the scope of robust multivariate methods to include robust sufficient dimension reduction and functional outlier detection. Several numerical studies and real data applications demonstrate the efficacy of the proposed methodology. Comment: Keywords: Multivariate sign, Principal component analysis, Data depth, Sufficient dimension reduction
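To make the (unweighted) spatial sign concrete, the sketch below estimates a robust location via the Weiszfeld iteration, whose fixed point is exactly the point at which the spatial signs of the residuals sum to zero; the paper's data-dependent weights would enter as extra multiplicative factors on these signs. All names here are illustrative:

```python
import numpy as np

def spatial_sign(X, eps=1e-12):
    """Multivariate (spatial) sign: project each row onto the unit sphere."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return np.where(norms > eps, X / np.maximum(norms, eps), 0.0)

def spatial_median(X, iters=100, tol=1e-9):
    """Robust location estimate via the Weiszfeld iteration: at the fixed
    point, the spatial signs of the residuals X - mu sum to zero."""
    mu = X.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(X - mu, axis=1), 1e-12)
        w = 1.0 / d
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    return mu

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))  # bulk of the data centered at the origin
X[:5] += 50.0                      # five gross outliers

print(np.linalg.norm(spatial_median(X)))  # stays near 0 despite outliers
print(np.linalg.norm(X.mean(axis=0)))     # dragged toward the outliers
```

Replacing the covariance matrix by the covariance of these signs gives the spatial sign covariance matrix, the building block of the robust PCA the abstract refers to.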

    Symmetric tensor decomposition

    We present an algorithm for decomposing a symmetric tensor of dimension n and order d as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester, devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables of total degree d as a sum of powers of linear forms (Waring's problem), incidence properties on secant varieties of the Veronese variety, and the representation of linear forms as linear combinations of evaluations at distinct points. Then we reformulate Sylvester's approach from the dual point of view. Exploiting this duality, we propose necessary and sufficient conditions for the existence of such a decomposition of a given rank, using the properties of Hankel (and quasi-Hankel) matrices derived from multivariate polynomials and normal form computations. This leads to the resolution of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with these Hankel matrices. The impact of this contribution is two-fold. First, it permits an efficient computation of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g. alternating least squares or gradient descent). Second, it gives tools for understanding uniqueness conditions and for detecting the rank.
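Sylvester's 1886 algorithm for binary forms, which this paper generalizes, can be sketched in a few lines of NumPy for a rank-2 cubic. The worked example and variable names are illustrative, and generic-case numerics stand in for the normal-form computations of the paper:

```python
import numpy as np

# Sylvester's algorithm for the binary cubic f = (x + y)^3 + (x - y)^3,
# written with "dehomogenized" coefficients c_i in
#   f = sum_i binom(d, i) c_i x^(d-i) y^i.
c = np.array([2.0, 0.0, 2.0, 0.0])
d = len(c) - 1
r = 2  # candidate rank

# Hankel (catalecticant) matrix H[i, j] = c[i + j], size (d-r+1) x (r+1).
H = np.array([[c[i + j] for j in range(r + 1)] for i in range(d - r + 1)])

# A kernel vector k of H encodes the polynomial k_0 + k_1 t + ... + k_r t^r;
# its roots t_j give the linear forms x + t_j y of the decomposition.
_, _, Vt = np.linalg.svd(H)
k = Vt[-1]                 # basis of the one-dimensional kernel
roots = np.roots(k[::-1])  # np.roots expects highest-degree coefficient first
print(sorted(roots.real))  # ~ [-1.0, 1.0]: the forms (x - y) and (x + y)

# The scalars lambda_j solve the Vandermonde system c_i = sum_j l_j t_j^i.
V = np.vander(roots, d + 1, increasing=True).T
lam, *_ = np.linalg.lstsq(V, c, rcond=None)
print(np.round(lam.real, 6))  # ~ [1, 1]: f = (x + y)^3 + (x - y)^3
```

The paper's contribution replaces this univariate root-finding with kernel computations on multivariate (quasi-)Hankel matrices, which is what makes the sub-generic-rank cases tractable.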