
    Undercomplete Blind Subspace Deconvolution via Linear Prediction

    We present a novel solution technique for the blind subspace deconvolution (BSSD) problem, where a temporal convolution of multidimensional hidden independent components is observed and the task is to uncover the hidden components using the observations only. We carry out this task for the undercomplete case (uBSSD): we reduce the original uBSSD task via linear prediction to independent subspace analysis (ISA), which we can solve. As has been shown recently, temporal concatenation can also reduce uBSSD to ISA, but the associated ISA problem can easily become 'high dimensional' [1]. The new reduction method circumvents this dimensionality problem. We perform detailed studies of the efficiency of the proposed technique by means of numerical simulations and find several advantages: our method can achieve high-quality estimations from a smaller number of samples, and it can cope with deeper temporal convolutions. Comment: 12 pages.
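
    The reduction can be sketched in a few lines, under illustrative assumptions: the observed series admits a finite-order AR approximation fitted by least squares, and scikit-learn's FastICA stands in for the first, coordinate-wise stage of a full ISA solver (grouping the recovered one-dimensional components into subspaces is a separate step). All names below are illustrative, not the paper's implementation.

```python
# Illustrative sketch only: reduce uBSSD to ISA via linear prediction.
import numpy as np
from sklearn.decomposition import FastICA  # stand-in for the ICA stage of ISA

def ar_innovations(x, order):
    """Fit a multivariate AR(order) model by least squares and return
    the one-step prediction errors (innovations).
    x: array of shape (T, D), the observed mixed time series."""
    T, D = x.shape
    # Regression matrix built from 'order' lagged copies of x.
    X = np.hstack([x[order - tau - 1: T - tau - 1] for tau in range(order)])
    Y = x[order:]
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)  # (order*D, D) AR coefficients
    return Y - X @ A                           # innovations, shape (T-order, D)

def ubssd_estimate(x, order, n_sources):
    # The innovations are (approximately) an instantaneous mixture of the
    # hidden sources, so the ISA machinery applies to them directly.
    e = ar_innovations(x, order)
    return FastICA(n_components=n_sources).fit_transform(e)
```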

    Independent Process Analysis without A Priori Dimensional Information

    Recently, several algorithms have been proposed for independent subspace analysis, where the hidden variables are i.i.d. processes. We show that these methods can be extended to certain AR, MA, ARMA and ARIMA tasks. Central to our paper is a cascade of algorithms that aims to solve these tasks without prior knowledge of the number and dimensions of the hidden processes. Our claim is supported by numerical simulations. As a particular application, we search for subspaces of facial components. Comment: 9 pages, 2 figures.
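
    The dimension-agnostic grouping stage of such a cascade might look like the sketch below, assuming ICA has already produced one-dimensional component estimates. The absolute correlation of squared components is a cheap dependence surrogate (a proper mutual-information estimate would be more faithful), and the threshold is a free parameter; the whole function is hypothetical.

```python
# Illustrative sketch: recover the number and dimensions of the hidden
# processes by clustering ICA outputs according to residual dependence.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def group_components(s, threshold=0.5):
    """s: (T, D) array of ICA-estimated components.
    Returns a label per component; equal labels span one subspace."""
    dep = np.abs(np.corrcoef((s ** 2).T))  # (D, D) dependence surrogate
    dist = 1.0 - dep                       # turn similarity into distance
    np.fill_diagonal(dist, 0.0)
    iu = np.triu_indices_from(dist, k=1)   # condensed form for scipy
    labels = fcluster(linkage(dist[iu], method='average'),
                      t=threshold, criterion='distance')
    return labels
```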

    Complex independent process analysis

    We present a general framework for the search for hidden independent processes in the complex domain. The task is to estimate hidden independent multidimensional complex-valued components while observing only the mixture of the processes driven by them. In our model, (i) the hidden independent processes can be multidimensional, (ii) they may be subject to moving averaging or may evolve in an autoregressive manner, and (iii) they can be non-stationary. These assumptions are covered by integrated autoregressive moving average (ARIMA) processes, and thus our task is to solve their complex extensions. We show how to reduce the undercomplete version of complex integrated ARIMA processes to real independent subspace analysis (ISA), which we can solve. Simulations illustrate the working of the algorithm.
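
    The complex-to-real step rests on the standard R^{2d} embedding of C^d; the mapping below is the exact textbook identity, while everything around it is only an illustrative sketch.

```python
# The usual embedding that turns complex linear mixing into real linear
# mixing, so that real ISA machinery can be applied.
import numpy as np

def c2r_vec(z):
    """C^d -> R^{2d}: stack real and imaginary parts."""
    return np.concatenate([z.real, z.imag], axis=-1)

def c2r_mat(M):
    """C^{m x n} -> R^{2m x 2n}, chosen so that
    c2r_vec(M @ z) == c2r_mat(M) @ c2r_vec(z)."""
    return np.block([[M.real, -M.imag],
                     [M.imag,  M.real]])

# Quick self-check of the homomorphism property.
M = np.random.randn(3, 3) + 1j * np.random.randn(3, 3)
z = np.random.randn(3) + 1j * np.random.randn(3)
assert np.allclose(c2r_vec(M @ z), c2r_mat(M) @ c2r_vec(z))
```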

    Separation Principles in Independent Process Analysis


    Neurally Plausible, Non-combinatorial Iterative Independent Process Analysis

    It has been shown recently that the identification of mixed hidden independent autoregressive processes (independent process analysis, IPA) can, under certain conditions, be free from combinatorial explosion. The key is that IPA can be reduced (i) to independent subspace analysis (ISA) and then, via a novel decomposition technique called the Separation Theorem, (ii) to independent component analysis (ICA). Here, we introduce an iterative scheme, and its neural network representation, that takes advantage of this reduction and can accomplish the IPA task. Computer simulations illustrate the working of the algorithm.
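
    Setting the neural-network realization aside, the reduction chain that such a scheme exploits can be strung together from the sketches above (ar_innovations and group_components are the illustrative helpers defined earlier, not the paper's actual algorithm):

```python
# Illustrative end-to-end IPA chain: (i) linear prediction strips the AR
# dynamics, (ii) ICA separates coordinates, (iii) dependence clustering
# resolves the permutation left open by the Separation Theorem, with no
# combinatorial search over groupings.
from sklearn.decomposition import FastICA

def ipa_pipeline(x, order, n_sources, threshold=0.5):
    e = ar_innovations(x, order)                           # step (i)
    s = FastICA(n_components=n_sources).fit_transform(e)   # step (ii)
    labels = group_components(s, threshold)                # step (iii)
    return s, labels
```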

    Group-structured and independent subspace based dictionary learning

    Thanks to several successful applications, sparse signal representation has become one of the most actively studied research areas in mathematics. In the traditional sparse coding problem, however, the dictionary used for representation is assumed to be known. Despite the popularity of sparsity and its recently emerged structured-sparse extension, very few works have focused on learning dictionaries for such codes. In the first part of the paper, we develop a dictionary learning method that is (i) online, (ii) enables overlapping group structures, (iii) applies non-convex sparsity-inducing regularization, and (iv) handles the partially observable case. To the best of our knowledge, current methods exhibit at most two of these four desirable properties. We also investigate several interesting special cases of our framework and demonstrate its applicability to the inpainting of natural signals, structured sparse non-negative matrix factorization of faces, and collaborative filtering. Complementing the sparse direction, we formulate a novel component-wise acting, epsilon-sparse coding scheme in reproducing kernel Hilbert spaces and show its equivalence to a generalized class of support vector machines. Moreover, we embed support vector machines into multilayer perceptrons and show that, for this novel kernel-based approximation approach, the backpropagation procedure of multilayer perceptrons can be generalized.

    In the second part of the paper, we focus on dictionary learning under an independent subspace assumption instead of structured sparsity. The corresponding problem is called independent subspace analysis (ISA), or independent component analysis (ICA) if all the hidden independent sources are one-dimensional. One of the most fundamental results of this research field is the ISA separation principle, which states that the ISA problem can be solved by traditional ICA up to permutation. This principle (i) forms the basis of the state-of-the-art ISA solvers and (ii) enables one to estimate the unknown number and dimensions of the sources efficiently. We (i) extend the ISA problem in several new directions, including the controlled, the partially observed, the complex-valued and the nonparametric case, and (ii) derive separation-principle-based solution techniques for these generalizations. This approach (i) makes it possible to apply state-of-the-art algorithms to the resulting subproblems (in the ISA example, ICA and clustering) and (ii) handles sources of unknown dimension. Our extensive numerical experiments demonstrate the robustness and efficiency of our approach.
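
    To give a flavor of the first part, here is a deliberately simplified sketch of one online step: proximal-gradient sparse coding with group soft-thresholding (non-overlapping groups only), followed by a stochastic-gradient dictionary update. The paper's method is considerably more general (overlapping groups, non-convex regularizers, partial observations); every name and constant below is illustrative.

```python
# Illustrative sketch of one step of online group-structured dictionary
# learning, NOT the paper's algorithm.
import numpy as np

def group_shrink(a, groups, lam):
    """Block soft-thresholding: the prox of lam * sum_g ||a_g||_2
    for non-overlapping groups (each group is an index array)."""
    out = a.copy()
    for g in groups:
        norm = np.linalg.norm(a[g])
        out[g] = 0.0 if norm <= lam else (1.0 - lam / norm) * a[g]
    return out

def online_dl_step(D, x, groups, lam=0.1, n_inner=50, lr=0.01):
    """Sparse-code x against D by proximal gradient, then take one
    stochastic gradient step on D and renormalize its atoms."""
    alpha = np.zeros(D.shape[1])
    step = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-12)  # 1/L step size
    for _ in range(n_inner):
        grad = D.T @ (D @ alpha - x)                  # gradient of the fit
        alpha = group_shrink(alpha - step * grad, groups, step * lam)
    D -= lr * np.outer(D @ alpha - x, alpha)          # dictionary update
    D /= np.maximum(np.linalg.norm(D, axis=0), 1.0)   # keep atoms bounded
    return D, alpha
```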

    Finite-Sample Analysis of Fixed-k Nearest Neighbor Density Functional Estimators

    We provide a finite-sample analysis of a general framework for using k-nearest neighbor statistics to estimate functionals of a nonparametric continuous probability density, including entropies and divergences. Rather than plugging a consistent density estimate (which requires k → ∞ as the sample size n → ∞) into the functional of interest, the estimators we consider fix k and perform a bias correction. This is more efficient computationally and, as we show in certain cases, statistically, leading to faster convergence rates. Our framework unifies several previous estimators, and for most of them ours are the first finite-sample guarantees. Comment: 16 pages.
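
    To make the fixed-k idea concrete for one functional, here is a sketch of the classical Kozachenko-Leonenko entropy estimator, whose digamma terms are exactly the kind of k-dependent bias correction in question; this is a textbook instance, not the paper's general framework, and it assumes distinct sample points.

```python
# Fixed-k nearest-neighbor entropy estimation (Kozachenko-Leonenko).
import numpy as np
from scipy.special import digamma, gammaln
from scipy.spatial import cKDTree

def kl_entropy(x, k=3):
    """Differential entropy estimate (nats) from samples x: (n, d)."""
    n, d = x.shape
    tree = cKDTree(x)
    # Distance to the k-th neighbor; query with k+1 because the query
    # point itself is returned at distance 0.
    eps = tree.query(x, k=k + 1)[0][:, k]
    log_cd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # unit-ball volume
    # digamma(n) - digamma(k) is the fixed-k bias correction.
    return digamma(n) - digamma(k) + log_cd + d * np.mean(np.log(eps))
```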