
    Incoherent Dictionary Learning for Sparse Representation in Network Anomaly Detection

    In this article we present the use of sparse signal representations and an incoherent dictionary learning method for network traffic analysis. In the learning process we use the 1D INK-SVD algorithm to identify a suitable dictionary structure. Anomaly detection is realised by estimating the parameters of the analysed signal and comparing them against network traffic profiles. The efficiency of our method is examined on an extended set of test traces from real network traffic, and the experimental results confirm the effectiveness of the presented method.
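
    The record above gives only a high-level summary, but the general recipe it describes — sparse-code each traffic-feature window against a learned dictionary and flag windows whose approximation error departs from the learned profile — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the OMP routine, the random stand-in dictionary, and the three-sigma threshold are all assumptions.

```python
import numpy as np

def omp(D, x, k):
    """Greedy Orthogonal Matching Pursuit: k-sparse approximation of x in D."""
    residual, support = x.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    z = np.zeros(D.shape[1])
    z[support] = coeffs
    return z

def anomaly_scores(D, X, k=5):
    """Reconstruction error of each column of X under a k-sparse code in D."""
    return np.array([np.linalg.norm(x - D @ omp(D, x, k)) for x in X.T])

# Toy usage with a random stand-in for a learned (e.g. INK-SVD) dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
X = rng.standard_normal((32, 100))      # columns = traffic-feature windows
scores = anomaly_scores(D, X)
threshold = scores.mean() + 3 * scores.std()   # assumed profile-based threshold
print("flagged windows:", np.where(scores > threshold)[0])
```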

    Sparse Approximation and Dictionary Learning with Applications to Audio Signals

    Over-complete transforms have recently become the focus of a wealth of research in signal processing, machine learning, statistics and related fields. Their great modelling flexibility allows one to find sparse representations and approximations of data that in turn prove to be very efficient in a wide range of applications. Sparse models express signals as linear combinations of a few basis functions, called atoms, taken from a so-called dictionary. Finding the optimal dictionary from a set of training signals of a given class is the objective of dictionary learning and the main focus of this thesis. The experimental evidence presented here focuses on the processing of audio signals, and the role of sparse algorithms in audio applications is accordingly highlighted.

    The first main contribution of this thesis is the development of a pitch-synchronous transform in which the frame-by-frame analysis of audio data is adapted so that each frame analysing a periodic signal contains an integer number of periods. This technique adapts the transform parameters to the audio signal being analysed; it is shown to improve the sparsity of the representation compared to a non-pitch-synchronous approach, and is further evaluated in the context of source separation by binary masking.

    A second main contribution is a novel model, and an associated algorithm, for dictionary learning of convolved signals, where the observed variables are sparsely approximated by the atoms of a convolved dictionary. An algorithm is devised to learn the impulse response applied to the dictionary, and experimental results on synthetic data show the superior approximation performance of the proposed method compared to a state-of-the-art dictionary learning algorithm.

    Finally, a third main contribution is the development of methods for learning dictionaries that are both well adapted to a training set of data and mutually incoherent. Two novel algorithms, incoherent K-SVD (INK-SVD) and iterative projections and rotations (IPR), are introduced and compared to techniques from the literature in a sparse approximation context. The IPR algorithm in particular is shown to outperform the benchmark techniques in learning very incoherent dictionaries while maintaining a good signal-to-noise ratio of the representation.
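
    As a rough illustration of the first contribution, pitch-synchronous framing amounts to choosing each analysis frame length as an integer number of locally estimated pitch periods, so that every frame of a periodic signal holds whole periods. The sketch below assumes a fixed, externally estimated period; the function name and the four-periods-per-frame choice are assumptions, and the thesis algorithm adapts the framing to the signal rather than using a constant period.

```python
import numpy as np

def pitch_synchronous_frames(signal, period, periods_per_frame=4):
    """Split a signal into frames whose length is an integer number of
    (estimated) pitch periods, so each frame is approximately periodic."""
    frame_len = int(round(periods_per_frame * period))
    n_frames = len(signal) // frame_len
    return signal[:n_frames * frame_len].reshape(n_frames, frame_len)

# Toy usage: a synthetic signal with a pitch period of 100 samples.
t = np.arange(8000)
x = np.sin(2 * np.pi * t / 100.0)
frames = pitch_synchronous_frames(x, period=100.0)
print(frames.shape)   # each row holds exactly 4 periods of the signal
```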

    Learning incoherent dictionaries for sparse approximation using iterative projections and rotations

    This work was supported by a Queen Mary University of London School Studentship, the EU FET-Open project FP7-ICT-225913 SMALL (Sparse Models, Algorithms and Learning for Large-scale data), and a Leadership Fellowship from the UK Engineering and Physical Sciences Research Council (EPSRC).
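
    This record carries only the funding acknowledgement, but the method named in the title can be sketched: alternately project the dictionary's Gram matrix onto the set of matrices whose off-diagonal entries are bounded by a target coherence, refactor that matrix into a dictionary, and rotate the dictionary back towards the training data with an orthogonal Procrustes step. The single-step sketch below is a hedged reading of the title, not the published algorithm; ipr_step and its arguments are assumptions, and the published method iterates the projection and rotation until the coherence target is met.

```python
import numpy as np

def ipr_step(D, X, A, mu0):
    """One (assumed) projection-and-rotation update.
    D: d x n dictionary (n >= d, unit-norm columns), X: d x m training data,
    A: n x m sparse codes, mu0: target mutual coherence."""
    d, n = D.shape
    # Projection: bound the off-diagonal Gram entries by the target coherence.
    G = D.T @ D
    G = np.clip(G, -mu0, mu0)
    np.fill_diagonal(G, 1.0)
    # Project back onto positive semidefinite matrices of rank at most d.
    w, V = np.linalg.eigh(G)              # eigenvalues in ascending order
    w = np.clip(w, 0.0, None)
    w[:-d] = 0.0                          # keep only the d largest eigenvalues
    D = (V * np.sqrt(w)).T[-d:]           # refactor G ~ D.T @ D
    D /= np.linalg.norm(D, axis=0) + 1e-12  # renormalise the atoms
    # Rotation: orthogonal Procrustes fit of the model D @ A to the data X.
    U, _, Vt = np.linalg.svd(X @ (D @ A).T)
    return (U @ Vt) @ D                   # rotation W minimising ||X - W D A||_F
```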

    INK-SVD: Learning Incoherent Dictionaries for Sparse Representations

    © 2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
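
    The record above carries only the IEEE copyright notice, so as a hedged illustration of what the title describes: INK-SVD interleaves K-SVD dictionary updates with a decorrelation step that symmetrically rotates pairs of atoms whose mutual coherence exceeds a threshold. The pairwise rotation below is a sketch under that reading; the function names and the greedy worst-pair loop are assumptions (decorrelating one pair can raise the coherence of others, which the published algorithm accounts for).

```python
import numpy as np

def decorrelate_pair(d1, d2, mu0):
    """Symmetrically rotate two distinct unit-norm atoms in their common
    plane so that the magnitude of their inner product drops to mu0."""
    s = 1.0 if d1 @ d2 >= 0 else -1.0      # work with a positive inner product
    d2 = s * d2
    u = d1 + d2; u /= np.linalg.norm(u)    # bisector of the pair
    v = d1 - d2; v /= np.linalg.norm(v)    # orthogonal in-plane direction
    half = 0.5 * np.arccos(np.clip(mu0, -1.0, 1.0))  # half the target angle
    d1n = np.cos(half) * u + np.sin(half) * v
    d2n = np.cos(half) * u - np.sin(half) * v        # now d1n @ d2n == mu0
    return d1n, s * d2n

def decorrelate(D, mu0, sweeps=10):
    """Greedily decorrelate the most coherent atom pair of D (unit columns)."""
    D = D.copy()
    for _ in range(sweeps):
        G = np.abs(D.T @ D) - np.eye(D.shape[1])     # off-diagonal coherences
        i, j = np.unravel_index(np.argmax(G), G.shape)
        if G[i, j] <= mu0:
            break
        D[:, i], D[:, j] = decorrelate_pair(D[:, i], D[:, j], mu0)
    return D
```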

    Learning Sparsely Used Overcomplete Dictionaries via Alternating Minimization

    We consider the problem of sparse coding, where each sample consists of a sparse linear combination of a set of dictionary atoms, and the task is to learn both the dictionary elements and the mixing coefficients. Alternating minimization is a popular heuristic for sparse coding, where the dictionary and the coefficients are estimated in alternate steps, keeping the other fixed. Typically, the coefficients are estimated via $\ell_1$ minimization, keeping the dictionary fixed, and the dictionary is estimated through least squares, keeping the coefficients fixed. In this paper, we establish local linear convergence for this variant of alternating minimization and establish that the basin of attraction for the global optimum (corresponding to the true dictionary and the true coefficients) is $O(1/s^2)$, where $s$ is the sparsity level in each sample and the dictionary satisfies the RIP. Combined with recent results on approximate dictionary estimation, this yields provable guarantees for exact recovery of both the dictionary elements and the coefficients, when the dictionary elements are incoherent.
    Comment: Local linear convergence now holds under RIP and also a more general restricted eigenvalue condition.
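
    The alternating scheme the abstract analyses can be sketched directly: hold the dictionary fixed and solve an $\ell_1$-regularised least-squares problem for the coefficients (here via a few ISTA iterations), then hold the coefficients fixed and update the dictionary by least squares, renormalising the atoms. This is a generic sketch of the analysed heuristic, not the paper's code; the step size, iteration counts, and regularisation level are assumptions.

```python
import numpy as np

def ista(D, X, lam, n_iter=100):
    """l1 coefficient step: soft-thresholded gradient descent on
    min_A 0.5 * ||X - D A||_F^2 + lam * ||A||_1, with D held fixed."""
    A = np.zeros((D.shape[1], X.shape[1]))
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iter):
        A = A - (D.T @ (D @ A - X)) / L    # gradient step
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # soft threshold
    return A

def alt_min(X, n_atoms, lam=0.1, n_outer=20, seed=0):
    """Alternating minimization for sparse coding: l1 step for the codes,
    least-squares step for the dictionary, with atom renormalisation."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_outer):
        A = ista(D, X, lam)                              # codes, D fixed
        D = np.linalg.lstsq(A.T, X.T, rcond=None)[0].T   # dictionary, A fixed
        D /= np.linalg.norm(D, axis=0) + 1e-12           # unit-norm atoms
    return D, A
```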