
    On Intercept Probability Minimization under Sparse Random Linear Network Coding

    This paper considers a network where a node wishes to transmit a source message to a legitimate receiver in the presence of an eavesdropper. The transmitter secures its transmissions employing a sparse implementation of Random Linear Network Coding (RLNC). A tight approximation to the probability of the eavesdropper recovering the source message is provided. The proposed approximation applies both to the case where transmissions occur without feedback and to the case where the reliability of the feedback channel is impaired by an eavesdropper jamming it. An optimization framework for minimizing the intercept probability by optimizing the sparsity of the RLNC is also presented. Results validate the proposed approximation and quantify the gain provided by our optimization over solutions where non-sparse RLNC is used.
    Comment: To appear in IEEE Transactions on Vehicular Technology.
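
    To make the role of sparsity concrete, the sketch below is a minimal Monte Carlo estimate of the intercept probability under a toy GF(2) model, not the paper's analytical approximation: each coding coefficient is 1 with probability `sparsity`, and the eavesdropper overhears each coded packet independently with probability `p_eve`. All function names and parameter values are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def gf2_rank(M):
            # Rank over GF(2) via Gauss-Jordan elimination on a 0/1 matrix.
            M = M.copy()
            rank = 0
            n_rows, n_cols = M.shape
            for c in range(n_cols):
                pivot = next((r for r in range(rank, n_rows) if M[r, c]), None)
                if pivot is None:
                    continue
                M[[rank, pivot]] = M[[pivot, rank]]  # move pivot row up
                for r in range(n_rows):
                    if r != rank and M[r, c]:
                        M[r] ^= M[rank]              # eliminate column c
                rank += 1
            return rank

        def intercept_probability(k=8, n_tx=12, sparsity=0.3, p_eve=0.6,
                                  trials=2000):
            # k source packets, n_tx coded packets (toy values); the
            # eavesdropper decodes iff its overheard coefficient matrix
            # reaches full rank k over GF(2).
            hits = 0
            for _ in range(trials):
                coeffs = (rng.random((n_tx, k)) < sparsity).astype(np.uint8)
                heard = coeffs[rng.random(n_tx) < p_eve]
                if heard.shape[0] >= k and gf2_rank(heard) == k:
                    hits += 1
            return hits / trials

        for s in (0.1, 0.3, 0.5):
            print("sparsity", s, "->", intercept_probability(sparsity=s))

    Sweeping the sparsity level this way mimics, in crude form, the optimization the paper performs analytically: sparser coefficients make each overheard packet less informative, at the cost of more transmissions for the legitimate receiver.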


    Task-Driven Dictionary Learning

    Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as to regression tasks for data that admit sparse representations.
    Comment: final draft, post-refereeing.
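
    As a rough point of reference, the sketch below runs the classical two-stage pipeline that this paper improves on: a dictionary is first learned without supervision, and a classifier is then trained on the resulting sparse codes. The paper's contribution, optimizing the dictionary jointly with the task loss, is not reproduced here; the dataset choice and hyperparameters are arbitrary assumptions.

        from sklearn.datasets import load_digits
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        X, y = load_digits(return_X_y=True)
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

        # Stage 1: unsupervised dictionary learning (the paper instead
        # tunes the dictionary end-to-end against the task loss).
        dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0,
                                           random_state=0)
        codes_tr = dico.fit_transform(Xtr)
        codes_te = dico.transform(Xte)

        # Stage 2: a linear classifier on the sparse codes.
        clf = LogisticRegression(max_iter=1000).fit(codes_tr, ytr)
        print("test accuracy:", clf.score(codes_te, yte))

    The gap between this decoupled baseline and a task-driven dictionary is precisely what the paper's supervised formulation targets.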

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
    Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision.
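
    For readers new to the topic, a few lines suffice to illustrate what "representing data with linear combinations of a few dictionary elements" means in practice. The sketch below is illustrative only: it uses a random unit-norm dictionary (in the applications the monograph covers, the dictionary would be learned from data) and recovers a 3-sparse signal with orthogonal matching pursuit.

        import numpy as np
        from sklearn.decomposition import SparseCoder

        rng = np.random.default_rng(0)
        D = rng.standard_normal((50, 64))              # 50 atoms in R^64
        D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm atoms

        # A signal that truly is a combination of three atoms.
        x = 2.0 * D[3] - 1.5 * D[17] + 0.5 * D[42]

        # Sparse coding: approximate x with at most 3 dictionary elements.
        coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                            transform_n_nonzero_coefs=3)
        code = coder.transform(x.reshape(1, -1))[0]
        print("selected atoms:", np.nonzero(code)[0])  # should recover 3, 17, 42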

    The impact of partial packet recovery on the inherent secrecy of random linear coding

    This paper considers a source, which employs random linear coding (RLC) to encode a message, a legitimate destination, which can recover the message if it gathers a sufficient number of coded packets, and an eavesdropper. The probability of the eavesdropper accumulating enough coded packets to recover the message, known as the intercept probability, has been studied in the literature. In our work, the eavesdropper does not abandon its efforts to obtain the source message if RLC decoding has been unsuccessful; instead, it employs partial packet recovery (PPR) offline in an effort to repair erroneously received coded packets before it attempts RLC decoding again. Results show that PPR-assisted RLC decoding marginally increases the intercept probability, compared to RLC decoding alone, when the channel conditions are good. However, as the channel conditions deteriorate, PPR-assisted RLC decoding significantly improves the chances of the eavesdropper recovering the source message, even if the eavesdropper experiences channel conditions that are similar to or worse than those of the destination.
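
    The qualitative claim, that PPR matters little on good channels and a lot on bad ones, can be reproduced with a toy model. In the sketch below (a simplified abstraction, not the paper's system model), each coded packet reaches the eavesdropper intact with probability `p_ok`; of the rest, a fraction `err_frac` arrives with repairable errors and the remainder is lost. PPR repairs an erroneous packet with probability `p_repair`, and K collected packets are assumed to decode, as they would with high probability over a large field. All parameters are hypothetical.

        import numpy as np

        rng = np.random.default_rng(1)

        def intercept_prob(K=10, N=15, p_ok=0.5, err_frac=0.7,
                           p_repair=0.7, use_ppr=True, trials=20000):
            # Fraction of trials in which the eavesdropper collects at
            # least K usable coded packets out of N transmissions.
            p_err = (1.0 - p_ok) * err_frac
            hits = 0
            for _ in range(trials):
                u = rng.random(N)
                usable = np.sum(u < p_ok)                 # intact packets
                if use_ppr:                               # offline repair
                    erroneous = np.sum((u >= p_ok) & (u < p_ok + p_err))
                    usable += np.sum(rng.random(int(erroneous)) < p_repair)
                hits += usable >= K
            return hits / trials

        for p_ok in (0.9, 0.6, 0.3):                      # good -> bad channel
            print(p_ok,
                  intercept_prob(p_ok=p_ok, use_ppr=False),
                  intercept_prob(p_ok=p_ok, use_ppr=True))

    On a good channel most overheard packets are already intact, so repair adds little; as the channel degrades, an increasing share of packets arrives erroneous rather than intact, and PPR widens the gap, mirroring the trend the abstract reports.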

    Lecture notes on ridge regression

    The linear regression model cannot be fitted to high-dimensional data, as the high dimensionality brings about empirical non-identifiability. Penalized regression overcomes this non-identifiability by augmenting the loss function with a penalty, i.e., a function of the regression coefficients. The ridge penalty is the sum of the squared regression coefficients, giving rise to ridge regression. Here many aspects of ridge regression are reviewed, e.g., its moments, mean squared error, equivalence to constrained estimation, and relation to Bayesian regression, and its behaviour and use are illustrated in simulation and on omics data. Subsequently, ridge regression is generalized to allow for a more general penalty. The ridge penalization framework is then translated to logistic regression, and its properties are shown to carry over. To contrast ridge-penalized estimation, the final chapter introduces its lasso counterpart.
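
    Since the notes centre on the ridge estimator beta_hat(lam) = (X'X + lam*I)^(-1) X'y, a minimal numpy sketch may help: the penalty makes X'X + lam*I invertible even when p > n, which is exactly how it removes the non-identifiability mentioned above. The toy data and the value of lam below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n, p = 50, 200                       # high-dimensional: p > n
        X = rng.standard_normal((n, p))
        beta = np.zeros(p)
        beta[:5] = 2.0                       # a few true effects
        y = X @ beta + 0.5 * rng.standard_normal(n)

        lam = 10.0                           # penalty parameter (toy value)
        # Ridge estimator: X'X is singular when p > n, but adding lam*I
        # restores invertibility, so the estimator is unique.
        beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
        print(beta_hat[:8].round(2))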