On Intercept Probability Minimization under Sparse Random Linear Network Coding
This paper considers a network where a node wishes to transmit a source
message to a legitimate receiver in the presence of an eavesdropper. The
transmitter secures its transmissions by employing a sparse implementation of
Random Linear Network Coding (RLNC). A tight approximation to the probability
of the eavesdropper recovering the source message is provided. The proposed
approximation applies both to the case where transmissions occur without
feedback and to the case where the reliability of the feedback channel is
impaired by eavesdropper jamming. An optimization framework for
minimizing the intercept probability by optimizing the sparsity of the RLNC is
also presented. Results validate the proposed approximation and quantify the
gain provided by our optimization over solutions where non-sparse RLNC is used.
Comment: To appear in IEEE Transactions on Vehicular Technology
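The core mechanism is easy to simulate. Below is a minimal sketch, not the paper's analysis: coded packets mix K source packets over GF(2) with coefficients that are nonzero with a tunable probability (the sparsity knob the paper optimizes), and a receiver decodes once its coefficient matrix reaches full rank. The names gf2_rank and packets_until_decode and all parameter values are illustrative.

import numpy as np

def gf2_rank(rows):
    # Gaussian elimination over GF(2); rows is a list of 0/1 numpy vectors.
    rows = [r.copy() for r in rows]
    rank, K = 0, len(rows[0])
    for col in range(K):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = rows[i] ^ rows[rank]
        rank += 1
    return rank

def packets_until_decode(K, density, rng):
    # Count coded packets needed until the coefficient matrix has rank K.
    rows, sent = [], 0
    while not rows or gf2_rank(rows) < K:
        v = (rng.random(K) < density).astype(np.uint8)  # sparse coefficient vector
        if v.any():                                     # all-zero rows carry nothing
            rows.append(v)
        sent += 1
    return sent

rng = np.random.default_rng(0)
# Expected number of packet transmissions for K = 16 source packets at density 0.2:
print(np.mean([packets_until_decode(16, 0.2, rng) for _ in range(200)]))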
Task-Driven Dictionary Learning
Modeling data with linear combinations of a few elements from a learned
dictionary has been the focus of much recent research in machine learning,
neuroscience and signal processing. For signals such as natural images that
admit such sparse representations, it is now well established that these models
are well suited to restoration tasks. In this context, learning the dictionary
amounts to solving a large-scale matrix factorization problem, which can be
done efficiently with classical optimization tools. The same approach has also
been used for learning features from data for other purposes, e.g., image
classification, but tuning the dictionary in a supervised way for these tasks
has proven to be more difficult. In this paper, we present a general
formulation for supervised dictionary learning adapted to a wide variety of
tasks, and present an efficient algorithm for solving the corresponding
optimization problem. Experiments on handwritten digit classification, digital
art identification, nonlinear inverse image problems, and compressed sensing
demonstrate that our approach is effective in large-scale settings, and is well
suited to supervised and semi-supervised classification, as well as regression
tasks for data that admit sparse representations.
Comment: final draft post-refereeing
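The pipeline the paper improves on is straightforward to sketch. The following is a minimal two-stage baseline, assuming scikit-learn: a dictionary is learned with a purely reconstructive objective, and a linear classifier is then trained on the fixed sparse codes. The paper's task-driven formulation instead optimizes the dictionary jointly for the task; that algorithm is not reproduced here, and all hyperparameter values are illustrative.

from sklearn.datasets import load_digits
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: unsupervised dictionary learning (reconstruction objective only).
dico = DictionaryLearning(n_components=64, alpha=1.0, max_iter=20,
                          transform_algorithm="lasso_lars", random_state=0)
codes_tr = dico.fit_transform(X_tr)
codes_te = dico.transform(X_te)

# Stage 2: a linear classifier on the fixed sparse codes; the task-driven
# approach would instead feed this loss back into the dictionary itself.
clf = LogisticRegression(max_iter=1000).fit(codes_tr, y_tr)
print("test accuracy:", clf.score(codes_te, y_te))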
Sparse Modeling for Image and Vision Processing
In recent years, a large amount of multi-disciplinary research has been
conducted on sparse models and their applications. In statistics and machine
learning, the sparsity principle is used to perform model selection---that is,
automatically selecting a simple model among a large collection of them. In
signal processing, sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the corresponding
tools have been widely adopted by several scientific communities such as
neuroscience, bioinformatics, or computer vision. The goal of this monograph is
to offer a self-contained view of sparse modeling for visual recognition and
image processing. More specifically, we focus on applications where the
dictionary is learned and adapted to data, yielding a compact representation
that has been successful in various contexts.
Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision
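To make the notion of sparse coding above concrete, here is a minimal sketch, assuming scikit-learn and a random dictionary rather than a learned one: a signal is approximated as a linear combination of a few dictionary atoms via orthogonal matching pursuit. All dimensions and the sparsity level are illustrative.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
D = rng.standard_normal((100, 300))        # dictionary: 300 atoms in R^100
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms

alpha_true = np.zeros(300)                 # ground-truth code uses only 5 atoms
alpha_true[rng.choice(300, 5, replace=False)] = rng.standard_normal(5)
x = D @ alpha_true                         # the signal to be sparsely coded

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(D, x)
print("recovered support:", np.flatnonzero(omp.coef_))
print("reconstruction error:", np.linalg.norm(D @ omp.coef_ - x))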
The impact of partial packet recovery on the inherent secrecy of random linear coding
This paper considers a source, which employs random linear coding (RLC) to encode a message, a legitimate destination, which can recover the message if it gathers a sufficient number of coded packets, and an eavesdropper. The probability of the eavesdropper accumulating enough coded packets to recover the message, known as the intercept probability, has been studied in the literature. In our work, the eavesdropper does not abandon its efforts to obtain the source message if RLC decoding has been unsuccessful; instead, it employs partial packet recovery (PPR) offline in an effort to repair erroneously received coded packets before it attempts RLC decoding again. Results show that PPR-assisted RLC decoding marginally increases the intercept probability, compared to RLC decoding alone, when the channel conditions are good. However, as the channel conditions deteriorate, PPR-assisted RLC decoding significantly improves the chances of the eavesdropper recovering the source message, even if the eavesdropper experiences channel conditions similar to or worse than those of the destination.
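The baseline quantity in this line of work, the intercept probability without PPR, admits a quick Monte Carlo sketch. The code below assumes independent packet erasures with illustrative probabilities p_dst and p_eve, and that every received coded packet is innovative (linear dependence among coded packets is ignored); PPR itself is not modeled.

import numpy as np

def intercept_probability(K, p_dst, p_eve, trials=20_000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        dst = eve = 0
        while dst < K:                    # transmit until the destination decodes
            dst += rng.random() > p_dst   # destination receives this packet
            eve += rng.random() > p_eve   # eavesdropper receives this packet
        hits += eve >= K                  # intercepted if the eavesdropper also decodes
    return hits / trials

# A worse eavesdropper channel (higher erasure rate) lowers the intercept probability:
print(intercept_probability(K=10, p_dst=0.1, p_eve=0.3))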
Machine learning phases in statistical physics
Conventionally, the study of phases in statistical mechanics is performed with the help of random sampling tools. Among the most powerful are Monte Carlo simulations, consisting of stochastic importance sampling over state space and evaluation of estimators for physical quantities. The ability of modern machine learning techniques to classify, identify, or interpret massive data sets provides a complementary paradigm to the above approach to analyzing the exponentially large number of states in statistical physics. In this report, it is demonstrated by application to Ising-type models that deep learning has potentially wide applications in solving many-body statistical physics problems. In the application of supervised learning, we show that a feed-forward neural network can identify phases and phase transitions in the ferromagnetic Ising model, and that a convolutional neural network (CNN) is extremely powerful in classifying the T = 0 and T = ∞ phases in the Ising gauge model. In the application of unsupervised learning, we illustrate that a deep auto-encoder constructed from stacked restricted Boltzmann machines (RBMs) is closely related to the renormalization group (RG) method well understood in modern physics, and that our reconstruction of Ising spin configurations in the ferromagnetic Ising model is similar to hand-written digit reconstruction.
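The supervised part of the above lends itself to a small illustration. The sketch below, assuming scikit-learn, labels fully ordered (T = 0) and fully random (T = ∞) Ising configurations and fits a small feed-forward network on the raw spins; lattice size, sample counts, and network width are illustrative, and this toy version is far simpler than the report's CNN experiments.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
L, n = 16, 2000                     # L x L lattice, n samples per class

# T = 0: every spin aligned (all +1 or all -1); T = infinity: i.i.d. random spins.
ordered = np.repeat(rng.choice([-1, 1], n)[:, None], L * L, axis=1)
random_ = rng.choice([-1, 1], (n, L * L))
X = np.vstack([ordered, random_]).astype(float)
y = np.r_[np.zeros(n), np.ones(n)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
print("test accuracy:", net.fit(X_tr, y_tr).score(X_te, y_te))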
Lecture notes on ridge regression
The linear regression model cannot be fitted to high-dimensional data, as the
high-dimensionality brings about empirical non-identifiability. Penalized
regression overcomes this non-identifiability by augmenting the loss
function with a penalty, i.e. a function of the regression coefficients. The ridge
penalty is the sum of squared regression coefficients, giving rise to ridge
regression. Here many aspects of ridge regression are reviewed, e.g. moments,
mean squared error, its equivalence to constrained estimation, and its relation
to Bayesian regression. Its behaviour and use are then illustrated in
simulation and on omics data. Subsequently, ridge regression is generalized to
allow for a more general penalty. The ridge penalization framework is then
translated to logistic regression, and its properties are shown to carry over.
To contrast ridge-penalized estimation, the final chapter introduces its lasso
counterpart.
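The ridge estimator described above has a closed form that makes the high-dimensional point concrete. The sketch below, with illustrative dimensions, solves (X'X + λI)β = X'y in a setting with more covariates than samples, where ordinary least squares is non-identifiable.

import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 50, 200, 1.0            # high-dimensional: more covariates than samples
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0                      # a handful of true effects
y = X @ beta + rng.standard_normal(n)

# X'X is singular here, so OLS is non-identifiable; adding lam * I makes
# the system invertible and yields the unique ridge estimate.
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
print("true vs fitted (first 5 coefficients):")
print(beta[:5], np.round(beta_hat[:5], 2))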