Compressive Signal Processing with Circulant Sensing Matrices
Compressive sensing achieves effective dimensionality reduction of signals,
under a sparsity constraint, by means of a small number of random measurements
acquired through a sensing matrix. In a signal processing system, the problem
arises of processing the random projections directly, without first
reconstructing the signal. In this paper, we show that circulant sensing
matrices allow a variety of classical signal processing tasks, such as
filtering, interpolation, registration, and transforms, to be performed
directly in the compressed domain in an exact fashion, i.e., without relying
on the estimators proposed in the existing literature. The advantage of the
techniques presented in this paper is that they enable direct
measurement-to-measurement transformations, without the need for costly
recovery procedures.
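To make the compressed-domain processing concrete, here is a minimal numpy sketch, assuming a full (non-subsampled) circulant sensing operator and a simple moving-average filter (both assumptions for illustration, not the paper's setup): since all circulant matrices are diagonalized by the DFT, filtering the measurements yields exactly the measurements of the filtered signal.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 64
    x = rng.standard_normal(n)           # signal to be sensed
    c = rng.standard_normal(n)           # first column of the random circulant sensing matrix
    h = np.zeros(n); h[:5] = 1.0 / 5     # a simple moving-average filter

    def circ_apply(kernel, v):
        # Multiply by the circulant matrix whose first column is `kernel`,
        # i.e. circular convolution, computed via the FFT.
        return np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(v)))

    filter_the_measurements = circ_apply(h, circ_apply(c, x))
    measure_the_filtered_signal = circ_apply(c, circ_apply(h, x))
    print(np.allclose(filter_the_measurements, measure_the_filtered_signal))  # True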
A Novel Rate Control Algorithm for Onboard Predictive Coding of Multispectral and Hyperspectral Images
Predictive coding is attractive for compression onboard spacecraft thanks
to its low computational complexity, modest memory requirements and the ability
to accurately control quality on a pixel-by-pixel basis. Traditionally,
predictive compression has focused on the lossless and near-lossless modes of
operation where the maximum error can be bounded but the rate of the compressed
image is variable. Rate control is considered a challenging problem for
predictive encoders due to the dependencies between quantization and prediction
in the feedback loop, and the lack of a signal representation that packs the
signal's energy into few coefficients. In this paper, we show that it is
possible to design a rate control scheme intended for onboard implementation.
In particular, we propose a general framework to select quantizers in each
spatial and spectral region of an image so as to achieve the desired target
rate while minimizing distortion. The rate control algorithm achieves lossy
and near-lossless compression, as well as any in-between type of compression,
e.g., lossy compression with a near-lossless constraint. While this framework
is independent of the specific predictor used, in order to show its
performance, in this paper we tailor it to the predictor adopted by the
CCSDS-123 lossless compression standard, obtaining an extension that performs
lossless, near-lossless, and lossy compression in a single package. We show
that the rate controller delivers excellent accuracy in the output rate and
excellent rate-distortion characteristics, and is extremely competitive with
state-of-the-art transform coding.
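To give a flavour of what selecting quantizers per region under a rate budget can look like, here is a hedged, generic Lagrangian allocation sketch (not the proposed onboard algorithm; the per-block rate-distortion tables are assumed to be available from some predictor model): a multiplier is bisected until the total rate of the chosen quantizers approaches the target.

    import numpy as np

    def allocate_quantizers(rd_tables, target_rate, iters=50):
        # rd_tables: one array of shape (K, 2) per block, columns = (rate, distortion)
        # for K candidate quantizers. Returns one chosen quantizer index per block.
        lo, hi = 0.0, 1e6
        choice = None
        for _ in range(iters):
            lam = 0.5 * (lo + hi)
            # per block, pick the quantizer minimizing D + lambda * R
            choice = [int(np.argmin(t[:, 1] + lam * t[:, 0])) for t in rd_tables]
            total_rate = sum(t[i, 0] for t, i in zip(rd_tables, choice))
            if total_rate > target_rate:
                lo = lam      # over budget: penalize rate more strongly
            else:
                hi = lam      # under budget: allow more rate, reduce distortion
        return choice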
Image Denoising with Graph-Convolutional Neural Networks
Recovering an image from a noisy observation is a key problem in signal
processing. Recently, it has been shown that data-driven approaches employing
convolutional neural networks can outperform classical model-based techniques,
because they can capture more powerful and discriminative features. However,
since these methods are based on convolutional operations, they are only
capable of exploiting local similarities without taking into account non-local
self-similarities. In this paper we propose a convolutional neural network that
employs graph-convolutional layers in order to exploit both local and non-local
similarities. The graph-convolutional layers dynamically construct
neighborhoods in the feature space to detect latent correlations in the feature
maps produced by the hidden layers. The experimental results show that the
proposed architecture outperforms classical convolutional neural networks for
the denoising task.
Comment: IEEE International Conference on Image Processing (ICIP) 201
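For readers unfamiliar with this kind of layer, the following PyTorch sketch (a simplified, assumed layer, not the exact architecture of the paper) shows the core mechanism: neighbourhoods are built dynamically by a k-nearest-neighbour search in feature space, so aggregation can reach non-local pixels whose features are similar.

    import torch
    import torch.nn as nn

    class SimpleGraphConv(nn.Module):
        def __init__(self, channels, k=8):
            super().__init__()
            self.k = k
            self.mlp = nn.Linear(2 * channels, channels)

        def forward(self, feats):                   # feats: (num_pixels, channels)
            dists = torch.cdist(feats, feats)       # pairwise distances in feature space
            idx = dists.topk(self.k + 1, largest=False).indices[:, 1:]  # drop self-loop
            neigh = feats[idx]                      # (N, k, C) neighbour features
            center = feats.unsqueeze(1).expand_as(neigh)
            msgs = self.mlp(torch.cat([center, neigh - center], dim=-1))
            return feats + msgs.mean(dim=1)         # aggregate messages, residual connection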
Graded quantization for multiple description coding of compressive measurements
Compressed sensing (CS) is an emerging paradigm for acquisition of compressed
representations of a sparse signal. Its low complexity is appealing for
resource-constrained scenarios like sensor networks. However, such scenarios
are often coupled with unreliable communication channels and providing robust
transmission of the acquired data to a receiver is an issue. Multiple
description coding (MDC) effectively combats channel losses for systems without
feedback, thus raising the interest in developing MDC methods explicitly
designed for the CS framework, and exploiting its properties. We propose a
method called Graded Quantization (CS-GQ) that leverages the democratic
property of compressive measurements to effectively implement MDC, and we
provide methods to optimize its performance. A novel decoding algorithm based
on the alternating direction method of multipliers is derived to reconstruct
signals from a limited number of received descriptions. Simulations are
performed to assess the performance of CS-GQ against other methods in the
presence of packet losses. The proposed method is successful at providing
robust coding of CS measurements and outperforms other schemes for the
considered test metrics.
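The following numpy sketch illustrates one possible graded-quantization layout (an illustrative assumption with arbitrary step sizes, not necessarily the exact CS-GQ construction): each description carries all measurements, half quantized finely and half coarsely, with the roles swapped, so a single received description already permits a coarse reconstruction while both together give fine quality everywhere.

    import numpy as np

    def graded_descriptions(y, fine=0.01, coarse=0.08):
        # y: vector of compressive measurements; returns two descriptions.
        half = len(y) // 2
        q = lambda v, step: step * np.round(v / step)      # uniform scalar quantizer
        d1 = np.concatenate([q(y[:half], fine),   q(y[half:], coarse)])
        d2 = np.concatenate([q(y[:half], coarse), q(y[half:], fine)])
        return d1, d2   # if both arrive, keep the finely quantized half of each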
Joint recovery algorithms using difference of innovations for distributed compressed sensing
Distributed compressed sensing is concerned with representing an ensemble of
jointly sparse signals using as few linear measurements as possible. Two novel
joint reconstruction algorithms for distributed compressed sensing are
presented in this paper. These algorithms are based on the idea of using one of
the signals as side information; this allows joint sparsity to be exploited
more effectively than in existing schemes. They provide gains in
reconstruction quality, especially when the nodes acquire few measurements, so
that the system is able to operate with fewer measurements than are required
by other existing schemes. We show that the algorithms achieve better
performance with respect to the state-of-the-art.
Comment: Conference Record of the Forty Seventh Asilomar Conference on Signals, Systems and Computers (ASILOMAR), 201
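A conceptual numpy sketch of the side-information idea is given below (a generic illustration rather than the paper's algorithms; sparse_recover stands for any off-the-shelf sparse solver): one signal is reconstructed first, and only the sparser difference of innovations is then recovered from the difference of measurements.

    import numpy as np

    def joint_recover(A, y1, y2, sparse_recover):
        # A: common sensing matrix; y1, y2: measurements of two jointly sparse signals.
        x1_hat = sparse_recover(A, y1)        # recover the reference signal
        y_diff = y2 - A @ x1_hat              # measurements of the difference x2 - x1_hat
        d_hat = sparse_recover(A, y_diff)     # the difference is sparser, so fewer
        return x1_hat, x1_hat + d_hat         # measurements are needed to recover it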
Sampling of graph signals via randomized local aggregations
Sampling of signals defined over the nodes of a graph is one of the crucial
problems in graph signal processing. While sampling is a well-defined
operation in classical signal processing, many new challenges arise when we
consider a graph signal, and defining an efficient sampling strategy is not
straightforward. Recently, several works have addressed this problem. The most
common techniques select a subset of nodes to reconstruct the entire signal.
However, such methods often require the knowledge of the signal support and the
computation of the sparsity basis before sampling. Instead, in this paper we
propose a new approach to this issue. We introduce a novel technique that
combines localized sampling with compressed sensing. We first choose a subset
of nodes and then, for each node of the subset, we compute random linear
combinations of signal coefficients localized at the node itself and its
neighborhood. The proposed method provides theoretical guarantees in terms of
reconstruction and stability to noise for any graph and any orthonormal basis,
even when the support is not known.
Comment: IEEE Transactions on Signal and Information Processing over Networks, 201
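The sampling step can be pictured with the following numpy sketch (a simplified illustration with assumed parameters, not the paper's exact construction): for each selected node, a few random linear combinations of the signal restricted to the node and its one-hop neighbourhood are collected as measurements.

    import numpy as np

    def local_aggregation_samples(adj, signal, nodes, meas_per_node=3, seed=0):
        # adj: dense 0/1 adjacency matrix; signal: values on the nodes;
        # nodes: indices of the selected nodes.
        rng = np.random.default_rng(seed)
        samples = []
        for v in nodes:
            support = np.flatnonzero(adj[v]).tolist() + [v]   # neighbourhood of v plus v
            weights = rng.standard_normal((meas_per_node, len(support)))
            samples.append(weights @ signal[support])          # localized random projections
        return np.concatenate(samples)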
Sequeval: A Framework to Assess and Benchmark Sequence-based Recommender Systems
In this paper, we present sequeval, a software tool capable of performing the
offline evaluation of a recommender system designed to suggest a sequence of
items. A sequence-based recommender is trained considering the sequences
already available in the system and its purpose is to generate a personalized
sequence starting from an initial seed. This tool automatically evaluates the
sequence-based recommender considering a comprehensive set of eight different
metrics adapted to the sequential scenario. sequeval has been developed
following the best practices of software extensibility. For this reason, it is
possible to easily integrate and evaluate novel recommendation techniques.
sequeval is publicly available as an open source tool and it aims to become a
focal point for the community to assess sequence-based recommender systems.
Comment: REVEAL 2018 Workshop on Offline Evaluation for Recommender System
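As a flavour of what such an offline evaluation computes, here is a tiny, tool-agnostic sketch of one sequence metric (this is not sequeval's API; the recommender is assumed to be a plain callable): the precision of a generated sequence against a held-out continuation of the same seed.

    def sequence_precision(recommender, seed, ground_truth, length):
        # recommender(seed, length) is assumed to return a list of recommended items.
        generated = recommender(seed, length)
        hits = sum(1 for item in generated if item in set(ground_truth))
        return hits / length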
Information Recovery In Behavioral Networks
In the context of agent-based modeling and network theory, we focus on the
problem of recovering behavior-related choice information from
origin-destination type data, a topic also known under the name of network
tomography. As a basis for predicting agents' choices we emphasize the
connection between adaptive intelligent behavior, causal entropy maximization
and self-organized behavior in an open dynamic system. We cast this problem in
the form of binary and weighted networks and suggest information theoretic
entropy-driven methods to recover estimates of the unknown behavioral flow
parameters. Our objective is to recover the unknown behavioral values across
the ensemble analytically, without explicitly sampling the configuration space.
In order to do so, we consider the Cressie-Read family of entropic functionals,
enlarging the set of estimators commonly employed to make optimal use of the
available information. More specifically, we explicitly work out two cases of
particular interest: the Shannon functional and the likelihood functional. We
then employ them for the analysis of both univariate and bivariate data sets,
comparing their accuracy in reproducing the observed trends.
Comment: 14 pages, 6 figures, 4 tables
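For reference, in one common parameterization the Cressie-Read family of divergences between normalized distributions p and q reads as follows, with the two functionals mentioned above recovered as limiting cases (standard formulation, not necessarily the paper's exact notation):

    I_\lambda(p \,\|\, q) = \frac{1}{\lambda(\lambda+1)} \sum_i p_i \left[ \Big(\frac{p_i}{q_i}\Big)^{\lambda} - 1 \right],
    \qquad
    \lim_{\lambda \to 0} I_\lambda = \sum_i p_i \ln \frac{p_i}{q_i} \ \text{(Shannon/KL functional)},
    \qquad
    \lim_{\lambda \to -1} I_\lambda = \sum_i q_i \ln \frac{q_i}{p_i} \ \text{(likelihood functional)}.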