On Known-Plaintext Attacks to a Compressed Sensing-based Encryption: A Quantitative Analysis
Despite the linearity of its encoding, compressed sensing may be used to
provide a limited form of data protection when random encoding matrices are
used to produce sets of low-dimensional measurements (ciphertexts). In this
paper we quantify by theoretical means the resistance of the least complex form
of this kind of encoding against known-plaintext attacks. For both standard
compressed sensing with antipodal random matrices and recent multiclass
encryption schemes based on it, we show how the number of candidate encoding
matrices that match a typical plaintext-ciphertext pair is so large that the
search for the true encoding matrix is inconclusive. Such results on the practical
ineffectiveness of known-plaintext attacks underline the fact that even
closely related signal recovery under encoding-matrix uncertainty is doomed to
fail.
Practical attacks are then exemplified by applying compressed sensing with
antipodal random matrices as a multiclass encryption scheme to signals such as
images and electrocardiographic tracks, showing that the information on the
true encoding matrix extracted from a plaintext-ciphertext pair leads to no
significant signal recovery quality increase. This theoretical and empirical
evidence clarifies that, although not perfectly secure, both standard
compressed sensing and multiclass encryption schemes feature a noteworthy level
of security against known-plaintext attacks, therefore increasing their appeal
as a negligible-cost encryption method for resource-limited sensing applications.
Comment: IEEE Transactions on Information Forensics and Security, accepted for
publication; article in press.
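As a concrete illustration of the encoding analysed above, the following sketch (Python/NumPy, with purely illustrative sizes and variable names; not the authors' code) builds an antipodal random encoding matrix, uses it to measure a sparse plaintext, and notes in a comment why a single plaintext-ciphertext pair leaves the key heavily underdetermined.

```python
# Minimal sketch of compressed-sensing acquisition with an antipodal random
# encoding matrix viewed as a shared secret key. Dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 96                                  # plaintext and ciphertext dimensions

A = rng.choice([-1.0, 1.0], size=(m, n))        # antipodal random encoding matrix (the key)

x = np.zeros(n)                                 # k-sparse plaintext signal
support = rng.choice(n, size=8, replace=False)
x[support] = rng.standard_normal(8)

y = A @ x                                       # linear encoding: the low-dimensional ciphertext

# A known-plaintext attacker observing (x, y) must solve y = A x for A:
# m*n unknown entries constrained by only m equations, so the set of candidate
# encoding matrices consistent with a single pair is enormous.
print(y.shape)
```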
Low-complexity Multiclass Encryption by Compressed Sensing
The idea that compressed sensing may be used to encrypt information from
unauthorised receivers has already been envisioned, but never explored in depth
since its security may seem compromised by the linearity of its encoding
process. In this paper we apply this simple encoding to define a general
private-key encryption scheme in which a transmitter distributes the same
encoded measurements to receivers of different classes, which are provided
partially corrupted encoding matrices and are thus allowed to decode the
acquired signal at provably different levels of recovery quality.
The security properties of this scheme are thoroughly analysed: firstly, the
properties of our multiclass encryption are theoretically investigated by
deriving performance bounds on the recovery quality attained by lower-class
receivers with respect to high-class ones. Then we perform a statistical
analysis of the measurements to show that, although not perfectly secure,
compressed sensing grants some level of security that comes at almost-zero cost
and thus may benefit resource-limited applications.
In addition to this we report some exemplary applications of multiclass
encryption by compressed sensing of speech signals, electrocardiographic tracks
and images, in which quality degradation is quantified by the inability of some
feature extraction algorithms to obtain sensitive information from suitably
degraded signal recoveries.
Comment: IEEE Transactions on Signal Processing, accepted for publication; article in press.
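A minimal sketch of the two-class idea described above, under assumed parameters: lower-class receivers get a copy of the antipodal key with a small fraction of sign-flipped entries and consequently recover the signal with lower quality. Least-squares recovery on a known support stands in for proper sparse recovery purely for brevity.

```python
# Illustrative two-class decoding with a true key and a partially corrupted key.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 128, 64, 6

A = rng.choice([-1.0, 1.0], size=(m, n))        # true encoding matrix (high-class key)
flip = rng.random((m, n)) < 0.05                # corrupt ~5% of the entries
A_low = np.where(flip, -A, A)                   # lower-class key

x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)
y = A @ x                                       # the same measurements are broadcast to everyone

def recover(key):
    """Least-squares estimate of the sparse coefficients, support assumed known."""
    coeffs, *_ = np.linalg.lstsq(key[:, support], y, rcond=None)
    x_hat = np.zeros(n)
    x_hat[support] = coeffs
    return x_hat

for name, key in [("high-class", A), ("low-class", A_low)]:
    err = np.linalg.norm(recover(key) - x) / np.linalg.norm(x)
    print(f"{name} relative recovery error: {err:.3f}")
```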
Function approximation using non-normalized SISO fuzzy systems
In this paper we propose an improvement in the field of fuzzy function approximation. It is well known that tuning the shape and the position of the membership functions improves the approximation, but what about changing the heights of these functions? Usually the system is normalized so that the heights of the membership functions are set to 1, but an interesting result can be obtained if we make them variable, giving a further degree of freedom to the fuzzy system. We will use this feature to achieve a better function approximation, to build a second-order derivative approximation, or to make the derivative of our approximation continuous. We will also show how to increase the spectral purity of the approximation function, as in the case of sinusoidal functions. This approach is analyzed from a theoretical point of view, comparing the results with those obtained with the classical approach.
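The extra degree of freedom discussed in this abstract can be sketched as follows (an illustrative Python/NumPy construction, not the paper's procedure): a SISO system with triangular membership functions whose heights are left free rather than fixed to 1, and are then fitted by least squares to a sine target.

```python
# Non-normalized SISO fuzzy approximator: f(x) = sum_i h_i * mu_i(x),
# with the heights h_i treated as free parameters (here tuned by least squares).
import numpy as np

def tri(x, centre, width):
    """Triangular membership function centred at `centre` with half-width `width`."""
    return np.clip(1.0 - np.abs(x - centre) / width, 0.0, None)

centres = np.linspace(0.0, 2.0 * np.pi, 15)
width = centres[1] - centres[0]

x = np.linspace(0.0, 2.0 * np.pi, 400)
target = np.sin(x)                              # function to approximate

Phi = np.stack([tri(x, c, width) for c in centres], axis=1)   # membership activations
h, *_ = np.linalg.lstsq(Phi, target, rcond=None)              # tune the heights

approx = Phi @ h
print("max abs error:", np.max(np.abs(approx - target)))
```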
Algorithmic fairness through group parities? The case of COMPAS-SAPMOC
Machine learning classifiers are increasingly used to inform, or even make, decisions significantly affecting human lives. Fairness concerns have spawned a number of contributions aimed at both identifying and addressing unfairness in algorithmic decision-making. This paper critically discusses the adoption of group-parity criteria (e.g., demographic parity, equality of opportunity, treatment equality) as fairness standards. To this end, we evaluate the use of machine learning methods relative to different steps of the decision-making process: assigning a predictive score, linking a classification to the score, and adopting decisions based on the classification. Throughout our inquiry we use the COMPAS system, complemented by a radical simplification of it (our SAPMOC I and SAPMOC II models), as our running examples. Through these examples, we show how a system that is equally accurate for different groups may fail to comply with group-parity standards, owing to different base rates in the population. We discuss the general properties of the statistics determining the satisfaction of group-parity criteria and levels of accuracy. Using the distinction between scoring, classifying, and deciding, we argue that equalisation of classifications/decisions between groups can be achieved through group-dependent thresholding. We discuss contexts in which this approach may be meaningful and useful in pursuing policy objectives. We claim that the implementation of group-parity standards should be left to competent human decision-makers, under appropriate scrutiny, since it involves discretionary value-based political choices. Accordingly, predictive systems should be designed in such a way that relevant policy goals can be transparently implemented. Our paper presents three main contributions: (1) it addresses a complex predictive system through the lens of simplified toy models; (2) it argues for selective policy interventions on the different steps of automated decision-making; (3) it points to the limited significance of statistical notions of fairness to achieve social goals.
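The base-rate effect and the group-dependent thresholding mentioned above can be illustrated with a small synthetic sketch (the data, score model, and thresholds are assumptions, not COMPAS or SAPMOC figures): a scorer that is equally accurate for two groups violates demographic parity under a single threshold, while per-group thresholds roughly equalise positive-decision rates.

```python
# Synthetic illustration: equal accuracy across groups vs. demographic parity.
import numpy as np

rng = np.random.default_rng(7)

def simulate(base_rate, size):
    """Draw ground-truth labels with the given base rate and noisy scores."""
    y = rng.random(size) < base_rate
    score = np.where(y, rng.normal(0.7, 0.15, size), rng.normal(0.3, 0.15, size))
    return y, np.clip(score, 0.0, 1.0)

groups = {"A": simulate(0.5, 10_000), "B": simulate(0.2, 10_000)}

policies = [
    {"A": 0.50, "B": 0.50},   # single threshold: equal accuracy, unequal positive rates
    {"A": 0.50, "B": 0.35},   # group-dependent thresholds: positive rates roughly equalised
]
for thresholds in policies:
    print("thresholds:", thresholds)
    for g, (y, s) in groups.items():
        pred = s >= thresholds[g]
        print(f"  group {g}: accuracy={np.mean(pred == y):.2f}, "
              f"positive rate={np.mean(pred):.2f}")
```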
Rakeness in the design of Analog-to-Information Conversion of Sparse and Localized Signals
Design of Random Modulation Pre-Integration systems based on the
restricted-isometry property may be suboptimal when the energy of the signals
to be acquired is not evenly distributed, i.e. when they are both sparse and
localized. To counter this, we introduce an additional design criterion, which
we call rakeness, accounting for the amount of energy that the measurements
capture from the signal to be acquired. Hence, for localized signals a proper
system tuning increases the rakeness as well as the average SNR of the samples
used in its reconstruction. Yet, maximizing average SNR may go against the need
of capturing all the components that are potentially non-zero in a sparse
signal, i.e., against the restricted isometry requirement ensuring
reconstructability. What we propose is to manage the trade-off between
rakeness and restricted isometry in a statistical way by laying down an
optimization problem. The solution of such an optimization problem is the
statistic of the process generating the random waveforms onto which the signal
is projected to obtain the measurements. The formal definition of such a
problems is given as well as its solution for signals that are either localized
in frequency or in more generic domain. Sample applications, to ECG signals and
small images of printed letters and numbers, show that rakeness-based design
leads to non-negligible improvements in both cases
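A rough sketch of a rakeness-oriented design under stated assumptions: the second-order statistics of the projection waveforms are taken as a convex blend of a white (restricted-isometry-friendly) term and the signal correlation (the energy-raking term). The blend with parameter alpha is only a proxy for the optimization problem described above, whose exact solution is not reproduced here, and Gaussian rather than antipodal sequences are generated for simplicity.

```python
# Illustrative rakeness-vs-whiteness blend for the sensing-sequence correlation.
import numpy as np

rng = np.random.default_rng(3)
n, m = 128, 32

# Toy correlation matrix of a frequency-localized signal class
# (sinusoidal correlation with exponential decay; kept positive definite).
idx = np.arange(n)
Cx = np.array([[np.cos(2 * np.pi * 0.05 * (i - j)) * 0.95 ** abs(i - j)
                for j in idx] for i in idx])
Cx += 1e-4 * np.eye(n)

alpha = 0.5                                       # rakeness / restricted-isometry trade-off
C_sense = (1 - alpha) * np.eye(n) + alpha * n * Cx / np.trace(Cx)

L = np.linalg.cholesky(C_sense)
A = rng.standard_normal((m, n)) @ L.T             # rows ~ N(0, C_sense): projection waveforms

# Average energy captured per measurement ("rakeness"), vs. a unit-variance white design.
raked = np.trace(A @ Cx @ A.T) / m
white = np.trace(Cx)
print(f"raked energy: {raked:.1f} vs white design: {white:.1f}")
```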
A Non-conventional Sum-and-Max based Neural Network layer for Low Power Classification
The increasing need for small and low-power Deep Neural Networks (DNNs) for edge computing applications motivates the investigation of new architectures that perform well on low-resource and mobile devices. To this aim, many different structures have been proposed in the literature, mainly targeting a reduction of the costs introduced by the Multiply-and-Accumulate (MAC) primitive. In this work, a DNN layer based on the novel Sum-and-Max (SAM) paradigm is proposed. It requires neither multiplications nor complex non-linear operations. Furthermore, it is especially amenable to aggressive pruning, and thus needs very few parameters to work. The layer is tested on a simple classification task and its cost is compared with that of a classic, equally accurate MAC-based DNN layer, in order to assess the resource savings that this new structure could bring.
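One plausible, multiplication-free reading of the SAM primitive is a max-plus operation, out_j = max_i (x_i + W[j, i]); this interpretation, like the layer sizes below, is an assumption rather than the paper's exact definition.

```python
# Sketch of a sum-and-max style layer: additions and comparisons only, no MACs.
import numpy as np

class SAMLayer:
    def __init__(self, in_features, out_features, rng=None):
        rng = rng or np.random.default_rng()
        self.W = rng.standard_normal((out_features, in_features))

    def __call__(self, x):
        # x: (batch, in_features) -> (batch, out_features)
        # Pruned connections could simply be set to -np.inf, dropping out of the max.
        return np.max(x[:, None, :] + self.W[None, :, :], axis=-1)

layer = SAMLayer(in_features=16, out_features=4, rng=np.random.default_rng(0))
x = np.random.default_rng(1).standard_normal((2, 16))
print(layer(x).shape)        # (2, 4)
```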