
    Low-complexity Multiclass Encryption by Compressed Sensing

    The idea that compressed sensing may be used to encrypt information from unauthorised receivers has already been envisioned, but never explored in depth, since its security may seem compromised by the linearity of its encoding process. In this paper we apply this simple encoding to define a general private-key encryption scheme in which a transmitter distributes the same encoded measurements to receivers of different classes, which are provided partially corrupted encoding matrices and are thus allowed to decode the acquired signal at provably different levels of recovery quality. The security properties of this scheme are thoroughly analysed: first, the properties of our multiclass encryption are theoretically investigated by deriving performance bounds on the recovery quality attained by lower-class receivers with respect to high-class ones. Then we perform a statistical analysis of the measurements to show that, although not perfectly secure, compressed sensing grants some level of security that comes at almost-zero cost and thus may benefit resource-limited applications. In addition, we report some exemplary applications of multiclass encryption by compressed sensing of speech signals, electrocardiographic tracks and images, in which quality degradation is quantified as the inability of some feature-extraction algorithms to obtain sensitive information from suitably degraded signal recoveries.
    Comment: IEEE Transactions on Signal Processing, accepted for publication. Article in press.
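The class-dependent degradation described above can be sketched in a few lines of NumPy. The dimensions, antipodal key, and 5% corruption fraction below are illustrative choices rather than the paper's, and plain minimum-norm least squares stands in for the sparse-recovery solvers used in practice:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 128                      # signal length, number of measurements

x = rng.standard_normal(n)           # plaintext signal (dense here; sparse in CS practice)

# Antipodal encoding matrix: acts as the private key of first-class receivers.
A = rng.choice([-1.0, 1.0], size=(m, n))
y = A @ x                            # ciphertext = linear measurements

# A lower-class receiver only holds a partially corrupted key:
# a small fraction of its entries is sign-flipped.
flip = rng.random((m, n)) < 0.05
A_low = np.where(flip, -A, A)

# Minimum-norm recovery with each key (a stand-in for sparse recovery).
x_hi = np.linalg.lstsq(A, y, rcond=None)[0]
x_lo = np.linalg.lstsq(A_low, y, rcond=None)[0]

err_hi = np.linalg.norm(x - x_hi) / np.linalg.norm(x)
err_lo = np.linalg.norm(x - x_lo) / np.linalg.norm(x)
```

With the true key the recovery error is limited by the undersampling alone; the corrupted key adds a further, orthogonal error term, so `err_lo` exceeds `err_hi`.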

    On Known-Plaintext Attacks to a Compressed Sensing-based Encryption: A Quantitative Analysis

    Despite the linearity of its encoding, compressed sensing may be used to provide a limited form of data protection when random encoding matrices are used to produce sets of low-dimensional measurements (ciphertexts). In this paper we quantify by theoretical means the resistance of the least complex form of this kind of encoding against known-plaintext attacks. For both standard compressed sensing with antipodal random matrices and recent multiclass encryption schemes based on it, we show that the number of candidate encoding matrices that match a typical plaintext-ciphertext pair is so large that the search for the true encoding matrix is inconclusive. Such results on the practical ineffectiveness of known-plaintext attacks underline the fact that even closely related signal recovery under encoding-matrix uncertainty is doomed to fail. Practical attacks are then exemplified by applying compressed sensing with antipodal random matrices as a multiclass encryption scheme to signals such as images and electrocardiographic tracks, showing that the information on the true encoding matrix extracted from a plaintext-ciphertext pair leads to no significant increase in signal recovery quality. This theoretical and empirical evidence clarifies that, although not perfectly secure, both standard compressed sensing and multiclass encryption schemes feature a noteworthy level of security against known-plaintext attacks, therefore increasing their appeal as negligible-cost encryption methods for resource-limited sensing applications.
    Comment: IEEE Transactions on Information Forensics and Security, accepted for publication. Article in press.
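The counting argument behind the attack's ineffectiveness can be checked exhaustively at toy scale. The plaintext values and the key row below are made up for the example; realistic signal lengths make the candidate set astronomically larger than this:

```python
import itertools
import numpy as np

# Toy known plaintext and one true antipodal row of the secret encoding
# matrix; n is kept tiny so all 2**n candidate rows can be enumerated.
x = np.array([1, 1, 2, 2, 3, 3])       # known plaintext (illustrative values)
a = np.array([1, -1, 1, -1, 1, -1])    # true row of the private key
c = int(a @ x)                          # the matching ciphertext entry

# A known-plaintext attacker can only keep rows consistent with (x, c).
candidates = [
    b for b in itertools.product((-1, 1), repeat=len(x))
    if int(np.dot(b, x)) == c
]
```

Even in this six-dimensional example, ten antipodal rows are consistent with the single plaintext-ciphertext pair, and nothing in the pair distinguishes the true row among them.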

    Event Encryption: Rethinking Privacy Exposure for Neuromorphic Imaging

    Bio-inspired neuromorphic cameras sense illumination changes on a per-pixel basis and generate spatiotemporal streaming events within microseconds in response, offering visual information with high temporal resolution over a high dynamic range. Such devices often serve in surveillance systems due to their applicability and robustness in environments with high dynamics and strong or weak lighting, where they can still supply clearer recordings than traditional imaging. By the same token, in privacy-relevant cases neuromorphic cameras expose more sensitive data and thus pose serious security threats. Asynchronous event streams therefore necessitate careful encryption before transmission and usage. This letter discusses several potential attack scenarios and approaches event encryption from the perspective of neuromorphic noise removal, in which we inversely introduce well-crafted noise into raw events until they are obfuscated. Evaluations show that the encrypted events can effectively protect information from the attacks of low-level visual reconstruction and high-level neuromorphic reasoning, and thus feature dependable privacy-preserving competence. Our solution gives impetus to the security of event data and paves the way to strong encryption techniques for privacy-protective neuromorphic imaging.
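A minimal sketch of the noise-injection idea, assuming a simple representation of events as (x, y, timestamp, polarity) rows and a shared key that seeds the noise generator; the letter's actual noise-crafting strategy is more sophisticated than the uniform noise used here:

```python
import numpy as np
from collections import Counter

WIDTH, HEIGHT, T_MAX = 240, 180, 10_000   # illustrative sensor geometry

def make_noise(key, n):
    """Key-seeded pseudo-random events, reproducible by any key holder."""
    rng = np.random.default_rng(key)
    return np.stack([
        rng.integers(0, WIDTH, n),
        rng.integers(0, HEIGHT, n),
        rng.integers(0, T_MAX, n),
        rng.integers(0, 2, n),
    ], axis=1)

def encrypt(events, key, ratio=4):
    """Obfuscate a stream by flooding it with keyed noise events."""
    noise = make_noise(key, ratio * len(events))
    mixed = np.concatenate([events, noise])
    # Stable sort by timestamp so the result looks like one event flow.
    return mixed[np.argsort(mixed[:, 2], kind="stable")]

def decrypt(mixed, key, ratio=4):
    """Regenerate the keyed noise and strip it from the mixed stream."""
    n_noise = len(mixed) * ratio // (ratio + 1)
    budget = Counter(map(tuple, make_noise(key, n_noise)))
    kept = []
    for ev in map(tuple, mixed):
        if budget[ev] > 0:
            budget[ev] -= 1          # this occurrence is keyed noise
        else:
            kept.append(ev)
    return np.array(kept)

# Round-trip demo on a synthetic stream.
rng = np.random.default_rng(7)
real = np.stack([
    rng.integers(0, WIDTH, 50),
    rng.integers(0, HEIGHT, 50),
    np.sort(rng.integers(0, T_MAX, 50)),
    rng.integers(0, 2, 50),
], axis=1)

cipher = encrypt(real, key=1234)
plain = decrypt(cipher, key=1234)
```

An attacker without the key sees a stream dominated by noise events, while a key holder regenerates the same noise and strips it out.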

    Sensor Signal and Information Processing II [Editorial]

    This Special Issue compiles a set of innovative developments on the use of sensor signals and information processing. In particular, these contributions report original studies on a wide variety of sensor signals including wireless communication, machinery, ultrasound, imaging, and internet data, and on information processing methodologies such as deep learning, machine learning, compressive sensing, and variational Bayesian methods. All these contributions have one point in common: the algorithms incorporate some form of computational intelligence as part of their core problem-solving framework. They have the capacity to generalize and discover knowledge for themselves, learning new information whenever unseen data are captured.

    Towards Authentication of IoMT Devices via RF Signal Classification

    The increasing reliance on the Internet of Medical Things (IoMT) raises great concern in terms of cybersecurity, either at the device’s physical level or at the communication and transmission level. This is particularly important as these systems process very sensitive and private data, including personal health data from multiple patients such as real-time body measurements. Due to these concerns, cybersecurity mechanisms and strategies must be in place to protect these medical systems, defending them from compromising cyberattacks. Authentication is an essential cybersecurity technique for trustworthy IoMT communications. However, current authentication methods rely on upper-layer identity verification or key-based cryptography, which can be inadequate for heterogeneous Internet of Things (IoT) environments. This thesis proposes the development of a Machine Learning (ML) method that serves as a foundation for Radio Frequency Fingerprinting (RFF) in the authentication of IoMT devices in medical applications, improving the flexibility of such mechanisms. This technique allows the authentication of medical devices by their physical-layer characteristics, i.e. by their emitted signal. The developed ML models evaluate and categorise the emitted signal, enabling RFF authentication. Multiple features take part in the proposed decision-making process for classifying the device, which is then implemented in a medical gateway, resulting in a novel IoMT technology.
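The physical-layer decision process can be caricatured with a nearest-centroid classifier over RF feature vectors. The device names, two-dimensional features, and noise level below are invented for illustration; the thesis's actual ML models are more elaborate:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-device fingerprints: analog imperfections (carrier offset,
# I/Q imbalance, ...) shift each transmitter's feature statistics.
device_means = {"dev_A": np.array([0.0, 1.0]), "dev_B": np.array([2.0, -1.0])}

def sample_features(dev, n):
    """Draw n noisy feature vectors for a given device."""
    return device_means[dev] + 0.2 * rng.standard_normal((n, 2))

# "Enrollment": learn a centroid per device from labelled captures.
centroids = {dev: sample_features(dev, 100).mean(axis=0) for dev in device_means}

def authenticate(feature_vec):
    """Attribute a captured burst to the nearest enrolled device."""
    return min(centroids, key=lambda d: np.linalg.norm(centroids[d] - feature_vec))

probe_a = sample_features("dev_A", 1)[0]
probe_b = sample_features("dev_B", 1)[0]
```

A gateway running such a classifier accepts a device only when the fingerprint of its emitted signal matches an enrolled identity, independently of any upper-layer credentials.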

    Federated Learning in Medical Imaging: Part II: Methods, Challenges, and Considerations

    Federated learning is a machine learning method that allows decentralized training of deep neural networks among multiple clients while preserving the privacy of each client's data. Federated learning is instrumental in medical imaging due to the privacy considerations of medical data. Setting up federated networks in hospitals comes with unique challenges, primarily because medical imaging data and federated learning algorithms each have their own set of distinct characteristics. This article introduces federated learning algorithms in medical imaging and discusses the technical challenges and considerations of implementing them in real-world settings.
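The core federated-averaging loop can be sketched with a least-squares model and three hypothetical hospitals as clients; only model weights, never data, leave each site. Client sizes, learning rate, and round counts are illustrative:

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=50):
    """One client's local training: gradient descent on its private least-squares loss."""
    for _ in range(epochs):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

def fed_avg(weights, sizes):
    """Server aggregation: average client models weighted by local data size."""
    return np.average(weights, axis=0, weights=np.asarray(sizes, dtype=float))

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])

# Three hypothetical hospitals, each holding data that never leaves the site.
clients = []
for n in (30, 50, 20):
    X = rng.standard_normal((n, 2))
    y = X @ w_true + 0.01 * rng.standard_normal(n)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(10):                                  # communication rounds
    local = [local_step(w_global.copy(), X, y) for X, y in clients]
    w_global = fed_avg(local, [len(y) for _, y in clients])
```

The server only ever sees the locally trained weight vectors, yet the aggregated model converges towards the parameters underlying all three private datasets.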

    Recent Advances in Embedded Computing, Intelligence and Applications

    The latest proliferation of Internet of Things deployments and edge computing combined with artificial intelligence has led to new exciting application scenarios, where embedded digital devices are essential enablers. Moreover, new powerful and efficient devices are appearing to cope with workloads formerly reserved for the cloud, such as deep learning. These devices allow processing close to where data are generated, avoiding bottlenecks due to communication limitations. The efficient integration of hardware, software and artificial intelligence capabilities deployed in real sensing contexts empowers the edge intelligence paradigm, which will ultimately contribute to fostering the offloading of processing functionalities to the edge. In this Special Issue, researchers have contributed nine peer-reviewed papers covering a wide range of topics in the area of edge intelligence. Among them are hardware-accelerated implementations of deep neural networks, IoT platforms for extreme edge computing, neuro-evolvable and neuromorphic machine learning, and embedded recommender systems.

    Cryptographic approaches to security and optimization in machine learning

    Modern machine learning techniques have achieved surprisingly good standard test accuracy, yet classical machine learning theory has been unable to explain the underlying reason behind this success. The phenomenon of adversarial examples further complicates our understanding of what it means to have good generalization ability. Classifiers that generalize well to the test set are easily fooled by imperceptible image modifications, which can often be computed without knowledge of the classifier itself. The adversarial error of a classifier measures its error when each test data point may be modified by an adversarial algorithm before being given as input to the classifier. Follow-up work has shown that a tradeoff exists between optimizing for standard generalization error and optimizing for adversarial error. This calls into question whether standard generalization error is the correct metric to measure. We try to understand the generalization capability of modern machine learning techniques through the lens of adversarial examples. To reconcile the apparent tradeoff between the two competing notions of error, we create new security definitions and classifier constructions that allow us to prove an upper bound on the adversarial error that decreases as standard test error decreases. We introduce a cryptographic proof technique by defining a security assumption in a simpler attack setting and proving a security reduction from a restricted black-box attack problem to this security assumption. We then investigate the double descent curve in the interpolation regime, where test error can continue to decrease even after training error has reached zero, to give a natural explanation for the observed tradeoff between adversarial error and standard generalization error. The second part of our work further investigates this notion of a black-box model by looking at the separation between being able to evaluate a function and being able to actually understand it.
    This is formalized through the notion of function obfuscation in cryptography. Given some concrete implementation of a function, the implementation is considered obfuscated if a user cannot produce the function's output on a test input without querying the implementation itself. This means that a user cannot actually learn or understand the function even though all of the implementation details are presented in the clear. As expected, this is a very strong requirement that cannot be met for all functions one might be interested in. In our work we make progress on providing obfuscation schemes for simple, explicit function classes. The last part of our work investigates non-statistical biases and algorithms for nonconvex optimization problems. We show that the continuous-time limit of stochastic gradient descent does not converge directly to the local optimum, but rather has a bias term which grows with the step size. We also construct novel, non-statistical algorithms for two parametric learning problems by employing lattice basis reduction techniques from cryptography.
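The "evaluate without understanding" requirement can be illustrated with the classical salted-hash obfuscation of a point function f(x) = (x == secret); this is a textbook construction, not necessarily the scheme studied in the thesis, and the secret string below is a toy value:

```python
import hashlib
import secrets

def obfuscate_point_function(secret: str):
    """Return an evaluator for f(x) = (x == secret) that exposes only a
    salted hash of the secret, never the secret itself."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + secret.encode()).digest()

    def evaluate(x: str) -> bool:
        # Anyone can run f, but recovering `secret` from (salt, digest)
        # would require inverting the hash.
        return hashlib.sha256(salt + x.encode()).digest() == digest

    return evaluate

check = obfuscate_point_function("hunter2")   # toy secret for the demo
```

The returned `evaluate` closure is fully usable, yet its captured state reveals nothing about the secret beyond what queries expose, which is exactly the separation between evaluating and understanding a function.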