
    Compression image sharing using DCT-Wavelet transform and coding by Blakley method

    The increased use of computers and the internet has been accompanied by the wide use of multimedia information, and the requirement for protecting this information has risen dramatically. To prevent confidential information from being tampered with, one needs to apply cryptographic techniques. Most cryptographic strategies share one weak point: the information is centralized. To overcome this drawback, secret sharing was introduced. It is a technique for distributing a secret among a group of members such that every member owns a share of the secret, but only particular combinations of shares can reveal the secret; individual shares reveal nothing about it. The major challenge facing image secret sharing is the shadow size: the combined size of the minimum set of shares needed for revealing is greater than the original secret file. The core of this work is therefore to use different transform-coding strategies to obtain the smallest possible share size. In this paper, a Compressive Sharing System for Images Using Transform Coding and the Blakley Method is introduced. In the proposed compressive secret sharing scheme, an appropriate transform (discrete cosine transform or wavelet) is first applied to de-correlate the image samples; the output (i.e., the compressed image data) is then fed to a diffusion scheme that removes any statistical redundancy or bits of important attributes remaining in the compressed stream; finally, a (k, n) threshold secret sharing scheme is applied, where n is the number of generated shares and k is the minimum number of shares needed for revealing. To ensure a high security level, each produced share is passed through a stream cipher keyed by an individual encryption key belonging to the shareholder.
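    The (k, n) threshold contract described above can be sketched in a few lines. For brevity this illustration uses Shamir's polynomial scheme rather than the Blakley geometric method the paper employs, but the interface is the same: any k shares reveal the secret, and fewer reveal nothing.

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is in the field GF(P)

def make_shares(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reveal(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return secret

shares = make_shares(123456789, k=3, n=5)
assert reveal(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert reveal(shares[1:4]) == 123456789
```

    In the paper's pipeline the "secret" would be the compressed, diffused image stream rather than a single integer, and each resulting share would additionally be stream-ciphered with the shareholder's key.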

    Persistent Homology Tools for Image Analysis

    Topological Data Analysis (TDA) is a new field of mathematics that has emerged rapidly since the first decade of this century from various works in algebraic topology and geometry. The goal of TDA, and of its main tool, persistent homology (PH), is to provide topological insight into complex and high-dimensional datasets. We take this premise on board to extract topological insight from digital image analysis and to quantify tiny low-level distortions that are undetectable except, possibly, by highly trained persons. Such image distortions can be introduced intentionally (e.g. by morphing and steganography) or arise naturally in scans of abnormal human tissue or organs as a result of the onset of cancer or other diseases. The main objective of this thesis is to design new image analysis tools based on persistent homological invariants representing simplicial complexes on sets of pixel landmarks over a sequence of distance resolutions. We first propose innovative automatic techniques to select image pixel landmarks and build a variety of simplicial topologies from a single image. The effectiveness of each landmark selection technique is demonstrated by testing on different image tampering problems such as morphed face detection, steganalysis and breast tumour detection. Vietoris-Rips simplicial complexes are constructed from the image landmarks at increasing distance thresholds, and topological (homological) features are computed at each threshold and summarised in a form known as persistent barcodes. We vectorise the space of persistent barcodes using a technique known as persistent binning, whose strength we demonstrate for various image analysis purposes. Different machine learning approaches are adopted to develop automatic detection of tiny texture distortions in many image analysis applications. The homological invariants used in this thesis are the 0- and 1-dimensional Betti numbers.
We developed an innovative approach to designing persistent homology (PH) based algorithms for automatic detection of the types of image distortion described above. In particular, we developed the first PH detector of morphing attacks on passport face biometric images. We demonstrate significant accuracy for two such morph detection algorithms with four types of automatically extracted image landmarks: local binary patterns (LBP), 8-neighbour super-pixels (8NSP), radial LBP (R-LBP) and centre-symmetric LBP (CS-LBP). Each of these techniques yields several persistent barcodes that summarise persistent topological features and help gain insight into complex hidden structures not amenable to other image analysis methods. We also demonstrate the significant success of a similarly developed PH-based universal steganalysis tool capable of detecting secret messages hidden inside digital images. A pilot study further argues that building PH records from digital mammographic images can differentiate malignant breast tumours from benign ones. The research presented in this thesis creates new opportunities to build real applications based on TDA and highlights many research challenges in a variety of image processing/analysis tasks. For example, we describe a TDA-based exemplar image inpainting technique (TEBI), superior to the existing exemplar algorithm, for the reconstruction of missing image regions.
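    The 0-dimensional part of the Vietoris-Rips filtration described above can be computed with a union-find sweep over edges sorted by length: every landmark starts as its own component (a bar born at threshold 0), and a bar dies at the edge length that merges its component into another. This is an illustrative sketch (function and variable names are ours), not the thesis's pipeline:

```python
import math
from itertools import combinations

def h0_barcodes(points):
    """0-dimensional persistent homology of the Rips filtration on `points`."""
    parent = list(range(len(points)))
    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    # Process edges of the filtration in order of increasing length.
    edges = sorted((math.dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(len(points)), 2))
    bars = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                   # two components merge: one bar dies
            bars.append((0.0, d))
            parent[ri] = rj
    bars.append((0.0, math.inf))       # the last component never dies
    return bars

# Two tight pairs of landmarks, far from each other: two short bars
# (pairs merging at distance 1.0), one longer bar, and one infinite bar.
print(h0_barcodes([(0, 0), (0, 1), (5, 5), (5, 6)]))
```

    The number of bars alive at a given threshold is exactly the 0-dimensional Betti number at that threshold; persistent binning would then turn the list of (birth, death) pairs into a fixed-length feature vector for the machine learning stage.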

    Efficient Security Algorithm for Provisioning Constrained Internet of Things (IoT) Devices

    Addressing the security concerns of constrained Internet of Things (IoT) devices, such as client-side encryption and secure provisioning, remains a work in progress. IoT devices characterised by low power and processing capabilities do not fit well into the provisions of existing security schemes, as classical security algorithms are built on cryptographic functions too complex for constrained IoT devices. Consequently, the option for constrained IoT devices lies in either developing new security schemes or modifying existing ones to be lightweight. This work presents an improved version of the Advanced Encryption Standard (AES), known as the Efficient Security Algorithm for Power-constrained IoT devices, which addresses some of these concerns. With cloud computing being the key enabler for the massive provisioning of IoT devices, client-side encryption of data generated by IoT devices before onward transmission to the cloud platform of choice is advocated. However, coping with trade-offs remains a notable challenge for lightweight algorithms, making cheaper security schemes that do not compromise security highly desirable for the secure provisioning of IoT devices. A cryptanalytic overview of the consequences of complexity reduction is given with mathematical justification, using a Secure Element (ATECC608A) as a trade-off. The extent of constraint of a typical IoT device is investigated by comparing laptop and SAMG55 implementations of the Efficient algorithm for constrained IoT devices. An analysis of the implementation and a comparison of the algorithm to lightweight algorithms are given.
Based on experimental results, resource constraints impose a 657% increase in encryption completion time on the IoT device compared to the laptop implementation of the Efficient algorithm for constrained IoT devices. In terms of encryption completion time, the algorithm is 0.9 times cheaper than CLEFIA and 35% cheaper than AES, compared to the 26% reported in the current literature, and it achieves a 93% avalanche effect rate, well above the 50% recommended in the literature. The algorithm is used for client-side encryption to provision the device onto the AWS IoT Core.
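    The avalanche effect quoted above is measured by flipping one input bit at a time and counting how many output bits change; a strong primitive should average close to 50%. A minimal sketch of that measurement, using SHA-256 as a stand-in transform since the thesis's modified AES is not reproduced here:

```python
import hashlib

def bit_diff_percent(a: bytes, b: bytes) -> float:
    """Percentage of bits that differ between two equal-length byte strings."""
    diff = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return 100.0 * diff / (8 * len(a))

def avalanche(block: bytes, f) -> float:
    """Average output-bit change of f over all single-bit flips of `block`."""
    base = f(block)
    n_bits = 8 * len(block)
    total = 0.0
    for i in range(n_bits):
        flipped = bytearray(block)
        flipped[i // 8] ^= 1 << (i % 8)      # flip exactly one input bit
        total += bit_diff_percent(base, f(bytes(flipped)))
    return total / n_bits

# Stand-in keyed transform (hypothetical; the thesis measures its AES variant).
f = lambda m: hashlib.sha256(b"key" + m).digest()
print(round(avalanche(b"16-byte block!!!", f), 1))  # close to 50.0
```

    Running the same harness over a weakened cipher is how a score such as the 93% single-measurement rate reported above would be obtained and compared against the 50% ideal.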

    On the privacy risks of machine learning models

    Machine learning (ML) has made huge progress in the last decade and has been applied to a wide range of critical applications. However, driven by the increasing adoption of machine learning models, privacy risks have become more significant than ever. These risks can be classified into two categories depending on the role played by ML models: one in which the models themselves are vulnerable to leaking sensitive information, and the other in which the models are abused to violate privacy. In this dissertation, we investigate the privacy risks of machine learning models from these two perspectives, i.e., the vulnerability of ML models and the abuse of ML models. To study the vulnerability of ML models to privacy risks, we conduct two studies on one of the most severe privacy attacks against ML models, namely the membership inference attack (MIA). First, we explore membership leakage in label-only exposure of ML models. We present the first label-only membership inference attack and reveal that membership leakage is more severe than previously shown. Second, we perform the first privacy analysis of multi-exit networks through the lens of membership leakage. We leverage existing attack methodologies to quantify the vulnerability of multi-exit networks to membership inference attacks and propose a hybrid attack that exploits exit information to improve attack performance. From the perspective of abusing ML models to violate privacy, we focus on deepfake face manipulation, which can create visual misinformation. We propose the first defense system \system against GAN-based face manipulation; it jeopardizes the process of GAN inversion, an essential step for subsequent face manipulation. All findings contribute to the community's insight into the privacy risks of machine learning models.
We call on the community to undertake in-depth investigations of privacy risks, like ours, against rapidly evolving machine learning techniques.
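    As a toy illustration of the membership inference idea studied above (far simpler than the dissertation's label-only and multi-exit attacks), an attacker can threshold the model's top softmax confidence: overfit models tend to be noticeably more confident on samples they were trained on.

```python
def predict_member(confidences, threshold=0.9):
    """Confidence-threshold membership inference (illustrative only):
    flag a sample as a training member if the model's top softmax
    probability exceeds the threshold."""
    return max(confidences) >= threshold

# Hypothetical softmax outputs for one queried sample each.
member_scores = [0.97, 0.01, 0.02]      # model very sure: likely a member
non_member_scores = [0.55, 0.30, 0.15]  # model unsure: likely unseen data
assert predict_member(member_scores)
assert not predict_member(non_member_scores)
```

    The label-only setting removes even these confidence scores, which is why the dissertation's attack instead probes the model's decision boundary; the sketch only conveys the underlying signal that membership leaks through model behaviour.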