
    A new (k,n) verifiable secret image sharing scheme (VSISS)

    In this paper, a new (k, n) verifiable secret image sharing scheme (VSISS) is proposed in which a third-order LFSR (linear feedback shift register)-based public-key cryptosystem is applied for cheating prevention and for preview before decryption. In the proposed scheme the secret image is first partitioned into several non-overlapping blocks of k pixels. Each block of k pixels is then used to form m = ⌈k/4⌉ + 1 pixels of one encrypted share. The original secret image can be reconstructed by gathering any k or more encrypted share images. The experimental results show that the proposed VSISS is an efficient and safe method.
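    The LFSR-based public-key construction the abstract mentions is not detailed here, so the sketch below illustrates only the generic block-based (k, n) threshold idea it builds on (in the style of Thien and Lin's polynomial scheme): the k pixels of a block become the coefficients of a polynomial over a prime field, and each share receives one evaluation. The modulus, function names, and demo values are illustrative assumptions, not the paper's scheme.

```python
# Minimal sketch of block-based (k, n) threshold image sharing in the
# Thien-Lin style. This is NOT the paper's LFSR-based verifiable scheme;
# it only illustrates the generic "k pixels per block, any k shares
# reconstruct" idea. P, function names, and the demo are assumptions.

P = 251  # largest prime < 256; pixel values >= 251 need special handling

def share_block(block, n):
    """Use a block of k pixels as polynomial coefficients over GF(P)
    and evaluate at x = 1..n, yielding one share value per participant."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(block)) % P
            for x in range(1, n + 1)]

def reconstruct_block(points, k):
    """Lagrange interpolation over GF(P): recover the k coefficients
    (the original pixels) from any k (x, y) share points."""
    xs, ys = zip(*points[:k])
    coeffs = [0] * k
    for j in range(k):
        num, denom = [1], 1          # num = prod_{m != j} (X - x_m)
        for m in range(k):
            if m == j:
                continue
            num = [(a - xs[m] * b) % P
                   for a, b in zip([0] + num, num + [0])]
            denom = (denom * (xs[j] - xs[m])) % P
        inv = pow(denom, P - 2, P)   # modular inverse via Fermat
        for i in range(k):
            coeffs[i] = (coeffs[i] + ys[j] * num[i] * inv) % P
    return coeffs

block = [10, 200, 33]                      # k = 3 pixels of one block
shares = share_block(block, n=5)           # one value per share image
pts = list(zip(range(1, 6), shares))[:3]   # any 3 shares suffice
assert reconstruct_block(pts, k=3) == block
```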

    Reversible Multiple Image Secret Sharing Using Discrete Haar Wavelet Transform

    A multiple secret image sharing scheme is a protected approach to transmitting more than one secret image over a communication channel. Conventionally, only a single secret image is shared over a channel at a time, but as technology has advanced, there is a need to share more than one secret image. A fast (r, n) multiple secret image sharing scheme based on the discrete Haar wavelet transform has been proposed to encrypt m secret images into n noisy images that are stored on different servers; any r of the noisy images are required to recover the m secret images. The Haar discrete wavelet transform (DWT) is employed to reduce each secret image to a quarter of its size (i.e., its LL subband). The LL subbands of all secrets are combined into one secret that is later split randomly into r subblocks using the proposed high-quality pseudo-random generator. Finally, a developed (r, n) threshold multiple image secret sharing scheme based on a linear system is used to generate unrelated shares. The experimental results showed that the generated shares are secure and mutually unrelated, and that each generated share is 1/(4r) the size of an original image. Randomness tests also show a good degree of randomness and security.
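    To make the reduction step concrete, the following is a minimal sketch of a single-level 2-D Haar transform that keeps only the LL subband, shrinking each secret to a quarter of its size. It uses an averaging (unnormalized) Haar variant written for illustration; it is not the authors' code.

```python
# Illustrative sketch of the reduction step only: one level of an
# averaging 2-D Haar transform keeps the LL subband, shrinking each
# secret image to a quarter of its size. Not the authors' code.
import numpy as np

def haar_ll(img):
    """Return the LL subband of a one-level 2-D Haar transform.
    `img` is a 2-D array with even height and width."""
    a = img.astype(np.float64)
    rows = (a[0::2, :] + a[1::2, :]) / 2.0        # low-pass over rows
    return (rows[:, 0::2] + rows[:, 1::2]) / 2.0  # low-pass over columns

# A 512x512 secret reduces to its 256x256 LL approximation; the LL
# subbands of all m secrets would then be combined and split into r
# subblocks before the (r, n) threshold sharing stage.
secret = np.random.randint(0, 256, (512, 512))
print(haar_ll(secret).shape)  # (256, 256)
```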

    Adaptive Traffic Fingerprinting for Darknet Threat Intelligence

    Darknet technology such as Tor has been used by various threat actors to organise illegal activities and exfiltrate data. As such, there is a case for organisations to block such traffic, or to try to identify when it is used and for what purposes. However, anonymity in cyberspace has always been a domain of conflicting interests: while it gives nefarious actors enough cover to masquerade their illegal activities, it is also the cornerstone of freedom of speech and privacy. We present a proof of concept for a novel algorithm that could form the fundamental pillar of a darknet-capable cyber threat intelligence platform. The solution can reduce the anonymity of Tor users, and considers the existing visibility of network traffic before optionally initiating targeted or widespread BGP interception. In combination with server HTTP response manipulation, the algorithm attempts to reduce the candidate data set by eliminating the client-side traffic that is least likely to be responsible for the server-side connections of interest. Our test results show that MITM-manipulated server responses lead to the expected changes being received by the Tor client. Using simulation data generated by the Shadow network simulator, we show that the detection scheme is effective, with a false positive rate of 0.001, while the sensitivity for detecting non-targets was 0.016 ± 0.127. Our algorithm could assist collaborating organisations willing to share their threat intelligence or cooperate during investigations.

    Comment: 26 pages
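    To give the candidate-elimination step some shape, here is a hypothetical sketch of how an injected server-side response marker (e.g. a response padded to a known size at a known time) might be correlated with client-side observations to prune the candidate set. Every name, threshold, and data structure is an assumption for illustration, not the paper's algorithm.

```python
# Hypothetical sketch of candidate elimination: after the server's HTTP
# response is manipulated, prune client-side flows that never reflect
# the marker. All names and thresholds here are assumptions.
from dataclasses import dataclass

@dataclass
class Flow:
    client_id: str
    timestamps: list  # packet arrival times in seconds
    sizes: list       # packet sizes in bytes

def prune_candidates(flows, marker_time, marker_size, window=2.0, tol=64):
    """Keep only clients whose traffic shows a packet near the injected
    marker size within `window` seconds of the manipulation."""
    return [f.client_id for f in flows
            if any(abs(t - marker_time) <= window and
                   abs(s - marker_size) <= tol
                   for t, s in zip(f.timestamps, f.sizes))]
```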

    Visual Secret Sharing and Related Works - A Review

    The accelerated development of network technology and internet applications has increased the significance of protecting digital data and images from unauthorized access and manipulation. Secret image sharing (SIS) is a crucial technique for protecting private digital images from illegal editing and copying. SIS can be classified into two types: single-secret sharing (SSS) and multi-secret sharing (MSS). In SSS, a single secret image is divided into multiple shares, while in MSS, multiple secret images are divided into multiple shares. Both SSS and MSS ensure that the original secret images cannot be reconstructed without the correct combination of shares. Several secret image sharing methods have therefore been developed on top of these two approaches, for example visual cryptography, steganography, discrete wavelet transforms, watermarking, and thresholding. All of these techniques can randomly divide a secret image into a large number of shares, none of which reveals any information to an intruder. This study examined various visual secret sharing (VSS) schemes as distinctive examples of participant secret-sharing methods. Several structures that generalize and enhance VSS were also discussed, and a comparative analysis of several methods across various attributes is given to better focus the future directions of secret image sharing. In general, the image quality produced by the developed methodologies is preferable to that achieved with the traditional visual secret-sharing methodology.
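    As a concrete instance of the SSS category surveyed above, the following sketch implements a textbook Naor-Shamir style (2, 2) visual cryptography scheme: each secret pixel expands into a 2x2 block on two shares, and stacking the shares (a pixelwise OR) reveals the secret. It is included for illustration only and is not taken from any of the reviewed papers.

```python
# Textbook (2, 2) visual cryptography sketch in the Naor-Shamir style.
# Convention: 1 = black subpixel, 0 = white.
import random
import numpy as np

# Two complementary 2x2 subpixel patterns (a subset of the classic
# pattern set is enough for a valid, if lower-entropy, scheme).
PATTERNS = [np.array([[1, 0], [1, 0]]),
            np.array([[0, 1], [0, 1]])]

def share_binary_image(secret):
    """Expand each secret pixel into a 2x2 block on two shares.
    White pixel (0): both shares get the same random pattern.
    Black pixel (1): the shares get complementary patterns."""
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=int)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = random.choice(PATTERNS)
            s1[2*i:2*i+2, 2*j:2*j+2] = p
            s2[2*i:2*i+2, 2*j:2*j+2] = p if secret[i, j] == 0 else 1 - p
    return s1, s2

def stack(s1, s2):
    """Stacking transparencies is a pixelwise OR: black secret pixels
    become fully black 2x2 blocks, white ones stay half black."""
    return np.maximum(s1, s2)
```

    Each share on its own is a uniformly random pattern regardless of the secret pixel, which is exactly the security property the surveyed schemes generalize.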

    Secret Sharing Approach for Securing Cloud-Based Image Processing

    Ph.D. thesis (Doctor of Philosophy).

    Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning

    Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants, in the federated learning setting, can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies.

    Comment: 2019 IEEE Symposium on Security and Privacy (SP)
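    One intuition behind the white-box attacks described above is that per-example gradients of the loss with respect to the model parameters behave differently on training members than on non-members. The sketch below shows a simple gradient-norm membership signal in PyTorch; `model`, `loss_fn`, and the threshold `tau` are placeholders, and this is a hedged illustration of the general idea rather than the paper's attack implementation.

```python
# Hedged sketch of one white-box signal: per-example gradients of the
# loss w.r.t. the parameters tend to be smaller on training members.
# Model, loss, and threshold are placeholders, not the paper's attack.
import torch

def gradient_norm_feature(model, loss_fn, x, y):
    """Return the L2 norm of dLoss/dParams for one example (x, y)."""
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(
        loss, [p for p in model.parameters() if p.requires_grad])
    return torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()

def threshold_attack(model, loss_fn, samples, tau):
    """Predict 'member' when the gradient norm falls below a threshold
    tau calibrated on known member / non-member examples."""
    return [gradient_norm_feature(model, loss_fn, x, y) < tau
            for x, y in samples]
```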