
    Dynamic virtual cluster cloud security using hybrid steganographic image authentication algorithm

    Storing data in a third-party cloud system raises serious data confidentiality concerns. Encryption techniques generally provide data confidentiality, but with limited functionality, since few operations are supported on encrypted data in cloud storage. Hence, developing a decentralized secure storage system with multiple support functions such as encryption, encoding, and forwarding becomes complicated as the storage system scales. This paper focuses on hiding image information using a specialized steganographic image authentication (SSIA) algorithm in clustered cloud systems. The SSIA algorithm is applied to virtual elastic clusters in a public cloud platform. Here, the SSIA algorithm embeds the image information using the Blowfish algorithm and genetic operators. Initially, Blowfish symmetric block encryption is applied over the image, and then the genetic operator is applied to re-encrypt the image information. The proposed algorithm provides improved security over the conventional Blowfish algorithm in a clustered cloud system.
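
    The abstract summarises the two-stage construction without implementation detail. As a rough illustration, the following Python sketch chains a Blowfish pass (via the pycryptodome library) with simple genetic crossover and mutation operators over the ciphertext bytes; the key, crossover point, and mutation mask are placeholder assumptions, not values from the paper.

```python
# Illustrative sketch of the two-stage SSIA idea: Blowfish encryption of the
# image bytes, then genetic operators (crossover, mutation) as a re-encryption
# pass. Key, crossover point, and mutation mask are arbitrary placeholders.
from Crypto.Cipher import Blowfish
from Crypto.Util.Padding import pad

def blowfish_encrypt(image_bytes: bytes, key: bytes) -> bytes:
    cipher = Blowfish.new(key, Blowfish.MODE_ECB)  # ECB keeps the sketch simple
    return cipher.encrypt(pad(image_bytes, Blowfish.block_size))

def genetic_reencrypt(data: bytes, crossover_point: int, mutation_mask: int) -> bytes:
    # Crossover: swap the two halves of the byte string at the crossover point.
    crossed = data[crossover_point:] + data[:crossover_point]
    # Mutation: flip selected bits in every byte via an XOR mask.
    return bytes(b ^ mutation_mask for b in crossed)

image_bytes = open("cover.png", "rb").read()   # hypothetical input image
stage1 = blowfish_encrypt(image_bytes, b"16-byte-demo-key")
stage2 = genetic_reencrypt(stage1, crossover_point=len(stage1) // 2,
                           mutation_mask=0b00100101)
```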

    Triple scheme based on image steganography to improve imperceptibility and security

    A foremost priority in the information technology and communication era is achieving an effective and secure steganography scheme for information hiding. Commonly, digital images are used as the cover for steganography owing to the redundancy in their representation, which makes the hidden data inconspicuous to intruders. Nevertheless, any steganography system launched over the internet can be attacked upon recognizing the stego cover. Presently, the design and development of an effective image steganography system faces several challenging issues, including low capacity, poor security, and imperceptibility. To overcome these issues, a new decomposition scheme was proposed for image steganography with a new approach known as the Triple Number Approach (TNA). In this study, three main stages were used to achieve the objectives and overcome the issues of image steganography, beginning with image and text preparation, followed by embedding, and culminating in extraction. Finally, the evaluation stage employed several evaluations to benchmark the results. This study presented several contributions. The first was a Triple Text Coding Method (TTCM), related to the preparation of secret messages prior to the embedding process. The second was a Triple Embedding Method (TEM), related to the embedding process. The third was related to security criteria based on a new partitioning of an image known as the Image Partitioning Method (IPM). The IPM proposed a random pixel selection, based on partitioning the image into three phases with three iterations of the Hénon map function. An enhanced Huffman coding algorithm was utilized to compress the secret message before the TTCM process. A standard dataset from the Signal and Image Processing Institute (SIPI) containing color and grayscale images of 512 × 512 pixels was utilised in this study. Different parameters were used to test the performance of the proposed scheme in terms of security and imperceptibility (image quality). For image quality, four important measurements were used: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Mean Square Error (MSE), and histogram analysis. For security, two measurements were used: Human Visual System (HVS) and Chi-square (χ2) attacks. In terms of PSNR and SSIM, the Lena grayscale image obtained results of 78.09 dB and 1, respectively. Meanwhile, the HVS and χ2 attacks obtained high results when compared to the existing schemes in the literature. Based on the findings, the proposed scheme gives evidence of increased capacity, imperceptibility, and security, overcoming the existing issues.
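
    The abstract names the Hénon map as the source of randomness for the IPM's pixel selection but gives no parameters. Below is a minimal sketch of the idea, with assumed initial conditions, the classic map constants (a = 1.4, b = 0.3), and an assumed folding of chaotic values into pixel coordinates.

```python
# Sketch of Henon-map-driven pseudo-random pixel selection, as the IPM's
# random selection is described; partitioning details, seeds, and the mapping
# from chaotic values to coordinates are assumptions for illustration.
def henon_pixel_sequence(n_pixels, width=512, height=512,
                         x0=0.1, y0=0.3, a=1.4, b=0.3):
    # Caller must ensure n_pixels <= width * height, or this never terminates.
    x, y = x0, y0
    positions, seen = [], set()
    while len(positions) < n_pixels:
        x, y = 1 - a * x * x + y, b * x          # Henon map iteration
        col = int(abs(x) * 1e6) % width          # fold chaotic value to a column
        row = int(abs(y) * 1e6) % height         # fold chaotic value to a row
        if (row, col) not in seen:               # avoid reusing a pixel
            seen.add((row, col))
            positions.append((row, col))
    return positions

embedding_order = henon_pixel_sequence(1000)     # first 1000 embedding pixels
```

    Because the sequence is fully determined by the initial conditions, a receiver sharing them can regenerate the same embedding order for extraction.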

    A new and adaptive security model for public communication based on correlation of data frames

    Recent developments in communication and information technologies, together with the emergence of the Internet of Things (IoT) and machine-to-machine (M2M) principles, create the need to protect data from multiple types of attacks. In this paper, a secure and high-capacity data communication model is proposed to protect the transmitted data based on identical frames between the secret and cover data. In this model, the cover data does not convey any embedded data (as in a normal steganography system), nor is the secret message modified (as in traditional cryptography techniques). Instead, the proposed model sends the positions of the cover frames that are identical to the secret frames to the receiver side in order to recover the secret message. One significant advantage of the proposed model is that the secret message can be considerably larger than the cover, even a hundred times larger. Accordingly, the experimental results demonstrate a superior performance in terms of capacity rate compared to traditional steganography techniques. Moreover, it has an advantage in terms of the bandwidth required to send the data, or the memory required to store it: steganography methods need bandwidth or memory of up to 3-5 times the original secret message, whereas the length of the secret key (the positions of the identical frames) that must be sent to the receiver exceeds the original secret message by only 25%. This model is suitable for applications requiring a high level of security and a high capacity rate with limited communication bandwidth or storage.
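
    The model's core operation, locating each secret frame inside the cover and transmitting only the matching positions, can be sketched as follows; the frame size and the first-match lookup are illustrative assumptions, and a real system would need a fallback for frames absent from the cover.

```python
# Sketch of the frame-matching idea: the secret is split into fixed-size bit
# frames, each frame is located inside the cover, and only the positions are
# transmitted. Frame size and lookup strategy are illustrative assumptions.
def to_frames(bits: str, size: int):
    return [bits[i:i + size] for i in range(0, len(bits), size)]

def encode_positions(secret_bits: str, cover_bits: str, frame_size: int = 8):
    # Index the first occurrence of every distinct cover frame for O(1) lookup.
    index = {}
    for pos, frame in enumerate(to_frames(cover_bits, frame_size)):
        index.setdefault(frame, pos)
    # A secret frame absent from the cover would need a fallback; omitted here.
    return [index[frame] for frame in to_frames(secret_bits, frame_size)]

def decode_positions(positions, cover_bits: str, frame_size: int = 8):
    cover_frames = to_frames(cover_bits, frame_size)
    return "".join(cover_frames[p] for p in positions)
```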

    Secure Surveillance Framework for IoT Systems Using Probabilistic Image Encryption

    This paper proposes a secure surveillance framework for Internet of Things (IoT) systems by intelligent integration of video summarization and image encryption. First, an efficient video summarization method is used to extract the informative frames using the processing capabilities of visual sensors. When an event is detected from keyframes, an alert is sent to the concerned authority autonomously. As the final decision about an event mainly depends on the extracted keyframes, their modification during transmission by attackers can result in severe losses. To tackle this issue, we propose a fast probabilistic and lightweight algorithm for the encryption of keyframes prior to transmission, considering the memory and processing requirements of constrained devices, which increases its suitability for IoT systems. Our experimental results verify the effectiveness of the proposed method in terms of robustness, execution time, and security compared to other image encryption algorithms. Furthermore, our framework can reduce the bandwidth, storage, transmission cost, and the time required for analysts to browse large volumes of surveillance data and make decisions about abnormal events, such as suspicious activity detection and fire detection in surveillance applications.

    This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2016R1A2B4011712). Paper no. TII-17-2066.

    Muhammad, K.; Hamza, R.; Ahmad, J.; Lloret, J.; Wang, H.; Baik, S. W. (2018). Secure Surveillance Framework for IoT Systems Using Probabilistic Image Encryption. IEEE Transactions on Industrial Informatics, 14(8), 3679-3689. https://doi.org/10.1109/TII.2018.2791944
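
    The paper's exact summarization and cipher details are not given in the abstract. The sketch below pairs a simple frame-difference keyframe test with a probabilistic XOR keystream cipher (a fresh random nonce per frame makes repeated encryptions of the same frame differ); it stands in for, and is far weaker than, a production lightweight cipher.

```python
# Hedged sketch: cheap keyframe selection plus a probabilistic keystream XOR.
# The nonce is what makes encryption probabilistic; the keystream generator
# here is NOT cryptographically strong and only illustrates the structure.
import numpy as np

def is_keyframe(prev_gray: np.ndarray, cur_gray: np.ndarray, thresh=15.0) -> bool:
    # Mean absolute difference as a cheap informativeness score.
    diff = np.abs(cur_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return float(diff.mean()) > thresh

def encrypt_frame(frame: np.ndarray, key: bytes):
    # key is assumed to be 16 bytes, matching the nonce length.
    nonce = np.random.bytes(16)                      # fresh nonce => probabilistic
    seed = int.from_bytes(bytes(a ^ b for a, b in zip(key, nonce)), "big")
    rng = np.random.default_rng(seed)                # keystream generator
    keystream = rng.integers(0, 256, size=frame.shape, dtype=np.uint8)
    # XOR cipher; the receiver regenerates the keystream from (key, nonce).
    return frame ^ keystream, nonce
```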

    An improved image steganography scheme based on distinction grade value and secret message encryption

    Steganography is an emerging and highly demanded technique for secure information communication over the internet using a secret cover object. It can be used for a wide range of applications such as the safe circulation of secret data in intelligence, industry, health care, habitat, online voting, mobile banking, and the military. Commonly, digital images are used as covers for steganography owing to the redundancy in their representation, which makes the hidden data inconspicuous to intruders, hackers, adversaries, and unauthorized users. Still, any steganography system launched over the Internet can be cracked upon recognizing the stego cover. Thus, undetectability, which involves data imperceptibility or concealment and security, is the significant trait of any steganography system. Presently, the design and development of an effective image steganography system faces several challenges, including low capacity, poor robustness, and imperceptibility. To surmount these limitations, it is important to improve the capacity and security of the steganography system while maintaining a high peak signal-to-noise ratio (PSNR). Based on these factors, this study aimed to design and develop a distinction grade value (DGV) method to effectively embed the secret data into a cover image and achieve a robust steganography scheme. The design and implementation of the proposed scheme involved three phases. First, a new encryption method called shuffle the segments of secret message (SSSM) was incorporated with an enhanced Huffman compression algorithm to improve the text security and payload capacity of the scheme. Second, a Fibonacci-based image transformation decomposition method was used to extend the pixel bit-depth from 8 to 12 bits, improving the robustness of the scheme. Third, an improved embedding method was utilized by integrating random block/pixel selection with the DGV and implicit secret key generation to enhance the imperceptibility of the scheme. The performance of the proposed scheme was assessed experimentally in terms of imperceptibility, security, robustness, and capacity. The standard USC-SIPI image dataset was used as the benchmark for the performance evaluation and comparison of the proposed scheme with previous works. The resistance of the proposed scheme was tested against statistical, χ2, histogram, and non-structural steganalysis detection attacks. The obtained PSNR values revealed that the proposed DGV scheme accomplishes higher imperceptibility and security, as well as higher capacity, compared to previous works. In short, the proposed steganography scheme outperformed the commercially available data hiding schemes, thereby resolving the existing issues.
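
    The Fibonacci-based decomposition that extends a pixel from 8 to 12 bits can be illustrated with a Zeckendorf-style greedy expansion over twelve Fibonacci weights; the weight set below is an assumption chosen to cover the 0-255 range, not necessarily the paper's exact construction.

```python
# Sketch of the Fibonacci bit-plane extension the abstract describes: an 8-bit
# pixel value is re-expressed over 12 Fibonacci weights (Zeckendorf form),
# giving 12 virtual bit-planes to embed into.
FIB = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]   # 12 weights, covers 0..255

def to_fibonacci_bits(value: int) -> list:
    bits = [0] * len(FIB)
    for i in range(len(FIB) - 1, -1, -1):  # greedy = Zeckendorf representation
        if FIB[i] <= value:
            bits[i], value = 1, value - FIB[i]
    return bits

def from_fibonacci_bits(bits: list) -> int:
    return sum(w for w, b in zip(FIB, bits) if b)

assert from_fibonacci_bits(to_fibonacci_bits(200)) == 200  # round-trips exactly
```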

    Object Tracking in Varying Lighting Conditions for Fog-based Intelligent Surveillance of Public Spaces

    With the rapid development of computer vision and artificial intelligence, cities are becoming more and more intelligent. Recently, as intelligent surveillance has been applied in all kinds of smart city services, object tracking has attracted more attention. However, two serious problems have blocked the development of visual tracking in real applications. The first is lower performance under intense illumination variation; the second is slow speed. This paper addresses these two problems by proposing a correlation-filter-based tracker. A fog computing platform was deployed to accelerate the proposed tracking approach. The tracker was constructed from multiple positions' detections and alternate templates (MPAT). The detection position is repositioned according to the speed of the target estimated by an optical flow method, and the alternate template is stored with a template update mechanism, all computed at the edge. Experimental results on large-scale public benchmark datasets showed the effectiveness of the proposed method in comparison with state-of-the-art methods.
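
    The repositioning step, shifting the next detection window by the target motion estimated with optical flow, might look roughly like the following OpenCV sketch; the point sampling and window handling are assumptions, not the authors' code.

```python
# Sketch of the MPAT repositioning idea: estimate the target's displacement
# with Lucas-Kanade optical flow and shift the next detection window by it.
import cv2
import numpy as np

def reposition(prev_gray, cur_gray, bbox):
    x, y, w, h = bbox
    # Sample corner points inside the current bounding box.
    pts = cv2.goodFeaturesToTrack(prev_gray[y:y + h, x:x + w], maxCorners=30,
                                  qualityLevel=0.01, minDistance=3)
    if pts is None:
        return bbox
    pts = pts + np.array([[x, y]], dtype=np.float32)   # box coords -> image coords
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good = status.reshape(-1) == 1
    if not good.any():
        return bbox
    # Mean displacement of successfully tracked points = estimated motion.
    dx, dy = (nxt[good] - pts[good]).reshape(-1, 2).mean(axis=0)
    return (int(x + dx), int(y + dy), w, h)            # shifted detection window
```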

    Secure and Robust Fragile Watermarking Scheme for Medical Images

    With advances in computer-based communication and health services over the past decade, the need for image security has become urgent to address the requirements of both safety and non-safety medical applications. This paper proposes a new fragile-watermarking-based scheme for image authentication and self-recovery in medical applications. The proposed scheme locates image tampering as well as recovers the original image. A host image is broken into 4×4 blocks, and Singular Value Decomposition (SVD) is applied by inserting the traces of block-wise SVD into the Least Significant Bits (LSBs) of the image pixels to detect transformation of the original image. Two authentication bits, namely block authentication and self-recovery bits, were used to survive the vector quantization attack. The insertion of self-recovery bits is determined with the Arnold transformation, which recovers the original image even after a high tampering rate. SVD-based watermarking information improves the image authentication and provides a way to detect the different attacked areas. The proposed scheme was tested against different types of attacks, such as text removal, text insertion, and copy-and-paste attacks.
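
    The Arnold transformation that scatters the self-recovery bits can be sketched as the classic cat map on block coordinates; the block-grid size and iteration count below are illustrative assumptions.

```python
# Sketch of Arnold (cat map) scrambling used to place self-recovery bits:
# each block coordinate (x, y) maps to ((x + y) mod N, (x + 2y) mod N), so
# recovery data lands far from the block it protects and survives local tampering.
def arnold_transform(x: int, y: int, n_blocks: int, iterations: int = 1):
    for _ in range(iterations):
        x, y = (x + y) % n_blocks, (x + 2 * y) % n_blocks
    return x, y

# For a 512x512 image with 4x4 blocks there are 128x128 block positions; a
# block's recovery bits would be stored in the LSBs of its mapped partner block.
partner = arnold_transform(10, 42, n_blocks=128, iterations=5)
```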

    Introductory Computer Forensics

    INTERPOL (the International Criminal Police Organization) has built cybercrime programs to keep up with emerging cyber threats, and aims to coordinate and assist international operations for fighting crimes involving computers. Although significant international efforts are being made in dealing with cybercrime and cyber-terrorism, finding effective, cooperative, and collaborative ways to deal with complicated cases that span multiple jurisdictions has proven difficult in practice.

    Persistent Homology Tools for Image Analysis

    Topological Data Analysis (TDA) is a new field of mathematics that has emerged rapidly since the first decade of this century from various works in algebraic topology and geometry. The goal of TDA and its main tool, persistent homology (PH), is to provide topological insight into complex and high-dimensional datasets. We take this premise onboard to get more topological insight from digital image analysis and to quantify tiny low-level distortions that are undetectable except possibly by highly trained persons. Such image distortions could be caused intentionally (e.g. by morphing and steganography) or naturally in abnormal human tissue/organ scan images as a result of the onset of cancer or other diseases. The main objective of this thesis is to design new image analysis tools based on persistent homological invariants representing simplicial complexes on sets of pixel landmarks over a sequence of distance resolutions. We first propose innovative automatic techniques to select image pixel landmarks to build a variety of simplicial topologies from a single image. The effectiveness of each image landmark selection is demonstrated by testing on different image tampering problems such as morphed face detection, steganalysis, and breast tumour detection. Vietoris-Rips simplicial complexes are constructed from the image landmarks at increasing distance thresholds, and topological (homological) features are computed at each threshold and summarized in a form known as persistent barcodes. We vectorise the space of persistent barcodes using a technique known as persistent binning, whose strength we demonstrate for various image analysis purposes. Different machine learning approaches are adopted to develop automatic detection of tiny texture distortions in many image analysis applications. The homological invariants used in this thesis are the 0- and 1-dimensional Betti numbers. We developed an innovative approach to designing persistent homology (PH) based algorithms for automatic detection of the above-described types of image distortion. In particular, we developed the first PH detector of morphing attacks on passport face biometric images. We demonstrate the significant accuracy of two such morph detection algorithms with four types of automatically extracted image landmarks: Local Binary Patterns (LBP), 8-neighbour super-pixels (8NSP), Radial-LBP (R-LBP), and centre-symmetric LBP (CS-LBP). Using any of these techniques yields several persistent barcodes that summarise persistent topological features and help gain insight into complex hidden structures not amenable to other image analysis methods. We also demonstrate the significant success of a similarly developed PH-based universal steganalysis tool capable of detecting secret messages hidden inside digital images. We further argue, through a pilot study, that building PH records from digital images can differentiate malignant from benign breast tumours using digital mammographic images. The research presented in this thesis creates new opportunities to build real applications based on TDA and demonstrates many research challenges in a variety of image processing/analysis tasks. For example, we describe a TDA-based exemplar image inpainting technique (TEBI), superior to the existing exemplar algorithm, for the reconstruction of missing image regions.
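
    The abstract does not name the software used; as an assumed illustration, the ripser Python library can reproduce the core pipeline described here: landmark pixels to a Vietoris-Rips filtration to 0- and 1-dimensional barcodes, followed by a simple binning of bar lifetimes into a feature vector.

```python
# Minimal sketch of the barcode pipeline: landmarks -> Vietoris-Rips
# persistence diagrams -> binned lifetimes. Library choice and bin settings
# are assumptions for illustration.
import numpy as np
from ripser import ripser

def barcodes_from_landmarks(landmarks: np.ndarray):
    # landmarks: (n, 2) array of pixel coordinates selected from one image
    # (e.g. by an LBP-based detector, as in the thesis).
    result = ripser(landmarks, maxdim=1)     # Vietoris-Rips persistence
    h0, h1 = result["dgms"]                  # Betti-0 and Betti-1 intervals
    return h0, h1

# A crude stand-in for persistent binning: histogram the interval lifetimes.
def bin_lifetimes(diagram: np.ndarray, bins: int = 16, max_life: float = 50.0):
    lifetimes = diagram[:, 1] - diagram[:, 0]
    lifetimes = lifetimes[np.isfinite(lifetimes)]    # drop the infinite H0 bar
    hist, _ = np.histogram(lifetimes, bins=bins, range=(0.0, max_life))
    return hist
```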