594 research outputs found

    RAISE THE STRENGTH OF CRYPTOSYSTEM USING VARIETY EFFECTS

    Nowadays, images have become one of the most common types of data transmitted over the Internet. Some of these images carry secret information, so an effective cryptosystem for protecting the information inside them has become an urgent need. Many traditional encryption methods cannot achieve a high degree of protection. This paper presents a nontraditional method for image encryption that applies substitution and transposition operations in different ways to both the key and the data. A series of linear and circular rotation operations (left, right, up, and down) on the bits and bytes of the key and data creates strong confusion effects, while XOR Boolean operations applied to the key and data create diffusion effects. Together, these two types of operations produce a large set of keys, and using this large number of different keys to encrypt an image raises the strength of the encryption system and achieves a high degree of protection for the image. To test the security and performance of the encryption system, it was applied to different images and the results were analyzed against key space, key sensitivity, statistical analysis, and other criteria. These tests show that the encryption system can be used effectively to protect digital images.
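    As a loose illustration of the two primitive types the abstract names, the following Python sketch pairs an 8-bit circular rotation (confusion) with an XOR against the key (diffusion). The round structure and function names are assumptions for illustration only, not the paper's actual algorithm.

        # Toy confusion/diffusion round: circular bit rotation plus XOR.
        def rotl8(value: int, shift: int) -> int:
            """Circularly rotate an 8-bit value left by `shift` bits."""
            shift %= 8
            return ((value << shift) | (value >> (8 - shift))) & 0xFF

        def encrypt_bytes(data: bytes, key: bytes) -> bytes:
            """Rotate each byte by a key-derived amount (confusion), then XOR it (diffusion)."""
            return bytes(rotl8(b, key[i % len(key)] & 7) ^ key[i % len(key)]
                         for i, b in enumerate(data))

        def decrypt_bytes(data: bytes, key: bytes) -> bytes:
            """Invert the round: undo the XOR, then rotate back the other way."""
            return bytes(rotl8(b ^ key[i % len(key)], 8 - (key[i % len(key)] & 7))
                         for i, b in enumerate(data))

        key = bytes([0x3C, 0xA5, 0x5A, 0xC3])
        assert decrypt_bytes(encrypt_bytes(b"pixel data", key), key) == b"pixel data"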

    Key Generation Technique based on Triangular Coordinate Extraction for Hybrid Cubes

    Cryptographic algorithms play an important role in information security, ensuring the protection of data in transit and in storage. Hybrid Cubes (HC), generated from permutations and combinations of integers, are used to construct the encryption and decryption keys of a non-binary block cipher. In this study, we extend the hybrid cube encryption algorithm (HiSea) and our earlier Triangular Coordinate Extraction (TCE) technique for HC by increasing the complexity of the underlying mathematics. We propose a new TCE-based key generation technique in which the Hybrid Cube surface (HCs) is divided into four quarters by the intersection of the primary and secondary diagonals, and each quarter is rotated about its rotation point. Rotating the HCs improves the overall security of the HC and increases the complexity of the key schedule design. Brute-force and entropy tests applied to the experimental results show that the proposed technique is suitable for key generation and free from predictable key patterns.
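    The quarter-rotation step can be sketched as follows. One loud caveat: for brevity this sketch splits the surface into four square quadrants, whereas the paper partitions it along the primary and secondary diagonals into triangular quarters, so the partition geometry here is a simplifying assumption.

        # Sketch: rotate each quadrant of an n x n hybrid-cube surface matrix.
        import numpy as np

        def rotate_quarters(surface: np.ndarray) -> np.ndarray:
            """Rotate the four quadrants of an even-sized square matrix by 90 degrees."""
            n = surface.shape[0]
            assert surface.shape == (n, n) and n % 2 == 0, "even square matrix expected"
            h = n // 2
            out = surface.copy()
            for r0, c0 in ((0, 0), (0, h), (h, 0), (h, h)):
                out[r0:r0 + h, c0:c0 + h] = np.rot90(out[r0:r0 + h, c0:c0 + h])
            return out

        # Example: derive a key block from a 4x4 surface of integers.
        key_block = rotate_quarters(np.arange(1, 17).reshape(4, 4))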

    Single-shot compressed ultrafast photography: a review

    Compressed ultrafast photography (CUP) is a burgeoning single-shot computational imaging technique that provides an imaging speed as high as 10 trillion frames per second and a sequence depth of up to a few hundred frames. This technique synergizes compressed sensing and the streak camera technique to capture nonrepeatable ultrafast transient events with a single shot. With recent unprecedented technical developments and extensions of this methodology, it has been widely used in ultrafast optical imaging and metrology, ultrafast electron diffraction and microscopy, and information security protection. We review the basic principles of CUP, its recent advances in data acquisition and image reconstruction, its fusions with other modalities, and its unique applications in multiple research fields
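    In the operator notation commonly used in the CUP literature (a sketch, with exact operator definitions varying by system), the measurement E of a dynamic scene I(x, y, t) and its reconstruction can be written as

        E(m,n) = \mathbf{T}\,\mathbf{S}\,\mathbf{C}\,I(x,y,t), \qquad
        \hat{I} = \arg\min_{I} \tfrac{1}{2}\lVert E - \mathbf{T}\mathbf{S}\mathbf{C}\,I \rVert_2^2 + \lambda\,\Phi(I),

    where C is the pseudo-random binary spatial encoding, S the streak camera's temporal shearing, T the spatiotemporal integration on the sensor, and \Phi a sparsity-promoting regularizer such as total variation.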

    Irish Machine Vision and Image Processing Conference Proceedings 2017


    Entropy in Image Analysis III

    Image analysis can be applied to rich and varied scenarios; the aim of this young research field is therefore not only to mimic the human visual system. Image analysis is among the main methods computers use today, and there is a growing body of knowledge that, thanks to artificial intelligence, they will be able to handle in a fully unsupervised manner in the future. The articles published in this book clearly point toward such a future.
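    Since the collection is organized around entropy, a minimal self-contained example may help: the Shannon entropy of a grayscale image, in bits per pixel, computed from its intensity histogram (the 8-bit depth is an illustrative assumption).

        # Shannon entropy of an 8-bit grayscale image.
        import numpy as np

        def image_entropy(img: np.ndarray) -> float:
            """Entropy in bits/pixel from the 256-bin intensity histogram."""
            hist = np.bincount(img.ravel(), minlength=256).astype(float)
            p = hist / hist.sum()
            p = p[p > 0]                     # empty bins contribute 0 * log 0 := 0
            return float(-(p * np.log2(p)).sum())

        # Uniform noise approaches the 8 bits/pixel maximum.
        noise = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
        print(image_entropy(noise))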

    PIRANHA: an engine for a methodology of detecting covert communication via image-based steganography

    In current cutting-edge steganalysis research, model building and machine learning have been used to detect steganography. However, these models are computationally and cognitively cumbersome, and each is targeted at one and only one type of steganography. The model built in this thesis has shown the capability to detect a class or family of steganography, while also demonstrating that a minimalist model for steganalysis is viable. The notion of detecting steganographic primitives or families has not been discussed in the literature, and it would serve well as a first-pass detection methodology. The model built here serves this end well; it is posited to work as a front-end, broad-pass filter for the more computationally advanced and directed steganalytic algorithms currently in use. This thesis attempts to convey a view of steganography and steganalysis that is more utilitarian and immediately useful in everyday scenarios. This differs from a good many publications that treat the topic as relegated to cloak-and-dagger information passing. The view of steganography as primarily a communications tool usable by petty information brokers and the like directs the text and helps ensure that the notion of steganography as a digital dead-drop box is abandoned in favor of a more grounded approach. As such, the model presented underperforms the specialized models in the current literature, but it also makes use of a large image sample space (747 images) and of images that are contextually diverse and representative of those in wide use. In future applications by law-enforcement or corporate officials, it is hoped that the model presented in this thesis can aid in rapid and targeted responses without placing undue strain on an eventual human operator. To that end, a design constraint adopted for this research favored false negatives over false positives: this helps ensure that, in the event of an alert, it is worthwhile to apply a more directed attack against the flagged image.
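    A broad first-pass filter of the kind described can be sketched as below. The statistic (balance of the least-significant-bit plane) and the conservative alert threshold are illustrative assumptions, not PIRANHA's actual model; the high threshold encodes the favor-false-negatives constraint, so an alert carries strong evidence.

        # First-pass filter: alert only when the LSB plane looks suspiciously random.
        import numpy as np

        def lsb_alert(img: np.ndarray, threshold: float = 0.95) -> bool:
            """Return True only on strong evidence, trading false positives for false negatives."""
            ones = (img.ravel() & 1).mean()         # fraction of 1-bits in the LSB plane
            balance = 1.0 - 2.0 * abs(ones - 0.5)   # 1.0 = perfectly balanced plane
            # Natural images tend to have a biased LSB plane; near-perfect balance
            # across many pixels hints at embedded (near-random) payload bits.
            return balance > threshold

        stego_like = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
        print(lsb_alert(stego_like))                # likely True: its LSBs are balanced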

    A practical comparison between two powerful PCC codecs

    Recent advances in the consumption of 3D content create the need for efficient ways to visualize and transmit it. As a result, methods for acquiring that content have been evolving, leading to new representations, namely point clouds and light fields. A point cloud represents a set of points with Cartesian coordinates (x, y, z), each of which may carry further attributes (color, material, texture, etc.). This kind of representation changes how 3D content is consumed and has a wide range of applications, from video games and virtual reality to medical ones. However, because this type of data carries so much information, it is data-heavy, making the storage and transmission of content a daunting task. To address this, MPEG created a point cloud coding standardization project, giving birth to V-PCC (Video-based Point Cloud Coding) and G-PCC (Geometry-based Point Cloud Coding) for static content. First, a general analysis of point clouds is made, spanning their possible uses and their acquisition. Second, point cloud codecs are studied, namely V-PCC and G-PCC from MPEG. Then, the state of the art in quality evaluation is reviewed, covering both subjective and objective evaluation. Finally, the JPEG Pleno Point Cloud activity, in which an active collaboration took place, is reported, with comparative results for the two codecs and the metrics used.
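    For the objective side of the evaluation, the point-to-point geometry metric (often called D1 PSNR) commonly used when comparing codecs such as V-PCC and G-PCC can be sketched as follows; the brute-force nearest-neighbor search and the choice of peak value are simplifying assumptions.

        # Point-to-point (D1) geometry PSNR between two point clouds.
        import numpy as np

        def d1_psnr(reference: np.ndarray, degraded: np.ndarray, peak: float) -> float:
            """PSNR from each degraded point's nearest-neighbor distance to the (N, 3) reference cloud."""
            d2 = ((degraded[:, None, :] - reference[None, :, :]) ** 2).sum(axis=2)
            mse = d2.min(axis=1).mean()              # mean squared point-to-point error
            return float(10.0 * np.log10(peak ** 2 / mse))

        ref = np.random.rand(500, 3) * 1023                  # toy cloud, 10-bit coordinates
        deg = ref + np.random.normal(0.0, 0.5, ref.shape)    # mild geometric distortion
        print(d1_psnr(ref, deg, peak=1023.0))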