
    Enhanced image encryption scheme with new MapReduce approach for big size images

    Achieving a secure image encryption scheme (IES) for sensitive and confidential data communications, especially in a Hadoop environment, is challenging. An accurate and secure cryptosystem for colour images requires the generation of intricate secret keys that protect the images from diverse attacks. To attain this goal, this work proposed an improved shuffled confusion-diffusion based colour IES driven by a hyper-chaotic map of the plain image. First, five different sequences of random numbers were generated. Two of the sequences were used to shuffle the image pixels and bits, while the remaining three were used to XOR the values of the image pixels. Performance of the developed IES was evaluated in terms of key space size, correlation coefficient, entropy, mean squared error (MSE), peak signal-to-noise ratio (PSNR) and differential analysis. The values of correlation coefficient (0.000732), entropy (7.9997), PSNR (7.61) and MSE (11258) were better, against various attacks, than those of existing techniques, and the developed IES outperformed comparable cryptosystems. It is thus asserted that the developed IES can be advantageous for encrypting big data sets on parallel machines.

    The developed IES was also implemented in a Hadoop environment using MapReduce to evaluate its performance against known attacks. In this process, the given image was first divided and characterized in a key-value format. The Map function was then invoked for every key-value pair by implementing a mapper; it processed the data splits in parallel, without communication between map processes, consuming a series of key/value pairs and generating zero or more key/value pairs in turn. The Map function also divided the input image into partitions before generating the secret key and XOR matrix, which were used to encrypt the image. The Reduce function merged the resultant images from the Map tasks to produce the final image. When evaluated against known attacks on both the standard dataset and big-size images, the PSNR of the developed IES did not exceed 7.61 and its correlation coefficient did not exceed 0.000732. As the handling of big-size images differs from that of standard-size images, the findings of this study suggest that the developed IES could be most beneficial for big data and big-size images.
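
    The pixel-level core of such a scheme, and the MapReduce split around it, can be sketched as follows. This is a minimal illustration only: a seeded NumPy generator stands in for the hyper-chaotic key stream, and the tile size, seeds, and function names are assumptions for the sketch rather than the scheme's actual parameters.

        import numpy as np

        def encrypt_tile(tile, seed):
            """Confusion-diffusion on one colour tile (H, W, 3), dtype uint8."""
            rng = np.random.default_rng(seed)  # stand-in for the hyper-chaotic source
            flat = tile.reshape(-1)
            n = flat.size

            # Sequences 1-2 (confusion): permute byte positions, then rotate
            # the bits of every byte by a key-dependent amount.
            flat = flat[rng.permutation(n)]
            rot = rng.integers(0, 8, size=n).astype(np.uint8)
            flat = ((flat << rot) | (flat >> ((8 - rot) % 8))).astype(np.uint8)

            # Sequences 3-5 (diffusion): XOR each colour channel with a keystream.
            out = flat.reshape(tile.shape)
            for ch in range(out.shape[-1]):
                out[..., ch] ^= rng.integers(0, 256, size=out.shape[:-1], dtype=np.uint8)
            return out

        # MapReduce view: each map task encrypts one horizontal split of the
        # image, keyed by its row offset; the reduce step reassembles them.
        def map_encrypt(kv, base_seed=1234):
            offset, tile = kv
            return offset, encrypt_tile(tile, base_seed + offset)

        def reduce_merge(pairs):
            return np.vstack([tile for _, tile in sorted(pairs)])

        img = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
        splits = [(r, img[r:r + 128]) for r in range(0, 512, 128)]
        cipher = reduce_merge([map_encrypt(kv) for kv in splits])

    Decryption would run the same key schedule and apply the inverse steps in reverse order, which is why the seed (key) must be available to every map task.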

    Digital watermarking and novel security devices

    EThOS - Electronic Theses Online Service, United Kingdom

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application in which information must be extracted from images. The analysis requires highly sophisticated numerical and analytical methods, particularly in medicine, security, and other fields where the results of the processing are of vital importance. This is evident from the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.

    Performance comparison of intrusion detection systems and application of machine learning to Snort system

    This study investigates the performance of two open-source intrusion detection systems (IDSs), Snort and Suricata, in accurately detecting malicious traffic on computer networks. Snort and Suricata were installed on two different but identical computers, and their performance was evaluated at a network speed of 10 Gbps. Suricata could process network traffic at a higher rate than Snort, with a lower packet drop rate, but it consumed more computational resources. Snort had higher detection accuracy and was therefore selected for further experiments. Because Snort triggered a high rate of false positive alarms, a Snort adaptive plug-in was developed. To select the best-performing algorithm for the plug-in, an empirical study was carried out with different learning algorithms, and the Support Vector Machine (SVM) was selected. A hybrid of SVM and fuzzy logic produced better detection accuracy, but the best result was achieved using an SVM optimised with the firefly algorithm, giving a false positive rate (FPR) of 8.6% and a false negative rate (FNR) of 2.2%. The novelty of this work lies in the performance comparison of two IDSs at 10 Gbps and the application of hybrid and optimised machine learning algorithms to Snort.
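
    As a rough illustration of the machine-learning stage, the sketch below trains an SVM to separate true attacks from false alarms and reports the resulting FPR and FNR. It assumes scikit-learn and synthetic data; the per-alert features and the hyperparameter values (which the study tuned with the firefly algorithm) are placeholders, not the thesis's actual configuration.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # Hypothetical per-alert features (e.g. packet size, rate, priority).
        X = rng.normal(size=(1000, 5))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = true attack

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = SVC(kernel="rbf", C=10.0, gamma=0.1)  # values a tuner would search over
        clf.fit(X_tr, y_tr)
        pred = clf.predict(X_te)

        # FPR: share of benign alerts wrongly flagged; FNR: share of attacks missed.
        fpr = np.mean(pred[y_te == 0] == 1)
        fnr = np.mean(pred[y_te == 1] == 0)
        print(f"FPR={fpr:.3f}  FNR={fnr:.3f}")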

    A Survey on Biometrics and Cancelable Biometrics Systems

    Nowadays, biometric systems have replaced password- or token-based authentication in many fields to improve security. However, biometric systems are themselves vulnerable to security threats, and unlike passwords, biometric templates cannot be replaced if lost or compromised. To deal with compromised templates, template protection schemes evolved to make replacement possible. Cancelable biometrics is one such scheme: a feature-domain transformation in which a distorted version of the biometric template is generated and matched in the transformed domain, so that the template can be reissued if the stored version is stolen or lost. This paper reviews the state of the art in biometric authentication systems and cancelable biometric systems, with an elaborate focus on cancelable biometrics, showing its advantages over standard biometric systems through generalized standards and guidelines drawn from the literature. We also propose a highly secure method for cancelable biometrics using a non-invertible function based on the Discrete Cosine Transform (DCT) and Huffman encoding. We tested and evaluated the proposed method for 50 users and achieved good results.
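
    The non-invertible transform can be pictured with a short sketch. This is an illustrative reading of the DCT-based idea only: it omits the paper's Huffman-encoding stage, and the coefficient-discarding transform, parameter names, and matching score are assumptions for the example.

        import numpy as np
        from scipy.fft import dct

        def cancelable_template(features, user_key, keep=32):
            """DCT the feature vector, then keep a key-selected coefficient
            subset with key-dependent signs. Discarding coefficients makes
            the mapping many-to-one (non-invertible); changing `user_key`
            reissues a fresh template from the same biometric."""
            coeffs = dct(np.asarray(features, dtype=float), norm="ortho")
            rng = np.random.default_rng(user_key)
            idx = rng.choice(coeffs.size, size=keep, replace=False)
            signs = rng.choice([-1.0, 1.0], size=keep)
            return signs * coeffs[idx]

        # Matching is done in the transformed domain, e.g. cosine similarity.
        probe = cancelable_template(np.random.rand(128), user_key=42)
        stored = cancelable_template(np.random.rand(128), user_key=42)
        score = probe @ stored / (np.linalg.norm(probe) * np.linalg.norm(stored))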

    Sonic utopia and social dystopia in the music of Hendrix, Reznor and Deadmau5

    Twentieth-century popular music is fundamentally associated with electronics in its creation and recording, consumption, modes of dissemination, and playback. Traditional musical analysis, placing primacy on notated music, generally focuses on harmony, melody, and form, with issues of timbre and postproduction effects remaining largely unstudied. Interdisciplinary methodological practices address these limitations and can help broaden the analytical scope of popular idioms. Grounded in Jacques Attali's critical theories about the political economy of music, this dissertation investigates how the subversive noise of electronic sound challenges a controlling order and predicts broad cultural realignment. This study demonstrates how electronic noise, as an extra-musical element, creates modern soundscapes that require a new mapping of musical form and social intent. I further argue that the use of electronics in popular music signifies a technologically obsessed postwar American culture moving rapidly towards an online digital revolution. I examine how electronic music technology introduces new sounds concurrent with generational shifts, projects imagined utopian and dystopian futures, and engages the tension between automated modern life and emotionally validating musical communities in real and virtual spaces. Chapter One synthesizes this interdisciplinary American studies project with the growing scholarship of sound studies in order to construct theoretical models for popular music analysis drawn from the fields of musicology, history, and science and technology studies. Chapter Two traces the emergence of the electronic synthesizer as a new sound that facilitated the transition of a technological postwar American culture into the politicized counterculture of the 1960s. The following three chapters provide case studies of individual popular artists' use of electronic music technology to express societal and political discontent: 1) Jimi Hendrix's application of distortion and stereo effects to narrate an Afrofuturist consciousness in the 1960s; 2) Trent Reznor's aggressive industrial rejection of Conservatism in the 1980s; and 3) Deadmau5's mediation of online life through computer-based production and performance in the 2000s. Lastly, this study extends existing discussions within sound studies to consider the cultural implications of music technology, noise politics, electronic timbre, multitrack audio, digital analytical techniques, and online communities built through social media.

    Paintingphotogdigital: From Hybridity to Synthesis in the Age of Medium Equivalence

    This thesis questions whether it is possible to synthesise material-based painting, photography, and digitally created and manipulated imagery in single artworks. To answer this question, practice-based research comprising a series of experiments explores physically conjoining painting with photography and the digital in picture-making. The investigation tests painting in its relationship with the other mediums, which adds to the current debates around painting's position in contemporary art practices, including "painting in the expanded field." Painting has always had a contested relationship with photography, with the older discipline adopting the newer medium's visual languages whilst freeing itself from the constraints of representation. For nearly two hundred years, painting's repositioning in relation to photography has constantly redefined the traditional medium's meaning and ensured its validity as a practice is re-asserted. As new data-based technologies expand into the aesthetic consciousness, painting also now locates itself against the digital to continue this self-renewal. However, whilst painters site their medium against either photography or the digital, there is little in the current art discourse that engages material-based painting with photography and the digital in direct combination as a means of further interrogating two-dimensional image-making. It is surprising that in a post-medium age, where artists undertake heterogeneous modes of art-making, such practice is under-explored. Conjoining material-based painting with photography and the digital in artworks provides a means of testing painting against new technologies; foregrounding painting in this conjunction adds to understandings of that medium's role in a digitally media-saturated age. Initial practice of creating hybrid painted-on-photographs leads to the question of whether it is possible to synthesise these mediums in single pictures. This raises further questions as to how synthesis might be achieved, what attempting synthesis reveals about painting's nature, and why attempting synthesis is important to the contemporary visual arts dialogue. To answer these questions, practical research attempts to conjoin the mediums visually, physically, and methodologically. Jerrold Levinson's and Joseph Yasser's theories of hybridity and synthesis of art forms conceptually inform the practical application of physically combining the mediums in two-dimensional artworks. Richard Wollheim's theory of "seeing-in" paintings and Ernst H. Gombrich's theory of differentiated viewing of pictures are drawn on to analyse hybridised and synthesised viewing experiences of the conjoined pictures. Concepts of erasure in art are employed to critically inform the deconstruction of hierarchical oppositions of the mediums, set within a dialectical materialist framework. Relevant contemporary art practices that investigate relationships between painting, photography, and the digital are surveyed to contextualise the practice. The research begins to fill the gap in practices and the literature around investigations into the relationship of the three mediums together, which contributes to understanding painting's ontological nature in the digital age.

    Nigerian modernism(s) 1900-1960 and the cultural ramifications of the found object in art

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. This thesis explored the phenomenon of Modernism in twentieth-century Nigerian art and the cultural ramifications of the Found Object in European and African art. Adopting the analytical tools of postcolonial theory and Modernism, modern Nigerian art was subjected to stylistic, conceptual and contextual analysis. The avant-gardist context of the form was explored for two reasons: first, to distinguish the approaches of named artists, and second, to address the Eurocentric exclusion of the 'Other' in Modernist discourse. The works of the Nigerian modernists Aina Onabolu, Ben Enwonwu and Uche Okeke, whose practices flourished from 1900 to 1960, were interrogated, and findings from detailed artist case studies proved that during the period of European Modernism a parallel, bifurcated Modernism (1900-1930 / 1930-1960) occurred in Nigeria, characterised by the interlacing of modern art with nationalist political advocacy to subvert colonialism, imperialism and European cultural imposition. This radical formulation of modern Nigerian art constituted a unique, parallel but distinct avant-gardism to Euro-American Modernism, thus proving that Modernism is a pluralistic phenomenon. To valorise the argument that Modernism had multiple avant-garde centres, this thesis analysed the variations in philosophy, ideology and formalism in the works of Nigerian Modernists and contrasted them with those of the Euro-American avant-gardes. The resulting cultural and contextual differences demonstrated a plurality of Modernism not accounted for in Western art history. Furthermore, through comparative analysis of the Found Object in European and African art, this thesis showed that the appropriation of mundane objects in art differs from culture to culture in context, philosophy and ramifications. This finding contributes to knowledge by addressing the ambiguity in Found Object art discourse and the problematic attempts to subsume the genre into a mainstream framework. The uncovering and theorisation of this parallel, bifurcated Nigerian Modernism expands the understanding of Modernism as a pluralistic phenomenon, contributing to debates for the recognition of the different Modernisms to which cultures outside Europe gave rise. The recognition and situation of Nigerian avant-gardism and modernism, and the interpretation of the Found Object as culturally specific, will subsequently contribute to the reconstruction of modernist discourse and Nigerian/African art histories.

    Proposal for a steganographic method to support the security of image transfers.

    This research studied a proposed steganographic method to improve the level of security in image transfers. The proposed method is based on both the Least Significant Bit (LSB) steganographic method and the Caesar cipher, and it was demonstrated that unifying the two methods achieves a reliable level of security: the cryptographic method encrypts the message, while the steganographic method hides it inside an image so that it passes unnoticed by the human eye. A web application implementing the method was developed in Java with NetBeans. The prototype shows that the security level of image transfers increases by 80% with respect to other steganographic methods. Further research on these topics is recommended to raise awareness of the security measures that should be taken when sending messages inside images.
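
    A minimal sketch of the combined approach, assuming an 8-bit grayscale NumPy image; the helper names and the NUL-terminator convention are illustrative, not the thesis's exact implementation.

        import numpy as np

        def caesar(text, shift=3):
            """Caesar-shift letters; other characters pass through unchanged."""
            out = []
            for ch in text:
                if ch.isalpha():
                    base = ord('A') if ch.isupper() else ord('a')
                    out.append(chr((ord(ch) - base + shift) % 26 + base))
                else:
                    out.append(ch)
            return "".join(out)

        def embed_lsb(img, message, shift=3):
            """Hide the Caesar-encrypted message in the least significant bits."""
            payload = caesar(message, shift).encode() + b"\x00"  # NUL marks the end
            bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
            flat = img.reshape(-1).copy()
            assert bits.size <= flat.size, "message too long for this image"
            flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
            return flat.reshape(img.shape)

        def extract_lsb(img, shift=3):
            """Collect the LSBs, cut at the NUL, and undo the Caesar shift."""
            data = np.packbits(img.reshape(-1) & 1).tobytes()
            return caesar(data.split(b"\x00", 1)[0].decode(), -shift)

        img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        stego = embed_lsb(img, "meet at noon")
        print(extract_lsb(stego))  # -> meet at noon

    Because only the lowest bit of each pixel changes, the stego image differs from the original by at most one grey level per pixel, which is what keeps the hidden message visually imperceptible.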

    Low-complexity, low-area computer architectures for cryptographic application in resource constrained environments

    Resource-constrained environments (RCEs) are known for their stringent hardware design requirements. With the rise of the Internet of Things (IoT), low-complexity, low-area designs are becoming prominent in the face of complex security threats. Two low-complexity, low-area cryptographic processors based on the ultimate reduced instruction set computer (URISC) were created to provide security features for wireless visual sensor networks (WVSNs), using field-programmable gate array (FPGA) based visual processors typically found in RCEs. The first processor is the Two Instruction Set Computer (TISC), running the Skipjack cipher. To improve security, a Compact Instruction Set Architecture (CISA) processor running the full AES with a modified S-Box was created. The modified S-Box achieved a gate-count reduction of 23% with no functional compromise compared to Boyar's. On the Spartan-3L XC3S1500L-4-FG320 FPGA, the TISC implementation occupies 71 slices and 1 block RAM and achieved a throughput of 46.38 kbps at a stable 24 MHz clock. The CISA, which occupies 157 slices and 1 block RAM, achieved a throughput of 119.3 kbps at the same clock. The CISA processor is demonstrated in two main applications. The first is a multilevel, multi-cipher architecture (MMA) with two modes of operation: (1) selecting cipher programs (primitives) and sharing crypto-blocks, and (2) using simple authentication and key renewal schemes, showing perceptual improvements over direct AES on images. The second application uses the CISA processor as part of a selective encryption architecture (SEA) in combination with the million instructions per second set partitioning in hierarchical trees (MIPS SPIHT) visual processor. The SEA is implemented on a Celoxica RC203 Virtex XC2V3000 FPGA, occupying 6251 slices, with a visual sensor used to capture real-world images. Four image frames were captured from a camera sensor, compressed, selectively encrypted, and sent to a PC environment for decryption. The final design emulates a working visual sensor, from on-node processing and encryption to back-end data processing on a server computer.
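
    For readers unfamiliar with URISC, the concept is often illustrated with a single "subleq" instruction (subtract and branch if the result is non-positive), from which all other operations can be composed. The Python sketch below is a generic illustration of that one-instruction model only; it is not the thesis's TISC or CISA instruction set.

        def subleq(mem, pc=0):
            """Run a subleq program stored as (a, b, c) triples in `mem`.
            Each step does mem[b] -= mem[a], then jumps to c if the result
            is <= 0; a jump target outside the program halts the machine."""
            while 0 <= pc <= len(mem) - 3:
                a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
                mem[b] -= mem[a]
                pc = c if mem[b] <= 0 else pc + 3
            return mem

        # Example: one instruction clears mem[6] (mem[6] -= mem[6]),
        # then halts by jumping to -1.
        print(subleq([6, 6, -1, 0, 0, 0, 7]))  # mem[6] becomes 0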