New Design of Encryption with Covertext and Reordering
Documents for some entities are confidential and important, so security is required. Encryption with Covertext and Reordering (ECR) is a text-based document security model. ECR uses a random key to generate the ciphertext; in the original scheme, this key is chosen manually by a human. This research aims to increase the level of document security based on the ECR mechanism. This paper proposes a new method that uses a random key in a permutated table, where the key is generated automatically by a function. Entropy is used to measure the security level of the encrypted documents. The experiments show that the permutated table inside ECR yields better entropy values, implying a better security level. The use of permutated tables also makes ECR easier to use for securing documents.
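The abstract treats entropy as the security metric for the encrypted documents. As a minimal sketch (not the ECR scheme itself), the character-level Shannon entropy it refers to can be computed like this; the sample strings are purely illustrative:

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Shannon entropy in bits per character of a string."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A more uniform character distribution yields higher entropy,
# which the paper treats as a higher security level.
skewed  = shannon_entropy("aaaaaaab")   # heavily repeated characters
uniform = shannon_entropy("abcdefgh")   # every character distinct
```

A ciphertext whose symbol frequencies are close to uniform scores near the maximum `log2(alphabet size)` bits per character.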
DESIGN AND IMPLEMENTATION OF A MATHEMATICAL ENCRYPTION MODEL FOR THE CENTRAL KURDISH FONT BASED ON UNICODE
This research focuses on the development of encryption algorithms for the Kurdish language, specifically tailored to the Kurdish alphabet. With the rapid growth and digital advancements in the Kurdistan Region of Iraq, there is a pressing need for accurate encryption methods that can be applied to Kurdish texts in administration and digital governance. To address this need, a mathematical encryption model is proposed, leveraging the Kurdish central font supported by Microsoft Windows to ensure compatibility between sender and receiver. The model utilizes the Unicode representation of Kurdish letters to calculate offset and mod values accurately. The effectiveness of the proposed model is validated through its implementation using the Caesar cipher method. Computation tasks are performed using Excel, while the encryption system is designed and programmed in C#. Extensive testing of the system with diverse key values demonstrates a high success rate in encrypting Kurdish texts. This research contributes significantly to the field of encryption for the Kurdish language, providing a scientific framework for further advancements in this area.
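The offset-and-mod Caesar idea over Unicode letters can be sketched as follows. This is an illustration only, in Python rather than the paper's C#, and the alphabet shown is an assumed small subset of Central Kurdish letters, not the paper's full character set:

```python
def caesar_unicode(text: str, key: int, alphabet: str) -> str:
    """Shift each character of `text` by `key` positions within `alphabet`,
    wrapping around with mod; characters outside the alphabet pass through."""
    index = {ch: i for i, ch in enumerate(alphabet)}
    out = []
    for ch in text:
        if ch in index:
            out.append(alphabet[(index[ch] + key) % len(alphabet)])
        else:
            out.append(ch)
    return "".join(out)

# Illustrative subset of Central Kurdish letters (assumed, not the full set).
KURDISH = "ئابپتجچحخ"
cipher = caesar_unicode("باب", 2, KURDISH)
plain = caesar_unicode(cipher, -2, KURDISH)  # decrypt with the negated key
```

Working over an explicit alphabet, rather than raw code points, keeps every ciphertext character inside the Kurdish letter range, which matches the paper's goal of sender/receiver font compatibility.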
An Efficient Unicode encoded in UTF-16 text cryptography based on the AES algorithm
Data security and secrecy from unwanted applications are the subjects of the science known as cryptography. The Advanced Encryption Standard (AES) algorithm is the most used and secure algorithm to encrypt data. AES is a symmetric algorithm and uses a single key to encrypt and decrypt data. The AES algorithm operates on 128-bit blocks of plaintext with a 128-bit, 192-bit, or 256-bit key. Latin script uses ASCII codes, where a single byte represents each letter; therefore, in 128-bit AES encryption, 16 characters can be encrypted at a time. Other language scripts use the Unicode standard to represent their alphabets. In Unicode, at least 2 bytes are required to represent a character, so only eight characters can be encrypted at a time. Also, there is no available S-box for Unicode characters. Therefore, a modified algorithm is proposed for Unicode to encrypt data. To use the AES algorithm on Unicode data, we need to convert the Unicode into a character encoding such as UTF-16. However, in UTF-16, some Unicode characters share recurring values. This paper demonstrates a modified AES algorithm to encrypt Unicode script and reduce time complexity.
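The block-capacity arithmetic the abstract describes (16 single-byte characters per 128-bit block versus 8 two-byte UTF-16 code units) can be checked directly. This is only that arithmetic, not the paper's modified AES, and assumes every character in the sample has the same width in the chosen encoding:

```python
BLOCK_BYTES = 16  # one AES block is 128 bits

def chars_per_block(text: str, encoding: str) -> int:
    """How many characters of `text`'s script fit in one AES block,
    assuming a uniform per-character width in the given encoding."""
    width = len(text.encode(encoding)) // len(text)
    return BLOCK_BYTES // width

ascii_fit = chars_per_block("secret message!!", "ascii")   # 1 byte per char
utf16_fit = chars_per_block("سلام", "utf-16-le")           # 2 bytes per char
```

Note the use of `utf-16-le`, which avoids the 2-byte byte-order mark that Python's plain `utf-16` codec prepends.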
Suggesting new words to extract keywords from title and abstract
Keywords are a fundamental part of writing research papers: they appear in most papers, although not in all of them, and some papers contain no keywords at all. Keywords are words or phrases that accurately reflect the content of a research paper, a concise abbreviation of what the research carries in its content. The right keywords can increase the chance that an article or research paper is found and reaches more of the people it should reach, in particular highly specialized and influential readers in a field, who read only what matches their interests because they cannot read everything. In this paper, we extract new keywords by suggesting a set of cue words, chosen because they occur frequently in research across multiple computer-science disciplines. Our system takes a number of words (as many as specified in the program) that come before the suggested cue words and treats them as new keywords. The system proved effective in finding keywords that correspond, to some extent, with the keywords provided by the authors of the papers.
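The core mechanism, taking the N words that precede each suggested cue word as a candidate keyword, can be sketched in a few lines. The cue words below are hypothetical stand-ins; the paper derives its list from a corpus of computer-science papers:

```python
def suggest_keywords(text: str, cue_words: set, n: int = 2) -> list:
    """Collect the `n` words preceding each cue word as candidate keywords,
    mirroring the paper's idea of words that come before suggested terms."""
    words = text.lower().split()
    keywords = []
    for i, w in enumerate(words):
        if w in cue_words and i >= n:
            keywords.append(" ".join(words[i - n:i]))
    return keywords

# Hypothetical cue words for illustration only.
cues = {"algorithm", "system", "method"}
found = suggest_keywords("a genetic algorithm and an intrusion detection system", cues)
# found == ["a genetic", "intrusion detection"]
```

In practice the window size `n` corresponds to the number "specified in the program" in the abstract.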
Breaking the Binary: Exploring Orality in India through Typography, Cryptography and Craft
This thesis explores the fluidity of language through the fusion of heterogenous visual traditions — creating hybrid forms and coded communication. It voyages through the linguistic landscape of India. Delving into its social, political and cultural identities it finds expression in typography, cryptography, craft arts and poetry. From this body of work emerged the hybrid typeface Latinagari, a character set that fuses the letterforms of the Devanagari and Latin scripts.
Embodying both forms it is also neither. Being familiar to readers of either script, yet obscure. It both invites and denies the viewer’s desire to read. Representing the in-between spaces of spoken language in India, it is its own being.
Through it, a novel graphic language finds form, and with it new opportunities for expression, communication and mis-communication. Like the way Hindi flows to English and back in contemporary Indian culture, this visual vocabulary becomes its own thing: an expression of identity, a linguistic code and a way of knowing.
Keywords: Communication design, Cryptography, Embroidery, Craft, Coded communication, Multilingual typography
Data Hiding and Its Applications
Data hiding techniques have been widely used to provide copyright protection, data integrity, covert communication, non-repudiation, and authentication, among other applications. In the context of the increased dissemination and distribution of multimedia content over the internet, data hiding methods, such as digital watermarking and steganography, are becoming increasingly relevant in providing multimedia security. The goal of this book is to focus on the improvement of data hiding algorithms and their different applications (both traditional and emerging), bringing together researchers and practitioners from different research fields, including data hiding, signal processing, cryptography, and information theory, among others.
Deep Transfer Learning for Automatic Speech Recognition: Towards Better Generalization
Automatic speech recognition (ASR) has recently become an important challenge when using deep learning (DL). It requires large-scale training datasets and high computational and storage resources. Moreover, DL techniques, and machine learning (ML) approaches in general, assume that training and testing data come from the same domain, with the same input feature space and data distribution characteristics. This assumption, however, does not hold in some real-world artificial intelligence (AI) applications. Furthermore, there are situations where gathering real data is challenging, expensive, or rare, so the data requirements of DL models cannot be met. Deep transfer learning (DTL) has been introduced to overcome these issues; it helps develop high-performing models using real datasets that are small or slightly different from, but related to, the training data. This paper presents a comprehensive survey of DTL-based ASR frameworks to shed light on the latest developments and to help academics and professionals understand current challenges. Specifically, after presenting the DTL background, a well-designed taxonomy is adopted to organize the state of the art. A critical analysis is then conducted to identify the limitations and advantages of each framework. A comparative study is then introduced to highlight the current challenges before deriving opportunities for future research.
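The transfer-learning idea the survey builds on, reusing a pretrained model body and training only a small task-specific head on scarce target data, can be sketched generically. This is a toy NumPy illustration with random stand-in data, not an ASR system or any specific framework from the survey:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: its weights stay frozen during transfer.
W_frozen = rng.normal(size=(20, 64))

def extract(x):
    # Fixed nonlinear features, standing in for a pretrained network body.
    return np.tanh(x @ W_frozen)

# Small target-domain dataset, the scarce "real data" the abstract mentions.
X = rng.normal(size=(30, 20))
y = rng.integers(0, 2, size=30).astype(float)

# Only the new task head is trained (ridge-regularized least squares),
# so the small dataset never updates the pretrained weights.
F = extract(X)
head = np.linalg.solve(F.T @ F + 0.1 * np.eye(64), F.T @ y)
preds = (extract(X) @ head > 0.5).astype(float)
```

Freezing the extractor is what lets a dataset far too small to train the full model still produce a usable task head.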
Optimized Cover Selection for Audio Steganography Using Multi-Objective Evolutionary Algorithm
Existing embedding techniques depend on cover audio selected by users. Unknowingly, users may make a poor cover audio selection that is not optimised in its capacity or imperceptibility features, which could reduce the effectiveness of any embedding technique. As a trade-off exists between capacity and imperceptibility, producing a method focused on optimising both features is crucial. One of the search methods commonly used to find solutions for such trade-off problems in various fields is the Multi-Objective Evolutionary Algorithm (MOEA). Therefore, this research proposed a new method for optimising cover audio selection for audio steganography using the Non-dominated Sorting Genetic Algorithm-II (NSGA-II), which falls under the MOEA Pareto dominance paradigm. The proposed method suggests cover audio to users based on imperceptibility and capacity features. The sample difference calculation was initially formulated to determine the maximum capacity for each cover audio defined in the cover audio database. Next, NSGA-II was implemented to determine the optimised solutions based on the parameters provided by each chromosome. The experimental results demonstrated the effectiveness of the proposed method, as it dominated the solutions from the previous method, which selected based on one criterion only. In addition, the proposed method ranked the trade-off solution as the highest priority, whereas the previous method ranked the same solution as low as 71st. In conclusion, the method optimised the selection of cover audio, thus improving the effectiveness of the audio steganography used. It can help users of computers and mobile devices who remain unfamiliar with audio steganography in an age where information security is crucial.
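The Pareto-dominance step at the heart of NSGA-II's non-dominated sorting can be sketched briefly. This is only the dominance test and first-front extraction, not the full NSGA-II loop, and the cover-audio scores below are hypothetical (capacity, imperceptibility) pairs, both to be maximized:

```python
def dominates(a, b):
    """True if solution `a` Pareto-dominates `b` (maximizing both objectives):
    `a` is at least as good in every objective and strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated(solutions):
    """First Pareto front: solutions no other solution dominates, as in
    NSGA-II's non-dominated sorting step."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions)]

# Hypothetical cover-audio candidates scored as (capacity, imperceptibility).
covers = [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9), (0.4, 0.4)]
front = non_dominated(covers)  # (0.4, 0.4) is dominated by (0.5, 0.5)
```

A single-criterion ranking, like the previous method's, would discard balanced candidates such as (0.5, 0.5) even though nothing on the front beats them in both objectives at once.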