    THE DYNAMIC CIPHERS – NEW CONCEPT OF LONG-TERM CONTENT PROTECTING

    This paper presents the original concept of a new cipher, currently targeted at civil applications in technology (e.g. measurement and control systems) and business (e.g. content protection, knowledge-based companies, or long-term archiving systems). The idea of the cipher is based on one-time pads and linear feedback shift registers. The rapidly changing hardware and software environment of cryptographic systems has been taken into account during the construction of the cipher. The main aim of this work is to create a cryptosystem that can protect content or data for a long time, even more than one hundred years. The proposed algorithm can also simulate a stream cipher, which makes it possible to apply it in digital signal processing systems such as those within audio and video delivery or telecommunications.
    Keywords: Content protection, Cryptosystem, Dynamic cryptography, Linear feedback shift registers, Object-oriented programming, One-time pad, Random key, Random number generators, Statistical evaluation of ciphers.
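    As a rough sketch of the building blocks named in the abstract (a one-time-pad-style XOR driven by a linear feedback shift register), and not the paper's actual construction, the following Python fragment illustrates the idea; the register width, tap positions, and seed handling are illustrative assumptions.

    # Illustrative sketch only: a Fibonacci LFSR keystream XORed with the data.
    # Register width, taps, and seed are assumptions, not the paper's parameters.
    def lfsr_keystream(seed: int, taps=(16, 14, 13, 11), width: int = 16):
        """Yield one pseudorandom bit per step from a Fibonacci LFSR."""
        state = seed & ((1 << width) - 1)
        assert state != 0, "LFSR state must be non-zero"
        while True:
            fb = 0
            for t in taps:                      # XOR of the tapped bit positions
                fb ^= (state >> (t - 1)) & 1
            yield state & 1                     # output the low bit
            state = (state >> 1) | (fb << (width - 1))

    def xor_with_keystream(data: bytes, seed: int) -> bytes:
        """Encrypt or decrypt by XORing each byte with 8 keystream bits."""
        ks = lfsr_keystream(seed)
        out = bytearray()
        for byte in data:
            k = 0
            for _ in range(8):
                k = (k << 1) | next(ks)
            out.append(byte ^ k)
        return bytes(out)

    if __name__ == "__main__":
        msg = b"long-term archive record"
        ct = xor_with_keystream(msg, seed=0xACE1)
        assert xor_with_keystream(ct, seed=0xACE1) == msg  # XOR keystream is symmetric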

    Enhancing Secure Sockets Layer Bulk Data Transfer Phase Performance With Parallel Cryptography Algorithm

    With more than 2 billion people connected to the Internet, information security has become a top priority. Many applications such as electronic banking, medical databases, and electronic commerce require the exchange of private information. Hashed Message Authentication Code (HMAC) is widely used to provide authenticity, while symmetric encryption algorithms provide confidentiality. Secure Socket Layer (SSL) is one of the most widely used security protocols on the Internet. In the current Bulk Data Transfer (BDT) phase of SSL, the server or the client first calculates the Message Authentication Code (MAC) of the data using the HMAC operation, and then applies symmetric encryption to the data together with the MAC. Despite steady improvements in SSL performance, the BDT operation degrades CPU performance because of its cryptographic operations, namely the HMAC and the symmetric encryption. This thesis proposes a new algorithm that provides a significant performance gain in bulk data transfer without compromising security. The proposed algorithm performs the encryption of the data and the calculation of the MAC in parallel: the server calculates the MAC of the data at the same time as the data is being encrypted, and only once the MAC calculation is complete is the MAC itself encrypted. The proposed algorithm was simulated using two processors, one performing the HMAC calculation and the other encrypting the data simultaneously. The Advanced Encryption Standard (AES) was chosen as the encryption algorithm and HMAC with the Secure Hash Algorithm 1 (SHA-1) as the MAC algorithm. The communication between the processors was done via the Message Passing Interface (MPI). The existing sequential and the proposed parallel algorithms were simulated successfully while preserving the security properties. Based on the performance simulations, the new parallel algorithm achieved a speedup of 1.74 with 85% efficiency over the current sequential algorithm. The parallel overheads that limit the maximum achievable speedup were also considered. Different block cipher modes were evaluated, with Cipher Block Chaining (CBC) giving the best speedup among the feedback cipher modes. In addition, the Triple Data Encryption Standard (3DES) was also simulated as the encryption algorithm to compare its speedup performance with AES encryption.
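    The parallel encrypt-and-MAC idea described above can be sketched as follows. This is a simplified illustration using Python threads and the third-party cryptography package rather than MPI; the key sizes, CBC mode, the fresh IV for the MAC block, and the message layout are assumptions, not the thesis's implementation.

    import hashlib
    import hmac
    import os
    from concurrent.futures import ThreadPoolExecutor

    # Third-party package (pip install cryptography), assumed here for AES-CBC.
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def aes_cbc_encrypt(key: bytes, iv: bytes, data: bytes) -> bytes:
        padder = padding.PKCS7(128).padder()
        padded = padder.update(data) + padder.finalize()
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return enc.update(padded) + enc.finalize()

    def parallel_bulk_record(enc_key: bytes, mac_key: bytes, iv: bytes, data: bytes):
        """Encrypt the data and compute its HMAC-SHA1 concurrently,
        then encrypt the MAC only once it is ready (the proposed ordering)."""
        with ThreadPoolExecutor(max_workers=2) as pool:
            enc_future = pool.submit(aes_cbc_encrypt, enc_key, iv, data)
            mac_future = pool.submit(lambda: hmac.new(mac_key, data, hashlib.sha1).digest())
            ciphertext = enc_future.result()
            mac = mac_future.result()
        mac_iv = os.urandom(16)  # fresh IV for the MAC block (illustrative choice)
        encrypted_mac = aes_cbc_encrypt(enc_key, mac_iv, mac)
        return ciphertext, mac_iv, encrypted_mac

    if __name__ == "__main__":
        record = os.urandom(1 << 20)  # 1 MiB of bulk data
        ct, mac_iv, emac = parallel_bulk_record(
            os.urandom(16), os.urandom(20), os.urandom(16), record)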

    A Better Key Schedule for DES-like Ciphers

    Several DES-like ciphers do not achieve their full potential strength because of the short key and the linear, or otherwise easily tractable, algorithms they use to generate their key schedules. Using DES as an example, we show a way to generate round subkeys that increases the cipher's strength substantially by making the relations between the round subkeys practically intractable.
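    One generic way to make the round subkeys practically unrelated, in the spirit of what the abstract describes though not necessarily the authors' exact construction, is to derive each subkey from the master key with a cryptographic hash. In this sketch the 48-bit subkey size matches DES, while the choice of SHA-256 and the round-counter encoding are assumptions.

    import hashlib

    def derive_subkeys(master_key: bytes, rounds: int = 16, subkey_bits: int = 48):
        """Derive one subkey per round by hashing the master key together with
        the round index, so no simple (e.g. linear) relation links the subkeys.
        Hash choice and counter encoding are illustrative assumptions."""
        subkey_bytes = subkey_bits // 8
        subkeys = []
        for r in range(rounds):
            digest = hashlib.sha256(master_key + r.to_bytes(4, "big")).digest()
            subkeys.append(digest[:subkey_bytes])  # truncate to 48 bits, as in DES rounds
        return subkeys

    if __name__ == "__main__":
        ks = derive_subkeys(bytes.fromhex("0123456789abcdef"))
        assert len(ks) == 16 and all(len(k) == 6 for k in ks)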

    Large-Scale Evaluation of Topic Models and Dimensionality Reduction Methods for 2D Text Spatialization

    Topic models are a class of unsupervised learning algorithms for detecting the semantic structure within a text corpus. Together with a subsequent dimensionality reduction algorithm, topic models can be used for deriving spatializations for text corpora as two-dimensional scatter plots, reflecting semantic similarity between the documents and supporting corpus analysis. Although the choice of the topic model, the dimensionality reduction, and their underlying hyperparameters significantly impact the resulting layout, it is unknown which particular combinations result in high-quality layouts with respect to accuracy and perception metrics. To investigate the effectiveness of topic models and dimensionality reduction methods for the spatialization of corpora as two-dimensional scatter plots (or as the basis for landscape-type visualizations), we present a large-scale, benchmark-based computational evaluation. Our evaluation consists of (1) a set of corpora, (2) a set of layout algorithms that are combinations of topic models and dimensionality reductions, and (3) quality metrics for quantifying the resulting layouts. The corpora are given as document-term matrices, and each document is assigned to a thematic class. The chosen metrics quantify the preservation of local and global properties and the perceptual effectiveness of the two-dimensional scatter plots. By evaluating the benchmark on a computing cluster, we derived a multivariate dataset with over 45,000 individual layouts and corresponding quality metrics. Based on the results, we propose guidelines for the effective design of text spatializations that are based on topic models and dimensionality reductions. As a main result, we show that interpretable topic models are beneficial for capturing the structure of text corpora. We furthermore recommend the use of t-SNE as a subsequent dimensionality reduction.
    Comment: To be published at the IEEE VIS 2023 conference.
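    A minimal version of such a layout pipeline, using scikit-learn as an assumed stand-in for the benchmark's actual implementations and default hyperparameters for brevity, could look like this:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.manifold import TSNE

    # Toy corpus standing in for a real document collection.
    docs = [
        "stream ciphers and one time pads",
        "parallel hmac and aes encryption",
        "topic models for corpus visualization",
        "dimensionality reduction with t-sne",
    ]

    # Document-term matrix -> topic model -> 2D scatter-plot coordinates.
    dtm = CountVectorizer().fit_transform(docs)
    topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(dtm)
    coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(topics)
    print(coords.shape)  # (n_documents, 2): one point per document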

    Software Obfuscation with Symmetric Cryptography

    Software protection is of great interest to commercial industry. Millions of dollars and years of research are invested in the development of proprietary algorithms used in software programs. A reverse engineer who successfully reverses another company's proprietary algorithms can develop a competing product to market in less time and with less money. The threat is even greater in military applications, where adversarial reversers can use reverse engineering on unprotected military software to compromise capabilities in the field or develop their own capabilities with significantly fewer resources. Thus, it is vital to protect software, especially the software's sensitive internal algorithms, from adversarial analysis. Software protection through obfuscation is a relatively new research initiative. The mathematical and security communities have yet to agree upon a model to describe the problem, let alone the metrics used to evaluate the practical solutions proposed by computer scientists. We propose evaluating solutions to obfuscation under the intent protection model, a combination of white-box and black-box protection that reflects how reverse engineers analyze programs using a combination of white-box and black-box attacks. In addition, we explore the use of experimental methods and metrics from analogous and more mature fields of study such as hardware circuits and cryptography. Finally, we implement a solution under the intent protection model that demonstrates application of the methods, and we evaluate it using the metrics adapted from the aforementioned fields of study to reflect the unique challenges of a software-only software protection technique.