Semantic Security and Indistinguishability in the Quantum World
At CRYPTO 2013, Boneh and Zhandry initiated the study of quantum-secure
encryption. They proposed the first indistinguishability definitions for the
quantum world, in which indistinguishability holds only for classical
messages, and they provided arguments for why it might be hard to achieve a stronger
notion. In this work, we show that stronger notions are achievable, where the
indistinguishability holds for quantum superpositions of messages. We
investigate exhaustively the possibilities and subtle differences in defining
such a quantum indistinguishability notion for symmetric-key encryption
schemes. We justify our stronger definition by showing its equivalence to novel
quantum semantic-security notions that we introduce. Furthermore, we show that
our new security definitions cannot be achieved by a large class of ciphers --
those that quasi-preserve the message length. On the other hand, we
provide a secure construction based on quantum-resistant pseudorandom
permutations; this construction can be used as a generic transformation for
turning a large class of encryption schemes into quantum indistinguishable and
hence quantum semantically secure ones. Moreover, our construction is the first
completely classical encryption scheme shown to be secure against an even
stronger notion of indistinguishability, which was previously known to be
achievable only by using quantum messages and arbitrary quantum encryption
circuits.
Comment: 37 pages, 2 figures
Novel Framework for Hidden Data in the Image Page within Executable File Using Computation between Advanced Encryption Standard and Distortion Techniques
The rapid development of multimedia and the internet allows for wide
distribution of digital media data. It has become much easier to edit, modify,
and duplicate digital information. In addition, digital documents are easy to
copy and distribute, and therefore face many threats. Appropriate protection
has become necessary due to the significance, accuracy, and sensitivity of the
information. Furthermore, there is no formal method for discovering hidden
data. In this paper, a new information hiding framework is presented. The
proposed framework combines the Advanced Encryption Standard (AES) with a
distortion technique (DT) to embed information in the image page of an
executable file (EXE file), providing a secure solution that does not change
the size of the cover file. The framework includes two main functions: the
first hides the information in the image page of the EXE file through four
processes (specify the cover file, specify the information file, encrypt the
information, and hide the information); the second extracts the hidden
information through three processes (specify the stego file, extract the
information, and decrypt the information).
Comment: 6 pages, IEEE format, International Journal of Computer Science and
Information Security, IJCSIS 2009, ISSN 1947-5500, Impact Factor 0.42
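As an illustration of the general idea of distortion-based embedding (a minimal sketch, not the paper's exact DT, and the carrier here is a plain byte buffer rather than the image page of an EXE file), the following embeds an already-encrypted payload into the least-significant bits of a cover, so the cover size is unchanged; the function names and the one-bit-per-byte layout are assumptions for illustration:

```python
def embed(cover: bytearray, payload: bytes) -> bytearray:
    """Hide payload bits in the least-significant bit of each cover byte."""
    # Expand the payload into individual bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for payload")
    stego = bytearray(cover)
    for pos, bit in enumerate(bits):
        stego[pos] = (stego[pos] & 0xFE) | bit  # overwrite only the LSB
    return stego

def extract(stego: bytes, n_bytes: int) -> bytes:
    """Recover n_bytes of payload from the LSBs of the stego buffer."""
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for bit_index in range(8):
            byte = (byte << 1) | (stego[i * 8 + bit_index] & 1)
        out.append(byte)
    return bytes(out)
```

Because only existing bits are overwritten, the stego output has exactly the same length as the cover, matching the abstract's claim that the cover-file size does not change.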
Cryptanalysis of Masked Ciphers: A not so Random Idea
A new approach to the security analysis of hardware-oriented masked ciphers against second-order side-channel attacks is developed. By relying on techniques from symmetric-key cryptanalysis, concrete security bounds are obtained in a variant of the probing model that allows the adversary to make only a bounded, but possibly very large, number of measurements. Specifically, it is formally shown how a bounded-query variant of robust probing security can be reduced to the linear cryptanalysis of masked ciphers.
As a result, the compositional issues of higher-order threshold implementations can be overcome without relying on fresh randomness. From a practical point of view, this approach makes it possible to transfer many of the desirable properties of first-order threshold implementations, such as their low randomness usage, to the second-order setting. For example, a straightforward application to the block cipher LED results in a masking scheme using fewer than 700 random bits, including the initial sharing. In addition, the cryptanalytic approach introduced in this paper provides additional insight into the design of masked ciphers and allows for a quantifiable trade-off between security and performance.
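The share-based countermeasure underlying threshold implementations can be illustrated with plain Boolean (XOR) masking; this is generic background, not the paper's construction, and the function names are assumptions:

```python
import secrets

def share(x: int, n_shares: int = 3) -> list[int]:
    """Boolean (XOR) masking of a byte: draw n-1 uniformly random shares,
    then compute one correction share so that the XOR of all shares is x.
    Any n-1 shares alone are statistically independent of x."""
    shares = [secrets.randbelow(256) for _ in range(n_shares - 1)]
    correction = x
    for s in shares:
        correction ^= s
    return shares + [correction]

def unshare(shares: list[int]) -> int:
    """Recombine shares by XOR to recover the masked value."""
    out = 0
    for s in shares:
        out ^= s
    return out
```

A masked cipher operates on such shares throughout, and the paper's contribution concerns bounding what an adversary probing intermediate shares can learn without injecting fresh randomness between operations.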
An Overview of Parallel Symmetric Cipher of Messages
Background:
Despite significant developments in communications and technology, data protection has established itself as one of the biggest concerns. Data must be encrypted in order to be transmitted securely and quickly over web-based technology. Encryption is the process of transforming plain text into ciphertext that cannot be read or altered by malicious parties.
Materials and Methods:
In order to maintain the required degree of security, both the encryption and decryption operations take a significant amount of time. To cut down on the time required to complete these operations, several researchers have implemented cryptography methods in a parallel fashion. Research on this problem has uncovered several potential solutions: researchers used parallelism to improve the throughput of their algorithms, which allowed them to achieve higher performance levels for the encryption algorithm.
Results:
Recent research on parallel encryption techniques has shown that graphics processing units (GPUs) perform better than other parallel platforms when comparing their levels of encryption performance.
Conclusion:
This study carries out comparative research on the most significant parallel crypto algorithms in terms of data security efficacy, key length, cost, and speed, among other factors. The paper reviews various significant parallel algorithms used for data encryption and decryption across disciplines. However, other criteria must also be considered to establish the trustworthiness of any encryption scheme. Randomness tests are very important in this regard and are highlighted in this study.
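The block-parallel approach the review surveys can be sketched with a toy counter-mode-style construction (illustrative only; the keystream below is a hash-based stand-in, not any vetted cipher from the reviewed papers). Each chunk's keystream depends only on the key and the chunk index, so chunks can be encrypted independently and in any order:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 32  # one SHA-256 digest covers one chunk

def _keystream(key: bytes, index: int) -> bytes:
    # Toy PRF: keystream block derived from the key and a chunk counter.
    return hashlib.sha256(key + index.to_bytes(8, "big")).digest()

def _xor_chunk(args):
    key, index, chunk = args
    ks = _keystream(key, index)
    return bytes(a ^ b for a, b in zip(chunk, ks))

def parallel_encrypt(key: bytes, data: bytes, workers: int = 4) -> bytes:
    """Encrypt chunks in parallel; correctness does not depend on the
    order of completion because each chunk has its own counter."""
    chunks = [(key, i, data[i * CHUNK:(i + 1) * CHUNK])
              for i in range((len(data) + CHUNK - 1) // CHUNK)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(_xor_chunk, chunks))

# XOR symmetry makes decryption identical to encryption.
parallel_decrypt = parallel_encrypt
```

GPU implementations exploit the same counter-mode independence, assigning each thread its own block index, which is why they dominate the performance comparisons cited above.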
Quantum Noise Randomized Ciphers
We review the notion of a classical random cipher and its advantages. We
sharpen the usual description of random ciphers to a particular mathematical
characterization suggested by the salient feature responsible for their
increased security. We describe a concrete system known as AlphaEta and show
that it is equivalent to a random cipher in which the required randomization is
effected by coherent-state quantum noise. We describe the currently known
security features of AlphaEta and similar systems, including lower bounds on
the unicity distances against ciphertext-only and known-plaintext attacks. We
show how AlphaEta used in conjunction with any standard stream cipher such as
AES (Advanced Encryption Standard) provides an additional, qualitatively
different layer of security from physical encryption against known-plaintext
attacks on the key. We refute some claims in the literature that AlphaEta is
equivalent to a non-random stream cipher.
Comment: Accepted for publication in Phys. Rev. A; Discussion augmented and
re-organized; Section 5 contains a detailed response to 'T. Nishioka, T.
Hasegawa, H. Ishizuka, K. Imafuku, H. Imai: Phys. Lett. A 327 (2004) 28-32
/quant-ph/0310168' & 'T. Nishioka, T. Hasegawa, H. Ishizuka, K. Imafuku, H.
Imai: Phys. Lett. A 346 (2005) 7
An enhanced Blowfish Algorithm based on cylindrical coordinate system and dynamic permutation box
The Blowfish Algorithm (BA) is a symmetric block cipher that uses a Feistel network to iterate simple encryption and decryption functions. The BA key varies from 32 to 448 bits to ensure a high level of security. However, the substitution box (S-Box) in BA occupies a high percentage of memory and has security problems, specifically in the randomness of output for text and image files that contain long strings of identical bytes. Thus, the objective of this research is to enhance the BA to overcome these problems. The research involved three phases: algorithm design, implementation, and evaluation. In the design phase, a dynamic 3D S-Box, a dynamic permutation box (P-Box), and the Feistel function (F-function) were improved. The improvement involved integrating a Cylindrical Coordinate System (CCS) and the dynamic P-Box. The enhanced BA is known as the Ramlan Ashwak Faudziah (RAF) algorithm. The implementation phase involved performing key expansion, data encryption, and data decryption. The evaluation phase measured the algorithm in terms of memory and security. In terms of memory, the results showed that the RAF occupied 256 bytes, less than the BA (4096 bytes). In terms of randomness for text and image files containing long strings of identical bytes, the average rate of randomness over 188 statistical tests exceeded 96%. This means that the RAF has high randomness, indicating that it is more secure. Thus, the results showed that the RAF algorithm, which integrates the CCS and dynamic P-Box, serves as an effective approach that consumes less memory and strengthens security.
Deterministic Chaos in Digital Cryptography
This thesis studies the application of deterministic chaos to digital
cryptography. Cryptographic systems such as pseudo-random generators
(PRNG), block ciphers and hash functions are regarded as a dynamic
system (X, f), where X is a state space (i.e. message space)
and f : X → X is an iterated function. In both chaos theory and
cryptography, the object of study is a dynamic system that performs
an iterative nonlinear transformation of information in an apparently
unpredictable but deterministic manner. In terms of chaos theory, the
sensitivity to the initial conditions together with the mixing property
ensures cryptographic confusion (statistical independence) and diffusion
(uniform propagation of plaintext and key randomness into ciphertext).
This synergetic relationship between the properties of chaotic and
cryptographic systems is considered at both the theoretical and practical
levels: The theoretical background upon which this relationship is
based, includes discussions on chaos, ergodicity, complexity, randomness,
unpredictability and entropy.
Two approaches to the finite-state implementation of chaotic systems
(i.e. pseudo-chaos) are considered: (i) floating-point approximation of
continuous-state chaos; (ii) binary pseudo-chaos. An overview is given
of chaotic systems underpinning cryptographic algorithms along with
their strengths and weaknesses. Though all conventional cryptosystems
are considered binary pseudo-chaos, neither chaos nor pseudo-chaos is
sufficient to guarantee cryptographic strength and security.
A dynamic system is said to have an analytical solution x_n = F(x_0)
if any trajectory point x_n can be computed directly from the initial
conditions x_0, without performing n iterations. A chaotic system with an
analytical solution may have an unpredictable multi-valued map x_{n+1} =
f(x_n). Their floating-point approximation is studied in the context of
pseudo-random generators.
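A standard textbook example of such a system (not specific to this thesis) is the fully chaotic logistic map, whose closed-form solution shows how a trajectory point can be computed without iteration:

```latex
x_{n+1} = 4 x_n (1 - x_n), \qquad
x_n = \sin^2\!\bigl(2^{\,n} \arcsin\sqrt{x_0}\bigr)
```

Inverting one step of this map is two-valued, since x and 1 - x produce the same successor, which illustrates the multi-valued behaviour of such maps.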
A cryptographic software system E-Larm™ implementing a multi-stream
pseudo-chaotic generator is described. Several pseudo-chaotic
systems including the logistic map, sine map, tangent- and logarithm-feedback
maps, sawtooth and tent maps are evaluated by means of floating-point
computations. Two types of partitioning are used to extract
pseudo-random bits from the floating-point state variable: (i) combining the
least significant bits of the floating-point number (for nonlinear maps);
and (ii) threshold partitioning (for piecewise linear maps). Multi-round
iterations are used to decrease the bit dependence and increase non-linearity.
Relationships between pseudo-chaotic systems are introduced
to avoid short cycles (each system influences periodically the states of
other systems used in the encryption session).
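The threshold-partitioning idea can be sketched in a few lines (a minimal, illustrative example only: the map choice, transient-skip length, and function name are assumptions, and real designs such as E-Larm combine several maps with multi-round iteration and statistical testing):

```python
def chaotic_bits(x0: float, n_bits: int, skip: int = 100) -> list[int]:
    """Toy chaotic PRNG: iterate the logistic map x -> 4x(1-x) and emit
    one bit per iteration by threshold partitioning at x = 0.5."""
    if not 0.0 < x0 < 1.0:
        raise ValueError("seed must lie strictly in (0, 1)")
    x = x0
    for _ in range(skip):          # discard transient iterations
        x = 4.0 * x * (1.0 - x)
    bits = []
    for _ in range(n_bits):
        x = 4.0 * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits
```

The seed x0 plays the role of the key: the same seed reproduces the same bit stream, while sensitivity to initial conditions makes nearby seeds diverge rapidly.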
An evaluation of cryptographic properties of E-Larm is given using
graphical plots such as state distributions, phase-space portraits, spectral
density (Fourier transform), approximate entropy (ApEn), cycle-length
histogram, as well as a variety of statistical tests from the National Institute
of Standards and Technology (NIST) suite. Though E-Larm passes
all tests recommended by NIST, an approach based on the floating-point
approximation of chaos is inefficient in terms of the quality/performance
ratio (compared with existing PRNG algorithms). Also no solution is
known to control short cycles.
In conclusion, the role of chaos theory in cryptography is identified;
disadvantages of floating-point pseudo-chaos are emphasized although
binary pseudo-chaos is considered useful for cryptographic applications.
Durand Technology Limited