
    RC4 Encryption-A Literature Survey

    A chronological survey of the cryptanalysis of the RC4 stream cipher is presented in this paper. We summarize the various weaknesses of the RC4 algorithm, followed by the recently proposed enhancements available in the literature. It is established that innovative research efforts are required to develop a secure RC4 variant that removes the weaknesses of RC4, such as biased bytes, key collisions, and key recovery attacks on WPA. These flaws still offer an open challenge to developers. Hence our chronological survey corroborates the fact that even though researchers have been working on the RC4 stream cipher for the last two decades, it still offers a plethora of research issues. The community's attraction to RC4 is still alive.

    Glimpses are Forever in RC4 amidst the Spectre of Biases

    In this paper we exploit elementary combinatorial techniques to settle several cryptanalytic observations on RC4 that remained unproved for more than two decades. At the same time, we present new observations with theoretical proofs. We first prove the biases (non-randomness) presented by Fluhrer and McGrew (FSE 2000) two decades ago. It is surprising that although these biases were published long ago, and they have found many applications in cryptanalysis up to the present day, the proofs had never been presented. In this paper, we complete that task and also show that any such bias immediately provides a glimpse of hidden variables in RC4. Further, we take up the biases of two non-consecutive key-stream bytes, skipping one byte in between. We show the incompleteness of such a result presented by SenGupta et al. (JoC, 2013) and provide new observations and proofs in this direction relating the key-stream bytes and glimpses. Similarly, we streamline certain missed observations in the famous Glimpse theorem presented by Jenkins in 1996. Our results point out how the biases of the RC4 key-stream and the glimpses of the RC4 hidden variables are related. It is evident from our results that biases and glimpses are everywhere in RC4, and they need further investigation, as we provide glimpses of very high magnitude that were not known earlier. The new glimpses and biases that we identify in this paper may be exploited to improve practical attacks against protocols that use RC4.
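
    For intuition, the Glimpse correlation is easy to observe empirically. The following is a minimal Python sketch, not taken from the paper: it runs the RC4 PRGA from a randomly shuffled state (a stand-in for a properly keyed KSA, which is an assumption of this sketch) and estimates Pr(S[j] = i - z), which Jenkins' Glimpse theorem puts near 2/N rather than the 1/N of a random association.

        import random

        N = 256

        def glimpse_rate(steps=2_000_000):
            # Estimate Pr(S[j] = i - z) over RC4 PRGA steps (Jenkins' Glimpse).
            S = list(range(N))
            random.shuffle(S)            # stand-in for a properly keyed KSA state
            i = j = hits = 0
            for _ in range(steps):
                i = (i + 1) % N
                j = (j + S[i]) % N
                S[i], S[j] = S[j], S[i]
                z = S[(S[i] + S[j]) % N]
                if S[j] == (i - z) % N:
                    hits += 1
            return hits / steps

        print(glimpse_rate(), 2 / N)     # empirical rate vs. the 2/N Glimpse value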

    The Index j in RC4 is not Pseudo-random due to Non-existence of Finney Cycle

    In this very short note we prove that the supposedly pseudo-random index j of RC4 is in fact not pseudo-random. This is a simple result that escaped attention for quite a long time. We show that, in the long run, Pr(j = i+1) = 1/N - 1/N^2 instead of the 1/N expected of a random association, and that this happens due to the non-existence of the Finney cycle: the condition S[i] = 1 with j = i+1, which is mandatory for that cycle, never arises.
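
    This bias is small (1/N^2, about 1.5 x 10^-5 for N = 256) but can be checked by simulation. Below is a minimal Python sketch, not from the note itself; it starts the PRGA from a randomly shuffled state (a stand-in for a keyed KSA) with i = j = 0, which keeps the walk outside the Finney cycle, and it needs many steps to resolve so small a bias.

        import random

        N = 256

        def prob_j_equals_i_plus_1(steps=100_000_000):
            # Run the RC4 PRGA and count how often j lands on i + 1 (mod N).
            # Slow in pure Python; reduce steps for a quick, coarser estimate.
            S = list(range(N))
            random.shuffle(S)        # stand-in for a properly keyed KSA state
            i = j = hits = 0         # starting at i = j = 0 avoids the Finney cycle
            for _ in range(steps):
                i = (i + 1) % N
                j = (j + S[i]) % N
                S[i], S[j] = S[j], S[i]
                if j == (i + 1) % N:
                    hits += 1
            return hits / steps

        # Expect roughly 1/N - 1/N^2 = 0.0038909..., not 1/N = 0.0039062...
        print(prob_j_equals_i_plus_1(), 1 / N - 1 / N**2)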

    Fast Internet-Wide Scanning: A New Security Perspective

    Techniques like passive observation and random sampling let researchers understand many aspects of the Internet's day-to-day operation, yet these methodologies often focus on popular services or a small demographic of users rather than providing a comprehensive view of the devices and services that constitute the Internet. As the diversity of devices and the role they play in critical infrastructure increase, so does the importance of understanding the dynamics of, and securing, these hosts. This dissertation shows how fast Internet-wide scanning provides a near-global perspective on edge hosts that enables researchers to uncover security weaknesses that only emerge at scale. First, I show that it is possible to efficiently scan the IPv4 address space. ZMap, a network scanner specifically architected for large-scale research studies, can survey the entire IPv4 address space from a single machine in under an hour at 97% of the theoretical maximum speed of gigabit Ethernet, with an estimated 98% coverage of publicly available hosts. Building on ZMap, I introduce Censys, a public service that maintains up-to-date and legacy snapshots of the hosts and services running across the public IPv4 address space. Censys enables researchers to efficiently ask a range of security questions. Next, I present four case studies that highlight how Internet-wide scanning can identify new classes of weaknesses that only emerge at scale, uncover unexpected attacks, shed light on previously opaque distributed systems on the Internet, and help understand the impact of consequential vulnerabilities. Finally, I explore how increased contention over IPv4 addresses introduces new challenges for performing large-scale empirical studies. I conclude with suggested directions that the research community needs to consider to retain the degree of visibility that Internet-wide scanning currently provides.

    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/138660/1/zakir_1.pd
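
    As a rough sanity check on the "under an hour" figure, the back-of-the-envelope Python sketch below assumes minimum-size Ethernet probes; the framing constants are textbook Ethernet numbers (64-byte frame plus 8 bytes of preamble and a 12-byte inter-frame gap, so 84 bytes on the wire), not values from the dissertation.

        BITS_PER_PROBE = 84 * 8      # minimum-size probe on the wire, in bits
        LINE_RATE = 1e9              # gigabit Ethernet, bits per second
        ADDRESSES = 2**32            # the entire IPv4 address space

        probes_per_second = LINE_RATE / BITS_PER_PROBE    # ~1.49 million probes/s
        scan_minutes = ADDRESSES / probes_per_second / 60
        print(f"{scan_minutes:.0f} minutes")              # ~48 minutes, under an hour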

    Security of Ubiquitous Computing Systems

    The chapters in this open access book arise out of the EU COST Action project Cryptacus, the objective of which was to improve and adapt existing cryptanalysis methodologies and tools to the ubiquitous computing framework. The cryptanalysis pursued lies along four axes: cryptographic models, cryptanalysis of building blocks, hardware and software security engineering, and security assessment of real-world systems. The authors are top-class researchers in security and cryptography, and the contributions are of value to researchers and practitioners in these domains. This book is open access under a CC BY license.

    Not invented here: Power and politics in public key infrastructure (PKI) institutionalisation at two global organisations.

    This dissertation explores the impact of power and politics on Public Key Infrastructure (PKI) institutionalisation. We argue that this process can be understood in power and politics terms because the infrastructure skews the control of organisational action in favour of dominant individuals and groups. Indeed, as our case studies show, shifting power balances is not only a desired outcome of PKI deployment; power itself drives institutionalisation. Therefore, despite the rational goals of improving security and reducing the total cost of ownership for IT, the PKIs in our field organisations have actually been catalysts for power and politics. Although current research focuses on external technical interoperation, we believe the emphasis should be on the interaction between the at once restrictive and flexible PKI technical features, organisational structures, the goals of sponsors, and potential user resistance. We use the Circuits of Power (CoP) framework to explain how a PKI conditions and is conditioned by power and politics. Drawing on the concepts of infrastructure and institution, we submit that PKIs are politically explosive in pluralistic, distributed global organisations because, by limiting freedom of action in favour of stability and security, they set the stage for disaffection. The resulting antipathy towards the infrastructure would not be a major concern if public key cryptography, which underpins PKI, had a centralised mechanism for enforcing the user discipline it relies on to work properly. However, since this discipline is not automatic, a PKI bereft of support from existing power arrangements faces considerable institutionalisation challenges. We assess these ideas in two case studies in London and Switzerland. In London, we explain how an oil company used its institutional structures to implement PKI as part of a desktop standard covering 105,000 employees. In Zurich and London, we give a power analysis of attempts by a global financial services firm to roll out PKI to over 70,000 users. Our dissertation makes an important contribution by showing that where PKI supporters engage in a shrewdly orchestrated campaign to knit the infrastructure into the existing institutional order, it becomes an accepted part of organisational life without much ceremony. In sum, we both fill gaps in the information security literature and extend knowledge of the efficacy of the Circuits of Power framework for conducting IS institutionalisation studies.

    Modeling Advanced Security Aspects of Key Exchange and Secure Channel Protocols

    Secure communication has become an essential ingredient of our daily life. Mostly unnoticed, cryptography protects our interactions today when we read emails or do banking over the Internet, withdraw cash at an ATM, or chat with friends on our smartphones. Security in such communication is enabled through two components. First, two parties that wish to communicate securely engage in a key exchange protocol in order to establish a shared secret key known only to them. The established key is then used in a follow-up secure channel protocol in order to protect the actual data communicated against eavesdropping or malicious modification on the way.

    In modern cryptography, security is formalized through abstract mathematical security models which describe the class of attacks a cryptographic system is supposed to withstand. Such models enable formal reasoning that no attacker can, in reasonable time, break the security of a system, assuming the security of its underlying building blocks or the hardness of certain mathematical problems. Provided the assumptions made are valid, security proofs in this sense rule out a certain class of attackers with well-defined capabilities. In order for such results to be meaningful for the cryptographic systems actually deployed, it is of utmost importance that security models capture the system's behavior and the threats faced in the 'real world' as accurately as possible, yet not be overly demanding in order to still allow for efficient constructions. If a security model fails to capture a realistic attack, such an attack remains viable against a cryptographic system despite a proof of security in that model, at worst voiding the system's overall practical security.

    In this thesis, we reconsider the established security models for key exchange and secure channel protocols. To this end, we study novel and advanced security aspects that have been introduced in recent designs of some of the most important deployed security protocols, or that have escaped a formal treatment so far. We introduce enhanced security models in order to capture these advanced aspects and apply them to analyze the security of major practical key exchange and secure channel protocols, either directly or through comparatively close generic protocol designs.

    Key exchange protocols have so far been understood as establishing a single secret key and then terminating their operation. This changed in recent practical designs, specifically Google's QUIC ("Quick UDP Internet Connections") protocol and the upcoming version 1.3 of the Transport Layer Security (TLS) protocol, the latter being the de facto standard for security protocols. Both protocols derive multiple keys in what we formalize in this thesis as a multi-stage key exchange (MSKE) protocol, with the derived keys potentially depending on each other and differing in cryptographic strength. Our MSKE security model allows us to capture such dependencies and differences between all established keys in a single framework. In this thesis, we apply our model to assess the security of both the QUIC and the TLS 1.3 key exchange designs. For QUIC, we are able to confirm the intended overall security, but at the same time we highlight an undesirable dependency between the two keys QUIC derives. For TLS 1.3, we begin by analyzing the main key exchange mode as well as a reduced resumption mode. Our analysis attests that TLS 1.3 achieves strong security for all keys derived, without undesired dependencies, in particular confirming several of this new TLS version's design goals. We then also compare the QUIC and TLS 1.3 designs with respect to a novel 'zero round-trip time' key exchange mode establishing an initial key with minimal latency, studying how differences in these designs affect the achievable key exchange security. As this thesis' last contribution in the realm of key exchange, we formalize the notion of key confirmation, which assures one party in a key exchange execution that the other party indeed holds the same key. Despite being frequently mentioned in practical protocol specifications, key confirmation had never been comprehensively treated. In particular, our formalization exposes an inherent, slight difference in the confirmation guarantees the two communication partners can obtain, and it enables us to analyze the key confirmation properties of TLS 1.3.

    Secure channels have so far been modeled as protecting a sequence of distinct messages using a single secret key. Our first contribution in the realm of channels originates from the observation that, in practice, secure channel protocols like TLS do not actually allow an application to transmit distinct, or atomic, messages. Instead, they provide applications with a streaming interface for transmitting a stream of bits without any inherent demarcation of individual messages. Necessarily, the security guarantees of such an interface differ significantly from those considered in cryptographic models so far. In particular, messages may be fragmented in transport, and the recipient may obtain the sent stream in a different fragmentation, which has in the past led to confusion and practical attacks on major application protocol implementations. In this thesis, we formalize such stream-based channels and introduce corresponding security notions of confidentiality and integrity capturing the inherently increased complexity. We then present a generic construction of a stream-based channel based on authenticated encryption with associated data (AEAD) that achieves the strongest security notions in our model and serves as a validation of the similar TLS channel design. We also study the security of applications whose messages are inherently atomic and which need to safely transport these messages over a streaming, i.e., possibly fragmenting, channel. Formalizing the desired security properties in terms of confidentiality and integrity in such a setting, we investigate and confirm the security of the widely adopted approach of encoding the application's messages into the continuous data stream. Finally, we study a novel paradigm employed in the TLS 1.3 channel design, namely updating the keys used to secure a channel during that channel's lifetime in order to strengthen its security. We propose and formalize the notion of multi-key channels deploying such sequences of keys and capture their advanced security properties in a hierarchical framework of confidentiality and integrity notions. We show that our hierarchy of notions naturally connects to the established notions for single-key channels, and we instantiate its strongest security notions with a generic AEAD-based construction. Being comparatively close to the TLS 1.3 channel protocol, our construction furthermore enables a comparative design discussion.
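
    To make the multi-key channel idea concrete, here is a minimal Python sketch of a key-update ratchet in the spirit of, but not identical to, the TLS 1.3 mechanism; the single-block HKDF-Expand and the labels are simplifications assumed for illustration. Each epoch derives a fresh AEAD key, and advancing the secret is one-way, so compromise of a later secret does not expose earlier epochs.

        import hashlib, hmac

        def hkdf_expand(secret: bytes, info: bytes, length: int = 32) -> bytes:
            # Single-block HKDF-Expand (RFC 5869); enough for outputs up to 32 bytes.
            return hmac.new(secret, info + b"\x01", hashlib.sha256).digest()[:length]

        def next_secret(secret: bytes) -> bytes:
            # One-way ratchet: older secrets cannot be recomputed from newer ones.
            return hkdf_expand(secret, b"traffic upd")

        secret = hkdf_expand(b"\x00" * 32, b"handshake out")  # toy stand-in for the key exchange output
        for epoch in range(3):
            key = hkdf_expand(secret, b"key", 16)             # per-epoch AEAD key
            print(epoch, key.hex())
            secret = next_secret(secret)                      # update keys mid-channel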

    Efficient and secured wireless monitoring systems for detection of cardiovascular diseases

    Cardiovascular Disease (CVD) is the number one killer of the modern era. The majority of deaths associated with CVD can be prevented entirely if the person struck by CVD is treated with urgency. This thesis is our effort to minimize the delay associated with existing tele-cardiology applications. We harness the computational power of modern-day mobile phones to detect abnormality in the Electrocardiogram (ECG). If an abnormality is detected, our innovative ECG compression algorithm running on the patient's mobile phone compresses and encrypts the ECG signal and then transmits it efficiently to the doctor or hospital services. We have achieved the highest compression ratio reported in the literature, 20.06 (95% compression), on ECG signals, without any loss of information. Our 3-layer permutation-cipher-based ECG encoding mechanism can raise the security strength substantially above that of conventional AES or DES algorithms. If, in the near future, a grid of supercomputers could compare a trillion trillion trillion (10^36) combinations of one ECG segment (comprising 500 ECG samples) per second for ECG morphology matching, it would take approximately 9.333 × 10^970 years to enumerate all the combinations. After receiving the compressed ECG packets, the doctor's mobile phone or the hospital server authenticates the patient using our proposed set of ECG-biometric-based authentication mechanisms. Once authenticated, the patients are diagnosed with our faster ECG diagnosis algorithms. In a nutshell, this thesis contains a set of algorithms that can save a CVD-affected patient's life by harnessing the power of mobile computation and wireless communication.
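
    The style of this brute-force estimate can be reproduced in a few lines of Python. The sketch below is illustrative only: it assumes the search space is the 500! orderings of a 500-sample segment, whereas the thesis's 9.333 × 10^970 figure comes from its specific 3-layer cipher, so the resulting magnitudes differ.

        import math

        SECONDS_PER_YEAR = 365.25 * 24 * 3600

        def log10_brute_force_years(log10_space: float, guesses_per_second: float) -> float:
            # log10 of the years needed to enumerate a search space at a fixed rate.
            return log10_space - math.log10(guesses_per_second) - math.log10(SECONDS_PER_YEAR)

        log10_space = math.lgamma(501) / math.log(10)       # log10(500!) ~ 1134
        print(log10_brute_force_years(log10_space, 1e36))   # ~1090, i.e. about 10^1090 years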