
    Achievable secrecy enhancement through joint encryption and privacy amplification

    In this dissertation we seek to enhance the secrecy of communications by drawing on both cryptographic and information-theoretic tools and metrics. Our objective is to unify tools and measures from the cryptography community with techniques and metrics from the information theory community that are used to provide privacy and confidentiality in communication systems. To this end we adopt encryption techniques, accompanied by privacy amplification tools, to achieve secrecy goals defined in terms of both information-theoretic and cryptographic metrics.

    Every secrecy scheme relies on some advantage of the legitimate users over adversaries, viewed as an asymmetry in the system, to deliver the required security for data transmission. In all of the schemes proposed in this dissertation, we exploit either an inherent asymmetry in the system or a proactively created advantage for the legitimate users over a passive eavesdropper to further enhance the secrecy of the communications. This advantage is then manipulated by means of privacy amplification and encryption tools to achieve secrecy goals evaluated against information-theoretic and cryptographic metrics.

    In our first work, discussed in Chapter 2, and the third work, explained in Chapter 4, we rely on a proactively established advantage for the legitimate users based on the eavesdropper's lack of knowledge about a shared source of data. Unlike these works, which assume an error-free physical channel, the second work, discussed in Chapter 3, considers a correlated erasure wiretap channel model. This work relies on a passive, internally existing advantage for the legitimate users, built upon the statistical, partial independence of the eavesdropper's channel errors from the errors in the main channel. We arrive at this secrecy advantage by exploiting an authenticated but insecure feedback channel.

    From the perspective of the tools employed, the first work (Chapter 2) considers a specific scenario: enhancing the secrecy of a particular block cipher, the Data Encryption Standard (DES), operating in cipher feedback (CFB) mode. This enhancement is achieved by deliberate noise injection and wiretap channel encoding as a privacy amplification technique against a resource-constrained eavesdropper. Compared to the first work, the third work adopts a more general framework in terms of both metrics and secrecy tools: it studies secrecy enhancement of a general cipher using universal hashing as a privacy amplification technique against an unbounded adversary. In this work we achieve exponential secrecy, meaning that the information leaked to the adversary, assessed both as mutual information (an information-theoretic measure) and as Eve's distinguishability (a cryptographic metric), decays at an exponential rate. In the second work, encrypted data frames are transmitted through an Automatic Repeat reQuest (ARQ) protocol to generate a common random source between the legitimate users, which is later transformed into information-theoretically secure encryption keys by means of privacy amplification based on universal hashing.

    Finally, future work extending the research accomplished in this dissertation is outlined. Proofs of the major theorems and lemmas are presented in the Appendix.
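
    The privacy-amplification step shared by these works can be made concrete. The sketch below is ours, not the dissertation's code: it compresses a partially secret shared string with a randomly seeded Toeplitz matrix, a standard 2-universal hash family, and all parameter values are illustrative assumptions. By the leftover hash lemma, Eve's information about the output key decays exponentially in the gap between the input's conditional min-entropy and the output length, which is the flavor of exponential secrecy claimed above.

        import numpy as np
        from scipy.linalg import toeplitz

        def privacy_amplify(weak_bits, seed_bits, out_len):
            # 2-universal hashing with a random Toeplitz matrix over GF(2):
            # first column = seed_bits[:out_len], first row = seed_bits[out_len-1:],
            # so n + out_len - 1 public seed bits are consumed in total.
            n = len(weak_bits)
            assert len(seed_bits) == n + out_len - 1
            T = toeplitz(seed_bits[:out_len], seed_bits[out_len - 1:])
            return T.dot(weak_bits) % 2

        rng = np.random.default_rng(0)
        n, m = 256, 64                        # m sits below Eve's residual uncertainty about x
        x = rng.integers(0, 2, n)             # partially secret string shared by Alice and Bob
        seed = rng.integers(0, 2, n + m - 1)  # hash seed; may be publicly revealed
        key = privacy_amplify(x, seed, m)     # near-uniform key from Eve's point of view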

    Stay Connected, Leave no Trace: Enhancing Security and Privacy in WiFi via Obfuscating Radiometric Fingerprints

    The intrinsic hardware imperfection of WiFi chipsets manifests itself in the transmitted signal, leading to a unique radiometric fingerprint. This fingerprint can be used as an additional means of authentication to enhance security. In fact, recent works propose practical fingerprinting solutions that can be readily implemented in commercial off-the-shelf devices. In this paper, we prove analytically and experimentally that these solutions are highly vulnerable to impersonation attacks. We also demonstrate that such a unique device-based signature can be abused to violate privacy by tracking the user device, and, as of today, users do not have any means to prevent such privacy attacks other than turning off the device. We propose RF-Veil, a radiometric fingerprinting solution that not only is robust against impersonation attacks but also protects user privacy by obfuscating the radiometric fingerprint of the transmitter for non-legitimate receivers. Specifically, we introduce a randomized pattern of phase errors to the transmitted signal such that only the intended receiver can extract the original fingerprint of the transmitter. In a series of experiments and analyses, we expose the vulnerability of adopting naive randomization to statistical attacks and introduce countermeasures. Finally, we show the efficacy of RF-Veil experimentally in protecting user privacy and enhancing security. More importantly, our proposed solution allows communicating with other devices which do not employ RF-Veil. Published in Proc. ACM Meas. Anal. Comput. Syst., Vol. 4, No. 3, Article 44 (December 2020); presented at ACM SIGMETRICS 2021.
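
    To make the obfuscation idea concrete, here is a minimal sketch of ours, not RF-Veil's published algorithm: the transmitter superimposes a pseudo-random per-subcarrier phase pattern derived from a secret shared with the intended receiver, who regenerates and removes it before fingerprint extraction, while any other observer measures a randomized fingerprint. Note that this naive uniform randomization is precisely the baseline the paper shows to be vulnerable to statistical attacks; RF-Veil adds countermeasures on top of it.

        import numpy as np

        def phase_pattern(shared_secret, frame_idx, n_subcarriers):
            # Per-frame pseudo-random phases, reproducible only with the secret.
            rng = np.random.default_rng([shared_secret, frame_idx])
            return rng.uniform(-np.pi, np.pi, n_subcarriers)

        def obfuscate(ofdm_symbols, shared_secret, frame_idx):
            # Transmitter: inject artificial phase errors on every subcarrier.
            p = phase_pattern(shared_secret, frame_idx, ofdm_symbols.shape[-1])
            return ofdm_symbols * np.exp(1j * p)

        def deobfuscate(ofdm_symbols, shared_secret, frame_idx):
            # Intended receiver: strip the pattern, exposing the true fingerprint.
            p = phase_pattern(shared_secret, frame_idx, ofdm_symbols.shape[-1])
            return ofdm_symbols * np.exp(-1j * p)

        # 52 data subcarriers of one frame; the round trip is lossless.
        syms = np.exp(1j * np.random.default_rng(7).uniform(-np.pi, np.pi, 52))
        rx = deobfuscate(obfuscate(syms, 123456, 0), 123456, 0)
        assert np.allclose(rx, syms)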

    A Critical Review of Physical Layer Security in Wireless Networking

    Wireless networking has kept evolving with additional features and increasing capacity. Meanwhile, the inherent characteristics of wireless networking make it more vulnerable than wired networks. In this thesis we present an extensive and comprehensive review of physical layer security in wireless networking. Different from cryptography, physical layer security, emerging from the information-theoretic assessment of secrecy, can leverage the properties of the wireless channel for security purposes, either by enabling secret communication without the need for keys or by facilitating the key agreement process. Hence we categorize the existing literature into two main branches, namely keyless security and key-based security. We trace the evolution of this area from the early theoretical works on the wiretap channel to its generalizations to more complicated scenarios, including multiple-user, multiple-access and multiple-antenna systems, and introduce not only theoretical results but also practical implementations. We critically and systematically examine the existing knowledge by analyzing the fundamental mechanics of each approach. Hence we are able to highlight the advantages and limitations of the proposed techniques, as well as their interrelations, and bring insights into future developments of this area.
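
    The keyless branch is anchored by Wyner's wiretap channel, whose central quantity is the secrecy capacity. As a standard illustration, not specific to this thesis, the degraded Gaussian wiretap channel has

        C_s = \left[ \log_2(1 + \mathrm{SNR}_B) - \log_2(1 + \mathrm{SNR}_E) \right]^{+}

    where SNR_B and SNR_E are the signal-to-noise ratios of the legitimate receiver and the eavesdropper, and [x]^+ = max(x, 0). A positive secrecy rate is thus achievable without any keys exactly when the main channel is better than the eavesdropper's; key-based schemes instead convert the same channel advantage into secret bits.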

    Binarism and indeterminacy in the novels of Thomas Pynchon

    Bibliography: pages 397-401. I attempt in this thesis to graft together a close critical, and predominantly thematic, reading of Thomas Pynchon's novels with selected issues treated in the work of Jacques Derrida on philosophy and textuality, illustrating how this work demands the revision and interrogation of several major critical issues, concepts, dualisms and presuppositions. The thesis consists of an Introduction which sets forth a brief rationale for the graft described above, followed by a short and unavoidably inadequate synopsis of Derrida's work with a brief review and explication of those of his 'concepts' which play an important role in my reading of Pynchon's texts. The Introduction is succeeded by three lengthy chapters in which I discuss, more or less separately, each of Pynchon's three novels to date. These are V. (1963), The Crying of Lot 49 (1966) and Gravity's Rainbow (1973), and I discuss them in the order of their appearance, devoting a chapter to each. I attempt to treat different but related issues, preoccupations, themes and tropes in each of the novels to avoid repeating myself, engaging the apparatuses derived from Derrida's writing where deemed strategic and instructive. I suggest, moreover, that several of the issues examined apropos the novel under consideration in any one chapter apply mutatis mutandis to the other novels. Each chapter therefore to some extent conducts a reading of the novels which it does not treat directly. Finally, supervising these separate chapters is a sustained focus on the epistemology of binarism and digitalism, and the conceptual dualisms which structure and inform major portions of the novels' thematic and rhetorical dimensions. The thesis concludes with a Bibliography and a summary Epilogue which seeks to assess briefly the 'achievement' of Pynchon's writing.

    John Barth's later fiction: intertextual readings, with emphasis on Letters (1979)

    This dissertation consists of five chapters. Chapter I serves as an introduction to intertextuality; it focuses on John Barth's narrative crisis and discusses structuralist and poststructuralist theories of intertextuality. Chapters II, III and IV discuss the agencies of reader, author and text respectively. Chapter II looks at structuralist and poststructuralist notions of reading and John Barth's parodic play with these notions; it also provides an in-depth analysis of the external and internal readers of LETTERS. Chapter III concentrates on the roles of the reader as re-writer and the author as re-arranger and looks closely at the roles of the different narratorial agents in LETTERS. Chapter IV starts off with a discussion of the discourse of the copy in postmodern culture and moves, via poststructuralist and narrativist mimesis, to different forms of repetition as developed by Soren Kierkegaard, Martin Heidegger and Jacques Derrida. Chapter V focuses on John Barth's rethinking of notions of authorship and authority. It first gives an historical introduction to authorship, starting in the Middle Ages, and then moves, via the eighteenth-century Samuel Richardson and the nineteenth-century Edgar Allan Poe and Soren Kierkegaard, to twentieth-century notions of authorship as developed by Harold Bloom, Michel Foucault and Jonathan Culler, to end with Jacques Derrida's signature theory. Bibliography: p. 340-356.

    Machine Learning Algorithm for the Scansion of Old Saxon Poetry

    Several scholars have designed tools to perform the automatic scansion of poetry in many languages, but none of these tools deals with Old Saxon or Old English. This project aims to be a first attempt to create a tool for these languages. We implemented a Bidirectional Long Short-Term Memory (BiLSTM) model to perform the automatic scansion of Old Saxon and Old English poems. Since this model uses supervised learning, we manually annotated the Heliand manuscript and used the resulting corpus as the labeled dataset to train the model. The evaluation of the algorithm's performance reached 97% accuracy and a 99% weighted average for precision, recall and F1 score. In addition, we tested the model on some verses from the Old Saxon Genesis and some from The Battle of Brunanburh, and we observed that the model predicted almost all Old Saxon metrical patterns correctly but misclassified the majority of the Old English input verses.
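
    As a hedged illustration of the kind of model described, with the project's actual label set and hyperparameters unknown to us and therefore replaced by placeholders, a BiLSTM scansion model can be framed as a per-token sequence tagger: each verse is a sequence of syllable tokens, and the network emits one metrical label per token.

        import torch
        import torch.nn as nn

        class ScansionBiLSTM(nn.Module):
            # Minimal BiLSTM tagger: one metrical label per syllable token.
            def __init__(self, vocab_size, num_labels, emb_dim=64, hidden=128):
                super().__init__()
                self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
                self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                                    bidirectional=True)
                self.out = nn.Linear(2 * hidden, num_labels)

            def forward(self, token_ids):             # (batch, seq_len)
                h, _ = self.lstm(self.emb(token_ids))
                return self.out(h)                    # (batch, seq_len, num_labels)

        # Supervised training on the annotated corpus would minimize per-token
        # cross-entropy, ignoring padded positions:
        model = ScansionBiLSTM(vocab_size=2000, num_labels=5)
        logits = model(torch.randint(1, 2000, (8, 20)))   # dummy batch of 8 verses
        loss_fn = nn.CrossEntropyLoss(ignore_index=-100)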

    Towards adaptive anomaly detection systems using Boolean combination of hidden Markov models

    Anomaly detection monitors for significant deviations from normal system behavior. Hidden Markov Models (HMMs) have been successfully applied in many intrusion detection applications, including anomaly detection from sequences of operating system calls. In practice, anomaly detection systems (ADSs) based on HMMs typically generate false alarms because they are designed using limited representative training data and prior knowledge. However, since new data may become available over time, an important feature of an ADS is the ability to accommodate newly acquired data incrementally, after it has originally been trained and deployed for operations.

    Incremental re-estimation of HMM parameters raises several challenges. HMM parameters should be updated from new data without requiring access to the previously learned training data and without corrupting previously learned models of normal behavior. Standard techniques for training HMM parameters involve iterative batch learning and hence must observe the entire training data prior to updating HMM parameters. Given new training data, these techniques must restart the training procedure using all (new and previously accumulated) data. Moreover, a single HMM system for incremental learning may not adequately approximate the underlying data distribution of the normal process, due to the many local maxima in the solution space. Ensemble methods have been shown to alleviate knowledge corruption by combining the outputs of classifiers trained independently on successive blocks of data.

    This thesis makes contributions at the HMM and decision levels towards improved accuracy, efficiency and adaptability of HMM-based ADSs. It first presents a survey of techniques found in the literature that may be suitable for incremental learning of HMM parameters, and assesses the challenges these techniques face when applied to incremental learning scenarios in which the new training data is either limited or abundant. An efficient alternative to the Forward-Backward algorithm is then proposed to reduce the memory complexity, without increasing the computational overhead, of estimating HMM parameters from fixed-size abundant data. Improved techniques for incremental learning of HMM parameters are then proposed to accommodate new data over time while maintaining a high level of performance. However, knowledge corruption caused by a single HMM with a fixed number of states remains an issue.

    To overcome such limitations, this thesis presents an efficient system that accommodates new data using a learn-and-combine approach at the decision level. When a new block of training data becomes available, a new pool of base HMMs is generated from the data using different numbers of HMM states and random initializations. The responses of the newly trained HMMs are then combined with those of the previously trained HMMs in receiver operating characteristic (ROC) space using novel Boolean combination (BC) techniques, as sketched below. The learn-and-combine approach allows a diversified ensemble of HMMs (EoHMMs) to be selected from the pool, and adapts the Boolean fusion functions and thresholds for improved performance while pruning redundant base HMMs. The proposed system is capable of changing its desired operating point during operations, and this point can be adjusted to changes in prior probabilities and costs of errors.
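
    The decision-level idea can be illustrated with a small sketch of ours, not the thesis code: crisp detector responses are fused through Boolean functions, each fused detector yields a new point in ROC space, and BC techniques search over such functions and decision thresholds, retaining combinations that push the ROC convex hull upward. The detector statistics below are synthetic placeholders.

        import numpy as np

        def roc_point(decisions, labels):
            # (fpr, tpr) of one crisp detector's 0/1 decisions.
            pos, neg = labels == 1, labels == 0
            return decisions[neg].mean(), decisions[pos].mean()

        def boolean_combinations(d1, d2):
            # A few Boolean fusion functions over two crisp responses.
            return {
                "AND": d1 & d2,    # fewer false alarms, lower recall
                "OR":  d1 | d2,    # higher recall, more false alarms
                "XOR": d1 ^ d2,
            }

        # Two synthetic detectors: ~80%/70% hit rates, ~10%/5% false alarm rates.
        rng = np.random.default_rng(1)
        labels = rng.integers(0, 2, 1000)
        d1 = (labels & (rng.random(1000) < 0.80)) | ((1 - labels) & (rng.random(1000) < 0.10))
        d2 = (labels & (rng.random(1000) < 0.70)) | ((1 - labels) & (rng.random(1000) < 0.05))
        for name, fused in boolean_combinations(d1, d2).items():
            print(name, roc_point(fused, labels))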
    During simulations conducted for incremental learning from successive data blocks, using both synthetic and real-world system call data sets, the proposed learn-and-combine approach has been shown to achieve a higher level of accuracy than all related techniques. In particular, it sustains a significantly higher level of accuracy than re-estimating the parameters of a single best HMM for each new block of data using either the reference batch learning or the proposed incremental learning techniques. It also outperforms static fusion techniques such as majority voting for combining the responses of new and previously generated pools of HMMs. Ensemble selection techniques have been shown to form compact EoHMMs for operations by selecting diverse and accurate base HMMs from the pool while maintaining or improving the overall system accuracy. Pruning has been shown to prevent pool sizes from increasing indefinitely with the number of data blocks acquired over time; therefore, the storage space for accommodating HMM parameters and the computational costs of the selection techniques are reduced without negatively affecting the overall system performance. The proposed techniques are general in that they can be employed to adapt HMM-based systems to new data within a wide range of application domains. More importantly, the proposed Boolean combination techniques can be employed to combine diverse responses from any set of crisp or soft one- or two-class classifiers trained on different data or features, or trained according to different parameters, or from different detectors trained on the same data. In particular, they can be effectively applied when training data is limited and test data is imbalanced.