
    Security for correlated sources across wiretap network

    A thesis submitted in fulfillment of the requirements for the degree of Doctor of Philosophy in the School of Electrical and Information Engineering, Faculty of Engineering, University of the Witwatersrand, July 2015. This thesis presents research conducted on the security aspects of correlated sources across a wiretap network. Correlated sources are present in communication systems where protocols ensure that there is some predetermined information for sources to transmit. Systems that contain correlated sources include, for example, broadcast channels, smart grid systems, wireless sensor networks and social media networks. In these systems there exists common information between the nodes in a network, which gives rise to security risks, as common information can be determined about more than one source. In this work the security aspects of correlated sources are investigated. Correlated source coding in terms of the Slepian-Wolf theorem is investigated to determine the amount of information leakage for various correlated source models. The perfect secrecy approach developed by Shannon has also been incorporated as a security approach. In order to explore these security aspects, the techniques employed range from the typical sequences used to prove the Slepian-Wolf theorem to coding methods incorporating matrix partitions for correlated sources. A generalized correlated source model is presented and the procedure to determine the information leakage is initially illustrated using this model. A novel scenario for two correlated sources across a channel with eavesdroppers is also investigated. It is a basic model catering for the correlated source applications that have been detailed. The information leakage quantification is provided, where bounds specify the quantity of information leaked for various cases of eavesdropped channel information. The required transmission rates for perfect secrecy when some channel information has been wiretapped are then determined, followed by a method to reduce the key length required for perfect secrecy. The implementation thereafter provided shows how the information leakage is determined practically. In the same way, using the information leakage quantification, Shannon's cipher system approach and a practical implementation, a novel model of two correlated sources where channel information and some source data symbols (predetermined information) are wiretapped is investigated. The adversary in this situation has access to more information than if a link alone were wiretapped, and can thus determine more about a particular source. This scenario caters for an application where the eavesdropper has access to some predetermined information. The security aspects and coding implementation have further been developed for a novel correlated source model with a heterogeneous encoding method. The model caters for situations where a wiretapper is able to easily access a particular source. The interesting link between information theory and coding theory is explored for the novel models presented in this research. A matrix partition method is utilized and the information leakage for various cases of wiretapped syndromes is presented. The research explores the security of correlated sources in the presence of wiretappers. Both the information leakage and Shannon's cipher system approach are used to achieve these security aspects. The implementation shows the practicality of using these security aspects in communication systems. The research contained herein is significant, as is evident from the various applications it may be used for, and to the author's knowledge it is novel.
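As a concrete anchor for the Slepian-Wolf setting that the thesis builds on, the sketch below evaluates the admissible rate region for a simple doubly symmetric binary source pair. The source model, the parameter value and the leakage remark in the comments are illustrative assumptions, not the thesis's own models or bounds.

```python
# Illustrative sketch (assumed doubly symmetric binary source model): Slepian-Wolf
# rate bounds for correlated sources X, Y with Y = X XOR E, E ~ Bernoulli(p).
# The codeword on X's link carries at most R_X bits/symbol, so a wiretapper of
# that link alone learns at most R_X = H(X|Y) bits/symbol about X.
from math import log2

def h2(p: float) -> float:
    """Binary entropy function."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def slepian_wolf_bounds(p: float):
    """Rate constraints for X ~ Bernoulli(1/2), Y = X XOR Bernoulli(p)."""
    return {
        "R_X >= H(X|Y)":       h2(p),
        "R_Y >= H(Y|X)":       h2(p),
        "R_X + R_Y >= H(X,Y)": 1.0 + h2(p),
    }

for name, bound in slepian_wolf_bounds(0.1).items():
    print(f"{name}: {bound:.3f} bits/symbol")
```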

    Asymptotic Estimates in Information Theory with Non-Vanishing Error Probabilities

    This monograph presents a unified treatment of single- and multi-user problems in Shannon's information theory in which we depart from the requirement that the error probability decays asymptotically in the blocklength. Instead, the error probabilities for various problems are bounded above by a non-vanishing constant and the spotlight is shone on achievable coding rates as functions of the growing blocklengths. This represents the study of asymptotic estimates with non-vanishing error probabilities. In Part I, after reviewing the fundamentals of information theory, we discuss Strassen's seminal result for binary hypothesis testing, where the type-I error probability is non-vanishing and the rate of decay of the type-II error probability with a growing number of independent observations is characterized. In Part II, we use this basic hypothesis testing result to develop second- and, sometimes, even third-order asymptotic expansions for point-to-point communication. In Part III, we consider network information theory problems for which the second-order asymptotics are known. These problems include some classes of channels with random state, the multiple-encoder distributed lossless source coding (Slepian-Wolf) problem and special cases of the Gaussian interference and multiple-access channels. Finally, we discuss avenues for further research.
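For orientation, the prototypical result of this kind is the second-order (normal) approximation for a point-to-point channel with capacity C and dispersion V; the statement below is the standard form of this expansion, quoted for context rather than taken verbatim from the monograph.

```latex
% Normal approximation to the maximum code size M*(n, epsilon) at blocklength n
% and non-vanishing error probability epsilon; Q^{-1} is the inverse Gaussian
% complementary CDF.
\log M^*(n,\varepsilon) \;=\; nC \;-\; \sqrt{nV}\,Q^{-1}(\varepsilon) \;+\; O(\log n).
```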

    Distributed secrecy for information theoretic sensor network models

    This dissertation presents a novel problem inspired by the characteristics of sensor networks. The basic setup throughout the dissertation is that a set of sensor nodes encipher their data without collaboration and without any prior shared secret material. The challenge is posed by an eavesdropper who intercepts a subset of the enciphered data and wishes to gain knowledge of the uncoded data. This problem is challenging and novel given that the eavesdropper is assumed to know everything, including the secret cryptographic keys used by both the encoders and decoders. We study the above problem using information theoretic models as a necessary first step towards an understanding of the characteristics of this system problem. This dissertation contains four parts. The first part deals with noiseless channels, and the goal is for sensor nodes to both source code and encipher their data. We derive inner and outer regions of the capacity region (i.e., the set of all source coding and equivocation rates) for this problem under general distortion constraints. The main conclusion in this part is that unconditional secrecy is unachievable unless the distortion is maximal, rendering the data useless. In the second part we thus provide a practical coding scheme based on distributed source coding using syndromes (DISCUS) that provides secrecy beyond the equivocation measure, i.e., secrecy on each symbol in the message. The third part deals with discrete memoryless channels, and the goal is for sensor nodes to both channel code and encipher their data. We derive inner and outer regions to the secrecy capacity region, i.e., the set of all channel coding rates that achieve (weak) unconditional secrecy. The main conclusion in this part is that interference allows (weak) unconditional secrecy to be achieved, in contrast with the first part of this dissertation. The fourth part deals with wireless channels with fading and additive Gaussian noise. We derive a general outer region and an inner region based on an equal-SNR assumption, and show that the two are partially tight when the maximum available user powers are admissible.
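To make the DISCUS idea concrete, the sketch below bins one source with the parity-check matrix of a (7,4) Hamming code and lets the decoder recover it from its syndrome plus correlated side information. The code choice, the single-bit correlation model and the function names are illustrative assumptions, not the dissertation's construction.

```python
# Minimal DISCUS-style syndrome coding sketch: sources X, Y are assumed to
# differ in at most one bit, so X can be conveyed with a 3-bit syndrome.
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column i is the binary
# representation of i+1, so a single-bit error is located by its syndrome.
H = np.array([[int(b) for b in format(i + 1, "03b")] for i in range(7)]).T  # 3 x 7

def syndrome(word):
    """Return the 3-bit syndrome H @ word over GF(2)."""
    return (H @ word) % 2

def discus_decode(s_x, y):
    """Recover x from its syndrome and side information y (Hamming distance <= 1)."""
    e_syn = (s_x + syndrome(y)) % 2                 # syndrome of the difference x XOR y
    e = np.zeros(7, dtype=int)
    if e_syn.any():
        pos = int("".join(map(str, e_syn)), 2) - 1  # column index with that syndrome
        e[pos] = 1
    return (y + e) % 2

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 7)
y = x.copy()
y[rng.integers(7)] ^= 1                             # correlated side information
assert np.array_equal(discus_decode(syndrome(x), y), x)
print("7 source bits conveyed with a 3-bit syndrome")
```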

    Reliable Inference from Unreliable Agents

    Distributed inference using multiple sensors has been an active area of research since the emergence of wireless sensor networks (WSNs). Several researchers have addressed the design issues to ensure optimal inference performance in such networks. The central goal of this thesis is to analyze distributed inference systems with potentially unreliable components and to design strategies that ensure reliable inference in such systems. The inference process can be that of detection, estimation or classification, and the components/agents in the system can be sensors and/or humans. The system components can be unreliable for a variety of reasons: faulty sensors, security attacks causing sensors to send falsified information, or unskilled human workers sending imperfect information. This thesis first quantifies the effect of such unreliable agents on the inference performance of the network and then designs schemes that ensure reliable overall inference. In the first part of this thesis, we study the case when only sensors are present in the system, referred to as sensor networks. For sensor networks, the presence of malicious sensors, referred to as Byzantines, is considered. Byzantines are sensors that inject false information into the system. In such systems, the effect of Byzantines on the overall inference performance is characterized in terms of the optimal attack strategies. Game-theoretic formulations are explored to analyze two-player interactions. Next, Byzantine mitigation schemes are designed that address the problem from the system's perspective. These mitigation schemes are of two kinds: Byzantine identification schemes and Byzantine tolerant schemes. Using learning-based techniques, Byzantine identification schemes are designed that learn the identity of Byzantines in the network and use this information to improve system performance. When such schemes are not possible, Byzantine tolerant schemes using error-correcting codes are developed that tolerate the effect of Byzantines and maintain good performance in the network. Error-correcting codes help in correcting the erroneous information from these Byzantines and thereby counter their attack. The second line of research in this thesis considers humans-only networks, referred to as human networks. A similar research strategy is adopted for human networks, where the effect of unskilled humans sharing beliefs with a central observer called the CEO is analyzed, and the loss in performance due to the presence of such unskilled humans is characterized. This problem falls under the family of problems in the information theory literature referred to as the CEO problem, but for belief sharing. The asymptotic behavior of the minimum achievable mean squared error distortion at the CEO is studied in the limit when the number of agents L and the sum rate R tend to infinity. An intermediate regime of performance between the exponential behavior in discrete CEO problems and the 1/R behavior in Gaussian CEO problems is established. This result can be summarized as the fact that sharing beliefs (uniform) is fundamentally easier in terms of convergence rate than sharing measurements (Gaussian), but sharing decisions is even easier (discrete). Besides theoretical analysis, experimental results are reported for experiments designed in collaboration with cognitive psychologists to understand the behavior of humans in the network.
    The act of fusing decisions from multiple agents is observed for humans and the behavior is statistically modeled using hierarchical Bayesian models. The implications of such modeling on the design of large human-machine systems are discussed. Furthermore, an error-correcting-codes-based scheme is proposed to improve system performance in the presence of unreliable humans in the inference process. For a crowdsourcing system consisting of unskilled human workers providing unreliable responses, the scheme helps in designing easy-to-perform tasks and also mitigates the effect of erroneous data. The benefits of using the proposed approach in comparison to the majority-voting-based approach are highlighted using simulated and real datasets. In the final part of the thesis, a human-machine inference framework is developed where humans and machines interact to perform complex tasks in a faster and more efficient manner. A mathematical framework is built to understand the benefits of human-machine collaboration. Such a study is extremely important for current scenarios where humans and machines constantly interact with each other to perform even the simplest of tasks. While machines perform best in some tasks, humans still give better results in tasks such as identifying new patterns. By using humans and machines together, one can extract complete information about a phenomenon of interest. Such an architecture, referred to as Human-Machine Inference Networks (HuMaINs), provides promising results for the two cases of human-machine collaboration: machine as a coach and machine as a colleague. For simple systems, we demonstrate tangible performance gains from such collaboration, which provides design modules for larger and more complex human-machine systems. However, the details of such larger systems need to be further explored.
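As a point of reference for the coding-versus-majority-voting comparison mentioned above, the toy calculation below gives the error probability of plain majority voting over independent, equally unreliable agents; the agent error rate and crowd sizes are assumed values, and this is not the thesis's coding-based fusion scheme.

```python
# Toy baseline: probability that a majority vote over n unreliable agents,
# each wrong independently with probability p, produces the wrong decision.
from math import comb

def majority_error(n: int, p: float) -> float:
    """P(more than half of n independent agents are wrong), n odd."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in (1, 5, 15, 45):
    print(f"n={n:2d}  P(fused decision wrong) = {majority_error(n, 0.3):.4f}")
# The error decays exponentially in n; this is the baseline that coding-based
# fusion schemes aim to beat when agent reliabilities vary or tasks differ.
```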

    Reconciliation for Satellite-Based Quantum Key Distribution

    This thesis reports on reconciliation schemes based on Low-Density Parity-Check (LDPC) codes in Quantum Key Distribution (QKD) protocols. It particularly focuses on the trade-off between the complexity of such reconciliation schemes and the QKD key growth, a trade-off that is critical to QKD system deployments. A key outcome of the thesis is the design of optimised schemes that maximise the QKD key growth based on finite-size keys for a range of QKD protocols. Beyond this design, the other four main contributions of the thesis are summarised as follows. First, I show that standardised short-length LDPC codes can be used for a special Discrete Variable QKD (DV-QKD) protocol and highlight the trade-off between the secret key throughput and the communication latency in space-based implementations. Second, I compare the decoding time and secret key rate performance of typical LDPC-based rate-adaptive and non-adaptive schemes for different channel conditions and show that the design of mother codes for the rate-adaptive schemes is critical but remains an open question. Third, I demonstrate a novel design strategy that minimises the probability of the reconciliation process being the bottleneck of the overall DV-QKD system whilst achieving a target QKD rate (in bits per second) with a target ceiling on the failure probability, using customised LDPC codes. Fourth, in the context of Continuous Variable QKD (CV-QKD), I construct an in-depth optimisation analysis taking both the security and the reconciliation complexity into account. The outcome of the last contribution leads to a reconciliation scheme delivering the highest secret key rate for a given processor speed, which allows for an optimal solution to CV-QKD reconciliation.
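To illustrate why reconciliation efficiency matters for key growth, the snippet below evaluates the standard asymptotic BB84-style secret-key fraction with an error-correction efficiency factor f; the formula and the example numbers are common textbook assumptions, not the finite-size analysis developed in the thesis.

```python
# Sketch of the reconciliation-leakage vs. key-growth trade-off using the
# standard asymptotic BB84-style rate with LDPC efficiency f >= 1 (assumed values).
from math import log2

def h2(p: float) -> float:
    """Binary entropy."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def secret_fraction(qber: float, f: float = 1.1) -> float:
    """Asymptotic secret-key fraction: 1 - h(Q) from privacy amplification,
    minus f * h(Q) leaked during error correction."""
    return max(0.0, 1.0 - h2(qber) - f * h2(qber))

for f in (1.0, 1.1, 1.2):
    print(f"efficiency f={f:.1f}: secret fraction at 3% QBER = {secret_fraction(0.03, f):.3f}")
```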

    Quantization in acquisition and computation networks

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 151-165). In modern systems, it is often desirable to extract relevant information from large amounts of data collected at different spatial locations. Applications include sensor networks, wearable health-monitoring devices and a variety of other systems for inference. Several existing source coding techniques, such as Slepian-Wolf and Wyner-Ziv coding, achieve asymptotic compression optimality in distributed systems. However, these techniques are rarely used in sensor networks because of decoding complexity and prohibitively long code length. Moreover, the fundamental limits that arise from existing techniques are intractable to describe for a complicated network topology or when the objective of the system is to perform some computation on the data rather than to reproduce the data. This thesis bridges the technological gap between the needs of real-world systems and the optimistic bounds derived from asymptotic analysis. Specifically, we characterize fundamental trade-offs when the desired computation is incorporated into the compression design and the code length is one. To obtain both performance guarantees and achievable schemes, we use high-resolution quantization theory, which is complementary to the Shannon-theoretic analyses previously used to study distributed systems. We account for varied network topologies, such as those where sensors are allowed to collaborate or the communication links are heterogeneous. In these settings, a small amount of intersensor communication can provide a significant improvement in compression performance. As a result, this work suggests new compression principles and network design for modern distributed systems. Although the ideas in the thesis are motivated by current and future sensor network implementations, the framework applies to a wide range of signal processing questions. We draw connections between the fidelity criteria studied in the thesis and distortion measures used in perceptual coding. As a consequence, we determine the optimal quantizer for expected relative error (ERE), a measure that is widely useful but is often neglected in the source coding community. We further demonstrate that applying the ERE criterion to psychophysical models can explain the Weber-Fechner law, a longstanding hypothesis of how humans perceive the external world. Our results are consistent with the hypothesis that human perception is Bayesian optimal for information acquisition conditioned on limited cognitive resources, thereby supporting the notion that the brain is efficient at acquisition and adaptation. By John Z. Sun. Ph.D.
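As a small numerical illustration of the expected-relative-error criterion mentioned above, the snippet below compares a plain uniform quantizer with a uniform quantizer applied in the log domain; the source distribution, range and number of levels are assumptions chosen for the demo, and the logarithmic compander is used only as a stand-in consistent with the Weber-Fechner connection, not as the thesis's optimal design.

```python
# Toy comparison: under expected relative squared error, quantizing log(x)
# uniformly beats quantizing x uniformly (assumed source and parameters).
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.1, 10.0, 200_000)          # positive source samples (assumed)
levels = 32

def uniform_quantize(v, lo, hi, n):
    """Midpoint-reconstruction uniform quantizer with n cells on [lo, hi)."""
    step = (hi - lo) / n
    return lo + (np.floor((v - lo) / step) + 0.5) * step

linear = uniform_quantize(x, 0.1, 10.0, levels)
logq = np.exp(uniform_quantize(np.log(x), np.log(0.1), np.log(10.0), levels))

def rel_err(xhat):
    """Mean relative squared error, the ERE-style criterion."""
    return np.mean(((x - xhat) / x) ** 2)

print(f"linear quantizer     ERE = {rel_err(linear):.2e}")
print(f"log-domain quantizer ERE = {rel_err(logq):.2e}")
```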

    Systematic hybrid analog/digital signal coding

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 201-206). This thesis develops low-latency, low-complexity signal processing solutions for systematic source coding, or source coding with side information at the decoder. We consider an analog source signal transmitted through a hybrid channel that is the composition of two channels: a noisy analog channel through which the source is sent unprocessed and a secondary rate-constrained digital channel; the source is processed prior to transmission through the digital channel. The challenge is to design a digital encoder and decoder that provide a minimum-distortion reconstruction of the source at the decoder, which has observations of analog and digital channel outputs. The methods described in this thesis have importance to a wide array of applications. For example, in the case of in-band on-channel (IBOC) digital audio broadcast (DAB), an existing noisy analog communications infrastructure may be augmented by a low-bandwidth digital side channel for improved fidelity, while compatibility with existing analog receivers is preserved. Another application is a source coding scheme which devotes a fraction of available bandwidth to the analog source and the rest of the bandwidth to a digital representation. This scheme is applicable in a wireless communications environment (or any environment with unknown SNR), where analog transmission has the advantage of a gentle roll-off of fidelity with SNR. A very general paradigm for low-latency, low-complexity source coding is composed of three basic cascaded elements: 1) a space rotation, or transformation, 2) quantization, and 3) lossless bitstream coding. The paradigm has been applied with great success to conventional source coding, and it applies equally well to systematic source coding. Focusing on the case involving a Gaussian source, Gaussian channel and mean-squared distortion, we determine optimal or near-optimal components for each of the three elements, each of which has analogous components in conventional source coding. The space rotation can take many forms such as linear block transforms, lapped transforms, or subband decomposition, all for which we derive conditions of optimality. For a very general case we develop algorithms for the design of locally optimal quantizers. For the Gaussian case, we describe a low-complexity scalar quantizer, the nested lattice scalar quantizer, that has performance very near that of the optimal systematic scalar quantizer. Analogous to entropy coding for conventional source coding, Slepian-Wolf coding is shown to be an effective lossless bitstream coding stage for systematic source coding. By Richard J. Barron. Ph.D.
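The following toy sketch conveys the flavour of a nested (modulo) scalar quantizer for systematic coding: the digital channel carries only a coset index, and the decoder resolves the remaining ambiguity using the noisy analog observation. The step size, modulus and noise levels are assumptions for illustration, not parameters from the thesis.

```python
# Modulo scalar quantizer sketch: send the fine-quantizer index mod M digitally;
# the decoder picks the coset member closest to the analog observation y = x + noise.
import numpy as np

DELTA, M = 0.1, 8                              # fine step and modulus (3 digital bits/sample)

def encode(x):
    """Digital channel payload: fine-quantizer index modulo M."""
    return int(np.round(x / DELTA)) % M

def decode(index, y):
    """Reconstruction point in the announced coset nearest to the analog observation."""
    k = np.round((y / DELTA - index) / M)      # nearest coset representative
    return (k * M + index) * DELTA

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 10_000)
y = x + rng.normal(0.0, 0.05, x.size)          # noisy analog channel output (assumed SNR)
xhat = np.array([decode(encode(xi), yi) for xi, yi in zip(x, y)])
print("MSE with analog only :", np.mean((x - y) ** 2))
print("MSE analog + 3 bits  :", np.mean((x - xhat) ** 2))
```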

    D10.3: Description of Internet Science Curriculum

    This document presents a proposition for a reference Internet Science curriculum that can be adapted and implemented jointly, or in collaboration, by different universities. The construction of the curriculum represents a challenge and an opportunity for the NoE, as it represents the essence of Internet Science. What are the main aspects to be taught? What is the kernel? These questions are answered by the curriculum. The curriculum is a reference document and a guideline for the different universities wishing to implement it. It has to allow for adaptation to heterogeneous national and institutional contexts. Nonetheless, our goal is to have the curriculum provide a definitive basis for a universally recognised degree, considering the related constraints in order to ensure compatibility. In this way, the curriculum presented here is the root of a range of curricula; it may lead to a degree within an existing departmental course, an autonomous and dedicated degree, or a component of new joint degrees. This document presents the process that leads to the construction of the curriculum, followed by the main goal, the scientific content and issues related to possible implementation. The version presented here is a preliminary version. This is due to several reasons, the most noticeable being that the choice of the implementation schema is currently under study (deliverable due for the end of 2014) and its input might influence the form or content of the curriculum. On the other hand, we will start collecting feedback, which might also trigger changes. The curriculum in its current form has been the subject of a communication at the WebSci Education Workshop, held in conjunction with the Web Science 2014 Conference in Bloomington, Indiana, June 2014. We had positive feedback during the conference from the web-science community. The six-theme balanced structure was particularly appreciated.

    A Physical Layer, Zero-round-trip-time, Multi-factor Authentication Protocol

    Lightweight physical layer security schemes that have recently attracted a lot of attention include physical unclonable functions (PUFs), RF fingerprinting / proximity-based authentication and secret key generation (SKG) from wireless fading coefficients. In this paper, we propose a fast, privacy-preserving, zero-round-trip-time (0-RTT), multi-factor authentication protocol that, for the first time, brings all these elements together, i.e., PUFs, proximity estimation and SKG. We use Kalman filters to extract proximity estimates from real measurements of received signal strength (RSS) in an indoor environment to provide soft fingerprints for node authentication. By leveraging node mobility, a multitude of such fingerprints are extracted to provide resistance to impersonation-type attacks, e.g., a false base station. Upon removal of the proximity fingerprints, the residual measurements are then used as an entropy source for the distillation of symmetric keys, which are subsequently used as resumption secrets in a 0-RTT fast authentication protocol. Both schemes are incorporated in a challenge-response PUF-based mutual authentication protocol, shown to be secure through formal proofs using Burrows, Abadi and Needham (BAN) and Mao and Boyd (MB) logic, as well as the Tamarin prover. Our protocol showcases that in future networks purely physical layer security solutions are tangible and can provide an alternative to public key infrastructure in specific scenarios.
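As a sketch of the proximity-extraction step, the scalar Kalman filter below smooths noisy RSS samples under a slowly varying state model; the noise variances, the random-walk model and the synthetic data are assumptions for illustration, not the paper's measured parameters or exact filter.

```python
# Minimal scalar Kalman filter for smoothing RSS (dB) into a proximity-related estimate.
import numpy as np

def kalman_rss(measurements, q=0.05, r=4.0):
    """Track the underlying RSS level with a random-walk state model.
    q: process noise variance, r: measurement noise variance (assumed values)."""
    x, p = measurements[0], 1.0                          # initial state and covariance
    estimates = []
    for z in measurements:
        p = p + q                                        # predict (slowly varying state)
        k = p / (p + r)                                  # Kalman gain
        x = x + k * (z - x)                              # update with new RSS sample
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(3)
true_rss = -60 + np.cumsum(rng.normal(0, 0.1, 200))      # slowly drifting RSS in dBm
observed = true_rss + rng.normal(0, 2.0, 200)            # noisy per-packet measurements
smoothed = kalman_rss(observed)
print("raw RMS error     :", np.sqrt(np.mean((observed - true_rss) ** 2)))
print("filtered RMS error:", np.sqrt(np.mean((smoothed - true_rss) ** 2)))
```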