
    Composability in quantum cryptography

    In this article, we review several aspects of composability in the context of quantum cryptography. The first part is devoted to key distribution. We discuss the security criteria that a quantum key distribution protocol must fulfill to allow its safe use within a larger security application (e.g., for secure message transmission). To illustrate the practical use of composability, we show how to generate a continuous key stream by sequentially composing rounds of a quantum key distribution protocol. In the second part, we take a more general point of view, which is necessary for the study of cryptographic situations involving, for example, mutually distrustful parties. We explain the universal composability framework and state the composition theorem, which guarantees that secure protocols can be securely composed into larger applications. Comment: 18 pages, 2 figures
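    The composition theorem the abstract refers to can be sketched in standard trace-distance notation (a paraphrase of the usual statement, not the paper's exact formulation):

```latex
% If a real protocol \pi is \varepsilon-close to an ideal
% functionality \mathcal{F} in trace distance (distinguishing
% advantage), we call it \varepsilon-secure. Sequentially composing
% n rounds of such a QKD protocol yields a key stream whose total
% deviation from ideal is bounded by the sum of the round errors:
\[
  d\!\left(\pi^{\circ n}, \mathcal{F}^{\circ n}\right)
  \;\le\; \sum_{i=1}^{n} \varepsilon_i \;=\; n\,\varepsilon
  \qquad \text{(when } \varepsilon_i = \varepsilon \text{ for all } i\text{)}.
\]
```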

    Simulatable security for quantum protocols

    The notion of simulatable security (reactive simulatability, universal composability) is a powerful tool for allowing the modular design of cryptographic protocols (composition of protocols) and for showing the security of a given protocol embedded in a larger one. Recently, these methods have received much attention in the quantum cryptographic community. We give a short introduction to simulatable security in general and proceed by sketching the many different definitional choices together with their advantages and disadvantages. Based on the reactive simulatability modelling of Backes, Pfitzmann and Waidner, we then develop a quantum security model. By following the BPW modelling as closely as possible, we show that composable security definitions for quantum protocols can strongly profit from their classical counterparts, since most of the definitional choices in the modelling are independent of the underlying machine model. In particular, we give a proof of the simple composition theorem in our framework. Comment: Added proof of combination lemma; added comparison to the model of Ben-Or and Mayers; minor corrections

    Security Improvements for the S-MIM Asynchronous Return Link

    S-MIM is a hybrid terrestrial and satellite system that enables efficient, high-performance communication in the return link. For communication between a device and the satellite to be possible, a preamble has to be established. Some of the parameters used to generate the preamble are broadcast by the satellite without protection. Protecting the preamble is essential, because an attacker who knows it could block the communication. This project presents a method that ensures communication without the need to establish the preamble. The trade-off for this security, however, is degraded throughput and added communication delay.

    A Review of the Energy Efficient and Secure Multicast Routing Protocols for Mobile Ad hoc Networks

    This paper presents a thorough survey of recent work on energy-efficient and secure multicast routing protocols in Mobile Ad hoc Networks (MANETs). Numerous issues and proposed solutions attest to the need for energy management and security in ad hoc wireless networks. The objective of a multicast routing protocol for MANETs is to support the propagation of data from a sender to all the receivers of a multicast group while using the available bandwidth efficiently in the presence of frequent topology changes. Multicasting can improve the efficiency of the wireless link when sending multiple copies of a message by exploiting the inherent broadcast property of wireless transmission. Secure multicast routing plays a significant role in MANETs; however, offering energy-efficient and secure multicast routing is a difficult and challenging task. In recent years, various multicast routing protocols have been proposed for MANETs; these protocols have distinguishing features and use different mechanisms. Comment: 15 pages

    Distributed Control Methods for Integrating Renewable Generations and ICT Systems

    With increasing energy demand and decreasing fossil fuel usage, the penetration of distributed generators (DGs) is attracting more and more attention. Current centralized control approaches can no longer meet the real-time requirements of future power systems, so a proper decentralized control strategy is needed to enhance voltage stability, reduce system power loss, and increase operational security. This thesis makes three key contributions. First, a decentralized coordinated reactive power control strategy is proposed to tackle voltage fluctuation caused by the uncertain output of DGs. A case study shows that the coordinated control method can regulate the voltage level effectively while enlarging the total reactive power capability, reducing the likelihood of active power curtailment; the communication system's time delay is then considered when analyzing its impact on voltage regulation. Second, a consensus-based distributed alternating direction method of multipliers (ADMM) algorithm is improved to solve the optimal power flow (OPF) problem. Both synchronous and asynchronous algorithms are proposed to study convergence rates, and four strategies are proposed to mitigate the impact of time delay. Simulation results show that optimizing the reactive power allocation can minimize system power loss effectively and that the proposed weighted autoregressive (AR) strategies achieve effective convergence. Third, a neighbor-monitoring scheme based on reputation ratings is proposed to detect and mitigate potential false data injection attacks. Simulation results show that the predicted values can effectively replace the manipulated data, and the convergence results based on these predicted values are very close to those of the normal case without a cyber attack.
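    The core idea behind the thesis's distributed approach is that nodes reach agreement using only neighbor-to-neighbor communication. A minimal toy sketch of such consensus averaging (illustrative only; the thesis uses a full consensus-ADMM formulation for OPF, and the graph, step size, and values below are assumptions):

```python
# Toy consensus averaging: each node repeatedly moves toward the mean
# of its neighbors' values; all nodes converge to the global average
# without any central coordinator.

def consensus_step(values, neighbors, alpha=0.3):
    """One synchronous update: each node steps toward its neighbors' mean."""
    new = []
    for i, v in enumerate(values):
        nbr_mean = sum(values[j] for j in neighbors[i]) / len(neighbors[i])
        new.append(v + alpha * (nbr_mean - v))
    return new

def run_consensus(values, neighbors, iters=200):
    for _ in range(iters):
        values = consensus_step(values, neighbors)
    return values

# 4-node ring: each node starts with a different local measurement.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
local = [1.0, 2.0, 3.0, 4.0]
agreed = run_consensus(local, neighbors)   # all values approach 2.5
```

    Because the ring is a regular graph, the update matrix is doubly stochastic, so the iteration preserves the sum and every node converges to the initial average; time-delayed or asynchronous updates (as studied in the thesis) slow this convergence, motivating the mitigation strategies mentioned above.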

    Cryptographic security of quantum key distribution

    This work is intended as an introduction to cryptographic security and a motivation for the widely used Quantum Key Distribution (QKD) security definition. We review the notion of security necessary for a protocol to be usable in a larger cryptographic context, i.e., for it to remain secure when composed with other secure protocols. We then derive the corresponding security criterion for QKD. We provide several examples of QKD composed in sequence and in parallel with different cryptographic schemes to illustrate how the error of a composed protocol is the sum of the errors of the individual protocols. We also discuss the operational interpretations of the distance metric used to quantify these errors. Comment: 31+23 pages, 28 figures. Comments and questions welcome
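    The additive error accounting the abstract describes can be sketched in a few lines (the epsilon figures below are assumed for illustration, not taken from the paper):

```python
# Composable-security error accounting: when epsilon-secure protocols
# are composed, their failure parameters simply add (a union bound /
# triangle inequality on the trace distance).

def composed_error(epsilons):
    """Upper bound on the error of composing the given protocols."""
    return sum(epsilons)

# Example: a QKD session (2**-40), a message-authentication scheme
# (2**-50), and a one-time pad (information-theoretically perfect,
# error 0). The composed system inherits the sum of the errors.
total = composed_error([2**-40, 2**-50, 0.0])

# Sequential key generation: n rounds of the same QKD protocol give
# at worst n times the single-round error.
stream_error = composed_error([2**-40] * 100)
```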

    Discrete Moving Target Defense Application and Benchmarking in Software-Defined Networking

    Moving Target Defense (MTD) is a technique for disrupting certain phases of a cyber attack. The static nature of existing networks gives an adversary ample time to gather data about the target and mount a successful attack. Random host address mutation is a well-known MTD technique that hides a host's actual IP address from external scanners. However, when a host is in a session of transmitting or receiving data, the mutation interval interrupts the session, making the host unavailable; mutating the network configuration also creates overhead on the controller and additional switching costs, resulting in latency, poor performance, packet loss, and jitter. In this dissertation, we propose a novel discrete MTD technique in software-defined networking (SDN) that individualizes the mutation interval for each host. Each host's IP address is changed at a different interval, which avoids terminating existing sessions and makes the mutation intervals harder for an attacker to learn. We use the flow statistics of each host to determine whether it is in a session of transmitting or receiving data. Individualizing the mutation interval of each host strengthens the defender's game strategy by making the pattern of mutation intervals harder to determine. Since the mutation of the host address is achieved using a pool of virtual (temporary) host addresses, a subnet game strategy is introduced to make the network topology harder to infer. A benchmarking framework is developed to compare the performance, scalability, and reliability of the MTD network against a traditional network. The analysis shows that the discrete MTD network outperforms the random MTD network in all tests.
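    The session-aware mutation idea can be sketched as follows (a hypothetical illustration of the mechanism described above; the class name, address pool, and thresholds are assumptions, not the dissertation's actual SDN implementation):

```python
# Sketch of discrete MTD: each host remaps its virtual IP on its own
# individualized schedule, and a due remap is deferred while the host
# has bytes in flight, so active sessions are never cut off.
import random

VIRTUAL_POOL = [f"10.0.{i}.{j}" for i in range(1, 3) for j in range(1, 255)]

class HostMutator:
    def __init__(self, real_ip, base_interval):
        self.real_ip = real_ip
        self.interval = base_interval      # individualized per host
        self.virtual_ip = random.choice(VIRTUAL_POOL)
        self.elapsed = 0

    def tick(self, seconds, bytes_in_flight):
        """Advance time; remap only if the interval expired AND the host is idle."""
        self.elapsed += seconds
        if self.elapsed >= self.interval and bytes_in_flight == 0:
            self.virtual_ip = random.choice(VIRTUAL_POOL)   # switch rules re-installed
            self.elapsed = 0
            return True
        return False

h = HostMutator("192.168.1.10", base_interval=30)
busy = h.tick(35, bytes_in_flight=4096)   # interval expired, session active: no remap
idle = h.tick(5, bytes_in_flight=0)       # host now idle: address mutates
```

    In a real deployment the `bytes_in_flight` check would come from the SDN controller's per-host flow statistics, which is the signal the dissertation uses to decide when a mutation is safe.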

    Wireless Sensor Network: At a Glance


    Implementing PBFT using Reactive programming and asynchronous workflows

    Consensus algorithms are notorious for being difficult to understand and even harder to implement. Several frameworks and programming paradigms have been introduced to make consensus algorithms easier to design and implement. One of these is the .NET Cleipnir framework, which primarily focuses on simplifying the development of persistent consensus algorithms. In addition, Cleipnir supports functionality that makes both the asynchronous and reactive programming paradigms easier for a developer to use in an implementation. We want to determine whether the Cleipnir framework and the related programming paradigms can help in designing a simple and understandable consensus algorithm. To this end, we create a Practical Byzantine Fault Tolerance (PBFT) implementation whose protocol workflow runs as orderly and synchronously as possible, using the Cleipnir framework and the aforementioned programming paradigms. Furthermore, we evaluate each of these tools to ascertain how they benefit and hinder our implementation. We find that, for both programming paradigms, the benefits heavily outweigh the disadvantages, and that the two work well together. We conclude that the Cleipnir framework does provide helpful tools for implementing consensus algorithms. We further learn that an algorithm's complexity heavily affects how much the algorithm workflow can be simplified without loss of functionality.
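    A flavor of the "orderly, synchronous-looking workflow" that asynchronous programming enables can be given with the quorum-collection step at the heart of PBFT (an illustrative Python/asyncio sketch, not the thesis's Cleipnir/.NET code; message contents and the queue wiring are assumptions):

```python
# A PBFT replica must await 2f+1 matching PREPARE messages before it
# may enter the commit phase. With async/await this wait reads as a
# single sequential step instead of scattered callback handlers.
import asyncio

async def collect_quorum(queue, needed, digest):
    """Await messages until `needed` distinct replicas agree on `digest`."""
    votes = set()
    while len(votes) < needed:
        sender, msg_digest = await queue.get()
        if msg_digest == digest:          # ignore mismatched (possibly faulty) votes
            votes.add(sender)
    return votes

async def demo():
    f = 1                      # tolerated faulty replicas
    quorum = 2 * f + 1         # = 3 matching messages required
    queue = asyncio.Queue()
    # Simulated incoming PREPARE messages; replica 2 votes for a
    # different digest and is ignored.
    for sender, d in [(1, "abc"), (2, "xyz"), (3, "abc"), (4, "abc")]:
        queue.put_nowait((sender, d))
    return await collect_quorum(queue, quorum, "abc")

prepared = asyncio.run(demo())   # → {1, 3, 4}
```

    Each protocol phase (pre-prepare, prepare, commit) becomes one such awaited step, which is what lets the overall workflow be written top to bottom in the order the protocol specifies.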