
    Principles of Physical Layer Security in Multiuser Wireless Networks: A Survey

    This paper provides a comprehensive review of the domain of physical-layer security in multiuser wireless networks. The essential premise of physical-layer security is to enable the exchange of confidential messages over a wireless medium in the presence of unauthorized eavesdroppers, without relying on higher-layer encryption. This can be achieved primarily in two ways: without the need for a secret key, by intelligently designing transmit coding strategies, or by exploiting the wireless communication medium to develop secret keys over public channels. The survey begins with an overview of the foundations dating back to the pioneering work of Shannon and Wyner on information-theoretic security. We then describe the evolution of secure transmission strategies from point-to-point channels to multiple-antenna systems, followed by generalizations to multiuser broadcast, multiple-access, interference, and relay networks. Secret-key generation and establishment protocols based on physical-layer mechanisms are subsequently covered. Approaches to secrecy based on channel-coding design are then examined, along with a description of interdisciplinary approaches based on game theory and stochastic geometry. The associated problem of physical-layer message authentication is also introduced briefly. The survey concludes with observations on potential research directions in this area.
    Comment: 23 pages, 10 figures, 303 refs. arXiv admin note: text overlap with arXiv:1303.1609 by other authors. IEEE Communications Surveys and Tutorials, 201
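
    As a brief, hedged illustration of the information-theoretic foundation mentioned above (Wyner's wiretap channel), the secrecy capacity of the degraded Gaussian wiretap channel is the nonnegative gap between the capacities of the legitimate link and the eavesdropper's link; the SNR symbols below are illustrative notation, not taken from the survey:

```latex
% Secrecy capacity of the degraded Gaussian wiretap channel (standard result).
% SNR_B and SNR_E denote the signal-to-noise ratios of the legitimate receiver
% (Bob) and the eavesdropper (Eve), respectively.
C_s \;=\; \left[ \log_2\!\left(1 + \mathrm{SNR}_B\right)
              - \log_2\!\left(1 + \mathrm{SNR}_E\right) \right]^{+},
\qquad [x]^{+} := \max(x,\, 0).
```

    A positive secrecy rate is therefore possible only when the legitimate channel is stronger than the eavesdropper's, which is what the multi-antenna and multiuser strategies surveyed above try to engineer.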

    A HOLISTIC REDUNDANCY- AND INCENTIVE-BASED FRAMEWORK TO IMPROVE CONTENT AVAILABILITY IN PEER-TO-PEER NETWORKS

    Peer-to-Peer (P2P) technology has emerged as an important alternative to the traditional client-server communication paradigm for building large-scale distributed systems. P2P enables information to be created, disseminated, and accessed at low cost and without the need for dedicated coordinating entities. However, existing P2P systems fail to provide high levels of content availability, which limits their applicability and adoption. This dissertation takes a holistic approach to devise mechanisms that improve content availability in large-scale P2P systems. Content availability in P2P can be impacted by hardware failures and churn. Hardware failures, in the form of disk or node failures, render information inaccessible. Churn, an inherent property of P2P, is the collective effect of the users’ uncoordinated behavior, which occurs when a large percentage of nodes join and leave frequently. Such behavior reduces content availability significantly. Mitigating the combined effect of hardware failures and churn on content availability in P2P requires new and innovative solutions that go beyond those applied in existing distributed systems. To address this challenge, the thesis proposes two complementary, low-cost mechanisms, whereby nodes self-organize to overcome failures and improve content availability. The first mechanism is a low-complexity, highly flexible hybrid redundancy scheme, referred to as Proactive Repair (PR). The second mechanism is an incentive-based scheme that promotes cooperation and enforces fair exchange of resources among peers. These mechanisms provide the basis for the development of distributed self-organizing algorithms to automate PR and, through incentives, maximize their effectiveness in realistic P2P environments. Our proposed solution is evaluated using a combination of analytical and experimental methods. The analytical models are developed to determine the availability and repair-cost properties of PR. The results indicate that PR achieves a lower repair cost than other redundancy schemes. The experimental analysis was carried out using simulation and the development of a testbed. The simulation results confirm that PR improves content availability in P2P. The proposed mechanisms are implemented and tested using a DHT-based P2P application environment. The experimental results indicate that the incentive-based mechanism can promote fair exchange of resources and limit the impact of uncooperative behaviors such as “free-riding”.
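
    To illustrate, in a hedged way, why redundancy helps against churn and failures, the sketch below evaluates the availability of an object stored under a generic (n, k) erasure-coding model in which each node is independently online with probability p. This is a textbook-style model chosen for illustration; it is not the dissertation's Proactive Repair analysis, and the parameter values are assumed:

```python
# Minimal sketch: probability that an object stored with (n, k) erasure coding
# remains available when each storage node is independently online with
# probability p. Generic model for illustration; not the PR scheme itself.
from math import comb


def availability(n: int, k: int, p: float) -> float:
    """Object is available if at least k of the n coded fragments are online."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))


if __name__ == "__main__":
    p = 0.7  # assumed per-node availability under churn
    print("3-way replication   :", availability(3, 1, p))  # ~0.973
    print("(9, 3) erasure code :", availability(9, 3, p))  # higher, at the same 3x storage overhead
```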

    Error-Correcting Codes for Networks, Storage and Computation

    The advent of the information age has bestowed upon us three challenges related to the way we deal with data. Firstly, there is an unprecedented demand for transmitting data at high rates. Secondly, the massive amounts of data being collected from various sources need to be stored across time. Thirdly, there is a need to process the data collected and perform computations on it in order to extract meaningful information from it. The interconnected nature of modern systems designed to perform these tasks has revealed new difficulties when it comes to ensuring their resilience against sources of performance degradation. In the context of network communication and distributed data storage, system-level noise and adversarial errors have to be combated with efficient error-correction schemes. In the case of distributed computation, the heterogeneous nature of computing clusters can potentially diminish the speedups promised by parallel algorithms, calling for schemes that mitigate the effect of slow machines and communication delay. This thesis addresses the problem of designing efficient fault-tolerance schemes for the three scenarios just described. In the network communication setting, a family of multiple-source multicast networks that employ linear network coding is considered, for which capacity-achieving distributed error-correcting codes, based on classical algebraic constructions, are designed. The codes require no coordination between the source nodes and are end-to-end: except for the source nodes and the destination node, the operation of the network remains unchanged. In the context of data storage, balanced error-correcting codes are constructed so that the encoding effort required is balanced out across the storage nodes. In particular, it is shown that for a fixed row weight, any cyclic Reed-Solomon code possesses a generator matrix in which the number of nonzeros is the same across the columns. In the balanced and sparsest case, where each row of the generator matrix is a minimum-distance codeword, the maximal encoding time over the storage nodes is minimized, a property that is appealing in write-intensive settings. Analogous constructions are presented for a locally recoverable code construction due to Tamo and Barg. Lastly, the problem of mitigating stragglers in a distributed computation setup is addressed, where a function of some dataset is computed in parallel. Using Reed-Solomon coding techniques, a scheme is proposed that allows for the recovery of the function under consideration from the minimum number of machines possible. The only assumption made on the function is that it is additively separable, which renders the scheme useful in distributed gradient descent implementations. Furthermore, a theoretical model for the run time of the scheme is presented. When the return time of the machines is modeled probabilistically, the model can be used to optimally pick the scheme's parameters so that the expected computation time is minimized. The recovery is performed using an algorithm that runs in quadratic time and linear space, a notable improvement compared to state-of-the-art schemes. The unifying theme of the three scenarios is the construction of error-correcting codes whose encoding functions adhere to certain constraints. It is shown that in many cases, these constraints can be satisfied by classical constructions. As a result, the schemes presented are deterministic, operate over small finite fields, and can be decoded using efficient algorithms.
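
    To convey the flavor of the straggler-mitigation idea for additively separable functions, the sketch below encodes partial sums with a real-valued Vandermonde (Reed-Solomon-style) matrix and recovers the total from any k worker outputs by solving a linear system. It is an illustration with made-up sizes over the reals, not the finite-field construction or the quadratic-time decoder described in the thesis:

```python
# Sketch: recover the sum of k partial results from any k out of n coded
# worker outputs, using a real-valued Vandermonde code (RS-style, illustration
# only; the thesis works over finite fields with a faster decoder).
import numpy as np

rng = np.random.default_rng(0)

k, n = 4, 6                      # k data partitions, n workers (assumed sizes)
data = rng.standard_normal((k, 8))
partials = data.sum(axis=1)      # s_i = additively separable partial results

# Encoding matrix: n x k Vandermonde with distinct evaluation points,
# so every k x k submatrix is invertible.
points = np.arange(1, n + 1, dtype=float)
G = np.vander(points, k, increasing=True)
worker_outputs = G @ partials    # each worker returns one coded combination

# Suppose only workers 0, 2, 3, 5 respond; any k of the n suffice.
survivors = [0, 2, 3, 5]
recovered = np.linalg.solve(G[survivors], worker_outputs[survivors])

print("true total     :", partials.sum())
print("recovered total:", recovered.sum())
```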

    Optimization Methods Applied to Power Systems II

    Electrical power systems are complex networks comprising the electrical components that carry the electricity generated in conventional and renewable power plants to distribution systems, from which it reaches final consumers (businesses and homes). In practice, power system management requires solving different design, operation, and control problems. Since computers are used to solve these complex optimization problems, this book gathers recent contributions to the field that cover a wide variety of problems. More specifically, the book includes contributions on topics such as controllers for the frequency response of microgrids, post-contingency overflow analysis, line overloads after line and generation contingencies, power quality disturbances, earthing system touch voltages, security-constrained optimal power flow, voltage regulation planning, intermittent generation in power systems, location of partial discharge sources in gas-insulated switchgear, electric vehicle charging stations, optimal power flow with photovoltaic generation, hydroelectric plant location selection, cold-thermal-electric integrated energy systems, high-efficiency resonant devices for microwave power generation, security-constrained unit commitment, and economic dispatch problems.
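
    As a small, hedged example of one problem type listed above, a linear economic dispatch can be posed as a linear program: minimize total generation cost subject to meeting demand and respecting generator limits. The costs, limits, and demand below are invented for illustration and are not taken from the book:

```python
# Minimal economic dispatch sketch (illustrative data, not from the book):
# minimize linear generation cost subject to demand balance and unit limits.
import numpy as np
from scipy.optimize import linprog

cost = np.array([20.0, 35.0, 50.0])      # $/MWh per generator (assumed)
p_min = np.array([50.0, 20.0, 10.0])     # MW lower limits (assumed)
p_max = np.array([200.0, 150.0, 100.0])  # MW upper limits (assumed)
demand = 300.0                           # MW system demand (assumed)

# Equality constraint: total generation equals demand.
A_eq = np.ones((1, 3))
b_eq = np.array([demand])

res = linprog(cost, A_eq=A_eq, b_eq=b_eq,
              bounds=list(zip(p_min, p_max)), method="highs")
print("dispatch (MW):", res.x)
print("total cost ($/h):", res.fun)
```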

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. It introduces the most prominent reliability concerns from today’s point of view and recapitulates the progress made by the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, this book addresses reliability challenges across different levels, from the physical level all the way up to the system level (cross-layer approaches). The book aims to demonstrate how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, soft errors, etc. It provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are proactively designed with respect to techniques at other layers; and explains run-time adaptation and concepts/means of self-organization in order to achieve error resiliency in complex, future many-core systems.

    Total Quality Management and Six Sigma

    In order to survive in a modern and competitive environment, organizations need to carefully organize their activities regarding quality management. TQM and Six Sigma are approaches that have been successful in solving intricate quality problems in products and services. This volume can help those who are interested in the quality management field to understand core ideas along with contemporary efforts in the field, presented as case studies in this volume. It may be useful to students, academics, and practitioners across diverse disciplines.

    An Approach for the Development of Complex Systems Archetypes

    The purpose of this research is to explore the principles and concepts of systems theory in pursuit of a collection of complex systems archetypes that can be used for system exploration and diagnostics. The study begins with an examination of the archetypes and classification systems that already exist in the domain of systems theory. This review includes a critique of their purpose, structure, and general applicability. The research then develops and employs a new approach to grounded theory, using a visual coding model to explore the origins, relationships, and meanings of the principles of systems theory. The goal of the visual grounded theory approach is to identify underlying, recurrent imagery in the systems literature that will form the basis for the archetypes. Using coding models derived from the literature, the study then examines the interrelationships between system principles. These relationships are used to clearly define the environment where the archetypes are found in terms of energy, entropy, and time. A collection of complex systems archetypes is then derived that is firmly rooted in the literature and demonstrably manifested in the real world. The definitions of the emerging complex systems archetypes are consistent with the environmental definition and are governed by the system’s behavior related to energy collection, entropy displacement, and the pursuit of viability. Once the archetypes have been identified, the study examines the similarities and differences that distinguish them. The individual system principles that either define or differentiate each of the archetypes are described, and real-world manifestations of the archetypes are discussed. The collection of archetypes is then examined as a continuum, where the archetypes are related to one another in terms of energy use, entropy accumulation, self-modification, and external modification. To illustrate the applicability of these archetypes, a case study is undertaken which examines a medium-sized organization with multiple departments in an industrial setting. The individual departments are discussed in detail, and their archetypal forms are identified and described. Finally, the study examines future applications for the archetypes and other research that might enhance their utility for complex systems governance.