Security Enhanced Symmetric Key Encryption Employing an Integer Code for the Erasure Channel
An instance of a framework for cryptographic security enhancement of symmetric-key encryption employing a dedicated error-correction encoding is addressed. The main components of the proposal are (i) a dedicated error-correction code and (ii) a dedicated simulator of the noisy channel. The proposed error-correction code is designed for the binary erasure channel in which at most one bit is erased in each codeword byte. The proposed encryption has been evaluated in the traditional scenario, where we consider the advantage of an attacker in correctly deciding which of two known messages a given ciphertext corresponds to. The evaluation shows that the proposed encryption reduces this attacker's advantage in comparison with the initial encryption setting. The implementation complexity of the proposed encryption is also considered, and it implies a suitable trade-off between increased security and increased implementation complexity.
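The abstract's channel model allows at most one erased bit per codeword byte. The paper's actual code is not reproduced here, but a minimal stand-in with the same correction guarantee is a per-byte even-parity extension: one parity bit appended to each byte makes any single erased bit recoverable as the XOR of the surviving bits. A sketch under that assumption, not the paper's construction:

```python
def encode_byte(bits):
    """Append an even-parity bit to 8 data bits (values 0/1)."""
    assert len(bits) == 8
    return bits + [sum(bits) % 2]

def decode_byte(symbols):
    """Recover the 8 data bits from a 9-symbol codeword in which
    at most one symbol is None (erased)."""
    erased = [i for i, s in enumerate(symbols) if s is None]
    if not erased:
        return symbols[:8]
    filled = symbols[:]
    # Even parity: the erased bit equals the XOR (mod-2 sum) of the rest.
    filled[erased[0]] = sum(s for s in symbols if s is not None) % 2
    return filled[:8]
```

With one parity bit per byte the rate is 8/9, which matches the abstract's remark that the security gain costs some implementation overhead.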
Sparse graph codes for compression, sensing, and secrecy
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from the student PDF version of the thesis. Includes bibliographical references (p. 201-212). Sparse graph codes were first introduced by Gallager over 40 years ago. Over the last two decades, such codes have been the subject of intense research, and capacity-approaching sparse graph codes with low-complexity encoding and decoding algorithms have been designed for many channels. Motivated by the success of sparse graph codes for channel coding, we explore the use of sparse graph codes for four other problems related to compression, sensing, and security. First, we construct locally encodable and decodable source codes for a simple class of sources. Local encodability refers to the property that when the original source data changes slightly, the compression produced by the source code can be updated easily. Local decodability refers to the property that a single source symbol can be recovered without having to decode the entire source block. Second, we analyze a simple message-passing algorithm for compressed sensing recovery, and show that our algorithm provides a nontrivial ℓ1/ℓ1 guarantee. We also show that very sparse matrices, and matrices whose entries must be either 0 or 1, have poor performance with respect to the restricted isometry property for the ℓ2 norm. Third, we analyze the performance of a special class of sparse graph codes, LDPC codes, for the problem of quantizing a uniformly random bit string under Hamming distortion. We show that LDPC codes can come arbitrarily close to the rate-distortion bound using an optimal quantizer. This is a special case of a general result showing a duality between lossy source coding and channel coding: if we ignore computational complexity, then good channel codes are automatically good lossy source codes.
We also prove a lower bound on the average degree of vertices in an LDPC code as a function of the gap to the rate-distortion bound. Finally, we construct efficient, capacity-achieving codes for the wiretap channel, a model of communication that allows one to provide information-theoretic, rather than computational, security guarantees. Our main results include the introduction of a new security criterion which is an information-theoretic analog of semantic security, the construction of capacity-achieving codes possessing strong security with nearly linear time encoding and decoding algorithms for any degraded wiretap channel, and the construction of capacity-achieving codes possessing semantic security with linear time encoding and decoding algorithms for erasure wiretap channels. Our analysis relies on a relatively small set of tools. One tool is density evolution, a powerful method for analyzing the behavior of message-passing algorithms on long, random sparse graph codes. Another concept we use extensively is the notion of an expander graph. Expander graphs have powerful properties that allow us to prove adversarial, rather than probabilistic, guarantees for message-passing algorithms. Expander graphs are also useful in the context of the wiretap channel because they provide a method for constructing randomness extractors. Finally, we use several well-known inequalities (Harper's inequality, Azuma's inequality, and the Gaussian isoperimetric inequality) in our analysis of the duality between lossy source coding and channel coding. By Venkat Bala Chandar. Ph.D.
NASA Tech Briefs, September 2010
Topics covered include: Instrument for Measuring Thermal Conductivity of Materials at Low Temperatures; Multi-Axis Accelerometer Calibration System; Pupil Alignment Measuring Technique and Alignment Reference for Instruments or Optical Systems; Autonomous System for Monitoring the Integrity of Composite Fan Housings; A Safe, Self-Calibrating, Wireless System for Measuring Volume of Any Fuel at Non-Horizontal Orientation; Adaptation of the Camera Link Interface for Flight-Instrument Applications; High-Performance CCSDS Encapsulation Service Implementation in FPGA; High-Performance CCSDS AOS Protocol Implementation in FPGA; Advanced Flip Chips in Extreme Temperature Environments; Diffuse-Illumination Systems for Growing Plants; Microwave Plasma Hydrogen Recovery System; Producing Hydrogen by Plasma Pyrolysis of Methane; Self-Deployable Membrane Structures; Reactivation of a Tin-Oxide-Containing Catalyst; Functionalization of Single-Wall Carbon Nanotubes by Photo-Oxidation; Miniature Piezoelectric Macro-Mass Balance; Acoustic Liner for Turbomachinery Applications; Metering Gas Strut for Separating Rocket Stages; Large-Flow-Area Flow-Selective Liquid/Gas Separator; Counterflowing Jet Subsystem Design; Water Tank with Capillary Air/Liquid Separation; True Shear Parallel Plate Viscometer; Focusing Diffraction Grating Element with Aberration Control; Universal Millimeter-Wave Radar Front End; Mode Selection for a Single-Frequency Fiber Laser; Qualification and Selection of Flight Diode Lasers for Space Applications; Plenoptic Imager for Automated Surface Navigation; Maglev Facility for Simulating Variable Gravity; Hybrid AlGaN-SiC Avalanche Photodiode for Deep-UV Photon Detection; High-Speed Operation of Interband Cascade Lasers; 3D GeoWall Analysis System for Shuttle External Tank Foreign Object Debris Events; Charge-Spot Model for Electrostatic Forces in Simulation of Fine Particulates; Hidden Statistics Approach to Quantum Simulations; Reconstituted Three-Dimensional Interactive Imaging; Determining Atmospheric-Density Profile of Titan; Digital Microfluidics Sample Analyzer; Radiation Protection Using Carbon Nanotube Derivatives; Process to Selectively Distinguish Viable from Non-Viable Bacterial Cells; and TEAMS Model Analyzer
Applications of Low-Density Codes: Batched Zigzag Fountain Codes and WOM Codes
Ph.D. dissertation, Department of Electrical and Computer Engineering, Seoul National University, February 2017. Supervised by Jong-Seon No. This dissertation contains the following two contributions on the applications of sparse codes.
Fountain codes
– Batched zigzag (BZ) fountain codes
– Two-phase batched zigzag (TBZ) fountain codes
Write-once memory (WOM) codes
– WOM codes implemented by rate-compatible low-density generator matrix (RC-LDGM) codes
First, two classes of fountain codes, called batched zigzag fountain codes and two-phase batched zigzag fountain codes, are proposed for the symbol erasure channel. At the cost of slightly lengthened code symbols, the message symbols involved in each batch of the proposed codes can be recovered by a low-complexity zigzag decoding algorithm. Thus, the proposed codes have low buffer occupancy during the decoding process. These features are suitable for receivers with limited hardware resources in the broadcasting channel. A method to obtain degree distributions of code symbols for the proposed codes via ripple size evolution is also proposed, taking into account the code symbols released from the batches. It is shown that the proposed codes outperform Luby transform codes and zigzag decodable fountain codes with respect to intermediate recovery rate and coding overhead when the message length is short, the symbol erasure rate is low, and the available buffer size is limited.
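The zigzag decoder itself additionally exploits bit-level shifts, but the ripple-size analysis mentioned above rests on the classic peeling decoder for LT-type fountain codes, which can be sketched as follows (an illustrative simplification, not the dissertation's algorithm):

```python
def peel_decode(n_msg, symbols):
    """symbols: iterable of (indices, xor_value) pairs, where xor_value
    is the XOR of the message bits at those indices.  Peeling decoder:
    substitute already-decoded bits into every symbol, then release
    degree-one symbols (the 'ripple') until no progress remains."""
    work = [[set(idx), val] for idx, val in symbols]
    decoded = {}
    progress = True
    while progress and len(decoded) < n_msg:
        progress = False
        for item in work:
            idx = item[0]
            for j in [j for j in idx if j in decoded]:
                idx.remove(j)          # substitute a known message bit
                item[1] ^= decoded[j]
            if len(idx) == 1:          # degree one: a message bit is revealed
                decoded[idx.pop()] = item[1]
                progress = True
    return decoded
```

Decoding succeeds only while the ripple (the set of degree-one symbols) stays non-empty, which is exactly what the degree-distribution design via ripple size evolution controls.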
In the second part of this dissertation, WOM codes constructed from sparse codes are presented. Recently, WOM codes have been adopted in NAND flash-based solid-state drives (SSDs) in order to extend their lifetime by reducing the number of erase operations. Here, a new rewriting scheme for the SSD is proposed, implemented by multiple binary erasure quantization (BEQ) codes. The corresponding BEQ codes are constructed from RC-LDGM codes. Moreover, by combining RC-LDGM codes with a page selection method, writing efficiency can be improved. It is verified via simulation that an SSD with the proposed rewriting scheme outperforms SSDs both without WOM codes and with conventional WOM codes, for single-level cell (SLC) and multi-level cell (MLC) flash memories.
1 Introduction
1.1 Background
1.2 Overview of Dissertation
2 Sparse Codes
2.1 Linear Block Codes
2.2 LDPC Codes
2.3 Message Passing Decoder
3 New Fountain Codes with Improved Intermediate Recovery Based on Batched Zigzag Coding
3.1 Preliminaries
3.1.1 Definitions and Notation
3.1.2 LT Codes
3.1.3 Zigzag Decodable Codes
3.1.4 Bit-Level Overhead
3.2 New Fountain Codes Based on Batched Zigzag Coding
3.2.1 Construction of Shift Matrix
3.2.2 Encoding and Decoding of the Proposed BZ Fountain Codes
3.2.3 Storage and Computational Complexity
3.3 Degree Distribution of BZ Fountain Codes
3.3.1 Relation Between and
3.3.2 Derivation of via Ripple Size Evolution
3.4 Two-Phase Batched Zigzag Fountain Codes with Additional Memory
3.4.1 Code Construction
3.4.2 Bit-Level Overhead
3.5 Numerical Analysis
4 Write-Once Memory Codes Using Rate-Compatible LDGM Codes
4.1 Preliminaries
4.1.1 NAND Flash Memory
4.1.2 Rewriting Schemes for Flash Memory
4.1.3 Construction of Rewriting Codes by BEQ Codes
4.2 Proposed Rewriting Codes
4.2.1 System Model
4.2.2 Multi-rate Rewriting Codes
4.2.3 Page Selection for Rewriting
4.3 RC-LDGM Codes
4.4 Numerical Analysis
5 Conclusions
Bibliography
Abstract (in Korean)
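The rewriting idea behind the WOM codes discussed in the abstract above can be illustrated by the classic Rivest–Shamir ⟨2,2⟩ construction (a textbook example, not the dissertation's RC-LDGM scheme): two 2-bit values are written in sequence into three write-once cells whose bits may only change from 0 to 1, so two writes fit where naive storage would need an erase in between.

```python
# First-generation codewords have weight <= 1; second-generation codewords
# are their bitwise complements (weight >= 2), so a reader can tell the
# generations apart by cell weight.
FIRST = {(0, 0): (0, 0, 0), (0, 1): (1, 0, 0),
         (1, 0): (0, 1, 0), (1, 1): (0, 0, 1)}
SECOND = {v: tuple(1 - b for b in p) for v, p in FIRST.items()}

def read(cells):
    """Decode the current 2-bit value from three cells."""
    table = FIRST if sum(cells) <= 1 else SECOND
    return next(v for v, p in table.items() if p == cells)

def write(cells, value):
    """Return the new cell state; bits may only change 0 -> 1."""
    if sum(cells) == 0:
        target = FIRST[value]             # first write
    elif read(cells) == value:
        target = cells                    # value unchanged: write nothing
    else:
        target = SECOND[value]            # second write uses the complement
    assert all(t >= c for t, c in zip(target, cells)), "write-once violated"
    return target
```

Storing 4 bits of payload in 3 cells across two writes gives a rate of 4/3 > 1, which is the lifetime gain WOM-based rewriting schemes exploit on flash.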
Single-Frequency Network Terrestrial Broadcasting with 5GNR Numerology
The abstract is in the attachment.
Efficient Information Reconciliation for Quantum Key Distribution
Advances in modern cryptography for secret-key agreement are driving the development of new methods and techniques in key distillation. Most of these developments, focusing on information reconciliation and privacy amplification, are for the direct benefit of quantum key distribution (QKD). In this context, information reconciliation has historically been done using heavily interactive protocols, i.e. with a high number of channel communications, such as the well-known Cascade. In this work we show how modern coding techniques can improve the performance of these methods for information reconciliation in QKD. Here, we propose the use of low-density parity-check (LDPC) codes, since they are good both in efficiency and throughput. A price to pay, a priori, for using LDPC codes is that good efficiency is attained only for very long codes and in a very narrow range of error rates. This forces the use of several codes in cases where the error rate varies significantly across uses of the channel, a common situation in QKD for instance. To overcome these problems, this study examines various techniques for adapting LDPC codes, thus reducing the number of codes needed to cover the target range of error rates. These techniques are also used to improve the average efficiency of short-length LDPC codes based on a feedback coding scheme. The importance of short codes lies in the fact that they can be used in high-throughput hardware implementations. In a further advancement, a protocol is proposed that avoids the a priori error rate estimation required in other approaches. This blind protocol also brings interesting implications to the finite-key analysis.
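Reconciliation with linear codes is typically run in syndrome form: Alice sends the syndrome of her string, and Bob flips bits of his correlated string until its syndrome matches. A toy sketch with a [7,4] Hamming code standing in for the LDPC codes discussed above (so it corrects at most one discrepancy; an illustration of the protocol shape, not a QKD-grade code):

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column j (0-based) is the
# 3-bit binary expansion of j+1, most significant bit on top.
H = np.array([[int(b) for b in f"{c:03b}"] for c in range(1, 8)]).T

def syndrome(x):
    """Syndrome H·x over GF(2) of a length-7 bit vector."""
    return tuple(H @ x % 2)

def reconcile(bob, alice_syndrome):
    """Flip at most one bit of Bob's string so its syndrome matches Alice's."""
    diff = (np.array(syndrome(bob)) + np.array(alice_syndrome)) % 2
    if diff.any():
        # For the Hamming code, the syndrome difference is the 1-based
        # binary position of the single differing bit.
        pos = int("".join(map(str, diff)), 2) - 1
        bob = bob.copy()
        bob[pos] ^= 1
    return bob
```

Only the 3-bit syndrome crosses the channel, which is the one-way, low-interaction behavior that makes LDPC reconciliation attractive compared with Cascade's many round trips.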
Opportunistic Routing with Network Coding in Powerline Communications
Opportunistic Routing (OR) can be used as an alternative to legacy routing (LR) protocols in networks with a broadcast lossy channel and the possibility of overhearing the signal. The power line medium creates such an environment. OR can exploit the channel better than LR because it allows the cooperation of all nodes that receive any data; with LR, only a chain of nodes is selected for communication, and the other nodes drop the received information. We investigate OR for the one-source one-destination scenario with one traffic flow. First, we evaluate the upper bound on the achievable data rate and advocate a decentralized algorithm for its calculation. This knowledge is used in the design of the Basic Routing Rules (BRR). They use a link-quality metric that equals the upper bound on the achievable data rate between the given node and the destination; we call it the node priority. It accounts for the possibility of multi-path communication and for packet-loss correlation. BRR achieves the optimal data rate under certain theoretical assumptions; the Extended BRR (BRR-E) are free of them. The major difference between BRR and BRR-E lies in the use of Network Coding (NC) to predict the feedback. In this way, the protocol overhead can be greatly reduced. We also study an Automatic Repeat-reQuest (ARQ) mechanism that is applicable with OR. It differs from ARQ with LR in that each sender has several sinks, and none of the sinks except the destination requires full recovery of the original message. Using BRR-E, ARQ, and other services such as network initialization and link-state control, we design the Advanced Network Coding based Opportunistic Routing protocol (ANChOR). With analytic and simulation results we demonstrate the near-optimum performance of ANChOR. For the triangular topology, the achievable data rate is just 2% below the theoretical maximum, and it is up to 90% higher than what is achievable with LR.
Using the G.hn standard, we also show full protocol stack simulation results (including IP/UDP and a realistic channel model). In this simulation we found that the gain of OR over LR can be increased even further by reducing the head-of-line blocking problem in ARQ. Even accounting for the ANChOR overhead of additional headers and feedback, it outperforms the original G.hn setup by up to 40% in data rate and up to 60% in latency.
1 Introduction
1.1 Intra-flow Network Coding
1.2 Random Linear Network Coding (RLNC)
2 Performance Limits of Routing Protocols in PowerLine Communications (PLC)
2.1 System model
2.2 Channel model
2.3 Upper bound on the achievable data rate
2.4 Achieving the upper bound data rate
2.5 Potential gain of Opportunistic Routing Protocol (ORP) over Common Single-path Routing Protocol (CSPR)
2.6 Evaluation of ORP potential
3 Opportunistic Routing: Realizations and Challenges
3.1 Vertex priority and cooperation group
3.2 Transmission policy in idealized network
3.2.1 Basic Routing Rules (BRR)
3.3 Transmission policy in real network
3.3.1 Purpose of Network Coding (NC) in ORP
3.3.2 Extended Basic Routing Rules (BRR-E)
3.4 Automatic Repeat-reQuest (ARQ)
3.4.1 Retransmission request message contents
3.4.2 Retransmission Request (RR) origination and forwarding
3.4.3 Retransmission response
3.5 Congestion control
3.5.1 Congestion control in our work
3.6 Network initialization
3.7 Formation of the cooperation groups (coalitions)
3.8 Advanced Network Coding based Opportunistic Routing protocol (ANChOR) header
3.9 Communication of protocol information
3.10 ANChOR simulation
3.10.1 ANChOR information in real time
3.10.2 Selection of the coding rate
3.10.3 Routing Protocol Information (RPI) broadcasting frequency
3.10.4 RR contents
3.10.5 Selection of RR forwarder
3.10.6 ANChOR stability
3.11 Summary
4 ANChOR in the Gigabit Home Network (G.hn) Protocol
4.1 Compatibility with the PLC protocol stack
4.2 Channel and noise model
4.2.1 In-home scenario
4.2.2 Access network scenario
4.3 Physical layer (PHY) implementation
4.3.1 Bit Allocation Algorithm (BAA)
4.4 Medium Access Control (MAC) layer
4.5 Logical Link Control (LLC) layer
4.5.1 Reference Automatic Repeat reQuest (ARQ)
4.5.2 Hybrid Automatic Repeat reQuest (HARQ) in ANChOR
4.5.3 Modeling Protocol Data Unit (PDU) erasures on LLC
4.6 Summary
5 Study of G.hn with ANChOR
5.1 ARQ analysis
5.2 Medium and PHY requirements for "good" cooperation
5.3 Access network scenario
5.4 In-home scenario
5.4.1 Modeling packet erasures
5.4.2 Linear Dependence Ratio (LDR)
5.4.3 Worst case scenario
5.4.4 Analysis of in-home topologies
6 Conclusions
A Proof of the necessity of the exclusion rule
B Gain of ORPs to CSPRs
C Broadcasting rule
D Proof of optimality of BRR for triangular topology
E Reducing the retransmission probability
F Calculation of Expected Average number of transmissions (EAX) for topologies with bi-directional links
G Feedback overhead of full coding matrices
H Block diagram of G.hn physical layer in ns-3 model
I PER to BER mapping
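The RLNC primitive the thesis builds on (Section 1.2 above) can be sketched as follows (an illustration of the general technique, not the ANChOR implementation): a relay emits random GF(2) combinations of its packets, and a receiver can decode once the coefficient vectors it has collected reach full rank, which is also what lets a sender predict feedback from rank alone.

```python
import random
import numpy as np

def rlnc_encode(packets, rng=random):
    """Produce one coded packet: a random GF(2) combination of the sources.
    Returns (coefficient_vector, xor_payload)."""
    coeffs = [rng.randint(0, 1) for _ in packets]
    payload = np.bitwise_xor.reduce(
        [p for c, p in zip(coeffs, packets) if c]
        + [np.zeros_like(packets[0])])          # avoid reducing an empty list
    return coeffs, payload

def gf2_rank(rows):
    """Rank over GF(2) of the collected coefficient vectors, via Gaussian
    elimination; decoding is possible once this equals the generation size."""
    m = np.array(rows, dtype=np.uint8) % 2
    rank = 0
    for col in range(m.shape[1]):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]     # swap pivot row into place
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]                 # eliminate the column elsewhere
        rank += 1
    return rank
```

A coded packet is innovative for a receiver exactly when appending its coefficient vector increases `gf2_rank`, so nodes can acknowledge rank instead of individual packets, which is the feedback reduction exploited by BRR-E.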