57 research outputs found

    A Combinatorial Methodology for Optimizing Non-Binary Graph-Based Codes: Theoretical Analysis and Applications in Data Storage

    Non-binary (NB) low-density parity-check (LDPC) codes are graph-based codes that are increasingly being considered as a powerful error correction tool for modern dense storage devices. Optimizing NB-LDPC codes to overcome their error floor is one of the main code design challenges facing storage engineers upon deploying such codes in practice. Furthermore, the increasing levels of asymmetry incorporated by the channels underlying modern dense storage systems, e.g., multi-level Flash systems, exacerbate the error floor problem by widening the spectrum of problematic objects that contribute to the error floor of an NB-LDPC code. In recent research, the weight consistency matrix (WCM) framework was introduced as an effective combinatorial NB-LDPC code optimization methodology that is suitable for modern Flash memory and magnetic recording (MR) systems. The WCM framework was used to optimize codes for asymmetric Flash channels and MR channels that have intrinsic memory, in addition to canonical symmetric additive white Gaussian noise channels. In this paper, we provide the in-depth theoretical analysis needed to understand and properly apply the WCM framework. We focus on general absorbing sets of type two (GASTs) as the detrimental objects of interest. In particular, we introduce a novel tree representation of a GAST, called the unlabeled GAST tree, using which we prove that the WCM framework is optimal in the sense that it operates on the minimum number of matrices, which are the WCMs, needed to remove a GAST. Then, we enumerate WCMs and demonstrate the significance of the savings achieved by the WCM framework in the number of matrices processed to remove a GAST. Moreover, we provide a linear-algebraic analysis of the null spaces of the WCMs associated with a GAST. We derive the minimum number of edge weight changes needed to remove a GAST via its WCMs, along with how to choose these changes. Additionally, we propose a new set of problematic objects, namely oscillating sets of type two (OSTs), which contribute to the error floor of NB-LDPC codes with even column weights on asymmetric channels, and we show how to customize the WCM framework to remove OSTs. We also extend the domain of WCM framework applications by demonstrating its benefits in optimizing column weight 5 codes, codes used over Flash channels with soft information, and spatially-coupled codes. The performance gains achieved via the WCM framework range between 1 and nearly 2.5 orders of magnitude in the error floor region over the channels of interest.
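
    As a rough illustration of the kind of null-space computation the abstract refers to, the sketch below brute-forces the null space of a small sub-matrix over a prime field and checks whether it contains an all-nonzero vector. The matrix values, the field GF(5), and the test itself are illustrative stand-ins, not the WCM construction from the paper.

    import itertools
    import numpy as np

    q = 5                                    # prime field size (illustrative choice)
    # Hypothetical sub-matrix of non-binary edge weights (rows: checks, cols: variable nodes)
    H = np.array([[1, 2, 0, 3],
                  [4, 0, 1, 2],
                  [0, 3, 2, 1]])

    def all_nonzero_null_vectors(H, q):
        """Brute-force null-space vectors of H over GF(q) that have no zero entry."""
        n = H.shape[1]
        hits = []
        for v in itertools.product(range(1, q), repeat=n):   # only all-nonzero candidates
            v = np.array(v)
            if not np.any(H.dot(v) % q):                     # H v = 0 over GF(q)
                hits.append(v)
        return hits

    survivors = all_nonzero_null_vectors(H, q)
    print(len(survivors), "all-nonzero null-space vectors found")
    # Removing a problematic object by edge-weight changes amounts to re-picking
    # entries of H so that this count drops to zero.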


    On Code Design for Interference Channels

    There has been a significant body of work on the characterization of capacity, achievable rate regions, and rate-region outer bounds for various multi-user channels of interest. Parallel to these information-theoretic results, practical codes have also been designed for some multi-user channels, such as multiple access channels, broadcast channels, and relay channels; interference channels, however, have received comparatively little attention. With this motivation, this dissertation studies the design of practical and implementable channel codes for multi-user channels, with special emphasis on interference channels; in particular, irregular low-density parity-check codes are exploited for a variety of cases, and trellis-based codes are designed for short block lengths. Novel code design approaches are first studied for the two-user Gaussian multiple access channel. Exploiting a Gaussian mixture approximation, new methods are proposed wherein the optimized codes are shown to improve upon the available designs and off-the-shelf point-to-point codes applied to the multiple access channel scenario. The code design is then examined for the two-user Gaussian interference channel implementing the Han-Kobayashi encoding and decoding strategy. Compared with point-to-point codes, the newly designed codes consistently offer better performance. In parallel, code design is explored for discrete memoryless interference channels, wherein the channel inputs and outputs are taken from a finite alphabet, and it is demonstrated that the designed codes are superior to single-user codes used with time sharing. Finally, the code design principles are also investigated for the two-user Gaussian interference channel employing trellis-based codes with short block lengths for the cases of strong and mixed interference.
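
    For concreteness, the snippet below simulates the standard two-user Gaussian interference channel model that such designs target, with BPSK inputs and hard decisions that treat interference as noise. The cross-link gains and noise level are hypothetical, and the dissertation's actual designs (optimized LDPC and trellis codes with Han-Kobayashi encoding) go well beyond this baseline.

    # y1 = x1 + a12*x2 + z1,   y2 = x2 + a21*x1 + z2,   z ~ N(0, sigma^2)
    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma = 10_000, 0.5
    a12, a21 = 0.7, 0.6                     # cross-link gains (assumed values)

    x1 = rng.choice([-1.0, 1.0], size=n)    # BPSK symbols, user 1
    x2 = rng.choice([-1.0, 1.0], size=n)    # BPSK symbols, user 2

    y1 = x1 + a12 * x2 + sigma * rng.standard_normal(n)
    y2 = x2 + a21 * x1 + sigma * rng.standard_normal(n)

    # Hard decisions that treat interference as noise; code design aims to do better.
    ber1 = np.mean(np.sign(y1) != x1)
    print(f"user 1 uncoded BER, interference treated as noise: {ber1:.3f}")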

    Devices and Fibers for Ultrawideband Optical Communications

    Wavelength-division multiplexing (WDM) has historically enabled the increase in the capacity of optical systems by progressively populating the existing optical bandwidth of erbium-doped fiber amplifiers (EDFAs) in the C-band. Nowadays, the number of channels needed in optical systems is approaching the maximum capacity of standard C-band EDFAs. As a result, the industry has worked on novel approaches, such as the use of multicore fibers, the extension of the available spectrum of C-band EDFAs, and the development of transmission systems covering the C- and L-bands and beyond. In the context of continuous traffic growth, ultrawideband (UWB) WDM transmission systems appear to be a promising technology to leverage the bandwidth of already deployed optical fiber infrastructure and sustain the traffic demand for the years to come. Since the pioneering demonstrations of UWB transmission a few years ago, long strides have been taken toward practical UWB technologies. In this review article, we discuss how the most recent advances in the design and fabrication of enabling devices, such as lasers, amplifiers, optical switches, and modulators, have improved the performance of UWB systems, paving the way to turn research demonstrations into future products. In addition, we also report on advances in UWB optical fibers, such as the recently introduced nested antiresonant nodeless fibers (NANFs), whose future implementations could potentially provide up to 300-nm-wide bandwidth at less than 0.2 dB/km loss.
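
    To put the quoted fiber figure in perspective, the short calculation below converts a per-kilometer attenuation into the fraction of launched power remaining after a span; the 80 km span length is an arbitrary example, not a value from the article.

    alpha_db_per_km = 0.2          # attenuation figure quoted for NANF-class fibers
    span_km = 80                   # example span length (assumption)

    loss_db = alpha_db_per_km * span_km            # 16 dB of span loss
    fraction_remaining = 10 ** (-loss_db / 10)     # ~2.5% of launched power
    print(f"{loss_db:.1f} dB span loss -> {100 * fraction_remaining:.1f}% of power remains")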

    Source-channel coding for robust image transmission and for dirty-paper coding

    In this dissertation, we studied two seemingly unrelated, but conceptually related, problems in terms of source-channel coding: 1) wireless image transmission and 2) Costa ("dirty-paper") code design. In the first part of the dissertation, we consider progressive image transmission over a wireless system employing space-time coded OFDM. The space-time coded OFDM system, based on a newly built broadband MIMO fading model, is theoretically evaluated by assuming perfect channel state information (CSI) at the receiver for coherent detection. Then an adaptive modulation scheme is proposed to pick the constellation size that offers the best reconstructed image quality for each average signal-to-noise ratio (SNR). A more practical scenario is also considered without the assumption of perfect CSI. We employ low-complexity decision-feedback decoding for differentially space-time coded OFDM systems to exploit transmitter diversity. For joint source-channel coding (JSCC), we adopt a product channel code structure that is proven to provide powerful error protection and bursty error correction. To further improve the system performance, we also apply powerful iterative (turbo) coding techniques and propose the iterative decoding of differentially space-time coded multiple descriptions of images. The second part of the dissertation deals with practical dirty-paper code designs. We first invoke an information-theoretic interpretation of algebraic binning and motivate the code design guidelines in terms of source-channel coding. Then two dirty-paper code designs are proposed. The first is a nested turbo construction based on soft-output trellis-coded quantization (SOTCQ) for source coding and turbo trellis-coded modulation (TTCM) for channel coding. A novel procedure is devised to balance the dimensionalities of the equivalent lattice codes corresponding to SOTCQ and TTCM. The second dirty-paper code design employs TCQ and irregular repeat-accumulate (IRA) codes for near-capacity performance. This is done by synergistically combining TCQ with IRA codes so that they work together as well as they do individually. Our TCQ/IRA design approaches the dirty-paper capacity limit in the low-rate regime (e.g., < 1.0 bit/sample), while our nested SOTCQ/TTCM scheme provides the best performance so far at medium-to-high rates (e.g., >= 1.0 bit/sample). Thus the two proposed practical code designs complement each other.
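
    As background for the dirty-paper part, Costa's classical result states that for Y = X + S + Z with interference S known non-causally at the encoder, power constraint P, and Gaussian noise of variance N, the capacity equals that of the interference-free channel. The tiny check below simply evaluates this target rate for illustrative values of P and N.

    import math

    P, N = 1.0, 0.1                # illustrative signal power and noise variance
    capacity = 0.5 * math.log2(1 + P / N)
    print(f"dirty-paper capacity target: {capacity:.2f} bits/sample")
    # The rate is independent of the interference S; it is this limit that the
    # proposed TCQ/IRA and SOTCQ/TTCM constructions try to approach in practice.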

    Reliable chip design from low powered unreliable components

    The pace of technological improvement in the semiconductor market is driven by Moore's Law, enabling chip transistor density to double every two years. Transistors have continued to decline in cost and size, but their power density has increased. Continuous transistor scaling and extremely tight power constraints in modern Very Large Scale Integration (VLSI) chips can potentially negate the benefits of technology shrinking due to reliability issues. As VLSI technology scales into the nanoscale regime, fundamental physical limits are approached, and higher levels of variability, performance degradation, and higher rates of manufacturing defects are experienced. Soft errors, which traditionally affected only memories, now also degrade logic circuit reliability. A solution to these limitations is to integrate reliability assessment techniques into the Integrated Circuit (IC) design flow. This thesis investigates four aspects of reliability-driven circuit design: a) reliability estimation; b) reliability optimization; c) fault-tolerant techniques; and d) delay degradation analysis. To guide the reliability-driven synthesis and optimization of combinational circuits, a highly accurate, probability-based reliability estimation methodology, christened the Conditional Probabilistic Error Propagation (CPEP) algorithm, is developed to compute the impact of gate failures on the circuit output. CPEP guides the proposed rewriting-based logic optimization algorithm employing local transformations. The main idea behind this methodology is to replace parts of the circuit with functionally equivalent but more reliable counterparts chosen from a precomputed subset of Negation-Permutation-Negation (NPN) classes of 4-variable functions. Cut enumeration and Boolean matching, driven by the reliability-aware optimization algorithm, are used to identify the best possible replacement candidates. Experiments on a set of MCNC benchmark circuits and 8051 functional microcontroller units indicate that the proposed framework can achieve up to a 75% reduction in output error probability. On average, about 14% soft error rate (SER) reduction is obtained at the expense of a very low area overhead of 6.57%, which results in 13.52% higher power consumption. The next contribution of the research describes a novel methodology to design fault-tolerant circuitry by employing error correction codes, known as the Codeword Prediction Encoder (CPE). Traditional fault-tolerant techniques analyze the circuit reliability issue from a static point of view, neglecting dynamic errors. In the context of communication and storage, the study of novel methods for reliable data transmission over unreliable hardware is an increasing priority. The idea of the CPE is adapted from the field of forward error correction for telecommunications, focusing on both encoding aspects and error correction capabilities. The proposed Augmented Encoding solution consists of computing an augmented codeword that contains both the codeword to be transmitted on the channel and extra parity bits. A Computer Aided Development (CAD) framework known as the CPE simulator is developed, providing a unified platform that comprises a novel encoder and fault-tolerant LDPC decoders. Experiments on a set of encoders with different coding rates and different decoders indicate that the proposed framework can correct all errors under specific scenarios, and, on average, about a 1000x improvement in SER is achieved.
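
    The sketch below is not the CPEP algorithm; it is a brute-force Monte Carlo baseline for the same quantity CPEP estimates analytically, namely the probability that random gate failures flip a circuit's output. The two-gate circuit and the failure probability are hypothetical.

    import random

    def circuit(a, b, c, eps):
        """out = (a AND b) OR c, where each gate output flips with probability eps."""
        g1 = (a & b) ^ (random.random() < eps)    # possibly faulty AND gate
        out = (g1 | c) ^ (random.random() < eps)  # possibly faulty OR gate
        return int(out)

    def output_error_probability(eps, trials=100_000):
        errors = 0
        for _ in range(trials):
            a, b, c = (random.randint(0, 1) for _ in range(3))
            errors += circuit(a, b, c, eps) != circuit(a, b, c, 0.0)  # compare with fault-free run
        return errors / trials

    print("estimated output error probability:", output_error_probability(0.01))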
    The last part of the research is an Inverse Gaussian Distribution (IGD)-based delay model applicable to both combinational and sequential elements of sub-powered circuits. The Probability Density Function (PDF)-based delay model accurately captures the delay behavior of all the basic gates in the library database. The IGD model employs these necessary parameters, and the delay estimation accuracy is demonstrated by evaluating multiple circuits. Experimental results indicate that the IGD-based approach matches HSPICE Monte Carlo simulation results closely, with average errors of less than 1.9% and 1.2% for the 8-bit Ripple Carry Adder (RCA) and the 8-bit De-Multiplexer (DEMUX) and Multiplexer (MUX), respectively.
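
    For reference, the Inverse Gaussian (Wald) density that gives the IGD delay model its name is written out below; the parameters mu and lam are placeholders rather than values fitted in the thesis.

    import numpy as np

    def inverse_gaussian_pdf(x, mu, lam):
        """f(x; mu, lam) = sqrt(lam / (2*pi*x^3)) * exp(-lam*(x - mu)^2 / (2*mu^2*x)), x > 0."""
        x = np.asarray(x, dtype=float)
        return np.sqrt(lam / (2 * np.pi * x**3)) * np.exp(-lam * (x - mu)**2 / (2 * mu**2 * x))

    # Evaluate a hypothetical gate-delay distribution at a few delay points.
    print(inverse_gaussian_pdf(np.linspace(0.5, 2.5, 5), mu=1.0, lam=4.0))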

    NASA Tech Briefs, September 2009

    Topics covered include: Filtering Water by Use of Ultrasonically Vibrated Nanotubes; Computer Code for Nanostructure Simulation; Functionalizing CNTs for Making Epoxy/CNT Composites; Improvements in Production of Single-Walled Carbon Nanotubes; Progress Toward Sequestering Carbon Nanotubes in PmPV; Two-Stage Variable Sample-Rate Conversion System; Estimating Transmitted-Signal Phase Variations for Uplink Array Antennas; Board Saver for Use with Developmental FPGAs; Circuit for Driving Piezoelectric Transducers; Digital Synchronizer without Metastability; Compact, Low-Overhead, MIL-STD-1553B Controller; Parallel-Processing CMOS Circuitry for M-QAM and 8PSK TCM; Differential InP HEMT MMIC Amplifiers Embedded in Waveguides; Improved Aerogel Vacuum Thermal Insulation; Fluoroester Co-Solvents for Low-Temperature Li+ Cells; Using Volcanic Ash to Remove Dissolved Uranium and Lead; High-Efficiency Artificial Photosynthesis Using a Novel Alkaline Membrane Cell; Silicon Wafer-Scale Substrate for Microshutters and Detector Arrays; Micro-Horn Arrays for Ultrasonic Impedance Matching; Improved Controller for a Three-Axis Piezoelectric Stage; Nano-Pervaporation Membrane with Heat Exchanger Generates Medical-Grade Water; Micro-Organ Devices; Nonlinear Thermal Compensators for WGM Resonators; Dynamic Self-Locking of an OEO Containing a VCSEL; Internal Water Vapor Photoacoustic Calibration; Mid-Infrared Reflectance Imaging of Thermal-Barrier Coatings; Improving the Visible and Infrared Contrast Ratio of Microshutter Arrays; Improved Scanners for Microscopic Hyperspectral Imaging; Rate-Compatible LDPC Codes with Linear Minimum Distance; PrimeSupplier Cross-Program Impact Analysis and Supplier Stability Indicator Simulation Model; Integrated Planning for Telepresence With Time Delays; Minimizing Input-to-Output Latency in Virtual Environment; Battery Cell Voltage Sensing and Balancing Using Addressable Transformers; Gaussian and Lognormal Models of Hurricane Gust Factors; Simulation of Attitude and Trajectory Dynamics and Control of Multiple Spacecraft; Integrated Modeling of Spacecraft Touch-and-Go Sampling; Spacecraft Station-Keeping Trajectory and Mission Design Tools; Efficient Model-Based Diagnosis Engine; and DSN Simulator.

    Compressed Sensing of Memoryless Sources: A Deterministic Hadamard Construction

    Compressed sensing is a new trend in signal processing for efficient sampling and signal acquisition. The idea is that most real-world signals have a sparse representation in an appropriate basis, and this can be exploited to capture the sparse signal by taking only a few linear projections. Recovery is possible by running appropriate low-complexity algorithms that exploit the sparsity (prior information) to reconstruct the signal from the linear projections (posterior information). The main benefit is that the required number of measurements is much smaller than the dimension of the signal. This results in a large saving in sensor cost (in measurement devices) or a dramatic reduction in data acquisition time. However, some difficulties naturally arise in applying compressed sensing to real-world applications, such as robustness issues in taking the linear projections and the computational complexity of the recovery algorithm. In this thesis, we design structured matrices for compressed sensing. In particular, we claim that some of the practical difficulties can be reasonably solved by imposing some structure on the measurement matrices. The thesis revolves around Hadamard matrices, which are {+1, -1}-valued matrices with many applications in signal processing, coding, optics, and mathematics. As the title of the thesis implies, there are two main ingredients to this thesis. First, we use a memoryless assumption for the source, i.e., we assume that the nonzero components of the sparse signal are independently generated by a given probability distribution and their positions are completely random. This allows us to use tools from probability, information theory, and coding theory to rigorously assess the achievable performance. Second, using the mathematical properties of Hadamard matrices, we design measurement matrices by selecting specific rows of a Hadamard matrix according to a deterministic criterion. We call the resulting matrices "partial Hadamard matrices". We design partial Hadamard matrices for three signal models: memoryless discrete signals and sparse signals with linear or sub-linear sparsity. A signal has linear sparsity if the number k of its nonzero components is proportional to n, the dimension of the signal, whereas it has sub-linear sparsity if k scales like O(n^α) for some α in (0, 1). We develop tools to rigorously analyze the performance of the proposed constructions by borrowing ideas from information theory and coding theory. We also extend our construction to distributed (multi-terminal) signals. Distributed compressed sensing is a ubiquitous problem in distributed data acquisition systems such as ad-hoc sensor networks. From both a theoretical and an engineering point of view, it is important to know how many measurements per dimension are necessary from different terminals in order to have a reliable estimate of the distributed data. We theoretically analyze this problem for a very simple setup where the components of the distributed signal are generated by a joint probability distribution that captures the spatial correlation among different terminals. We give an information-theoretic characterization of the measurement-rate region that results in a negligible recovery distortion. We also propose a low-complexity distributed message-passing algorithm to achieve the theoretical limits.
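
    A minimal sketch of the sensing step described above: build a Sylvester Hadamard matrix, keep a subset of its rows, and take linear projections of a sparse signal. The random row selection here is only a stand-in for the deterministic selection criterion developed in the thesis, and the sizes are arbitrary.

    import numpy as np

    def sylvester_hadamard(m):
        """Return the 2^m x 2^m Sylvester Hadamard matrix with +/-1 entries."""
        H = np.array([[1]])
        for _ in range(m):
            H = np.block([[H, H], [H, -H]])
        return H

    rng = np.random.default_rng(1)
    n, k, num_meas = 64, 4, 16              # dimension, sparsity, measurements (illustrative)

    H = sylvester_hadamard(6)               # 64 x 64
    rows = rng.choice(n, size=num_meas, replace=False)   # stand-in for the deterministic rule
    A = H[rows, :]                          # partial Hadamard measurement matrix

    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)   # sparse signal
    y = A @ x                               # the few linear projections
    print(y.shape)                          # (16,)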