159 research outputs found

    Particle smoothing techniques with turbo principle for MIMO systems

    Get PDF

    Optimizing the Bit-flipping Method for Decoding Low-density Parity-check Codes in Wireless Networks by Using the Artificial Spider Algorithm

    Get PDF
    In this paper, the performance of Low-Density Parity-Check (LDPC) codes is improved and the complexity of hard-decision Bit-Flipping (BF) decoding is reduced by utilizing the Artificial Spider Algorithm (ASA). The ASA is used to solve the optimization problem of the decoding thresholds. Two decoding thresholds are used to flip multiple bits in each decoding iteration, which reduces the probability of erroneous flips, accelerates decoding convergence, and improves decoding performance. The resulting BF algorithm with a low-complexity optimizer requires only real-number operations before iteration and logical operations in each iteration. The ASA-based scheme outperforms the optimized decoding scheme that uses the Particle Swarm Optimization (PSO) algorithm and can improve the performance of wireless network applications. Simulation results show that the ASA-based algorithm for solving highly nonlinear unconstrained problems exhibits fast decoding convergence and excellent decoding performance, making it suitable for applications in broadband wireless networks.
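    The two-threshold multi-bit flipping rule described above can be sketched as follows. This is a minimal illustration, assuming a binary LDPC code and a per-bit metric that counts failed checks; the thresholds T1 and T2, the fallback rule, and all names are illustrative, while the actual metric and the ASA threshold optimization are detailed in the paper.

```python
import numpy as np

def bf_decode_two_thresholds(H, y, T1, T2, max_iter=50):
    """Hard-decision bit-flipping LDPC decoding with two flipping
    thresholds (illustrative sketch, not the paper's exact rule).

    H        : (m, n) binary parity-check matrix
    y        : (n,) received hard decisions (0/1)
    T1 >= T2 : flipping thresholds, e.g. optimized offline by the ASA
    """
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H.dot(x) % 2          # unsatisfied parity checks
        if not syndrome.any():
            return x, True               # valid codeword found
        E = H.T.dot(syndrome)            # failed-check count per bit
        flips = E >= T1                  # flip all bits above T1 ...
        if not flips.any():
            flips = E >= T2              # ... else fall back to T2
        x[flips] ^= 1                    # flip multiple bits at once
    return x, False
```

    Because the thresholds are fixed before decoding starts (real-number operations only before iteration), each iteration reduces to the comparisons and XORs shown above, which is the source of the low complexity claimed in the abstract.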

    Low-Power Embedded Design Solutions and Low-Latency On-Chip Interconnect Architecture for System-On-Chip Design

    Get PDF
    This dissertation presents three design solutions that address several key system-on-chip (SoC) issues to achieve low power and high performance: 1) joint source and channel decoding (JSCD) schemes for low-power SoCs used in portable multimedia systems, 2) an efficient on-chip interconnect architecture for massive multimedia data streaming on multiprocessor SoCs (MPSoCs), and 3) a data processing architecture for low-power SoCs in distributed sensor network (DSN) systems and its implementation. The first part includes a low-power embedded low-density parity-check (LDPC) code and H.264 joint decoding architecture that lowers the baseband energy consumption of a channel decoder using joint source decoding and dynamic voltage and frequency scaling (DVFS). A low-power multiple-input multiple-output (MIMO) and H.264 video joint detector/decoder design that minimizes energy for portable, wireless embedded systems is also presented. In the second part, a link-level quality-of-service (QoS) scheme using unequal error protection (UEP) for low-power network-on-chip (NoC) designs and low-latency on-chip network designs for MPSoCs is proposed. This part contains WaveSync, a low-latency network-on-chip architecture for globally-asynchronous locally-synchronous (GALS) designs, and a simultaneous dual-path routing (SDPR) scheme utilizing the path diversity present in typical mesh-topology networks-on-chip. SDPR is akin to having a higher link width but without the significant hardware overhead associated with simple bus-width scaling. The last part presents data processing unit designs for embedded SoCs. We propose a data processing and control logic design for a new radiation detection sensor system generating data at or above the peta-bits-per-second level. Implementation results show that the intended clock rate is achieved within the power target of less than 200 mW. We also present a digital signal processing (DSP) accelerator supporting configurable MAC, FFT, FIR, and 3-D cross-product operations for embedded SoCs. It consumes 12.35 mW with a 0.167 mm² area at 333 MHz.
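    As background for the DVFS technique used in the first part, the standard CMOS dynamic-power model it exploits is P_dyn = alpha * C * V^2 * f; the quadratic dependence on supply voltage is what makes joint voltage and frequency scaling effective. A minimal sketch with illustrative numbers (not taken from the dissertation):

```python
def dynamic_power(alpha, c_eff, vdd, freq):
    """Classic CMOS dynamic power: P = alpha * C * V^2 * f.
    alpha: switching activity factor, c_eff: effective switched
    capacitance (F), vdd: supply voltage (V), freq: clock (Hz)."""
    return alpha * c_eff * vdd ** 2 * freq

# Illustrative: halving the clock and scaling 1.0 V -> 0.7 V cuts
# dynamic power to roughly a quarter of the original (hypothetical
# design parameters, chosen only to show the scaling behavior).
p_full = dynamic_power(0.1, 1e-9, 1.0, 500e6)
p_dvfs = dynamic_power(0.1, 1e-9, 0.7, 250e6)
print(p_dvfs / p_full)   # ~0.245
```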

    Spherical and Hyperbolic Toric Topology-Based Codes On Graph Embedding for Ising MRF Models: Classical and Quantum Topology Machine Learning

    Full text link
    The paper introduces the application of information geometry to describe the ground states of Ising models by utilizing parity-check matrices of cyclic and quasi-cyclic codes on toric and spherical topologies. The approach establishes a connection between machine learning and error-correcting coding, and has implications for the development of new embedding methods based on trapping sets. Statistical physics and number geometry are applied to optimize error-correcting codes, leading to these embedding and sparse factorization methods. The paper establishes a direct connection between DNN architecture and error-correcting coding by demonstrating how state-of-the-art architectures (ChordMixer, Mega, Mega-chunk, CDIL, ...) from the long-range arena can be equivalent to block and convolutional LDPC codes (Cage-graph, Repeat Accumulate). QC codes correspond to certain types of chemical elements, with the carbon element being represented by the mixed-automorphism Shu-Lin-Fossorier QC-LDPC code. The connections between Belief Propagation and the Permanent, Bethe-Permanent, Nishimori Temperature, and Bethe-Hessian Matrix are elaborated upon in detail. The Quantum Approximate Optimization Algorithm (QAOA) used in the Sherrington-Kirkpatrick Ising model can be seen as analogous to the back-propagation loss-function landscape in training DNNs. This similarity creates a comparable problem with trapping-set (TS) pseudo-codewords, resembling the belief propagation method. Additionally, the layer depth in QAOA correlates with the number of belief propagation decoding iterations in the Wiberg decoding tree. Overall, this work has the potential to advance multiple fields, from Information Theory, DNN architecture design (sparse and structured prior graph topology), and efficient hardware design for Quantum and Classical DPU/TPU (graph, quantization, and shift-register architectures) to Materials Science and beyond. Comment: 71 pages, 42 figures, 1 table, 1 appendix. arXiv admin note: text overlap with arXiv:2109.08184 by other author
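    The codeword-to-ground-state correspondence behind the abstract's first sentence can be made concrete with a small sketch. This is the textbook mapping, assuming the spin representation s_i = (-1)^{x_i}, not the paper's specific toric or spherical construction: each parity check becomes a multi-body Ising interaction, and the configurations satisfying all checks (the codewords) minimize the energy.

```python
import numpy as np
from itertools import product

def ising_energy_from_checks(H, spins):
    """Multi-body Ising energy whose ground states are the codewords
    of the code with parity-check matrix H (illustrative sketch).
    Each row of H contributes one interaction term: the product of
    the spins it touches, which is +1 iff the check is satisfied."""
    energy = 0.0
    for row in H:
        energy -= np.prod(spins[row == 1])  # satisfied check lowers E
    return energy

# Tiny example: the (3,1) repetition code. Its two codewords 000 and
# 111 (spins +++ and ---) are exactly the two ground states.
H = np.array([[1, 1, 0],
              [0, 1, 1]])
for s in product([+1, -1], repeat=3):
    print(s, ising_energy_from_checks(H, np.array(s)))
```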

    NASA Tech Briefs, October 2009

    Get PDF
    Topics covered include: Light-Driven Polymeric Bimorph Actuators; Guaranteeing Failsafe Operation of Extended-Scene Shack-Hartmann Wavefront Sensor Algorithm; Cloud Water Content Sensor for Sounding Balloons and Small UAVs; Pixelized Device Control Actuators for Large Adaptive Optics; T-Slide Linear Actuators; G4FET Implementations of Some Logic Circuits; Electrically Variable or Programmable Nonvolatile Capacitors; System for Automated Calibration of Vector Modulators; Complementary Paired G4FETs as Voltage-Controlled NDR Device; Three MMIC Amplifiers for the 120-to-200 GHz Frequency Band; Low-Noise MMIC Amplifiers for 120 to 180 GHz; Using Ozone To Clean and Passivate Oxygen-Handling Hardware; Metal Standards for Waveguide Characterization of Materials; Two-Piece Screens for Decontaminating Granular Material; Mercuric Iodide Anticoincidence Shield for Gamma-Ray Spectrometer; Improved Method of Design for Folding Inflatable Shells; Ultra-Large Solar Sail; Cooperative Three-Robot System for Traversing Steep Slopes; Assemblies of Conformal Tanks; Microfluidic Pumps Containing Teflon[Trademark] AF Diaphragms; Transparent Conveyor of Dielectric Liquids or Particles; Multi-Cone Model for Estimating GPS Ionospheric Delays; High-Sensitivity GaN Microchemical Sensors; On the Divergence of the Velocity Vector in Real-Gas Flow; Progress Toward a Compact, Highly Stable Ion Clock; Instruments for Imaging from Far to Near; Reflectors Made from Membranes Stretched Between Beams; Integrated Risk and Knowledge Management Program -- IRKM-P; LDPC Codes with Minimum Distance Proportional to Block Size; Constructing LDPC Codes from Loop-Free Encoding Modules; MMICs with Radial Probe Transitions to Waveguides; Tests of Low-Noise MMIC Amplifier Module at 290 to 340 GHz; and Extending Newtonian Dynamics to Include Stochastic Processes

    Channel Coding in Molecular Communication

    Get PDF
    This dissertation establishes and analyzes a complete molecular transmission system from a communication engineering perspective. Its focus is on diffusion-based molecular communication in an unbounded three-dimensional fluid medium. As a basis for the investigation of transmission algorithms, an equivalent discrete-time channel model (EDTCM) is developed, and the characterization of the channel is described by an analytical derivation, a random-walk-based simulation, a trained artificial neural network (ANN), and a proof-of-concept testbed setup. The investigated transmission algorithms cover modulation schemes at the transmitter side as well as channel equalizers and detectors at the receiver side. In addition to the evaluation of state-of-the-art techniques and the introduction of orthogonal frequency-division multiplexing (OFDM), particular attention is given to the novel variable concentration shift keying (VCSK) modulation adapted to the diffusion-based transmission channel, the low-complexity adaptive threshold detector (ATD) working without explicit channel knowledge, the low-complexity soft-output piecewise linear detector (PLD), and the optimal a posteriori probability (APP) detector. To improve the error-prone information transmission, block codes, convolutional codes, line codes, spreading codes, and spatial codes are investigated. The analysis is carried out under various normalization approaches, and gains or losses compared to uncoded transmission are highlighted. In addition to state-of-the-art forward error correction (FEC) codes, novel line codes adapted to the error statistics of the diffusion-based channel are proposed. Moreover, the turbo principle is introduced into the field of molecular communication, where extrinsic information is exchanged iteratively between detector and decoder. By means of an extrinsic information transfer (EXIT) chart analysis, the potential of the iterative processing is shown, and the communication channel capacity, which represents the theoretical performance limit for the system under investigation, is computed. In addition, the construction of an irregular convolutional code (IRCC) using the EXIT chart is presented and its performance capability is demonstrated. All considered transmission algorithms are evaluated in terms of bit error rate (BER) performance, determined by means of Monte Carlo simulations and, for some algorithms, by theoretical derivation.
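    To make the detector ideas tangible, below is a minimal sketch of an adaptive threshold detector for on-off-keyed molecule counts. It is an assumption-laden illustration (the moving-average threshold rule and window size are invented here), but it captures the property claimed in the abstract: the decision threshold is derived from the received samples themselves, so no explicit channel knowledge is needed.

```python
import numpy as np

def adaptive_threshold_detect(counts, window=10):
    """Illustrative adaptive threshold detection for OOK-style
    diffusion-based molecular communication (sketch only; the ATD
    rule in the dissertation may differ).

    counts: received molecule counts, one per symbol interval.
    The threshold tracks a moving average of recent counts."""
    bits = np.zeros(len(counts), dtype=int)
    history = []
    for k, c in enumerate(counts):
        theta = np.mean(history[-window:]) if history else c
        bits[k] = int(c > theta)       # decide 1 if above threshold
        history.append(c)
    return bits
```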

    Distributed Video Coding: Iterative Improvements

    Get PDF

    ADAPTIVE CHANNEL AND SOURCE CODING USING APPROXIMATE INFERENCE

    Get PDF
    Channel coding and source coding are two important problems in communications. Although both channel coding and source coding (especially distributed source coding (DSC)) can achieve their ultimate performance given perfect knowledge of the channel noise and the source correlation, respectively, such information may not always be available at the decoder side, owing to the time-varying characteristics of some communication systems and of the sources themselves. In this dissertation, I focus on online channel noise estimation and correlation estimation using both stochastic and deterministic approximate inference on factor graphs.

    In channel coding, belief propagation (BP) is a powerful algorithm for decoding low-density parity-check (LDPC) codes over additive white Gaussian noise (AWGN) channels. However, the traditional BP algorithm cannot adapt efficiently to statistical changes of the SNR in an AWGN channel. To solve this problem, two common workarounds in approximate inference are stochastic methods (e.g., particle filtering (PF)) and deterministic methods (e.g., expectation propagation (EP)). Generally, deterministic methods are much faster than stochastic methods, whereas stochastic methods are more flexible and suitable for any distribution. In this dissertation, I propose two adaptive LDPC decoding schemes that perform online estimation of time-varying channel state information (particularly the signal-to-noise ratio (SNR)) at the bit level by incorporating the PF and EP algorithms. Experimental results comparing the proposed PF-based and EP-based approaches show that the EP-based approach obtains comparable estimation accuracy with less computational complexity than the PF-based method for both stationary and time-varying SNR, while simultaneously enhancing the BP decoding performance. Moreover, the EP estimator shows very fast convergence, and the additional computational overhead of the proposed decoder is less than 10% of that of the standard BP decoder.

    Given the close relationship between source coding and channel coding, the proposed ideas are extended to source correlation estimation. First, I study the correlation estimation problem in a lossless DSC setup, considering both asymmetric and non-asymmetric Slepian-Wolf (SW) coding of two binary correlated sources. The aforementioned PF-based and EP-based approaches are extended to handle the correlation between two binary sources, where the relationship is modeled as a virtual binary symmetric channel (BSC) with a time-varying crossover probability. In addition, to handle the correlation estimation problem of Wyner-Ziv (WZ) coding, a lossy DSC setup, I design a joint bit-plane model by which the PF-based approach can be applied to tracking the correlation between non-binary sources. Experimental results show that the proposed correlation estimation approaches significantly improve the compression performance of DSC.

    Finally, owing to its ultra-low encoding complexity, DSC is a promising technique for many tasks in which the encoder has only limited computing and communication power, e.g., space imaging systems. In this dissertation, I consider a real-world application of the proposed correlation estimation scheme to the onboard low-complexity compression of solar stereo images, since such solutions are essential to reduce onboard storage, processing, and communication resources. I propose an adaptive distributed compression solution using PF that tracks the correlation, as well as performs disparity estimation, at the decoder side. The proposed algorithm is tested on stereo solar images captured by the twin-satellite system of NASA's STEREO project. The experimental results show a significant PSNR improvement over traditional separate bit-plane decoding without dynamic correlation and disparity estimation.
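    A minimal sketch of the stochastic half of this approach is given below: a bootstrap particle filter that tracks a slowly drifting noise level from channel observations. All modeling choices here (random-walk state transition, BPSK mixture likelihood, resampling rule) are illustrative assumptions; in the dissertation the estimator is coupled with BP decoding on a joint factor graph, which is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_track_noise(obs, n_particles=200, drift=0.05):
    """Bootstrap particle filter tracking a time-varying AWGN noise
    std (a proxy for SNR) from observations y = x + n with unknown
    x in {-1, +1} (illustrative sketch). Returns the posterior-mean
    estimate of the noise std after each observation."""
    sigma = rng.uniform(0.2, 2.0, n_particles)   # initial particles
    w = np.full(n_particles, 1.0 / n_particles)  # uniform weights
    estimates = []
    for y in obs:
        # state transition: random-walk drift of the noise level
        sigma = np.abs(sigma + drift * rng.standard_normal(n_particles))
        # likelihood under an equiprobable +/-1 Gaussian mixture
        lik = (np.exp(-(y - 1) ** 2 / (2 * sigma ** 2))
               + np.exp(-(y + 1) ** 2 / (2 * sigma ** 2))) / sigma
        w *= lik
        w /= w.sum()
        estimates.append(np.sum(w * sigma))
        # resample when the effective sample size degenerates
        if 1.0 / np.sum(w ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=w)
            sigma = sigma[idx]
            w = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)
```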

    ADAPTIVE AND SECURE DISTRIBUTED SOURCE CODING FOR VIDEO AND IMAGE COMPRESSION

    Get PDF
    Distributed Video Coding (DVC) is rapidly gaining popularity as a low-cost, robust video coding solution that reduces video encoding complexity. DVC is built on Distributed Source Coding (DSC) principles, where the correlation between the sources to be compressed is exploited at the decoder side. In DVC, a current frame available only at the encoder is estimated at the decoder using side information (SI) generated from other frames available at the decoder. The inter-frame correlation in DVC is then exploited at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the SI frame. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations.

    Generally, the existing correlation estimation methods in DVC can be classified into two main types: online estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform online estimation techniques at the cost of increased decoding complexity.

    In order to exploit the robustness of DVC code designs, I integrate particle filtering with standard belief propagation (BP) decoding for inference on one joint factor graph to estimate the correlation between source and side information. Correlation estimation is performed OTF, as it is carried out jointly with decoding of the graph-based DSC code. Moreover, I demonstrate the proposed scheme within state-of-the-art DVC systems, which are transform-domain based with a feedback channel for rate adaptation. Experimental results show that the proposed system gives a significant performance improvement compared to the benchmark state-of-the-art DISCOVER codec (including its correlation estimation) and to the case without dynamic particle-filter tracking, due to improved knowledge of timely correlation statistics via the combination of joint bit-plane decoding and particle-based BP tracking.

    Although sampling-based (e.g., particle filtering) OTF correlation estimation improves the performance of DVC, it also introduces significant computational overhead and increases the decoding delay. Therefore, I tackle this difficulty through a low-complexity adaptive DVC scheme using deterministic approximate inference, where correlation estimation is again performed OTF jointly with decoding of the factor-graph-based DVC code, but with much lower complexity. The proposed adaptive DVC scheme is based on expectation propagation (EP), which generally offers a better tradeoff between accuracy and complexity among deterministic approximate inference methods. Experimental results show that the proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with the sampling-based method.

    Finally, I extend the concept of DVC (i.e., exploiting inter-frame correlation at the decoder side) to the compression of biomedical imaging data (e.g., CT sequences) in a lossless setup, where each slice of a CT sequence is analogous to a frame of a video sequence. Besides compression efficiency, another important concern for biomedical imaging data is privacy and security. Ideally, biomedical data should be kept in a secure manner (i.e., encrypted). An intuitive approach is to compress the encrypted biomedical data directly. Unfortunately, traditional compression algorithms (which remove redundancy by exploiting the structure of the data) fail on encrypted data, because encrypted data appear random and lack the structure of the original data. The "best" practice has been to compress the data before encryption; however, this is not appropriate for privacy-related scenarios (e.g., biomedical applications), where one wants to process data while keeping them encrypted and safe. In this dissertation, I develop a Secure Privacy-presERving Medical Image CompRessiOn (SUPERMICRO) framework based on DSC, which makes compression of the encrypted data possible without compromising security or compression efficiency. The approach guarantees data transmission and storage in a privacy-preserving manner. I tested the proposed framework on two CT image sequences and compared it with state-of-the-art JPEG 2000 lossless compression. Experimental results demonstrate that the SUPERMICRO framework provides enhanced security and privacy protection, as well as high compression performance.
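    The virtual-BSC correlation model that both the PF-based and EP-based trackers estimate can be illustrated with a short sketch: once the crossover probability p between a WZ bit-plane and its SI counterpart is known (or tracked OTF), each SI bit is converted into a log-likelihood ratio for the syndrome decoder. The function name and LLR convention below are illustrative.

```python
import numpy as np

def si_llr_from_bsc(si_bits, p_cross):
    """Turn side-information bits into decoder LLRs under the
    virtual-BSC correlation model of DSC/DVC (illustrative sketch).

    si_bits : side-information bit-plane (0/1 array)
    p_cross : estimated crossover probability between each WZ bit
              and its SI counterpart (0 < p_cross < 0.5)
    LLR convention: log P(x=0) / P(x=1)."""
    llr_mag = np.log((1 - p_cross) / p_cross)
    return np.where(si_bits == 0, llr_mag, -llr_mag)

# A more reliable SI (smaller p_cross) yields stronger LLRs:
print(si_llr_from_bsc(np.array([0, 1, 0]), 0.1))  # [ 2.2 -2.2  2.2]
```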