    Coding theory, information theory and cryptology : proceedings of the EIDMA winter meeting, Veldhoven, December 19-21, 1994

    Wyner-Ziv coding based on TCQ and LDPC codes and extensions to multiterminal source coding

    Driven by a host of emerging applications (e.g., sensor networks and wireless video), distributed source coding (i.e., Slepian-Wolf coding, Wyner-Ziv coding, and various other forms of multiterminal source coding) has recently become a very active research area. In this thesis, we first design a practical coding scheme for the quadratic Gaussian Wyner-Ziv problem, because in this special case no rate loss is suffered due to the unavailability of the side information at the encoder. In order to approach the Wyner-Ziv distortion limit D_WZ(R), the trellis coded quantization (TCQ) technique is employed to quantize the source X, and an irregular LDPC code is used to implement Slepian-Wolf coding of the quantized source input Q(X) given the side information Y at the decoder. An optimal non-linear estimator is devised at the joint decoder to compute the conditional mean of the source X given the dequantized version of Q(X) and the side information Y. Assuming ideal Slepian-Wolf coding, our scheme performs only 0.2 dB away from the Wyner-Ziv limit D_WZ(R) at high rate, which mirrors the performance of entropy-coded TCQ in classic source coding. Practical designs perform 0.83 dB away from D_WZ(R) at medium rates. With 2-D trellis-coded vector quantization, the performance gap to D_WZ(R) is only 0.66 dB at 1.0 b/s and 0.47 dB at 3.3 b/s. We then extend the proposed Wyner-Ziv coding scheme to the quadratic Gaussian multiterminal source coding problem with two encoders. Both direct and indirect settings of multiterminal source coding are considered. An asymmetric code design containing one classical source coding component and one Wyner-Ziv coding component is first introduced and shown to be able to approach the corner points on the theoretically achievable limits in both settings. To approach any point on the theoretically achievable limits, a second approach based on source splitting is then described. One classical source coding component, two Wyner-Ziv coding components, and a linear estimator are employed in this design. Proofs are provided to show the achievability of any point on the theoretical limits in both settings, assuming that both the source coding and the Wyner-Ziv coding components are optimal. The performance of practical schemes is only 0.15 b/s away from the theoretical limits for the asymmetric approach, and up to 0.30 b/s away from the limits for the source splitting approach.
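
    The estimation step described above has a compact closed form in the scalar Gaussian case. The sketch below is a minimal, hypothetical illustration (not the thesis code, which uses TCQ rather than the plain scalar quantizer and the example variances assumed here): with Y = X + N jointly Gaussian, the conditional mean of X given the side information Y and the quantizer cell containing X is the mean of a truncated normal.

```python
import numpy as np
from scipy.stats import norm

def mmse_estimate(cell_lo, cell_hi, y, sigma_x=1.0, sigma_n=0.5):
    """E[X | X in [cell_lo, cell_hi), Y = y] for Y = X + N with
    X ~ N(0, sigma_x^2) and N ~ N(0, sigma_n^2) independent.
    Parameter values are illustrative only."""
    # Posterior of X given Y = y is Gaussian (standard MMSE filtering step).
    gain = sigma_x**2 / (sigma_x**2 + sigma_n**2)
    mu_post = gain * y
    s = np.sqrt(gain * sigma_n**2)
    # Mean of that Gaussian truncated to the dequantized cell.
    a, b = (cell_lo - mu_post) / s, (cell_hi - mu_post) / s
    mass = norm.cdf(b) - norm.cdf(a)
    return mu_post + s * (norm.pdf(a) - norm.pdf(b)) / mass

# Example: the decoder learns X fell in the cell [0.4, 0.6) and observes y = 0.55.
print(mmse_estimate(0.4, 0.6, 0.55))
```

    Combining the cell information from Q(X) with the side information Y in this way is what lets the joint decoder do better than either dequantization or linear estimation alone.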

    Multiscale Methods for Random Composite Materials

    Simulation of material behaviour is a vital tool not only in accelerating product development and increasing design efficiency but also in advancing our fundamental understanding of materials. While homogeneous, isotropic materials are often simple to simulate, advanced, anisotropic materials pose a more sizeable challenge. In simulating entire composite components, such as a 25m aircraft wing made by stacking several 0.25mm thick plies, finite element models typically exceed millions or even a billion unknowns. This problem is exacerbated by the inclusion of sub-millimeter manufacturing defects for two reasons. Firstly, a finer resolution is required, which makes the problem larger. Secondly, defects introduce randomness. Traditionally, this randomness or uncertainty has been quantified heuristically, since commercial codes are largely unsuccessful in solving problems of this size. This thesis develops a rigorous uncertainty quantification (UQ) framework permitted by a state-of-the-art finite element package, dune-composites, also developed here, designed for but not limited to composite applications. A key feature of this open-source package is a robust, parallel and scalable preconditioner, GenEO, that guarantees constant iteration counts independent of problem size. It boasts near-perfect scaling properties, in both a strong and a weak sense, on over 15,000 cores. It is numerically verified by solving industrially motivated problems containing upwards of 200 million unknowns. Equipped with the capability of solving expensive models, a novel stochastic framework is developed to quantify variability in part performance arising from localized out-of-plane defects. Theoretical part strength is determined for independent samples drawn from a distribution inferred from B-scans of wrinkles. Supported by the literature, the results indicate a strong dependence between maximum misalignment angle and strength knockdown, based on which an engineering model is presented to allow rapid estimation of residual strength, bypassing expensive simulations. The engineering model itself is built from a large set of simulations of residual strength, each of which is computed using the following two-step approach. First, a novel parametric representation of wrinkles is developed, where the spread of parameters defines the wrinkle distribution. Second, expensive forward models are solved only for independent wrinkles using dune-composites. Besides scalability, the other key feature of dune-composites, the GenEO coarse space, doubles as an excellent multiscale basis, which is exploited to build high-quality reduced-order models that are orders of magnitude smaller. This is important because it enables multiple coarse solves for the cost of one fine solve. In an MCMC framework, where many solves are wasted in arriving at the next independent sample, this is a sought-after quality, because it greatly increases the effective sample size for a fixed computational budget, thus providing a route to high-fidelity UQ. This thesis exploits both the new solvers and the multiscale methods developed here to design an efficient Bayesian framework to carry out previously intractable (large-scale) simulations calibrated by experimental data. These new capabilities provide the basis for future work on modelling random heterogeneous materials, while also offering the scope for building virtual test programs, including nonlinear analyses, all of which can be implemented within a probabilistic setting.
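
    To make the coarse-solve/fine-solve trade-off concrete, here is a minimal sketch of a two-stage (delayed-acceptance) Metropolis-Hastings loop, one standard way of exploiting a cheap reduced-order model inside MCMC. The forward models, likelihood, and parameter values below are hypothetical placeholders, not the dune-composites/GenEO implementation: the cheap surrogate screens proposals so the expensive fine solve is only paid for when a proposal survives the first stage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in the setting above, fine_solve would be a full
# dune-composites solve and coarse_solve its GenEO-based reduced-order model.
def fine_solve(theta):
    return np.sin(theta) + 0.10 * theta**2

def coarse_solve(theta):
    return np.sin(theta) + 0.12 * theta**2

def log_post(pred, data=0.8, noise=0.1):
    # Gaussian likelihood with a flat prior, for illustration only.
    return -0.5 * ((pred - data) / noise) ** 2

def delayed_acceptance_mh(n_steps=5000, step=0.3):
    theta, chain = 0.0, []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal()
        # Stage 1: screen the proposal with the cheap coarse model.
        a1 = np.exp(log_post(coarse_solve(prop)) - log_post(coarse_solve(theta)))
        if rng.random() < min(1.0, a1):
            # Stage 2: pay for fine solves only for proposals that survive stage 1.
            a2 = np.exp(log_post(fine_solve(prop)) + log_post(coarse_solve(theta))
                        - log_post(fine_solve(theta)) - log_post(coarse_solve(prop)))
            if rng.random() < min(1.0, a2):
                theta = prop
        chain.append(theta)
    return np.array(chain)

samples = delayed_acceptance_mh()
print(samples.mean(), samples.std())
```

    The second-stage correction keeps the chain's stationary distribution equal to the fine-model posterior, so the surrogate affects only efficiency, not correctness, which is why a high-quality coarse space translates directly into more effective samples per unit cost.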

    Book of Abstracts of the Sixth SIAM Workshop on Combinatorial Scientific Computing

    Book of Abstracts of CSC14, edited by Bora Uçar. The Sixth SIAM Workshop on Combinatorial Scientific Computing, CSC14, was organized at the Ecole Normale Supérieure de Lyon, France, from 21 to 23 July 2014. This two-and-a-half-day event marked the sixth in a series that started ten years ago in San Francisco, USA. The CSC14 workshop's focus was on combinatorial mathematics and algorithms in high performance computing, broadly interpreted. The workshop featured three invited talks, 27 contributed talks and eight poster presentations. All three invited talks focused on two fields of research: randomized algorithms for numerical linear algebra and network analysis. The contributed talks and the posters targeted modeling, analysis, bisection, clustering, and partitioning of graphs, applied in the context of networks, sparse matrix factorizations, iterative solvers, fast multipole methods, automatic differentiation, high-performance computing, and linear programming. The workshop was held at the premises of the LIP laboratory of ENS Lyon and was generously supported by the LABEX MILYON (ANR-10-LABX-0070, Université de Lyon, within the program ''Investissements d'Avenir'' ANR-11-IDEX-0007 operated by the French National Research Agency) and by SIAM.

    ISCR Annual Report: Fiscal Year 2004

    ICASE

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in the areas of (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving Langley facilities and scientists; and (4) computer science.

    Low-Density Parity-Check Coded High-order Modulation Schemes

    In this thesis, we investigate how to support reliable data transmission at high speeds in future communication systems, such as 5G/6G, WiFi, satellite, and optical communications. One of the most fundamental problems in these communication systems is how to reliably transmit information with a limited amount of resources, such as power and spectrum. To obtain high spectral efficiency, we use coded modulation (CM), such as bit-interleaved coded modulation (BICM) and delayed BICM (DBICM). To be specific, BICM is a pragmatic implementation of CM which has been widely adopted in both industry and academia. While BICM approaches CM capacity at high rates, the capacity gap between BICM and CM is still noticeable at lower code rates. To tackle this problem, DBICM, as a variation of BICM, introduces a delay module to create a dependency between multiple codewords, which enables us to exploit extrinsic information from the decoded delayed sub-blocks to improve the detection of the undelayed sub-blocks. Recent work shows that DBICM improves capacity over BICM. In addition, BICM and DBICM schemes protect each bit-channel differently, a property often referred to as unequal error protection (UEP). Therefore, bit mapping designs are important for constructing pragmatic BICM and DBICM. To provide reliable communication, we jointly design the bit mappings in DBICM and irregular low-density parity-check (LDPC) codes. For practical considerations, spatially coupled LDPC (SC-LDPC) codes are considered as well. Specifically, we investigate the joint design of multi-chain SC-LDPC codes and the BICM bit mapper. In addition, the design of SC-LDPC codes with improved decoding threshold performance and reduced rate loss is also investigated in this thesis. The main body of this thesis consists of three parts. In the first part, considering Gray-labeled square M-ary quadrature amplitude modulation (QAM) constellations, we investigate the optimal delay scheme with the largest spectral efficiency of DBICM for a fixed maximum number of delayed time slots and a given signal-to-noise ratio. Furthermore, we jointly optimize the degree distributions and channel assignments of LDPC codes using protograph-based extrinsic information transfer charts. In addition, we propose a constrained progressive-edge-growth-like algorithm to jointly construct LDPC codes and bit mappings for DBICM, taking the capacity of each bit-channel into account. Simulation results demonstrate that the designed LDPC-coded DBICM systems significantly outperform LDPC-coded BICM systems. In the second part, we propose a windowed decoding algorithm for DBICM, which uses the extrinsic information of both the decoded delayed and undelayed sub-blocks to improve the detection of all sub-blocks. We show that the proposed windowed decoding significantly outperforms the original decoding, demonstrating the effectiveness of the proposed decoding algorithm. In the third part, we apply multi-chain SC-LDPC codes to BICM. We investigate various connections for multi-chain SC-LDPC codes and bit mapping designs, and analyze the performance of the multi-chain SC-LDPC codes over the equivalent binary erasure channels via density evolution. Numerical results demonstrate the superiority of the proposed design over existing connected-chain ensembles and over single-chain ensembles with the existing bit mapping design.
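
    For readers unfamiliar with the density-evolution analysis mentioned above, the following minimal sketch applies the standard recursion for a (dv, dc)-regular LDPC ensemble over the binary erasure channel and locates its decoding threshold by bisection. It is a toy single-ensemble example with assumed parameters; the thesis tracks coupled multi-chain SC-LDPC ensembles together with the bit mapping, which this sketch does not model.

```python
def de_converges(eps, dv, dc, max_iter=10_000, tol=1e-12):
    """Density evolution for a (dv, dc)-regular LDPC ensemble on the BEC:
    x_{l+1} = eps * (1 - (1 - x_l)**(dc - 1))**(dv - 1)."""
    x = eps
    for _ in range(max_iter):
        x_next = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x_next < tol:                 # erasure probability driven to zero
            return True
        if abs(x_next - x) < 1e-15:      # stuck at a nonzero fixed point
            return False
        x = x_next
    return False

def bec_threshold(dv=3, dc=6, iters=40):
    """Bisection for the largest channel erasure probability that still decodes."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if de_converges(mid, dv, dc) else (lo, mid)
    return lo

print(bec_threshold())  # approx. 0.4294 for the (3,6)-regular ensemble
```

    Below the printed threshold the recursion collapses to zero; above it, it stalls at a nonzero fixed point, and it is this qualitative change that threshold analyses of SC-LDPC ensembles track, albeit with coupled rather than single-chain recursions.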