
    Modelling the forming mechanics of engineering fabrics using a mutually constrained pantographic beam and membrane mesh

    A method of combining 1-d and 2-d structural finite elements to capture the fundamental mechanical properties of engineering fabrics subject to finite strains is introduced. A mutually constrained pantographic beam and membrane mesh is presented, and a simple homogenisation theory is developed to relate the macro-scale properties of the mesh to the properties of the elements within it. The theory shows that each of the macro-scale properties of the mesh can be independently controlled. An investigation into the performance of the technique is conducted using tensile, cantilever bending and uniaxial bias extension shear simulations. The simulations are first used to verify the accuracy of the homogenisation theory and then to demonstrate the ability of the modelling approach to accurately predict the shear force, shear kinematics and out-of-plane wrinkling behaviour of engineering fabrics.
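
    For reference, the ideal pin-jointed-net kinematics commonly used to benchmark the shear angle in a uniaxial bias extension test can be written in a few lines. The following is a minimal illustrative sketch in Python, not code from the paper; the specimen dimensions and the helper name pjn_shear_angle are our own assumptions.

        import math

        def pjn_shear_angle(L0, W0, delta):
            # Shear angle (rad) in the central zone of a bias extension
            # specimen of gauge length L0 and width W0 (L0 >= 2 * W0),
            # assuming inextensible fibres pin-jointed at their crossovers.
            D = L0 - W0
            return math.pi / 2 - 2 * math.acos((D + delta) / (math.sqrt(2) * D))

        # Example: a 200 mm x 100 mm specimen stretched by 30 mm
        print(math.degrees(pjn_shear_angle(200.0, 100.0, 30.0)))  # ~43.7 deg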

    Constellation Shaping for WDM systems using 256QAM/1024QAM with Probabilistic Optimization

    In this paper, probabilistic shaping is numerically and experimentally investigated for increasing the transmission reach of a wavelength division multiplexed (WDM) optical communication system employing quadrature amplitude modulation (QAM). An optimized probability mass function (PMF) of the QAM symbols is first found from a modified Blahut-Arimoto algorithm for the optical channel. A turbo-coded bit-interleaved coded modulation system is then applied, which relies on many-to-one labeling to realize the desired PMF, thereby achieving shaping gain. Pilot symbols at a rate of at most 2% are used for synchronization and equalization, making it possible to receive input constellations as large as 1024QAM. The system is evaluated experimentally on a 10 GBaud, 5-channel WDM setup. The maximum system reach is increased w.r.t. standard 1024QAM by 20% at an input data rate of 4.65 bits/symbol and by up to 75% at 5.46 bits/symbol. It is shown that rate adaptation does not require changing the modulation format. The performance of the proposed 1024QAM shaped system is validated on all 5 channels of the WDM signal for selected distances and rates. Finally, it is shown via EXIT charts and BER analysis that iterative demapping, while generally beneficial to the system, is not a requirement for achieving the shaping gain.
    Comment: 10 pages, 12 figures, Journal of Lightwave Technology, 201
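
    The PMF optimization step lends itself to a compact sketch. The paper uses a modified Blahut-Arimoto algorithm; the stand-in below instead bisects over the Maxwell-Boltzmann family, a common choice in probabilistic-shaping work, to hit a target entropy on a square QAM constellation. All function names and parameters here are our own assumptions.

        import numpy as np

        def qam_points(M):
            # Square M-QAM constellation on the odd-integer grid
            m = int(np.sqrt(M))
            pam = np.arange(-(m - 1), m, 2, dtype=float)
            return (pam[:, None] + 1j * pam[None, :]).ravel()

        def mb_pmf(points, nu):
            # Maxwell-Boltzmann weights: low-energy points are more probable
            w = np.exp(-nu * np.abs(points) ** 2)
            return w / w.sum()

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        def pmf_for_rate(M, target_bits, lo=0.0, hi=1.0, iters=60):
            # Bisect the shaping parameter nu until the PMF entropy
            # matches the target rate in bits/symbol.
            pts = qam_points(M)
            for _ in range(iters):
                mid = 0.5 * (lo + hi)
                if entropy(mb_pmf(pts, mid)) > target_bits:
                    lo = mid          # still too uniform: concentrate more
                else:
                    hi = mid
            return mb_pmf(pts, 0.5 * (lo + hi))

        p = pmf_for_rate(1024, 5.46)   # the 5.46 bits/symbol point above
        print(entropy(p))              # ~5.46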

    Distributed signal processing using nested lattice codes

    Multi-Terminal Source Coding (MTSC) addresses the problem of compressing correlated sources without communication links among them. In this thesis, the constructive approach to this problem is considered in an algebraic framework, and a system design is provided that is applicable in a variety of settings. The Wyner-Ziv problem is investigated first: coding of an independent and identically distributed (i.i.d.) Gaussian source with side information available only at the decoder, in the form of a noisy version of the source to be encoded. Theoretical models are first established, and distortion-rate functions are derived from them. A few novel practical code implementations are then proposed using the strategy of multi-dimensional nested lattice/trellis coding. By investigating various lattices in the dimensions considered, an analysis is given of how lattice properties affect performance. Methods for choosing good sublattices in multiple dimensions are also proposed. By introducing scaling factors, the relationship between distortion and scaling factor is examined for various rates. The best high-dimensional lattice using our scale-rotate method can achieve a performance within 1 dB of the Wyner-Ziv limit at low rates, and random nested ensembles can achieve a 1.87 dB gap to the limit. Moreover, the code design is extended to incorporate distributed compressive sensing (DCS). A theoretical framework is proposed, and practical designs using nested lattices/trellises are presented for various scenarios. Using nested trellises, simulation shows a 3.42 dB gap from our derived bound for the DCS plus Wyner-Ziv framework.
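
    The nesting idea is easiest to see in one dimension. The sketch below is our own minimal illustration rather than the thesis design (which uses multi-dimensional lattices and trellises): the encoder quantises a Gaussian source with a fine scalar lattice and transmits only the coset index modulo a coarse sublattice; the decoder resolves the coset using its side information.

        import numpy as np

        rng = np.random.default_rng(0)
        q, N = 0.5, 8          # fine step and nesting ratio: rate = log2(N) = 3 bits
        n = 100_000
        x = rng.normal(0.0, 1.0, n)            # i.i.d. Gaussian source
        y = x + rng.normal(0.0, 0.1, n)        # decoder-only side information

        fine = np.round(x / q)                 # fine-lattice quantisation
        idx = np.mod(fine, N).astype(int)      # transmitted coset index, 0..N-1

        # Decoder: nearest fine-lattice point to y within the signalled coset
        k = np.round((y / q - idx) / N)
        xhat = (idx + N * k) * q

        print("rate:", np.log2(N), "bits/sample")
        print("MSE :", np.mean((x - xhat) ** 2))   # ~q**2 / 12 when decoding succeeds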

    Proceedings of the Fall 1995 Advanced Digital Communication Systems

    The Coordinated Science Laboratory was formerly known as the Control Systems Laboratory.

    Dry Textile Forming Simulations: A Benchmarking Exercise


    Channel Coding Techniques for a Multiple Track Digital Magnetic Recording System

    In magnetic recording, greater areal bit packing densities are achieved by increasing track density, through reducing the space between and the width of the recording tracks, and/or by reducing the wavelength of the recorded information. This leads to a requirement for higher precision tape transport mechanisms and dedicated coding circuitry. A TMS32010 digital signal processor is applied to a standard low-cost, low-precision, multiple-track, compact cassette tape recording system. Advanced signal processing and coding techniques are employed to maximise recording density and to compensate for the mechanical deficiencies of this system. Parallel software encoding/decoding algorithms have been developed for several Run-Length Limited modulation codes. The results for a peak detection system show that Bi-Phase L code can be reliably employed up to a data rate of 5 kbits/second/track.

    Development of a second system, employing a TMS32025 and sampling detection, permitted the use of adaptive equalisation to slim the readback pulse. Application of conventional read equalisation techniques, which oppose inter-symbol interference, resulted in a 30% increase in performance. Further investigation shows that greater linear recording densities can be achieved by employing Partial Response signalling and Maximum Likelihood detection. Partial Response signalling schemes use controlled inter-symbol interference to increase recording density at the expense of a multi-level readback waveform, which incurs an increased noise penalty. Maximum Likelihood Sequence detection employs soft decisions on the readback waveform to recover this loss. The associated modulation coding techniques required for optimised operation of such a system are discussed.

    Two-dimensional run-length-limited (d, ky) modulation codes provide a further means of increasing storage capacity in multi-track recording systems. For example, the code rate of a single-track run-length-limited code with constraints (1, 3), such as Miller code, can be increased by over 25% when using a 4-track two-dimensional code with the same d constraint and with the k constraint satisfied across a number of parallel channels. The k constraint along an individual track, kx, can be increased without loss of clock synchronisation, since the clocking information derived from frequent signal transitions can be sub-divided across a number, y, of parallel tracks in terms of a ky constraint. This permits more code words to be generated for a given (d, k) constraint in two dimensions than is possible in one dimension. This coding technique is furthered by the development of a reverse enumeration scheme based on the trellis description of the (d, ky) constraints. The application of a two-dimensional code to a high linear density system employing extended Class IV Partial Response signalling and Maximum Likelihood detection is proposed. Finally, additional coding constraints to improve spectral response and error performance are discussed.
    Hewlett Packard, Computer Peripherals Division (Bristol)
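
    The two-dimensional rate gain quoted above can be estimated by simply counting constrained sequences. The sketch below is our own rough dynamic-programming illustration, with loosely chosen boundary conditions, not the thesis's enumeration scheme: it compares the per-track rate of a single-track (1, 3) code with a 4-track scheme in which each track keeps d = 1 but the k = 3 constraint only has to hold jointly across tracks (a ky constraint). For these lengths the 4-track rate comes out roughly 25% higher, in line with the figure quoted above.

        from itertools import product
        import math

        def count_1d(n, d=1, k=3):
            # g = number of 0s since the last 1; a run of 0s must stay <= k,
            # and at least d zeros must separate consecutive 1s.
            states = {d: 1}                      # permissive start condition
            for _ in range(n):
                nxt = {}
                for g, c in states.items():
                    if g < k:                    # another 0 keeps the run legal
                        nxt[g + 1] = nxt.get(g + 1, 0) + c
                    if g >= d:                   # enough 0s have passed for a 1
                        nxt[0] = nxt.get(0, 0) + c
                states = nxt
            return sum(states.values())

        def count_2d(n, y=4, d=1, ky=3):
            # state = (previous column of track bits, columns since any 1)
            states = {((0,) * y, 0): 1}
            for _ in range(n):
                nxt = {}
                for (last, g), c in states.items():
                    for col in product((0, 1), repeat=y):
                        if any(a and b for a, b in zip(last, col)):
                            continue             # d = 1 within each track
                        g2 = 0 if any(col) else g + 1
                        if g2 > ky:
                            continue             # joint ky constraint violated
                        nxt[(col, g2)] = nxt.get((col, g2), 0) + c
                states = nxt
            return sum(states.values())

        n, y = 24, 4
        print("1-track (1,3) rate/track :", math.log2(count_1d(n)) / n)
        print("4-track (1,ky) rate/track:", math.log2(count_2d(n, y)) / (n * y))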

    Compressed Shaping: Concept and FPGA Demonstration

    Probabilistic shaping (PS) has been widely studied and applied to optical fiber communications. The encoder of PS expands the number of bit slots and controls the probability distribution of the channel input symbols. So far, not only studies of PS but also most work on optical fiber communications has assumed source uniformity (i.e. equal probability of marks and spaces). In general, however, the source information is nonuniform, unless bit-scrambling or other source coding techniques to balance the bit probability are applied. Interestingly, one can exploit the source nonuniformity to reduce the entropy of the channel input symbols with the PS encoder, which leads to a smaller required signal-to-noise ratio at a given input logic rate. This benefit is equivalent to a combination of data compression and PS, and thus we call this technique compressed shaping. In this work, we explain its theoretical background in detail and verify the concept by both numerical simulation and a field programmable gate array (FPGA) implementation of such a system. In particular, we find that compressed shaping can reduce power consumption in forward error correction decoding by up to 90% in nonuniform source cases. The additional hardware resources required for compressed shaping are not significant compared with forward error correction coding, and an error insertion test is successfully demonstrated with the FPGA.
    Comment: 10 pages, 12 figures
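
    The claimed benefit can be illustrated with a back-of-envelope calculation: a nonuniform source with bit probability p carries only Hb(p) information bits per logic bit, so at a fixed input logic rate R the channel only has to support R*Hb(p) bits/symbol. The Python sketch below is our own illustration rather than the paper's analysis; it reads the required SNR off the AWGN capacity log2(1 + SNR), with R and the values of p chosen arbitrarily.

        import math

        def hb(p):
            # Binary entropy in bits.
            if p in (0.0, 1.0):
                return 0.0
            return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

        def snr_req_db(rate):
            # SNR at which AWGN capacity log2(1 + SNR) equals `rate` bits/symbol.
            return 10 * math.log10(2 ** rate - 1)

        R = 4.0                          # input logic rate, bits/symbol
        for p in (0.5, 0.2, 0.1):        # probability of a '1' in the source
            eff = R * hb(p)              # information bits actually carried
            print(f"p={p}: source entropy {hb(p):.3f} b/b, "
                  f"required SNR {snr_req_db(eff):.2f} dB "
                  f"(uniform source: {snr_req_db(R):.2f} dB)")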