
    A General Framework for Transmission with Transceiver Distortion and Some Applications

    A general theoretical framework is presented for analyzing information transmission over Gaussian channels with memoryless transceiver distortion, encompassing various nonlinear distortion models including transmit-side clipping, receive-side analog-to-digital conversion, and others. The framework is based on the so-called generalized mutual information (GMI), and the analysis benefits in particular from the setup of a Gaussian codebook ensemble with nearest-neighbor decoding, for which it is established that the GMI takes a general form analogous to the channel capacity of undistorted Gaussian channels, with a reduced "effective" signal-to-noise ratio (SNR) that depends on the nominal SNR and the distortion model. When applied to specific distortion models, an array of results of engineering relevance is obtained. For channels with transmit-side distortion only, it is shown that a conventional approach, which treats the distorted signal as the sum of the original signal part and an uncorrelated distortion part, achieves the GMI. For channels with output quantization, closed-form expressions are obtained for the effective SNR and the GMI, and related optimization problems are formulated and solved for quantizer design. Finally, super-Nyquist sampling is analyzed within the general framework, and it is shown that sampling beyond the Nyquist rate increases the GMI at all SNRs. For example, with binary symmetric output quantization, information rates exceeding one bit per channel use are achievable by sampling the output at four times the Nyquist rate.
    Comment: 32 pages (including 4 figures, 5 tables, and auxiliary materials); submitted to IEEE Transactions on Communications
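
    As a rough illustration of the effective-SNR idea (a sketch, not the paper's derivation), a correlation-based decomposition can be estimated by Monte Carlo for a given memoryless distortion. In the Python sketch below, the clipping model, the threshold A, and the formulas GMI = (1/2) log2(1 + SNR_eff) with SNR_eff = Delta/(1 - Delta) are assumptions of this illustration; all names are hypothetical.

        import numpy as np

        # Monte Carlo sketch of an effective-SNR estimate for a memoryless
        # transmit-side clipping distortion over a real Gaussian channel,
        # assuming a Gaussian codebook and nearest-neighbor decoding.
        # The clipping level A and the formulas below are illustrative
        # assumptions, not taken from the paper.
        rng = np.random.default_rng(0)
        n = 1_000_000
        snr = 10.0                                  # nominal SNR (linear)
        A = 1.5                                     # hypothetical clip level (in signal std devs)

        x = rng.standard_normal(n) * np.sqrt(snr)   # signal, power = snr
        clip = A * np.sqrt(snr)
        y = np.clip(x, -clip, clip) + rng.standard_normal(n)  # unit-variance noise

        # Delta = |E[x y]|^2 / (E[x^2] E[y^2]); effective SNR = Delta / (1 - Delta)
        delta = np.mean(x * y) ** 2 / (np.mean(x ** 2) * np.mean(y ** 2))
        snr_eff = delta / (1.0 - delta)
        gmi = 0.5 * np.log2(1.0 + snr_eff)          # bits per channel use
        print(f"effective SNR ~ {snr_eff:.3f}, GMI ~ {gmi:.3f} bit/use")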

    Batch Size Influence on Performance of Graphic and Tensor Processing Units during Training and Inference Phases

    The impact of the maximum feasible batch size (chosen for the best runtime) on the performance of graphics processing units (GPUs) and tensor processing units (TPUs) during the training and inference phases is investigated. Numerous runs of the selected deep neural network (DNN) were performed on the standard MNIST and Fashion-MNIST datasets. A significant speedup was obtained even for very small-scale usage of Google TPUv2 units (8 cores only) compared with the quite powerful NVIDIA Tesla K80 GPU: up to 10x for the training stage (excluding overheads) and up to 2x for the prediction stage (both including and excluding overheads). The precise speedup values depend on the utilization level of the TPUv2 units and increase with the volume of data being processed, but for the datasets used in this work (MNIST and Fashion-MNIST, with 28x28 images) the speedup was observed for batch sizes >512 images in the training phase and >40,000 images in the prediction phase. Notably, these results were obtained without detriment to prediction accuracy or loss, which agreed between GPU and TPU runs to the 3rd significant digit for the MNIST dataset and to the 2nd significant digit for Fashion-MNIST.
    Comment: 10 pages, 7 figures, 2 tables
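
    For a concrete sense of the kind of batch-size sweep described above, the Python sketch below times one training epoch on MNIST with Keras for several batch sizes. The tiny model, the chosen batch sizes, and the timing granularity are illustrative assumptions; the paper's exact DNN, hardware setup, and overhead accounting are not reproduced.

        import time
        import tensorflow as tf

        # Sketch of a batch-size timing sweep on MNIST (illustrative model,
        # not the paper's DNN). Runs on whatever accelerator TF finds.
        (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
        x_train = x_train.astype("float32") / 255.0

        def make_model():
            return tf.keras.Sequential([
                tf.keras.layers.Flatten(input_shape=(28, 28)),
                tf.keras.layers.Dense(128, activation="relu"),
                tf.keras.layers.Dense(10, activation="softmax"),
            ])

        for batch_size in (64, 512, 4096):
            model = make_model()
            model.compile(optimizer="adam",
                          loss="sparse_categorical_crossentropy",
                          metrics=["accuracy"])
            t0 = time.perf_counter()
            model.fit(x_train, y_train, batch_size=batch_size,
                      epochs=1, verbose=0)
            print(f"batch={batch_size:5d}  epoch time: {time.perf_counter() - t0:.2f}s")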

    Ionic Kratzer bond theory and vibrational levels for achiral covalent bond HH

    A dihydrogen Hamiltonian reduces to the Sommerfeld-Kratzer potential, adapted for field quantization according to old quantum theory. The constants omega_e, k_e, and r_e needed for the H_2 vibrational system derive solely from the hydrogen mass m_H. For H_2, a first-principles ionic Kratzer oscillator reproduces the covalent bond energy to within 0.08% and all levels to within 0.02%, 30 times better than the Dunham oscillator and as accurate as early ab initio QM.
    Comment: 21 pages, 4 figures, 2 tables; at the institutional archive of Ghent University; references and early ab initio QM results added, typos removed
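
    For background (standard textbook material on the Kratzer, or Kratzer-Fues, potential, not the paper's ionic-bond derivation), the harmonic expansion around the equilibrium separation r_e ties D_e, k_e, and omega_e together; for H_2 the reduced mass is mu = m_H/2, which is one way the constants can trace back to m_H alone. In LaTeX:

        % Kratzer (Kratzer-Fues) potential and its harmonic limit near r_e;
        % textbook background, not the paper's derivation.
        V(r) = D_e \left(1 - \frac{r_e}{r}\right)^{2}
             \approx \frac{D_e}{r_e^{2}} (r - r_e)^{2} \quad (r \to r_e),
        \qquad k_e = \frac{2 D_e}{r_e^{2}},
        \qquad \omega_e = \sqrt{k_e / \mu}, \quad \mu = \frac{m_H}{2}.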