152 research outputs found

    Joint Quantizer Optimization based on Neural Quantizer for Sum-Product Decoder

    A low-precision analog-to-digital converter (ADC) is required in the front end of wideband digital communication systems in order to reduce power consumption. The goal of this paper is to present a novel joint optimization method for designing low-precision quantizers matched to the sum-product algorithm. The principal idea is to introduce a quantizer that combines a feed-forward neural network with a soft staircase function. Since the soft staircase function is differentiable and has non-zero gradient values everywhere, backpropagation and stochastic gradient descent can be used to train the feed-forward neural network inside the quantizer. The expected loss between the channel input and the decoder output is minimized in a supervised training phase. The experimental results indicate that the joint quantizer optimization method successfully provides an 8-level quantizer for a low-density parity-check (LDPC) code that incurs only a 0.1-dB performance loss compared to the unquantized system. (Comment: 6 pages)
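    The soft staircase idea can be illustrated with a minimal NumPy sketch (an illustrative reconstruction, not the paper's implementation): the hard L-level quantizer is replaced by a sum of shifted sigmoids, one per decision boundary, which is smooth and has non-zero gradient everywhere, so it can sit inside a backpropagation pipeline.

```python
import numpy as np

def soft_staircase(x, levels, temperature=0.1):
    """Differentiable surrogate for an L-level quantizer:
    a sum of shifted sigmoids, one per decision boundary."""
    boundaries = (levels[:-1] + levels[1:]) / 2.0   # midpoints between levels
    steps = np.diff(levels)                         # height of each step
    y = np.full_like(x, levels[0], dtype=float)
    for b, h in zip(boundaries, steps):
        z = np.clip((x - b) / temperature, -60.0, 60.0)  # avoid exp overflow
        y += h / (1.0 + np.exp(-z))
    return y

# 8 evenly spaced quantization levels on [-1, 1]
levels = np.linspace(-1.0, 1.0, 8)
x = np.linspace(-1.5, 1.5, 7)
q = soft_staircase(x, levels, temperature=0.01)
```

As the temperature tends to zero, the soft staircase approaches the hard quantizer while staying differentiable during training.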

    Theoretical Aspects of the SOM Algorithm

    The SOM algorithm is astonishing. On the one hand, it is very simple to write down and to simulate, and its practical properties are clear and easy to observe. On the other hand, its theoretical properties remain without proof in the general case, despite the great efforts of several authors. In this paper, we review the latest results and offer some conjectures for future work.
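    For readers unfamiliar with the algorithm under discussion, a minimal SOM training loop (a standard textbook formulation, not tied to this paper's analysis) looks like the following:

```python
import numpy as np

def train_som(data, grid_shape=(5, 5), n_iter=500, seed=0):
    """Minimal 2-D self-organizing map with a Gaussian neighborhood
    and linearly decaying learning rate and radius."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    weights = rng.random((rows * cols, data.shape[1]))
    # grid coordinates of each unit, used by the neighborhood function
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(n_iter):
        frac = t / n_iter
        lr = 0.5 * (1.0 - frac)                          # learning rate decay
        sigma = max(rows, cols) / 2.0 * (1.0 - frac) + 0.5  # radius decay
        x = data[rng.integers(len(data))]                # random sample
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))             # Gaussian neighborhood
        weights += lr * h[:, None] * (x - weights)       # pull units toward x
    return weights

rng = np.random.default_rng(1)
data = rng.random((200, 2))
W = train_som(data)
```

The convergence of exactly this kind of recursion, with decaying gain and shrinking neighborhood, is what remains unproven in the general case.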

    Ternary Neural Networks


    Switchable Precision Neural Networks

    Instantaneous, on-demand accuracy-efficiency trade-offs have recently been explored in the context of neural network slimming. In this paper, we propose a flexible quantization strategy, termed Switchable Precision neural Networks (SP-Nets), to train a shared network capable of operating at multiple quantization levels. At runtime, the network can adjust its precision on the fly according to instant memory, latency, power consumption and accuracy demands. For example, by constraining the network weights to 1 bit with switchable precision activations, our shared network spans from BinaryConnect to Binarized Neural Network, allowing dot products to be performed using only summations or bit operations. In addition, a self-distillation scheme is proposed to increase the performance of the quantized switches. We test our approach with three different quantizers and demonstrate the performance of SP-Nets against independently trained quantized models in classification accuracy on the Tiny ImageNet and ImageNet datasets using ResNet-18 and MobileNet architectures.
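    A toy version of a precision switch (illustrative only; SP-Nets train the shared weights jointly across precisions, which this sketch omits) is a quantizer whose bit-width is chosen at call time:

```python
import numpy as np

def quantize_weights(w, bits):
    """Switch a shared weight tensor to a given precision at runtime.
    bits=1 gives scaled binarization (BinaryConnect-style); higher
    bit-widths use a symmetric uniform grid."""
    if bits == 1:
        return np.sign(w) * np.mean(np.abs(w))  # binary weights with a scale
    n_pos = 2 ** (bits - 1) - 1                 # largest positive code
    scale = np.max(np.abs(w)) / n_pos
    return np.round(w / scale) * scale

w = np.array([-0.9, -0.2, 0.1, 0.7])
w_bin = quantize_weights(w, 1)   # low-power switch: 1-bit weights
w_q8 = quantize_weights(w, 8)    # high-accuracy switch: 8-bit weights
```

The same stored weights serve every switch; only the quantization applied at inference time changes.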

    SpeechTokenizer: Unified Speech Tokenizer for Speech Large Language Models

    Current speech large language models build upon discrete speech representations, which can be categorized into semantic tokens and acoustic tokens. However, existing speech tokens are not specifically designed for speech language modeling. To assess the suitability of speech tokens for building speech language models, we established the first benchmark, SLMTokBench. Our results indicate that neither semantic nor acoustic tokens are ideal for this purpose. Therefore, we propose SpeechTokenizer, a unified speech tokenizer for speech large language models. SpeechTokenizer adopts an encoder-decoder architecture with residual vector quantization (RVQ). Unifying semantic and acoustic tokens, SpeechTokenizer disentangles different aspects of speech information hierarchically across different RVQ layers. Furthermore, we construct a Unified Speech Language Model (USLM) leveraging SpeechTokenizer. Experiments show that SpeechTokenizer performs comparably to EnCodec in speech reconstruction and demonstrates strong performance on the SLMTokBench benchmark. Also, USLM outperforms VALL-E in zero-shot text-to-speech tasks. Code and models are available at https://github.com/ZhangXInFD/SpeechTokenizer/. (Comment: the SpeechTokenizer project page is https://0nutation.github.io/SpeechTokenizer.github.io)
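    Residual vector quantization, the backbone of this kind of speech codec, can be sketched in a few lines (a toy nearest-neighbor version with random codebooks, not the trained SpeechTokenizer model): each stage quantizes the residual left by the previous stages, so earlier layers carry coarse structure and later layers refine it.

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Residual vector quantization: stage i quantizes the residual
    x - (sum of codewords chosen by stages 1..i-1)."""
    residual = x.copy()
    codes, quantized = [], np.zeros_like(x)
    for cb in codebooks:
        d = ((cb - residual) ** 2).sum(axis=1)  # distance to each codeword
        idx = int(np.argmin(d))
        codes.append(idx)
        quantized += cb[idx]
        residual = x - quantized
    return codes, quantized

rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(16, 4)) for _ in range(3)]  # 3 RVQ stages
x = rng.normal(size=4)
codes, x_hat = rvq_encode(x, codebooks)
```

The decoder only needs the per-stage indices; the reconstruction is the sum of the selected codewords.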

    Task-Based Quantization with Application to MIMO Receivers

    Multiple-input multiple-output (MIMO) systems are required to communicate reliably at high spectral bands using a large number of antennas, while operating under strict power and cost constraints. In order to meet these constraints, future MIMO receivers are expected to operate with low-resolution quantizers, namely, to utilize a limited number of bits for representing their observed measurements, inherently distorting the digital representation of the acquired signals. The fact that MIMO receivers use their measurements for some task, such as symbol detection or channel estimation, other than recovering the underlying analog signal, indicates that the distortion induced by bit-constrained quantization can be reduced by designing the acquisition scheme in light of the system task, i.e., by task-based quantization. In this work, we survey the theory and design approaches to task-based quantization, presenting model-aware designs as well as data-driven implementations. Then, we show how one can implement a task-based bit-constrained MIMO receiver, presenting approaches ranging from conventional hybrid receiver architectures to structures exploiting the dynamic nature of metasurface antennas. This survey narrows the gap between theoretical task-based quantization and its implementation in practice, providing concrete algorithmic and hardware design principles for realizing task-based MIMO receivers.
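    The core idea, spending the bit budget on the task variable rather than on the raw measurements, can be illustrated with a toy linear task (an illustrative sketch under simplified assumptions, not the survey's optimal designs):

```python
import numpy as np

def uniform_quantize(v, bits, R):
    """Midrise uniform quantizer with 2**bits levels covering [-R, R]."""
    delta = 2.0 * R / 2 ** bits
    q = delta * (np.floor(v / delta) + 0.5)
    return np.clip(q, -R + delta / 2, R - delta / 2)

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 8)) / np.sqrt(8)   # task: estimate s = A @ x
x = rng.normal(size=8)                      # analog measurements
s = A @ x

# The same overall budget of 8 bits, spent two ways:
# task-ignorant: 8 one-bit ADCs on the raw measurements, task applied after
s_ignorant = A @ uniform_quantize(x, bits=1, R=3.0)
# task-based: combine in analog first, then 2 four-bit ADCs on the task variable
s_task = uniform_quantize(s, bits=4, R=3.0)
```

With the total number of bits held fixed, concentrating resolution on the low-dimensional task variable typically yields a much smaller task error than uniformly quantizing every antenna output.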

    Fuzzy Control Strategies in Human Operator and Sport Modeling

    The motivation behind mathematically modeling the human operator is to help explain the response characteristics of a complex dynamical system that includes a human manual controller. In this paper, we present two different fuzzy-logic strategies for human operator and sport modeling: fixed fuzzy-logic inference control and adaptive fuzzy-logic control, including neuro-fuzzy-fractal control. As an application of the presented fuzzy strategies, we present a fuzzy-control-based tennis simulator. (Comment: 25 pages, 6 figures)
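    As a flavor of what a fixed fuzzy-logic inference rule base looks like, here is a toy controller in the spirit of the tennis simulator (the membership functions, rules, and variable names are hypothetical, not the paper's):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def fuzzy_swing_speed(ball_distance):
    """Toy fixed fuzzy controller: map ball distance (m, assumed in [0, 10])
    to racket swing speed (m/s) via three rules and weighted-average
    defuzzification."""
    # rule firing strengths: ball is NEAR / MID / FAR
    w = np.array([
        tri(ball_distance, -1.0, 0.0, 5.0),   # NEAR  -> swing slowly
        tri(ball_distance, 0.0, 5.0, 10.0),   # MID   -> swing moderately
        tri(ball_distance, 5.0, 10.0, 11.0),  # FAR   -> swing fast
    ])
    speeds = np.array([2.0, 6.0, 12.0])       # rule consequents (m/s)
    return float((w * speeds).sum() / w.sum())

v = fuzzy_swing_speed(2.5)
```

The "fixed" strategy in the paper's terminology corresponds to keeping such a rule base constant, while the adaptive variants tune the memberships or consequents online.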

    A Survey on Methods and Theories of Quantized Neural Networks

    Deep neural networks are the state-of-the-art methods for many real-world tasks, such as computer vision, natural language processing and speech recognition. For all their popularity, deep neural networks are also criticized for consuming a lot of memory and draining device battery life during training and inference. This makes it hard to deploy these models on mobile or embedded devices, which have tight resource constraints. Quantization is recognized as one of the most effective approaches to satisfying the extreme memory requirements that deep neural network models demand. Instead of adopting a 32-bit floating-point format to represent weights, quantized representations store weights in more compact formats such as integers or even binary numbers. Despite a possible degradation in predictive performance, quantization provides a potential solution to greatly reduce model size and energy consumption. In this survey, we give a thorough review of different aspects of quantized neural networks. Current challenges and trends in quantized neural networks are also discussed. (Comment: 17 pages, 8 figures)
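    The memory argument is concrete. A sketch of the affine integer quantization scheme that surveys of this kind describe (illustrative, not any specific framework's implementation) stores each weight in one byte instead of four:

```python
import numpy as np

def to_int8(w):
    """Affine (asymmetric) quantization of float32 weights to 8-bit codes:
    w_hat = (q - zero_point) * scale."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0
    zero_point = np.round(-lo / scale)           # integer code that maps to 0.0
    q = np.clip(np.round(w / scale + zero_point), 0, 255).astype(np.uint8)
    return q, scale, zero_point

def from_int8(q, scale, zero_point):
    """Dequantize back to floats for (or during) inference."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, s, z = to_int8(w)
w_hat = from_int8(q, s, z)
```

The codes take a quarter of the memory of the float32 weights, at the price of a reconstruction error bounded by roughly one quantization step.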

    Recursive Network Estimation From Binary-Valued Observation Data

    This paper studies the problem of recursively estimating the weighted adjacency matrix of a network from a temporal sequence of binary-valued observations. The observation sequence is generated by nonlinear networked dynamics in which agents exchange and display binary outputs. Sufficient conditions are given to ensure stability of the observation sequence and identifiability of the system parameters; it is shown that both can be guaranteed under the assumption of independent standard Gaussian disturbances. Via a maximum likelihood approach, the estimation problem is transformed into an optimization problem, and it is verified that its solution is the true parameter vector under the independent standard Gaussian assumption. A recursive algorithm for the estimation problem is then proposed based on stochastic approximation techniques; its strong consistency is established and its convergence rate analyzed. Finally, numerical simulations are conducted to illustrate the results and to show that the proposed algorithm is insensitive to small unmodeled factors.
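    A simplified stochastic-approximation recursion of the kind described (a sketch under the independent standard Gaussian disturbance assumption, not the paper's exact algorithm) updates the parameter estimate from each binary observation through the Gaussian CDF of the predicted output:

```python
import numpy as np
from math import erf, sqrt

def Phi(t):
    """Standard Gaussian CDF, so P(y = 1) = Phi(phi @ theta)."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

rng = np.random.default_rng(0)
theta_true = np.array([1.0, -0.5, 0.25])  # unknown parameter vector
theta = np.zeros(3)                        # recursive estimate

for k in range(1, 20001):
    phi = rng.normal(size=3)                         # regressor at step k
    y = float(phi @ theta_true + rng.normal() > 0.0)  # binary-valued observation
    a = min(1.0, 8.0 / k)                             # decaying SA step size
    # stochastic-approximation correction: prediction error times regressor
    theta += a * (y - Phi(phi @ theta)) * phi

err = np.linalg.norm(theta - theta_true)
```

The correction term has zero mean exactly at the true parameter, which is what drives the recursion toward it; the paper's analysis establishes when such recursions are strongly consistent.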