Joint Quantizer Optimization based on Neural Quantizer for Sum-Product Decoder
A low-precision analog-to-digital converter (ADC) is required in the front end
of wideband digital communication systems in order to reduce power consumption.
The goal of this paper is to present a novel joint quantizer optimization
method for designing low-precision quantizers matched to the sum-product
algorithm. The principal idea is to introduce a quantizer
that includes a feed-forward neural network and the soft staircase function.
Since the soft staircase function is differentiable and has non-zero gradient
values everywhere, we can exploit backpropagation and a stochastic gradient
descent method to train the feed-forward neural network in the quantizer. The
expected loss between the channel input and the decoder output is minimized
in a supervised training phase. The experimental results indicate that the
joint quantizer optimization method successfully provides an 8-level quantizer
for a low-density parity-check (LDPC) code that achieves only a 0.1-dB
performance loss compared to the unquantized system. Comment: 6 pages
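As a rough illustration of the soft staircase idea described above, the following sketch builds a differentiable staircase from a sum of shifted sigmoids. The parameter names (`temperature`, `delta`) and the exact sigmoid construction are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_staircase(x, levels=8, delta=1.0, temperature=10.0):
    """Differentiable "soft staircase" with `levels` steps of width `delta`.

    A sum of shifted sigmoids approximates a uniform quantizer while keeping
    a non-zero gradient everywhere, so backpropagation can train layers
    placed before it (illustrative construction; details may differ from
    the paper's).
    """
    thresholds = delta * (np.arange(levels - 1) - (levels - 2) / 2.0)
    y = np.zeros_like(x, dtype=float)
    for t in thresholds:
        y += delta / (1.0 + np.exp(-temperature * (x - t)))  # one soft step
    return y - (levels - 1) * delta / 2.0  # centre the output around zero
```

As `temperature` grows, the soft staircase approaches a hard uniform quantizer; keeping it finite is what makes stochastic gradient descent applicable.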
Quantized H-Infinity control for nonlinear stochastic time-delay systems with missing measurements
This is the post-print version of the article; the official published version can be accessed from the link below. Copyright © 2012 IEEE.
In this paper, the quantized H∞ control problem is investigated for a class of nonlinear stochastic time-delay network-based systems with probabilistic data missing. A nonlinear stochastic system with state delays is employed to model the networked control systems, where the measured output and the input signals are quantized by two logarithmic quantizers, respectively. Moreover, the data-missing phenomena are modeled by introducing a diagonal matrix composed of Bernoulli-distributed stochastic variables taking values of 1 and 0, which describes how the data from different sensors may be lost with different missing probabilities. Subsequently, a sufficient condition is first derived by virtue of the method of sector-bounded uncertainties, which guarantees that the closed-loop system is stochastically stable and the controlled output satisfies the H∞ performance constraint for all nonzero exogenous disturbances under the zero-initial condition. Then, the sufficient condition is decoupled into several inequalities for the convenience of practical verification. Based on these, quantized H∞ controllers are designed for some special classes of nonlinear stochastic time-delay systems by using the Matlab linear matrix inequality toolbox. Finally, a numerical simulation example is exploited to show the effectiveness and applicability of the derived results.
This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the U.K. under Grant GR/S27658/01, the Leverhulme Trust of the U.K., the Royal Society of the U.K., the National Natural Science Foundation of China under Grants 61028008, 61134009, 61104125, 60974030, and 61074016, and the Alexander von Humboldt Foundation of Germany.
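A logarithmic quantizer of the kind used in this line of work can be sketched as follows; the sector bound it satisfies is exactly what the sector-bounded-uncertainty method exploits. Parameter values are illustrative.

```python
import math

def log_quantizer(v, u0=1.0, rho=0.5):
    """Logarithmic quantizer with levels {±u0 * rho**i : i integer} ∪ {0}.

    The quantization error satisfies the sector bound
    |q(v) - v| <= delta * |v|, with delta = (1 - rho) / (1 + rho),
    which lets the quantizer be treated as a sector-bounded uncertainty
    in stability analysis.
    """
    if v == 0.0:
        return 0.0
    delta = (1.0 - rho) / (1.0 + rho)
    s = math.copysign(1.0, v)
    a = abs(v)
    # pick the unique level whose sector-shaped bin contains |v|
    i = math.floor(math.log(a * (1.0 - delta) / u0) / math.log(rho))
    return s * u0 * rho ** i
```

A larger density parameter `rho` means finer levels (smaller `delta`) at the cost of more distinct quantizer outputs near any given magnitude.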
Theoretical Aspects of the SOM Algorithm
The SOM algorithm is astonishing. On the one hand, it is very simple to
write down and to simulate, and its practical properties are clear and easy to
observe. On the other hand, its theoretical properties still remain
without proof in the general case, despite the great efforts of several
authors. In this paper, we review the latest results and offer some
conjectures for future work.
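For reference, the algorithm whose theory the paper discusses is easy to state: one online SOM update can be sketched as below (the decay schedules and parameter values are illustrative choices, not part of the algorithm's definition).

```python
import numpy as np

def som_step(weights, x, t, grid, sigma0=1.0, eta0=0.5, tau=100.0):
    """One online update of the Kohonen SOM algorithm.

    weights: (n_units, dim) codebook vectors; grid: (n_units, 2) positions
    of the units on the map. Returns the updated codebook.
    """
    eta = eta0 * np.exp(-t / tau)         # learning-rate decay (illustrative)
    sigma = sigma0 * np.exp(-t / tau)     # neighbourhood shrinkage
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
    d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)          # map-space distances
    h = np.exp(-d2 / (2.0 * sigma ** 2))                  # neighbourhood kernel
    return weights + eta * h[:, None] * (x - weights)
```

The theoretical difficulty the paper alludes to lies precisely in this coupling: each unit's update depends on which unit wins, so the process is a Markov chain with a state-dependent partition of the input space.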
Switchable Precision Neural Networks
Instantaneous, on-demand accuracy-efficiency trade-offs have recently been
explored in the context of neural network slimming. In this paper, we propose
a flexible quantization strategy, termed Switchable Precision neural Networks
(SP-Nets), to train a shared network capable of operating at multiple
quantization levels. At runtime, the network can adjust its precision on the
fly according to instant memory, latency, power consumption and accuracy
demands. For example, by constraining the network weights to 1 bit with
switchable-precision activations, our shared network spans from BinaryConnect
to Binarized Neural Networks, allowing dot-products to be performed using only
summations or bit operations. In addition, a self-distillation scheme is
proposed to increase the performance of the quantized switches. We test our
approach with three different quantizers and demonstrate the performance of
SP-Nets against independently trained quantized models in classification
accuracy on the Tiny ImageNet and ImageNet datasets using ResNet-18 and
MobileNet architectures.
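The core mechanic of switchable precision, quantizing one shared full-precision tensor to whichever bit-width the runtime requests, can be sketched as below. This is our simplified stand-in, not the paper's exact quantizers.

```python
import numpy as np

def quantize_weights(w, bits):
    """Quantize a shared weight tensor to a runtime-selectable bit-width.

    bits == 1 uses sign binarization with a mean-magnitude scale
    (BinaryConnect-style); otherwise a symmetric uniform quantizer over
    [-max|w|, max|w|] is applied. Illustrative scheme only.
    """
    if bits == 1:
        return np.sign(w) * np.mean(np.abs(w))
    n = 2 ** (bits - 1) - 1            # symmetric integer range [-n, n]
    scale = np.max(np.abs(w)) / n
    return np.round(w / scale) * scale
```

At inference time the same tensor `w` can thus serve a 1-bit switch on a tight power budget and an 8-bit switch when accuracy matters, which is the trade-off the abstract describes.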
SpeechTokenizer: Unified Speech Tokenizer for Speech Large Language Models
Current speech large language models build upon discrete speech
representations, which can be categorized into semantic tokens and acoustic
tokens. However, existing speech tokens are not specifically designed for
speech language modeling. To assess the suitability of speech tokens for
building speech language models, we established the first benchmark,
SLMTokBench. Our results indicate that neither semantic nor acoustic tokens are
ideal for this purpose. Therefore, we propose SpeechTokenizer, a unified speech
tokenizer for speech large language models. SpeechTokenizer adopts the
Encoder-Decoder architecture with residual vector quantization (RVQ). Unifying
semantic and acoustic tokens, SpeechTokenizer disentangles different aspects of
speech information hierarchically across different RVQ layers. Furthermore, we
construct a Unified Speech Language Model (USLM) leveraging SpeechTokenizer.
Experiments show that SpeechTokenizer performs comparably to EnCodec in speech
reconstruction and demonstrates strong performance on the SLMTokBench
benchmark. Also, USLM outperforms VALL-E in zero-shot Text-to-Speech tasks.
Code and models are available at
https://github.com/ZhangXInFD/SpeechTokenizer/. Comment: The SpeechTokenizer
project page is https://0nutation.github.io/SpeechTokenizer.github.io
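Residual vector quantization, the mechanism the abstract builds on, encodes a vector in stages, with each codebook quantizing the residual left by the previous one; early stages capture coarse (here, semantic) structure and later stages refine it. A minimal sketch, with randomly built codebooks standing in for trained ones:

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Residual vector quantization of one vector x.

    codebooks: list of (K, dim) arrays, one per RVQ stage. Returns the
    per-stage code indices and the cumulative quantized reconstruction.
    """
    residual = x.copy()
    codes, quantized = [], np.zeros_like(x)
    for cb in codebooks:
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        codes.append(idx)            # index transmitted for this stage
        quantized += cb[idx]         # accumulate the reconstruction
        residual -= cb[idx]          # next stage sees what is left over
    return codes, quantized
```

Dropping later stages degrades reconstruction gracefully, which is why RVQ layers lend themselves to the hierarchical disentanglement the paper describes.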
Task-Based Quantization with Application to MIMO Receivers
Multiple-input multiple-output (MIMO) systems are required to communicate
reliably at high spectral bands using a large number of antennas, while
operating under strict power and cost constraints. In order to meet these
constraints, future MIMO receivers are expected to operate with low resolution
quantizers, namely, utilize a limited number of bits for representing their
observed measurements, inherently distorting the digital representation of the
acquired signals. The fact that MIMO receivers use their measurements for some
task, such as symbol detection and channel estimation, other than recovering
the underlying analog signal, indicates that the distortion induced by
bit-constrained quantization can be reduced by designing the acquisition scheme
in light of the system task, i.e., by task-based quantization. In this
work, we survey the theory and design approaches to task-based quantization,
presenting model-aware designs as well as data-driven implementations. Then, we
show how one can implement a task-based bit-constrained MIMO receiver,
presenting approaches ranging from conventional hybrid receiver architectures
to structures exploiting the dynamic nature of metasurface antennas. This
survey narrows the gap between theoretical task-based quantization and its
implementation in practice, providing concrete algorithmic and hardware design
principles for realizing task-based MIMO receivers.
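The acquisition chain the survey describes, analog pre-combining followed by coarse uniform ADCs, can be sketched as follows. The combining matrix `A`, the mid-rise uniform ADC model, and the parameter names are our illustrative assumptions.

```python
import numpy as np

def task_based_quantize(y, A, bits=3, dyn_range=3.0):
    """Task-based acquisition: analog combining, then low-resolution ADCs.

    Instead of quantizing the raw measurements y, an analog matrix A first
    projects them onto the (typically lower-dimensional) quantity the task
    needs, and only that projection passes through `bits`-bit uniform ADCs.
    """
    z = A @ y                                   # analog pre-combining
    levels = 2 ** bits
    step = 2.0 * dyn_range / levels
    # mid-rise uniform quantizer, clipped to the ADC dynamic range
    zc = np.clip(z, -dyn_range, dyn_range - 1e-9)
    return step * (np.floor(zc / step) + 0.5)
```

The design question the survey addresses is how to choose `A` (model-aware or learned) so that the task estimate computed from the quantized output, rather than the raw signal reconstruction, suffers minimal distortion.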
Fuzzy Control Strategies in Human Operator and Sport Modeling
The motivation behind mathematically modeling the human operator is to help
explain the response characteristics of complex dynamical systems that include
a human manual controller. In this paper, we present two different fuzzy
logic strategies for human operator and sport modeling: fixed fuzzy-logic
inference control and adaptive fuzzy-logic control, including
neuro-fuzzy-fractal control. As an application of the presented fuzzy
strategies, we describe a fuzzy-control-based tennis simulator. Comment: 25 pages, 6 figures
A Survey on Methods and Theories of Quantized Neural Networks
Deep neural networks are the state-of-the-art methods for many real-world
tasks, such as computer vision, natural language processing and speech
recognition. For all their popularity, deep neural networks are also criticized
for consuming large amounts of memory and draining the battery life of devices
during training and inference. This makes it hard to deploy these models on mobile or
embedded devices which have tight resource constraints. Quantization is
recognized as one of the most effective approaches to satisfy the extreme
memory requirements of deep neural network models. Instead of adopting the
32-bit floating-point format to represent weights, quantized representations
store weights using more compact formats such as integers or even binary
numbers. Despite a possible degradation in predictive performance, quantization
provides a potential solution to greatly reduce the model size and the energy
consumption. In this survey, we give a thorough review of different aspects of
quantized neural networks. Current challenges and trends of quantized neural
networks are also discussed. Comment: 17 pages, 8 figures
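The generic storage scheme such surveys cover, integer weights plus a per-tensor float scale, can be sketched as follows (a symmetric per-tensor scheme chosen for simplicity; surveyed methods also use per-channel and asymmetric variants).

```python
import numpy as np

def quantize_to_int8(w):
    """Store float32 weights as int8 codes plus one float scale per tensor.

    Memory drops roughly 4x versus float32, at the cost of a rounding
    error bounded by half the quantization step.
    """
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for computation."""
    return q.astype(np.float32) * scale
```

Binary weights push the same idea to the extreme: a single sign bit per weight, with the accuracy degradation versus compression trade-off the survey reviews.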
Recursive Network Estimation From Binary-Valued Observation Data
This paper studies the problem of recursively estimating the weighted
adjacency matrix of a network out of a temporal sequence of binary-valued
observations. The observation sequence is generated from nonlinear networked
dynamics in which agents exchange and display binary outputs. Sufficient
conditions are given to ensure stability of the observation sequence and
identifiability of the system parameters. It is shown that stability and
identifiability can be guaranteed under the assumption of independent standard
Gaussian disturbances. Via a maximum likelihood approach, the estimation
problem is transformed into an optimization problem, and it is verified that
its solution is the true parameter vector under the independent standard
Gaussian assumption. A recursive algorithm for the estimation problem is then
proposed based on stochastic approximation techniques. Its strong consistency
is established and its convergence rate is analyzed. Finally, numerical simulations
are conducted to illustrate the results and to show that the proposed algorithm
is insensitive to small unmodeled factors.
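The flavor of recursion the abstract refers to can be illustrated on a stripped-down probit model; the model below (thresholded linear observation with standard Gaussian noise) and the gain schedule are our stand-ins for the paper's networked dynamics, not its actual algorithm.

```python
import math
import numpy as np

def phi(z):
    """Standard Gaussian CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def recursive_binary_estimate(theta0, stream, gain=lambda t: 1.0 / (t + 1)):
    """Stochastic-approximation estimate of theta from binary observations.

    Illustrative model: y_t = 1{x_t . theta + w_t > 0} with w_t ~ N(0, 1),
    so E[y_t | x_t] = Phi(x_t . theta). The recursion nudges theta so the
    predicted firing probability tracks the observed binary outputs.
    """
    theta = np.array(theta0, dtype=float)
    for t, (x, y) in enumerate(stream):
        pred = phi(float(np.dot(x, theta)))
        theta += gain(t) * x * (y - pred)      # Robbins-Monro update
    return theta
```

The decreasing gains are what buy consistency: early observations move the estimate quickly, while later ones average out the Bernoulli noise in the binary outputs.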