Cumulant Generating Function of Codeword Lengths in Variable-Length Lossy Compression Allowing Positive Excess Distortion Probability
This paper considers the problem of variable-length lossy source coding. The performance criteria are the excess distortion probability and the cumulant generating function of codeword lengths. We derive a non-asymptotic fundamental limit of the cumulant generating function of codeword lengths when a positive excess distortion probability is allowed. It is shown that the achievability and converse bounds are characterized by a quantity based on the Rényi entropy. In the proof of the achievability result, an explicit code construction is provided. Further, we investigate an asymptotic single-letter characterization of the fundamental limit for a stationary memoryless source.
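For orientation, here is a generic sketch of the two quantities in standard notation; the symbols below are illustrative assumptions, not necessarily the paper's exact definitions. For a source X, an encoder f, and a codeword-length function ℓ, the cumulant generating function of codeword lengths and the Rényi entropy of order α are commonly written as

```latex
% Illustrative textbook forms (assumed notation, not the paper's own).
% Cumulant generating function of the codeword length \ell(f(X)), t > 0:
\Lambda(t) = \log \mathbb{E}\!\left[ e^{\, t\, \ell(f(X))} \right]
% Renyi entropy of order \alpha (\alpha > 0, \alpha \neq 1); it recovers
% the Shannon entropy in the limit \alpha \to 1:
H_{\alpha}(X) = \frac{1}{1-\alpha} \log \sum_{x} P_X(x)^{\alpha}
```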
Complexity and second moment of the mathematical theory of communication
The performance of an error correcting code is evaluated by its block error probability, code rate, and encoding and decoding complexity. The performance of a series of codes is evaluated by whether, as the block lengths approach infinity, their block error probabilities decay to zero, their code rates converge to channel capacity, and their growth in complexities stays under control.
Over any discrete memoryless channel, I build codes such that: for one, their block error probabilities and code rates scale like random codes'; and for two, their encoding and decoding complexities scale like polar codes'. Quantitatively, for any constants π, ρ > 0 such that π + 2ρ < 1, I construct a series of error correcting codes with block length N approaching infinity, block error probability exp(−N^π), code rate N^{−ρ} less than the channel capacity, and encoding and decoding complexity O(N log N) per code block.
Over any discrete memoryless channel, I also build codes such that: for one, they achieve channel capacity rapidly; and for two, their encoding and decoding complexities are lower than those of all known codes over non-BEC channels. Quantitatively, for any constants τ, ρ > 0 such that 2ρ < 1, I construct a series of error correcting codes with block length N approaching infinity, block error probability exp(−(log N)^τ), code rate N^{−ρ} less than the channel capacity, and encoding and decoding complexity O(N log(log N)) per code block.
The two aforementioned results are built upon two pillars: a versatile framework that generates codes on the basis of channel polarization, and a calculus–probability machinery that evaluates the performance of codes.
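As a concrete point of reference, below is a minimal sketch of the standard Arıkan polarization transform that such frameworks build on; it is a generic textbook illustration, not the thesis's actual construction, which layers further machinery on top of this recursion.

```python
def polar_transform(u):
    """Compute x = u F^{(tensor) n} for the Arikan kernel F = [[1,0],[1,1]],
    where len(u) = 2^n, using the recursion
        u F^{(tensor) n} = ((u_top XOR u_bot) F^{(tensor) n-1},
                            u_bot F^{(tensor) n-1}),
    so the work satisfies T(N) = 2 T(N/2) + O(N) = O(N log N) per block.
    Generic sketch under assumed conventions, not the thesis's exact encoder.
    """
    n = len(u)
    if n == 1:
        return list(u)
    half = n // 2
    top, bot = u[:half], u[half:]
    combined = [a ^ b for a, b in zip(top, bot)]  # elementwise u_top XOR u_bot
    return polar_transform(combined) + polar_transform(bot)

# Example: encode one 8-bit block (frozen-bit selection omitted for brevity).
print(polar_transform([1, 0, 1, 1, 0, 0, 1, 0]))
```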
The framework that generates codes and the machinery that evaluates codes can be extended to many other scenarios in network information theory. To name a few: lossless compression with side information, lossy compression, the Slepian–Wolf problem, the Wyner–Ziv problem, the multiple access channel, the wiretap channel of type I, and the broadcast channel. In each scenario, the adapted notions of block error probability and code rate approach their limits at the same paces as specified above.
Entropy in Image Analysis II
Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for those applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.
Information Theory and Machine Learning
The recent successes of machine learning, especially regarding systems based on deep neural networks, have encouraged further research activities and raised a new set of challenges in understanding and designing complex machine learning algorithms. New applications require learning algorithms to be distributed, have transferable learning results, use computation resources efficiently, converge quickly in online settings, have performance guarantees, satisfy fairness or privacy constraints, incorporate domain knowledge on model structures, etc. A new wave of developments in statistical learning theory and information theory has set out to address these challenges. This Special Issue, "Machine Learning and Information Theory", aims to collect recent results in this direction, reflecting a diverse spectrum of visions and efforts to extend conventional theories and develop analysis tools for these complex machine learning systems.
Function and Dissipation in Finite State Automata - From Computing to Intelligence and Back
Society has benefited from the technological revolution and the tremendous growth in computing powered by Moore's law. However, we are fast approaching the ultimate physical limits in terms of both device sizes and the associated energy dissipation. It is important to characterize these limits in a physically grounded and implementation-agnostic manner, in order to capture the fundamental energy dissipation costs associated with performing computing operations with classical information in nano-scale quantum systems. It is also necessary to identify and understand the effect of quantum indistinguishability, noise, and device variability on these dissipation limits. Identifying these parameters is crucial to designing more energy-efficient computing systems moving forward. In this dissertation, we will provide a physical description of finite state automata, an abstract tool commonly used to describe computational operations, under the Referential Approach to physical information theory. We will derive the fundamental limits of dissipation associated with a state transition in deterministic and probabilistic finite state automata, and propose efficacy measures that capture how well a particular state transition has been physically realized. We will use these dissipation bounds to understand the limits of dissipation during learning in the training and testing phases of feed-forward and recurrent neural networks. This study of dissipation in neural networks provides key hints at how dissipation is fundamentally intertwined with learning in physical systems. These ideas connecting energy dissipation, entropy, and physical information provide the perfect toolkit for critically analyzing the very foundations of computing and our computational approaches to artificial intelligence. In the second part of this dissertation, we derive the non-equilibrium reliable low-dissipation condition for predictive inference in self-organized systems. This brings together the central ideas of homeostasis, prediction, and energy efficiency under a single non-equilibrium constraint. The work is further extended to study the relationship between adaptive learning and the reliable high-dissipation conditions, and the exploitation-exploration trade-offs in active agents. Using these results, we will discuss the differences between observer-dependent and observer-independent computing, and propose a novel descriptive framework for intelligence in physical systems based on thermodynamics. This framework, called thermodynamic intelligence, will be used to guide the engineering methodologies (devices and architectures) required to implement these descriptions.
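For context, a standard Landauer-type bound from this area, stated as an illustration under assumed notation rather than as the dissertation's exact result: for a computational step that takes the automaton's state random variable from S_t to S_{t+1} in contact with a heat bath at temperature T, the average dissipated work is bounded below by the entropy decrease of the state.

```latex
% Generalized Landauer bound (illustrative; notation assumed, not the
% dissertation's exact statement). k_B is Boltzmann's constant and
% H(\cdot) the Shannon entropy of the automaton's state, in nats.
\langle W_{\mathrm{diss}} \rangle \;\ge\; k_B T \left[ H(S_t) - H(S_{t+1}) \right]
% Erasing one bit (H drops by \ln 2 nats) recovers Landauer's k_B T \ln 2.
```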
STK/WST 795 Research Reports
These documents contain the honours research reports for each year for the Department of Statistics. Honours Research Reports - University of Pretoria, 20XX. Statistics. BSc (Hons) Mathematical Statistics, BCom (Hons) Statistics, BCom (Hons) Mathematical Statistics. Unrestricted.