1,046 research outputs found
On Large Deviation Property of Recurrence Times
We extend the study by Ornstein and Weiss on the asymptotic behavior of the
normalized version of recurrence times and establish the large deviation
property for a certain class of mixing processes. Further, an estimator for
entropy based on recurrence times is proposed for which large deviation
behavior is proved for stationary and ergodic sources satisfying similar mixing
conditions.
Comment: 5 pages, International Symposium on Information Theory 201
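To make the recurrence-time quantity concrete, here is a minimal sketch (an illustration, not the estimator proposed in the paper) of the Ornstein-Weiss limit that underlies it: for a stationary and ergodic source with entropy rate h, (1/n) log2 R_n converges to h almost surely, where R_n is the first return time of the initial n-block. The Bernoulli source, block length, and sequence length below are illustrative assumptions.

import math
import random

def first_return_time(x, n):
    # Smallest r >= 1 with x[r:r+n] == x[0:n]; None if the block never recurs.
    block = x[:n]
    for r in range(1, len(x) - n + 1):
        if x[r:r + n] == block:
            return r
    return None

random.seed(0)
p = 0.3                                           # Bernoulli(p) source (assumed)
x = [1 if random.random() < p else 0 for _ in range(200000)]
n = 12                                            # block length (illustrative)
r = first_return_time(x, n)
if r is not None:
    print("entropy estimate:", math.log2(r) / n)  # close to H(0.3) ~ 0.88 bits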
On Match Lengths, Zero Entropy and Large Deviations - with Application to Sliding Window Lempel-Ziv Algorithm
The Sliding Window Lempel-Ziv (SWLZ) algorithm that makes use of recurrence
times and match lengths has been studied from various perspectives in
information theory literature. In this paper, we undertake a finer study of
these quantities under two different scenarios: i) \emph{zero entropy} sources
that are characterized by strong long-term memory, and ii) processes with
weak memory as described through various mixing conditions.
For zero entropy sources, a general statement on match length is obtained. It
is used in the proof of almost sure optimality of the Fixed Shift Variant of
Lempel-Ziv (FSLZ) and SWLZ algorithms given in the literature. Through an example
of stationary and ergodic processes generated by an irrational rotation we
establish that for a window of size $n_w$, a compression ratio of order
$\frac{\log n_w}{n_w^{\alpha}}$, where $\alpha$ depends on $n_w$ and approaches
1 as $n_w \to \infty$, is obtained under the application of the FSLZ and SWLZ
algorithms. Also, we give a general expression for the compression ratio for a
class of stationary and ergodic processes with zero entropy.
Next, we extend the study of Ornstein and Weiss on the asymptotic behavior of
the \emph{normalized} version of recurrence times and establish the \emph{large
deviation property} (LDP) for a class of mixing processes. Also, an estimator
of entropy based on recurrence times is proposed, for which a large deviation
principle is proved for sources satisfying similar mixing conditions.
Comment: accepted to appear in IEEE Transactions on Information Theory
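As a concrete illustration of the match-length quantity that SWLZ builds on, the sketch below (a toy, with an assumed binary alphabet and window size) computes the longest prefix of the lookahead that occurs somewhere in the preceding window; SWLZ encodes such matches as (position, length) pointers, and classical results of Wyner-Ziv and Ornstein-Weiss say the match length into a window of size n_w grows like (log n_w)/h for sources of entropy rate h.

def longest_match(window, lookahead):
    # Length of the longest prefix of `lookahead` occurring inside `window`.
    L = 0
    while L < len(lookahead) and lookahead[:L + 1] in window:
        L += 1
    return L

window = "0110100110010110"    # toy past window, n_w = 16
lookahead = "10010111"
L = longest_match(window, lookahead)
print(L, lookahead[:L])        # longest match found in the window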
An analysis of endogenous sunk cost competition in the banking industry
Banks play a critical role in providing liquidity to an economy by transforming small deposits into large loans. Due to the importance of the banking system, bank performance has been an area of keen interest for regulators. Traditionally, regulators saw competition in the banking sector as a source of excessive risk-taking, adversely impacting bank performance and threatening the system's stability. Consequently, regulators globally supported a concentrated banking market. However, there was a paradigm shift towards the last quarter of the 20th century. Assuming that deregulation would compete away inefficiencies stemming from a concentrated market structure, regulators began to encourage greater competition in the banking industry. As a result, regulators worldwide undertook several measures to reduce the market power of national champions. Yet, contrary to conventional economic theories and regulatory expectations, concentration in most banking markets remains elevated. This situation concerns the authorities; however, the literature offers no clarity on what enables banks to forestall competition in expanding markets.

The present study addresses the issue by integrating Sutton's (1991) theory of endogenous sunk cost (ESC) with established theories in the banking literature. According to Sutton (1991), as the size of the market increases, incumbent firms attempt to soften competition through "a proportionate increase in fixed cost" in quality (p. 47). The author argues that fixed investments in the vertical form of product differentiation by a few large firms in an industry push rivals to either match the quality of their larger peers or quit the market. Consequently, as the market size expands, a few large firms incur higher ESC, discouraging new entry on the one hand and triggering consolidation on the other, resulting in a concentrated market structure. Notably, as investments in quality are a firm-specific approach to handling competition beyond the purview of regulators, banks strategically invest in ESC to configure the market structure, quashing regulatory efforts to fragment the market.

In conclusion, this research addresses significant voids in the banking literature. The study reveals the importance of ESC investments in evaluating banking market competition. Additionally, it establishes a non-monotonic relationship between IT sunk cost investments and bank profitability. The study's findings give banking researchers and regulators valuable direction in assessing competition in banking markets, and they encourage supporters of the IT productivity paradox in banking to reassess their position in light of the present study's findings.
Performance Characterization of Watson Ahumada Motion Detector Using Random Dot Rotary Motion Stimuli
The performance of Watson & Ahumada's model of human visual motion sensing is compared against human psychophysical performance. The stimulus consists of random dots undergoing rotary motion, displayed in a circular annulus. The model matches psychophysical observer performance with respect to most parameters. It replicates key psychophysical findings, such as the invariance of observer performance to dot density in the display and the decrease in observer performance with increasing frame duration.
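For readers who want to reproduce the stimulus, the following is a minimal sketch (all parameter values are illustrative assumptions, not those of the study) of random dots placed in a circular annulus and rotated rigidly from frame to frame to produce rotary motion.

import numpy as np

rng = np.random.default_rng(0)
n_dots, r_inner, r_outer = 100, 2.0, 6.0   # dot count and annulus radii (assumed)
omega = np.deg2rad(5.0)                    # rotation per frame (assumed)

# Sample uniformly over the annulus area (sqrt gives uniform dot density).
r = np.sqrt(rng.uniform(r_inner**2, r_outer**2, n_dots))
theta = rng.uniform(0.0, 2.0 * np.pi, n_dots)

frames = []
for t in range(10):                        # a 10-frame movie
    x = r * np.cos(theta + t * omega)
    y = r * np.sin(theta + t * omega)
    frames.append(np.stack([x, y], axis=1))   # (n_dots, 2) dot positions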
Coding for Optimized Writing Rate in DNA Storage
A method for encoding information in DNA sequences is described. The method
is based on the precision-resolution framework, and is aimed to work in
conjunction with a recently suggested terminator-free template independent DNA
synthesis method. The suggested method optimizes the amount of information bits
per synthesis time unit, namely, the writing rate. Additionally, the encoding
scheme studied here takes into account the existence of multiple copies of the
DNA sequence, which are independently distorted. Finally, quantizers for
various run-length distributions are designed.
Comment: To appear in ISIT 202
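To illustrate the writing-rate objective, here is a minimal sketch (the run-length levels and bit mapping are assumptions for illustration, not the paper's code design): information is carried by the run lengths of repeated nucleotides, and the writing rate is the number of encoded bits divided by the total synthesis time, taken here to be the total run length.

import math

levels = [1, 2, 4, 8]              # distinguishable run-length levels (assumed)
bits_per_run = int(math.log2(len(levels)))

def encode(bits):
    # Map each pair of bits to one run length drawn from `levels`.
    runs = []
    for i in range(0, len(bits), bits_per_run):
        idx = 2 * bits[i] + bits[i + 1]
        runs.append(levels[idx])
    return runs

msg = [1, 0, 0, 1, 1, 1, 0, 0]
runs = encode(msg)                 # [4, 2, 8, 1]
rate = len(msg) / sum(runs)        # bits per synthesis time unit
print(runs, rate)                  # writing rate = 8/15 ~ 0.53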
CodNN -- Robust Neural Networks From Coded Classification
Deep Neural Networks (DNNs) are a revolutionary force in the ongoing
information revolution, and yet their intrinsic properties remain a mystery. In
particular, it is widely known that DNNs are highly sensitive to noise, whether
adversarial or random. This poses a fundamental challenge for hardware
implementations of DNNs, and for their deployment in critical applications such
as autonomous driving. In this paper we construct robust DNNs via error
correcting codes. By our approach, either the data or internal layers of the
DNN are coded with error correcting codes, and successful computation under
noise is guaranteed. Since DNNs can be seen as a layered concatenation of
classification tasks, our research begins with the core task of classifying
noisy coded inputs, and progresses towards robust DNNs. We focus on binary data
and linear codes. Our main result is that the prevalent parity code can
guarantee robustness for a large family of DNNs, which includes the recently
popularized binarized neural networks. Further, we show that the coded
classification problem has a deep connection to Fourier analysis of Boolean
functions. In contrast to existing solutions in the literature, our results do
not rely on altering the training process of the DNN, and provide
mathematically rigorous guarantees rather than experimental evidence.
Comment: To appear in ISIT '2
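As a toy illustration of the parity-code mechanics the paper builds on (this is not the paper's construction, which guarantees correct computation through the network layers themselves), the sketch below extends each binary input with a parity bit, so that any single bit flip is detectable at the input layer before classification.

import random

def parity_encode(x):
    # Append a parity bit so that every codeword has even weight.
    return x + [sum(x) % 2]

def parity_check(c):
    # True iff `c` is a valid even-weight codeword.
    return sum(c) % 2 == 0

random.seed(1)
x = [1, 0, 1, 1]
c = parity_encode(x)                    # [1, 0, 1, 1, 1]
noisy = c[:]
noisy[random.randrange(len(c))] ^= 1    # a single random bit flip (noise)
print(parity_check(c), parity_check(noisy))   # True False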