Coding against a Limited-view Adversary: The Effect of Causality and Feedback
We consider the problem of communication over a multi-path network in the
presence of a causal adversary. The limited-view causal adversary is able to
eavesdrop on a subset of links and also jam on a potentially overlapping subset
of links based on the current and past information. To ensure that the
communication takes place reliably and secretly, resilient network codes with
necessary redundancy are needed. We study two adversarial models, additive
jamming and overwrite jamming, and optionally assume passive feedback from decoder to
encoder, i.e., the encoder sees everything that the decoder sees. The problem
assumes transmissions are in the large alphabet regime. For both jamming
models, we find the capacity under four scenarios: reliability without
feedback, reliability and secrecy without feedback, reliability with passive
feedback, and reliability and secrecy with passive feedback. We observe that, in
comparison to the non-causal setting, the capacity with a causal adversary is
strictly increased for a wide variety of parameter settings and present our
intuition through several examples. Comment: 15 pages
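As a toy illustration of how encoder randomness buys secrecy against an eavesdropper on a subset of links, consider a one-time-pad-style split of one message symbol across two links, where one link may be eavesdropped. The parameters below (a large prime alphabet, two links, one eavesdropped link) are illustrative choices, not the paper's construction:

```python
import secrets

P = 2**61 - 1  # large prime alphabet; stand-in for the "large alphabet regime"

def encode(message: int) -> tuple[int, int]:
    """Split one message symbol across two links so that either
    single link alone reveals nothing about the message."""
    key = secrets.randbelow(P)          # fresh randomness at the encoder
    return key, (message + key) % P     # link 1 carries key, link 2 carries m + key

def decode(x1: int, x2: int) -> int:
    """The decoder sees both links and cancels the key."""
    return (x2 - x1) % P

m = 123456789
x1, x2 = encode(m)
assert decode(x1, x2) == m
```

An eavesdropper on either single link sees a uniformly distributed symbol and learns nothing about the message; resilience to jamming additionally requires redundancy (more links than message symbols), which is where the capacity trade-offs studied in the paper arise.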
Mutual Information Learned Regressor: an Information-theoretic Viewpoint of Training Regression Systems
As one of the central tasks in machine learning, regression finds lots of
applications in different fields. An existing common practice for solving
regression problems is the mean square error (MSE) minimization approach or its
regularized variants which require prior knowledge about the models. Recently,
Yi et al. proposed a mutual information based supervised learning framework
where they introduced a label entropy regularization which does not require any
prior knowledge. When applied to classification tasks and solved via a
stochastic gradient descent (SGD) optimization algorithm, their approach
achieved significant improvement over the commonly used cross entropy loss and
its variants. However, they did not provide a theoretical convergence analysis
of the SGD algorithm for the proposed formulation. Besides, applying the
framework to regression tasks is nontrivial due to the potentially infinite
support set of the label. In this paper, we investigate the regression under
the mutual information based supervised learning framework. We first argue that
the MSE minimization approach is equivalent to a conditional entropy learning
problem, and then propose a mutual information learning formulation for solving
regression problems by using a reparameterization technique. For the proposed
formulation, we give the convergence analysis of the SGD algorithm for solving
it in practice. Finally, we consider a multi-output regression data model where
we derive the generalization performance lower bound in terms of the mutual
information associated with the underlying data distribution. The result shows
that high dimensionality can be a blessing rather than a curse, with the
transition controlled by a threshold. We hope our work will serve as a good starting point
for further research on mutual information based regression. Comment: 28 pages, 2 figures, submitted to AISTATS2023 for review
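The claimed equivalence between MSE minimization and a conditional entropy learning problem can be made concrete under a Gaussian noise model: if p(y|x) = N(f(x), sigma^2) with fixed sigma, the average negative log-likelihood (an empirical stand-in for the conditional entropy H(Y|X)) is an affine function of the MSE, so the two objectives share their minimizers. A minimal NumPy check, with an illustrative linear model and synthetic data not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 3.0 * x + rng.normal(scale=0.5, size=200)  # ground-truth slope 3

def mse(w: float) -> float:
    """Mean squared error of the linear predictor y_hat = w * x."""
    return float(np.mean((y - w * x) ** 2))

def gaussian_nll(w: float, sigma: float = 0.5) -> float:
    """Average negative log N(y | w*x, sigma^2): affine in the MSE."""
    return 0.5 * np.log(2 * np.pi * sigma**2) + mse(w) / (2 * sigma**2)

# NLL = const + MSE / (2 sigma^2), so both objectives rank models identically
for w1, w2 in [(1.0, 2.0), (2.0, 3.0), (3.0, 5.0)]:
    assert (mse(w1) < mse(w2)) == (gaussian_nll(w1) < gaussian_nll(w2))
```

The fixed-sigma Gaussian assumption is what collapses the entropy term to a constant; the paper's reparameterization handles the general case where the label distribution itself is learned.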
Low effective surface recombination in In(Ga)As/GaAs quantum dot diodes
Size dependent current-voltage measurements were performed on InGaAs quantum dot active region mesa diodes and the surface recombination velocity was extracted from current density versus perimeter/area plots using a diffusion model. An effective surface recombination velocity of 5.5 x 10^4 cm/s was obtained, which can be reduced by more than an order of magnitude by selective oxidation of Al0.9Ga0.1As cladding layers. The values are three times smaller than those obtained for a single quantum well. The effect of p-type doping in the active region was investigated and found to increase the effective surface recombination. (C) 2011 American Institute of Physics. [doi:10.1063/1.3611387]
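The extraction described above, surface recombination velocity from current density versus perimeter/area plots, reduces to a linear fit: in a simplified model J(P/A) = J_bulk + k * S_eff * (P/A), where k lumps together carrier density and geometry factors. A sketch of that fit, where the prefactor k, the bulk term, and the mesa geometries are hypothetical numbers chosen only for illustration:

```python
import numpy as np

# hypothetical lumped prefactor and bulk term; only S_true echoes the reported value
k = 1e-9          # lumped prefactor (illustrative, units folded in)
S_true = 5.5e4    # cm/s, effective surface recombination velocity
J_bulk = 2.0e-6   # A/cm^2, hypothetical bulk (area) contribution

p_over_a = np.array([40.0, 80.0, 160.0, 320.0])   # 1/cm, from mesa geometry
J = J_bulk + k * S_true * p_over_a                # idealized, noise-free "data"

# linear fit: the slope divided by k recovers the effective velocity,
# and the intercept recovers the bulk current density
slope, intercept = np.polyfit(p_over_a, J, 1)
S_extracted = slope / k
assert abs(S_extracted - S_true) / S_true < 1e-6
```

With real measurements the points scatter about the line and the fit quality limits how precisely S_eff can be resolved, but the slope/intercept decomposition is the same.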