Semantically Secure Lattice Codes for Compound MIMO Channels
We consider compound multi-input multi-output (MIMO) wiretap channels where
minimal channel state information at the transmitter (CSIT) is assumed. A code
construction is given for the special case of isotropic mutual information,
which serves as a conservative strategy for the general case. Using the flatness
factor for MIMO channels, we propose lattice codes universally achieving the
secrecy capacity of compound MIMO wiretap channels up to a constant gap
(measured in nats) that is equal to the number of transmit antennas. The
proposed approach improves upon existing works on secrecy coding for MIMO
wiretap channels from an error-probability perspective, and establishes
information-theoretic security (in fact, semantic security). We also give an
algebraic construction to reduce the code design complexity, as well as the
decoding complexity of the legitimate receiver. Thanks to the algebraic
structures of number fields and division algebras, our code construction for
compound MIMO wiretap channels can be reduced to that for Gaussian wiretap
channels, up to some additional gap to secrecy capacity.
Comment: IEEE Trans. Information Theory, to appear
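For context, a schematic reading of the claimed result in LaTeX: the flatness factor (shown here in its standard Gaussian-channel form for an n-dimensional lattice Λ with covolume V(Λ); the paper works with a MIMO variant) bounds how far the lattice Gaussian deviates from a uniform density, and the achievable secrecy rate R_s sits within n_t nats of the secrecy capacity C_s, where n_t is the number of transmit antennas. The notation below is ours, not the paper's exact statement:

    \epsilon_\Lambda(\sigma) = \max_{x \in \mathbb{R}^n}
        \Big| V(\Lambda) \sum_{\lambda \in \Lambda}
              \frac{e^{-\|x-\lambda\|^2/(2\sigma^2)}}{(2\pi\sigma^2)^{n/2}} - 1 \Big|,
    \qquad
    R_s \ge C_s - n_t \ \text{(nats)}.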
Measures of Information Reflect Memorization Patterns
Neural networks are known to exploit spurious artifacts (or shortcuts) that
co-occur with a target label, exhibiting heuristic memorization. On the other
hand, networks have been shown to memorize training examples, resulting in
example-level memorization. These kinds of memorization impede generalization
of networks beyond their training distributions. Detecting such memorization
can be challenging, often requiring researchers to curate tailored test sets.
In this work, we hypothesize -- and subsequently show -- that the diversity in
the activation patterns of different neurons is reflective of model
generalization and memorization. We quantify the diversity in the neural
activations through information-theoretic measures and find support for our
hypothesis in experiments spanning several natural language and vision tasks.
Importantly, we discover that information organization points to the two forms
of memorization, even for neural activations computed on unlabelled
in-distribution examples. Lastly, we demonstrate the utility of our findings
for the problem of model selection. The associated code and other resources for
this work are available at https://rachitbansal.github.io/information-measures.
Comment: 22 pages; NeurIPS 2022. Code and data at
https://rachitbansal.github.io/information-measures
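As a rough illustration of the idea (a minimal sketch, not the paper's exact measures; the function name neuron_entropies and the histogram-based entropy estimator are our assumptions), the diversity of neural activations can be quantified by estimating each neuron's activation entropy over unlabelled in-distribution examples, with low entropy across many neurons suggesting collapsed, memorization-prone representations:

    import numpy as np

    def neuron_entropies(activations, n_bins=16):
        """Estimate per-neuron activation entropy (in nats) via histograms.

        activations: array of shape (n_examples, n_neurons), e.g. one hidden
        layer's outputs on unlabelled in-distribution examples.
        """
        _, n_neurons = activations.shape
        entropies = np.empty(n_neurons)
        for j in range(n_neurons):
            counts, _ = np.histogram(activations[:, j], bins=n_bins)
            p = counts / counts.sum()
            p = p[p > 0]  # drop empty bins before taking logs
            entropies[j] = -np.sum(p * np.log(p))
        return entropies

    # Toy usage: varied activations score high, collapsed ones near zero.
    rng = np.random.default_rng(0)
    print(neuron_entropies(rng.normal(size=(1000, 8))).mean())  # high
    print(neuron_entropies(np.full((1000, 8), 0.5)).mean())     # ~0.0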