
    Improving the Robustness of Quantized Deep Neural Networks to White-Box Attacks using Stochastic Quantization and Information-Theoretic Ensemble Training

    Most real-world applications that employ deep neural networks (DNNs) quantize them to low precision to reduce compute requirements. We present a method to improve the robustness of quantized DNNs to white-box adversarial attacks. We first address the limitation of deterministic quantization to fixed "bins" by introducing a differentiable Stochastic Quantizer (SQ). We explore the hypothesis that different quantizations may collectively be more robust than any single quantized DNN. We formulate a training objective that encourages different quantized DNNs to learn different representations of the input image, capturing both diversity and accuracy via the mutual information (MI) between ensemble members. Through experimentation, we demonstrate substantial improvement in robustness against L∞ attacks even when the attacker is allowed to backpropagate through SQ (e.g., > 50% accuracy against PGD(5/255) on CIFAR10 without adversarial training), compared to vanilla DNNs as well as existing ensembles of quantized DNNs. We extend the method to detect attacks and generate robustness profiles in the adversarial information plane (AIP), towards a unified analysis of different threat models by correlating MI and accuracy. Comment: 9 pages, 9 figures, 4 tables
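    A minimal sketch of the general idea of a differentiable stochastic quantizer, using stochastic rounding to uniform bins with a straight-through gradient (an assumption for illustration; not necessarily the paper's exact SQ formulation):

```python
import torch

def stochastic_quantize(x: torch.Tensor, num_bits: int = 4) -> torch.Tensor:
    """Stochastically round x (assumed in [0, 1]) to 2**num_bits uniform levels."""
    levels = 2 ** num_bits - 1
    scaled = x.detach().clamp(0.0, 1.0) * levels
    lower = scaled.floor()
    # Round up with probability equal to the fractional part, so the
    # quantized value is an unbiased estimate of the input.
    quantized = (lower + torch.bernoulli(scaled - lower)) / levels
    # Straight-through estimator: the forward pass uses the quantized values,
    # the backward pass treats quantization as the identity.
    return x + (quantized - x).detach()

# Example: gradients still flow to the full-precision input.
x = torch.rand(2, 3, requires_grad=True)
stochastic_quantize(x).sum().backward()
print(x.grad)  # all ones under the straight-through approximation
```

    Because the rounding direction is sampled rather than fixed, repeated forward passes of the same input land in different bins, which is what makes an ensemble of such quantized networks diverse.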

    Learning Invariant World State Representations with Predictive Coding

    Self-supervised learning methods overcome a key bottleneck for building more capable AI: the limited availability of labeled data. However, one drawback of self-supervised architectures is that the representations they learn are implicit, and it is hard to extract meaningful information about the encoded world states, such as the 3D structure of the visual scene encoded in a depth map. Moreover, in the visual domain such representations only rarely undergo evaluations that may be critical for downstream tasks, such as vision for autonomous cars. Herein, we propose a framework for evaluating visual representations for illumination invariance in the context of depth perception. We develop a hybrid fully-supervised/self-supervised learning method and a novel architecture that extends the predictive coding approach: the PRedictive Lateral bottom-Up and top-Down Encoder-decoder Network (PreludeNet), which explicitly learns to infer and predict depth from video frames. In PreludeNet, the encoder's stack of predictive coding layers is trained in a self-supervised manner, while the predictive decoder is trained in a supervised manner to infer or predict the depth. We evaluate the robustness of our model on a new synthetic dataset in which lighting conditions (such as overall illumination and the effect of shadows) can be parametrically adjusted while keeping all other aspects of the world constant. PreludeNet achieves both competitive depth inference performance and next-frame prediction accuracy. We also show how this new network architecture, coupled with the hybrid fully-supervised/self-supervised learning method, balances this performance against invariance to changes in lighting. The proposed framework for evaluating visual representations can be extended to diverse task domains and invariance tests. Comment: 11 pages, 5 figures, submitted
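    A minimal sketch of the hybrid training scheme described above: a self-supervised next-frame loss updates the encoder, while a supervised depth loss updates only the decoder. The single-convolution modules and random tensors are hypothetical stand-ins, not PreludeNet's actual predictive coding stack.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins; the real architecture is a stack of predictive coding layers.
encoder = nn.Conv2d(3, 16, kernel_size=3, padding=1)          # shared features
frame_predictor = nn.Conv2d(16, 3, kernel_size=3, padding=1)  # self-supervised head
depth_decoder = nn.Conv2d(16, 1, kernel_size=3, padding=1)    # supervised head

ssl_opt = torch.optim.Adam(list(encoder.parameters()) + list(frame_predictor.parameters()), lr=1e-3)
sup_opt = torch.optim.Adam(depth_decoder.parameters(), lr=1e-3)

def train_step(frame_t, frame_t_plus_1, depth_t):
    # Self-supervised update: predict the next frame from the current one.
    ssl_loss = F.mse_loss(frame_predictor(encoder(frame_t)), frame_t_plus_1)
    ssl_opt.zero_grad(); ssl_loss.backward(); ssl_opt.step()

    # Supervised update: decode depth from the encoder's features without
    # pushing the supervised gradient back into the encoder.
    depth_loss = F.mse_loss(depth_decoder(encoder(frame_t).detach()), depth_t)
    sup_opt.zero_grad(); depth_loss.backward(); sup_opt.step()
    return ssl_loss.item(), depth_loss.item()

# Example with random tensors standing in for video frames and a depth map.
f_t, f_t1, d_t = torch.rand(4, 3, 32, 32), torch.rand(4, 3, 32, 32), torch.rand(4, 1, 32, 32)
print(train_step(f_t, f_t1, d_t))
```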

    Functional Diversity in the Retina Improves the Population Code

    Within a given brain region, individual neurons exhibit a wide variety of feature selectivities. Here, we investigated the impact of this extensive functional diversity on the population neural code. Our approach was to build optimal decoders to discriminate among stimuli using the spiking output of a real, measured neural population and to compare its performance against that of a matched, homogeneous neural population with the same number of cells and spikes. Analyzing large populations of retinal ganglion cells, we found that the real, heterogeneous population can yield a discrimination error several orders of magnitude lower than the homogeneous population and consequently can encode much more visual information. This effect increases with population size and with graded degrees of heterogeneity. We complemented these results with an analysis of coding based on the Chernoff distance, as well as derivations of inequalities on coding in certain limits, from which we conclude that the beneficial effect of heterogeneity occurs over a broad set of conditions. Together, our results indicate that the presence of functional diversity in neural populations can enhance their coding fidelity appreciably. A noteworthy outcome of our study is that this effect can be extremely strong and should be taken into account when investigating design principles for neural circuits.
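    A minimal sketch of this kind of comparison, under an independent-Poisson assumption with a maximum-likelihood two-stimulus decoder; the firing rates below are illustrative placeholders, not the measured retinal data or the paper's decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def ml_error(rates_a, rates_b, n_trials=20000):
    """Monte-Carlo discrimination error of the log-likelihood-ratio decoder
    for two stimuli under independent Poisson spiking."""
    llr_weights = np.log(rates_a / rates_b)   # per-neuron weight on spike counts
    bias = np.sum(rates_b - rates_a)          # from the Poisson normalizers
    errors = 0
    for true_rates, sign in [(rates_a, +1), (rates_b, -1)]:
        counts = rng.poisson(true_rates, size=(n_trials, len(true_rates)))
        llr = counts @ llr_weights + bias     # decide stimulus A when llr > 0
        errors += np.sum(sign * llr < 0)
    return errors / (2 * n_trials)

n_cells = 50
# Heterogeneous population: each cell modulates differently between stimuli.
het_a = rng.uniform(2.0, 8.0, n_cells)
het_b = het_a + rng.uniform(-1.0, 2.0, n_cells)
# Matched homogeneous population: every cell carries the population-average rates.
hom_a = np.full(n_cells, het_a.mean())
hom_b = np.full(n_cells, het_b.mean())

print("heterogeneous error:", ml_error(het_a, het_b))
print("homogeneous error:  ", ml_error(hom_a, hom_b))
```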

    Standards rule? Regulations, literacies and algorithms in times of transition

    In this panel we seek to reflect upon the theme "internet rules" by drawing on the notion of standards, developed in Science and Technology Studies. The work of Susan Leigh Star lays a foundation for considering the relationships between rules, standards and algorithms as forms of infrastructure. In the panel, we explore the production of standards as they become transparent infrastructures, heeding Star and Lampland's call to restore these standards' "historical development, their political consequences, and the smoke-filled rooms always attached to decisions made about them" (2009:13). Standards – and algorithms – are rarely queried, as they promise and embody efficiency and order. Indeed, modernity may be described as a concentrated, relentless effort to contain the accidental, the arbitrary, the residual; to categorize, order, and routinize the unexpected; and to preclude the exceptional and unpredictable (Bauman, 1991) – in a word: to standardize. As Larkin writes, it is difficult to separate an analysis of infrastructures such as standards from the modernist belief that by promoting order, "infrastructures bring about change, and through change they enact progress, and through progress we gain freedom" (2013:332). It is ironic, then, that standards are distributed unevenly across the sociocultural landscape, that they are increasingly linked and integrated with one another, and that they codify, embody or prescribe social values that often carry great consequences for individuals and groups (Star and Lampland, 2009:5). In this context, the four papers and the moderator of this panel explore the meaning of contemporary standardization practices in such diverse fields as memory applications, crowdfunding, biometric identification and national archiving, and internet literacy – viewing them as empirically distinct yet theoretically interrelated attempts to impose order in times of growing uncertainty. Together, they address two tensions that inform contemporary standardization efforts, regarding standards as an encounter between analogue and digital objects and practices, and as a dialectic of invisibility and transparency, a pragmatic and symbolic endeavor.