25 research outputs found

    Exploring the Space of Adversarial Images

    Full text link
    Adversarial examples have raised questions regarding the robustness and security of deep neural networks. In this work we formalize the problem of adversarial images given a pretrained classifier, showing that even in the linear case the resulting optimization problem is nonconvex. We generate adversarial images using shallow and deep classifiers on the MNIST and ImageNet datasets. We probe the pixel space of adversarial images using noise of varying intensity and distribution. We bring novel visualizations that showcase the phenomenon and its high variability. We show that adversarial images appear in large regions in the pixel space, but that, for the same task, a shallow classifier seems more robust to adversarial images than a deep convolutional network.
    Comment: Copyright 2016 IEEE. This manuscript was accepted at the IEEE International Joint Conference on Neural Networks (IJCNN) 2016. We will link the published version as soon as the DOI is available.
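    The formalization above treats an adversarial image as the solution of an optimization problem over the input pixels. As a rough illustration only, the sketch below searches for a small distortion d that flips the prediction of a toy softmax-linear classifier with random weights; it is not the paper's MNIST/ImageNet setup, and the model, labels, and hyperparameters are all stand-ins.

# A minimal, hypothetical sketch of the kind of optimization behind adversarial
# images: given a (toy) "pretrained" classifier, search for a small distortion d
# so that the classifier assigns x + d to a chosen wrong label. The weights here
# are random stand-ins, not the paper's MNIST/ImageNet models.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_classes = 784, 10                            # MNIST-sized toy input
W = rng.normal(scale=0.01, size=(n_classes, n_pixels))   # toy "pretrained" weights
b = np.zeros(n_classes)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict(x):
    return softmax(W @ x + b)

x = rng.uniform(0.0, 1.0, size=n_pixels)            # a toy "image" in [0, 1]
target = (np.argmax(predict(x)) + 1) % n_classes    # any label != current one

# Gradient ascent on the target-class log-probability with an L2 penalty on the
# distortion; after each step, clip back to the valid pixel range.
d = np.zeros(n_pixels)
lam, lr = 0.05, 0.5
for _ in range(200):
    p = predict(x + d)
    grad = W[target] - p @ W        # d/dx log p[target] for a softmax-linear model
    d += lr * (grad - lam * d)
    d = np.clip(x + d, 0.0, 1.0) - x    # keep the adversarial image valid

print("original:", np.argmax(predict(x)),
      "adversarial:", np.argmax(predict(x + d)),
      "distortion L2:", np.linalg.norm(d))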

    Explorando imagens adversárias em redes neurais profundas (Exploring adversarial images in deep neural networks)

    Get PDF
    Advisor: Eduardo Alves do Valle Junior. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
    Abstract: Adversarial examples have raised questions regarding the robustness and security of deep neural networks. In this work we formalize the problem of adversarial images given a pre-trained classifier, showing that even in the linear case the resulting optimization problem is nonconvex. We generate adversarial images using shallow and deep classifiers on the MNIST and ImageNet datasets. We probe the pixel space of adversarial images using noise of varying intensity and distribution. We bring novel visualizations that showcase the phenomenon and its high variability. We show that adversarial images appear in large regions in the pixel space, but that, for the same task, a shallow classifier seems more robust to adversarial images than a deep convolutional network. We also propose a novel adversarial attack for variational autoencoders. Our procedure distorts the input image to mislead the autoencoder into reconstructing a completely different target image. We attack the internal, latent representations, attempting to make the adversarial input produce an internal representation as similar as possible to the target's. We find that autoencoders are much more robust to the attack than classifiers: while some examples have tolerably small input distortion and reasonable similarity to the target image, there is a quasi-linear trade-off between those aims. We report results on the MNIST and SVHN datasets, and also test regular deterministic autoencoders, reaching similar conclusions in all cases. Finally, we show that the usual adversarial attack for classifiers, while being much easier, also presents a direct proportion between distortion on the input and misdirection on the output. That proportionality, however, is hidden by the normalization of the output, which maps a linear layer into a probability distribution.
    Mestrado (Master's), Engenharia de Computação (Computer Engineering), Mestre em Engenharia Elétrica (Master in Electrical Engineering).
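    The attack on autoencoders described above matches the adversarial input to the target in representation space rather than in output space. Below is a minimal sketch of that objective, assuming a toy linear encoder with random weights (the dissertation attacks trained variational and deterministic autoencoders); the constant C weighting the distortion penalty is the knob behind the quasi-linear trade-off mentioned in the abstract, and every name here is a stand-in.

# Hypothetical sketch of a latent-space attack: distort an input x so that a
# (toy, linear) encoder maps it close to the encoding of a different target
# image x_t, while penalizing the input distortion:
#     minimize  ||enc(x + d) - enc(x_t)||^2 + C * ||d||^2
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_latent = 784, 32
E = rng.normal(scale=0.05, size=(n_latent, n_pixels))   # toy "encoder" weights

enc = lambda v: E @ v
x   = rng.uniform(0.0, 1.0, n_pixels)   # original image (toy)
x_t = rng.uniform(0.0, 1.0, n_pixels)   # target image (toy)

def attack(C, steps=500, lr=0.1):
    """Gradient descent on the regularized latent-matching objective."""
    d = np.zeros(n_pixels)
    for _ in range(steps):
        grad = 2 * E.T @ (enc(x + d) - enc(x_t)) + 2 * C * d
        d -= lr * grad
    return d

# Sweeping C traces the trade-off curve between input distortion and how well
# the adversarial encoding matches the target's encoding.
for C in (0.01, 0.1, 1.0):
    d = attack(C)
    print(f"C={C:>4}: |d|={np.linalg.norm(d):.2f}  "
          f"latent gap={np.linalg.norm(enc(x + d) - enc(x_t)):.2f}")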

    The role of the interaction network in the emergence of diversity of behavior

    No full text
    How can systems in which individuals' inner workings are very similar to each other, such as neural networks or ant colonies, produce so many qualitatively different behaviors, giving rise to roles and specialization? In this work, we bring new perspectives to this question by focusing on the underlying network that defines how individuals in these systems interact. We applied a genetic algorithm to optimize rules and connections of cellular automata in order to solve the density classification task, a classical problem used to study emergent behaviors in decentralized computational systems. The networks used were all generated by the introduction of shortcuts in an originally regular topology, following the small-world model. Even though all cells follow the exact same rules, we observed the existence of different classes of cells' behaviors in the best cellular automata found: most cells were responsible for memory and others for integration of information. Through the analysis of structural measures and patterns of connections (motifs) in successful cellular automata, we observed that the distribution of shortcuts between distant regions and the speed with which a cell can gather information from different parts of the system seem to be the main factors behind the specialization we observed, demonstrating how heterogeneity in a network can create heterogeneity of behavior.
    CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO - CNPq 142118/2010-

    The role of the interaction network in the emergence of diversity of behavior.

    No full text
    How can systems in which individuals' inner workings are very similar to each other, such as neural networks or ant colonies, produce so many qualitatively different behaviors, giving rise to roles and specialization? In this work, we bring new perspectives to this question by focusing on the underlying network that defines how individuals in these systems interact. We applied a genetic algorithm to optimize rules and connections of cellular automata in order to solve the density classification task, a classical problem used to study emergent behaviors in decentralized computational systems. The networks used were all generated by the introduction of shortcuts in an originally regular topology, following the small-world model. Even though all cells follow the exact same rules, we observed the existence of different classes of cells' behaviors in the best cellular automata found: most cells were responsible for memory and others for integration of information. Through the analysis of structural measures and patterns of connections (motifs) in successful cellular automata, we observed that the distribution of shortcuts between distant regions and the speed with which a cell can gather information from different parts of the system seem to be the main factors behind the specialization we observed, demonstrating how heterogeneity in a network can create heterogeneity of behavior.
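    For context on the task described above: in density classification, every cell must converge to the state held by the global majority of the initial configuration, using only information from its neighbors. The sketch below is a hypothetical, simplified stand-in for the paper's setup: a synchronous local-majority rule on a ring with randomly rewired shortcuts, in place of the rules and connections the paper actually evolves with a genetic algorithm; all parameters are invented.

# Hedged sketch of density classification on a small-world ring: cells update
# synchronously from their neighbors' states, and the goal is for all cells to
# converge to the global majority of the initial configuration. The rule here is
# a simple local majority vote, a stand-in for the evolved rules in the paper.
import numpy as np

rng = np.random.default_rng(2)

def small_world_ring(n, k, p):
    """Ring of n cells, each linked to k neighbors per side; each link is
    rewired to a random cell with probability p (Watts-Strogatz style)."""
    neigh = [set() for _ in range(n)]
    for i in range(n):
        for j in range(1, k + 1):
            t = rng.integers(n) if rng.random() < p else (i + j) % n
            if t != i:
                neigh[i].add(t)
                neigh[t].add(i)
    return [sorted(s) for s in neigh]

def step(state, neigh):
    """Synchronous local-majority update (ties keep the current state)."""
    new = state.copy()
    for i, ns in enumerate(neigh):
        ones = state[ns].sum()
        if ones * 2 > len(ns):
            new[i] = 1
        elif ones * 2 < len(ns):
            new[i] = 0
    return new

n = 149                                       # odd size, as usual for this task
neigh = small_world_ring(n, k=3, p=0.15)
state = (rng.random(n) < 0.55).astype(int)    # initial density around 0.55
majority = int(state.sum() * 2 > n)
for _ in range(2 * n):
    state = step(state, neigh)
print("correct classification:", bool((state == majority).all()))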

    How elementary cellular automata work.

    No full text
    Notice that all cells follow the same set of rules. The state of each cell is black (0) or white (1). The cellular automaton is uniform in the sense that all cells follow the same rule table to update their states.
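    As a hedged illustration of the uniform update the caption describes (not code from the paper): each cell reads its left neighbor, itself, and its right neighbor, and looks that 3-bit pattern up in a shared 8-entry rule table. Rule 110 below is just an example rule number.

# One synchronous update of a 1-D binary cellular automaton with periodic
# boundaries, using the standard Wolfram rule-number encoding.
import numpy as np

def eca_step(state, rule_number):
    """Every cell applies the same 8-entry rule table to (left, self, right)."""
    rule = np.array([(rule_number >> i) & 1 for i in range(8)], dtype=np.uint8)
    left, right = np.roll(state, 1), np.roll(state, -1)
    index = 4 * left + 2 * state + right      # neighborhood as a 3-bit number
    return rule[index]

state = np.zeros(31, dtype=np.uint8)
state[15] = 1                                 # single live cell in the middle
for _ in range(5):
    print("".join("#" if c else "." for c in state))
    state = eca_step(state, 110)              # rule 110, chosen as an example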

    Evolution of the Gini coefficient.

    No full text
    Evolution of the median inequality of the frequencies with which cells in each CA acted as limits during the search, for different rewiring probabilities p.
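    The inequality in this caption is measured with the Gini coefficient of the per-cell frequencies: 0 when every cell acts as a limit equally often, approaching 1 when a few cells account for almost all such events. A small self-contained sketch of that computation, on made-up frequencies:

# Gini coefficient of a vector of per-cell frequencies (invented data).
import numpy as np

def gini(freqs):
    """Gini coefficient via the sorted-values (Lorenz-curve) formula."""
    x = np.sort(np.asarray(freqs, dtype=float))
    n = x.size
    if x.sum() == 0:
        return 0.0
    cum = np.cumsum(x)
    return (n + 1 - 2 * cum.sum() / cum[-1]) / n

print(gini([1, 1, 1, 1]))      # 0.0  -> perfectly equal
print(gini([0, 0, 0, 10]))     # 0.75 -> highly concentrated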

    Distribution of Spearman's rank correlation, calculated for each CA, between the frequency with which a cell acts as a limit and the cell's structural metrics.

    No full text
    The black vertical line indicates the median value and the red vertical lines indicate, respectively, the 10th and 90th percentiles (N = 51000).
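    For reference, the statistic summarized in this figure can be computed as sketched below; the degree values and limit frequencies are invented stand-ins for the paper's structural metrics and measured frequencies.

# Spearman's rank correlation between a cell's structural metric (here, a toy
# degree) and how often it acts as a limit (toy, loosely related data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
degree = rng.integers(2, 10, size=50)                  # toy structural metric
limit_freq = degree + rng.normal(scale=2.0, size=50)   # toy frequencies

rho, pvalue = spearmanr(degree, limit_freq)
print(f"Spearman rho = {rho:.2f} (p = {pvalue:.3f})")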

    Graphs generated using the small-world model with different rewiring probabilities p.

    No full text
    (A) p = 0.00, (B) p = 0.15, and (C) p = 1.00.
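    A hedged sketch of how such graphs can be generated and how the rewiring probability changes their structure, using networkx's Watts-Strogatz generator as an assumed tool choice (the caption only specifies the small-world model): p = 0 keeps the regular ring, intermediate p adds a few long-range shortcuts, and p = 1 yields an essentially random graph.

# Generate small-world graphs for the three panel values of p and report
# clustering and average path length, the two quantities the rewiring
# probability trades off. Sizes and seed are arbitrary.
import networkx as nx

n, k = 100, 4                      # toy size: 100 nodes, 4 neighbors each
for p in (0.00, 0.15, 1.00):
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=0)
    print(f"p={p:.2f}  clustering={nx.average_clustering(G):.3f}  "
          f"avg path length={nx.average_shortest_path_length(G):.2f}")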