
    Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images

    Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion as a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images, with evolutionary algorithms or gradient ascent, that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects, which we call "fooling images" (more generally, fooling examples). Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision. Comment: To appear at CVPR 2015.
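    The gradient-ascent route mentioned in the abstract can be sketched in a few lines: start from noise, treat the pixels as parameters, and repeatedly step them in the direction that raises one class's score. The snippet below is a minimal illustration of that idea, assuming a torchvision AlexNet and an illustrative target class index; it is not the authors' exact setup (the paper also evolves images and uses architectures and hyperparameters not shown here).

```python
# Minimal sketch of gradient ascent on input pixels to maximize one class score.
# Model choice, target class index, learning rate, and step count are assumptions
# for illustration, not the setup used in the paper.
import torch
import torchvision.models as models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
target_class = 291                                       # assumed ImageNet index for "lion"
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from random noise

optimizer = torch.optim.SGD([image], lr=0.5)
for step in range(200):
    optimizer.zero_grad()
    logits = model(image)
    # Maximizing the class probability is the same as minimizing its negative log-probability.
    loss = -torch.log_softmax(logits, dim=1)[0, target_class]
    loss.backward()
    optimizer.step()
    image.data.clamp_(0.0, 1.0)                          # keep pixels in a valid range

confidence = torch.softmax(model(image), dim=1)[0, target_class].item()
print(f"Confidence that the optimized noise is class {target_class}: {confidence:.4f}")
```

    The optimized image typically still looks like structured noise to a human, yet the network can assign it very high confidence, which is exactly the gap between human and DNN vision that the abstract describes.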

    Uncertainties in the Algorithmic Image

    The incorporation of algorithmic procedures into the automation of image production has been gradual, but has reached critical mass over the past century, especially with the advent of photography, the introduction of digital computers, and the use of artificial intelligence (AI) and machine learning (ML). Due to the increasingly significant influence algorithmic processes have on visual media, there has been an expansion of the possibilities of how images may behave, and a consequent struggle to define them. This algorithmic turn highlights inner tensions within existing notions of the image, raising questions about the autonomy of machines, authorship and viewership, and the veracity of representations. In this sense, algorithmic images hover uncertainly between human and machine as producers and interpreters of visual information, between the representational and the non-representational, and between the visible surface and the processes behind it. This paper gives an introduction to fundamental internal discrepancies that arise within algorithmically produced images, examined through a selection of relevant artistic examples. Focusing on the theme of uncertainty, this investigation considers how algorithmic images contain aspects that conflict with the certitude of computation, and how this contributes to the difficulty of defining images.

    Negative Results in Computer Vision: A Perspective

    A negative result occurs when the outcome of an experiment or a model is not what was expected, or when a hypothesis does not hold. Despite often being overlooked in the scientific community, negative results are results, and they carry value. While this topic has been extensively discussed in other fields such as the social sciences and biosciences, less attention has been paid to it in the computer vision community. The unique characteristics of computer vision, particularly its experimental aspect, call for a special treatment of this matter. In this paper, I will address what makes negative results important, how they should be disseminated and incentivized, and what lessons can be learned from cognitive vision research in this regard. Further, I will discuss issues such as the interaction between computer vision and human vision, experimental design and statistical hypothesis testing, explanatory versus predictive modeling, performance evaluation, model comparison, and computer vision research culture.
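    One concrete place where negative results surface is model comparison: two systems may differ in headline accuracy, yet a paired test on the same images may show no significant difference. The sketch below illustrates that kind of check with McNemar's test; the data are synthetic and the procedure is only an assumed example of the statistical hypothesis testing the abstract mentions, not a protocol taken from the paper.

```python
# Hypothetical model comparison with McNemar's test on paired per-image outcomes.
# The correctness vectors are synthetic stand-ins for two classifiers evaluated
# on the same test set.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
model_a = rng.integers(0, 2, size=1000)   # 1 = correct, 0 = wrong
model_b = rng.integers(0, 2, size=1000)

# 2x2 contingency table over (A correct/wrong) x (B correct/wrong).
table = np.array([
    [np.sum((model_a == 1) & (model_b == 1)), np.sum((model_a == 1) & (model_b == 0))],
    [np.sum((model_a == 0) & (model_b == 1)), np.sum((model_a == 0) & (model_b == 0))],
])

result = mcnemar(table, exact=False, correction=True)
print(f"McNemar statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")
# A large p-value is itself a reportable negative result: no evidence that
# the two models differ on this test set.
```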

    Imagined Hierarchies as Conditionals of Gender in Aesthetics

    The attributes of gender in the media are disputable. This can be explained by a conflict generated by culturally acquired alternative imagined hierarchies that are incompatible or even contradictory. This article is a philosophical enquiry that examines the representation of gender and the environment in which it is conditioned.