
    Digital Image

    This paper considers the ontological significance of invisibility in relation to the question 'what is a digital image?' Its argument, in a nutshell, is that the emphasis on visibility comes at the expense of latency and is symptomatic of the style of thinking that has dominated Western philosophy since Plato. This privileging of visible content necessarily binds images to linguistic (semiotic and structuralist) paradigms of interpretation, which promote representation, subjectivity, identity and negation over multiplicity, indeterminacy and affect. Photography is the case in point because, until recently, critical approaches to photography had one thing in common: they all shared the implicit and incontrovertible understanding that photographs are a medium that must be approached visually; they took it as a given that photographs are there to be looked at, and they all agreed that it is only through the practices of spectatorship that the secrets of the image can be unlocked. Whatever subsequent interpretations followed, the priority of vision in relation to the image remained unperturbed. This undisputed belief in the visibility of the image has such a strong grasp on theory that it has imperceptibly bonded together otherwise dissimilar and sometimes contradictory methodologies, preventing them from noticing that which is most unexplained about images: the precedence of looking itself. This self-evident truth of visibility casts a long shadow on image theory because it blocks the possibility of inquiring after everything that is invisible, latent and hidden.

    Auto-Encoding Scene Graphs for Image Captioning

    We propose the Scene Graph Auto-Encoder (SGAE), which incorporates the language inductive bias into the encoder-decoder image captioning framework for more human-like captions. Intuitively, we humans use the inductive bias to compose collocations and contextual inference in discourse. For example, when we see the relation `person on bike', it is natural to replace `on' with `ride' and infer `person riding bike on a road', even though the `road' is not evident. Exploiting such bias as a language prior is therefore expected to make conventional encoder-decoder models less likely to overfit to the dataset bias and more focused on reasoning. Specifically, we use the scene graph, a directed graph $\mathcal{G}$ in which an object node is connected by adjective nodes and relationship nodes, to represent the complex structural layout of both the image $\mathcal{I}$ and the sentence $\mathcal{S}$. In the textual domain, we use SGAE to learn a dictionary $\mathcal{D}$ that helps to reconstruct sentences in the $\mathcal{S}\rightarrow\mathcal{G}\rightarrow\mathcal{D}\rightarrow\mathcal{S}$ pipeline, where $\mathcal{D}$ encodes the desired language prior; in the vision-language domain, we use the shared $\mathcal{D}$ to guide the encoder-decoder in the $\mathcal{I}\rightarrow\mathcal{G}\rightarrow\mathcal{D}\rightarrow\mathcal{S}$ pipeline. Thanks to the scene graph representation and the shared dictionary, the inductive bias is transferred across domains in principle. We validate the effectiveness of SGAE on the challenging MS-COCO image captioning benchmark: our SGAE-based single model achieves a new state-of-the-art 127.8 CIDEr-D on the Karpathy split, and a competitive 125.5 CIDEr-D (c40) on the official server even compared with ensemble models.
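    The key mechanism here, a dictionary $\mathcal{D}$ shared between the textual and visual pipelines, can be pictured as attention over a bank of learnable atoms. The PyTorch sketch below is a minimal illustration under that assumption; `SharedDictionary` and the toy tensors are ours rather than the authors' released code, and the full model additionally needs a scene-graph encoder and a sentence decoder on either side of $\mathcal{D}$.

    ```python
    # Minimal sketch of a shared-dictionary re-encoding step (assumption, not SGAE's code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SharedDictionary(nn.Module):
        """Learnable dictionary D: node features are re-encoded as convex
        combinations of atoms, so D can come to store the language prior."""
        def __init__(self, num_atoms: int = 1000, dim: int = 512):
            super().__init__()
            self.atoms = nn.Parameter(torch.randn(num_atoms, dim) * 0.02)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (num_nodes, dim) scene-graph node features from S->G or I->G
            attn = F.softmax(x @ self.atoms.t(), dim=-1)  # (num_nodes, num_atoms)
            return attn @ self.atoms                      # features re-encoded via D

    dictionary = SharedDictionary()
    sentence_nodes = torch.randn(12, 512)  # toy stand-in for parsed-sentence graph nodes
    image_nodes = torch.randn(36, 512)     # toy stand-in for detected-object graph nodes
    text_out = dictionary(sentence_nodes)   # S -> G -> D (then -> S through a decoder)
    vision_out = dictionary(image_nodes)    # I -> G -> D: the shared D carries the prior
    ```

    In training, the textual pipeline fits the atoms by reconstructing sentences; at captioning time the same atoms re-encode image-graph features, which is how the prior crosses domains.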

    Webly Supervised Learning of Convolutional Networks

    We present an approach to utilizing large amounts of web data for learning CNNs. Specifically, inspired by curriculum learning, we present a two-step approach to CNN training. First, we use easy images to train an initial visual representation. We then adapt this initial CNN to harder, more realistic images by leveraging the structure of the data and categories. We demonstrate that our two-stage CNN outperforms a fine-tuned CNN trained on ImageNet on Pascal VOC 2012. We also demonstrate the strength of webly supervised learning by localizing objects in web images and training an R-CNN-style detector, which achieves the best performance on VOC 2007 when no VOC training data is used. Finally, we show that our approach is quite robust to noise and performs comparably even when we use image search results from March 2013 (the pre-CNN image search era).
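    The two-stage curriculum described above amounts to two successive training passes over progressively harder data. The following is a hedged Python/PyTorch sketch, not the authors' pipeline: `make_toy_loader` stands in for real easy/hard web-image loaders, and the AlexNet backbone and hyperparameters are our era-appropriate assumptions.

    ```python
    # Hedged sketch of a two-stage "easy then hard" webly supervised curriculum.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset
    import torchvision.models as models

    def make_toy_loader(n: int = 8) -> DataLoader:
        # Stand-in for a real web-image loader; random tensors keep the sketch runnable.
        x = torch.randn(n, 3, 224, 224)
        y = torch.randint(0, 1000, (n,))
        return DataLoader(TensorDataset(x, y), batch_size=4)

    def train(model: nn.Module, loader: DataLoader, epochs: int, lr: float) -> None:
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        loss_fn = nn.CrossEntropyLoss()
        model.train()
        for _ in range(epochs):
            for images, labels in loader:
                opt.zero_grad()
                loss = loss_fn(model(images), labels)
                loss.backward()
                opt.step()

    # Stage 1: initial representation from "easy" images (clean, iconic search results).
    cnn = models.alexnet(num_classes=1000)            # backbone choice is an assumption
    train(cnn, make_toy_loader(), epochs=1, lr=0.01)

    # Stage 2: adapt the same network to "hard", realistic photos at a lower learning
    # rate; the paper additionally exploits category structure to cope with label noise.
    train(cnn, make_toy_loader(), epochs=1, lr=0.001)
    ```

    The design point the paper leans on is that the easy stage gives a stable initialization, so the hard, noisier stage behaves like fine-tuning rather than training from scratch.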