Context Based Visual Content Verification
In this paper an intermediary visual content verification method based on multi-level co-occurrences is studied. Co-occurrence statistics are in general used to determine relational properties between objects based on information collected from data. As such, these measures depend heavily on the relative number of occurrences and give only limited accuracy when predicting objects in the real world. In order to improve the accuracy of this method in the verification task, we include context information such as location, type of environment, etc. In order to train our model, we provide a new annotated dataset, the Advanced Attribute VOC (AAVOC), which contains additional properties of the image. We show that the use of context greatly improves the accuracy of verification, with up to a 16% improvement.
Comment: 6 pages, 6 figures; published in Proceedings of the Information and Digital Technology Conference, 201
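As a rough illustration of how co-occurrence statistics can be conditioned on context for a verification decision, the Python sketch below collects label co-occurrence counts per context and accepts a detection only if its conditional co-occurrence score is high enough. The probability model, the threshold, and the class and function names are illustrative assumptions; the paper's exact multi-level formulation and the AAVOC annotation schema are not reproduced here.

from collections import defaultdict

# Hypothetical verifier: co-occurrence counts are kept separately per context
# (e.g. location or environment type), and a detected label is verified
# against the other labels found in the same image under that context.
class ContextCooccurrenceVerifier:
    def __init__(self):
        self.pair_counts = defaultdict(lambda: defaultdict(int))   # context -> (a, b) -> count
        self.label_counts = defaultdict(lambda: defaultdict(int))  # context -> a -> count

    def fit(self, annotations):
        """annotations: iterable of (context, labels) pairs from training data."""
        for context, labels in annotations:
            for a in labels:
                self.label_counts[context][a] += 1
                for b in labels:
                    if a != b:
                        self.pair_counts[context][(a, b)] += 1

    def score(self, context, candidate, other_labels):
        """Average conditional co-occurrence of the candidate with the other labels."""
        scores = []
        for other in other_labels:
            joint = self.pair_counts[context][(candidate, other)]
            marginal = self.label_counts[context][other]
            scores.append(joint / marginal if marginal else 0.0)
        return sum(scores) / len(scores) if scores else 0.0

    def verify(self, context, candidate, other_labels, threshold=0.05):
        return self.score(context, candidate, other_labels) >= threshold

# Usage: is a "boat" detection plausible indoors, given that a "sofa" was also detected?
verifier = ContextCooccurrenceVerifier()
verifier.fit([("indoor", ["sofa", "tv", "person"]),
              ("outdoor", ["boat", "person", "water"])])
print(verifier.verify("indoor", "boat", ["sofa"]))      # False with these toy counts
print(verifier.verify("outdoor", "boat", ["person"]))   # True with these toy counts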
Jointly Modeling Embedding and Translation to Bridge Video and Language
Automatically describing video content with natural language is a fundamental
challenge of multimedia. Recurrent Neural Networks (RNNs), which model sequence dynamics, have attracted increasing attention for visual interpretation. However,
most existing approaches generate a word locally with given previous words and
the visual content, while the relationship between sentence semantics and
visual content is not holistically exploited. As a result, the generated
sentences may be contextually correct but the semantics (e.g., subjects, verbs
or objects) are not true.
This paper presents a novel unified framework, named Long Short-Term Memory
with visual-semantic Embedding (LSTM-E), which can simultaneously explore the
learning of LSTM and visual-semantic embedding. The former aims to locally
maximize the probability of generating the next word given previous words and
visual content, while the latter is to create a visual-semantic embedding space
for enforcing the relationship between the semantics of the entire sentence and
visual content. Our proposed LSTM-E consists of three components: a 2-D and/or 3-D deep convolutional neural network for learning a powerful video representation, a deep RNN for generating sentences, and a joint embedding model for exploring the relationships between visual content and sentence semantics. Experiments on the YouTube2Text dataset show that our proposed LSTM-E achieves the best reported performance to date in generating natural sentences: 45.3% and 31.0% in terms of BLEU@4 and METEOR, respectively. We also demonstrate that LSTM-E is superior to several state-of-the-art techniques in predicting Subject-Verb-Object (SVO) triplets.
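A minimal sketch of the kind of joint objective described above, coupling a next-word generation loss with a visual-semantic embedding loss, is given below. It assumes a PyTorch implementation; the layer sizes, the mean-pooled sentence embedding, the cosine relevance loss, and the weighting factor are placeholders rather than the paper's exact design.

import torch
import torch.nn as nn

class LSTME(nn.Module):
    """Sketch of an LSTM-E-style model: video features feed both an embedding
    space and an LSTM decoder that generates the sentence word by word."""
    def __init__(self, feat_dim=4096, embed_dim=512, vocab_size=10000):
        super().__init__()
        self.video_proj = nn.Linear(feat_dim, embed_dim)    # video -> embedding space
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, embed_dim, batch_first=True)
        self.out = nn.Linear(embed_dim, vocab_size)

    def forward(self, video_feat, captions):
        v = self.video_proj(video_feat)            # (B, E) video embedding
        w = self.word_embed(captions[:, :-1])      # teacher-forcing inputs
        h, _ = self.lstm(w)                        # (B, T-1, E) hidden states
        logits = self.out(h)                       # next-word predictions
        s = self.word_embed(captions).mean(dim=1)  # sentence embedding (mean pooling, an assumption)
        return logits, v, s

def lstm_e_loss(logits, targets, v, s, alpha=0.5):
    # Generation loss: cross-entropy of the next word (targets = captions[:, 1:]).
    gen = nn.functional.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    # Relevance loss: pull the video and sentence embeddings together.
    rel = (1.0 - nn.functional.cosine_similarity(v, s)).mean()
    return alpha * gen + (1.0 - alpha) * rel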
SeeReader: An (Almost) Eyes-Free Mobile Rich Document Viewer
Reading documents on mobile devices is challenging. Not only are screens small and difficult to read, but navigating an environment using limited visual attention can be difficult and potentially dangerous. Reading content aloud using text-to-speech (TTS) processing can mitigate these problems, but only for content that does not include rich visual information. In this paper, we introduce a new technique, SeeReader, that combines TTS with automatic content recognition and document presentation control, allowing users to listen to documents while also being notified of important visual content. Together, these services allow users to read rich documents on mobile devices while maintaining awareness of their visual environment.
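As a very rough illustration of the read-aloud-with-notification idea, the sketch below walks a document's segments, speaks the text with TTS, and announces figures so the listener knows to glance at the screen. The segment format and the use of the pyttsx3 library are assumptions for illustration; SeeReader's actual content recognition and presentation control are not described in this abstract.

import pyttsx3  # offline text-to-speech; any TTS backend would work

# Hypothetical document model: a list of segments, each either running text or
# a visual element (figure, photo, chart) produced by content recognition.
document = [
    {"kind": "text", "content": "Reading documents on mobile devices is challenging."},
    {"kind": "figure", "caption": "Figure 1: system overview"},
    {"kind": "text", "content": "SeeReader combines TTS with content recognition."},
]

def read_aloud(segments):
    engine = pyttsx3.init()
    for seg in segments:
        if seg["kind"] == "text":
            engine.say(seg["content"])
        else:
            # Notify the listener that important visual content is present,
            # so they can look at the screen when it is safe to do so.
            engine.say("There is a figure here: " + seg.get("caption", ""))
        engine.runAndWait()

read_aloud(document)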
Where does brain neural activation in aesthetic responses to visual art occur? Meta-analytic evidence from neuroimaging studies
Here we aimed at finding the neural correlates of the general aspect of visual aesthetic experience (VAE) and those more strictly correlated with the content of the artworks. We applied a general activation likelihood estimation (ALE) meta-analysis to 47 fMRI experiments described in 14 published studies. We also performed four separate ALE analyses in order to identify the neural substrates of reactions to specific categories of artworks, namely portraits, representations of real-world visual scenes, abstract paintings, and body sculptures. The general ALE revealed that VAE relies on a bilateral network of areas, and the individual ALE analyses revealed different maximal activations for the artworks' categories as a function of their content. Specifically, different content-dependent areas of the ventral visual stream are involved in VAE, but a few additional brain areas are involved as well. Thus, aesthetic-related neural responses to art recruit widely distributed networks in both hemispheres, including content-dependent brain areas of the ventral visual stream. Together, the results suggest that aesthetic responses are not independent of sensory, perceptual, and cognitive processes.
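For readers unfamiliar with ALE, the core computation builds, for each experiment, a modelled activation map from its reported coordinates and then combines the maps across experiments. The numpy sketch below illustrates that idea only; the kernel width, the toy grid, and the omission of permutation-based significance testing are simplifications, and real analyses use dedicated meta-analysis tools rather than code like this.

import numpy as np

def modeled_activation(foci, grid, sigma=8.0):
    """Per-experiment modelled activation map: at each grid point, take the
    maximum of Gaussian kernels centred on that experiment's reported foci."""
    ma = np.zeros(len(grid))
    for focus in foci:
        d2 = np.sum((grid - focus) ** 2, axis=1)
        ma = np.maximum(ma, np.exp(-d2 / (2.0 * sigma ** 2)))
    return ma

def ale_map(experiments, grid):
    """ALE score: probability that at least one experiment activates a voxel,
    treating experiments as independent, i.e. 1 - prod(1 - MA_i)."""
    not_active = np.ones(len(grid))
    for foci in experiments:
        not_active *= 1.0 - modeled_activation(foci, grid)
    return 1.0 - not_active

# Toy example: two experiments, each reporting one focus (in mm), on a tiny grid.
grid = np.array([[x, y, z] for x in (-10, 0, 10)
                           for y in (-10, 0, 10)
                           for z in (-10, 0, 10)], dtype=float)
experiments = [np.array([[0.0, 0.0, 0.0]]), np.array([[0.0, 10.0, 0.0]])]
print(ale_map(experiments, grid).round(3))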
Multi modal multi-semantic image retrieval
The rapid growth in the volume of visual information, e.g. images and video, can overwhelm users’ ability to find and access the specific visual information of interest to them. In recent years, ontology knowledge-based (KB) image information retrieval techniques have been adopted in order to attempt to extract knowledge from these images, enhancing the retrieval performance. A KB framework is presented to
promote semi-automatic annotation and semantic image retrieval using multimodal
cues (visual features and text captions). In addition, a hierarchical structure for the KB
allows metadata to be shared that supports multi-semantics (polysemy) for concepts.
The framework builds up an effective knowledge base pertaining to a domain specific
image collection, e.g. sports, and is able to disambiguate and assign high level
semantics to ‘unannotated’ images.
Local feature analysis of visual content, namely using Scale Invariant Feature Transform (SIFT) descriptors, has been deployed in the ‘Bag of Visual Words’
model (BVW) as an effective method to represent visual content information and to
enhance its classification and retrieval. Local features are more useful than global
features, e.g. colour, shape or texture, as they are invariant to image scale, orientation
and camera angle. An innovative approach is proposed for the representation,
annotation and retrieval of visual content using a hybrid technique based upon the use of an unstructured visual word model and upon a (structured) hierarchical ontology KB
model. The structural model facilitates the disambiguation of unstructured visual
words and a more effective classification of visual content, compared to a vector
space model, through exploiting local conceptual structures and their relationships.
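The ‘Bag of Visual Words’ pipeline mentioned above can be sketched in a few lines with OpenCV and scikit-learn: SIFT descriptors from a training collection are clustered into a visual vocabulary, and each image is then represented as a histogram of visual-word occurrences. Plain k-means is used here in place of the thesis's semantic local adaptive clustering (SLAC), so this is only a baseline illustration.

import cv2
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(image_paths, n_words=200):
    """Cluster SIFT descriptors from a training set into a visual vocabulary."""
    sift = cv2.SIFT_create()
    descriptors = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        if desc is not None:
            descriptors.append(desc)
    return KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(descriptors))

def bvw_histogram(image_path, vocabulary):
    """Represent one image as a normalised histogram of visual-word counts."""
    sift = cv2.SIFT_create()
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    words = vocabulary.predict(desc)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / hist.sum()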
The key contributions of this framework in using local features for image
representation include: first, a method to generate visual words using the semantic
local adaptive clustering (SLAC) algorithm which takes term weight and spatial
locations of keypoints into account. Consequently, the semantic information is
preserved. Second, a technique is used to detect the domain-specific ‘non-informative visual words’ which are ineffective at representing the content of visual data and degrade its categorisation ability. Third, a method is proposed to combine an ontology model with a visual word model to resolve synonymy (visual heterogeneity) and polysemy problems. The experimental results show that this approach can discover
semantically meaningful visual content descriptions and recognise specific events,
e.g., sports events, depicted in images efficiently.
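The thesis's specific criterion for spotting domain-specific ‘non-informative visual words’ is not given in this abstract, but a common stand-in is a document-frequency filter over the visual-word histograms: words that occur in almost none or almost all of the images carry little discriminative information. The thresholds and function names below are assumptions.

import numpy as np

def non_informative_words(bvw_histograms, low=0.05, high=0.90):
    """Flag visual words that appear in very few or in almost all images;
    bvw_histograms is an (n_images, n_words) array of word counts."""
    presence = (np.asarray(bvw_histograms) > 0).mean(axis=0)   # document frequency
    return np.where((presence < low) | (presence > high))[0]

def prune(bvw_histograms, stop_words):
    """Drop the flagged visual words from every image representation."""
    keep = np.setdiff1d(np.arange(bvw_histograms.shape[1]), stop_words)
    return bvw_histograms[:, keep]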
Since discovering the semantics of an image is an extremely challenging problem, one
promising approach to enhance visual content interpretation is to use any associated
textual information that accompanies an image, as a cue to predict the meaning of an
image, by transforming this textual information into a structured annotation for an
image, e.g. using XML, RDF, OWL or MPEG-7. Although text and image are distinct types of information representation and modality, there are some strong, invariant, implicit connections between images and any accompanying text information.
Semantic analysis of image captions can be used by image retrieval systems to
retrieve selected images more precisely. To do this, Natural Language Processing (NLP) is first exploited in order to extract concepts from image captions. Next, an
ontology-based knowledge model is deployed in order to resolve natural language
ambiguities. To deal with the accompanying text information, two methods to extract
knowledge from textual information have been proposed. First, metadata can be
extracted automatically from text captions and restructured with respect to a semantic
model. Second, the use of Latent Semantic Indexing (LSI) in relation to a domain-specific ontology-based
knowledge model enables the combined framework to tolerate ambiguities and
variations (incompleteness) of metadata. The use of the ontology-based knowledge
model allows the system to find indirectly relevant concepts in image captions and
thus leverage these to represent the semantics of images at a higher level.
Experimental results show that the proposed framework significantly enhances image retrieval and leads to a narrowing of the semantic gap between lower-level machine-derived and higher-level human-understandable conceptualisation.
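As an illustration of how LSI lets caption-based retrieval tolerate wording variations (the ontology-based disambiguation described above is not reproduced here), the scikit-learn sketch below applies truncated SVD to a TF-IDF matrix of captions and ranks them against a query in the latent topic space. The toy captions and the number of components are placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

captions = [
    "a footballer scores a goal during the match",
    "tennis player serves on a grass court",
    "soccer players celebrate after the winning goal",
]

# LSI = truncated SVD over a TF-IDF term-document matrix: captions that share
# latent topics end up close together even when they use different words.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(captions)
lsi = TruncatedSVD(n_components=2, random_state=0)
caption_topics = lsi.fit_transform(tfidf)

# Query the latent space; the soccer-related captions should typically rank higher.
query_topics = lsi.transform(vectorizer.transform(["goal scored in a match"]))
scores = cosine_similarity(query_topics, caption_topics)[0]
for score, caption in sorted(zip(scores, captions), reverse=True):
    print(round(float(score), 3), caption)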
