Know2Look: Commonsense Knowledge for Visual Search
With the rise in popularity of social media, images accompanied by contextual text form a huge section of the web. However, search and retrieval of documents still depend largely on textual cues alone. Although visual cues have started to gain focus, imperfections in object/scene detection mean they do not yet lead to significantly improved results. We hypothesize that the use of background commonsense knowledge on query terms can significantly aid the retrieval of documents with associated images. To this end we deploy three different modalities - text, visual cues, and commonsense knowledge pertaining to the query - as a recipe for efficient search and retrieval.
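As an illustration only, and not the authors' exact recipe, a late-fusion ranker over the three modalities might look like the Python sketch below; the weights, the set-overlap scorer, and the expand() hook standing in for a commonsense knowledge base are all assumptions:

    from dataclasses import dataclass

    @dataclass
    class Doc:
        text: str                 # contextual text accompanying the image
        detected_objects: set     # labels from an object/scene detector

    def retrieve(query_terms, docs, expand, w_text=0.5, w_vis=0.3, w_ck=0.2):
        """Rank docs by a weighted sum of text, visual, and commonsense scores."""
        ck_terms = expand(query_terms)  # e.g. {"tiger"} -> {"striped", "feline"}

        def overlap(terms, bag):        # crude set-overlap relevance score
            return len(terms & bag) / max(len(terms), 1)

        scored = []
        for d in docs:
            words = set(d.text.lower().split())
            s = (w_text * overlap(query_terms, words)
                 + w_vis * overlap(query_terms, d.detected_objects)
                 + w_ck * overlap(ck_terms, words | d.detected_objects))
            scored.append((s, d))
        return [d for s, d in sorted(scored, key=lambda p: p[0], reverse=True)]

    docs = [Doc("a tiger resting in the jungle", {"tiger", "tree"}),
            Doc("city skyline at night", {"building"})]
    print(retrieve({"tiger"}, docs, lambda q: {"striped", "feline", "jungle"}))

Documents whose text or detections overlap the commonsense expansion of the query get credit even when the literal query term is only weakly matched, which is the hypothesis the abstract puts forward.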
Explicit Reasoning over End-to-End Neural Architectures for Visual Question Answering
Many vision and language tasks require commonsense reasoning beyond
data-driven image and natural language processing. Here we adopt Visual
Question Answering (VQA) as an example task, where a system is expected to
answer a question in natural language about an image. Current state-of-the-art
systems have attempted to solve the task using deep neural architectures and
achieved promising performance. However, the resulting systems are generally
opaque and struggle to understand questions for which extra knowledge
is required. In this paper, we present an explicit reasoning layer on top of a
set of penultimate neural network based systems. The reasoning layer enables
reasoning and answering questions where additional knowledge is required, and
at the same time provides an interpretable interface to the end users.
Specifically, the reasoning layer adopts a Probabilistic Soft Logic (PSL) based
engine to reason over a basket of inputs: visual relations, the semantic parse
of the question, and background ontological knowledge from word2vec and
ConceptNet. Experimental analysis of the answers and the key evidential
predicates generated on the VQA dataset validates our approach.
Comment: 9 pages, 3 figures, AAAI 2018
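For readers unfamiliar with PSL, the sketch below shows the Lukasiewicz relaxations that PSL-style engines optimise over; the toy rule, predicate names, and truth values are invented for illustration and are not taken from the paper:

    # Lukasiewicz relaxations used by Probabilistic Soft Logic (PSL).
    def l_and(a, b):   # soft conjunction
        return max(0.0, a + b - 1.0)

    def l_or(a, b):    # soft disjunction
        return min(1.0, a + b)

    def distance_to_satisfaction(body, head):
        # A rule body -> head is satisfied when head >= body; PSL penalises
        # a grounding by how far it is from satisfaction.
        return max(0.0, body - head)

    # Toy rule: similar(ans, q_word) AND visual_rel(img, ans) -> candidate(ans)
    body = l_and(0.8, 0.9)   # assumed soft truths of the two body atoms
    head = 0.6               # assumed current truth of candidate(ans)
    print(distance_to_satisfaction(body, head))  # ~0.1, a small penalty

Inference then amounts to choosing the soft truth values (here, of the candidate answers) that minimise the weighted sum of such distances across all grounded rules.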
Semantics-based selection of everyday concepts in visual lifelogging
Concept-based indexing, based on identifying various semantic concepts appearing in multimedia, is an attractive option for multimedia retrieval, and much research tries to bridge the semantic gap between the media’s low-level features and high-level semantics. Research into concept-based multimedia retrieval has generally focused on detecting concepts in high-quality media such as broadcast TV or movies, but this is not well addressed in other domains like lifelogging, where the original data is captured with poorer quality. We argue that in noisy domains such as lifelogging, the management of data needs to include semantic reasoning in order to deduce a set of concepts to represent lifelog content for applications like searching, browsing or summarisation. Using semantic concepts to manage lifelog data relies on the fusion of automatically-detected concepts to provide a better understanding of the lifelog data. In this paper, we investigate the selection of semantic concepts for lifelogging, which includes reasoning on semantic networks using a density-based approach. In a series of experiments we compare different semantic reasoning approaches, and the experimental evaluations we report on lifelog data show the efficacy of our approach.
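A minimal sketch of what density-based concept selection could look like, assuming a pairwise semantic relatedness function (for instance one derived from a semantic network); the fusion of detector confidence with neighbourhood density and the threshold are illustrative, not the paper's exact method:

    def select_concepts(detections, similarity, k=3, threshold=0.5):
        """detections: {concept: detector confidence in [0, 1]};
        similarity(a, b): semantic relatedness in [0, 1] (assumed given,
        e.g. computed over a semantic network)."""
        selected = {}
        names = list(detections)
        for c in names:
            # Density: mean similarity to the k most related other concepts,
            # so mutually related concepts corroborate each other.
            sims = sorted((similarity(c, o) for o in names if o != c),
                          reverse=True)[:k]
            density = sum(sims) / max(len(sims), 1)
            score = detections[c] * density   # fuse confidence and density
            if score >= threshold:
                selected[c] = score
        return selected

The intuition matches the noisy-lifelog setting: an isolated, weakly related detection is pruned, while clusters of mutually related concepts survive to represent the lifelog content.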
FVQA: Fact-based Visual Question Answering
Visual Question Answering (VQA) has attracted a lot of attention in both
Computer Vision and Natural Language Processing communities, not least because
it offers insight into the relationships between two important sources of
information. Current datasets, and the models built upon them, have focused on
questions which are answerable by direct analysis of the question and image
alone. The set of such questions that require no external information to answer
is interesting, but very limited. It excludes, for example, questions which
require common sense or basic factual knowledge to answer. Here we introduce
FVQA, a VQA dataset which requires, and supports, much deeper reasoning. FVQA
only contains questions which require external information to answer.
We thus extend a conventional visual question answering dataset, which
contains image-question-answer triplets, through additional
image-question-answer-supporting fact tuples. The supporting fact is
represented as a structural triplet, such as <Cat, CapableOf, ClimbingTrees>.
We evaluate several baseline models on the FVQA dataset, and describe a novel
model which is capable of reasoning about an image on the basis of supporting
facts.
Comment: 16 pages
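To make the supporting-fact mechanism concrete, here is a toy lookup over such triplets; the two facts, the grounding check, and the word-overlap heuristic are invented for this sketch and far simpler than the models evaluated in the paper:

    # Toy fact base of (subject, relation, object) triplets.
    FACTS = [
        ("Cat", "CapableOf", "ClimbingTrees"),
        ("Umbrella", "UsedFor", "Shade"),
    ]

    def answer(question_words, visual_concepts, facts=FACTS):
        """Return the subject of the best supporting fact that is both
        grounded in the image and relevant to the question."""
        visible = {c.lower() for c in visual_concepts}
        best, best_score = None, 0
        for subj, rel, obj in facts:
            if subj.lower() not in visible:
                continue  # the supporting fact must be grounded in the image
            score = sum(w.lower() in (rel + obj).lower() for w in question_words)
            if score > best_score:
                best, best_score = subj, score
        return best

    print(answer(["which", "animal", "can", "climb", "trees"], ["cat", "sofa"]))
    # -> Cat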
Convolutional Sparse Kernel Network for Unsupervised Medical Image Analysis
The availability of large-scale annotated image datasets and recent advances
in supervised deep learning methods enable the end-to-end derivation of
representative image features that can impact a variety of image analysis
problems. Such supervised approaches, however, are difficult to implement in
the medical domain where large volumes of labelled data are difficult to obtain
due to the complexity of manual annotation and inter- and intra-observer
variability in label assignment. We propose a new convolutional sparse kernel
network (CSKN), which is a hierarchical unsupervised feature learning framework
that addresses the challenge of learning representative visual features in
medical image analysis domains where there is a lack of annotated training
data. Our framework has three contributions: (i) We extend kernel learning to
identify and represent invariant features across image sub-patches in an
unsupervised manner. (ii) We initialise our kernel learning with a layer-wise
pre-training scheme that leverages the sparsity inherent in medical images to
extract initial discriminative features. (iii) We adapt a multi-scale spatial
pyramid pooling (SPP) framework to capture subtle geometric differences between
learned visual features. We evaluated our framework in medical image retrieval
and classification on three public datasets. Our results show that our CSKN had
better accuracy when compared to other conventional unsupervised methods and
comparable accuracy to methods that used state-of-the-art supervised
convolutional neural networks (CNNs). Our findings indicate that our
unsupervised CSKN provides an opportunity to leverage unannotated big data in
medical imaging repositories.
Comment: Accepted by Medical Image Analysis (with a new title 'Convolutional
Sparse Kernel Network for Unsupervised Medical Image Analysis'). The
manuscript is available at https://doi.org/10.1016/j.media.2019.06.005
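Of the three contributions, the multi-scale SPP step is the easiest to sketch generically; the NumPy version below uses the common 1x1/2x2/4x4 pyramid, which is an assumption rather than the paper's exact configuration:

    import numpy as np

    def spp(feature_map, levels=(1, 2, 4)):
        """Max-pool a C x H x W feature map over 1x1, 2x2 and 4x4 grids and
        concatenate, yielding a fixed-length, coarsely geometry-aware vector."""
        c, h, w = feature_map.shape
        pooled = []
        for n in levels:
            ys = np.linspace(0, h, n + 1).astype(int)  # cell boundaries
            xs = np.linspace(0, w, n + 1).astype(int)
            for i in range(n):
                for j in range(n):
                    cell = feature_map[:, ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                    pooled.append(cell.max(axis=(1, 2)))  # per-channel max
        return np.concatenate(pooled)  # length = C * (1 + 4 + 16)

    fmap = np.random.rand(64, 13, 17)  # stand-in for unsupervised CSKN features
    print(spp(fmap).shape)             # (1344,) = 64 * 21

Pooling at several grid resolutions is what lets the descriptor capture the subtle geometric differences between learned features mentioned above.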
Subspace Alignment Based Domain Adaptation for RCNN Detector
In this paper, we propose subspace alignment based domain adaptation of the
state of the art RCNN based object detector. The aim is to be able to achieve
high quality object detection in novel, real world target scenarios without
requiring labels from the target domain. While unsupervised domain adaptation
has been studied for object classification, it has remained relatively
unexplored for object detection. In subspace-based domain adaptation for
objects, we need access to source and target subspaces for the bounding box
features. The absence of supervision (labels and bounding boxes are absent)
makes the task challenging. In this paper, we show that we can still adapt
subspaces that are localized to the object by obtaining detections from the
RCNN detector trained on the source domain and applied to the target. Then we form localized
subspaces from the detections and show that subspace alignment based adaptation
between these subspaces yields improved object detection. The evaluation uses
the challenging real-world setting of PASCAL VOC as the source and the
validation set of the Microsoft COCO dataset as the target, across various categories.
Comment: 26th British Machine Vision Conference, Swansea, UK
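For context, the classic subspace alignment step reduces to a few lines of linear algebra; in the sketch below the feature dimensionality, the subspace size d, and the random stand-in features are illustrative, not the paper's settings:

    import numpy as np

    def subspace(X, d):
        """Top-d PCA basis (as columns) of row-wise samples X."""
        Xc = X - X.mean(axis=0)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        return vt[:d].T                        # shape (D, d)

    def align(source_feats, target_feats, d=64):
        Xs = subspace(source_feats, d)         # source box-feature subspace
        Xt = subspace(target_feats, d)         # subspace from target detections
        M = Xs.T @ Xt                          # alignment matrix
        src_aligned = source_feats @ (Xs @ M)  # source mapped into aligned space
        tgt_proj = target_feats @ Xt
        return src_aligned, tgt_proj

    src = np.random.randn(500, 4096)  # stand-in RCNN bounding-box features
    tgt = np.random.randn(300, 4096)
    a, b = align(src, tgt)
    print(a.shape, b.shape)           # (500, 64) (300, 64)

The paper's contribution lies in where the subspaces come from in the absence of target labels: they are built from the source-trained detector's own detections on target images, localized to the object.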