Confluence of Vision and Natural Language Processing for Cross-media Semantic Relations Extraction
In this dissertation, we focus on extracting and understanding semantically meaningful relationships between data items of various modalities, especially relations between images and natural language. We explore ideas and techniques for integrating such cross-media semantic relations into machine understanding of large heterogeneous datasets, made available through the expansion of the World Wide Web. Datasets collected from social media websites, news media outlets and blogging platforms usually contain multiple modalities of data. Intelligent systems are needed to automatically make sense of these datasets and present them in such a way that humans can find the relevant pieces of information or get a summary of the available material. Such systems have to process multiple modalities of data, such as images, text, linguistic features, and structured data, in reference to each other. For example, image and video search and retrieval engines must understand the relations between visual and textual data so that they can provide relevant answers, in the form of images and videos, to users' queries presented as text. We emphasize the automatic extraction of semantic topics or concepts from data available in any form, such as images, free-flowing text or metadata. These semantic concepts/topics become the basis of semantic relations across heterogeneous data types, e.g., visual and textual data. A classic problem involving image-text relations is the automatic generation of textual descriptions of images; this problem is the main focus of our work. In many cases, a large amount of text is associated with images, and deep exploration of the linguistic features of such text is required to fully utilize the semantic information encoded in it. A news dataset involving images and news articles is an example of this scenario. We devise frameworks for automatic news image description generation based on the semantic relations of images, as well as semantic understanding of the linguistic features of the news articles.
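As a toy illustration of how shared semantic topics can bridge text and images (an illustrative sketch only, not the dissertation's pipeline; the LDA choice and every name below are assumptions), one could rank a news article's sentences as candidate image descriptions by how closely each sentence's topic mixture matches the article's overall topic profile:

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def rank_caption_candidates(article_sentences, n_topics=10):
    vec = CountVectorizer(stop_words="english")
    counts = vec.fit_transform(article_sentences)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    sent_topics = lda.fit_transform(counts)  # per-sentence topic mixtures
    doc_topics = sent_topics.mean(axis=0)    # article-level topic profile
    # Cosine similarity of each sentence's mixture to the article profile.
    sims = sent_topics @ doc_topics / (
        np.linalg.norm(sent_topics, axis=1) * np.linalg.norm(doc_topics) + 1e-12)
    order = np.argsort(-sims)
    return [article_sentences[i] for i in order]

The sentence ranked first would then serve as a first-cut textual description to pair with the article's image.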
Effective Adaptation in Multi-Task Co-Training for Unified Autonomous Driving
A holistic understanding of multiple downstream tasks simultaneously
requires extracting features with better transferability. Though many
recent self-supervised pre-training methods have
achieved impressive performance on various vision tasks under the prevailing
pretrain-finetune paradigm, their generalization capacity to multi-task
learning scenarios is yet to be explored. In this paper, we extensively
investigate the transfer performance of various types of self-supervised
methods, e.g., MoCo and SimCLR, on three downstream tasks, including semantic
segmentation, drivable area segmentation, and traffic object detection, on the
large-scale driving dataset BDD100K. Surprisingly, we find that their
performance is sub-optimal or even lags far behind the single-task baseline,
which may be due to differences in training objectives and architectural
design inherent in the pretrain-finetune paradigm. To overcome this dilemma as well
as avoid redesigning the resource-intensive pre-training stage, we propose a
simple yet effective pretrain-adapt-finetune paradigm for general multi-task
training, where the off-the-shelf pretrained models can be effectively adapted
without increasing the training overhead. During the adapt stage, we utilize
learnable multi-scale adapters to dynamically adjust the pretrained model
weights supervised by multi-task objectives while leaving the pretrained
knowledge untouched. Furthermore, we regard the vision-language pre-training
model CLIP as a strong complement to the pretrain-adapt-finetune paradigm and
propose a novel adapter named LV-Adapter, which incorporates language priors in
the multi-task model via task-specific prompting and alignment between visual
and textual features. Comment: Accepted at NeurIPS 2022.
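As a rough sketch of the adapt stage described above (illustrative only: the class names and feat_dim are assumptions, the paper's adapters are multi-scale, and LV-Adapter additionally injects CLIP language priors), the general pattern of a frozen backbone plus a residual adapter trained by the multi-task losses might look like:

import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter applied residually, so pretrained features pass through unchanged."""
    def __init__(self, dim, reduction=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(dim, dim // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim // reduction, dim, kernel_size=1),
        )

    def forward(self, x):
        return x + self.net(x)  # residual: add a learned correction to frozen features

class AdaptedBackbone(nn.Module):
    def __init__(self, backbone, feat_dim):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # pretrained knowledge left untouched
        self.adapter = Adapter(feat_dim)  # only the adapter receives gradients

    def forward(self, x):
        with torch.no_grad():
            feats = self.backbone(x)  # frozen pretrained feature map (N, feat_dim, H, W)
        return self.adapter(feats)  # task-adapted features for the multi-task heads

Only the adapter parameters are updated by the multi-task objectives, which keeps the training overhead close to that of fine-tuning a small module.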
Robust subspace learning for static and dynamic affect and behaviour modelling
Machine analysis of human affect and behavior in naturalistic contexts has witnessed growing attention in the last decade from various disciplines, ranging from the social and cognitive sciences to machine learning and computer vision. Endowing machines with the ability to seamlessly detect, analyze, model, and predict, as well as simulate and synthesize, manifestations of internal emotional and behavioral states in real-world data is deemed essential for the deployment of next-generation, emotionally- and socially-competent human-centered interfaces. In this thesis, we are primarily motivated by the problem of modeling, recognizing and predicting spontaneous expressions of non-verbal human affect and behavior, manifested through either low-level facial attributes in static images or high-level semantic events in image sequences. Both visual data and annotations of naturalistic affect and behavior naturally contain noisy measurements of unbounded magnitude at random locations, commonly referred to as 'outliers'. We present here machine learning methods that are robust to such gross, sparse noise.

First, we deal with static analysis of face images, viewing the latter as a superposition of mutually-incoherent, low-complexity components corresponding to facial attributes, such as facial identity, expressions and activation of atomic facial muscle actions. We develop a robust, discriminant dictionary learning framework to extract these components from grossly corrupted training data and combine it with sparse representation to recognize the associated attributes. We demonstrate that our framework can jointly address interrelated classification tasks such as face and facial expression recognition.

Inspired by the well-documented importance of the temporal aspect in perceiving affect and behavior, we direct the bulk of our research efforts into continuous-time modeling of dimensional affect and social behavior. Having identified a gap in the literature, namely the lack of data containing annotations of social attitudes in continuous time and scale, we first curate a new audio-visual database of multi-party conversations from political debates, annotated frame-by-frame in terms of real-valued conflict intensity, and use it to conduct the first study on continuous-time conflict intensity estimation. Our experimental findings corroborate previous evidence indicating the inability of existing classifiers to capture the hidden temporal structure of affective and behavioral displays. We present a novel dynamic behavior analysis framework which models temporal dynamics explicitly, based on the natural assumption that continuous-time annotations of smoothly-varying affect or behavior can be viewed as outputs of a low-complexity linear dynamical system when behavioral cues (features) act as system inputs. A novel robust structured rank minimization framework is proposed to estimate the system parameters in the presence of gross corruptions and partially missing data. Experiments on prediction of dimensional conflict and affect, as well as on multi-object tracking from detection, validate the effectiveness of our predictive framework and demonstrate for the first time that complex human behavior and affect can be learned and predicted from small training sets of person(s)-specific observations.
Open Access
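The core modeling assumption here can be made concrete with a standard input-output state-space formulation (the notation below is assumed for illustration, not taken from the thesis):

\begin{align}
  x_{t+1} &= A\,x_t + B\,u_t, \\
  y_t     &= C\,x_t + D\,u_t,
\end{align}

where u_t collects the behavioral features at time t, y_t is the real-valued annotation (e.g., conflict intensity), and x_t is a low-dimensional hidden state. A low system order manifests as low rank in structured (e.g., Hankel) matrices built from the input-output data, which is why robust structured rank minimization is a natural way to estimate (A, B, C, D) under gross corruptions and missing entries.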
Depth-adaptive methodologies for 3D image categorization
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Image classification is an active topic of computer vision research, dealing with the learning of patterns to allow efficient classification of visual information. However, most research efforts have focused on 2D image classification. In recent years, advances in 3D imaging have enabled new applications and provided new research directions. In this thesis, we present methodologies and techniques for image classification using 3D image data. We conducted our research focusing on the attributes and limitations of depth information with regard to its possible uses. This research led us to the development of depth feature extraction methodologies that contribute to the representation of images, thus enhancing recognition efficiency. We propose a new classification algorithm that adapts to the needs of image representations by implementing a scale-based decision process that exploits the discriminant parts of representations. Learning from the design of existing image representation methods, we introduce our own, which describes each image by its depicted content, providing a more discriminative image representation. We also propose a dictionary learning method that exploits the relations among training features by assessing the similarity of features originating from similar context regions. Finally, we present our research on deep learning algorithms combined with data and techniques used in 3D imaging. Our novel methods provide state-of-the-art results, thus contributing to research on 3D image classification.
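To make the notion of a depth feature concrete, here is an illustrative sketch (the function and its parameters are assumptions, not the thesis' descriptors): a grid of per-cell depth histograms that preserves coarse 3D layout in a fixed-length vector.

import numpy as np

def grid_depth_histograms(depth, grid=(4, 4), bins=8, d_range=(0.0, 10.0)):
    """depth: (H, W) array of depth values (e.g., metres). Returns a 1-D feature."""
    h, w = depth.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            cell = depth[i * h // gh:(i + 1) * h // gh,
                         j * w // gw:(j + 1) * w // gw]
            hist, _ = np.histogram(cell, bins=bins, range=d_range)
            feats.append(hist / max(hist.sum(), 1))  # per-cell normalisation
    return np.concatenate(feats)  # length gh * gw * bins

Such a descriptor could then be concatenated with 2D appearance features before classification or dictionary learning.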
Discovery of Visual Semantics by Unsupervised and Self-Supervised Representation Learning
The success of deep learning in computer vision is rooted in the ability of
deep networks to scale up model complexity as demanded by challenging visual
tasks. As complexity is increased, so is the need for large amounts of labeled
data to train the model. This is associated with a costly human annotation
effort. To address this concern, with the long-term goal of leveraging the
abundance of cheap unlabeled data, we explore methods of unsupervised
"pre-training." In particular, we propose to use self-supervised automatic
image colorization.
We show that traditional methods for unsupervised learning, such as
layer-wise clustering or autoencoders, remain inferior to supervised
pre-training. In search of an alternative, we develop a fully automatic image
colorization method. Our method sets a new state-of-the-art in revitalizing old
black-and-white photography, without requiring human effort or expertise.
Additionally, it gives us a method for self-supervised representation learning.
In order for the model to appropriately re-color a grayscale object, it must
first be able to identify it. This ability, learned entirely self-supervised,
can be used to improve other visual tasks, such as classification and semantic
segmentation. As a future direction for self-supervision, we investigate whether
multiple proxy tasks can be combined to improve generalization. This turns out
to be a challenging open problem. We hope that our contributions to this
endeavor will provide a foundation for future efforts in making
self-supervision compete with supervised pre-training. Comment: Ph.D. thesis.
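To make the pretext task concrete, here is a deliberately simplified sketch (an assumption-laden stand-in, not the thesis' architecture: a tiny network regresses chroma from luminance, whereas the actual system is substantially more elaborate):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ColorizeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(  # features reusable for downstream tasks
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(64, 2, 3, padding=1)  # predict the two chroma channels

    def forward(self, gray):
        return self.head(self.encoder(gray))

def pretext_loss(model, lab_batch):
    """Self-supervised loss: the image itself supplies the targets.
    lab_batch: (N, 3, H, W) tensor in Lab color space."""
    L, ab = lab_batch[:, :1], lab_batch[:, 1:]  # split luminance / chroma
    return F.mse_loss(model(L), ab)

Because the model must recognize what an object is in order to re-color it plausibly, the encoder weights learned this way can be transferred to tasks such as classification or segmentation.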