62 research outputs found
Self-Adaptive Hierarchical Sentence Model
The ability to accurately model a sentence at varying stages (e.g.,
word-phrase-sentence) plays a central role in natural language processing. As
an effort towards this goal, we propose a self-adaptive hierarchical sentence
model (AdaSent). AdaSent effectively forms a hierarchy of representations from
words to phrases and then to sentences through recursive gated local
composition of adjacent segments. We design a competitive mechanism (through
gating networks) to allow the representations of the same sentence to be
engaged in a particular learning task (e.g., classification), therefore
effectively mitigating the gradient vanishing problem persistent in other
recursive models. Both qualitative and quantitative analysis shows that AdaSent
can automatically form and select the representations suitable for the task at
hand during training, yielding superior classification performance over
competitor models on five benchmark data sets.
Comment: 8 pages, 7 figures, accepted as a full paper at IJCAI 2015
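The recursive gated local composition described above can be sketched in a few lines. Everything here is illustrative (dimensions, random weights, a three-way softmax gate over left child, right child, and a fresh composition); the paper's exact parameterisation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # representation dimension (illustrative)

# Hypothetical parameters: composition matrix and gate scorer.
W_comp = rng.normal(scale=0.1, size=(d, 2 * d))
w_gate = rng.normal(scale=0.1, size=(3, 2 * d))

def compose(h_left, h_right):
    """Gated local composition of two adjacent segment representations.

    A softmax gate decides how much the parent copies the left child,
    the right child, or a fresh non-linear composition of both.
    """
    pair = np.concatenate([h_left, h_right])   # (2d,)
    h_new = np.tanh(W_comp @ pair)             # candidate composition
    gate = np.exp(w_gate @ pair)
    gate /= gate.sum()                         # softmax over the 3 choices
    return gate[0] * h_left + gate[1] * h_right + gate[2] * h_new

# One pyramid level: a 4-word "sentence" shrinks to 3 segment vectors.
words = [rng.normal(size=d) for _ in range(4)]
level1 = [compose(words[i], words[i + 1]) for i in range(3)]
```

Because each parent can simply copy a child, gradients have short paths through the pyramid, which is the intuition behind the mitigation of vanishing gradients.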
DeepCoder: Semi-parametric Variational Autoencoders for Automatic Facial Action Coding
The human face exhibits an inherent hierarchy in its representations (i.e.,
holistic facial expressions can be encoded via a set of facial action units
(AUs) and their intensity). Variational (deep) auto-encoders (VAE) have shown
great results in unsupervised extraction of hierarchical latent representations
from large amounts of image data, while being robust to noise and other
undesired artifacts. Potentially, this makes VAEs a suitable approach for
learning facial features for AU intensity estimation. Yet, most existing
VAE-based methods apply classifiers learned separately from the encoded
features. By contrast, the non-parametric (probabilistic) approaches, such as
Gaussian Processes (GPs), typically outperform their parametric counterparts,
but cannot deal easily with large amounts of data. To this end, we propose a
novel VAE semi-parametric modeling framework, named DeepCoder, which combines
the modeling power of parametric (convolutional) and nonparametric (ordinal
GPs) VAEs, for joint learning of (1) latent representations at multiple levels
in a task hierarchy, and (2) classification of multiple ordinal outputs. We
show on benchmark datasets for AU intensity estimation that the proposed
DeepCoder outperforms the state-of-the-art approaches, and related VAEs and
deep learning models.
Comment: accepted at ICCV 2017
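The two-stage idea, an amortised VAE encoder feeding an ordinal output head, can be sketched as follows. All weights and dimensions are made up for illustration, and the simple cut-point classifier stands in for the paper's ordinal GPs; it only shows why ordered thresholds respect the natural ordering of AU intensity levels.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_z, n_levels = 32, 4, 6   # input dim, latent dim, AU intensities 0-5

# Hypothetical encoder weights (stand-in for the convolutional VAE encoder).
W_mu, W_lv = rng.normal(scale=0.1, size=(2, d_z, d_in))

def encode(x):
    """Amortised Gaussian posterior q(z|x) via the reparameterisation trick."""
    mu, logvar = W_mu @ x, W_lv @ x
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(d_z)

# Ordinal head: one shared projection plus ordered cut-points, so predictions
# respect the ordering 0 < 1 < ... < 5 (a toy stand-in for ordinal GPs).
w_proj = rng.normal(size=d_z)
cuts = np.sort(rng.normal(size=n_levels - 1))  # monotone thresholds

def predict_intensity(x):
    score = w_proj @ encode(x)
    return int(np.sum(score > cuts))           # count thresholds exceeded
```

The key design point is that a single continuous score crossed against shared thresholds cannot produce order-inconsistent predictions, unlike independent per-level classifiers.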
Learning Finer-class Networks for Universal Representations
Many real-world visual recognition use-cases cannot directly benefit from
state-of-the-art CNN-based approaches because little annotated data is
available. The usual way to deal with this is to transfer a representation
pre-learned on a large annotated source-task onto a target-task of interest.
This raises the question of how well the original representation is
"universal", that is to say directly adapted to many different target-tasks. To
improve such universality, the state of the art consists of training networks
on a diversified source problem, modified by adding either generic or
specific categories to the initial set of categories. In this vein, we propose
a method that exploits finer classes than the most specific ones available, for
which no annotations exist. We rely on unsupervised learning and a
bottom-up split-and-merge strategy. We show that our method learns more
universal representations than the state of the art, leading to significantly
better results on 10 target-tasks from multiple domains, using several network
architectures, either alone or combined with networks learned at a coarser
semantic level.
Comment: British Machine Vision Conference (BMVC) 2018
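The "split" half of an unsupervised split-and-merge strategy can be illustrated with a tiny k-means over the features of one annotated class; the pseudo-sub-classes it produces play the role of the finer classes. This is only a sketch of the split stage (the paper's actual procedure, including the merge step, may differ).

```python
import numpy as np

rng = np.random.default_rng(2)

def split_class(features, k=2, iters=10):
    """Split one annotated class into k finer pseudo-classes, without labels,
    using a minimal k-means. A later bottom-up merge step could re-join
    splits that do not improve the representation."""
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Squared distances of every sample to every center: (n, k).
        d2 = ((features[:, None, :] - centers[None]) ** 2).sum(-1)
        assign = d2.argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = features[assign == j].mean(0)
    return assign

# Two latent sub-groups hidden inside one "class" of 100 samples.
feats = np.concatenate([rng.normal(0, 0.3, (50, 16)),
                        rng.normal(3, 0.3, (50, 16))])
sub = split_class(feats)   # pseudo-labels in {0, 1}
```

Training the network to discriminate these pseudo-sub-classes forces it to encode finer distinctions than the original annotation provides.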
Revisiting knowledge transfer for training object class detectors
We propose to revisit knowledge transfer for training object detectors on
target classes from weakly supervised training images, helped by a set of
source classes with bounding-box annotations. We present a unified knowledge
transfer framework based on training a single neural network multi-class object
detector over all source classes, organized in a semantic hierarchy. This
generates proposals with scores at multiple levels in the hierarchy, which we
use to explore knowledge transfer over a broad range of generality, ranging
from class-specific (bicycle to motorbike) to class-generic (objectness to any
class). Experiments on the 200 object classes in the ILSVRC 2013 detection
dataset show that our technique: (1) leads to much better performance on the
target classes (70.3% CorLoc, 36.9% mAP) than a weakly supervised baseline
which uses manually engineered objectness [11] (50.5% CorLoc, 25.4% mAP). (2)
delivers target object detectors reaching 80% of the mAP of their fully
supervised counterparts. (3) outperforms the best reported transfer learning
results on this dataset (+41% CorLoc and +3% mAP over [18, 46], +16.2% mAP over
[32]). Moreover, we also carry out several across-dataset knowledge transfer
experiments [27, 24, 35] and find that (4) our technique outperforms the weakly
supervised baseline in all dataset pairs by 1.5x-1.9x, establishing its general
applicability.
Comment: CVPR 2018
Colour for the Advancement of Deep Learning in Computer Vision
This thesis explores several research areas of Deep Learning for computer vision that concern colour. First, it considers one of the most long-standing challenges in Deep Learning: how can Deep Learning algorithms learn successfully without human-annotated data? To that end, the thesis examines using the colours in images to learn meaningful representations of vision as a substitute for learning from hand-annotated data. Second is a related topic: applying Deep Learning to automate the complex graphics task of image colourisation, the process of adding colours to black-and-white images. Third, the thesis explores colour spaces and how the representation of colours in images affects the performance of Deep Learning models.
Graph Deep Learning: Methods and Applications
The past few years have seen the growing prevalence of deep neural networks in various application domains, including image processing, computer vision, speech recognition, machine translation, self-driving cars, game playing, social networks, bioinformatics, and healthcare. Due to its broad applications and strong performance, deep learning, a subfield of machine learning and artificial intelligence, is changing everyone's life.
Graph learning has been another hot field in the machine learning and data mining communities, which learns knowledge from graph-structured data. Examples of graph learning range from social network analysis, such as community detection and link prediction, to relational machine learning, such as knowledge graph completion and recommender systems, to multi-graph tasks such as graph classification and graph generation.
An emerging new field, graph deep learning, aims at applying deep learning to graphs. To deal with graph-structured data, graph neural networks (GNNs) have been invented in recent years, which directly take graphs as input and output graph/node representations. Although GNNs have shown superior performance to traditional methods in tasks such as semi-supervised node classification, there still exists a wide range of other important graph learning problems where either GNNs' applicability has not been explored or GNNs have less satisfying performance.
In this dissertation, we dive deeper into the field of graph deep learning. By developing new algorithms, architectures, and theories, we push graph neural networks' boundaries to a much wider range of graph learning problems.
The problems we explore include: 1) graph classification; 2) medical ontology embedding; 3) link prediction; 4) recommender systems; 5) graph generation; and 6) graph structure optimization.
We first focus on two graph representation learning problems: graph classification and medical ontology embedding. For graph classification, we develop a novel deep GNN architecture which aggregates node features through a novel SortPooling layer that replaces the simple summing used in previous works. We demonstrate its state-of-the-art graph classification performance on benchmark datasets. For medical ontology embedding, we propose a novel hierarchical attention propagation model, which uses an attention mechanism to learn embeddings of medical concepts from hierarchically structured medical ontologies such as ICD-9 and CCS. We validate the learned embeddings on sequential procedure/diagnosis prediction tasks with real patient data.
Then we investigate GNNs' potential for predicting relations, specifically link prediction and recommender systems. For link prediction, we first develop a theory unifying various traditional link prediction heuristics, and then design a framework to automatically learn suitable heuristics from a given network based on GNNs. Our model shows unprecedentedly strong link prediction performance, significantly outperforming all traditional methods. For recommender systems, we propose a novel graph-based matrix completion model, which uses a GNN to learn graph structure features from the bipartite graph formed by user-item interactions. Our model not only outperforms various matrix completion baselines, but also demonstrates excellent transfer learning ability: a model trained on MovieLens can be directly used to predict Douban movie ratings with high accuracy.
Finally, we explore GNNs' applicability to graph generation and graph structure optimization.
We focus on a specific type of graph which usually carries computations, namely directed acyclic graphs (DAGs). We develop a variational autoencoder (VAE) for DAGs and prove that it can injectively map computations into a latent space. This injectivity allows us to perform optimization in the continuous latent space instead of the original discrete structure space. We then apply our VAE to two types of DAGs: neural network architectures and Bayesian networks. Experiments show that our model not only generates novel and valid DAGs, but also finds high-quality neural architectures and Bayesian networks by performing Bayesian optimization in its latent space.
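Of the components above, the SortPooling readout is simple enough to sketch directly: order the nodes by a continuous channel, truncate to a fixed size, and zero-pad small graphs so every graph yields a tensor of the same shape. Dimensions here are illustrative; in the actual architecture, the sort key is the last channel of the final graph-convolution layer.

```python
import numpy as np

def sort_pooling(node_feats, k):
    """SortPooling-style readout: order nodes by their last feature channel,
    keep the top k, and zero-pad so every graph yields a fixed-size tensor
    that a downstream 1-D convolution can consume."""
    order = np.argsort(-node_feats[:, -1])   # descending by last channel
    top = node_feats[order[:k]]
    if len(top) < k:                         # small graph: zero-pad
        top = np.vstack([top, np.zeros((k - len(top), node_feats.shape[1]))])
    return top

g_small = np.arange(6, dtype=float).reshape(3, 2)   # 3 nodes, 2 channels
g_large = np.arange(10, dtype=float).reshape(5, 2)  # 5 nodes, 2 channels
out_small, out_large = sort_pooling(g_small, 4), sort_pooling(g_large, 4)
assert out_small.shape == out_large.shape == (4, 2)
```

The fixed output shape is what lets graphs of different sizes share one classifier, replacing the order-invariant but information-lossy sum readout.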
Robust Lightweight Object Detection
Object detection is a very challenging problem in computer vision and has been a prominent subject of research for nearly three decades. There has been a promising increase in the accuracy and performance of object detectors ever since deep convolutional networks (CNNs) were introduced. CNNs can be trained on large datasets of high-resolution images without flattening them, thereby using the spatial information. Their superior learning ability also makes them ideal for image classification and object detection tasks. Unfortunately, this power comes at a high cost in compute and memory. For instance, the Faster R-CNN detector requires 180 billion FLOPs for training and has over 100 million parameters.
In this project, we explore the popular state-of-the-art object detectors and present their contributions and shortcomings. Then we explore recent lightweight detectors which try to address the issue of high resource requirements by building leaner models. Building upon the contributions of the state-of-the-art object detectors and recent developments in CNN training, we propose our own lightweight detector. We propose a novel CNN block, called the inter-channel dependency block (ICDB), to improve the inter-channel dependency in feature maps. Through experiments on benchmark datasets we demonstrate that our model attains better accuracy than previous methods. Three benchmark datasets, PASCAL VOC 2007, KITTI, and COCO, have been used to demonstrate that our model scales well to different scenarios.
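The abstract does not specify the ICDB's internal design, so the following is only a generic squeeze-and-excitation-style sketch of inter-channel dependency modelling, not the authors' block: pool each channel to a scalar, pass the channel descriptor through a small bottleneck MLP, and rescale channels by the resulting gates. All shapes and weights are illustrative.

```python
import numpy as np

def channel_reweight(fmap, W1, W2):
    """Squeeze-and-excitation-style channel re-weighting (a hypothetical
    stand-in for the ICDB): global-average-pool each channel, run the
    descriptor through a ReLU bottleneck, and gate channels with sigmoids."""
    squeeze = fmap.mean(axis=(1, 2))                 # (C,) per-channel pooling
    hidden = np.maximum(W1 @ squeeze, 0.0)           # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(W2 @ hidden)))     # per-channel gate in (0,1)
    return fmap * gates[:, None, None]               # rescale each channel

rng = np.random.default_rng(3)
C, H, W = 8, 4, 4
fmap = rng.normal(size=(C, H, W))
W1 = rng.normal(scale=0.1, size=(C // 2, C))         # bottleneck: C -> C/2
W2 = rng.normal(scale=0.1, size=(C, C // 2))         # expand back: C/2 -> C
out = channel_reweight(fmap, W1, W2)
```

Blocks of this kind are popular in lightweight detectors because they add cross-channel interactions for only O(C^2) extra parameters, far cheaper than widening the convolutions.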