Attributed Network Embedding for Learning in a Dynamic Environment
Network embedding leverages node proximity to learn a low-dimensional vector
representation for each node in the network. The learned embeddings can
advance various learning tasks such as node classification, network
clustering, and link prediction. Most, if not all, of the existing works are
performed in the context of plain and static networks. Nonetheless, in
reality, network structure often evolves over
time with addition/deletion of links and nodes. Also, a vast majority of
real-world networks are associated with a rich set of node attributes, and
their attribute values are also naturally changing, with the emergence of new
content patterns and the fading of old ones. These changing
characteristics motivate us to seek an effective embedding representation to
capture network and attribute evolving patterns, which is of fundamental
importance for learning in a dynamic environment. To the best of our
knowledge, we are the first to tackle this problem, which poses two
challenges: (1) the inherently correlated network and node attributes could be
noisy and incomplete, necessitating a robust consensus representation to capture their
individual properties and correlations; (2) the embedding learning needs to be
performed in an online fashion to adapt to the changes accordingly. In this
paper, we tackle this problem by proposing a novel dynamic attributed network
embedding framework - DANE. In particular, DANE first provides an offline
method for a consensus embedding and then leverages matrix perturbation theory
to keep the resulting embeddings fresh in an online manner. We
perform extensive experiments on both synthetic and real attributed networks to
corroborate the effectiveness and efficiency of the proposed framework.
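The online step rests on classical first-order matrix perturbation theory: when the symmetric matrix underlying an embedding changes slightly, the eigenvalues and eigenvectors can be updated from the previous decomposition instead of recomputed from scratch. A minimal NumPy sketch of this generic idea (this is not the authors' code; function and variable names are ours, and well-separated eigenvalues are assumed):

```python
import numpy as np

def perturb_eigs(eigvals, eigvecs, dA):
    """First-order perturbation update of a symmetric eigen-decomposition.

    Given eigenpairs of A and a small symmetric change dA, approximate the
    eigenpairs of A + dA without a full re-decomposition -- the generic idea
    behind maintaining embeddings online. Assumes distinct eigenvalues.
    """
    k = len(eigvals)
    # lambda_i' ~ lambda_i + u_i^T dA u_i
    new_vals = eigvals + np.einsum('ji,jk,ki->i', eigvecs, dA, eigvecs)
    # u_i' ~ u_i + sum_{j != i} (u_j^T dA u_i) / (lambda_i - lambda_j) * u_j
    new_vecs = eigvecs.copy()
    for i in range(k):
        for j in range(k):
            if j == i:
                continue
            coef = (eigvecs[:, j] @ dA @ eigvecs[:, i]) / (eigvals[i] - eigvals[j])
            new_vecs[:, i] += coef * eigvecs[:, j]
    # re-normalize each updated eigenvector
    return new_vals, new_vecs / np.linalg.norm(new_vecs, axis=0)
```

The payoff is cost: each update touches the existing decomposition rather than paying for a fresh eigensolve on every change.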
Autonomous Deep Learning: Continual Learning Approach for Dynamic Environments
The feasibility of deep neural networks (DNNs) to address data stream
problems still requires intensive study because of the static and offline
nature of conventional deep learning approaches. A deep continual learning
algorithm, namely autonomous deep learning (ADL), is proposed in this paper.
Unlike traditional deep learning methods, ADL features a flexible structure
whose network can be constructed from scratch, without an initial network
structure, via a self-constructing mechanism. ADL
specifically addresses catastrophic forgetting with a different-depth
structure that achieves a trade-off between plasticity and stability. A
network significance (NS) formula is proposed to drive the growing and pruning
of hidden nodes. A drift detection scenario (DDS) is put forward to signal
distributional changes in data streams, which induce the
creation of a new hidden layer. The maximum information compression index
(MICI) method serves as a complexity-reduction module that eliminates
redundant layers. The efficacy of ADL is numerically validated
under the prequential test-then-train procedure in lifelong environments using
nine popular data stream problems. The numerical results demonstrate that ADL
consistently outperforms recent continual learning methods while automatically
constructing its network structure.
Dynamic Steerable Blocks in Deep Residual Networks
Filters in convolutional networks are typically parameterized in a pixel
basis, which does not take prior knowledge about the visual world into account.
We investigate the generalized notion of frames designed with image properties
in mind, as alternatives to this parametrization. We show that frame-based
ResNets and DenseNets consistently improve performance on CIFAR-10+, while
offering additional desirable properties such as steerability. By exploiting these
transformation properties explicitly, we arrive at dynamic steerable blocks.
They extend residual blocks and can seamlessly transform filters under
pre-defined transformations, conditioned on the input at training
and inference time. Dynamic steerable blocks learn the degree of invariance
from data and locally adapt filters, allowing them to apply a different
geometrical variant of the same filter to each location of the feature map.
When evaluated on the Berkeley Segmentation contour detection dataset, our
approach outperforms all competing approaches that do not utilize pre-training.
Our results highlight the benefits of image-based regularization to deep
networks.
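Steerability, the property these blocks exploit, means a rotated copy of a filter can be synthesized exactly as a linear combination of a small fixed basis. The classic example is the first-order Gaussian derivative pair of Freeman and Adelson; the sketch below is ours, not code from the paper:

```python
import numpy as np

def gaussian_derivatives(size=7, sigma=1.0):
    """First-order Gaussian derivative filters G_x and G_y: a two-filter
    basis from which any rotated copy can be synthesized."""
    r = np.arange(size) - size // 2
    X, Y = np.meshgrid(r, r)
    G = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
    return -X / sigma**2 * G, -Y / sigma**2 * G

def steer(Gx, Gy, theta):
    """The filter rotated by theta is an exact linear combination of the
    basis filters: G_theta = cos(theta) * G_x + sin(theta) * G_y."""
    return np.cos(theta) * Gx + np.sin(theta) * Gy
```

Because rotation reduces to two scalar coefficients, a network can make `theta` a learned, input-conditioned quantity and apply a different geometric variant of the same filter at every spatial location, which is the mechanism the dynamic blocks build on.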
Recurrent Models of Visual Attention
Applying convolutional neural networks to large images is computationally
expensive because the amount of computation scales linearly with the number of
image pixels. We present a novel recurrent neural network model that is capable
of extracting information from an image or video by adaptively selecting a
sequence of regions or locations and only processing the selected regions at
high resolution. Like convolutional neural networks, the proposed model has a
degree of translation invariance built-in, but the amount of computation it
performs can be controlled independently of the input image size. While the
model is non-differentiable, it can be trained using reinforcement learning
methods to learn task-specific policies. We evaluate our model on several image
classification tasks, where it significantly outperforms a convolutional neural
network baseline on cluttered images, and on a dynamic visual control problem,
where it learns to track a simple object without an explicit training signal
for doing so.
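The mechanism behind the fixed computational cost is a glimpse sensor: instead of processing the whole image, the model crops a small patch around the currently attended location, so per-step computation depends on the patch size rather than the image size. A minimal single-scale sketch (the paper's sensor extracts multiple resolutions around the location; the function name is ours):

```python
import numpy as np

def glimpse(image, center, size):
    """Extract a size x size patch centered at `center` (row, col),
    zero-padding past the image border. Cost depends on `size`,
    not on the dimensions of `image`."""
    half = size // 2
    padded = np.pad(image, half)          # zero-pad so border crops are valid
    cy, cx = center
    return padded[cy:cy + size, cx:cx + size]
```

A recurrent network then consumes the sequence of glimpses and emits the next location to attend, and since the location choice is a discrete, non-differentiable action, it is trained with reinforcement learning as the abstract describes.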