M2HGCL: Multi-Scale Meta-Path Integrated Heterogeneous Graph Contrastive Learning
Inspired by the successful application of contrastive learning on graphs,
researchers attempt to impose graph contrastive learning approaches on
heterogeneous information networks. Unlike homogeneous graphs, heterogeneous
graphs contain diverse node and edge types, so specialized graph contrastive
learning methods are required. Most existing methods for heterogeneous graph
contrastive learning are implemented by transforming heterogeneous graphs into
homogeneous graphs, which risks discarding the valuable information carried by
non-target nodes and thereby degrading the performance of contrastive learning
models. Additionally,
current heterogeneous graph contrastive learning methods are mainly based on
the initial meta-paths given by the dataset, yet our in-depth exploration
yields two empirical conclusions: the initial meta-paths alone cannot provide
sufficiently discriminative information, and incorporating various types of
meta-paths effectively improves the performance of heterogeneous graph
contrastive learning methods. To this end, we propose a new multi-scale
meta-path integrated heterogeneous graph contrastive learning (M2HGCL) model,
which discards the conventional heterogeneity-homogeneity transformation and
performs the graph contrastive learning in a joint manner. Specifically, we
expand the meta-paths and jointly aggregate the direct neighbor information,
the initial meta-path neighbor information and the expanded meta-path neighbor
information to sufficiently capture discriminative information. A specific
positive sampling strategy is further imposed to remedy the intrinsic
deficiency of contrastive learning, i.e., the hard negative sample sampling
issue. Through extensive experiments on three real-world datasets, we
demonstrate that M2HGCL outperforms the current state-of-the-art baseline
models.
Comment: Accepted to ADMA 2023 as an oral presentation.
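The joint aggregation and contrastive objective described above can be sketched in a few lines of numpy (a minimal illustration, not the authors' implementation; the toy ring graph, mean pooling, and self-contrast setup are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def joint_aggregate(h, neighbor_scopes):
    """Average the mean-pooled embeddings over several neighbor scopes
    (direct, initial meta-path, expanded meta-path); mean pooling is an
    assumption standing in for the paper's aggregation function."""
    views = [h[list(ns)].mean(axis=0) for ns in neighbor_scopes]
    return np.mean(views, axis=0)

def info_nce(z1, z2, tau=0.5):
    """Standard InfoNCE contrastive loss: node i in view 1 is pulled
    toward node i in view 2, against all other nodes as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau
    sim = sim - sim.max(axis=1, keepdims=True)        # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

h = rng.normal(size=(6, 8))                            # toy node embeddings
direct   = {i: {(i + 1) % 6} for i in range(6)}        # 1-hop neighbors
initial  = {i: {(i + 2) % 6} for i in range(6)}        # initial meta-path
expanded = {i: {(i + 3) % 6} for i in range(6)}        # expanded meta-path
z = np.stack([joint_aggregate(h, [direct[i], initial[i], expanded[i]])
              for i in range(6)])
loss = info_nce(z, z)                                  # self-contrast demo
print(loss > 0)  # True
```

Because all three neighbor scopes contribute to a node's embedding before the contrastive loss is computed, no heterogeneity-to-homogeneity projection is needed.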
Logic Programs as Declarative and Procedural Bias in Inductive Logic Programming
Machine Learning is necessary for the development of Artificial Intelligence, as pointed out by Turing in his 1950 article "Computing Machinery and Intelligence". It is in the same article that Turing suggested the use of computational logic and background knowledge for learning. This thesis follows a logic-based machine learning approach called Inductive Logic Programming (ILP), which is advantageous over other machine learning approaches in terms of relational learning and utilising background knowledge. ILP uses logic programs as a uniform representation for hypotheses, background knowledge and examples, but its declarative bias is usually encoded using metalogical statements. This thesis advocates the use of logic programs to represent declarative and procedural bias, which results in a framework of single-language representation.
We show in this thesis that using a logic program called the top theory as declarative bias leads to a sound and complete multi-clause learning system, MC-TopLog. It overcomes the entailment-incompleteness of Progol and thus outperforms Progol in terms of predictive accuracy on learning grammars and strategies for playing the game of Nim. MC-TopLog has been applied to two real-world applications funded by Syngenta, an agriculture company.
A higher-order extension on top theories results in meta-interpreters, which allow the introduction of new predicate symbols. Thus the resulting ILP system, Metagol, can do predicate invention, which is an intrinsically higher-order logic operation. Metagol also leverages the procedural semantics of Prolog to encode procedural bias, so that it can outperform both its ASP version and ILP systems without an equivalent procedural bias in terms of efficiency and accuracy. This is demonstrated by the experiments on learning Regular, Context-free and Natural grammars. Metagol is also applied to non-grammar learning tasks involving recursion and predicate invention, such as learning a definition of staircases and robot strategy learning. Both MC-TopLog and Metagol are based on a top-directed framework, which is different from other multi-clause learning systems based on Inverse Entailment, such as CF-Induction, XHAIL and IMPARO. Compared to TAL, another top-directed multi-clause learning system, Metagol allows higher-order assumptions to be encoded explicitly in the form of meta-rules.
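As a rough illustration of metarule-guided learning (Metagol itself is a Prolog meta-interpreter; this Python analogue, including the `chain_instances` helper and the toy `parent` facts, is hypothetical), the chain metarule P(x,y) :- Q(x,z), R(z,y) can be instantiated against background facts to cover the positive examples of a target predicate:

```python
from itertools import product

# toy background knowledge: ann is bob's parent, bob is carl's parent
background = {
    "parent": {("ann", "bob"), ("bob", "carl")},
}

def chain_instances(target_examples, preds):
    """Return (Q, R) pairs such that the chain metarule
    P(x,y) :- Q(x,z), R(z,y) covers every positive example of P."""
    solutions = []
    for q, r in product(preds, repeat=2):
        derived = {(x, y)
                   for (x, z1) in background[q]
                   for (z2, y) in background[r] if z1 == z2}
        if target_examples <= derived:
            solutions.append((q, r))
    return solutions

# learn grandparent(x,y) :- parent(x,z), parent(z,y)
sols = chain_instances({("ann", "carl")}, ["parent"])
print(sols)  # [('parent', 'parent')]
```

Real Metagol goes further: when no instantiation over existing predicates covers the examples, it introduces fresh predicate symbols, which is exactly the predicate invention described above.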
Attention Graph for Multi-Robot Social Navigation with Deep Reinforcement Learning
Learning robot navigation strategies among pedestrians is crucial for
domain-based applications. Combining perception, planning and prediction allows us to
model the interactions between robots and pedestrians, resulting in impressive
outcomes especially with recent approaches based on deep reinforcement learning
(RL). However, these works do not consider multi-robot scenarios. In this
paper, we present MultiSoc, a new method for learning multi-agent socially
aware navigation strategies using RL. Inspired by recent works on multi-agent
deep RL, our method leverages graph-based representation of agent interactions,
combining the positions and fields of view of entities (pedestrians and
agents). Each agent uses a model based on two Graph Neural Networks combined
with attention mechanisms. First, an edge-selector produces a sparse graph, then
a crowd coordinator applies node attention to produce a graph representing the
influence of each entity on the others. This is incorporated into a model-free
RL framework to learn multi-agent policies. We evaluate our approach in
simulation and provide a series of experiments under various conditions
(numbers of agents/pedestrians). Empirical results show that our method learns
faster than social navigation deep RL mono-agent techniques, and enables
efficient multi-agent implicit coordination in challenging crowd navigation
with multiple heterogeneous humans. Furthermore, by incorporating customizable
meta-parameters, we can adjust the neighborhood density taken into account in
our navigation strategy.
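The two-stage attention graph can be caricatured in plain numpy (the hard top-k edge selector and dot-product scores below are stand-ins for the learned modules, not the paper's architecture):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def edge_selector(scores, k=2):
    """Keep the k strongest edges per node, yielding a sparse graph;
    a hard top-k stands in for the learned edge-selector."""
    mask = np.zeros_like(scores, dtype=bool)
    top = np.argsort(-scores, axis=1)[:, :k]
    np.put_along_axis(mask, top, True, axis=1)
    return mask

def node_attention(feats, scores, mask):
    """Attention-weighted aggregation over the selected neighbors,
    loosely mirroring the crowd-coordinator step."""
    att = softmax(np.where(mask, scores, -np.inf))
    return att @ feats

# 4 entities (robots + pedestrians) with toy 2-D features
feats = np.array([[0., 1.], [1., 0.], [1., 1.], [0., 0.]])
scores = feats @ feats.T          # dot-product interaction scores
mask = edge_selector(scores)
out = node_attention(feats, scores, mask)
print(out.shape)  # (4, 2)
```

Sparsifying first keeps each agent's attention focused on the few entities that actually influence it, which is what makes the coordination scale with crowd size.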
Exploiting Style Transfer-based Task Augmentation for Cross-Domain Few-Shot Learning
In cross-domain few-shot learning, the core issue is that the model trained
on source domains struggles to generalize to the target domain, especially when
the domain shift is large. Motivated by the observation that the domain shift
between training tasks and target tasks is usually reflected in their style
variation, we propose Task Augmented Meta-Learning (TAML) to conduct style
transfer-based task augmentation to improve the domain generalization ability.
Firstly, Multi-task Interpolation (MTI) is introduced to fuse features from
multiple tasks with different styles, which makes more diverse styles
available. Furthermore, a novel task-augmentation strategy called Multi-Task
Style Transfer (MTST) is proposed to perform style transfer on existing tasks
to learn discriminative style-independent features. We also introduce a Feature
Modulation module (FM) to add random styles and improve generalization of the
model. The proposed TAML increases the diversity of styles of training tasks,
and contributes to training a model with better domain generalization ability.
The effectiveness is demonstrated via theoretical analysis and thorough
experiments on two popular cross-domain few-shot benchmarks.
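One common way to realise such style mixing and transfer on feature maps is through channel-wise statistics, in the spirit of AdaIN; the sketch below (the function names and mean/std parameterisation are assumptions, not TAML's exact design) shows both multi-task interpolation of style statistics and style transfer between two tasks:

```python
import numpy as np

def style_stats(feats):
    """Channel-wise mean/std of a task's feature batch (its 'style')."""
    return feats.mean(axis=0), feats.std(axis=0) + 1e-6

def style_transfer(content, style):
    """AdaIN-like re-stylisation: normalise the content features, then
    re-apply the style task's statistics."""
    mu_c, sd_c = style_stats(content)
    mu_s, sd_s = style_stats(style)
    return (content - mu_c) / sd_c * sd_s + mu_s

def interpolate_styles(f1, f2, lam=0.5):
    """Multi-task interpolation: blend two tasks' style statistics and
    restyle the first task's features with the mixture."""
    mu1, sd1 = style_stats(f1)
    mu2, sd2 = style_stats(f2)
    mu = lam * mu1 + (1 - lam) * mu2
    sd = lam * sd1 + (1 - lam) * sd2
    return (f1 - mu1) / sd1 * sd + mu

rng = np.random.default_rng(1)
task_a = rng.normal(0.0, 1.0, size=(32, 4))   # features of one task
task_b = rng.normal(5.0, 2.0, size=(32, 4))   # a differently styled task
restyled = style_transfer(task_a, task_b)
print(np.allclose(restyled.mean(axis=0), task_b.mean(axis=0)))  # True
```

Because content is carried by the normalised features and style by the statistics, training on restyled and interpolated tasks encourages the model to learn style-independent features.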
Neural Interactive Collaborative Filtering
In this paper, we study collaborative filtering in an interactive setting, in
which the recommender agents iterate between making recommendations and
updating the user profile based on the interactive feedback. The most
challenging problem in this scenario is how to suggest items when the user
profile has not been well established, i.e., recommend for cold-start users or
warm-start users with taste drifting. Existing approaches either rely on an
overly pessimistic linear exploration strategy or adopt meta-learning-based
algorithms in a purely exploitative way. In this work, to quickly catch up with the user's
interests, we propose to represent the exploration policy with a neural network
and directly learn it from the feedback data. Specifically, the exploration
policy is encoded in the weights of multi-channel stacked self-attention neural
networks and trained with efficient Q-learning by maximizing users' overall
satisfaction in the recommender system. The key insight is that satisfied
recommendations triggered by an exploratory recommendation can be viewed as
an exploration bonus (delayed reward) for its contribution to improving the
quality of the user profile. Therefore, the proposed exploration policy, to
balance between learning the user profile and making accurate recommendations,
can be directly optimized by maximizing users' long-term satisfaction with
reinforcement learning. Extensive experiments and analysis conducted on three
benchmark collaborative filtering datasets have demonstrated the advantage of
our method over state-of-the-art methods.
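The delayed-reward credit assignment can be illustrated with ordinary tabular Q-learning (a deliberate simplification: the paper parameterises the Q-function with stacked self-attention networks, and the toy states and rewards below are made up):

```python
import numpy as np

def q_update(q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: r is the user-satisfaction signal,
    and bootstrapping through s_next propagates delayed reward back to
    the exploratory action that improved the profile."""
    q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
    return q

# 3 toy profile states, 2 candidate items (all numbers are illustrative)
q = np.zeros((3, 2))
transitions = [
    (0, 1, 0.0, 1),  # exploratory pick: no immediate satisfaction...
    (1, 0, 1.0, 2),  # ...but the sharpened profile pays off later
]
for s, a, r, s2 in transitions * 50:
    q_update(q, s, a, r, s2)

print(q[0, 1] > q[0, 0])  # True: exploration credited via bootstrapping
```

The bootstrapped value of the exploratory action grows even though its immediate reward is zero, which is exactly how long-term satisfaction can favour profile-building recommendations.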
Image Restoration Using Deep Neural Network Search Methods
Ph.D. dissertation -- Seoul National University, College of Engineering, Department of Electrical and Computer Engineering, 2021.8. Image restoration is an important technology which can be used as a pre-processing step to increase the performance of various vision tasks. Image super-resolution is one of the important tasks in image restoration, which restores a high-resolution (HR) image from a low-resolution (LR) observation. The recent progress of deep convolutional neural networks has enabled great success in single image super-resolution (SISR). Its performance has also been increased by deepening the networks and developing more sophisticated network structures. However, finding an optimal structure for a given problem is a difficult task, even for human experts. For this reason, neural architecture search (NAS) methods have been introduced, which automate the procedure of constructing network structures. In this dissertation, I propose a new single image super-resolution framework using a neural architecture search (NAS) method. As performance improves, networks become more complex and deeper, so I apply a NAS algorithm to find the optimal network while reducing the effort of network design. In detail, the proposed scheme is summarized in three topics: image super-resolution using efficient neural architecture search, multi-branch neural architecture search for lightweight image super-resolution, and neural architecture search for image super-resolution using meta-transfer learning.
At first, I expand NAS to the super-resolution domain and find a lightweight densely connected network named DeCoNASNet. I use a hierarchical search strategy to find the best connections with local and global features. In this process, I define a complexity-based penalty and add it to the reward term of the REINFORCE algorithm. Experiments show that DeCoNASNet outperforms state-of-the-art lightweight super-resolution networks designed by hand and by existing NAS-based methods.
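A minimal sketch of how a complexity-based penalty can enter the REINFORCE reward (the constants, the PSNR baseline, and the example parameter counts are illustrative, not the dissertation's values):

```python
def reinforce_reward(psnr, n_params, lam=1e-7, psnr_baseline=30.0):
    """Penalised reward for the controller: validation PSNR (relative to
    a baseline) minus a complexity penalty proportional to the child
    network's parameter count. lam and psnr_baseline are illustrative."""
    return (psnr - psnr_baseline) - lam * n_params

# a slightly more accurate but much larger child network...
small = reinforce_reward(32.0, 300_000)     # 2.0 - 0.03
big   = reinforce_reward(32.1, 2_000_000)   # 2.1 - 0.20
# ...earns a lower reward, steering the controller toward lightweight nets
print(small > big)  # True
```

Scaling the controller's log-probability gradients by this penalised reward is what biases the search toward architectures with few parameters but good PSNR.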
I propose a new search space design with a multi-branch structure to enlarge the search space for capturing multi-scale features, resulting in better reconstruction of grainy areas. I also adopt a parameter-sharing scheme in the multi-branch network to share information across branches and reduce the overall number of network parameters. Experiments show that the proposed method finds an optimal SISR network about twenty times faster than existing methods, while showing comparable performance in terms of PSNR versus parameters. A comparison of visual quality validates that the proposed SISR network reconstructs texture areas better than previous methods because of the enlarged search space for finding multi-scale features.
Lastly, I apply meta-transfer learning to the NAS procedure for image super-resolution. I train the controller and child network with a meta-learning scheme, which enables the controller to find promising networks for several scales simultaneously. Furthermore, the meta-trained child network is reused as the pre-trained parameters for the final evaluation phase, which further improves the final image super-resolution results and efficiently reduces the search-evaluation gap problem.
1 INTRODUCTION 1
1.1 Contribution 3
1.2 Contents 4
2 Neural Architecture Search for Image Super-Resolution Using Densely Constructed Search Space: DeCoNAS 5
2.1 Introduction 5
2.2 Proposed Method 9
2.2.1 Overall structure of DeCoNASNet 9
2.2.2 Constructing the DNB 11
2.2.3 Constructing controller for the DeCoNASNet 13
2.2.4 Training DeCoNAS and complexity-based penalty 13
2.3 Experimental results 15
2.3.1 Settings 15
2.3.2 Results 16
2.3.3 Ablation study 21
2.4 Summary 22
3 Multi-Branch Neural Architecture Search for Lightweight Image Super-resolution 23
3.1 Introduction 23
3.2 Related Work 26
3.2.1 Single image super-resolution 26
3.2.2 Neural architecture search 27
3.2.3 Image super-resolution with neural architecture search 29
3.3 Method 32
3.3.1 Overview of the Proposed MBNAS 32
3.3.2 Controller and complexity-based penalty 33
3.3.3 MBNASNet 35
3.3.4 Multi-scale block with partially shared Nodes 37
3.3.5 MBNAS 38
3.4 Datasets and experiments 39
3.4.1 Settings 39
3.4.2 Experiments on single image super-resolution (SISR) 41
3.5 Discussion 48
3.5.1 Effect of the complexity-based penalty to the performance of controller 49
3.5.2 Effect of multi-branch structure and partial parameter sharing scheme 50
3.5.3 Effect of gradient flow control weights and complexity-based penalty coefficient 51
3.6 Summary 52
4 Meta-transfer learning for simultaneous search of various scale image super-resolution 54
4.1 Introduction 54
4.2 Related Work 56
4.2.1 Single image super-resolution 56
4.2.2 Neural architecture search 57
4.2.3 Image super-resolution with neural architecture search 58
4.2.4 Meta-learning 59
4.3 Method 59
4.3.1 Meta-learning 60
4.3.2 Meta-transfer learning 62
4.3.3 Transfer-learning 63
4.4 Datasets and experiments 63
4.4.1 Settings 63
4.4.2 Experiments on single image super-resolution (SISR) 64
4.5 Summary 66
5 Conclusion 69
Abstract (In Korean) 80
Spiking Generative Adversarial Networks With a Neural Network Discriminator: Local Training, Bayesian Models, and Continual Meta-Learning
Neuromorphic data carries information in spatio-temporal patterns encoded by
spikes. Accordingly, a central problem in neuromorphic computing is training
spiking neural networks (SNNs) to reproduce spatio-temporal spiking patterns in
response to given spiking stimuli. Most existing approaches model the
input-output behavior of an SNN in a deterministic fashion by assigning each
input to a specific desired output spiking sequence. In contrast, in order to
fully leverage the time-encoding capacity of spikes, this work proposes to
train SNNs so as to match distributions of spiking signals rather than
individual spiking signals. To this end, the paper introduces a novel hybrid
architecture comprising a conditional generator, implemented via an SNN, and a
discriminator, implemented by a conventional artificial neural network (ANN).
The role of the ANN is to provide feedback during training to the SNN within an
adversarial iterative learning strategy that follows the principle of
generative adversarial networks (GANs). In order to better capture multi-modal
spatio-temporal distributions, the proposed approach -- termed SpikeGAN -- is
further extended to support Bayesian learning of the generator's weights.
Finally, settings with time-varying statistics are addressed by proposing an
online meta-learning variant of SpikeGAN. Experiments bring insights into the
merits of the proposed approach as compared to existing solutions based on
(static) belief networks and maximum likelihood (or empirical risk
minimization).
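The hybrid SNN-generator / ANN-discriminator training signal reduces to the standard GAN objectives; the sketch below computes the non-saturating losses from discriminator scores (the toy score vectors are made up, and the SNN forward pass itself is omitted):

```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-12):
    """Non-saturating GAN losses from discriminator probabilities:
    d_real scores real spike patterns, d_fake scores SNN-generated ones.
    The generator's only learning signal is -log d_fake, i.e. the ANN
    discriminator's feedback."""
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss

# a discriminator that separates real from generated well -> low d_loss
d_loss_sharp, g_loss_sharp = gan_losses(np.array([0.9, 0.95]),
                                        np.array([0.1, 0.05]))
# a fooled discriminator -> the generator's loss shrinks instead
d_loss_fooled, g_loss_fooled = gan_losses(np.array([0.5, 0.5]),
                                          np.array([0.9, 0.9]))
print(g_loss_fooled < g_loss_sharp)  # True
```

Matching distributions rather than individual target sequences is what this adversarial signal buys: the SNN is rewarded for producing spike patterns the ANN cannot tell apart from real ones, not for reproducing any single sequence.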