Craquelure as a Graph: Application of Image Processing and Graph Neural Networks to the Description of Fracture Patterns
Cracks on a painting are not a defect but an inimitable signature of the
artwork, which can be used for origin examination, aging monitoring, damage
identification, and even forgery detection. This work presents a new
methodology and an accompanying toolbox for extracting and characterizing
information from an image of a craquelure pattern.
The proposed approach treats the craquelure network as a graph. The graph
representation captures the network structure through the mutual organization
of junctions and fractures, and it is invariant to geometrical distortions. At
the same time, our tool extracts the properties of each node and edge
individually, which allows the pattern to be characterized statistically.
We illustrate the benefits of the graph representation and the statistical
features separately, using a novel Graph Neural Network and hand-crafted
descriptors, respectively. We then show that the best performance is achieved
when both techniques are merged into one framework. We perform experiments on
a dataset for classifying paintings' origin and demonstrate that our approach
outperforms existing techniques by a large margin.
Comment: Published in ICCV 2019 Workshop
GoGNN: Graph of Graphs Neural Network for Predicting Structured Entity Interactions
Entity interaction prediction is essential in many important applications
such as chemistry, biology, material science, and medical science. The problem
becomes quite challenging when each entity is represented by a complex
structure (a structured entity), because two types of graphs are involved:
local graphs for structured entities and a global graph to capture the
interactions between structured entities. We observe that existing works on
structured entity interaction prediction cannot properly exploit the unique
graph of graphs model. In this paper, we propose a Graph of Graphs Neural
Network, namely GoGNN, which extracts the features in both structured entity
graphs and the entity interaction graph in a hierarchical way. We also propose
a dual-attention mechanism that enables the model to preserve neighbor
importance at both levels of the graph hierarchy. Extensive experiments on real-world
datasets show that GoGNN outperforms the state-of-the-art methods on two
representative structured entity interaction prediction tasks:
chemical-chemical interaction prediction and drug-drug interaction prediction.
Our code is available on GitHub.
Comment: Accepted by IJCAI 2020
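The hierarchical idea can be sketched without any framework. This is a toy illustration, not the released GoGNN code: each local (entity) graph is mean-pooled into one embedding, and those embeddings then serve as node features of the global interaction graph, over which one averaging message-passing step runs. The function names and the use of plain mean pooling instead of learned attention are assumptions for the example.

```python
# Graph-of-graphs sketch: pool each local graph into an entity embedding,
# then propagate embeddings over the global interaction graph.

def pool_local_graph(node_feats):
    """Mean-pool a local graph's node features into one entity embedding."""
    dim = len(node_feats[0])
    return [sum(f[d] for f in node_feats) / len(node_feats) for d in range(dim)]

def neighbors(i, edges):
    """Neighbors of entity i in the (undirected) global interaction graph."""
    ns = []
    for a, b in edges:
        if a == i: ns.append(b)
        if b == i: ns.append(a)
    return ns

def propagate_global(embs, edges):
    """One message-passing step: average each entity with its neighbors."""
    dim = len(embs[0])
    out = []
    for i in range(len(embs)):
        group = [embs[i]] + [embs[j] for j in neighbors(i, edges)]
        out.append([sum(v[d] for v in group) / len(group) for d in range(dim)])
    return out

# Two entities, each a local graph of two nodes with 2-D features.
entity_a = [[1.0, 0.0], [3.0, 0.0]]   # pools to [2.0, 0.0]
entity_b = [[0.0, 2.0], [0.0, 6.0]]   # pools to [0.0, 4.0]
embs = [pool_local_graph(entity_a), pool_local_graph(entity_b)]
global_embs = propagate_global(embs, edges=[(0, 1)])
```

In GoGNN as described, both the pooling and the propagation are learned and attention-weighted (the dual-attention mechanism); the fixed averages here only show how information flows from local graphs up into the global graph.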
Prototype Propagation Networks (PPN) for weakly-supervised few-shot learning on category graph
© 2019 International Joint Conferences on Artificial Intelligence. All rights reserved. A variety of machine learning applications are expected to achieve rapid learning from a limited number of labeled examples. However, the success of most current models is the result of heavy training on big data. Meta-learning addresses this problem by extracting common knowledge across different tasks that can be quickly adapted to new tasks. However, such methods do not fully exploit weakly-supervised information, which is usually free or cheap to collect. In this paper, we show that weakly-labeled data can significantly improve the performance of meta-learning on few-shot classification. We propose a prototype propagation network (PPN) trained on few-shot tasks together with data annotated by coarse labels. Given a category graph of the targeted fine classes and some weakly-labeled coarse classes, PPN learns an attention mechanism that propagates the prototype of one class to another along the graph, so that the K-nearest-neighbor (KNN) classifier defined on the propagated prototypes achieves high accuracy across different few-shot tasks. The training tasks are generated by subgraph sampling, and the training objective is obtained by accumulating the level-wise classification loss on the subgraph. On two benchmarks, PPN significantly outperforms recent few-shot learning methods in different settings, even when they are also allowed to train on weakly-labeled data.
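The propagation step can be sketched concretely. This is an assumed toy version, not the paper's trained model: a fixed mixing weight `alpha` stands in for the learned attention, each fine class mixes its prototype with its coarse parent's, and queries are classified by nearest propagated prototype (1-NN).

```python
# Prototype propagation sketch on a two-level category graph:
# fine classes inherit information from their coarse-class parents.
import math

def propagate(proto, parent_proto, alpha=0.7):
    """Mix a fine-class prototype with its parent's; alpha weights the class
    itself. In PPN this weight is produced by a learned attention mechanism."""
    return [alpha * p + (1 - alpha) * q for p, q in zip(proto, parent_proto)]

def nearest_class(query, prototypes):
    """1-NN classification over the propagated prototypes."""
    return min(prototypes, key=lambda c: math.dist(query, prototypes[c]))

# Coarse (weakly-labeled) prototypes and fine-class prototypes (2-D toys).
coarse = {"animal": [0.0, 0.0], "vehicle": [10.0, 10.0]}
fine = {"cat": [1.0, 0.0], "car": [9.0, 10.0]}
parents = {"cat": "animal", "car": "vehicle"}

propagated = {c: propagate(v, coarse[parents[c]]) for c, v in fine.items()}
pred = nearest_class([0.5, 0.2], propagated)
```

The point of propagation is that a fine class with very few labeled examples still gets a usable prototype, because its coarse parent, labeled cheaply, pulls the estimate toward the right region of feature space.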