A Diagram Is Worth A Dozen Images
Diagrams are common tools for representing complex concepts, relationships
and events, often when it would be difficult to portray the same information
with natural images. Understanding natural images has been extensively studied
in computer vision, while diagram understanding has received little attention.
In this paper, we study the problem of diagram interpretation and reasoning,
the challenging task of identifying the structure of a diagram and the
semantics of its constituents and their relationships. We introduce Diagram
Parse Graphs (DPG) as our representation to model the structure of diagrams. We
define syntactic parsing of diagrams as learning to infer DPGs for diagrams and
study semantic interpretation and reasoning of diagrams in the context of
diagram question answering. We devise an LSTM-based method for syntactic
parsing of diagrams and introduce a DPG-based attention model for diagram
question answering. We compile a new dataset of diagrams with exhaustive
annotations of constituents and relationships for over 5,000 diagrams and
15,000 questions and answers. Our results show the significance of our models
for syntactic parsing and question answering in diagrams using DPGs.
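As a rough illustration (not the paper's implementation), a Diagram Parse Graph can be viewed as a directed graph whose nodes are diagram constituents and whose labeled edges encode their relationships. Below is a minimal Python sketch under that assumption; the constituent kinds and relation names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Constituent:
    """A diagram element, e.g. a blob, a text box, or an arrow."""
    cid: int
    kind: str    # "blob" | "text" | "arrow" (hypothetical categories)
    bbox: tuple  # (x, y, w, h) location in the diagram image

@dataclass
class DiagramParseGraph:
    """Nodes are constituents; directed edges carry relationship labels."""
    nodes: dict = field(default_factory=dict)  # cid -> Constituent
    edges: list = field(default_factory=list)  # (src_cid, dst_cid, relation)

    def add_node(self, c: Constituent) -> None:
        self.nodes[c.cid] = c

    def add_edge(self, src: int, dst: int, relation: str) -> None:
        # relation names like "arrow_connects" are illustrative only
        self.edges.append((src, dst, relation))

# Toy usage: an arrow linking a labeled blob to another blob.
dpg = DiagramParseGraph()
dpg.add_node(Constituent(0, "blob", (10, 10, 40, 40)))
dpg.add_node(Constituent(1, "blob", (120, 10, 40, 40)))
dpg.add_node(Constituent(2, "text", (12, 55, 60, 12)))
dpg.add_edge(0, 1, "arrow_connects")
dpg.add_edge(2, 0, "text_labels")
```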
Sparse Attention-Based Neural Networks for Code Classification
Categorizing source code accurately and efficiently is a challenging problem
in real-world programming education platform management. In recent years,
model-based approaches utilizing abstract syntax trees (ASTs) have been widely
applied to code classification tasks. We introduce an approach named the Sparse
Attention-based neural network for Code Classification (SACC) in this paper.
The approach involves two main steps. In the first step, source code undergoes
syntax parsing and preprocessing. The generated abstract syntax tree is split
into sequences of subtrees and then encoded using a recursive neural network to
obtain a high-dimensional representation. This step simultaneously considers
both the logical structure and lexical level information contained within the
code. In the second step, the encoded sequences of subtrees are fed into a
Transformer model that incorporates sparse attention mechanisms for the purpose
of classification. This method efficiently reduces the computational cost of
the self-attention mechanisms, thus improving the training speed while
preserving effectiveness. Our work introduces a sparse attention pattern designed specifically for the needs of code classification tasks. This design helps reduce the influence of redundant information and enhances the overall performance of the model. Finally, we also address problems in previous related research, such as incomplete classification labels and small dataset sizes. We annotated the CodeNet dataset, which contains a significantly larger amount of data, with algorithm-related category labels. Extensive comparative experimental results
demonstrate the effectiveness and efficiency of SACC on code classification tasks.
Comment: 2023 3rd International Conference on Digital Society and Intelligent Systems (DSInS 2023)
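The abstract does not spell out SACC's exact sparsity pattern. As a hedged sketch of the general idea, the snippet below combines a local attention window over the encoded subtree sequence with a few globally attending positions (e.g. a classification token) and applies the resulting mask inside standard scaled dot-product attention. The window size, number of global tokens, and shapes are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def sparse_attention_mask(seq_len: int, window: int = 2, n_global: int = 1) -> np.ndarray:
    """Boolean mask: True where attention is allowed.

    Local band: each position attends to neighbors within `window`.
    Global tokens: the first `n_global` positions attend to, and are
    attended by, every position. Both choices are illustrative.
    """
    idx = np.arange(seq_len)
    local = np.abs(idx[:, None] - idx[None, :]) <= window
    glob = (idx[:, None] < n_global) | (idx[None, :] < n_global)
    return local | glob

def masked_attention(q, k, v, mask):
    """Scaled dot-product attention with disallowed positions set to -inf."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    # Softmax; the diagonal is always allowed, so every row has a finite max.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy usage on 8 random "subtree embeddings" of dimension 16.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
mask = sparse_attention_mask(8, window=2, n_global=1)
out = masked_attention(x, x, x, mask)
print(mask.astype(int))
print(out.shape)  # (8, 16)
```

Because each non-global position attends to at most 2 * window + 1 + n_global others, the attention cost grows roughly linearly in sequence length rather than quadratically, which is the efficiency benefit the abstract describes.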