567 research outputs found
"Animated ghost- A librarian won't ghost you" poster
These posters were created to encourage students to ask a librarian for help with essays or other assignments.
Valentine's Day "conversation hearts" posters
These posters were created to encourage students to utilize resources available to them through BU Libraries.
"Black and red" @BULIBRARIES posters
These posters were created to encourage BU students to utilize resources available to them through BU Libraries.
Learning Feature Matching via Matchable Keypoint-Assisted Graph Neural Network
Accurately matching local features between a pair of images is a challenging
computer vision task. Previous studies typically use attention-based graph
neural networks (GNNs) with fully-connected graphs over keypoints within/across
images for visual and geometric information reasoning. However, in the context
of feature matching, many keypoints are non-repeatable due to occlusion
and detector failure, and are thus irrelevant for message passing. The
connectivity with non-repeatable keypoints not only introduces redundancy,
resulting in limited efficiency, but also interferes with the representation
aggregation process, leading to limited accuracy. Targeting high
accuracy and efficiency, we propose MaKeGNN, a sparse attention-based GNN
architecture which bypasses non-repeatable keypoints and leverages matchable
ones to guide compact and meaningful message passing. More specifically, our
Bilateral Context-Aware Sampling Module first dynamically samples two small
sets of well-distributed keypoints with high matchability scores from the image
pair. Then, our Matchable Keypoint-Assisted Context Aggregation Module regards
sampled informative keypoints as message bottlenecks and thus constrains each
keypoint only to retrieve favorable contextual information from intra- and
inter-matchable keypoints, evading interference from irrelevant and
redundant connectivity with non-repeatable ones. Furthermore, considering the
potential noise in initial keypoints and sampled matchable ones, the MKACA
module adopts a matchability-guided attentional aggregation operation for purer
data-dependent context propagation. By these means, we achieve
state-of-the-art performance on relative camera estimation, fundamental matrix
estimation, and visual localization, while significantly reducing computational
and memory complexity compared to typical attentional GNNs.
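The matchability-guided attentional aggregation described in the abstract can be sketched as follows. This is an illustrative reading, not the authors' implementation: the top-k sampling by matchability score, the identity (unlearned) projections, and all array shapes are assumptions made for the sketch.

```python
import numpy as np

def matchability_guided_attention(queries, keys, values, matchability, k=4):
    """Sketch: restrict attention to the k keypoints with the highest
    matchability scores, and scale attention weights by those scores,
    so aggregation is biased toward confidently matchable keypoints."""
    d = queries.shape[-1]
    # keep only the k keypoints with the highest matchability scores
    idx = np.argsort(matchability)[-k:]
    k_sel, v_sel, m_sel = keys[idx], values[idx], matchability[idx]
    # standard scaled dot-product attention logits against the sampled set
    logits = queries @ k_sel.T / np.sqrt(d)
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    # matchability-guided: down-weight low-confidence keypoints
    weights = weights * m_sel
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v_sel
```

Restricting keys/values to a small sampled set is what gives the claimed complexity reduction relative to fully-connected attention over all keypoints.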
ResMatch: Residual Attention Learning for Local Feature Matching
Attention-based graph neural networks have made great progress in learning
feature matching. However, the literature lacks insight into how the
attention mechanism works for feature matching. In this paper, we rethink cross-
and self-attention from the viewpoint of traditional feature matching and
filtering. In order to facilitate the learning of matching and filtering, we
inject the similarity of descriptors and relative positions into cross- and
self-attention score, respectively. In this way, the attention can focus on
learning residual matching and filtering functions with reference to the basic
functions of measuring visual and spatial correlation. Moreover, we mine intra-
and inter-neighbors according to the similarity of descriptors and relative
positions. Sparse attention can then be performed for each point only within
its neighborhood, yielding higher computational efficiency. Feature matching
networks equipped with our full and sparse residual attention learning
strategies are termed ResMatch and sResMatch, respectively. Extensive
experiments, including feature matching, pose estimation, and visual
localization, confirm the superiority of our networks.
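The "residual" idea of injecting descriptor similarity into the cross-attention score can be sketched as below. This is a simplified illustration under stated assumptions: the learned query/key projections are replaced by identity maps, and the mixing weight `alpha` is a hypothetical parameter, not a value from the paper.

```python
import numpy as np

def residual_cross_attention(desc_a, desc_b, values_b, alpha=1.0):
    """Sketch: offset the attention logits with a classical descriptor
    similarity prior, so the network only needs to learn a residual on
    top of the basic visual-correlation measure."""
    d = desc_a.shape[-1]
    # learned part of the logits (identity projections stand in for the
    # trained query/key maps -- an assumption for this sketch)
    learned = desc_a @ desc_b.T / np.sqrt(d)
    # classical matching prior: cosine similarity of raw descriptors
    na = desc_a / np.linalg.norm(desc_a, axis=-1, keepdims=True)
    nb = desc_b / np.linalg.norm(desc_b, axis=-1, keepdims=True)
    prior = na @ nb.T
    # inject the similarity prior into the attention score
    logits = learned + alpha * prior
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ values_b
```

The sparse variant (sResMatch) would additionally restrict each point's attention to neighbors mined by the same similarity measures, rather than attending over all points in the other image.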
- …