Can SAM Segment Anything? When SAM Meets Camouflaged Object Detection
SAM is a segmentation model recently released by Meta AI Research and has
been gaining attention quickly due to its impressive performance in generic
object segmentation. However, its ability to generalize to specific scenes such
as camouflaged scenes is still unknown. Camouflaged object detection (COD)
involves identifying objects that are seamlessly integrated into their
surroundings and has numerous practical applications in fields such as
medicine, art, and agriculture. In this study, we ask whether SAM can address
the COD task, evaluating its performance on the COD benchmark with
maximum segmentation evaluation and camouflage location evaluation.
We also compare SAM's performance with 22 state-of-the-art COD methods. Our
results indicate that while SAM shows promise in generic object segmentation,
its performance on the COD task is limited. This presents an opportunity for
further research to explore how to build a stronger SAM that may address the
COD task. The results of this paper are provided in
\url{https://github.com/luckybird1994/SAMCOD}
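One common reading of "maximum segmentation evaluation" for a promptable segmenter like SAM is to score each image by the best-matching candidate mask it produces. A minimal sketch under that assumption (all function names here are hypothetical, not the paper's code):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union > 0 else 0.0

def max_segmentation_score(candidate_masks, gt_mask):
    """Score a promptable segmenter by its best candidate mask."""
    return max(iou(m, gt_mask) for m in candidate_masks)

# Toy example: one perfect candidate and one that misses entirely.
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True
good = gt.copy()                                       # IoU 1.0
bad = np.zeros((4, 4), dtype=bool); bad[0, 0] = True   # IoU 0.0
print(max_segmentation_score([bad, good], gt))         # -> 1.0
```

Averaging this best-candidate score over a COD benchmark gives an upper bound on what SAM could achieve with an oracle prompt selector.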
Efficient Approximation Algorithms for Spanning Centrality
Given a graph G = (V, E), the spanning centrality (SC) of an edge e ∈ E
measures the importance of e for G to be connected. In practice,
SC has seen extensive applications in computational biology, electrical
networks, and combinatorial optimization. However, it is highly challenging to
compute the SC of all edges (AESC) on large graphs. Existing techniques fail to
deal with such graphs, as they either suffer from expensive matrix operations
or require sampling numerous long random walks. To circumvent these issues,
this paper proposes TGT and its enhanced version TGT+, two algorithms for AESC
computation that offer rigorous theoretical approximation guarantees. In
particular, TGT remedies the deficiencies of previous solutions by conducting
deterministic graph traversals with carefully-crafted truncated lengths. TGT+
further advances TGT in terms of both empirical efficiency and asymptotic
performance while retaining result quality, based on the combination of TGT
with random walks and several additional heuristic optimizations. We
experimentally evaluate TGT+ against recent competitors for AESC using a
variety of real datasets. The experimental results confirm that TGT+ often
outperforms the state of the art by over an order of magnitude in speed
without degrading accuracy.
Comment: The technical report of the paper entitled 'Efficient Approximation Algorithms for Spanning Centrality' in SIGKDD'2
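For context, the spanning centrality of an edge equals its effective resistance, i.e. the fraction of spanning trees that contain the edge. The exact matrix-based baseline that the abstract describes as too expensive for large graphs can be sketched as (a simplified O(n^3) illustration, not the paper's TGT/TGT+ algorithms):

```python
import numpy as np

def spanning_centrality(n, edges):
    """Exact SC of every edge via the Laplacian pseudoinverse.

    SC(u, v) = effective resistance between u and v
             = Lp[u,u] + Lp[v,v] - 2*Lp[u,v],
    where Lp is the Moore-Penrose pseudoinverse of the graph Laplacian.
    The dense pseudoinverse makes this O(n^3) -- the cost that
    approximation algorithms like TGT/TGT+ are designed to avoid.
    """
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    Lp = np.linalg.pinv(L)
    return {(u, v): Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges}

# Triangle graph: each edge lies in 2 of the 3 spanning trees, so SC = 2/3.
sc = spanning_centrality(3, [(0, 1), (1, 2), (0, 2)])
print(round(sc[(0, 1)], 6))  # -> 0.666667
```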
Is the formal energy of the mid-point rule convergent?
We obtain formulae for calculating the coefficients of four special types of terms in τ^{2k}, k = 1, 2, … (corresponding one-to-one to four types of (2k + 1)-vertex free unlabeled trees), for a fixed step size τ, in the tree expansion of the formal energy of the mid-point rule. We also give an estimate of the difference between the formal energy H̃ and the standard Hamiltonian H in a domain Ω under the assumptions that (i) H is smooth and bounded in Ω, and (ii) the absolute values of the coefficients of the τ^{2k}-terms are uniformly bounded by ησ^{2k} for some constants η ≥ 1, σ > 0 and all k ≥ 1.
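Spelled out, the tree expansion and the coefficient bound take the following standard form (the coefficient symbol c_t and tree-set symbol T_{2k+1} are assumed notation, not taken from the paper):

```latex
\tilde H(\tau) \;=\; H \;+\; \sum_{k \ge 1} \tau^{2k} \sum_{t \in T_{2k+1}} c_t \, H_t,
\qquad |c_t| \;\le\; \eta\,\sigma^{2k} \quad \text{for all } k \ge 1,
```

where T_{2k+1} denotes the set of (2k + 1)-vertex free unlabeled trees and H_t is the elementary Hamiltonian associated with the tree t.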
AMatFormer: Efficient Feature Matching via Anchor Matching Transformer
Learning based feature matching methods have been commonly studied in recent
years. The core issue in learning-based feature matching is how to learn (1)
discriminative representations for feature points (or regions) within each
image and (2) consensus representations for feature points across
images. Recently, self- and cross-attention models have been exploited to
address this issue. However, in many scenes, features are large-scale,
redundant, and contaminated with outliers. Previous
self-/cross-attention models generally conduct message passing over all primal
features, which leads to redundant learning and high computational cost. To
mitigate these limitations, inspired by recent seed matching methods, in this paper,
we propose a novel efficient Anchor Matching Transformer (AMatFormer) for the
feature matching problem. AMatFormer has two main aspects. First, it
conducts self-/cross-attention on a set of anchor features and leverages these
anchor features as a message bottleneck to learn the representations for all
primal features; thus, it can be implemented efficiently and compactly. Second,
AMatFormer adopts a shared FFN module to further embed the features of the two
images into a common domain and thus learn the consensus feature
representations for the matching problem. Experiments on several benchmarks
demonstrate the effectiveness and efficiency of the proposed AMatFormer
matching approach.
Comment: Accepted by IEEE Transactions on Multimedia (TMM) 202
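The anchor-as-bottleneck message passing can be sketched as a two-hop attention: anchors first gather from all primal features, then every feature reads back from the updated anchors, replacing O(N^2) all-pairs attention with O(N·M) for M ≪ N anchors. A simplified single-head, numpy-only illustration (not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    """Scaled dot-product attention: queries q read from keys/values k, v."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def anchor_bottleneck(features, anchors):
    """Two-hop message passing through an anchor bottleneck.

    Hop 1: anchors gather information from all N primal features.
    Hop 2: every primal feature reads back from the M updated anchors.
    Total cost is O(N*M) instead of O(N^2) for M << N.
    """
    anchors = attend(anchors, features, features)   # gather
    return attend(features, anchors, anchors)       # broadcast

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 32))   # N = 100 primal features
anch = feats[:8]                     # M = 8 anchors (e.g. seed matches)
out = anchor_bottleneck(feats, anch)
print(out.shape)  # -> (100, 32)
```

The output keeps the shape of the primal features, so the bottleneck layer can be stacked or dropped into a standard transformer block.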