Geometric Cross-Modal Comparison of Heterogeneous Sensor Data
In this work, we address the problem of cross-modal comparison of aerial data
streams. A variety of simulated automobile trajectories are sensed using two
different modalities: full-motion video, and radio-frequency (RF) signals
received by detectors at various locations. The information represented by the
two modalities is compared using self-similarity matrices (SSMs) corresponding
to time-ordered point clouds in feature spaces of each of these data sources;
we note that these feature spaces can be of entirely different scale and
dimensionality. Several metrics for comparing SSMs are explored, including a
cutting-edge time-warping technique that can simultaneously handle local time
warping and partial matches, while also controlling for the change in geometry
between feature spaces of the two modalities. We note that this technique is
quite general, and does not depend on the choice of modalities. In this
particular setting, we demonstrate that the cross-modal distance between SSMs
corresponding to the same trajectory type is smaller than the cross-modal
distance between SSMs corresponding to distinct trajectory types, and we
quantify this observation via precision-recall metrics in our experiments.
Finally, we comment on promising implications of these ideas for future
integration into multiple-hypothesis tracking systems.
Comment: 10 pages, 13 figures, Proceedings of IEEE Aeroconf 201
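
The core construction above, building a self-similarity matrix (SSM) from a time-ordered point cloud so that feature spaces of different scale and dimensionality become comparable, can be sketched in a few lines. The following is a minimal illustration, assuming equal-length sequences and using a normalized Euclidean SSM with a Frobenius comparison as a stand-in metric; the paper's actual time-warping metric additionally handles local warping and partial matches.

# Minimal sketch: build self-similarity matrices (SSMs) for two
# time-ordered point clouds and compare them. Feature dimensions may
# differ across modalities; the SSMs are always T x T, which is what
# makes cross-modal comparison possible. The normalization and the
# Frobenius comparison are illustrative choices, not the paper's metric.
import numpy as np

def self_similarity_matrix(X: np.ndarray) -> np.ndarray:
    """Pairwise Euclidean distance matrix of a (T, d) point cloud."""
    sq = np.sum(X**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    D = np.sqrt(np.maximum(D2, 0.0))
    # Normalize to control for scale differences between feature spaces.
    return D / D.max() if D.max() > 0 else D

def ssm_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Frobenius distance between two same-length SSMs (illustrative)."""
    return float(np.linalg.norm(A - B))

# Same trajectory sensed by two modalities of different dimensionality.
T = 200
video_feats = np.random.randn(T, 128)   # e.g., video-derived features
rf_feats = np.random.randn(T, 8)        # e.g., RF detector features
d = ssm_distance(self_similarity_matrix(video_feats),
                 self_similarity_matrix(rf_feats))
print(f"cross-modal SSM distance: {d:.3f}")

Because both SSMs are T x T regardless of whether the underlying features are 128-dimensional video descriptors or 8-dimensional RF measurements, the comparison never requires aligning the two feature spaces directly.
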
NTU RGB+D 120: A Large-Scale Benchmark for 3D Human Activity Understanding
Research on depth-based human activity analysis has achieved outstanding
performance and demonstrated the effectiveness of 3D representations for
action recognition. However, the existing depth-based and RGB+D-based action
recognition benchmarks have a number of limitations, including the lack of
large-scale training samples, a realistic number of distinct class categories,
diversity in camera views, varied environmental conditions, and a variety of
human subjects.
In this work, we introduce a large-scale dataset for RGB+D human action
recognition, which is collected from 106 distinct subjects and contains more
than 114 thousand video samples and 8 million frames. This dataset contains 120
different action classes including daily, mutual, and health-related
activities. We evaluate the performance of a series of existing 3D activity
analysis methods on this dataset, and show the advantage of applying deep
learning methods for 3D-based human action recognition. Furthermore, we
investigate a novel one-shot 3D activity recognition problem on our dataset,
and a simple yet effective Action-Part Semantic Relevance-aware (APSR)
framework is proposed for this task, which yields promising results for
recognition of the novel action classes. We believe the introduction of this
large-scale dataset will enable the community to apply, adapt, and develop
various data-hungry learning techniques for depth-based and RGB+D-based human
activity understanding. [The dataset is available at:
http://rose1.ntu.edu.sg/Datasets/actionRecognition.asp]
Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
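
As a rough illustration of the one-shot protocol described above: each novel class is represented by a single exemplar, and a test sample is assigned to the class of the nearest exemplar in an embedding space. The sketch below uses a placeholder mean-pooling embedding and cosine similarity; the APSR framework's actual architecture is not specified in the abstract, so everything beyond the protocol itself is an assumption.

# Minimal sketch of one-shot 3D activity recognition: nearest-exemplar
# matching in an embedding space. The embedding is a stand-in; a trained
# network (e.g., the APSR framework) would replace it.
import numpy as np

def embed(skeleton_seq: np.ndarray) -> np.ndarray:
    """Placeholder embedding: flatten joints, mean-pool over time."""
    return skeleton_seq.reshape(skeleton_seq.shape[0], -1).mean(axis=0)

def one_shot_classify(query, exemplars):
    """Assign the query to the novel class whose single exemplar
    embedding is closest under cosine similarity."""
    q = embed(query)
    q = q / np.linalg.norm(q)
    best_label, best_sim = None, -np.inf
    for label, ex in exemplars.items():
        e = embed(ex)
        e = e / np.linalg.norm(e)
        sim = float(q @ e)
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label, best_sim

# Toy data: (frames, joints, xyz) skeleton sequences, one per novel class.
rng = np.random.default_rng(0)
exemplars = {f"novel_action_{i}": rng.normal(size=(64, 25, 3)) for i in range(5)}
query = rng.normal(size=(64, 25, 3))
print(one_shot_classify(query, exemplars))
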
HM-ViT: Hetero-modal Vehicle-to-Vehicle Cooperative Perception with Vision Transformer
Vehicle-to-Vehicle technologies have enabled autonomous vehicles to share
information to see through occlusions, greatly enhancing perception
performance. Nevertheless, existing works have all focused on homogeneous
traffic in which vehicles are equipped with the same type of sensors, which
significantly limits the scale of collaboration and the benefit of
cross-modality interactions.
In this paper, we investigate the multi-agent hetero-modal cooperative
perception problem where agents may have distinct sensor modalities. We present
HM-ViT, the first unified multi-agent hetero-modal cooperative perception
framework that can collaboratively predict 3D objects for highly dynamic
vehicle-to-vehicle (V2V) collaborations with varying numbers and types of
agents. To effectively fuse features from multi-view images and LiDAR point
clouds, we design a novel heterogeneous 3D graph transformer to jointly reason
over inter-agent and intra-agent interactions. Extensive experiments on the V2V
perception dataset OPV2V demonstrate that HM-ViT outperforms state-of-the-art
cooperative perception methods for V2V hetero-modal cooperative perception. We
will release the code to facilitate future research.
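
The abstract does not detail the heterogeneous 3D graph transformer, but the general hetero-modal fusion pattern it describes, per-modality encoders projecting camera and LiDAR features into a shared token space over which a transformer attends jointly across agents, can be sketched as follows. The layer sizes, modality embeddings, and single-attention-layer layout are illustrative assumptions, not the published HM-ViT design.

# Illustrative hetero-modal fusion sketch (PyTorch). Each agent
# contributes tokens from its own sensor type; modality-specific
# projections map them into a shared space, and one attention layer
# models inter-agent and intra-agent interactions together.
import torch
import torch.nn as nn

class HeteroModalFusion(nn.Module):
    def __init__(self, cam_dim=256, lidar_dim=64, d_model=128, n_heads=4):
        super().__init__()
        # Modality-specific projections into a shared token space.
        self.proj = nn.ModuleDict({
            "camera": nn.Linear(cam_dim, d_model),
            "lidar": nn.Linear(lidar_dim, d_model),
        })
        # Learned modality embeddings let attention distinguish sensor types.
        self.mod_emb = nn.ParameterDict({
            m: nn.Parameter(torch.zeros(d_model)) for m in ("camera", "lidar")
        })
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, agent_tokens):
        # agent_tokens: list of (modality, tensor of shape (N_i, feat_dim)),
        # one entry per agent; agent count and sensor types may vary.
        toks = [self.proj[m](x) + self.mod_emb[m] for m, x in agent_tokens]
        x = torch.cat(toks, dim=0).unsqueeze(0)  # (1, sum N_i, d_model)
        fused, _ = self.attn(x, x, x)            # joint attention over all tokens
        return fused.squeeze(0)

fusion = HeteroModalFusion()
out = fusion([("camera", torch.randn(50, 256)), ("lidar", torch.randn(80, 64))])
print(out.shape)  # torch.Size([130, 128])

Tagging each token with a learned modality embedding lets a single attention layer handle an arbitrary mix of camera-only, LiDAR-only, and mixed-sensor agents, matching the varying agent counts and types the framework targets.
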