Geo-Social Group Queries with Minimum Acquaintance Constraint
The prosperity of location-based social networking services enables
geo-social group queries for group-based activity planning and marketing. This
paper proposes a new family of geo-social group queries with minimum
acquaintance constraint (GSGQs), which improve on existing geo-social group
queries by producing a cohesive group with a guaranteed worst-case
acquaintance level. GSGQs, also specified with various spatial
constraints, are more complex than conventional spatial queries; particularly,
those with a strict NN spatial constraint are proved to be NP-hard. For
efficient processing of general GSGQs on large location-based social
networks, we devise two social-aware index structures, namely SaR-tree and
SaR*-tree. The latter features a novel clustering technique that considers both
spatial and social factors. Based on SaR-tree and SaR*-tree, efficient
algorithms are developed to process various GSGQs. Extensive experiments on
real-world Gowalla and Dianping datasets show that our proposed methods
substantially outperform the baseline algorithms based on R-tree.
Comment: This is the preprint version accepted by the Very Large Data Bases Journal.
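The minimum acquaintance constraint above can be made concrete: a candidate group is feasible only if every member knows at least c other members of the group. A minimal Python sketch of that feasibility check (function and variable names are illustrative, not from the paper):

```python
# Hypothetical sketch of the GSGQ "minimum acquaintance constraint":
# every member of a candidate group must know at least c other members.

def check_min_acquaintance(group, friends, c):
    """Return True iff each member of `group` has >= c acquaintances in `group`.

    group:   iterable of user ids
    friends: dict mapping user id -> set of friend ids (the social graph)
    c:       minimum acquaintance level to guarantee
    """
    members = set(group)
    return all(len(friends.get(u, set()) & (members - {u})) >= c
               for u in members)


# Toy social graph: 1, 2, 3 form a triangle; 4 only knows 1.
friends = {
    1: {2, 3, 4},
    2: {1, 3},
    3: {1, 2},
    4: {1},
}

print(check_min_acquaintance([1, 2, 3], friends, 2))     # triangle satisfies c=2
print(check_min_acquaintance([1, 2, 3, 4], friends, 2))  # member 4 violates c=2
```

This check is only the verification step; the hard part the paper addresses is searching the combined spatial/social space efficiently, which the SaR-tree and SaR*-tree indexes are designed for.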
FastHuman: Reconstructing High-Quality Clothed Human in Minutes
We propose an approach for optimizing high-quality clothed human body shapes
in minutes, using multi-view posed images. While traditional neural rendering
methods struggle to disentangle geometry and appearance using only rendering
loss, and are computationally intensive, our method uses a mesh-based patch
warping technique to ensure multi-view photometric consistency, and spherical
harmonics (SH) illumination to refine geometric details efficiently. We employ
an oriented point cloud shape representation with SH shading, which significantly
reduces optimization and rendering times compared to implicit methods. Our
approach has demonstrated promising results on both synthetic and real-world
datasets, making it an effective solution for rapidly generating high-quality
human body shapes. Project page:
\href{https://l1346792580123.github.io/nccsfs/}{https://l1346792580123.github.io/nccsfs/}
Comment: International Conference on 3D Vision, 3DV 202
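The SH illumination step can be illustrated with a minimal sketch: irradiance at a surface point is approximated as a dot product between SH lighting coefficients and the SH basis evaluated at the point's normal. This is a generic first-order SH shader, not FastHuman's implementation; the lighting coefficients below are made up for illustration:

```python
import numpy as np

def sh_basis_order1(n):
    """First-order (4-term) real SH basis evaluated at unit normals n, shape (N, 3)."""
    x, y, z = n[:, 0], n[:, 1], n[:, 2]
    c0 = 0.282095  # Y_0^0 constant
    c1 = 0.488603  # Y_1^{-1}, Y_1^0, Y_1^1 constant
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

def sh_shade(albedo, normals, sh_coeffs):
    """Shade per-point albedo (N, 3) with SH lighting coefficients (4,)."""
    irradiance = sh_basis_order1(normals) @ sh_coeffs  # (N,)
    return albedo * irradiance[:, None]

# Two oriented points: one facing +z (toward the light), one facing +x.
normals = np.array([[0.0, 0.0, 1.0],
                    [1.0, 0.0, 0.0]])
albedo = np.ones((2, 3))
light = np.array([1.0, 0.0, 0.8, 0.0])  # illustrative: ambient + light from +z
shaded = sh_shade(albedo, normals, light)
print(shaded)  # the +z-facing point comes out brighter
```

Because shading reduces to one small matrix product per point, fitting SH coefficients (and back-propagating to normals) is far cheaper than evaluating a neural radiance field, which is consistent with the speed claim in the abstract.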
Fine-Grained Spatiotemporal Motion Alignment for Contrastive Video Representation Learning
As the most essential property in a video, motion information is critical to
a robust and generalized video representation. To inject motion dynamics,
recent works have adopted frame difference as the source of motion information
in video contrastive learning, considering the trade-off between quality and
cost. However, existing works align motion features at the instance level,
which suffers from weak spatial and temporal alignment across modalities. In
this paper, we present a \textbf{Fi}ne-grained \textbf{M}otion
\textbf{A}lignment (FIMA) framework, capable of introducing well-aligned and
significant motion information. Specifically, we first develop a dense
contrastive learning framework in the spatiotemporal domain to generate
pixel-level motion supervision. Then, we design a motion decoder and a
foreground sampling strategy to eliminate the weak alignments in terms of time
and space. Moreover, a frame-level motion contrastive loss is presented to
improve the temporal diversity of the motion features. Extensive experiments
demonstrate that the representations learned by FIMA possess strong
motion awareness and achieve state-of-the-art or competitive
results on downstream tasks across the UCF101, HMDB51, and Diving48 datasets. Code
is available at \url{https://github.com/ZMHH-H/FIMA}.
Comment: ACM MM 2023 Camera Ready
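The two ingredients the abstract names, frame difference as a cheap motion source and a contrastive objective, can be sketched as follows. The InfoNCE loss is the standard contrastive formulation; the pooled feature and toy data are illustrative, not FIMA's actual pixel-level supervision:

```python
import numpy as np

def frame_difference(clip):
    """clip: (T, H, W) grayscale frames -> (T-1, H, W) motion maps."""
    return np.abs(np.diff(clip, axis=0))

def info_nce(query, keys, temperature=0.1):
    """Standard InfoNCE: query (D,), keys (K, D) with keys[0] the positive."""
    q = query / np.linalg.norm(query)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = k @ q / temperature
    logits -= logits.max()                        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

rng = np.random.default_rng(0)
clip = rng.random((4, 8, 8))                      # toy 4-frame clip
motion = frame_difference(clip)                   # (3, 8, 8) motion maps
feat = motion.reshape(3, -1).mean(axis=0)         # crude pooled motion feature
pos = feat + 0.01 * rng.standard_normal(feat.shape)  # augmented positive view
negs = rng.random((5, feat.size))                 # unrelated negatives
loss = info_nce(feat, np.vstack([pos, negs]))
print(float(loss))
```

FIMA's contribution is replacing this instance-level pooling with dense, pixel-level motion supervision plus a motion decoder and foreground sampling, precisely to remove the spatial and temporal misalignment that pooling introduces.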