The Visual Social Distancing Problem
One of the main and most effective measures to contain the recent viral
outbreak is the maintenance of so-called Social Distancing (SD). To comply
with this constraint, workplaces, public institutions, transport systems and
schools will likely adopt restrictions on the minimum inter-personal distance
between people. In this scenario, it is crucial to measure compliance with
this physical constraint at scale, in order to understand the reasons behind
possible violations of the distance limit and to assess whether a violation
poses a threat given the scene context, all while complying with privacy
policies and keeping the measurement socially acceptable. To this end, we
introduce the Visual Social Distancing (VSD) problem, defined as the automatic
estimation of inter-personal distance from an image, together with the
characterization of the related people aggregations. VSD is pivotal for a
non-invasive analysis of whether people comply with the SD restriction, and
for providing statistics about the level of safety of specific areas whenever
this constraint is violated. We then discuss how VSD relates to previous
literature in Social Signal Processing and indicate which existing Computer
Vision methods can be used to address the problem. We conclude with future
challenges related to the effectiveness of VSD systems, ethical implications
and future application scenarios.
Comment: 9 pages, 5 figures. All authors contributed equally to this
manuscript and are listed in alphabetical order. Under submission.
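The core VSD measurement can be sketched concretely: given a ground-plane homography obtained from camera calibration (an assumption here, not a method described in the abstract), people's foot points detected in the image can be projected to metric ground coordinates and compared. The helper names below are hypothetical.

```python
import numpy as np

def to_ground(H, pt):
    """Map an image point (pixels) to ground-plane coordinates (metres)
    via a 3x3 homography H; H is assumed to come from calibration."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]  # dehomogenize

def interpersonal_distance(H, feet_a, feet_b):
    """Euclidean ground-plane distance between two people's foot points."""
    return float(np.linalg.norm(to_ground(H, feet_a) - to_ground(H, feet_b)))
```

A VSD system would then flag pairs whose distance falls below the mandated threshold (e.g. 1–2 metres, depending on local policy).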
Rethinking Object Detection in Retail Stores
The conventional standard for object detection uses a bounding box to
represent each individual object instance. However, this is impractical in
industry-relevant applications such as warehouses, due to severe occlusions
among groups of instances of the same category. In this paper, we propose a
new task, i.e., simultaneous object localization and counting, abbreviated as
Locount, which requires algorithms to localize groups of objects of interest
together with the number of instances in each group. However, no dataset or
benchmark has been designed for such a task. To this end, we collect a
large-scale object localization and counting dataset with rich annotations in
retail stores, which consists of 50,394 images with more than 1.9 million
object instances in 140 categories. Together with this dataset, we provide a
new evaluation protocol and divide the training and testing subsets to fairly
evaluate the performance of algorithms for Locount, establishing a new
benchmark for the task. Moreover, we present a cascaded localization and
counting network as a strong baseline, which gradually classifies and
regresses the bounding boxes of objects together with the predicted number of
instances enclosed in each box, trained in an end-to-end manner. Extensive
experiments on the proposed dataset demonstrate its significance, and analysis
of failure cases indicates future directions. The dataset is available at
https://isrc.iscas.ac.cn/gitlab/research/locount-dataset
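A Locount-style annotation pairs a group bounding box with an instance count. The sketch below illustrates one plausible matching criterion (category and exact count must agree, plus box overlap); the paper's actual evaluation protocol may score count errors more gradually, so treat this as an illustrative variant, not the published metric.

```python
from dataclasses import dataclass

@dataclass
class GroupAnnotation:
    box: tuple      # (x1, y1, x2, y2) enclosing a group of instances
    category: str   # product category of the group
    count: int      # number of instances inside the box

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def matches(pred, gt, iou_thr=0.5):
    """Hypothetical criterion: correct category, exact count, box IoU >= thr."""
    return (pred.category == gt.category
            and pred.count == gt.count
            and iou(pred.box, gt.box) >= iou_thr)
```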
Explicit Attention-Enhanced Fusion for RGB-Thermal Perception Tasks
Recently, RGB-Thermal based perception has shown significant advances.
Thermal information provides useful clues when visual cameras suffer from poor
lighting conditions, such as low light and fog. However, how to effectively
fuse RGB images and thermal data remains an open challenge. Previous works
rely on naive fusion strategies such as merging the modalities at the input,
concatenating multi-modality features inside models, or applying attention to
each modality separately. These fusion strategies are straightforward yet
insufficient. In this paper, we propose a novel fusion method named Explicit
Attention-Enhanced Fusion (EAEF) that fully exploits each type of data.
Specifically, we consider three cases: i) both RGB and thermal data generate
discriminative features, ii) only one of them does, and iii) neither does.
EAEF uses one branch to enhance feature extraction for cases i) and iii), and
another branch to remedy insufficient representations for case ii). The
outputs of the two branches are fused to form complementary features. As a
result, the proposed fusion method outperforms the state-of-the-art by 1.6% in
mIoU on semantic segmentation, 3.1% in MAE on salient object detection, 2.3%
in mAP on object detection, and 8.1% in MAE on crowd counting. The code is
available at https://github.com/FreeformRobotics/EAEFNet
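To make the fusion idea tangible, here is a toy channel-attention fusion in numpy. This is not the published EAEF architecture (which uses two dedicated branches inside a deep network); it is a minimal sketch of the general principle that per-channel gates weight each modality's contribution before summation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(f_rgb, f_th):
    """Toy attention-weighted fusion of RGB and thermal feature maps,
    each shaped (C, H, W). Not the published EAEF module."""
    # global-average-pool each modality into a per-channel descriptor (C,)
    g_rgb = f_rgb.mean(axis=(1, 2))
    g_th = f_th.mean(axis=(1, 2))
    # per-channel attention gates in (0, 1)
    a_rgb = sigmoid(g_rgb)[:, None, None]
    a_th = sigmoid(g_th)[:, None, None]
    # gate each modality and sum into a fused feature map
    return a_rgb * f_rgb + a_th * f_th
```

In a real network the gates would be learned (e.g. via small MLPs), and EAEF additionally routes features through an enhancement branch and a compensation branch depending on which modality is discriminative.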
Weakly Supervised Video Salient Object Detection via Point Supervision
Video salient object detection models trained on pixel-wise dense annotations
have achieved excellent performance, yet obtaining pixel-by-pixel annotated
datasets is laborious. Several works attempt to use scribble annotations to
mitigate this problem, but point supervision, a more labor-saving annotation
method (indeed, the most labor-saving among manual annotation methods for
dense prediction), has not been explored. In this paper, we propose a strong
baseline model based on point supervision. To infer saliency maps with
temporal information, we mine inter-frame complementary information from
short-term and long-term perspectives. Specifically, we propose a hybrid token
attention module, which mixes optical flow and image information from
orthogonal directions, adaptively highlighting critical optical-flow
information (channel dimension) and critical token information (spatial
dimension). To exploit long-term cues, we develop a Long-term Cross-Frame
Attention (LCFA) module, which assists the current frame in inferring salient
objects based on multi-frame tokens. Furthermore, we construct two
point-supervised datasets, P-DAVIS and P-DAVSOD, by relabeling the DAVIS and
DAVSOD datasets. Experiments on six benchmark datasets show that our method
outperforms previous state-of-the-art weakly supervised methods and is even
comparable with some fully supervised approaches. Source code and datasets
are available.
Comment: accepted by ACM MM 202
Deep learning in crowd counting: A survey
Counting high-density objects quickly and accurately is a popular area of research. Crowd counting has significant social and economic value and is a major focus in artificial intelligence. Despite many advancements in this field, many of them are not widely known, especially in terms of research data. The authors propose a three-tier standardised dataset taxonomy (TSDT), which divides datasets into small-scale, large-scale and hyper-scale according to different application scenarios. This taxonomy can help researchers make more efficient use of datasets and improve the performance of AI algorithms in specific fields. Additionally, the authors propose a new evaluation index for the clarity of a dataset: the average pixels occupied by each object (APO). This index is more suitable than image resolution for evaluating dataset clarity in object counting tasks.

Moreover, the authors classify crowd counting methods from a data-driven perspective into multi-scale networks, single-column networks, multi-column networks, multi-task networks, attention networks and weakly-supervised networks, and introduce the classic crowd counting methods of each class. The authors classify the existing 36 datasets according to the TSDT and discuss and evaluate these datasets. They also evaluate the performance of more than 100 methods from the past five years on popular datasets at different levels.

Recently, progress in research on small-scale datasets has slowed down; there are few new datasets and algorithms at this scale. Studies focused on large- or hyper-scale datasets appear to be reaching a saturation point, and the combined use of multiple approaches has begun to emerge as a major research direction. The authors discuss the theoretical and practical challenges of crowd counting from the perspectives of data, algorithms and computing resources.
The field of crowd counting is moving towards combining multiple methods and requires fresh, targeted datasets. Despite advancements, the field still faces challenges such as handling real-world scenarios and processing large crowds in real time. Researchers are exploring transfer learning to overcome the limitations of small datasets. The development of effective algorithms for crowd counting remains a challenging and important task in computer vision and AI, with many opportunities for future research.
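The survey's APO index can be written down directly. The abstract does not give the exact formula, so the sketch below assumes the simplest reading, image area divided by the number of annotated objects; the survey's precise definition may instead use annotated object regions.

```python
def average_pixels_per_object(image_width, image_height, object_count):
    """APO ('average pixels occupied by each object'), reconstructed under
    the assumption: total image pixels divided by the annotated object
    count. Higher APO suggests larger, clearer objects per image."""
    if object_count <= 0:
        raise ValueError("object_count must be positive")
    return image_width * image_height / object_count
```

Under this reading, a 1080p image containing 100 annotated people has an APO of about 20,000 pixels per object, while the same image with 2,000 people drops to roughly 1,000, signalling a much less "clear" counting dataset.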