DecideNet: Counting Varying Density Crowds Through Attention Guided Detection and Density Estimation
In real-world crowd counting applications, crowd densities vary greatly in both
the spatial and temporal domains. A detection-based counting method estimates
crowds accurately in low-density scenes, but its reliability degrades in
congested areas. A regression-based approach, on the other hand, captures the
general density information in crowded regions; without knowing the location of
each person, however, it tends to overestimate the count in low-density areas.
Thus, exclusively using either one is insufficient to handle all kinds of
scenes with varying densities. To address this issue, a novel end-to-end crowd
counting framework, named DecideNet (DEteCtIon and Density Estimation Network),
is proposed. It can adaptively decide the appropriate counting mode for
different locations in the image based on the real density conditions.
DecideNet starts by estimating the crowd density with detection-based and
regression-based density maps generated separately. To capture the inevitable
variation in densities, it incorporates an attention module that adaptively
assesses the reliability of the two types of estimation. The final crowd counts
are obtained under the guidance of the attention module, which adopts the more
suitable estimate from the two kinds of density maps at each location.
Experimental results show that our method achieves state-of-the-art performance
on three challenging crowd counting datasets.
Comment: CVPR 201
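The core idea of the abstract, letting an attention map pick between a detection-based and a regression-based density map per location, can be illustrated with a minimal numpy sketch. The convex-combination fusion rule and all names below are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def fuse_density_maps(det_map, reg_map, attention):
    """Attention-guided fusion of detection- and regression-based density
    maps (a minimal sketch of the idea behind DecideNet; the convex
    combination used here is an assumption, not the paper's exact layer).

    attention: per-pixel weights in [0, 1]; values near 1 favour the
    detection branch (reliable in sparse regions), values near 0 favour
    the regression branch (reliable in congested regions).
    """
    assert det_map.shape == reg_map.shape == attention.shape
    return attention * det_map + (1.0 - attention) * reg_map

# Toy 2x2 example: left column is sparse (trust detection),
# right column is dense (trust regression).
det = np.array([[1.0, 0.0], [1.0, 0.0]])
reg = np.array([[1.4, 2.0], [1.3, 3.0]])
att = np.array([[1.0, 0.0], [1.0, 0.0]])
fused = fuse_density_maps(det, reg, att)
count = fused.sum()  # total count = integral of the fused density map
```

Summing the fused map yields the final count, matching the convention that a density map integrates to the number of people.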
Focus for Free in Density-Based Counting
This work considers supervised learning to count from images and their
corresponding point annotations. Where density-based counting methods typically
use the point annotations only to create Gaussian density maps, which act as
the supervision signal, the starting point of this work is that point
annotations have counting potential beyond density map generation. We introduce
two methods that repurpose the available point annotations to enhance counting
performance. The first is a counting-specific augmentation that leverages point
annotations to simulate occluded objects in both input and density images to
enhance the network's robustness to occlusions. The second method, foreground
distillation, generates foreground masks from the point annotations, from which
we train an auxiliary network on images with blacked-out backgrounds. By doing
so, it learns to extract foreground counting knowledge without interference
from the background. These methods can be seamlessly integrated with existing
counting advances and are adaptable to different loss functions. We demonstrate
complementary effects of the approaches, allowing us to achieve robust counting
results even in challenging scenarios such as background clutter, occlusion,
and varying crowd densities. Our proposed approach achieves strong counting
results on multiple datasets, including ShanghaiTech Part_A and Part_B,
UCF_QNRF, JHU-Crowd++, and NWPU-Crowd.
Comment: 18 page
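The Gaussian density maps that this abstract says are typically built from point annotations can be sketched as follows. The fixed kernel width and function names below are illustrative assumptions; many counting methods use geometry-adaptive kernels instead:

```python
import numpy as np

def points_to_density_map(points, shape, sigma=2.0):
    """Render point annotations as a Gaussian density map whose integral
    equals the object count -- the usual supervision signal for
    density-based counting. A fixed sigma is an illustrative choice.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    density = np.zeros(shape, dtype=np.float64)
    for (px, py) in points:
        g = np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2.0 * sigma ** 2))
        density += g / g.sum()  # normalise so each person contributes 1
    return density

# Two annotated people in a 32x32 image.
dmap = points_to_density_map([(8, 8), (20, 12)], shape=(32, 32), sigma=2.0)
```

Because each kernel is normalised over the image grid, the map's sum recovers the annotated count, which is what makes such maps usable as a counting target.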