
    Ambient Sound Helps: Audiovisual Crowd Counting in Extreme Conditions

    Visual crowd counting has recently been studied as a way to count people in crowd scenes from images. Albeit successful, vision-based crowd counting approaches can fail to capture informative features in extreme conditions, e.g., imaging at night or under occlusion. In this work, we introduce a novel task of audiovisual crowd counting, in which visual and auditory information are integrated for counting purposes. We collect a large-scale benchmark, named the auDiovISual Crowd cOunting (DISCO) dataset, consisting of 1,935 images, the corresponding audio clips, and 170,270 annotated instances. To fuse the two modalities, we use a linear feature-wise fusion module that carries out an affine transformation on visual and auditory features. Finally, we conduct extensive experiments using the proposed dataset and approach. Experimental results show that introducing auditory information can benefit crowd counting under different illumination, noise, and occlusion conditions. Code and data have been made available.
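The affine feature-wise fusion step described above can be sketched in a few lines. This is a minimal NumPy illustration under assumptions, not the authors' implementation: the function name `film_fuse` and the projection matrices `W_gamma`, `W_beta` are hypothetical. The idea is that the audio embedding predicts a per-channel scale and shift that modulate the visual feature map.

```python
import numpy as np

def film_fuse(visual_feat, audio_feat, W_gamma, W_beta):
    """Feature-wise affine fusion: the audio embedding predicts a
    per-channel scale (gamma) and shift (beta), applied to the
    (C, H, W) visual feature map by broadcasting over space."""
    gamma = audio_feat @ W_gamma  # (C,) scale per visual channel
    beta = audio_feat @ W_beta    # (C,) shift per visual channel
    return gamma[:, None, None] * visual_feat + beta[:, None, None]

rng = np.random.default_rng(0)
C, H, W, A = 8, 4, 4, 16          # channels, height, width, audio dim
v = rng.normal(size=(C, H, W))    # visual feature map
a = rng.normal(size=(A,))         # audio embedding
fused = film_fuse(v, a, rng.normal(size=(A, C)), rng.normal(size=(A, C)))
```

Because the transformation is linear in the audio-derived parameters, the module adds negligible cost on top of the two backbones while letting sound re-weight visual channels per scene.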

    Investigating compositional visual knowledge through challenging visual tasks

    Human vision manifests remarkable robustness in recognizing objects from a visual world filled with a chaotic, dynamic assortment of information. Computationally, our visual system is challenged by the enormous variability in two-dimensional projected images as a function of viewpoint, lighting, material, and articulation, as well as occlusion. Much past research has investigated the underlying representations and computational principles that support this robustness using controlled and simplified visual stimuli. Nevertheless, the generality of those findings remains unclear until they are tested on more challenging and more naturalistic stimuli. In this thesis, I study human vision robustness with several challenging visual tasks and more naturalistic stimuli, including the recognition of occluded objects and the recognition of non-rigid human bodies from natural images of scenes. I use psychophysics, functional magnetic resonance imaging, and computational modeling to measure human vision robustness and examine the hierarchical, compositional framework as the underlying principle, in which the representation of the whole is composed of the representations of its parts across different hierarchies. I show that human vision has an impressive ability to recognize heavily occluded natural objects, and that human behavioral performance is better explained by compositional models than by standard deep convolutional neural networks. In addition, I show that human vision can rapidly and robustly extract information about spatial relationships between human body parts and discriminate three-dimensional non-rigid human poses even from a mere glance. Lastly, I show that a distributed cortical network encodes compositional pose representations with differing view invariance and depth sensitivity, and that the differences in these neural representations may be driven by the diversity of the behavioral tasks they support.
    Taken together, this thesis demonstrates that human vision manifests great robustness even in these challenging visual tasks, and that the hierarchical, compositional framework may be one of the underlying principles supporting such robustness.

    CoKe: Localized Contrastive Learning for Robust Keypoint Detection

    Today's most popular approaches to keypoint detection involve very complex network architectures that aim to learn holistic representations of all keypoints. In this work, we take a step back and ask: Can we simply learn a local keypoint representation from the output of a standard backbone architecture? This would make the network simpler and more robust, particularly when large parts of the object are occluded. We demonstrate that this is possible by looking at the problem from the perspective of representation learning. Specifically, the keypoint kernels need to be chosen to optimize three types of distances in the feature space: features of the same keypoint should be similar to each other, while differing from those of other keypoints, and also being distinct from features of the background clutter. We formulate this optimization within a framework we call CoKe, which includes supervised contrastive learning. CoKe makes several approximations to enable the representation learning process on large datasets. In particular, we introduce a clutter bank to approximate non-keypoint features, and a momentum update to compute the keypoint representation while training the feature extractor. Our experiments show that CoKe achieves state-of-the-art results compared to approaches that jointly represent all keypoints holistically (Stacked Hourglass Networks, MSS-Net) as well as to approaches that are supervised by detailed 3D object geometry (StarMap). Moreover, CoKe is robust and performs exceptionally well when objects are partially occluded, and significantly outperforms related work on a range of diverse datasets (PASCAL3D+, MPII, ObjectNet3D).
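The three distance types, the clutter bank, and the momentum update can be sketched as an InfoNCE-style objective. This is a hedged toy reconstruction, not the CoKe code: the names `coke_loss`, `momentum_update`, and the temperature value are assumptions; all vectors are unit-normalized so dot products act as similarities.

```python
import numpy as np

def momentum_update(prototype, new_feat, m=0.9):
    """Moving estimate of a keypoint representation while the
    backbone is still training (re-normalized to unit length)."""
    p = m * prototype + (1.0 - m) * new_feat
    return p / np.linalg.norm(p)

def coke_loss(feat, own_proto, other_protos, clutter_bank, tau=0.07):
    """InfoNCE-style loss over the three distance types: pull `feat`
    toward its own keypoint prototype, push it away from the other
    keypoint prototypes and from clutter-bank features."""
    pos = np.exp(feat @ own_proto / tau)
    negs = np.exp(np.vstack([other_protos, clutter_bank]) @ feat / tau).sum()
    return -np.log(pos / (pos + negs))

def unit(v):
    return v / np.linalg.norm(v)

rng = np.random.default_rng(1)
proto = unit(rng.normal(size=16))                              # own keypoint
others = np.stack([unit(rng.normal(size=16)) for _ in range(5)])
clutter = np.stack([unit(rng.normal(size=16)) for _ in range(8)])
aligned_loss = coke_loss(proto, proto, others, clutter)        # well-matched feature
random_loss = coke_loss(unit(rng.normal(size=16)), proto, others, clutter)
```

A feature aligned with its own prototype incurs a much smaller loss than a random one, which is exactly the pressure that makes each keypoint's local representation distinctive against other keypoints and clutter.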

    Compositional Convolutional Neural Networks: A Deep Architecture with Innate Robustness to Partial Occlusion

    Recent findings show that deep convolutional neural networks (DCNNs) do not generalize well under partial occlusion. Inspired by the success of compositional models at classifying partially occluded objects, we propose to integrate compositional models and DCNNs into a unified deep model with innate robustness to partial occlusion. We term this architecture Compositional Convolutional Neural Network. In particular, we propose to replace the fully connected classification head of a DCNN with a differentiable compositional model. The generative nature of the compositional model enables it to localize occluders and subsequently focus on the non-occluded parts of the object. We conduct classification experiments on artificially occluded images as well as real images of partially occluded objects from the MS-COCO dataset. The results show that DCNNs do not classify occluded objects robustly, even when trained with data that is strongly augmented with partial occlusions. Our proposed model outperforms standard DCNNs by a large margin at classifying partially occluded objects, even when it has not been exposed to occluded objects during training. Additional experiments demonstrate that CompositionalNets can also localize occluders accurately, despite being trained with class labels only. The code used in this work is publicly available. Comment: CVPR 2020; Code is available at https://github.com/AdamKortylewski/CompositionalNets; Supplementary material: https://adamkortylewski.com/data/compnet_supp.pd
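The core idea of replacing a fully connected head with a generative, occlusion-aware scoring model can be sketched as follows. This is a drastic simplification of CompositionalNets under stated assumptions, with hypothetical names: each spatial feature is scored under class-specific and generic-occluder models (vMF-like log-likelihoods, i.e., scaled cosine similarities), and positions best explained by the occluder model are "explained away" rather than counted against a class.

```python
import numpy as np

def compositional_score(feat_map, class_means, occluder_mean, kappa=20.0):
    """Score a (C, H, W) feature map under K class models and one
    occluder model. Per position, the occluder model can absorb
    occluded features, so class evidence comes from visible parts."""
    C, H, W = feat_map.shape
    f = feat_map.reshape(C, -1)
    f = f / np.linalg.norm(f, axis=0, keepdims=True)    # unit features
    fg = kappa * (class_means @ f)       # (K, H*W) per-class scores
    occ = kappa * (occluder_mean @ f)    # (H*W,) occluder scores
    occ_map = (occ > fg.max(axis=0)).reshape(H, W)      # occluder wins here
    scores = np.maximum(fg, occ[None, :]).mean(axis=1)  # occlusion-robust scores
    return scores, occ_map

# Toy example: positions 0-1 look like class 0, positions 2-3 like an occluder.
means = np.array([[1., 0., 0., 0.],     # class 0 mean feature
                  [0., 1., 0., 0.]])    # class 1 mean feature
occ_mean = np.array([0., 0., 1., 0.])
fm = np.stack([means[0], means[0], occ_mean, occ_mean], axis=1).reshape(4, 1, 4)
scores, occ_map = compositional_score(fm, means, occ_mean)
```

In this toy case the model still prefers class 0 because the occluder model soaks up the two occluded positions, and the resulting occlusion map falls out of the same per-position comparison with only class-level supervision in mind.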

    Face recognition technologies for evidential evaluation of video traces

    Human recognition from video traces is an important task in forensic investigations and evidence evaluations. Compared with other biometric traits, the face is one of the most popular modalities for human recognition because its collection is non-intrusive and requires little cooperation from the subjects. Moreover, face images taken at a long distance can still provide reasonable resolution, while most biometric modalities, such as iris and fingerprint, do not have this merit. In this chapter, we discuss automatic face recognition technologies for evidential evaluations of video traces. We first introduce the general concepts in both forensic and automatic face recognition, then analyse the difficulties in face recognition from videos. We summarise and categorise the approaches for handling different uncontrollable factors in difficult recognition conditions. Finally, we discuss some challenges and trends in face recognition research in both forensics and biometrics. Given its merits demonstrated in many deployed systems and its great potential in other emerging applications, considerable research and development effort is expected to be devoted to face recognition in the near future.