
    Toward Resolution-Invariant Person Reidentification via Projective Dictionary Learning

    Person reidentification (ReID) has recently been widely investigated for its vital role in surveillance and forensics applications. This paper addresses the low-resolution (LR) person ReID problem, which is of great practical importance because pedestrians are often captured at low resolutions by surveillance cameras. Existing methods cope with this problem via complicated and time-consuming strategies, making them less favorable in practice, and their performance is far from satisfactory. Instead, we solve this problem by developing a discriminative semicoupled projective dictionary learning (DSPDL) model, which adopts the efficient projective dictionary learning strategy and jointly learns a pair of dictionaries and a mapping function to model the correspondence of the cross-view data. A parameterless cross-view graph regularizer incorporating both positive- and negative-pair information is designed to enhance the discriminability of the dictionaries. Another weakness of existing approaches is that they are only applicable when the cross-camera image sets have a globally uniform resolution gap. This undermines their practicality because, in practice, the resolution gaps between cross-camera images often vary from person to person. To overcome this hurdle, we extend the proposed DSPDL model to the variational resolution gap scenario, essentially by learning multiple pairs of dictionaries and multiple mapping functions. A novel technique is proposed to rerank and fuse the results obtained from all dictionary pairs. Experiments on five public data sets show that the proposed method achieves superior performance to the state of the art.
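
    The abstract does not spell out the objective function; purely as an illustrative sketch of the kind of semi-coupled projective dictionary-pair formulation it describes (all symbols, the weights \lambda, \beta, \gamma, and the exact form of the regularizer are assumptions rather than the paper's actual model), one could write

        \min_{D_h, P_h, D_l, P_l, W} \;
            \|X_h - D_h P_h X_h\|_F^2 + \|X_l - D_l P_l X_l\|_F^2
            + \lambda \, \|P_h X_h - W\, P_l X_l\|_F^2
            + \beta \, \Omega(P_h X_h,\, P_l X_l)
            + \gamma \, \|W\|_F^2
        \qquad \text{s.t. } \|d_i\|_2 \le 1 \;\; \forall i,

    where X_h and X_l are high- and low-resolution feature matrices, (D_h, P_h) and (D_l, P_l) are the dictionary/projection pairs learned for each view, W is the cross-view mapping between codes, and \Omega stands in for the parameterless cross-view graph regularizer built from positive and negative pairs.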

    Subspace Representations and Learning for Visual Recognition

    Pervasive and affordable sensor and storage technology enables the acquisition of an ever-rising amount of visual data. The ability to extract semantic information by interpreting, indexing and searching visual data is impacting domains such as surveillance, robotics, intelligence, human-computer interaction, navigation, healthcare, and several others. This further stimulates the investigation of automated extraction techniques that are more efficient, and robust against the many sources of noise affecting the already complex visual data that carries the semantic information of interest. We address the problem by designing novel visual data representations, based on learning data subspace decompositions that are invariant against noise while being informative for the task at hand. We use this guiding principle to tackle several visual recognition problems, including detection and recognition of human interactions from surveillance video, face recognition in unconstrained environments, and domain generalization for object recognition. By interpreting visual data with a simple additive noise model, we consider the subspaces spanned by the model portion (model subspace) and the noise portion (variation subspace). We observe that decomposing the variation subspace against the model subspace gives rise to the so-called parity subspace. Decomposing the model subspace against the variation subspace instead gives rise to what we name the invariant subspace. We extend the use of kernel techniques for the parity subspace. This enables modeling the highly non-linear temporal trajectories describing human behavior, and performing detection and recognition of human interactions. In addition, we introduce supervised low-rank matrix decomposition techniques for learning the invariant subspace for two other tasks. We learn invariant representations for face recognition from grossly corrupted images, and we learn object recognition classifiers that are invariant to the so-called domain bias. Extensive experiments using the benchmark datasets publicly available for each of the three tasks show that learning representations based on subspace decompositions invariant to the sources of noise leads to results comparable to or better than the state of the art.
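
    The thesis's parity/invariant constructions are more involved than the abstract can convey; as a much-simplified, hedged sketch of the basic idea of splitting an observation into a component lying in a learned model subspace and a residual orthogonal to it (the plain SVD construction and all variable names here are assumptions, not the actual method):

        import numpy as np

        def subspace_basis(data, rank):
            # Orthonormal basis (columns) for the dominant subspace of `data` (features x samples).
            u, _, _ = np.linalg.svd(data, full_matrices=False)
            return u[:, :rank]

        rng = np.random.default_rng(0)
        model_samples = rng.standard_normal((100, 40))       # stand-in for the "model" portion of the data
        model_basis = subspace_basis(model_samples, rank=10)  # model subspace

        x = rng.standard_normal(100)                          # a new observation
        x_model = model_basis @ (model_basis.T @ x)            # part of x explained by the model subspace
        x_variation = x - x_model                              # residual, orthogonal to the model subspace
        print(np.allclose(model_basis.T @ x_variation, 0))     # the two parts are orthogonal by construction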

    PAC-GAN: An Effective Pose Augmentation Scheme for Unsupervised Cross-View Person Re-identification

    Person re-identification (person Re-Id) aims to retrieve pedestrian images of the same person captured by disjoint, non-overlapping cameras. Many researchers have recently focused on this problem and proposed deep learning based methods to improve the recognition rate in a supervised or unsupervised manner. However, two limitations cannot be ignored: first, compared with other image retrieval benchmarks, existing person Re-Id datasets are far too small to provide sufficient pedestrian samples for training deep models; second, the samples in existing datasets do not cover a sufficient range of human motions and postures to provide prior knowledge for learning. In this paper, we introduce a novel unsupervised pose-augmentation cross-view person Re-Id scheme called PAC-GAN to overcome these limitations. We first present a formal definition of cross-view pose augmentation and then propose the PAC-GAN framework, a novel conditional generative adversarial network (CGAN) based approach to improving the performance of unsupervised cross-view person Re-Id. Specifically, the pose generation model in PAC-GAN, called CPG-Net, generates a large quantity of pose-rich samples from the original image and skeleton samples. The pose-augmented dataset is produced by combining the synthesized pose-rich samples with the original samples, and is fed into the cross-view person Re-Id model named Cross-GAN. In addition, we use a weight-sharing strategy in CPG-Net to improve the quality of the newly generated samples. To the best of our knowledge, we are the first to enhance unsupervised cross-view person Re-Id by pose augmentation, and the results of extensive experiments show that the proposed scheme is competitive with the state of the art in recognition rate.
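
    The abstract does not describe CPG-Net's architecture; as a hedged, minimal sketch of the general pattern of a pose-conditioned generator (conditioning by channel concatenation of an image and a skeleton map, with made-up layer sizes and class names that are not taken from the paper):

        import torch
        import torch.nn as nn

        class PoseConditionedGenerator(nn.Module):
            """Toy conditional generator: concatenates an RGB pedestrian image with a
            1-channel skeleton/pose map and synthesizes an image of the person in that pose."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3 + 1, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
                )

            def forward(self, image, pose_map):
                # Condition the generator by stacking the pose map as an extra input channel.
                return self.net(torch.cat([image, pose_map], dim=1))

        # Usage: one 128x64 pedestrian image and a target skeleton map -> a (1, 3, 128, 64) synthetic image.
        g = PoseConditionedGenerator()
        fake = g(torch.randn(1, 3, 128, 64), torch.randn(1, 1, 128, 64))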

    Person Re-identification Using Spatial and Layer-Wise Attention


    Vision-based Person Re-identification in a Queue


    Resource Allocation in Computer Vision

    We broadly examine resource allocation in several computer vision problems. We consider human-resource and computational-resource constraints. Human resources, such as human operators monitoring a camera network, provide reliable information but are typically limited by the huge amount of data to be processed. Computational resources refer to the resources, such as running time, that machines use to execute programs. It is important to develop algorithms that make effective use of these resources in computer vision applications. We approach human resource constraints with a frame retrieval problem in a camera network. This work addresses the problem of using active inference to direct human attention in searching a camera network for people that match a query image. We find that by representing the camera network with a graphical model, we can more accurately determine which video frames match the query and improve our ability to direct human attention. We experiment with different methods for choosing the frames on which to request expert information from humans, and find that a method that learns to predict which frames are misclassified gives the best performance. We also approach the problem of allocating computational resources in a video processing task. We consider a video processing application in which we combine the outputs of two algorithms, so that the budget-limited, computationally more expensive algorithm is run on the most useful video frames to maximize processing performance. We model the video frames as a chain graphical model and extend a dynamic programming algorithm to determine on which frames to run the more expensive algorithm. We perform experiments on moving object detection and face detection to demonstrate the effectiveness of our approaches. Finally, we consider an idea for saving computational resources while maintaining program performance. We work on the problem of learning model complexity in latent variable models. Specifically, we learn the latent variable state space complexity in latent support vector machines using group norm regularization. We apply our method to handwritten digit recognition and object detection with deformable part models. Our approach reduces the latent variable state space size and performs faster inference with similar or better performance.
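
    The dissertation's actual chain model and gain functions are not given in this abstract; as an illustrative sketch only (the per-frame gains, the "fresh output from the previous frame" effect, and all names below are assumptions), a budgeted dynamic program over a chain of frames for deciding where to spend the expensive algorithm might look like:

        def plan_expensive_frames(gain_alone, gain_after_prev, budget):
            """Toy chain DP: pick at most `budget` frames on which to run the expensive
            algorithm. Running it on frame i yields gain_alone[i], reduced to
            gain_after_prev[i] if it also ran on frame i-1 (its output is still fresh)."""
            n = len(gain_alone)
            NEG = float("-inf")
            # dp[b][prev]: best total gain with b expensive runs used; prev = 1 if the
            # expensive algorithm ran on the previous frame.
            dp = [[NEG, NEG] for _ in range(budget + 1)]
            dp[0][0] = 0.0
            for i in range(n):
                new = [[NEG, NEG] for _ in range(budget + 1)]
                for b in range(budget + 1):
                    for prev in (0, 1):
                        cur = dp[b][prev]
                        if cur == NEG:
                            continue
                        # Option 1: rely on the cheap algorithm for frame i.
                        new[b][0] = max(new[b][0], cur)
                        # Option 2: spend one unit of budget on frame i.
                        if b < budget:
                            gain = gain_after_prev[i] if prev else gain_alone[i]
                            new[b + 1][1] = max(new[b + 1][1], cur + gain)
                dp = new
            return max(v for row in dp for v in row)

        # Example: 6 frames, budget for 2 expensive runs; gains are made up.
        print(plan_expensive_frames([0.9, 0.2, 0.8, 0.1, 0.7, 0.3],
                                    [0.3, 0.1, 0.3, 0.05, 0.2, 0.1],
                                    budget=2))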

    Deep-learning feature descriptor for tree bark re-identification

    The ability to visually re-identify objects is a fundamental capability in vision systems. Oftentimes, it relies on collections of visual signatures based on descriptors such as SIFT or SURF. However, these traditional descriptors were designed for a certain domain of surface appearances and geometries (limited relief). Consequently, highly textured surfaces such as tree bark pose a challenge to them. In turn, this makes it more difficult to use trees as identifiable landmarks for navigational purposes (robotics) or to track felled lumber along a supply chain (logistics). We thus propose to use data-driven descriptors trained on bark images for tree surface re-identification. To this effect, we collected a large dataset containing 2,400 bark images with strong illumination changes, annotated by surface and with the ability to pixel-align them. We used this dataset to sample more than 2 million 64x64 pixel patches to train our novel local descriptors DeepBark and SqueezeBark. Our DeepBark method has shown a clear advantage over the hand-crafted descriptors SIFT and SURF. For instance, we demonstrated that DeepBark can reach a mAP of 87.2% when retrieving 11 relevant bark images, i.e. corresponding to the same physical surface, for a query against 7,900 images. Our work thus suggests that re-identifying tree surfaces in a challenging illumination context is possible. We also make public our dataset, which can be used to benchmark surface re-identification techniques.
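
    As a minimal sketch of how the reported retrieval score is typically computed (the descriptor dimensionality, the L2 ranking, and all variable names below are illustrative assumptions; mAP is the mean of this average precision over all query images):

        import numpy as np

        def average_precision(query_desc, db_descs, db_relevant):
            """Rank database descriptors by L2 distance to the query and compute
            average precision over the relevant items (same physical surface)."""
            dists = np.linalg.norm(db_descs - query_desc, axis=1)
            order = np.argsort(dists)
            hits, precisions = 0, []
            for rank, idx in enumerate(order, start=1):
                if db_relevant[idx]:
                    hits += 1
                    precisions.append(hits / rank)
            return float(np.mean(precisions)) if precisions else 0.0

        # Toy example mirroring the setup above: 7,900 database images, 11 relevant ones.
        rng = np.random.default_rng(0)
        db = rng.standard_normal((7900, 128))
        base = rng.standard_normal(128)
        db[:11] = base + 0.05 * rng.standard_normal((11, 128))   # relevant items cluster near `base`
        relevant = np.zeros(7900, dtype=bool)
        relevant[:11] = True
        query = base + 0.05 * rng.standard_normal(128)
        print(average_precision(query, db, relevant))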
