
    Review of Person Re-identification Techniques

    Person re-identification across different surveillance cameras with disjoint fields of view has become one of the most interesting and challenging subjects in the area of intelligent video surveillance. Although several methods have been developed and proposed, certain limitations and unresolved issues remain. In all of the existing re-identification approaches, feature vectors are extracted from segmented still images or video frames, and different similarity or dissimilarity measures are applied to these vectors. Some methods use simple fixed metrics, whereas others learn optimised metrics from training data. Some build models based on local colour or texture information, and others build models based on the gait of people. In general, the main objective of all these approaches is to achieve a higher accuracy rate at a lower computational cost. This study summarises several developments in the recent literature and discusses the various methods available for person re-identification; in particular, their advantages and disadvantages are described and compared.
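    The abstract contrasts fixed distance measures with learned ("optimised") metrics. As a rough illustration only, and not the method of any surveyed paper, the sketch below compares two placeholder appearance descriptors with a plain Euclidean distance and with a Mahalanobis-style distance whose matrix M would normally be learned from labelled image pairs; here M is just the identity.

        import numpy as np

        def euclidean_distance(x, y):
            # Fixed metric: needs no training data.
            return float(np.linalg.norm(x - y))

        def mahalanobis_distance(x, y, M):
            # Learned metric: M is a positive semi-definite matrix that a
            # metric-learning method would fit so that same-person pairs
            # score lower than different-person pairs.
            d = x - y
            return float(np.sqrt(d @ M @ d))

        # Placeholder appearance descriptors (e.g. colour/texture histograms).
        rng = np.random.default_rng(0)
        gallery_feature = rng.random(64)
        probe_feature = rng.random(64)

        # With the identity matrix the learned metric reduces to Euclidean distance;
        # a real system would substitute a matrix learned from training pairs.
        M = np.eye(64)
        print(euclidean_distance(probe_feature, gallery_feature))
        print(mahalanobis_distance(probe_feature, gallery_feature, M))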

    3D object retrieval and segmentation: various approaches including 2D Poisson histograms and 3D electrical charge distributions.

    Nowadays, 3D models play an important role in many applications, viz. games, cultural heritage, medical imaging, etc. Due to the fast growth in the number of available 3D models, understanding, searching and retrieving such models have become interesting fields within computer vision. In order to search and retrieve 3D models, we present two different approaches: the first is based on solving the Poisson equation over 2D silhouettes of the models. This method uses 60 different silhouettes, which are automatically extracted from different view angles. Solving the Poisson equation for each silhouette assigns a number to each pixel as its signature. Accumulating these signatures generates a final histogram-based descriptor for each silhouette, which we call a SilPH (Silhouette Poisson Histogram). For the second approach, we propose two new robust shape descriptors based on the distribution of charge density on the surface of a 3D model. The Finite Element Method is used to calculate the charge density on each triangular face of each model as a local feature. We then use the Bag-of-Features and concentric-sphere frameworks to perform global matching with these local features. In addition to examining the retrieval accuracy of the descriptors in comparison with state-of-the-art approaches, we investigate their retrieval speed as well as their robustness to noise and deformation on different datasets. Furthermore, to understand new complex models, we also use the distribution of electrical charge to propose a system that decomposes models into meaningful parts. Our robust, efficient and fully automatic segmentation approach is able to identify the segments attached to the main part of a model as well as to locate the boundaries of the segments. The segmentation ability of the proposed system is examined on standard datasets, and its timing and accuracy are compared with existing state-of-the-art approaches.
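    As a rough sketch of the per-silhouette Poisson descriptor described above, and assuming a binary silhouette image that does not touch the image border, the code below solves the Poisson equation (Laplacian of u equal to -1 inside the silhouette, u = 0 outside) with a simple Jacobi iteration and histograms the interior values; the iteration count, grid spacing and number of bins are arbitrary illustrative choices, not the paper's.

        import numpy as np

        def poisson_histogram(silhouette, n_bins=32, n_iter=500):
            # Histogram of the solution of  Laplacian(u) = -1  inside a binary
            # silhouette, with u = 0 outside (Dirichlet boundary), via Jacobi iteration.
            mask = silhouette.astype(bool)
            u = np.zeros(silhouette.shape, dtype=float)
            for _ in range(n_iter):
                # Average of the four neighbours plus the source term (grid spacing h = 1).
                avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                              np.roll(u, 1, 1) + np.roll(u, -1, 1))
                u = np.where(mask, avg + 0.25, 0.0)  # keep u = 0 outside the silhouette
            values = u[mask]
            hist, _ = np.histogram(values, bins=n_bins, range=(0.0, values.max() + 1e-9))
            return hist / hist.sum()  # normalised SilPH-style descriptor

        # Toy silhouette: a filled disc on a 64 x 64 grid.
        yy, xx = np.mgrid[:64, :64]
        disc = ((yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2).astype(np.uint8)
        print(poisson_histogram(disc)[:8])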

    A very simple framework for 3D human pose estimation using a single 2D image: Comparison of geometric moment descriptors.

    In this paper, we propose a framework to automatically extract the 3D pose of an individual from a single silhouette image obtained with a classical low-cost camera without any depth information. By pose, we mean the configuration of the human bones needed to reconstruct a 3D skeleton representing the 3D posture of the detected person. Our approach builds on previously learned correspondences between silhouettes and skeletons extracted from simulated 3D human models publicly available on the internet. The main advantages of such an approach are that silhouettes can be extracted very easily from video, and that 3D human models can be animated with motion-capture data to quickly build training data for any movement. To match detected silhouettes with simulated silhouettes, we compare geometric invariant moments. Our results show that the proposed method is very promising, with a very low processing time.
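    As a rough sketch of the silhouette-matching step, assuming silhouettes are binary images and that matching is a nearest-neighbour search over moment invariants (the Hu invariants and the Euclidean distance used here are one common choice of geometric invariant moments, not necessarily the paper's exact descriptors), one could proceed as follows.

        import numpy as np

        def hu_moments(binary_image):
            # Seven Hu invariant moments of a binary silhouette
            # (invariant to translation, scale and rotation).
            img = binary_image.astype(float)
            y, x = np.mgrid[:img.shape[0], :img.shape[1]]
            m00 = img.sum()
            xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

            def eta(p, q):  # normalised central moment
                mu = (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
                return mu / m00 ** (1 + (p + q) / 2.0)

            n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
            n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
            return np.array([
                n20 + n02,
                (n20 - n02) ** 2 + 4 * n11 ** 2,
                (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
                (n30 + n12) ** 2 + (n21 + n03) ** 2,
                (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
                + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
                (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
                + 4 * n11 * (n30 + n12) * (n21 + n03),
                (3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
                - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
            ])

        def nearest_skeleton(query_silhouette, simulated_silhouettes, skeletons):
            # Return the skeleton paired with the simulated silhouette whose
            # moment vector is closest to that of the query silhouette.
            q = hu_moments(query_silhouette)
            dists = [np.linalg.norm(q - hu_moments(s)) for s in simulated_silhouettes]
            return skeletons[int(np.argmin(dists))]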

    Searching for dominant high-level features for music information retrieval

    Music Information Retrieval systems are often based on the analysis of a large number of low-level audio features. When dealing with problems of musical genre description and visualization, however, it would be desirable to work with a very limited number of highly informative and discriminant macro-descriptors. In this paper we focus on a specific class of training-based descriptors, which are obtained as the log-likelihood of a Gaussian Mixture Model trained on short musical excerpts that selectively exhibit a certain semantic homogeneity. As these descriptors depend critically on the training sets, we approach the problem of how to automatically generate suitable training sets and optimize the associated macro-features in terms of discriminant power and informative impact. We then show the application of a set of three identified macro-features to genre visualization, tracking and classification.
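    A minimal sketch of this kind of training-based macro-descriptor, assuming per-frame audio features (e.g. MFCCs) are already available as NumPy arrays; the feature dimensionality, the number of mixture components and the random placeholder data are illustrative choices, not the paper's configuration.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def train_macro_descriptor(training_frames, n_components=8):
            # Fit a GMM on feature frames drawn from excerpts that share one semantic label.
            gmm = GaussianMixture(n_components=n_components,
                                  covariance_type="diag", random_state=0)
            gmm.fit(training_frames)
            return gmm

        def macro_feature(gmm, track_frames):
            # Macro-descriptor of a track: average per-frame log-likelihood under the GMM.
            return gmm.score(track_frames)

        # Placeholder data: 13-dimensional frame features (e.g. MFCCs).
        rng = np.random.default_rng(0)
        homogeneous_excerpts = rng.normal(size=(2000, 13))  # frames from one semantic class
        track = rng.normal(size=(500, 13))                  # frames of a track to describe

        gmm = train_macro_descriptor(homogeneous_excerpts)
        print(macro_feature(gmm, track))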

    Robust 3D Action Recognition through Sampling Local Appearances and Global Distributions

    3D action recognition has broad applications in human-computer interaction and intelligent surveillance. However, recognizing similar actions remains challenging, since previous literature fails to capture motion and shape cues effectively from noisy depth data. In this paper, we propose a novel two-layer Bag-of-Visual-Words (BoVW) model, which suppresses noise disturbances and jointly encodes both motion and shape cues. First, background clutter is removed by a background modeling method designed for depth data. Then, motion and shape cues are jointly used to generate robust and distinctive spatial-temporal interest points (STIPs): motion-based STIPs and shape-based STIPs. In the first layer of our model, a multi-scale 3D local steering kernel (M3DLSK) descriptor is proposed to describe local appearances of cuboids around motion-based STIPs. In the second layer, a spatial-temporal vector (STV) descriptor is proposed to describe the spatial-temporal distributions of shape-based STIPs. Using the BoVW model, motion and shape cues are combined to form a fused action representation. Our model performs favorably compared with common STIP detection and description methods. Thorough experiments verify that our model is effective in distinguishing similar actions and robust to background clutter, partial occlusions and pepper noise.
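    A minimal sketch of the generic BoVW encoding and channel fusion this abstract builds on, assuming local descriptors (standing in for the M3DLSK and STV features) have already been extracted per depth sequence; the codebook size, the k-means clustering and the fusion by concatenation are generic choices, not the paper's exact two-layer configuration.

        import numpy as np
        from sklearn.cluster import KMeans

        def build_codebook(training_descriptors, k=64):
            # Cluster local descriptors from training sequences into k visual words.
            return KMeans(n_clusters=k, n_init=10, random_state=0).fit(training_descriptors)

        def bovw_histogram(codebook, descriptors):
            # Encode one sequence as a normalised histogram of visual-word assignments.
            words = codebook.predict(descriptors)
            hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
            return hist / max(hist.sum(), 1.0)

        # Placeholder local descriptors for one depth sequence.
        rng = np.random.default_rng(0)
        motion_descriptors = rng.normal(size=(300, 32))  # cues around motion-based STIPs
        shape_descriptors = rng.normal(size=(200, 16))   # cues around shape-based STIPs

        motion_codebook = build_codebook(rng.normal(size=(5000, 32)))
        shape_codebook = build_codebook(rng.normal(size=(5000, 16)))

        # Fused action representation: concatenate the two channel histograms.
        representation = np.concatenate([
            bovw_histogram(motion_codebook, motion_descriptors),
            bovw_histogram(shape_codebook, shape_descriptors),
        ])
        print(representation.shape)  # (128,)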