49 research outputs found

    Multi-sensory face biometric fusion (for personal identification)

    The objective of this work is to recognize faces using sets of images in the visual and thermal spectra. This is challenging because the former is greatly affected by illumination changes, while the latter frequently contains occlusions due to eye-wear and is inherently less discriminative. Our method is based on a fusion of the two modalities. Specifically, we examine (i) the effects of preprocessing the data in each domain and (ii) the fusion of holistic and local facial appearance, and (iii) we propose an algorithm for combining the similarity scores in the visual and thermal spectra in the presence of prescription glasses and significant pose variations, using a small number of training images (5-7). Our system achieved a high correct identification rate of 97% on a freely available test set of 29 individuals with extreme illumination changes.
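    The score-combination step lends itself to a small illustration. The sketch below fuses min-max-normalised per-identity similarity scores from the two spectra with a weighted sum and picks the best match; the function names, weighting, and normalisation are illustrative assumptions, not the paper's actual combination rule.

```python
import numpy as np

def fuse_scores(visual_scores, thermal_scores, w_visual=0.5):
    """Combine per-identity similarity scores from the visual and thermal
    modalities into a single fused score per enrolled identity."""
    def normalise(s):
        # Min-max normalisation so both modalities contribute on a common scale
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

    fused = w_visual * normalise(visual_scores) \
            + (1.0 - w_visual) * normalise(thermal_scores)
    return int(np.argmax(fused)), fused  # index of the best-matching identity

# Toy usage with three enrolled identities
best, fused = fuse_scores([0.2, 0.9, 0.4], [0.3, 0.6, 0.8], w_visual=0.6)
print(best, fused)
```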

    Segmentation and Counting of People Through Collaborative Augmented Environment

    People counting systems have wide potential applications, including video surveillance and the management of public resources. With the rapid development of the economy and society, crowd flow in various public places and facilities is becoming more frequent, so effectively managing and controlling crowds in public places has become an important issue. People counting systems arose from this demand and can be used in commercial domains such as market surveys and traffic management, as well as in architectural design. For example, a crowd gathering at a specific place may indicate an unusual situation, and counting people in a shopping mall provides valuable information for optimizing trading hours and evaluating the attractiveness of particular shopping areas.

    You'll never walk alone: Modeling social behavior for multi-target tracking


    Robust People Tracking with Global Trajectory Optimization

    Given three or four synchronized videos taken at eye level and from different angles, we show that we can effectively use dynamic programming to accurately follow up to six individuals across thousands of frames in spite of significant occlusions. In addition, we derive metrically accurate trajectories for each of them. Our main contribution is to show that multi-person tracking can be reliably achieved by processing individual trajectories separately over long sequences, provided that a reasonable heuristic is used to rank these individuals and avoid confusing them with one another. In this way, we achieve robustness by finding optimal trajectories over many frames while avoiding the combinatorial explosion that would result from simultaneously dealing with all the individuals.
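    The dynamic-programming step can be sketched in a much-reduced form: a Viterbi-style search for a single individual over a 1-D row of ground-plane cells, assuming a per-frame detection score per cell and a bound on how far a person can move between frames. The grid, the scoring, and the names are illustrative; the paper works on a 2-D ground plane and relies on a heuristic ordering of individuals to keep them from being confused.

```python
import numpy as np

def best_trajectory(scores, max_step=1):
    """Viterbi-style dynamic programme: choose one grid cell per frame so the
    summed detection score is maximal, with moves limited to `max_step` cells
    between consecutive frames."""
    T, N = scores.shape
    dp = scores[0].copy()                 # best score of a track ending in each cell
    back = np.zeros((T, N), dtype=int)    # back-pointers for the trace-back

    for t in range(1, T):
        new_dp = np.full(N, -np.inf)
        for j in range(N):
            lo, hi = max(0, j - max_step), min(N, j + max_step + 1)
            k = lo + int(np.argmax(dp[lo:hi]))     # best reachable predecessor
            new_dp[j] = dp[k] + scores[t, j]
            back[t, j] = k
        dp = new_dp

    # Trace the optimal path back from the best final cell
    path = [int(np.argmax(dp))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy usage: four frames, five cells, a single target drifting to the right
scores = np.array([[1, 0, 0, 0, 0],
                   [0, 1, 0, 0, 0],
                   [0, 0, 1, 0, 0],
                   [0, 0, 0, 1, 0]], dtype=float)
print(best_trajectory(scores))            # -> [0, 1, 2, 3]
```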

    Fixed Point Probability Field for Occlusion Handling

    In this paper, we show that in a multi-camera context, we can effectively handle occlusions at each time frame independently, even when the only available data comes from the binary output of a fairly primitive motion detector. We start from occupancy probability estimates in a top view and rely on a generative model to yield probability images to be compared with the actual input images. We then refine the estimates so that the probability images match the binary input images as well as possible. We demonstrate the quality of our results on several sequences involving complex occlusions.
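    The refinement loop can be illustrated with a much-simplified, single-camera fixed-point sketch, assuming each top-view grid location projects to a known binary silhouette. The update rule, the damping, and all names below are illustrative assumptions, not the paper's actual derivation across multiple cameras.

```python
import numpy as np

def refine_occupancy(binary_img, silhouettes, n_iter=20, prior=0.1):
    """Iteratively refine per-location occupancy probabilities so that the
    synthesised foreground-probability image matches the binary input."""
    binary_img = binary_img.astype(float)
    q = np.full(len(silhouettes), prior)      # occupancy probability per grid location

    for _ in range(n_iter):
        # Generative model: a pixel is background only if no occupied silhouette covers it
        not_covered = np.ones_like(binary_img)
        for qk, sk in zip(q, silhouettes):
            not_covered *= 1.0 - qk * sk
        prob_img = 1.0 - not_covered

        new_q = np.empty_like(q)
        for k, sk in enumerate(silhouettes):
            area = sk.sum()
            if area == 0:
                new_q[k] = 0.0
                continue
            # Foreground probability predicted by all *other* locations on this silhouette
            others = 1.0 - (1.0 - prob_img) / (1.0 - q[k] * sk + 1e-9)
            # Positive where the binary foreground is not yet explained by the others
            gain = ((binary_img - others) * sk).sum() / area
            new_q[k] = np.clip(0.5 * q[k] + 0.5 * gain, 0.0, 0.95)  # damped update
        q = new_q
    return q

# Toy usage: two candidate locations; only the first one's silhouette is foreground
img = np.zeros((4, 6)); img[:, :3] = 1
s1 = np.zeros((4, 6)); s1[:, :3] = 1
s2 = np.zeros((4, 6)); s2[:, 3:] = 1
print(refine_occupancy(img, [s1, s2]))        # high for location 0, near zero for 1
```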

    Detecting and tracking multiple interacting objects without class-specific models

    We propose a framework for detecting and tracking multiple interacting objects from a single, static, uncalibrated camera. The number of objects is variable and unknown, and object-class-specific models are not available. We use background subtraction results as measurements for object detection and tracking. Given these constraints, the main challenge is to associate pixel measurements with (possibly interacting) object targets. We first track clusters of pixels, and note when they merge or split. We then build an inference graph, representing relations between the tracked clusters. Using this graph and a generic object model based on spatial connectedness and coherent motion, we label the tracked clusters as whole objects, fragments of objects or groups of interacting objects. The outputs of our algorithm are entire tracks of objects, which may include corresponding tracks from groups of objects during interactions. Experimental results on multiple video sequences are shown.
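    The merge/split bookkeeping behind the inference graph can be sketched as a small data structure. The class names and the naive labelling rule below are illustrative assumptions, not the paper's generic object model based on connectedness and coherent motion.

```python
from dataclasses import dataclass, field

@dataclass
class ClusterTrack:
    track_id: int
    parents: list = field(default_factory=list)   # tracks that merged into this one
    children: list = field(default_factory=list)  # tracks this one split into
    label: str = "object"                          # "object", "group", or "fragment"

class InferenceGraph:
    def __init__(self):
        self.tracks = {}

    def add_track(self, track_id):
        self.tracks.setdefault(track_id, ClusterTrack(track_id))

    def record_merge(self, merged_ids, new_id):
        """Several tracked clusters merge into one: the new cluster is a group."""
        self.add_track(new_id)
        for mid in merged_ids:
            self.add_track(mid)
            self.tracks[new_id].parents.append(mid)
        self.tracks[new_id].label = "group"

    def record_split(self, old_id, new_ids):
        """A tracked cluster splits: children of a single object are fragments,
        while children of a group become candidate whole objects again."""
        self.add_track(old_id)
        for nid in new_ids:
            self.add_track(nid)
            self.tracks[old_id].children.append(nid)
            self.tracks[nid].label = (
                "object" if self.tracks[old_id].label == "group" else "fragment"
            )

# Toy usage: tracks 1 and 2 merge into 3 (a group), which later splits into 4 and 5
g = InferenceGraph()
g.add_track(1); g.add_track(2)
g.record_merge([1, 2], new_id=3)
g.record_split(3, [4, 5])
print({t.track_id: t.label for t in g.tracks.values()})
```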