
    Development of Hierarchical Skin-Adaboost-Neural Network (H-SKANN) for Multiface Detection in Video Surveillance System

    Automatic face detection is typically the first step in face-based biometric systems such as face recognition, facial expression recognition, and head pose tracking. However, face detection technology faces various challenges in indoor and outdoor environments, such as uncontrolled lighting and illumination, feature occlusion, and pose variation. This thesis proposes a technique to detect multiple faces in video surveillance applications, with a strategic architecture based on hierarchical and structural design. The technique consists of two major blocks: Face Skin Localization (FSL) and Hierarchical Skin Area (HSA). FSL is formulated to extract valuable skin data at the first stage of detection, and includes Face Skin Merging (FSM) to correctly merge separated skin areas. HSA extends the search for face candidates within the selected segmented areas using a hierarchical architecture, in which each level of the hierarchy employs an integration of Adaboost and a neural network. Experiments were conducted on eleven databases covering a variety of challenges for face detection systems. Results reveal that the proposed H-SKANN achieves 98.03% and 97.02% averaged accuracy on the benchmark and surveillance-area databases, respectively.
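    The FSL stage described above segments candidate skin regions before the Adaboost/neural-network hierarchy searches them for faces. A minimal sketch of such skin-colour segmentation, using a fixed chrominance box in YCbCr space (the thresholds here are a common rule of thumb, not the values used in the thesis):

```python
import numpy as np

def skin_mask(rgb):
    """Return a boolean mask of likely skin pixels.

    Uses a fixed YCbCr chrominance box -- an illustrative heuristic,
    not the H-SKANN thesis's actual skin model.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # ITU-R BT.601 chrominance components
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)

# A skin-like patch passes the test; a pure blue patch does not.
skin_patch = np.full((2, 2, 3), (200, 140, 120), dtype=np.uint8)
blue_patch = np.full((2, 2, 3), (0, 0, 255), dtype=np.uint8)
```

    Connected regions of the resulting mask would then be merged (the FSM step) and passed to the hierarchical classifier.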

    On using gait to enhance frontal face extraction

    Visual surveillance finds increasing deployment for monitoring urban environments. Operators need to be able to determine identity from surveillance images and often use face recognition for this purpose. Surveillance environments require handling pose variation of the human head, low frame rates, and low-resolution input images. We describe the first use of gait to enable face acquisition and recognition, by analysis of 3-D head motion and gait trajectory combined with super-resolution analysis. We use region- and distance-based refinement of head pose estimation, and develop a direct mapping that relates the 2-D image to a 3-D model. In gait trajectory analysis, we model the looming effect so as to obtain the correct face region. Based on head position and the gait trajectory, we can reconstruct high-quality frontal face images which are demonstrated to be suitable for face recognition. The contributions of this research include the construction of a 3-D model for pose estimation from planar imagery and the first use of gait information to enhance the face extraction process, allowing for deployment in surveillance scenarios.
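    The direct mapping between a 3-D head model and the 2-D image can be illustrated with a basic pinhole projection. This is a generic sketch, not the paper's calibrated setup; the intrinsic parameters below are hypothetical:

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project Nx3 world points into the image via a pinhole camera.

    K: 3x3 intrinsics, R: 3x3 rotation, t: 3-vector translation.
    """
    cam = points_3d @ R.T + t           # world -> camera coordinates
    uvw = cam @ K.T                     # camera -> homogeneous pixel coords
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide

K = np.array([[800.0, 0.0, 320.0],      # hypothetical focal length / centre
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
head_centre = np.array([[0.0, 0.0, 4.0]])  # head 4 m in front of the camera
print(project(head_centre, K, R, t))       # lands at the principal point
```

    Inverting such a mapping under a head-shape model is what lets pose be estimated from planar imagery.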

    A distributed camera system for multi-resolution surveillance

    We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor. Asynchronous interprocess communication and archiving of data are achieved in a simple and effective way via a central repository, implemented using an SQL database. Visual tracking data from static views are stored dynamically into tables in the database via client calls to the SQL server. A supervisor process running on the SQL server determines whether active zoom cameras should be dispatched to observe a particular target, and this message is effected by writing demands into another database table. We show results from a real implementation of the system comprising one static camera overviewing the environment under consideration and a PTZ camera operating under closed-loop velocity control, which uses a fast and robust level-set-based region tracker. Experiments demonstrate the effectiveness of our approach and its applicability to multi-camera systems for intelligent surveillance.
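    The shared-repository pattern above can be sketched with an in-memory SQLite database standing in for the paper's SQL server; the table and column names here are illustrative assumptions, not the system's actual schema:

```python
import sqlite3

# Central repository shared by tracker, supervisor, and PTZ processes.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tracks (cam_id INT, target_id INT, x REAL, y REAL)")
db.execute("CREATE TABLE demands (ptz_id INT, target_id INT)")

# Static-camera tracker process: write an observation of target 7.
db.execute("INSERT INTO tracks VALUES (?, ?, ?, ?)", (0, 7, 120.5, 88.0))

# Supervisor process: dispatch the PTZ camera to each observed target
# by writing a demand row, rather than messaging the camera directly.
for cam_id, target_id, x, y in db.execute("SELECT * FROM tracks").fetchall():
    db.execute("INSERT INTO demands VALUES (?, ?)", (1, target_id))

# PTZ-camera process: poll the demands table for its next assignment.
demand = db.execute("SELECT target_id FROM demands WHERE ptz_id = 1").fetchone()
print(demand)  # (7,)
```

    Routing all coordination through database tables is what makes the interprocess communication asynchronous: no process blocks waiting on another.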

    A framework for evaluating stereo-based pedestrian detection techniques

    Automated pedestrian detection, counting, and tracking have received significant attention in the computer vision community of late. As such, a variety of techniques have been investigated using both traditional 2-D computer vision techniques and, more recently, 3-D stereo information. However, to date, a quantitative assessment of the performance of stereo-based pedestrian detection has been problematic, mainly due to the lack of standard stereo-based test data and an agreed methodology for carrying out the evaluation. This has forced researchers into making subjective comparisons between competing approaches. In this paper, we propose a framework for the quantitative evaluation of a short-baseline stereo-based pedestrian detection system. We provide freely available synthetic and real-world test data and recommend a set of evaluation metrics. This allows researchers to benchmark systems not only against other stereo-based approaches but also against more traditional 2-D approaches. In order to illustrate its usefulness, we demonstrate the application of this framework to evaluate our own recently proposed technique for pedestrian detection and tracking.
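    Detection benchmarks of this kind typically match detections to ground-truth boxes by overlap and report precision and recall. The sketch below shows that standard procedure; it is not necessarily the exact metric set the framework recommends:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def precision_recall(detections, ground_truth, thresh=0.5):
    """Greedy one-to-one matching of detections to ground-truth boxes."""
    unmatched = list(ground_truth)
    tp = 0
    for d in detections:
        hit = next((g for g in unmatched if iou(d, g) >= thresh), None)
        if hit is not None:
            unmatched.remove(hit)  # each ground-truth box matches once
            tp += 1
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall

gt = [(0, 0, 10, 10), (20, 20, 30, 30)]   # two annotated pedestrians
det = [(1, 1, 11, 11), (50, 50, 60, 60)]  # one true positive, one false alarm
print(precision_recall(det, gt))  # (0.5, 0.5)
```

    Because the same matching logic applies to any detector's output boxes, 2-D and stereo-based systems can be scored on an equal footing.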