
    Face recognition-based real-time system for surveillance

    The ability to automatically recognize human faces from dynamic facial images is important in security, surveillance and the health/independent-living domains. Specific applications include access control to secure environments, identification of individuals at a particular place, and intruder detection. This research proposes a real-time camera-based surveillance system. The process is broken into two steps: (1) face detection and (2) face recognition to identify particular persons. In the first step, the system tracks and selects the faces of the detected persons. An efficient recognition algorithm then matches detected faces against a known database. The proposed approach exploits the Viola-Jones method for face detection, the Kanade-Lucas-Tomasi algorithm as a feature tracker and Principal Component Analysis (PCA) for face recognition. The system can be deployed in restricted areas, such as the office or house of a suspicious person or the entrance of a sensitive installation, and performs reliably under reasonable lighting conditions and image depths.
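The PCA recognition stage of a pipeline like this can be sketched in a few lines. This is a minimal eigenface sketch assuming already-detected, aligned, flattened face crops; the Viola-Jones detection and KLT tracking stages are omitted, and all function names and parameters are illustrative, not the paper's implementation.

```python
import numpy as np

def train_pca(faces, n_components):
    # faces: (n_samples, n_pixels) rows of flattened, aligned face crops
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the principal components (eigenfaces)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, components):
    # coordinates of one face in eigenface space
    return components @ (face - mean)

def recognize(face, gallery_proj, labels, mean, components):
    # nearest-neighbour match against the projected known database
    q = project(face, mean, components)
    dists = np.linalg.norm(gallery_proj - q, axis=1)
    return labels[int(np.argmin(dists))]
```

In practice the gallery projections are precomputed once, so each probe costs only one projection and one distance scan.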

    Real time face matching with multiple cameras using principal component analysis

    Face recognition is a rapidly advancing research topic due to the large number of applications that can benefit from it. Face recognition consists of determining whether a known face is present in an image and is typically composed of four distinct steps: face detection, face alignment, feature extraction, and face classification [1]. The leading application for face recognition is video surveillance. The majority of current research in face recognition has focused on determining if a face is present in an image, and if so, which subject in a known database is the closest match. This Thesis deals with face matching, a subset of face recognition focused on face identification, an area where little research has been done. The objective of face matching is to determine, in real-time, the degree of confidence to which a live subject matches a facial image. Applications for face matching include video surveillance, determination of identification credentials, computer-human interfaces, and communications security. The method proposed here employs principal component analysis [16] to create a face matching method that is both computationally efficient and accurate. This method is integrated into a real-time system based upon a two-camera setup. It is able to scan the room, detect faces, and zoom in for a high quality capture of the facial features. The captured image is used in a face matching process to determine if the person found is the desired target. The performance of the system is analyzed based upon the matching accuracy for 10 unique subjects.
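The "degree of confidence" decision that distinguishes matching from identification can be illustrated as follows. The exponential distance-to-confidence mapping, the `scale` parameter, and the threshold are illustrative choices rather than the thesis's actual formulation; `probe_proj` and `target_proj` are assumed to be PCA-projected feature vectors.

```python
import numpy as np

def match_confidence(probe_proj, target_proj, scale=1.0):
    # map Euclidean distance in PCA space to a (0, 1] confidence score;
    # the exponential mapping and `scale` are illustrative choices
    d = np.linalg.norm(probe_proj - target_proj)
    return float(np.exp(-d / scale))

def is_match(probe_proj, target_proj, threshold=0.5, scale=1.0):
    # accept the live subject as the target when confidence clears a threshold
    return match_confidence(probe_proj, target_proj, scale) >= threshold
```

Unlike closest-match identification, this verification-style decision can also reject everyone, which is what a face matching system needs.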

    Camera Independent Face Recognition Algorithm In Visual Surveillance

    Face recognition in visual surveillance can reduce crime rates in public areas because a suspect’s identity can be automatically identified in real-time using face images captured by the surveillance camera as circumstantial evidence. Several available image preprocessing techniques, classifiers, and approaches have been proposed and tested to mitigate the effects of illumination variation, pose variation, and intensity quality differences due to hardware differences in such systems. The face recognition system should integrate seamlessly into the existing system. From the experiments, Histogram Equalization (HE) preprocessed face images scaled to 30×30 proved well suited for pre-processing of surveillance images. The combination of Linear Discriminant Analysis (LDA) and HE preprocessed images achieved an average recognition rate of 81.48% for the single camera training set. The flandmark facial landmark detector is implemented to determine the location of the eyes, and new face images are obtained by cropping the HE pre-processed images. The combination of flandmark images at 20×30 with a multi-class Support Vector Machine (SVM) is used to form a multimodal classification system with the LDA and HE combination. Score level fusion is applied to the normalized output scores of both classifiers, with a proper weight w assigned to each score. Finally, the watch list principle lists several possible subjects according to their respective score ranking rather than deciding on a particular subject based on the maximum score, thus increasing the performance of the proposed system. The experimental results demonstrate the performance of the proposed algorithm on the Surveillance Camera Face Database (SCface) with a 97.45% average recognition rate.

    Automatic age estimation system for face images

    Humans are the most important objects to track in surveillance systems. However, tracking alone does not provide the information required for personalized recognition. In this paper, we present a novel and reliable framework for automatic age estimation based on computer vision. It exploits global face features based on the combination of Gabor wavelets and orthogonal locality preserving projections. In addition, the proposed system can extract face aging features automatically in real-time, giving it more potential in applications than other semi-automatic systems. The results obtained from this novel approach could provide clearer insight for operators in the field of age estimation to develop real-world applications. © 2012 Lin et al.
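The Gabor-wavelet feature stage can be sketched as a small filter bank whose pooled response magnitudes form a global face descriptor. This is a minimal dependency-free sketch: the kernel parameters, the pooling choice, and the helper names are assumptions, and the orthogonal locality preserving projections step that follows in the paper is omitted.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    # real part of a 2-D Gabor filter; parameter values are illustrative
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def _conv2_valid(img, kern):
    # direct sliding-window 2-D convolution (valid mode) to avoid dependencies
    kh, kw = kern.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def gabor_features(image, wavelengths=(4, 8), n_orientations=4):
    # filter the face image at several scales/orientations and pool magnitudes
    feats = []
    for lam in wavelengths:
        for k in range(n_orientations):
            kern = gabor_kernel(9, lam, k * np.pi / n_orientations, sigma=lam / 2)
            feats.append(np.abs(_conv2_valid(image, kern)).mean())
    return np.array(feats)
```

A real system would use FFT-based convolution and feed the pooled responses into the dimensionality-reduction stage before regression.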

    Integration of Multispectral Face Recognition and Multi-PTZ Camera Automated Surveillance for Security Applications

    Due to increasing security concerns, a complete security system should consist of two major components: a computer-based face-recognition system and a real-time automated video surveillance system. A computer-based face-recognition system can be used in gate access control for identity authentication. In recent studies, multispectral imaging and fusion of multispectral narrow-band images in the visible spectrum have been employed and proven to enhance recognition performance over conventional broad-band images, especially when the illumination changes. Thus, we present an automated method that specifies the optimal spectral ranges under the given illumination. Experimental results verify the consistent performance of our algorithm via the observation that an identical set of spectral band images is selected under all tested conditions. Our discovery can be practically used for a new customized sensor design associated with given illuminations for improved face recognition performance over conventional broad-band images. In addition, once a person is authorized to enter a restricted area, we still need to continuously monitor his/her activities for the sake of security. Because pan-tilt-zoom (PTZ) cameras are capable of covering a panoramic area and maintaining high resolution imagery for real-time behavior understanding, research on automated surveillance systems with multiple PTZ cameras has become increasingly important. Most existing algorithms require prior knowledge of the intrinsic parameters of the PTZ cameras to infer the relative positioning and orientation among multiple PTZ cameras. To overcome this limitation, we propose a novel mapping algorithm that derives the relative positioning and orientation between two PTZ cameras based on a unified polynomial model. This reduces the dependence on knowledge of the intrinsic parameters of the PTZ cameras and their relative positions.
Experimental results demonstrate that our proposed algorithm presents substantially reduced computational complexity and improved flexibility at the cost of slightly decreased pixel accuracy as compared to Chen and Wang's method [18]. © Versita sp. z o.o.
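The idea of a polynomial mapping between two PTZ cameras can be sketched as a least-squares fit over corresponding pan/tilt observations. The abstract does not give the paper's unified polynomial model, so the generic bivariate basis, the degree, and the function names below are all assumptions for illustration only.

```python
import numpy as np

def _poly_basis(angles, degree):
    # bivariate monomial basis in (pan, tilt) up to the given total degree
    p, t = angles[:, 0], angles[:, 1]
    cols = [p**i * t**j for i in range(degree + 1) for j in range(degree + 1 - i)]
    return np.stack(cols, axis=1)

def fit_ptz_mapping(src_angles, dst_angles, degree=2):
    # src/dst: (n, 2) pan/tilt pairs recorded while both cameras fixate the
    # same targets; least squares recovers the mapping coefficients without
    # needing either camera's intrinsic parameters
    A = _poly_basis(src_angles, degree)
    coeffs, *_ = np.linalg.lstsq(A, dst_angles, rcond=None)
    return coeffs

def apply_ptz_mapping(coeffs, src_angles, degree=2):
    # predict where the second camera must point to fixate the same target
    return _poly_basis(src_angles, degree) @ coeffs
```

Once fitted from a handful of shared fixations, the mapping lets one camera hand a target's direction to the other directly in pan/tilt space.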

    An Active Monitoring System for Real-Time Face-Tracking based on Mobile Sensors

    Surveillance systems frequently use fixed or semi-mobile cameras. However, in many cases, intelligent mobile sensors are preferable to fixed sensors, because the system configuration can be modified according to particular environmental conditions or adapted to compensate for one or more malfunctioning sensors. This paper proposes a real-time surveillance system based on a mobile sensor. Using an Android smartphone and a face-tracking algorithm, the system can move autonomously to track the human face with the longest presence in the video field. In addition, the system can be connected to a computer performing face recognition through a wireless connection provided by the smartphone. In this way, the mobile sensor can track a given human face. The paper provides experimental results to validate system performance. IEEE Catalogue Number CFP12825-PRT. S. Saraceni, A. Claudi, A.F. Dragoni.
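The "face with the longest presence" selection rule can be sketched independently of the detector and the smartphone control loop. This assumes some upstream tracker assigns stable IDs to detected faces per frame; the class and method names are illustrative.

```python
class PresenceTracker:
    """Select the detected face that has been visible longest (a sketch of the
    selection rule only; face detection and camera motion are out of scope)."""

    def __init__(self):
        self.first_seen = {}  # track id -> frame index when it first appeared

    def update(self, frame_idx, visible_ids):
        # drop tracks that disappeared, register newly seen ones
        self.first_seen = {i: t for i, t in self.first_seen.items()
                           if i in visible_ids}
        for i in visible_ids:
            self.first_seen.setdefault(i, frame_idx)
        if not self.first_seen:
            return None
        # longest presence = earliest first-seen frame
        return min(self.first_seen, key=self.first_seen.get)
```

The returned ID is the face the mobile sensor should keep centered; a track that leaves the field of view loses its accumulated presence.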

    Visual intent recognition in a multiple camera environment

    Activity recognition is an active field of research with many applications for both industrial and home use. Industry might use it as part of a security surveillance system, while home applications include smart rooms and aids for the disabled. This thesis develops one component of a “smart system” that can recognize certain activities related to the subject’s intent, i.e. where subjects concentrate their attention. A visual intent activity recognition system that operates in near real-time is created, based on multiple cameras. To accomplish this, a combination of face detection, facial feature detection, and pose estimation is used to estimate each subject’s gaze direction. To allow for better detection of the subject’s facial features, and thus more robust pose estimation, a multiple camera system is used: a wide-view camera is zoomed out and finds the subject, while a narrow-view camera zooms in to capture more facial detail. Neural networks are then used to locate the mouth and eyes. A triangle template is matched to these features and used to estimate the subject’s pose in real-time. This method is used to determine where the subjects are looking and to detect the activity of looking intently at a given location. A four-camera system recognizes the activity as occurring when at least one of two subjects is looking at the other. Testing showed that, on average, the pose estimate was accurate to within 5.08 degrees. The visual intent activity recognition system correctly determined when one subject was looking at the other over 95% of the time.
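The eyes-plus-mouth triangle carries pose information because the mouth shifts horizontally relative to the eye midpoint as the head turns. The crude geometric yaw estimate below is a simplified stand-in for the thesis's template-matching pose estimator, not its actual method; the linear offset-to-angle mapping is an assumption.

```python
import numpy as np

def estimate_yaw(left_eye, right_eye, mouth):
    # crude yaw estimate from the eye-eye-mouth triangle: normalize the
    # mouth's horizontal offset from the eye midpoint by the inter-eye
    # distance and map it to an angle (illustrative heuristic only)
    left_eye, right_eye, mouth = map(np.asarray, (left_eye, right_eye, mouth))
    eye_mid = (left_eye + right_eye) / 2
    inter_eye = np.linalg.norm(right_eye - left_eye)
    offset = (mouth[0] - eye_mid[0]) / inter_eye
    return float(np.degrees(np.arcsin(np.clip(offset, -1.0, 1.0))))
```

A frontal face gives zero yaw; combining such per-camera estimates with known camera geometry is what lets the system decide whether one subject is looking at the other.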

    Long Range Automated Persistent Surveillance

    This dissertation addresses long range automated persistent surveillance with a focus on three topics: sensor planning, size preserving tracking, and high magnification imaging. In sensor planning, sufficient overlapped field of view should be reserved so that camera handoff can be executed successfully before the object of interest becomes unidentifiable or untraceable. We design a sensor planning algorithm that not only maximizes coverage but also ensures uniform and sufficient overlap in the cameras’ fields of view for an optimal handoff success rate. This algorithm works for environments with multiple dynamic targets using different types of cameras. Significantly improved handoff success rates are illustrated via experiments using floor plans of various scales. Size preserving tracking automatically adjusts the camera’s zoom for a consistent view of the object of interest. Target scale estimation is carried out based on the paraperspective projection model, which compensates for the center offset and accounts for system latency and tracking errors. A computationally efficient foreground segmentation strategy, 3D affine shapes, is proposed. The 3D affine shapes feature direct and real-time implementation and improved flexibility in accommodating the target’s 3D motion, including off-plane rotations. The effectiveness of the scale estimation and foreground segmentation algorithms is validated via both offline and real-time tracking of pedestrians at various resolution levels. Face image quality assessment and enhancement compensate for the degradation in face recognition rates caused by high system magnifications and long observation distances. A class of adaptive sharpness measures is proposed to evaluate and predict this degradation. A wavelet-based enhancement algorithm with automated frame selection is developed and proves efficient, yielding a considerably elevated face recognition rate for severely blurred long range face images.
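The sharpness-based quality assessment and automated frame selection can be sketched with a simple gradient-energy measure. This is a generic stand-in for the dissertation's adaptive sharpness measures and wavelet enhancement, which are not specified in the abstract; function names and the `keep` parameter are assumptions.

```python
import numpy as np

def sharpness(image):
    # gradient-energy sharpness: mean squared finite difference; a simple
    # stand-in for the adaptive sharpness measures proposed in the work
    img = np.asarray(image, dtype=float)
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return float((gx**2).mean() + (gy**2).mean())

def select_frames(frames, keep=5):
    # automated frame selection: keep only the sharpest frames as
    # candidates for the enhancement stage, in original order
    ranked = sorted(range(len(frames)),
                    key=lambda i: sharpness(frames[i]), reverse=True)
    return sorted(ranked[:keep])
```

Blur from long-range, high-magnification optics suppresses gradient energy, so discarding low-scoring frames before enhancement avoids wasting the recognition stage on hopeless inputs.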

    Survey on face detection methods

    Face detection has attracted attention from many researchers due to its wide range of applications such as video surveillance, face recognition, object tracking and expression analysis. It consists of three stages: preprocessing, feature extraction and classification. Firstly, preprocessing extracts regions from images or a real-time web camera, which then act as face or non-face candidate images. Secondly, feature extraction involves segmenting the desired features from preprocessed images. Lastly, classification is a process of clustering extracted features based on certain criteria. In this paper, 15 papers published from 2013 to 2018 are reviewed. The reviewed face detection methods include Skin Colour Segmentation, Viola and Jones, Haar features, 3D-mean shift, Cascaded Head and Shoulder detection (CHSD), and Libfacedetection. The findings show that skin colour segmentation is the most popular method used for feature extraction, with an 88% to 98% detection rate. Unlike the skin colour segmentation method, detections from the Viola and Jones method often include other parts of the human body in addition to face regions, with an 80% to 90% detection rate. OpenCV, Python or MATLAB can be used to develop a real-life face detection system.
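Skin colour segmentation, the most popular method in the survey, can be sketched with one commonly used explicit RGB rule. The surveyed papers do not all use this particular rule; real systems also often work in HSV or YCbCr colour spaces instead, so the thresholds below are one illustrative heuristic.

```python
import numpy as np

def skin_mask(rgb):
    # explicit RGB skin-colour rule: a pixel is a skin candidate when it is
    # bright enough, dominated by red, and has sufficient colour spread
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    spread = rgb.max(axis=-1).astype(int) - rgb.min(axis=-1).astype(int)
    return ((r > 95) & (g > 40) & (b > 20)
            & (spread > 15) & (np.abs(r - g) > 15)
            & (r > g) & (r > b))
```

The resulting boolean mask marks face candidate regions that the later feature-extraction and classification stages then confirm or reject.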