267 research outputs found
Real Time Tracking and Face Recognition Using Web Camera
Much interest has been shown in the field of biometric surveillance over the past decade.
Face recognition is a biometric technique that has gained much attention due to
its low intrusiveness and the easy availability of input data. To humans, face
recognition is a natural and effortless task. However, computerized face recognition is often
complex and inaccurate. Researchers have developed several effective techniques, such as
template matching, graph matching and eigenfaces, that accomplish this task with
varying degrees of success.
In this dissertation, the eigenface approach is combined with neural networks to perform
face recognition. Face images are first projected into a feature space where eigenvectors
are extracted. The neural network performs identification and is used to train the
computer to recognize faces.
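The projection step described above can be sketched in a few lines. This is an illustrative toy example, not the dissertation's implementation: the 4-pixel "images", mean face, and eigenfaces below are hypothetical, and the neural-network classifier that would consume the resulting feature vector is omitted.

```python
# Minimal sketch of the eigenface feature-extraction step: a face image is
# flattened to a vector, the mean face is subtracted, and the result is
# projected onto precomputed eigenvectors ("eigenfaces") to obtain a
# low-dimensional feature vector for a downstream classifier.

def project(face, mean_face, eigenfaces):
    """Project a flattened face image into the eigenface feature space."""
    centered = [p - m for p, m in zip(face, mean_face)]
    # One coefficient per eigenface: the dot product with the centered image.
    return [sum(c * e for c, e in zip(centered, ef)) for ef in eigenfaces]

# Toy 4-pixel "images" and two orthonormal eigenfaces (illustrative values).
mean_face = [0.5, 0.5, 0.5, 0.5]
eigenfaces = [[0.5, 0.5, -0.5, -0.5],
              [0.5, -0.5, 0.5, -0.5]]
features = project([1.0, 1.0, 0.0, 0.0], mean_face, eigenfaces)
```

In practice the eigenfaces are the leading principal components of the training set's covariance matrix, and each face is represented by a handful of such coefficients rather than its raw pixels.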
A number of very good approaches to face recognition are already available, but most of
them work well only in constrained environments. Here, the development of a real-time face
recognition system intended to work well in an unconstrained environment is studied. A
tracking system is developed to work together with the face recognition algorithm: a
pixel-difference method detects movement in the camera's view, and a pan-tilt
system driven by stepper motors enables horizontal and vertical movement.
The face recognition algorithm is found to work well, with a recognition rate of
around 95%. The eigenface method combined with neural networks shows good
performance in terms of accuracy and its ability to learn and generalize. The
tracking system works well for objects traveling at speeds below 5 m/s and at distances
between 0.5 m and 2 m from the camera. Several improvements to the tracking
system's performance are suggested, and an overview of leading tracking and face
recognition systems and the scope of future work in this area is discussed.
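The pixel-difference motion detection mentioned above can be sketched as follows. This is a hedged illustration under assumed details: the frames here are flat lists of grayscale values and the threshold is an arbitrary choice, neither taken from the dissertation.

```python
# Sketch of pixel-difference motion detection: consecutive frames are
# subtracted and pixels whose absolute difference exceeds a threshold are
# flagged as motion.

def detect_motion(prev_frame, curr_frame, threshold=30):
    """Return (motion_found, changed_pixel_count) for two grayscale frames."""
    changed = sum(
        1 for p, c in zip(prev_frame, curr_frame) if abs(c - p) > threshold
    )
    return changed > 0, changed

prev = [10, 10, 10, 10]
curr = [10, 200, 10, 10]   # one pixel changed sharply between frames
moved, count = detect_motion(prev, curr)
```

In a full system the centroid of the changed pixels would steer the pan-tilt stepper motors so the moving object stays in the camera's view.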
Machine Understanding of Human Behavior
A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, from enabling computers to understand human behavior.
Pedestrian Detection and Tracking in Video Surveillance System: Issues, Comprehensive Review, and Challenges
Pedestrian detection and monitoring in a surveillance system are critical for numerous application areas, which encompass unusual event detection, human gait analysis, congestion or crowded-area evaluation, gender classification, fall detection in elderly people, etc. Researchers' primary focus is to develop a surveillance system that can work in a dynamic environment, but there are major issues and challenges involved in designing such systems. These challenges occur at three different levels of pedestrian detection: video acquisition, human detection, and tracking. The challenges in acquiring video include illumination variation, abrupt motion, complex backgrounds, shadows, object deformation, etc. Human detection and tracking challenges include varied poses, occlusion, tracking in dense crowds, etc. These result in a lower recognition rate. A brief summary of surveillance systems, along with a comparison of pedestrian detection and tracking techniques in video surveillance, is presented in this chapter. Publicly available pedestrian benchmark databases as well as future research directions in pedestrian detection are also discussed.
Face Occlusion Detection Using Deep Convolutional Neural Networks
With the rise of crimes associated with Automated Teller Machines (ATMs), security reinforcement through surveillance techniques has been a hot topic on the security agenda. As a result, cameras are frequently installed with ATMs so as to capture the facial images of users. The main objective is to support follow-up criminal investigations in the event of an incident. However, in the case of misuse, the user's face is often occluded. Therefore, face occlusion detection has become very important for preventing crimes connected with ATM usage. Traditional approaches to the problem typically comprise a succession of steps: localization, segmentation, feature extraction and recognition. This paper proposes an end-to-end facial occlusion detection framework that is robust and effective, combining a region proposal algorithm and Convolutional Neural Networks (CNNs). The framework uses a coarse-to-fine strategy consisting of two CNNs: the first detects the head within an upper-body image, while the second determines which facial part is occluded in the head image. In comparison with previous approaches, the use of CNNs is optimal from a system point of view, as the design follows the end-to-end principle and the model operates directly on image pixels. For evaluation purposes, a face occlusion database consisting of over 50,000 images with annotated facial parts was used. Experimental results revealed that the proposed framework is very effective. Using the bespoke face occlusion dataset, the Aleix and Robert (AR) face dataset and the Labeled Faces in the Wild (LFW) database, we achieved 85.61%, 97.58% and 100% accuracy for head detection when the Intersection over Union (IoU) is larger than 0.5, and 94.55%, 98.58% and 95.41% accuracy for occlusion discrimination, respectively.
COMPACT: biometric dataset of face images acquired in uncontrolled indoor environment
Biometric databases are important components that help improve state-of-the-art recognition performance. The availability of increasingly difficult data attracts researchers, who systematically develop novel recognition algorithms and increase identification accuracy. Surprisingly, most of the popular face datasets, like LFW or IJB-A, are not fully unconstrained: the majority of the available images were not acquired on the move, which reduces the amount of blur caused by motion or incorrect focusing. Therefore, this paper introduces the COMPACT database for studying less-cooperative face recognition. The dataset consists of high-resolution images of 108 subjects acquired in a fully automated manner as people pass through a recognition gate. This ensures that the collected data contains real-world degradation factors: different distances, expressions, occlusions, pose variations and motion blur. Additionally, the authors conducted a series of experiments that verify face recognition performance on the collected data.
A vision-based method for on-road truck height measurement in proactive prevention of collision with overpasses and tunnels
This is the accepted manuscript; the final version is available from Elsevier at http://www.sciencedirect.com/science/article/pii/S0926580514002167. Over-height trucks continually strike low-clearance overpasses and tunnels, leading to significant damage, fatalities, and inconvenience to the public. Smart systems that automatically detect and warn oversize trucks have been introduced to give them the opportunity to avoid a collision, but the high cost of implementing these systems remains a bottleneck to their wide adoption. This paper evaluates the feasibility of using computer vision to detect over-height trucks. In the proposed method, video streams are collected from a surveillance camera attached to the overpass or tunnel and processed to measure truck heights. The height is measured using line detection and blob tracking, which locate the upper and lower points of a truck in pixel coordinates; the pixel coordinates are then translated into 3D world coordinates. Proof-of-concept experiments demonstrate the high performance of the proposed method and its potential for cost-effective monitoring of over-height trucks in the transportation system. The limitations and considerations of the method for field implementation are also discussed. This material is based upon work supported by West Virginia University, Myongji University, and University of Cambridge.
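The pixel-to-world translation described above can be illustrated with a basic pinhole-camera model. This is a simplified sketch under assumed values: the focal length, camera-to-truck distance, and pixel coordinates below are made up for illustration and do not come from the paper, whose calibration is more involved.

```python
# Illustrative pinhole-camera conversion: with a calibrated focal length (in
# pixels) and a known distance to the truck's vertical plane, the truck's
# pixel extent maps to metres by similar triangles.

def truck_height_m(y_top_px, y_bottom_px, distance_m, focal_px):
    """Convert the truck's vertical pixel extent to a height in metres."""
    return (y_bottom_px - y_top_px) * distance_m / focal_px

# Toy numbers: the truck spans 400 px vertically, seen from 10 m away by a
# camera with a 1000 px focal length.
height = truck_height_m(100, 500, 10.0, 1000.0)
```

The measured height would then be compared against the posted clearance to trigger a warning before the truck reaches the overpass.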