
    Sliding Window for Radial Basis Function Neural Network Face Detection

    This paper presents a Radial Basis Function Neural Network (RBFNN) face detection system using sliding windows. The system detects faces in a large image by running a sliding window across the image and identifying whether a face is present inside the current window. Face detection is the first step in a face recognition system; the purpose is to localize and extract the face region from the background so that it can be fed into the face recognition system for identification. A general preprocessing approach was used to normalize the image, and a Radial Basis Function (RBF) Neural Network was used to distinguish between face and non-face images. RBFNNs offer several advantages over other neural network architectures: they can be trained with a fast two-stage training algorithm, and the network possesses the best-approximation property. The output of the network can be optimized by setting suitable values for the centers and spread of the RBF. In this paper, a uniform fixed spread value is used. The performance of the system is evaluated in terms of the detection rate and the false negative rate.
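    A minimal sketch of the sliding-window scan described above, assuming a grayscale image, a 19x19 window, a fixed stride, and a hypothetical classifier object exposing a `predict` method that returns a face score; this illustrates the technique, not the authors' implementation.

```python
import numpy as np

def sliding_window_detect(image, classifier, win=19, stride=4, threshold=0.5):
    """Scan `image` with a win x win window and return boxes the classifier
    labels as faces. `classifier.predict` is assumed (hypothetically) to map
    a flattened, normalized patch to a face score in [0, 1]."""
    h, w = image.shape
    detections = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = image[y:y + win, x:x + win].astype(np.float64)
            # Simple normalization as a stand-in for the preprocessing step.
            patch = (patch - patch.mean()) / (patch.std() + 1e-8)
            score = classifier.predict(patch.ravel())
            if score >= threshold:
                detections.append((x, y, win, win, score))
    return detections
```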

    Face Detection Using Radial Basis Function Neural Networks with Fixed Spread Value

    This paper presents a face detection system using Radial Basis Function Neural Networks with a fixed spread value. Face detection is the first step in a face recognition system; the purpose is to localize and extract the face region from the background so that it can be fed into the face recognition system for identification. A general preprocessing approach was used to normalize the image, and a Radial Basis Function (RBF) Neural Network was used to distinguish between face and non-face images. RBF Neural Networks offer several advantages over other neural network architectures: they can be trained with a fast two-stage training algorithm, and the network possesses the best-approximation property. The output of the network can be optimized by setting suitable values for the centers and spread of the RBF. In this paper, a uniform fixed spread value is used. The performance of the RBFNN face detection system is evaluated using the False Acceptance Rate (FAR) and False Rejection Rate (FRR) criteria. The best settings for RBF face detection are summarized in one table: using 200 centers and a spread of 4 gives the highest detection rate and the lowest FAR and FRR. For detecting many faces in a single image, however, 200 centers and a spread of 5 is the best setting, as the system can then detect all faces in the image.
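    The FAR and FRR criteria used above can be computed from classifier scores and ground-truth labels roughly as follows; this is a generic sketch of the two metrics, not the authors' evaluation code, and the threshold value is an assumption.

```python
import numpy as np

def far_frr(scores, labels, threshold=0.5):
    """Compute False Acceptance Rate and False Rejection Rate.

    scores : classifier outputs, higher means "face".
    labels : ground-truth labels, 1 = face, 0 = non-face.
    """
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    accepted = scores >= threshold
    # FAR: fraction of non-face samples wrongly accepted as faces.
    far = np.mean(accepted[labels == 0]) if np.any(labels == 0) else 0.0
    # FRR: fraction of face samples wrongly rejected.
    frr = np.mean(~accepted[labels == 1]) if np.any(labels == 1) else 0.0
    return far, frr

# Toy example: both rates come out to 0.5 for these made-up scores.
print(far_frr([0.9, 0.2, 0.7, 0.4], [1, 0, 0, 1], threshold=0.5))
```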

    Eye Detection Using Wavelets and ANN

    A biometric system identifies an individual based on a unique biological feature or characteristic possessed by a person, such as a fingerprint, handwriting, heartbeat, the face, or the eyes. Among these, eye detection is an attractive approach because the human eye does not change throughout an individual's life, and it is regarded as the most reliable and accurate biometric identification method available. In this project we develop a system for eye detection using wavelets and an artificial neural network (ANN), simulated with the MATLAB 7.0 toolbox, in order to verify the uniqueness of the human eye and its performance as a biometric. Eye detection involves first extracting the eye from a digital face image and then encoding the unique patterns of the eye so that they can be compared with pre-registered eye patterns. The eye detection system consists of an automatic segmentation stage based on the wavelet transform; wavelet analysis is then used as a pre-processor for a back-propagation neural network with conjugate gradient learning. The inputs to the neural network are the wavelet maxima neighborhood coefficients of the face images at a particular scale, and the output is the classification of the input as an eye or non-eye region. An accuracy of 81% is observed for test images taken under environmental conditions not included during training.
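    A rough sketch of the wavelet-features-plus-ANN idea described above, using PyWavelets for a 2-D wavelet decomposition and a small scikit-learn MLP in place of the paper's conjugate-gradient back-propagation network; the patch size, wavelet choice, and random training data are placeholders, not the paper's setup.

```python
import numpy as np
import pywt                               # PyWavelets, for the 2-D wavelet transform
from sklearn.neural_network import MLPClassifier

def wavelet_features(patch, wavelet="haar"):
    """Single-level 2-D DWT of an eye/non-eye patch; the detail coefficients
    stand in for the wavelet-maxima neighborhood features in the paper."""
    cA, (cH, cV, cD) = pywt.dwt2(patch, wavelet)
    return np.concatenate([cH.ravel(), cV.ravel(), cD.ravel()])

# Toy training data: random 16x16 patches with made-up labels (1 = eye).
rng = np.random.default_rng(0)
patches = rng.random((200, 16, 16))
labels = rng.integers(0, 2, size=200)

X = np.array([wavelet_features(p) for p in patches])
# lbfgs is used here instead of the conjugate-gradient learning in the paper.
clf = MLPClassifier(hidden_layer_sizes=(32,), solver="lbfgs", max_iter=500)
clf.fit(X, labels)
print(clf.predict(X[:5]))
```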

    The Effect of Overlapping Spread Value for Radial Basis Function Neural Network in Face Detection

    In this paper, the effect of overlapping spread values for a Radial Basis Function Neural Network (RBFNN) in face detection is presented. The overlapping factor is taken into consideration in order to optimize the results when varying spread values are used. Face detection is the first step in a face recognition system; the purpose is to localize and extract the face region from the background so that it can be fed into the face recognition system for identification. A general preprocessing approach was used to normalize the image, and a Radial Basis Function (RBF) Neural Network was used to distinguish between face and non-face images. RBFNNs offer several advantages over other neural network architectures: they can be trained with a fast two-stage training algorithm, and the network possesses the best-approximation property. The output of the network can be optimized by setting suitable values for the centers and spread of the RBF. The performance of the RBFNN face detection system is evaluated using the False Acceptance Rate (FAR) and False Rejection Rate (FRR) criteria.
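    The role of the spread in the overlap between neighboring basis functions can be seen directly from the Gaussian RBF activation exp(-||x - c||^2 / (2*sigma^2)); the small sketch below, with made-up centers and spreads, shows how a larger spread increases the overlap at the midpoint between two centers.

```python
import numpy as np

def rbf_activations(x, centers, spreads):
    """Gaussian RBF hidden-layer outputs for input vector x.

    centers : (n_centers, n_features) array of RBF centers.
    spreads : (n_centers,) array; a larger spread makes neighboring basis
              functions overlap more, which is the factor the paper varies.
    """
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * spreads ** 2))

# Two nearby centers: a wider spread produces stronger overlap at the midpoint.
centers = np.array([[0.0], [1.0]])
x_mid = np.array([0.5])
print(rbf_activations(x_mid, centers, np.full(2, 0.1)))  # nearly no overlap
print(rbf_activations(x_mid, centers, np.full(2, 1.0)))  # strong overlap
```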

    A Deep Learning Approach to Landmark Detection in Facial Images

    In this paper an alternative approach to landmark detection using cascaded convolutional neural networks is proposed. The cascade consists of three levels, each with a number of convolutional neural networks, and after each level of the cascade the predictions converge further, resulting in accurate predictions of landmark locations in facial images. The main advantage over other methods proposed in the literature is the integration of face detection and landmark detection in one system. The method is also able to implicitly encode both local constraints and shape constraints over the entire image, giving it an advantage over conventional non-deep-learning detection methods. As such, the cascaded neural network substantially outperforms STASM, a state-of-the-art shape model approach. However, the model does not generalize well to data that is dissimilar to the images it was trained on.
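    A high-level sketch of a coarse-to-fine cascade of the kind described above, assuming a hypothetical interface in which first-level networks map the whole face to absolute landmark coordinates and later-level networks map a small crop around a landmark to a correction; the crop size and averaging scheme are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def cascade_predict(image, levels, crop_half=12):
    """Cascade of landmark regressors with a hypothetical callable interface.

    levels[0] holds networks mapping the whole face image to absolute
    (n_landmarks, 2) coordinates; later levels hold networks mapping a small
    crop around one landmark to a (2,) correction for that landmark.
    Predictions from the networks within each level are averaged, and each
    level refines the estimate produced by the previous one.
    """
    h, w = image.shape[:2]
    # Level 1: absolute landmark estimates from the whole face.
    landmarks = np.mean([net(image) for net in levels[0]], axis=0)
    # Subsequent levels: small corrections predicted from local crops.
    for level in levels[1:]:
        refined = []
        for net in level:
            corrections = []
            for x, y in landmarks.astype(int):
                y0, y1 = max(y - crop_half, 0), min(y + crop_half, h)
                x0, x1 = max(x - crop_half, 0), min(x + crop_half, w)
                corrections.append(net(image[y0:y1, x0:x1]))
            refined.append(landmarks + np.asarray(corrections))
        landmarks = np.mean(refined, axis=0)
    return landmarks
```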

    An Accurate Real-Time Method for Face Mask Detection using CNN and SVM

    Infectious respiratory diseases, including COVID-19, pose a significant challenge to humanity and a potential threat to life due to their severity and rapid spread. Wearing a surgical mask is among the most significant safety precautions that can help keep this sort of pandemic from spreading, but manually monitoring large crowds in public places for face masks is problematic. In this research, we propose a real-time approach for face mask detection. First, we use a multi-scale deep neural network to extract features, so that the resulting attributes are better suited for training the detection system. We then employ SVM post-processing in the classification stage to make the face mask detection method more robust. According to the experimental findings, our strategy considerably decreased the percentage of false positives and undetected cases.
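    The CNN-features-plus-SVM split can be sketched as below, with random placeholder vectors standing in for the features produced by the paper's multi-scale deep network; the feature dimension, kernel, and labels are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Stand-in for CNN feature extraction: in the paper a multi-scale deep
# network produces these vectors; here they are random placeholders.
rng = np.random.default_rng(0)
features_train = rng.random((300, 256))        # one 256-D vector per face crop
labels_train = rng.integers(0, 2, size=300)    # 1 = mask, 0 = no mask

# SVM classification stage on top of the CNN features.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(features_train, labels_train)

features_test = rng.random((5, 256))
print(clf.predict(features_test))              # 1 = mask predicted, 0 = no mask
```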

    Learning Convolutional Neural Network For Face Verification

    Convolutional neural networks (ConvNets) have improved the state of the art in many applications. Face recognition tasks, for example, have seen significantly improved performance due to ConvNets. However, less attention has been given to video-based face recognition. Here, we make three contributions along these lines. First, we propose a ConvNet-based system for long-term face tracking in videos. By taking advantage of deep learning models pre-trained on big data, we developed a novel system for accurate video face tracking in unconstrained environments depicting various people and objects moving in and out of the frame. In the proposed system, we present a Detection-Verification-Tracking (DVT) method that accomplishes the long-term face tracking task through the collaboration of face detection, face verification, and (short-term) face tracking. An online-trained detector based on cascaded convolutional neural networks localizes all faces appearing in the frames, an online-trained face verifier based on deep convolutional neural networks and similarity metric learning decides whether any face, and which face, corresponds to the query person, and an online-trained tracker follows the face from frame to frame. When validated on a sitcom episode and a TV show, the DVT method outperforms tracking-learning-detection (TLD) and face-TLD in terms of recall and precision. The proposed system was tested on many other types of videos and shows very promising results. Second, as the availability of large-scale training datasets has a significant effect on the performance of ConvNet-based recognition methods, we present a successful automatic video collection approach for generating a large-scale video training dataset. We designed a procedure for generating a face verification dataset from videos based on the long-term face tracking algorithm, DVT. In this procedure, streams can be collected from videos and labeled automatically without human annotation. Using this procedure, we assembled a widely scalable dataset, FaceSequence, which includes 1.5M streams capturing ~500K individuals. A key distinction between this dataset and existing video datasets is that FaceSequence is generated from publicly available videos and labeled automatically, and is hence widely scalable at no annotation cost. Lastly, we introduce a stream-based ConvNet architecture for the video face verification task. The proposed network is designed to optimize a differentiable error function, referred to as the stream loss, using unlabeled temporal face sequences. Using the unlabeled video dataset FaceSequence, we trained our network to minimize the stream loss. The network achieves verification accuracy comparable to the state of the art on the LFW and YTF datasets with much smaller model complexity. In comparison to VGG, our method demonstrates a significant improvement in TAR/FAR, considering that the VGG dataset is highly purified and contains little label noise. We also fine-tuned the network using the IJB-A dataset; the validation results show competitive verification accuracy compared with the best previous video face verification results.
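    A high-level sketch of how a Detection-Verification-Tracking loop of this kind can be organized; the detector, verifier, and tracker interfaces and the re-detection interval are hypothetical placeholders, not the thesis's actual components.

```python
def dvt_track(frames, detector, verifier, tracker, redetect_every=30):
    """High-level Detection-Verification-Tracking loop (hypothetical interface).

    detector(frame)      -> list of candidate face boxes
    verifier(frame, box) -> similarity score to the query person
    tracker.init/update  -> short-term tracker following one box
    Every `redetect_every` frames the detector and verifier re-localize the
    query face; in between, the short-term tracker follows it.
    """
    track = []
    box = None
    for i, frame in enumerate(frames):
        if box is None or i % redetect_every == 0:
            candidates = detector(frame)
            if candidates:
                # Keep the detection the verifier says looks most like the query.
                box = max(candidates, key=lambda b: verifier(frame, b))
                tracker.init(frame, box)
            else:
                box = None
        else:
            box = tracker.update(frame)
        track.append(box)
    return track
```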