12 research outputs found

    A Novel Approach to Face Recognition using Image Segmentation based on SPCA-KNN Method

    In this paper we propose a novel method for face recognition using a hybrid SPCA-KNN (SIFT-PCA-KNN) approach. The proposed method consists of three parts. The first part preprocesses face images using a graph-based algorithm and the SIFT (Scale Invariant Feature Transform) descriptor; the graph-based topology is used for matching two face images. In the second part, eigenvalues and eigenvectors are extracted from each input face image. The goal is to extract the important information from the face data and represent it as a set of new orthogonal variables called principal components. In the final part, a nearest-neighbor classifier is designed to classify the face images based on the SPCA-KNN algorithm. The algorithm has been tested on 100 different subjects (15 images per class). The experimental results show that the proposed method has a positive effect on overall face recognition performance and outperforms the other examined methods.
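
    The paper's exact SPCA-KNN implementation is not reproduced here; the following is a minimal sketch of the PCA plus k-NN stage only, with the SIFT/graph-based preprocessing omitted, assuming flattened grayscale face images in `X` and integer labels in `y` (both hypothetical placeholders) and using scikit-learn.

```python
# Minimal sketch of the PCA + k-NN stage (SIFT / graph-based preprocessing omitted).
# X: (n_samples, n_pixels) flattened face images, y: integer class labels -- placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1500, 64 * 64))       # stand-in for 100 subjects x 15 images each
y = np.repeat(np.arange(100), 15)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Project faces onto the leading principal components ("eigenfaces").
pca = PCA(n_components=50, whiten=True).fit(X_train)
knn = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_train), y_train)

print("accuracy:", knn.score(pca.transform(X_test), y_test))
```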

    Various Approaches of Support vector Machines and combined Classifiers in Face Recognition

    In this paper we present the various approaches used in face recognition from 2001 to 2012. Over the last decade, face recognition has been applied in many fields, such as security and identity authentication, and today accurate and fast performance is required. Face recognition technology has now reached a mature stage because research in this field is conducted continuously. We review some extensions of the Support Vector Machine (SVM) that give impressive performance in face recognition. We also review some papers on combined classifier approaches, which is likewise an active research area in pattern recognition.

    Enhancing Face Recognition through Dimensionality Reduction Techniques and Diverse Classifiers

    Face recognition is an essential component of various applications, including computer vision, security systems and biometrics. By examining the efficacy of several dimensionality reduction techniques, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Singular Value Decomposition (SVD), and Non-Negative Matrix Factorization (NMF), this paper offers a novel approach to face recognition. These techniques are combined with diverse classifiers, including Support Vector Machines (SVM), Random Forest, LightGBM, and k-Nearest Neighbors (KNN), to evaluate their impact on face recognition accuracy. Experiments were conducted on the Olivetti faces dataset. We present a comparative analysis of the different dimensionality reduction techniques and classifiers in terms of accuracy, precision, recall and F1-score. The results show the potential of integrating PCA with diverse classification models to enhance face recognition accuracy and highlight its applicability in real-world scenarios.
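
    As an illustration of the kind of comparison the paper describes, the sketch below pairs PCA with a few of the named classifiers on the Olivetti faces dataset via scikit-learn (LightGBM is left out to keep dependencies minimal); the component count and classifier settings are assumptions, not the paper's.

```python
# Sketch: PCA-reduced Olivetti faces evaluated with several classifiers.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

faces = fetch_olivetti_faces()        # 400 images of 40 subjects, downloaded on first use
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

pca = PCA(n_components=100, whiten=True).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

for name, clf in [("SVM", SVC()),
                  ("RandomForest", RandomForestClassifier(random_state=0)),
                  ("kNN", KNeighborsClassifier(n_neighbors=3))]:
    clf.fit(Z_train, y_train)
    print(name)
    print(classification_report(y_test, clf.predict(Z_test), zero_division=0))
```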

    Face Recognition using Deep Learning and TensorFlow framework

    Detecting human faces and recognizing faces and facial expressions have always been an area of interest for different applications such as games, utilities and even security. With the advancement of machine learning, the techniques of detection and recognition have become more accurate and precise than ever before. However, machine learning remains a relatively complex field that can feel intimidating or inaccessible to many of us. Luckily, in the last couple of years, several organizations and open-source communities have been developing tools and libraries that help abstract the complex mathematical algorithms in order to encourage developers to easily create learning models and train them using any programming language. As part of this project, we will create a face detection framework in Python built on top of the work of several open-source projects and models, with the hope of reducing the entry barrier for developers and encouraging them to focus more on developing innovative applications that make use of face detection and recognition.
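
    The project's actual framework and model choices are not spelled out in the abstract; as a rough illustration of how little code a wrapped detector can require, the sketch below uses OpenCV's bundled Haar cascade (an assumption, not necessarily the stack this project builds on), and the image path is a placeholder.

```python
# Sketch: detect faces in an image with OpenCV's pre-trained Haar cascade.
# "group_photo.jpg" is a placeholder path.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("group_photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns a list of (x, y, w, h) bounding boxes for detected faces.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("group_photo_detected.jpg", image)
```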

    Quantum Face Recognition with Multi-Gate Quantum Convolutional Neural Network

    In the last decade, quantum computing has showcased its unique mechanism across diverse fields, highlighting significant potential for data-driven applications requiring substantial computational resources. Within this landscape, quantum machine learning emerges as a promising frontier, poised to harness the unique advantages of quantum computing for machine learning tasks. Nonetheless, the current generation of quantum hardware, typified by noisy intermediate-scale quantum (NISQ) devices, grapples with severe resource constraints, particularly in terms of qubit availability. While quantum computing offers tantalizing capabilities such as superposition and entanglement, which can be strategically leveraged to optimize the performance of quantum neural networks, the challenge remains in mitigating the resource limitations while upholding high recognition accuracy. To address this imperative, we introduce a pioneering face recognition method christened the Multi-Gate Quantum Convolutional Neural Network (MG-QCNN). This innovation is engineered to surmount the resource bottleneck endemic to NISQ devices while preserving exceptional recognition accuracy. Our empirical investigations, conducted on benchmark datasets including the Yale face dataset and the ORL face database, illuminate the remarkable potential of this approach. Specifically, our proposed variational quantum circuit architecture consistently achieves an impressive average accuracy of 96%, better than the 95% of a classical CNN. Our model underscores the efficacy of quantum convolution operations in the extraction of feature maps, exhibiting a transformative stride toward unlocking the full potential of quantum-enhanced face recognition; compared with other quantum models, our method has advantages in both accuracy and efficiency.
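
    The MG-QCNN architecture itself is only summarized above; as a loose illustration of the variational-circuit building block such quantum convolution models rest on, the sketch below defines a tiny parameterized circuit in PennyLane that angle-encodes a 2x2 image patch and returns an expectation value usable as one feature-map entry. The gate layout and parameters are assumptions for illustration, not the paper's multi-gate design.

```python
# Sketch: a tiny variational "quantum convolution" filter over a 2x2 image patch.
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=4)

@qml.qnode(dev)
def quanv_filter(patch, weights):
    # Angle-encode the four pixel values (assumed in [0, 1]) onto four qubits.
    for i, pixel in enumerate(patch):
        qml.RY(np.pi * pixel, wires=i)
    # One trainable variational layer: rotations followed by an entangling CNOT ring.
    for i in range(4):
        qml.RY(weights[i], wires=i)
    for i in range(4):
        qml.CNOT(wires=[i, (i + 1) % 4])
    # The measured expectation value becomes one entry of the output feature map.
    return qml.expval(qml.PauliZ(0))

weights = np.random.uniform(0, np.pi, 4)
print(quanv_filter([0.1, 0.5, 0.9, 0.3], weights))
```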

    FACE CLASSIFICATION FOR AUTHENTICATION APPROACH BY USING WAVELET TRANSFORM AND STATISTICAL FEATURES SELECTION

    This thesis consists of three parts: face localization, feature selection and the classification process. Three methods were proposed to locate the face region in the input image, two of them based on a pattern (template) matching approach and the other on a clustering approach. Five face datasets, namely the YALE database, the MIT-CBCL database, the Indian database, the BioID database and the Caltech database, were used to evaluate the proposed methods. For the first method, the template image is prepared beforehand using a set of faces; the input image is then enhanced by applying an n-means kernel to decrease the image noise, and Normalized Correlation (NC) is used to measure the correlation coefficients between the template image and the input image regions. For the second method, instead of an n-means kernel, optimized metrics are used to measure the difference between the template image and the input image regions. In the last method, a Modified K-Means Algorithm is used to remove the non-face regions in the input image. The three methods above showed localization accuracy between 98% and 100% compared with existing methods. In the second part of the thesis, the Discrete Wavelet Transform (DWT) is utilized to transform the input image into a set of wavelet coefficients. Coefficients with statistical energy below a certain threshold are then removed, reducing the number of primary wavelet coefficients by up to 98% of the total. Later, only 40% of the statistical features are extracted from the high-energy features using a modified variance metric; the ORL dataset was used to test the proposed statistical method. Finally, a Cluster-K-Nearest Neighbor (C-K-NN) classifier is proposed to classify the input face based on the training face images. The results showed a significant improvement, with 99.39% classification accuracy on the ORL dataset and 100% on the Face94 dataset. Moreover, new metrics were introduced to quantify the exactness of the classification, and some classification errors can be corrected. All the above experiments were implemented in a MATLAB environment.
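
    The thesis's own implementation is in MATLAB and its exact thresholding and C-K-NN classifier are not reproduced here; the Python sketch below only illustrates the general idea of the wavelet stage, using PyWavelets to take a 2-D DWT of a face image and discard low-energy coefficients. The wavelet family, threshold and image are assumptions for illustration.

```python
# Sketch: 2-D DWT of a face image, keeping only high-energy coefficients.
import numpy as np
import pywt

face = np.random.rand(64, 64)          # placeholder for a grayscale face image

# Single-level 2-D Haar DWT: approximation + horizontal/vertical/diagonal details.
cA, (cH, cV, cD) = pywt.dwt2(face, "haar")

coeffs = np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])
energy = coeffs ** 2

# Keep only coefficients whose energy exceeds a percentile threshold.
threshold = np.percentile(energy, 98)
kept = coeffs[energy >= threshold]
print(f"kept {kept.size} of {coeffs.size} coefficients")
```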

    Dynamic texture analysis in video with application to flame, smoke and volatile organic compound vapor detection

    Thesis (Master's) by Osman Günay, Department of Electrical and Electronics Engineering, Institute of Engineering and Science, Bilkent University, Ankara, 2009. Includes bibliographical references (leaves 74-82). Dynamic textures are moving image sequences that exhibit stationary characteristics in time, such as fire, smoke, volatile organic compound (VOC) plumes, waves, etc. Most surveillance applications already have motion detection and recognition capability, but dynamic texture detection algorithms are not an integral part of these applications. In this thesis, image-processing-based algorithms for the detection of specific dynamic textures are developed. Our methods can be used in practical surveillance applications to detect VOC leaks, fire and smoke. The method developed for VOC emission detection in infrared videos uses a change detection algorithm to find the rising VOC plume. The rising characteristic of the plume is detected using a hidden Markov model (HMM), and the dark regions that form on the leaking equipment are found using a background subtraction algorithm. Another method is developed based on an active learning algorithm that is used to detect wildfires at night and close-range flames. The active learning algorithm is based on the Least-Mean-Square (LMS) method: decisions from the sub-algorithms, each of which characterizes a certain property of the texture to be detected, are combined using the LMS algorithm to reach a final decision. Another image processing method is developed to detect fire and smoke from moving-camera video sequences. The global motion of the camera is compensated by finding an affine transformation between the frames using optical flow and RANSAC. Three frame change detection methods with motion compensation are used for fire detection with a moving camera, and a background subtraction algorithm with global motion estimation is developed for smoke detection.
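
    As an illustration of the global motion compensation step described for the moving-camera case, the sketch below estimates an affine transform between two frames with sparse optical flow and RANSAC in OpenCV and then differences the aligned frames. The frame paths, feature-tracking parameters and difference threshold are assumptions, not the thesis's settings.

```python
# Sketch: compensate camera motion between two frames, then difference them.
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Track corner features from the previous frame into the current one.
pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01, minDistance=7)
pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None)
good_prev = pts_prev[status.ravel() == 1]
good_curr = pts_curr[status.ravel() == 1]

# Robustly fit a global affine motion model with RANSAC.
M, _ = cv2.estimateAffinePartial2D(good_prev, good_curr, method=cv2.RANSAC)

# Warp the previous frame into the current frame's coordinates and difference.
aligned = cv2.warpAffine(prev, M, (curr.shape[1], curr.shape[0]))
change_mask = cv2.absdiff(curr, aligned) > 25
print("changed pixels:", int(np.count_nonzero(change_mask)))
```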

    Facial Recognition using Principal Component Analysis (PCA)

    Facial recognition systems have received a strong boost at present thanks to advances in technology. These techniques have many useful applications in very diverse areas such as biometrics, image classification and security, which is why considerable economic and scientific effort has been invested in improving them. The facial recognition process is divided into two tasks. The first of these, detection, comprises locating one or more faces within an image, whether a still image or a video sequence. The second task, recognition, consists of comparing the previously detected face with others previously stored in a database. These two processes should not be totally independent, since good recognition depends strongly on the prior detection, which is conditioned by the position and orientation of the subject's face with respect to the camera and by the lighting conditions. In this work, an automatic face recognition system is studied, implemented and evaluated, working both on images and on video, as well as in real time. As a starting point, a study of the existing facial recognition techniques in the state of the art is carried out. After this first analysis, one of the analysed techniques is selected for implementation in Python in a system able to detect and recognize the faces of people previously entered into the system in a training phase. In this case, the Eigenfaces method, built on Principal Component Analysis (PCA) techniques, is implemented. Finally, a set of tests is performed on different image databases to analyse and verify the results obtained after applying the implemented algorithm. Universidad de Sevilla. Grado en Ingeniería de las Tecnologías de Telecomunicación.
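
    The thesis implements Eigenfaces in Python; the sketch below shows the same idea using OpenCV's built-in EigenFaceRecognizer from the opencv-contrib package, trained on a few pre-cropped grayscale faces and queried on a new one. The file names and label mapping are placeholders, not the thesis's code.

```python
# Sketch: Eigenfaces (PCA-based) recognition with OpenCV's contrib module.
# Requires opencv-contrib-python. All file names below are placeholders.
import cv2
import numpy as np

# Pre-cropped grayscale training faces, all resized to the same shape.
train_files = ["alice_1.png", "alice_2.png", "bob_1.png", "bob_2.png"]
labels = np.array([0, 0, 1, 1])          # 0 = alice, 1 = bob

faces = [cv2.resize(cv2.imread(f, cv2.IMREAD_GRAYSCALE), (100, 100)) for f in train_files]

recognizer = cv2.face.EigenFaceRecognizer_create()
recognizer.train(faces, labels)

probe = cv2.resize(cv2.imread("unknown.png", cv2.IMREAD_GRAYSCALE), (100, 100))
label, confidence = recognizer.predict(probe)
print("predicted label:", label, "distance:", confidence)
```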

    Interpolation of Low Resolution Images for Improved Accuracy in Human Face Recognition

    In a wide range of face recognition applications, such as surveillance cameras in law enforcement, the camera often cannot provide enough facial resolution for recognition. The first part of this research demonstrates the impact of image resolution on the performance of a face recognition system: the performance of several holistic face recognition algorithms is evaluated on low-resolution face images. For classification, this research uses the k-nearest neighbor (k-NN) classifier and an Extreme Learning Machine-based neural network (ELM); the recognition rate of these systems is a function of the image resolution. In the second part of this research, nearest-neighbor, bilinear, and bicubic interpolation techniques are applied as a preprocessing step to increase the resolution of the input image and obtain better results. The results show that increasing the image resolution using the mentioned interpolation methods improves the performance of the recognition systems considerably.
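
    As an illustration of the interpolation preprocessing step, the sketch below upsamples a low-resolution face with the three methods mentioned, using OpenCV's resize flags; the input path and target size are assumptions, not the study's settings.

```python
# Sketch: upsample a low-resolution face with three interpolation methods.
import cv2

low_res = cv2.imread("face_16x16.png", cv2.IMREAD_GRAYSCALE)   # placeholder path

methods = {
    "nearest": cv2.INTER_NEAREST,
    "bilinear": cv2.INTER_LINEAR,
    "bicubic": cv2.INTER_CUBIC,
}

# Upsample to 64x64 before feeding the image to the recognition pipeline.
for name, flag in methods.items():
    high_res = cv2.resize(low_res, (64, 64), interpolation=flag)
    cv2.imwrite(f"face_64x64_{name}.png", high_res)
```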