    Klasifikasi Wajah Manusia Menggunakan Multi Layer Perceptron (Human Face Classification Using a Multi-Layer Perceptron)

    Data security has become a pressing concern in modern technology, and biometrics are an increasingly necessary means of securing data. This study detects a human biometric, the face, using the Kinect sensor. Face images captured by the Kinect are processed with the Gray Level Co-Occurrence Matrix (GLCM) for feature extraction, using four parameters: Contrast, Energy, Homogeneity, and Correlation. The extracted features are then classified with a Multi-Layer Perceptron, with faces grouped by race. Three races were studied: native Indonesian, Chinese, and African. A total of 100 face photographs were used. The classification results show an accuracy of 86.7% using the Multi-Layer Perceptron.
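    As a concrete illustration of the pipeline described above, the sketch below extracts the four GLCM parameters with scikit-image and classifies them with a scikit-learn Multi-Layer Perceptron. The library choices, distances/angles, and hidden-layer size are assumptions for illustration, not the authors' exact configuration.

```python
# Hedged sketch of GLCM feature extraction + MLP classification
# (library choices and hyperparameters are illustrative assumptions).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def glcm_features(gray_face):
    """gray_face: 8-bit grayscale face crop. Returns a 4-D feature vector."""
    glcm = graycomatrix(gray_face, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "energy", "homogeneity", "correlation")
    # Average each property over the four angles.
    return np.array([graycoprops(glcm, p).mean() for p in props])

# faces: list of grayscale face images; labels: 0/1/2 for the three races.
# X = np.stack([glcm_features(f) for f in faces])
# clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, labels)
```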

    Novel approach to enhance face recognition using depth maps

    Face recognition, although a popular area of research and study, still has many challenges, and with the appearance of the Microsoft Kinect device, new possibilities for research were uncovered, one of which is face recognition using the Kinect. With the goal of enhancing face recognition, this paper aims to show how depth maps, which are not affected by illumination, can improve face recognition with a benchmark algorithm based on Eigenfaces. This required a number of experiments, carried out mainly to check whether algorithms created to recognize faces in conventional images can be as effective, if not more effective, with depth-map images. The OpenCV Eigenface implementation was used to train and test on both conventional and depth-map images. Finally, the results of the experiments are presented to prove the ability of the tested algorithm to function with depth maps, and to demonstrate that depth-map face recognition remains feasible under poor illumination.
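    To make the benchmark concrete, the sketch below trains OpenCV's Eigenface recognizer on depth maps exactly as one would on grayscale intensity images. It assumes the opencv-contrib-python package; the data handling is illustrative, not the paper's exact protocol.

```python
# Hedged sketch: OpenCV Eigenface recognizer applied to depth-map images.
import cv2
import numpy as np

def train_eigenfaces(depth_maps, labels):
    """depth_maps: list of equally sized 8-bit single-channel images."""
    model = cv2.face.EigenFaceRecognizer_create()  # needs opencv-contrib
    model.train(depth_maps, np.asarray(labels, dtype=np.int32))
    return model

# A 16-bit Kinect depth frame must first be scaled to 8 bits, e.g.:
# depth8 = cv2.normalize(depth16, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
# label, distance = model.predict(depth8)
```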

    Human Pose Estimation on Privacy-Preserving Low-Resolution Depth Images

    Human pose estimation (HPE) is a key building block for developing AI-based context-aware systems inside the operating room (OR). The 24/7 use of images coming from cameras mounted on the OR ceiling can however raise concerns for privacy, even in the case of depth images captured by RGB-D sensors. Being able to rely solely on low-resolution privacy-preserving images would address these concerns and help scale up the computer-assisted approaches that rely on such data to a larger number of ORs. In this paper, we introduce the problem of HPE on low-resolution depth images and propose an end-to-end solution that integrates a multi-scale super-resolution network with a 2D human pose estimation network. By exploiting intermediate feature maps generated at different super-resolution scales, our approach achieves body pose results on low-resolution images (of size 64x48) that are on par with those of an approach trained and tested on full-resolution images (of size 640x480). Comment: Published at MICCAI 2019.
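    The following PyTorch sketch shows the shape of such a design: a trunk that super-resolves the 64x48 depth image in two x2 stages, with a pose head consuming intermediate feature maps from both scales to predict per-joint heatmaps. All layer sizes and the joint count are illustrative assumptions, not the published architecture.

```python
# Hedged sketch: multi-scale super-resolution trunk + 2D pose head.
import torch
import torch.nn as nn

class SRPose(nn.Module):
    def __init__(self, n_joints=13):
        super().__init__()
        # Two x2 super-resolution stages: 48x64 -> 96x128 -> 192x256.
        self.stage1 = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU())
        # Pose head consumes feature maps from both SR scales.
        self.head = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_joints, 1))

    def forward(self, low_res_depth):
        f1 = self.stage1(low_res_depth)                  # half-resolution features
        f2 = self.stage2(f1)                             # full-resolution features
        f1_up = nn.functional.interpolate(f1, scale_factor=2)
        return self.head(torch.cat([f1_up, f2], 1))      # one heatmap per joint

# heatmaps = SRPose()(torch.randn(1, 1, 48, 64))  # -> shape (1, 13, 192, 256)
```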

    Face RGB-D Data Acquisition System Architecture for 3D Face Identification Technology

    The three-dimensional approach to face identification technology has gained prominence as a state-of-the-art breakthrough due to its ability to address currently developing issues in identification technology (illumination, deformation and pose variance). Consequently, this trend has been followed by the rapid development of three-dimensional face identification architectures, some of which, namely Microsoft Kinect and Intel RealSense, have become today's de facto standards because of their popularity. However, these architectures may not be accessible to all, owing to the limited customisation they allow as commercial products. This research proposes an architecture as an alternative to the pre-existing ones, one that allows users to fully customise the RGB-D data by involving open-source components, while also demanding less power. The architecture integrates the Microsoft LifeCam and the Structure Sensor as input components, together with open-source libraries, namely OpenCV and the Point Cloud Library (PCL). The results show that the proposed architecture can successfully perform the intended tasks, such as extracting face RGB-D data and selecting the region of interest in the face area.
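    The paper's pipeline builds on OpenCV and PCL; purely to illustrate the underlying geometry, the NumPy sketch below back-projects a depth frame into an organized point cloud with the standard pinhole model and crops a face region of interest. The intrinsics and bounding box are placeholder assumptions, not the sensors' calibrated values.

```python
# Hedged sketch: depth back-projection and face ROI selection.
import numpy as np

def depth_to_pointcloud(depth_m, fx=570.0, fy=570.0, cx=320.0, cy=240.0):
    """depth_m: HxW depth image in metres (e.g. from a Structure Sensor)."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx          # standard pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)   # HxWx3 organized point cloud

def face_roi(cloud, rgb, x0, y0, x1, y1):
    """Select the RGB-D face region given a face bounding box in pixels."""
    return cloud[y0:y1, x0:x1], rgb[y0:y1, x0:x1]
```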

    NIRFaceNet: A Convolutional Neural Network for Near-Infrared Face Identification

    Near-infrared (NIR) face recognition has attracted increasing attention because of its advantage of illumination invariance. However, traditional face recognition methods based on NIR are designed for and tested in cooperative-user applications. In this paper, we present a convolutional neural network (CNN) for NIR face recognition (specifically face identification) in non-cooperative-user applications. The proposed NIRFaceNet is modified from GoogLeNet, but has a more compact structure designed specifically for the Chinese Academy of Sciences Institute of Automation (CASIA) NIR database and can achieve higher identification rates with less training time and less processing time. The experimental results demonstrate that NIRFaceNet has an overall advantage compared to other methods in the NIR face recognition domain when image blur and noise are present. The performance suggests that the proposed NIRFaceNet method may be more suitable for non-cooperative-user applications.
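    As a rough illustration of the "more compact GoogLeNet-style structure" idea, the sketch below implements a small Inception block of the kind such a network stacks; the channel counts are assumptions and do not reproduce the published NIRFaceNet configuration.

```python
# Hedged sketch: a compact Inception-style block in PyTorch.
import torch
import torch.nn as nn

class SmallInception(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 convolutions whose outputs are concatenated."""
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, 1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 16, 1), nn.ReLU(),
                                nn.Conv2d(16, 24, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, 8, 1), nn.ReLU(),
                                nn.Conv2d(8, 8, 5, padding=2))

    def forward(self, x):
        return torch.relu(torch.cat([self.b1(x), self.b3(x), self.b5(x)], 1))

# An identification net would stack a few such blocks, then pool and classify,
# e.g. nn.AdaptiveAvgPool2d(1) followed by nn.Linear(48, n_identities).
```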

    Face recognition enhancement through the use of depth maps and deep learning

    Face recognition, although a popular area of research for over a decade, still has many open research challenges. These include the recognition of poorly illuminated faces, recognition under pose variations, and the challenge of capturing sufficient training data to enable recognition under pose/viewpoint changes. With the appearance of cheap and effective multimodal image-capture hardware, such as the Microsoft Kinect device, new possibilities of research have been uncovered. One opportunity is to explore the potential use of the depth maps generated by the Kinect as an additional data source to recognize human faces under low levels of scene illumination, and to generate new images by creating a 3D model from the depth maps and visible-spectrum/RGB images, which can then be used to enhance face recognition accuracy by improving the training phase of a classification task.

    With the goal of enhancing face recognition, this research first investigated how depth maps, which are not affected by illumination, can improve face recognition when algorithms traditionally used in face recognition are applied. To this effect, a number of popular benchmark face recognition algorithms were tested. It is shown that algorithms based on LBP and Eigenfaces provide a high level of face recognition accuracy, owing to the significantly high resolution of the depth-map images generated by the latest version of the Kinect device. To complement this work, a novel algorithm named the Dense Feature Detector is presented and proven effective for face recognition using depth-map images, in particular under well-illuminated conditions.

    Another technique presented for enhancing face recognition is the reconstruction of face images at different angles from the data of one frontal RGB image and the corresponding depth map captured by the Kinect, using a fast and effective 3D object reconstruction technique. Using the Overfeat network, based on convolutional neural networks, for feature extraction and an SVM for classification, it is shown that a technically unlimited number of views can be created from the proposed 3D model, containing facial features as if they had been captured for real at similar angles. These images can therefore serve as real training images, removing the need to capture many examples of a facial image from different viewpoints to train the image classifier. The proposed 3D model thus saves a significant amount of the time and effort needed to capture sufficient training data, which is essential for recognizing the human face under variations of pose/viewpoint. The thesis argues that the same approach can also be used as a novel approach to face recognition, one that promises significantly high levels of face recognition accuracy based on depth images.

    Finally, following the recent trend of replacing traditional face recognition algorithms with deep learning networks, the thesis investigates the use of four popular networks, VGG-16, VGG-19, VGG-S and GoogLeNet, in depth-map-based face recognition, and proposes the effective use of transfer learning to enhance the performance of such deep learning networks.
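    As an illustration of the transfer-learning setup investigated in the final part, the sketch below adapts an ImageNet-pretrained VGG-16 from torchvision for depth-map face identification by freezing the convolutional features and replacing the classifier head. The freezing policy and head size are assumptions, not the thesis's exact protocol.

```python
# Hedged sketch: transfer learning with a pretrained VGG-16 for depth maps.
import torch.nn as nn
from torchvision import models

def make_depth_face_net(n_subjects):
    net = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for p in net.features.parameters():
        p.requires_grad = False                      # keep pretrained filters
    net.classifier[6] = nn.Linear(4096, n_subjects)  # new task-specific head
    return net

# Depth maps are single-channel, so replicate to three channels first,
# e.g. depth3 = depth.repeat(1, 3, 1, 1) for a (N, 1, 224, 224) batch.
```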