14 research outputs found

    Frontal Face Image Recognition Using Hierarchical Clustering Based on Deep Learning Inception V3

    Face image recognition has been one of the most actively studied research topics of the past decade; one point of interest is that face images are highly distinctive, which makes them a key element in security applications. This study introduces a combination of methods aimed at achieving high accuracy and precision. The frontal face images used were taken from the open dataset of the University of York (UoY), available through their website. These BMP files are first converted to JPG. The method uses the Inception V3 deep learning model to extract 2048 features from each frontal face image. Each resulting feature vector is then labelled using distance and cosine-similarity measures, and the frontal face images are finally recognised by classification through hierarchical clustering. In experiments involving 400 face images (40 subjects, each with 10 frontal poses), the method achieved accuracy and precision above 99%.
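The pipeline described above (Inception V3 features, then cosine-similarity comparison, then hierarchical clustering) can be sketched as follows. This is a minimal illustration in which random vectors stand in for the 2048-dimensional Inception V3 features; the feature extraction itself is omitted, and the group sizes are toy values rather than the paper's 40 subjects with 10 poses.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

# Stand-in for Inception V3 pooled features: 4 "people" x 3 "poses" each,
# with every person's poses scattered around a shared 2048-d prototype.
prototypes = rng.normal(size=(4, 2048))
features = np.vstack([p + 0.05 * rng.normal(size=(3, 2048)) for p in prototypes])

# Cosine distance matrix, then average-linkage hierarchical clustering.
dist = pdist(features, metric="cosine")
tree = linkage(dist, method="average")
labels = fcluster(tree, t=4, criterion="maxclust")
print(labels)  # rows 0-2, 3-5, 6-8, 9-11 each share one cluster label
```

Cutting the dendrogram at four clusters recovers the four identities because within-person cosine distances are far smaller than between-person ones.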

    Bosphorus database for 3d face analysis

    A new 3D face database that includes a rich set of expressions, systematic variation of poses and different types of occlusions is presented in this paper. This database is unique in three aspects: i) the facial expressions are composed of a judiciously selected subset of Action Units as well as the six basic emotions, and many actors/actresses are incorporated to obtain more realistic expression data; ii) a rich set of head pose variations is available; and iii) different types of face occlusions are included. Hence, this new database can be a very valuable resource for the development and evaluation of algorithms for face recognition under adverse conditions and facial expression analysis, as well as for facial expression synthesis.

    Automated Students Attendance System

    The Automated Students' Attendance System is a system that takes the attendance of students in a class automatically. The system aims to improve the current attendance process, which is done manually. This work presents a computerized automated students' attendance system that applies genetic algorithms to face recognition. The extraction of the face template, particularly the T-zone (symmetric between the eyes, nose and mouth), is performed by face detection using specific HSV colour-space ranges followed by template matching. Two types of templates are used: one based on edge detection and another on the intensity plane in the YIQ colour space. Face recognition with genetic algorithms is then performed to achieve an automated students' attendance system. With such a system in place, the occurrence of truancy could be reduced tremendously.
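The HSV-range detection stage can be sketched as follows: pixels are kept when all three HSV channels fall inside skin-like ranges. The threshold ranges below are illustrative assumptions for this sketch, not the calibrated values used in the work.

```python
import colorsys
import numpy as np

def skin_mask(rgb, h_range=(0.0, 0.14), s_range=(0.2, 0.7), v_range=(0.35, 1.0)):
    """Boolean mask of pixels whose HSV values fall inside the given ranges.
    The ranges are illustrative stand-ins, not the paper's calibrated values."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            hh, ss, vv = colorsys.rgb_to_hsv(*(rgb[i, j] / 255.0))
            mask[i, j] = (h_range[0] <= hh <= h_range[1]
                          and s_range[0] <= ss <= s_range[1]
                          and v_range[0] <= vv <= v_range[1])
    return mask

img = np.zeros((2, 2, 3), dtype=float)
img[0, 0] = [220, 160, 120]   # skin-like tone: inside all three ranges
img[1, 1] = [40, 200, 60]     # green: hue falls outside the range
mask = skin_mask(img)
print(mask)  # only the [0, 0] pixel is True
```

In the full system, this mask would localise candidate face regions before the edge and YIQ-intensity template-matching steps.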

    State-of-the-Art in 3D Face Reconstruction from a Single RGB Image

    Since diverse and complex emotions must be expressed through different facial deformations and appearances, facial animation has become a serious, ongoing challenge for the computer animation industry. Face reconstruction techniques based on 3D morphable face models and deep learning offer an effective way to reuse existing databases and create believable animation of new characters from images or videos in seconds, greatly reducing manual effort and time. In this paper, we review the databases and state-of-the-art methods for 3D face reconstruction from a single RGB image. First, we classify 3D reconstruction methods into three categories and review each of them: Shape-from-Shading (SFS), 3D Morphable Face Model (3DMM), and Deep Learning (DL) based 3D face reconstruction. Next, we introduce existing 2D and 3D facial databases. After that, we review 10 methods of deep-learning-based 3D face reconstruction and evaluate four representative ones among them. Finally, we draw conclusions and discuss future research directions.
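The 3DMM category rests on a simple linear model: a face shape is the mean shape plus a linear combination of basis shapes, so fitting to full 3D observations reduces to least squares. The sizes and data below are toy stand-ins, not a real morphable model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vertices, n_components = 500, 10

# Toy stand-ins for a 3DMM: a mean shape and an orthonormal shape basis
# (real models learn these from registered 3D face scans via PCA).
mean_shape = rng.normal(size=3 * n_vertices)
basis, _ = np.linalg.qr(rng.normal(size=(3 * n_vertices, n_components)))

# A 3DMM generates a face as: shape = mean + basis @ coefficients.
true_coeffs = rng.normal(size=n_components)
observed = mean_shape + basis @ true_coeffs

# With full 3D observations, "fitting" the model is a least-squares solve.
recovered, *_ = np.linalg.lstsq(basis, observed - mean_shape, rcond=None)
print(np.allclose(recovered, true_coeffs))  # True
```

Fitting to a single RGB image is much harder, since a camera projection and an illumination model enter the residual; that is where the SFS and DL approaches surveyed here come in.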

    From 3D Point Clouds to Pose-Normalised Depth Maps

    We consider the problem of generating either pairwise-aligned or pose-normalised depth maps from noisy 3D point clouds in relatively unrestricted poses. Our system is deployed in a 3D face alignment application and consists of four stages: (i) data filtering; (ii) nose tip identification and sub-vertex localisation; (iii) computation of the (relative) face orientation; (iv) generation of either a pose-aligned or a pose-normalised depth map. We generate an implicit radial basis function (RBF) model of the facial surface, which is employed within all four stages of the process. For example, in stage (ii), construction of novel invariant features is based on sampling this RBF over a set of concentric spheres to give a spherically-sampled RBF (SSR) shape histogram. In stage (iii), a second novel descriptor, called an isoradius contour curvature signal, is defined, which allows rotational alignment to be determined using a simple process of 1D correlation. We test our system on both the University of York (UoY) 3D face dataset and the Face Recognition Grand Challenge (FRGC) 3D data. For the more challenging UoY data, our SSR descriptors significantly outperform three variants of spin images, successfully identifying nose vertices at a rate of 99.6%. Nose localisation performance on the higher-quality FRGC data, which has only small pose variations, is 99.9%. Our best system successfully normalises the pose of 3D faces at rates of 99.1% (UoY data) and 99.6% (FRGC data).
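Stage (iii)'s idea of recovering a rotational alignment by 1D correlation of a circular signal can be sketched as follows. The signal here is synthetic, standing in for an actual isoradius contour curvature signal; FFT-based circular cross-correlation finds the cyclic shift, i.e. the rotation angle, that best aligns two such signals.

```python
import numpy as np

def circular_shift_estimate(sig_a, sig_b):
    """Estimate the cyclic shift aligning sig_b to sig_a via FFT cross-correlation."""
    corr = np.fft.ifft(np.fft.fft(sig_a) * np.conj(np.fft.fft(sig_b))).real
    return int(np.argmax(corr))

# Toy isoradius-style signal sampled at 360 one-degree steps around a contour.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
signal = np.cos(3 * theta) + 0.5 * np.sin(5 * theta)
rotated = np.roll(signal, 42)   # the "unknown" rotation: 42 degrees

print(circular_shift_estimate(rotated, signal))  # 42
```

Because the correlation is computed in the frequency domain, all candidate rotations are scored in O(n log n) rather than by testing each shift explicitly.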

    Expression Recognition Based on Facial Anatomy

    Get PDF
    The geometric approaches to facial expression recognition commonly focus on the displacement of feature points that are selected by the researchers, or on the action units defined by the facial action coding system (FACS). In both approaches the feature points are carefully located on the lips, nose and forehead, where an expression is observed at its full strength. Since these regions are under the influence of multiple muscles, distinct muscular activities can result in similar displacements of the feature points; hence, analysis of complex expressions through a set of specific feature points is quite difficult. In this project we propose to extract facial muscle activity levels through multiple points distributed over the muscular regions of influence. The proposed algorithm consists of: (1) semi-automatic customization of the face model to a subject; (2) identification and tracking of facial features that reside in the region of influence of a muscle; (3) estimation of head orientation and alignment of the face model with the observed face; (4) estimation of the relative displacements of vertices that produce facial expressions; (5) solving vertex displacements to obtain muscle forces; and (6) classification of facial expression with the muscle-force features. Our algorithm requires manual intervention only in the stage of model customization. We demonstrate the representative power of the proposed muscle-based features on classification problems of seven basic and subtle expressions. The best performance on the classification of basic expressions was 76%, obtained with an SVM classifier; this result is close to human performance in facial expression recognition. Our best performance for the classification of seven subtle expressions was 55%, again with an SVM. This figure implies that muscle-based features are good candidates for involuntary expressions, which are often subtle and instantaneous. Muscle forces can be considered the ultimate basis functions that anatomically compose all expressions. Increased reliability in the extraction of muscle forces will enable detection and classification of subtle and complex expressions with higher precision. Moreover, the proposed algorithm may be used to reveal unknown relations between emotions and expressions, as it is not limited to a predefined set of heuristic features. (Supported by TÜBİTAK.)
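Stage (5), solving vertex displacements for muscle forces, is naturally posed as a linear least-squares problem once each muscle's per-vertex influence is known. The influence matrix below is a random stand-in for the one a physically based face model would provide.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vertices, n_muscles = 60, 4

# Hypothetical influence matrix: column m holds the 2D vertex displacements
# produced by unit activation of muscle m (a face model would supply this).
influence = rng.normal(size=(2 * n_vertices, n_muscles))

# Synthetic "observed" displacements from known activations, plus tracking noise.
true_forces = np.array([0.8, 0.0, 0.3, 0.5])
displacements = influence @ true_forces + 0.01 * rng.normal(size=2 * n_vertices)

# Solve displacements for muscle forces in the least-squares sense.
forces, *_ = np.linalg.lstsq(influence, displacements, rcond=None)
print(np.round(forces, 2))
```

With many more tracked points than muscles, the system is heavily overdetermined, which is what makes the recovered activations robust to per-point tracking noise.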

    Recognition of compound facial expressions in 3D images: forced environment vs. spontaneous environment

    Advisor: Profa. Dra. Olga Regina Pereira Bellon. Master's dissertation, Universidade Federal do Paraná, Setor de Ciências Exatas, Graduate Program in Informatics. Defended: Curitiba, 16/12/2017. Includes references: p. 56-60. Area of concentration: Computer Science. This research investigates Compound Facial Expressions (EFCs) in 3D images captured in two domains: forced and spontaneous. The work explores a modern categorization of facial expressions, which differs from the basic facial expressions in that each category is constructed by combining two basic categories of emotion. The investigation uses 3D images because of their intrinsic advantages: they do not present problems due to variations in pose, lighting and other changes in facial appearance. The research considers both forced (the subject is instructed to perform the expression) and spontaneous (the subject produces the expression in response to stimuli) capture domains, with the intention of comparing their behaviour with respect to EFC recognition, since they differ in many dimensions, including complexity, timing and intensity. Finally, a method for EFC recognition is proposed. The method represents a new application of existing detectors of facial muscle movements; these facial movements are denoted in the Facial Action Coding System (FACS) as Action Units (AUs). Accordingly, 3D facial AU detectors are developed based on Local Depth Binary Patterns (LDBP). The method was then applied to two public databases of 3D images: Bosphorus (forced domain) and BP4D-Spontaneous (spontaneous domain). The developed method does not differentiate EFCs that present the same AU configuration ("sadly disgusted", "appalled" and "hateful"), so these expressions are treated as a "special case". Thus, 14 EFCs are considered, plus the "special case" and non-EFC images. The results confirm the existence of EFCs in 3D images, from which some characteristics were exploited. In addition, the spontaneous environment performed better at recognizing EFCs, both with the AUs annotated in the database and with the automatically detected AUs, recognizing more cases of EFCs and with better performance. To the best of our knowledge, this is the first time EFCs have been investigated in 3D images. Keywords: compound facial expressions, FACS, AU detection, forced domain, spontaneous domain.
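A local depth binary pattern in the spirit of LDBP can be sketched as follows: each pixel of a depth map is encoded by comparing its eight neighbours against the centre, one bit per neighbour. This is an illustrative variant for intuition, not the dissertation's exact operator.

```python
import numpy as np

def ldbp_codes(depth):
    """8-neighbour binary codes over a depth image: each bit records whether
    a neighbour is deeper than the centre pixel (borders are skipped)."""
    h, w = depth.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = depth[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neighbour > depth[1:h - 1, 1:w - 1]).astype(np.uint8) << bit)
    return codes

depth = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]], dtype=float)
codes = ldbp_codes(depth)
print(codes)  # [[120]]: bits 3-6 set, since neighbours 6, 9, 8, 7 exceed 5
```

An AU detector would then histogram these codes over facial regions and feed the histograms to a classifier, so the feature captures local surface geometry rather than raw depth values.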

    3D Morphable Face Models: Past, Present and Future

    No full text
    In this paper, we provide a detailed survey of 3D Morphable Face Models over the 20 years since they were first proposed. The challenges in building and applying these models, namely capture, modeling, image formation, and image analysis, are still active research topics, and we review the state of the art in each of these areas. We also look ahead, identifying unsolved challenges, proposing directions for future research and highlighting the broad range of current and future applications.