309 research outputs found

    Analysis of the effects of higher bits-per-cell iris recognition

    In this paper, a modification of the iris recognition approach by Daugman (2004) is presented. Traditionally, an iris template consists of a matrix of cells, each holding a two-bit value extracted from a normalized iris image. An existing iris recognition system was modified to allow extraction of an arbitrary number of bits per cell. This paper explains how the original iris recognition system works and how it was modified. A statistical analysis of the impact on recognition quality is also presented; the impact turned out to be limited.
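
    As a rough illustration of the modification described above, the sketch below quantizes Gabor phase responses into an arbitrary number of bits per cell and compares two resulting codes with a fractional Hamming distance; Daugman's original scheme corresponds to bits_per_cell=2. The helper names and the sector-based quantization are assumptions for illustration, not the code of the system modified in the paper.

        import numpy as np

        def phase_to_bits(phase, bits_per_cell=2):
            # phase: 2D array of Gabor phase angles from a normalized iris image.
            # Map each angle in [0, 2*pi) to one of 2**bits_per_cell sectors and
            # unpack the sector index into its binary representation.
            levels = 2 ** bits_per_cell
            sectors = np.floor(np.mod(phase, 2 * np.pi) / (2 * np.pi) * levels).astype(np.uint8)
            bits = (sectors[..., None] >> np.arange(bits_per_cell)) & 1
            return bits.reshape(phase.shape[0], -1).astype(np.uint8)

        def hamming_distance(code_a, code_b):
            # Fractional Hamming distance between two binary iris codes of equal shape
            # (occlusion masks and rotation compensation are omitted for brevity).
            return np.mean(code_a != code_b)

    Raising bits_per_cell lengthens the code linearly while the matching step remains an ordinary Hamming distance.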

    Classification of Affective States in the Electroencephalogram

    The goal of the present work is to investigate the feasibility of automatic affect recognition from the electroencephalogram (EEG) in different populations, with a focus on feature validation and machine learning, in order to augment brain-computer interface systems with the ability to identify and communicate the users’ inner affective state. Two in-depth studies on affect induction and classification are presented. In the first study, an auditory emotion induction paradigm that easily translates to a clinical population is introduced. Group classification significantly above chance is achieved using time-domain features for unpleasant vs. pleasant conditions. In the second study, data from an emotion induction paradigm for preverbal infants are investigated. Employing the machine learning framework, cross-participant classification of pleasant vs. neutral conditions is significantly above chance with balanced training data. Furthermore, the machine learning framework is applied to the publicly available physiological affect dataset DEAP for comparison of results. Based on spectral frequency features, the framework introduced here outperforms the results published by the authors of DEAP. The results strengthen the vision of a feasible BCI that is able to identify and communicate the users’ affective state.
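
    The cross-participant classification described above can be pictured with a minimal sketch like the one below, which computes spectral band-power features with SciPy and evaluates a linear classifier with leave-one-participant-out splits in scikit-learn. The band choices, classifier, and helper names are assumptions for illustration; they are not the thesis's actual machine learning framework, and the class_weight option merely stands in for the balancing of training data mentioned above.

        import numpy as np
        from scipy.signal import welch
        from sklearn.model_selection import LeaveOneGroupOut
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def band_power(epochs, fs, band):
            # epochs: (n_trials, n_channels, n_samples). Mean PSD power per channel in `band`.
            freqs, psd = welch(epochs, fs=fs, nperseg=fs * 2, axis=-1)
            mask = (freqs >= band[0]) & (freqs < band[1])
            return psd[..., mask].mean(axis=-1)

        def cross_participant_accuracy(epochs, labels, participant_ids, fs=256):
            # Concatenate theta, alpha and beta band power over channels, then score a
            # linear SVM with leave-one-participant-out cross-validation.
            feats = np.concatenate([band_power(epochs, fs, b) for b in [(4, 8), (8, 13), (13, 30)]], axis=1)
            clf = make_pipeline(StandardScaler(), SVC(kernel="linear", class_weight="balanced"))
            scores = []
            for train, test in LeaveOneGroupOut().split(feats, labels, participant_ids):
                clf.fit(feats[train], labels[train])
                scores.append(clf.score(feats[test], labels[test]))
            return np.mean(scores)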

    The Fourier Spectrum of Face Images in Photography and Art and Its Influence on Face Perception

    Aesthetic paintings have a slope of -2 in the radially averaged Fourier spectrum (1/f^2 characteristics), similar to natural scenes. We investigated how artists depict faces, which have a different slope. For this purpose, 300 aesthetic painted portraits by renowned artists were digitized. The slopes of the portraits and of face photographs were determined and compared. Our first study showed that aesthetic painted portraits have 1/f^2 characteristics similar to those of natural scenes and, in this respect, differ clearly from face photographs. We found evidence that artists adapt their depictions to the coding mechanisms of the visual system rather than reproducing the properties that the objects naturally possess. By manipulating the slope of face photographs, I was able to change the relative proportion of coarse and fine structures in the image. We investigated how the learning and recognition of unfamiliar faces was affected by manipulating the 1/f^p characteristics of the Fourier spectrum. We created two groups of face photographs with altered 1/f^p characteristics: faces with a steeper slope, and faces with a shallower slope and 1/f^2 characteristics. In a face-learning experiment, behavioural data and EEG correlates of face perception were examined. Photographs with a steep slope were learned less well, with slower reaction times and reduced neurophysiological correlates of face perception. In contrast, face photographs with a shallower slope, similar to that of painted portraits and natural scenes, were easier to learn and showed larger neurophysiological correlates of face perception.
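
    The slope measurement described above can be illustrated with a short sketch that computes the radially averaged power spectrum of a grayscale image and fits its log-log slope, which is roughly -2 for natural scenes and the painted portraits discussed here. This is a simplified illustration of the general method (no windowing, no restriction of the fitted frequency range), not the exact analysis used in the studies.

        import numpy as np

        def fourier_slope(image):
            # Radially averaged power spectrum of a grayscale image and the slope p
            # of a 1/f**p fit, estimated by linear regression in log-log coordinates.
            power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
            cy, cx = power.shape[0] // 2, power.shape[1] // 2
            y, x = np.indices(power.shape)
            radius = np.hypot(y - cy, x - cx).astype(int)
            counts = np.bincount(radius.ravel())
            radial_mean = np.bincount(radius.ravel(), weights=power.ravel()) / np.maximum(counts, 1)
            freqs = np.arange(1, min(cy, cx))   # skip the DC component, stay inside the image
            slope, _ = np.polyfit(np.log(freqs), np.log(radial_mean[freqs]), 1)
            return slope

    Flattening or steepening this slope changes the relative weight of coarse and fine structure in the image, which is the manipulation used in the face-learning experiment above.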

    Sonar sensor interpretation for ectogeneous robots

    We have developed four generations of sonar scanning systems to automatically interpret the surrounding environment. The first two are stationary 3D air-coupled ultrasound scanning systems and the last two are packaged as sensor heads for mobile robots. Template matching analysis is applied to distinguish simple indoor objects. It is conducted by comparing the tested echo with reference echoes. Important features are then extracted and plotted in a phase plane. The computer then analyzes them and automatically selects the best match for the tested echoes. For cylindrical objects outdoors, an algorithm is presented to distinguish trees from smooth circular poles based on analysis of backscattered sonar echoes. The echo data are acquired by a mobile robot with a 3D air-coupled ultrasound scanning system packaged as its sensor head. Four major steps are conducted. The final Average Asymmetry vs. Average Squared Euclidean Distance phase plane is segmented to tell a tree from a pole by the location of the data points for the objects of interest. For extended objects outdoors, we successfully distinguished seven objects on campus by taking a sequence of scans along each object, obtaining the corresponding backscatter vs. scan angle plots, performing deformable template matching, extracting interesting feature vectors and then categorizing them with a hyperplane. We have also successfully taught the robot to distinguish three pairs of objects outdoors. Multiple scans are conducted at different distances. A two-step feature extraction is conducted based on the amplitude vs. scan angle plots. The final Slope1 vs. Slope2 phase plane not only separates the rectangular objects from the corresponding cylindrical
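
    The phase-plane idea above can be sketched as follows: each object contributes an averaged pair of echo features, and the resulting 2-D points are what get segmented into classes such as tree vs. pole. The particular feature definitions below (a peak-split energy asymmetry and a mean squared difference to a reference echo) and the function names are assumptions chosen for illustration, since the abstract does not define them.

        import numpy as np

        def squared_euclidean_distance(echo, reference):
            # Mean squared sample-wise difference between a tested echo and a
            # reference echo, after crude amplitude normalisation.
            echo = echo / np.max(np.abs(echo))
            reference = reference / np.max(np.abs(reference))
            return np.mean((echo - reference) ** 2)

        def asymmetry(echo):
            # Assumed asymmetry measure: difference between the echo energy before
            # and after its strongest sample, normalised by the total energy.
            peak = np.argmax(np.abs(echo))
            energy = np.sum(echo ** 2)
            return (np.sum(echo[:peak] ** 2) - np.sum(echo[peak:] ** 2)) / energy

        def phase_plane_point(echoes, reference):
            # Average both features over the echoes from one object, giving a single
            # point in the Average Asymmetry vs. Average Squared Euclidean Distance plane.
            return (np.mean([asymmetry(e) for e in echoes]),
                    np.mean([squared_euclidean_distance(e, reference) for e in echoes]))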

    Face adaptation: behavioural and electrophysiological studies on the perception of eye gaze and gender

    Whereas the investigation of perceptual aftereffects has a very long tradition in studies on low-level vision, the report and analysis of face-related high-level aftereffects is a very recent line of research. Webster et al. (2004, Nature) first showed a visual aftereffect on the perception of face gender by demonstrating that adaptation to male faces biased the subsequent classification of androgynous faces towards female gender. Similar adaptation effects have also been observed for one of the most important social signals: human eye gaze. Jenkins et al. (2006, Psychological Science) found that adaptation to gaze in one direction virtually eliminated participants’ ability to perceive smaller gaze deviations in the same direction. The present thesis further examined these high-level face aftereffects by determining the temporal characteristics of gaze adaptation and by analysing the neural correlates of eye gaze and gender adaptation. A behavioural study on the temporal decay of gaze adaptation effects shed further light on their basic characteristics: not only was the aftereffect surprisingly long-lasting, but its exponential decay revealed remarkable similarity with the time course of low-level adaptation effects. Further, in a series of event-related potential (ERP) studies it was found that the N170 was only marginally affected by both eye gaze and gender adaptation, whereas pronounced effects of both kinds of adaptation emerged in the P3 component, with smaller amplitudes in response to test stimuli similar to the adaptation condition. Finally, gaze adaptation was found to affect ERPs in an earlier time interval (~250-350 ms), which appeared to be sensitive to the discrimination between direct vs. averted gaze even when this was only an illusory percept induced by adaptation. Together, these studies extend previous knowledge of the temporal parameters and the neural correlates of high-level face adaptation.
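
    The exponential decay mentioned above is the kind of relationship that can be summarised with a simple curve fit; the sketch below fits a * exp(-t / tau) + c to aftereffect magnitudes measured at different delays, with tau characterising how long the gaze aftereffect persists. The function and parameter names are illustrative assumptions, not the analysis reported in the thesis.

        import numpy as np
        from scipy.optimize import curve_fit

        def fit_aftereffect_decay(delays, effect_sizes):
            # Fit an exponential decay to aftereffect magnitudes measured at
            # different test delays; tau summarises the persistence of the effect.
            decay = lambda t, a, tau, c: a * np.exp(-t / tau) + c
            p0 = (effect_sizes[0] - effect_sizes[-1], np.median(delays), effect_sizes[-1])
            params, _ = curve_fit(decay, delays, effect_sizes, p0=p0, maxfev=10000)
            return dict(zip(("a", "tau", "c"), params))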

    The Development And Application Of A Statistical Shape Model Of The Human Craniofacial Skeleton

    Biomechanical investigations involving the characterization of biomaterials or improvement of implant design often employ finite element (FE) analysis. However, the contemporary method of developing a FE mesh from computed tomography scans involves much manual intervention and can be a tedious process. Researchers will often focus their efforts on creating a single highly validated FE model at the expense of incorporating variability of anatomical geometry and material properties, thus limiting the applicability of their findings. The goal of this thesis was to address this issue through the use of a statistical shape model (SSM). An SSM is a probabilistic description of the variation in the shape of a given class of object. (Additional scalar data, such as an elastic constant, can also be incorporated into the model.) A sample (i.e. a training set) of unique objects of the same class is discretized using a set of corresponding nodes, and the main modes of shape variation within that shape class are discovered via principal component analysis. By combining the principal components in different linear combinations, new shape instances are created, each with its own unique geometry while retaining the characteristics of its shape class. In this thesis, FE models of the human craniofacial skeleton (CFS) were first validated to establish their viability. A mesh morphing procedure was then developed to map one mesh onto the geometry of 22 other CFS models, forming a training set for an SSM of the CFS. After verifying that FE results derived from morphed meshes were no different from those obtained using meshes created with contemporary methods, an SSM of the human CFS was created and 1000 CFS FE meshes produced. These meshes were found to accurately describe the geometric variation in the human population and were used in a Monte Carlo analysis of facial fracture, which found that past studies attempting to characterize the fracture probability of the zygomatic bone are overly conservative.
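
    The core of the SSM described above is an ordinary principal component analysis of corresponding node coordinates; a minimal sketch is given below, in which new instances are drawn as the mean shape plus a random linear combination of the leading modes. The function names and the choice of 10 modes are assumptions for illustration, not the thesis's actual pipeline.

        import numpy as np

        def build_ssm(training_shapes):
            # training_shapes: (n_subjects, n_nodes * 3) matrix of corresponding node
            # coordinates (extra scalars such as elastic constants can be appended).
            mean_shape = training_shapes.mean(axis=0)
            centred = training_shapes - mean_shape
            # Principal components of the training set via the thin SVD.
            u, s, vt = np.linalg.svd(centred, full_matrices=False)
            eigenvalues = s ** 2 / (len(training_shapes) - 1)
            return mean_shape, vt, eigenvalues

        def sample_instance(mean_shape, components, eigenvalues, rng, n_modes=10):
            # New shape instance: the mean plus a random linear combination of the
            # leading modes, with weights in units of standard deviations sqrt(lambda_i).
            b = rng.standard_normal(n_modes) * np.sqrt(eigenvalues[:n_modes])
            return mean_shape + b @ components[:n_modes]

    Calling sample_instance repeatedly (e.g. with rng = np.random.default_rng()) corresponds to generating a population of meshes such as the 1000 CFS FE meshes mentioned above.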

    3D face recognition using photometric stereo

    Automatic face recognition has been an active research area for the last four decades. This thesis explores innovative bio-inspired concepts aimed at improved face recognition using surface normals. New directions in salient data representation are explored using data captured via a photometric stereo method from the University of the West of England’s “Photoface” device. Accuracy assessments demonstrate the advantage of the capture format and the synergy offered by near infrared light sources in achieving more accurate results than under conventional visible light. Two 3D face databases have been created as part of the thesis – the publicly available Photoface database which contains 3187 images of 453 subjects and the 3DE-VISIR dataset which contains 363 images of 115 people with different expressions captured simultaneously under near infrared and visible light. The Photoface database is believed to be the first to capture naturalistic 3D face models. Subsets of these databases are then used to show the results of experiments inspired by the human visual system. Experimental results show that optimal recognition rates are achieved using surprisingly low resolution of only 10x10 pixels on surface normal data, which corresponds to the spatial frequency range of optimal human performance. Motivated by the observed increase in recognition speed and accuracy that occurs in humans when faces are caricatured, novel interpretations of caricaturing using outlying data and pixel locations with high variance show that performance remains disproportionately high when up to 90% of the data has been discarded. These direct methods of dimensionality reduction have useful implications for the storage and processing requirements for commercial face recognition systems. The novel variance approach is extended to recognise positive expressions with 90% accuracy, which has useful implications for human-computer interaction as well as ensuring that a subject has the correct expression prior to recognition. Furthermore, the subject recognition rate is improved by removing those pixels which encode expression. Finally, preliminary work into feature detection on surface normals by extending Haar-like features is presented, which is also shown to be useful for correcting the pose of the head as part of a fully operational device. The system operates with an accuracy of 98.65% at a false acceptance rate of only 0.01 on front facing heads with neutral expressions. The work has shown how new avenues of enquiry inspired by our observation of the human visual system can offer useful advantages towards achieving more robust autonomous computer-based facial recognition.
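
    The variance-based dimensionality reduction described above can be sketched as follows: pixel locations whose surface normals vary most across the gallery are kept, the rest are discarded, and matching proceeds on the retained pixels only. The helper names, the 1-nearest-neighbour matcher and the keep_fraction parameter are assumptions for illustration; the abstract reports that performance stays disproportionately high even when up to 90% of the data is discarded.

        import numpy as np

        def high_variance_mask(normal_maps, keep_fraction=0.1):
            # normal_maps: (n_images, h, w, 3) surface normals. Keep only the pixel
            # locations whose normals vary most across the gallery.
            variance = normal_maps.var(axis=0).sum(axis=-1)          # (h, w)
            threshold = np.quantile(variance, 1.0 - keep_fraction)
            return variance >= threshold

        def recognise(probe, gallery, gallery_ids, mask):
            # 1-nearest-neighbour matching on the retained pixel locations only.
            probe_vec = probe[mask].ravel()
            gallery_vecs = gallery[:, mask].reshape(len(gallery), -1)
            distances = np.linalg.norm(gallery_vecs - probe_vec, axis=1)
            return gallery_ids[int(np.argmin(distances))]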

    Fear Classification using Affective Computing with Physiological Information and Smart-Wearables

    International Mention in the doctoral degree.
    Among the 17 Sustainable Development Goals proposed within the 2030 Agenda and adopted by all of the United Nations member states, the fifth SDG is a call for action to effectively turn gender equality into a fundamental human right and an essential foundation for a better world. It includes the eradication of all types of violence against women. From the technological perspective, the range of available solutions intended to prevent this social problem is very limited. Moreover, most of the solutions are based on a panic-button approach, leaving aside the usage and integration of current state-of-the-art technologies, such as the Internet of Things (IoT), affective computing, cyber-physical systems, and smart sensors. Thus, the main purpose of this research is to provide new insight into the design and development of tools to prevent and combat Gender-based Violence risky situations and even aggressions, from a technological perspective, but without leaving aside the different sociological considerations directly related to the problem. To achieve such an objective, we rely on the application of affective computing from a realist point of view, i.e. targeting the generation of systems and tools capable of being implemented and used nowadays or within an achievable time-frame. This pragmatic vision is channelled through: 1) an exhaustive study of the existing technological tools and mechanisms oriented to the fight against Gender-based Violence, 2) the proposal of a new smart-wearable system intended to deal with some of the technological limitations encountered so far, 3) a novel fear-related emotion classification approach to disentangle the relation between emotions and physiology, and 4) the definition and release of a new multimodal dataset for emotion recognition in women.
    Firstly, different fear classification systems using a reduced set of physiological signals are explored and designed. This is done by employing open datasets together with a combination of time, frequency and non-linear domain techniques. The design process is framed by trade-offs between physiological considerations and embedded capabilities; the latter are of paramount importance due to the edge-computing focus of this research. Two results are highlighted in this first task: the fear classification system designed on the DEAP dataset, which achieved an AUC of 81.60% and a Gmean of 81.55% on average for a subject-independent approach using only two physiological signals; and the fear classification system designed on the MAHNOB dataset, which achieved an AUC of 86.00% and a Gmean of 73.78% on average for a subject-independent approach using only three physiological signals and a Leave-One-Subject-Out configuration. A detailed comparison with other emotion recognition systems proposed in the literature is presented, which shows that the obtained metrics are in line with the state-of-the-art.
    Secondly, Bindi is presented. This is an end-to-end autonomous multimodal system leveraging affective IoT through auditory and physiological commercial off-the-shelf smart sensors, hierarchical multisensorial fusion, and a secured server architecture to combat Gender-based Violence by automatically detecting risky situations with a multimodal intelligence engine and then triggering a protection protocol. Specifically, this research focuses on the hardware and software design of one of the two edge-computing devices within Bindi: a bracelet integrating three physiological sensors, actuators, power-monitoring integrated circuits, and a System-on-Chip with wireless capabilities. Within this context, different embedded design space explorations are presented: embedded filtering evaluation, online physiological signal quality assessment, feature extraction, and power consumption analysis. The reported results of all these processes are successfully validated and, for some of them, even compared against standard physiological measurement equipment. Among the results regarding the embedded design and implementation of the Bindi bracelet, it should be highlighted that its low power consumption yields a battery life of approximately 40 hours when using a 500 mAh battery (an average draw of roughly 12.5 mA).
    Finally, the particularities of our use case and the scarcity of open multimodal datasets providing emotional immersive technology, a labelling methodology considering the gender perspective, a balanced stimuli distribution regarding the target emotions, and recovery processes based on the physiological signals of the volunteers to quantify and isolate the emotional activation between stimuli led us to the definition and elaboration of the Women and Emotion Multi-modal Affective Computing (WEMAC) dataset. This is a multimodal dataset in which 104 women who had never experienced Gender-based Violence viewed different emotion-related stimuli in a laboratory environment. The previous binary fear classification systems were improved and applied to this novel multimodal dataset. For instance, the proposed multimodal fear recognition system using this dataset reports up to 60.20% ACC and 67.59% F1-score. These values represent a competitive result in comparison with state-of-the-art systems dealing with similar multimodal use cases.
    In general, this PhD thesis has opened a new research line within the research group in which it was developed. Moreover, this work has established a solid base from which to expand knowledge and continue research targeting the generation of both mechanisms to help vulnerable groups and socially oriented technology.
    Doctoral Programme in Electrical, Electronic and Automation Engineering, Universidad Carlos III de Madrid. Examination committee: President: David Atienza Alonso; Secretary: Susana Patón Álvarez; Member: Eduardo de la Torre Arnan.
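
    The subject-independent evaluation scheme mentioned above can be pictured with a minimal sketch of a Leave-One-Subject-Out loop reporting AUC and Gmean, written here with scikit-learn. The classifier choice, the assumption of binary 0/1 labels, and the helper names are illustrative only; this is not Bindi's multimodal intelligence engine or the thesis's actual classifier.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import recall_score, roc_auc_score
        from sklearn.model_selection import LeaveOneGroupOut

        def loso_fear_evaluation(features, labels, subject_ids):
            # Leave-One-Subject-Out evaluation of a binary fear classifier on
            # features extracted from physiological signals (labels: 0 = no fear, 1 = fear).
            aucs, gmeans = [], []
            for train, test in LeaveOneGroupOut().split(features, labels, subject_ids):
                if len(np.unique(labels[test])) < 2:
                    continue                          # AUC is undefined for a single-class fold
                clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
                clf.fit(features[train], labels[train])
                scores = clf.predict_proba(features[test])[:, 1]
                preds = (scores >= 0.5).astype(int)
                sens = recall_score(labels[test], preds, pos_label=1)
                spec = recall_score(labels[test], preds, pos_label=0)
                aucs.append(roc_auc_score(labels[test], scores))
                gmeans.append(np.sqrt(sens * spec))   # geometric mean of sensitivity and specificity
            return float(np.mean(aucs)), float(np.mean(gmeans))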