5 research outputs found

    A computer based system to design expressive avatars

    Avatars are used in different contexts and situations: e-commerce, e-therapy, virtual worlds, videogames, collaborative online design... In these contexts, a well-designed avatar can improve the user experience, and the ability to control how an avatar conveys messages and emotions is crucial. In this work, a procedure is developed to design avatar faces capable of conveying to the observer the sensations most suitable for a given context. The proposed system is based on a combination of genetic algorithms and artificial neural networks whose training is based on human perceptual responses to a set of faces.
    Diego-Mas, J. A.; Alcaide Marzal, J. (2015). A computer based system to design expressive avatars. Computers in Human Behavior, 44:1-11. doi:10.1016/j.chb.2014.11.027
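    A minimal sketch of the general approach described in the abstract: a neural network trained on human perceptual ratings acts as the fitness function for a genetic algorithm that evolves avatar face parameters. All names, dimensions, hyperparameters and data below are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch: GA evolving face parameter vectors, scored by an NN surrogate of
# human perception. Training data here are random placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
N_PARAMS, POP_SIZE, GENERATIONS = 20, 60, 100   # assumed sizes

# Surrogate perceptual model: face parameters -> mean rating of how strongly
# the face conveys the target sensation (placeholder ratings).
X_train = rng.uniform(0.0, 1.0, size=(500, N_PARAMS))
y_train = rng.uniform(0.0, 1.0, size=500)
perceptual_model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                                random_state=0).fit(X_train, y_train)

def fitness(population):
    """Predicted strength of the target impression for each candidate face."""
    return perceptual_model.predict(population)

# Simple genetic algorithm: selection, uniform crossover, Gaussian mutation.
population = rng.uniform(0.0, 1.0, size=(POP_SIZE, N_PARAMS))
for _ in range(GENERATIONS):
    scores = fitness(population)
    parents = population[np.argsort(scores)[-POP_SIZE // 2:]]
    mothers = parents[rng.integers(len(parents), size=POP_SIZE)]
    fathers = parents[rng.integers(len(parents), size=POP_SIZE)]
    mask = rng.random((POP_SIZE, N_PARAMS)) < 0.5
    children = np.where(mask, mothers, fathers)
    children += rng.normal(0.0, 0.02, size=children.shape)
    population = np.clip(children, 0.0, 1.0)

best_face = population[np.argmax(fitness(population))]
print("Evolved face parameters:", np.round(best_face, 3))
```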

    Comparing virtual vs real faces expressing emotions in children with autism: An eye-tracking study

    Difficulties in processing emotional facial expressions are considered a central characteristic of children with autism spectrum condition (ASC). In addition, there is growing interest in the use of virtual avatars capable of expressing emotions as an intervention aimed at improving the social skills of these individuals. One potential benefit of avatars is that they could enhance facial recognition and guide attention; however, this aspect needs further investigation. The aim of our study is to assess differences in eye-gaze processes in children with ASC when they see avatar faces expressing emotions compared to real faces. Eye-tracking methodology was used to compare the performance of children with ASC between avatar and real faces. A repeated-measures general linear model was adopted to understand which characteristics of the stimuli could influence fixation times. Survival analysis was performed to understand differences in exploration behaviour between avatar and real faces. Differences in emotion recognition accuracy and in the number of fixations were evaluated through a paired t-test. Our results confirm that children with autism have a higher capacity to process and recognize emotions when these are presented by avatar faces. Children with autism are more attracted to the mouth or the eyes depending on the stimulus type (avatar or real) and the emotion expressed by the stimulus. They are also more attracted to avatar faces expressing negative emotions (anger and sadness), and to real faces expressing surprise; no differences were found for happiness. Finally, they show a higher degree of exploration of avatar faces. All these elements, such as interest in the avatar and reduced attention to the eyes, can inform the planning of an efficient intervention.
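    A minimal sketch of the paired comparison mentioned in the abstract: per-child emotion-recognition accuracy for avatar faces versus real faces compared with a paired t-test. The numbers below are invented placeholders for illustration only, not the study's data.

```python
# Paired t-test on per-child recognition accuracy (avatar vs. real faces).
import numpy as np
from scipy import stats

# One value per child with ASC: proportion of correctly recognised emotions.
accuracy_avatar = np.array([0.85, 0.70, 0.90, 0.65, 0.80, 0.75, 0.95, 0.60])
accuracy_real   = np.array([0.70, 0.60, 0.80, 0.55, 0.75, 0.65, 0.85, 0.50])

t_stat, p_value = stats.ttest_rel(accuracy_avatar, accuracy_real)
mean_diff = float(np.mean(accuracy_avatar - accuracy_real))
print(f"mean difference (avatar - real) = {mean_diff:.3f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```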

    Automatic classification of human facial features based on their appearance

    [EN] Classification or typology systems used to categorize different human body parts have existed for many years. Nevertheless, there are very few taxonomies of facial features. Ergonomics, forensic anthropology, crime prevention, new human-machine interaction systems and online activities, like e-commerce, e-learning, games, dating or social networks, are fields in which classifications of facial features are useful, for example, to create digital interlocutors that optimize the interactions between humans and machines. However, classifying isolated facial features is difficult for human observers: previous works reported low inter-observer and intra-observer agreement in the evaluation of facial features. This work presents a computer-based procedure to automatically classify facial features based on their global appearance. This procedure deals with the difficulties associated with classifying features using judgements from human observers, and facilitates the development of taxonomies of facial features. Taxonomies obtained through this procedure are presented for eyes, mouths and noses.
    Fuentes-Hurtado, F.; Diego-Mas, J. A.; Naranjo Ornedo, V.; Alcañiz Raya, M. L. (2019). Automatic classification of human facial features based on their appearance. PLoS ONE, 14(1):1-20. https://doi.org/10.1371/journal.pone.0211314
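    A minimal sketch of how classification "by global appearance" could work, assuming (as suggested by the related thesis abstract below) that cropped feature images are projected onto a low-dimensional appearance space and then clustered. The image data, component count and cluster count are placeholders, not the paper's actual pipeline or parameters.

```python
# Appearance-based grouping of facial-feature crops: PCA embedding + k-means.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder stack of aligned, grayscale nose crops (n_samples, height*width).
nose_crops = rng.random((300, 32 * 48))

# 1. Dimensionality reduction: eigen-features describing global appearance.
pca = PCA(n_components=20, random_state=0)
embedded = pca.fit_transform(nose_crops)

# 2. Clustering: each cluster becomes one class of the feature taxonomy.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
labels = kmeans.fit_predict(embedded)

# 3. The crop closest to each centroid can serve as that class's prototype.
prototypes = [int(np.argmin(np.linalg.norm(embedded - c, axis=1)))
              for c in kmeans.cluster_centers_]
print("Cluster sizes:", np.bincount(labels))
print("Prototype crop indices:", prototypes)
```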

    A system for modeling social traits in realistic faces with artificial intelligence

    Humans have a highly developed perceptual capacity to process faces and to extract information from facial features. Using this capacity, we make attributions such as personality, intelligence or trustworthiness based on facial appearance, and these attributions often have a strong impact on social behavior in different domains. Faces therefore play a central role in our relationships with other people and in our everyday decisions. With the popularization of the Internet, people participate in many kinds of virtual interactions, from social experiences, such as games, dating or communities, to professional activities, such as e-commerce, e-learning, e-therapy or e-health. These virtual interactions create a need for faces that represent the actual people interacting in the digital world: thus the concept of the avatar emerged. Avatars are used to represent users in different scenarios and scopes, from personal life to professional situations. In all these cases, the appearance of the avatar may have an effect not only on other people's opinions and perceptions but also on self-perception, influencing the subject's own attitude and behavior. In fact, avatars are often employed to elicit impressions or emotions through non-verbal expressions, and they can improve online interactions or be useful for educational or therapeutic purposes. The ability to generate realistic-looking avatars that elicit a desired set of social impressions is therefore an interesting and novel tool, useful in a wide range of fields. This thesis proposes a novel method for generating realistic-looking faces with an associated social profile comprising 15 different impressions. For this purpose, several partial objectives were accomplished. First, facial features were extracted from a database of real faces and grouped by appearance in an automatic and objective manner employing dimensionality reduction and clustering techniques. This yielded a taxonomy that allows faces to be codified systematically and objectively according to the previously obtained clusters. Furthermore, the proposed method is not restricted to facial features, and it should be possible to extend it to automatically group any other kind of images by appearance. Second, the relationships between the different facial features and the social impressions were identified. This reveals how much a certain facial feature influences the perception of a given social impression, making it possible to focus on the most important feature or features when designing faces with a sought social perception. Third, an image editing method was implemented to generate a completely new, realistic face from just a face definition using the aforementioned facial feature taxonomy. Finally, a system to generate realistic faces with an associated social trait profile was developed, which fulfills the main objective of the present thesis. The main novelty of this work resides in the ability to work with several trait dimensions at a time on realistic faces. Thus, in contrast with previous works that use noisy images or cartoon-like or synthetic faces, the system developed in this thesis generates realistic-looking faces at the desired levels of fifteen impressions, namely Afraid, Angry, Attractive, Babyface, Disgusted, Dominant, Feminine, Happy, Masculine, Prototypical, Sad, Surprised, Threatening, Trustworthy and Unusual. The promising results obtained in this thesis will allow further investigation of how to model social perception in faces using a completely new approach.
    Fuentes Hurtado, F. J. (2018). A system for modeling social traits in realistic faces with artificial intelligence [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/101943
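    A hedged sketch of the second partial objective described above: estimating how strongly each facial feature influences a perceived social impression. Here a random forest is fit on taxonomy class codes per feature against a single impression rating, and its feature importances are inspected. The feature list, encoding, data and model choice are assumptions for illustration; the thesis does not necessarily use this exact model.

```python
# Relating facial-feature taxonomy codes to one social impression and ranking
# feature importance. All data below are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["eyes", "eyebrows", "nose", "mouth", "jaw", "skin"]  # assumed set

# Each face is codified as the taxonomy class of each feature (integer codes),
# paired with a mean observer rating of one impression (e.g. trustworthiness).
face_codes = rng.integers(0, 8, size=(400, len(feature_names)))
trustworthiness = rng.random(400)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(face_codes, trustworthiness)

for name, importance in sorted(zip(feature_names, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name:<9} importance = {importance:.3f}")
```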

    Patient centric intervention for children with high functioning autism spectrum disorder. Can ICT solutions improve the state of the art?

    In my PhD research we developed an integrated technological platform for the acquisition of neurophysiologic signals in a semi-naturalistic setting where children are free to move around, play with different objects and interact with the examiner. The interaction with the examiner rather than with a screen is another very important feature of this research, as it recreates a more realistic situation with social interactions and cues. In this paradigm, we can assume that the signals acquired from the brain and the autonomic system are much more similar to those generated while the child interacts in everyday situations. This setting, with a relatively simple technical implementation, can be considered a step towards a more behaviorally driven analysis of neurophysiologic activity. Within the context of a pilot open trial, we showed the feasibility of the technological platform applied to classical intervention solutions for autism. We found that (1) the platform was useful during both child-therapist interaction at the hospital and child-parent interaction at home, and (2) tailored intervention was compatible with at-home use by non-professional therapists/parents. Going back to the title of my thesis, 'Can ICT solutions improve the state of the art?', the answer could be: 'Yes, it can be a useful support for a skilled professional in the field of autism.'