1,153 research outputs found

    Analysis and Construction of Engaging Facial Forms and Expressions: Interdisciplinary Approaches from Art, Anatomy, Engineering, Cultural Studies, and Psychology

    Get PDF
    The topic of this dissertation is the anatomical, psychological, and cultural examination of the human face in order to construct an effective, anatomy-driven 3D virtual face customization and action model. To gain a broad perspective on all aspects of the face, theories and methodologies from the fields of art, engineering, anatomy, psychology, and cultural studies were analyzed and implemented. The computer-generated facial customization and action models were designed based on the collected data. Using this customization system, a culturally specific attractive face in Korean popular culture, the “kot-mi-nam” (flower-like beautiful guy), was modeled and analyzed as a case study. The “kot-mi-nam” phenomenon is surveyed in its textual, visual, and contextual aspects, which reveals the gender and sexuality fluidity of its masculinity. The analysis and the actual development of the model organically co-construct each other, requiring an interwoven process. Chapter 1 introduces anatomical studies of the human face, psychological theories of face recognition and facial attractiveness, and state-of-the-art face construction projects in various fields. Chapters 2 and 3 present the Bezier curve-based 3D facial customization (BCFC) and the Multi-layered Facial Action Model (MFAM), both based on the analysis of human anatomy, to achieve cost-effective yet realistic facial animation without using 3D scanned data. In the experiments, results for facial customization by gender, race, fat, and age showed that BCFC achieved a performance improvement of 25.20% over the existing program Facegen and 44.12% over Facial Studio. The experimental results also demonstrated the realistic quality and effectiveness of MFAM compared with the blend shape technique, with improvements of 2.87% and 0.03% of facial area per second for happiness and anger expressions, respectively. In Chapter 4, according to the analysis based on BCFC, the 3D face of an average kot-mi-nam is close to gender neutral (male: 50.38%, female: 49.62%) and Caucasian (66.42-66.40%). Culturally specific images can be misinterpreted in different cultures, due to their different languages, histories, and contexts. This research demonstrates that facial images can be shaped by the cultural tastes of their makers and can be interpreted differently by viewers in different cultures.
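    The abstract does not give implementation details, but the core of a Bezier-curve-based customization such as BCFC is blending control points between reference facial profiles and sampling the resulting curve. The sketch below is a minimal illustration of that idea, assuming hypothetical jawline control points and a simple linear blend; it is not the BCFC system itself.

        import numpy as np

        def cubic_bezier(p0, p1, p2, p3, t):
            """Evaluate a cubic Bezier curve at parameters t in [0, 1]."""
            t = np.asarray(t)[:, None]
            return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
                    + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

        # Hypothetical jawline control points for two reference faces (x, y in model units).
        jaw_a = np.array([[0.0, 0.0], [1.0, -0.6], [2.2, -0.5], [3.0, 0.2]])
        jaw_b = np.array([[0.0, 0.0], [0.9, -0.4], [2.0, -0.3], [3.0, 0.4]])

        def customized_jawline(weight, samples=50):
            """Blend the two control polygons, then sample the resulting curve."""
            ctrl = (1 - weight) * jaw_a + weight * jaw_b
            t = np.linspace(0.0, 1.0, samples)
            return cubic_bezier(*ctrl, t)

        curve = customized_jawline(weight=0.5)  # e.g. a near gender-neutral blend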

    Applications of Affective Computing in Human-Robot Interaction: state-of-art and challenges for manufacturing

    Get PDF
    The introduction of collaborative robots aims to make production more flexible by promoting greater interaction between humans and robots, including at the physical level. However, working closely with a robot may create stressful situations for the operator, which can negatively affect task performance. In Human-Robot Interaction (HRI), robots are expected to be socially intelligent, i.e., capable of understanding human social and affective cues and reacting accordingly. This ability can be provided by affective computing, which concerns the development of systems able to recognize, interpret, process, and simulate human affects. Social intelligence is essential for robots to establish a natural interaction with people in several contexts, including the manufacturing sector with the emergence of Industry 5.0. To take full advantage of human-robot collaboration, the robotic system should be able to perceive the psycho-emotional and mental state of the operator through different sensing modalities (e.g., facial expressions, body language, voice, or physiological signals) and to adapt its behaviour accordingly. The development of socially intelligent collaborative robots in the manufacturing sector can lead to a symbiotic human-robot collaboration, raising several research challenges that still need to be addressed. The goals of this paper are the following: (i) providing an overview of affective computing implementations in HRI; (ii) analyzing the state of the art on this topic in different application contexts (e.g., healthcare, service applications, and manufacturing); (iii) highlighting research challenges for the manufacturing sector.
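    As a rough illustration of the perceive-and-adapt loop described above, the sketch below maps a fused estimate of operator stress to robot behaviour parameters. All class and function names, the two-modality fusion, and the adaptation rules are hypothetical placeholders for the kinds of components the survey discusses, not an implementation from the paper.

        from dataclasses import dataclass

        @dataclass
        class AffectiveState:
            """Hypothetical fused estimate of the operator's state (values in [0, 1])."""
            stress: float
            fatigue: float

        def estimate_state(facial_arousal: float, hrv_norm: float) -> AffectiveState:
            # Placeholder fusion of two sensing modalities; a real system would rely on
            # trained models over facial expressions, voice, body language, and physiology.
            stress = 0.6 * facial_arousal + 0.4 * (1.0 - hrv_norm)
            return AffectiveState(stress=stress, fatigue=0.5 * stress)

        def adapt_robot(state: AffectiveState) -> dict:
            """Map the perceived state to collaborative-robot behaviour parameters."""
            speed_scale = 1.0 - 0.5 * state.stress   # slow down when the operator is stressed
            separation_m = 0.5 + 0.5 * state.stress  # keep a larger safety distance
            return {"speed_scale": round(speed_scale, 2), "separation_m": round(separation_m, 2)}

        print(adapt_robot(estimate_state(facial_arousal=0.8, hrv_norm=0.3)))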

    Footwear bio-modelling: An industrial approach

    Get PDF
    There is a growing need within the footwear sector to customise the design of the last from which a specific footwear style is produced. This customisation is necessary for user comfort and health, as the user needs to wear a suitable shoe. For this purpose, a relationship must be established between the user's foot and the last with which the style will be made; until now, no model has existed that integrates both elements. On the one hand, traditional customised footwear manufacturing techniques are based on purely artisanal procedures, which make the process arduous and complex; on the other hand, the geometric models proposed by different authors cannot be implemented in an industrial environment with limited resources for acquiring morphometric and structural data of the foot, and they are not sufficiently accurate given the non-similarity of the foot and the last. In this paper, two interrelated geometric models are defined: a bio-deformable foot model and a deformable last model. The experiments completed demonstrate the validity of the models, obtaining satisfactory results in terms of comfort, efficiency, and precision, which make them viable for use in the sector.
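    The paper's geometric models are not detailed in the abstract; as a minimal sketch of the foot-last relationship it describes, the code below compares the girth of a foot cross-section with that of a last cross-section to obtain a fitting allowance. The cross-section data and the single ball-girth measure are hypothetical simplifications, not the bio-deformable models themselves.

        import numpy as np

        def girth(points_2d):
            """Perimeter of a closed cross-section given ordered boundary points, shape (n, 2)."""
            diffs = np.diff(np.vstack([points_2d, points_2d[:1]]), axis=0)
            return float(np.sum(np.linalg.norm(diffs, axis=1)))

        def allowance(foot_section, last_section):
            """Girth allowance (last minus foot) at one cross-section, in the same units."""
            return girth(last_section) - girth(foot_section)

        # Hypothetical ball-of-foot cross-sections, e.g. sampled from a scan and a last model (cm).
        theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
        foot = np.c_[4.5 * np.cos(theta), 3.0 * np.sin(theta)]
        last = np.c_[4.7 * np.cos(theta), 3.1 * np.sin(theta)]

        print(f"ball girth allowance: {allowance(foot, last):.2f} cm")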

    Design and management of brand identity with an action research in Turkish fashion industry

    Get PDF
    Thesis (Master)--Izmir Institute of Technology, Industrial Design, Izmir, 2004. Includes bibliographical references (leaves: 113). Text in English; abstract in Turkish and English. xiii, 134 leaves.
    This thesis studies the design and management of brand identity with attention to concepts of emotion. It is an action research thesis focused on answering two main questions: 1. How is brand identity designed in order to create an emotional experience? 2. How are the emotions directed at a brand identity measured, so as to provide a basis for managing that identity (following the widely accepted business argument that "nothing can be managed without measuring")? Today it is widely accepted that marketing and customers have evolved: the perception of brands and products has shifted from functionality and usability to experiential and emotional aspects. Customers want to be excited about a product, surprised by a service, or filled with desire for a brand. As a result, the design of brand identity elements (as well as products) needs to be reviewed and emphasized, and new metrics are required for this era; in other words, new research and measurement methods are needed. Since the research method of this thesis is action research, with dual aims of action and research, the action part was carried out with the Jimmy Key© brand (renamed JKEY© at the end of the action research, in 2004), a Turkish fashion brand that sells sports and casual wear to a young, educated audience in national and international markets. The research in this work aimed to increase understanding of the theories and concepts related to brand identity and emotion. The literature review starts with the evolution of marketing concepts from the 1900s, when mass consumption was the idea in action, to the 21st century, where the experience economy dominates; mood consumption is offered as a new marketing perspective for this age. The concepts reviewed in relation to the emotional design of a brand are "brand identity", "brand positioning", "emotions", "emotional experiences" and "moods", "fashion" (from a social and economic view), "personalization and customization", and "emotional branding". The emotional instruments in brand research are psychophysiological instruments, facial expressions, voice, and self-report measures; a remarkable tool that uses the facial, vocal, and bodily expressions elicited by product design is Desmet's PrEmo. The actions of this work focused on reviewing the current literature from an emotional perspective, bringing about change in the Jimmy Key© brand identity through various designs, and proposing consumer research methods and tools for measuring, and thus managing, brand identity in a way that takes emotion into account. The proposed designs (including logo, naming, advertisements, tags, bags, packaging, catalog, shop windows, etc.) and the proposed brand measurement instruments (including questionnaires, focus groups, brand collages, shopping tests, and finally the proposed instrument "MyBetie") constitute the action part of this action research. All the proposed designs are considered physiological, sociological, psychological, and ideological experiences of the brand identity, in line with Jordan's four-pleasure framework. The brand research activities in the action phase were very time consuming and costly, even though they were crucial and beneficial for brand positioning and future tracking. This revealed the need for a more effective instrument that is also capable of measuring emotions. Accordingly, the four brand research methods used in the action phase were integrated into a new proposed instrument: "MyBetie", a unique, personalized, cost-effective, quick-response brand research and measurement instrument, is proposed at the end of the thesis.

    Facial expression recognition based on facial anatomy (Yüz anatomisine dayalı ifade tanıma)

    Get PDF
    The geometric approaches to facial expression recognition commonly focus on the displacement of feature points selected by the researchers, or on the action units defined by the Facial Action Coding System (FACS). In both approaches the feature points are carefully located on the lips, nose, and forehead, where an expression is observed at its full strength. Since these regions are under the influence of multiple muscles, distinct muscular activities can result in similar displacements of the feature points. Hence, analysis of complex expressions through a set of specific feature points is quite difficult. In this project we propose to extract facial muscle activity levels by tracking multiple points distributed over the muscles' regions of influence. The proposed algorithm consists of: (1) semi-automatic customization of the face model to a subject; (2) identification and tracking of facial features that reside in the region of influence of a muscle; (3) estimation of head orientation and alignment of the face model with the observed face; (4) estimation of the relative displacements of vertices that produce facial expressions; (5) solving vertex displacements to obtain muscle forces; and (6) classification of facial expression with the muscle-force features. Our algorithm requires manual intervention only in the model customization stage. We demonstrate the representative power of the proposed muscle-based features on classification problems of seven basic and subtle expressions. The best performance on the classification of basic expressions was 76%, obtained with an SVM; this result is close to human performance in facial expression recognition. Our best performance for classification of the seven subtle expressions was 55%, again with an SVM. This figure implies that muscle-based features are good candidates for detecting involuntary expressions, which are often subtle and instantaneous. Muscle forces can be considered the ultimate basis functions that anatomically compose all expressions. Increased reliability in the extraction of muscle forces will enable detection and classification of subtle and complex expressions with higher precision. Moreover, the proposed algorithm may be used to reveal unknown mechanisms of emotions and expressions, as it is not limited to a predefined set of heuristic features.
    TÜBİTAK
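    Stages (5) and (6) of the algorithm above lend themselves to a compact illustration: if each muscle contributes a displacement field to the mesh, the observed vertex displacements can be solved for muscle forces in a least-squares sense, and those forces then feed a classifier. The sketch below uses a random, hypothetical displacement basis and synthetic labels; it mirrors only the structure of those two stages, not the project's actual face model or data.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        # Hypothetical linear model: vertex displacements d ≈ A @ f, where column j of A
        # holds the displacement field produced by unit activation of muscle j.
        n_vertices, n_muscles = 300, 18
        A = rng.normal(size=(3 * n_vertices, n_muscles))

        def muscle_forces(displacements):
            """Stage (5): least-squares estimate of muscle forces from tracked displacements."""
            f, *_ = np.linalg.lstsq(A, displacements, rcond=None)
            return f

        # Stage (6): classify expressions from muscle-force features (synthetic stand-in data).
        n_samples = 140
        true_forces = rng.normal(size=(n_samples, n_muscles))
        labels = rng.integers(0, 7, size=n_samples)  # seven basic expressions
        observed = true_forces @ A.T + 0.01 * rng.normal(size=(n_samples, 3 * n_vertices))
        features = np.array([muscle_forces(d) for d in observed])

        clf = SVC(kernel="rbf").fit(features[:100], labels[:100])
        print("toy accuracy:", clf.score(features[100:], labels[100:]))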

    Data Leakage and Evaluation Issues in Micro-Expression Analysis

    Full text link
    Micro-expressions have drawn increasing interest lately due to various potential applications. The task is, however, difficult, as it incorporates many challenges from the fields of computer vision, machine learning, and the emotional sciences. Due to the spontaneous and subtle characteristics of micro-expressions, the available training and testing data are limited, which makes evaluation complex. We show that data leakage and fragmented evaluation protocols are widespread issues in the micro-expression literature. We find that fixing data leaks can drastically reduce model performance, in some cases even making models perform similarly to a random classifier. To this end, we go through common pitfalls, propose a new standardized evaluation protocol using facial action units with over 2000 micro-expression samples, and provide an open source library that implements the evaluation protocols in a standardized manner. Code will be available at https://github.com/tvaranka/meb.
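    The paper's exact protocol is defined in the linked library; the sketch below only illustrates the general principle behind avoiding identity leakage, namely keeping all samples from a subject in the same fold. The features, labels, subject IDs, and classifier are synthetic stand-ins, not the authors' benchmark.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import GroupKFold

        rng = np.random.default_rng(0)

        # Synthetic stand-ins for micro-expression features, labels, and subject identities.
        X = rng.normal(size=(200, 32))
        y = rng.integers(0, 3, size=200)
        subjects = rng.integers(0, 20, size=200)

        # A leaky split would let samples from the same subject appear in both the training
        # and the test fold; grouping the folds by subject prevents that.
        scores = []
        for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=subjects):
            clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
            scores.append(clf.score(X[test_idx], y[test_idx]))

        print("subject-independent accuracy:", round(float(np.mean(scores)), 3))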

    Creative tools for producing realistic 3D facial expressions and animation

    Get PDF
    Creative exploration of realistic 3D facial animation is popular but very challenging due to the high level of knowledge and skills required. This forms a barrier for creative individuals who have limited technical skills but wish to explore their creativity in this area. This paper proposes a new technique that facilitates users' creative exploration by hiding the technical complexities of producing facial expressions and animation. The proposed technique draws on research from psychology and anatomy, and uses Autodesk Maya as a use case by developing a creative tool that extends Maya's Blend Shape Editor. User testing revealed that novice users in the creative media, employing the proposed tool, can produce rich and realistic facial expressions that portray new and interesting emotions. It reduced production time by 25% compared to Maya and by 40% compared to the equivalent 3DS Max tools.
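    The mechanism such a tool hides from novices is, at its core, blend shape mixing: a final mesh is the neutral mesh plus a weighted sum of target deltas. The sketch below shows that arithmetic on a hypothetical four-vertex patch with two made-up targets; it is not the paper's Maya plug-in.

        import numpy as np

        def apply_blend_shapes(neutral, targets, weights):
            """Combine blend shape targets: neutral + sum_i w_i * (target_i - neutral)."""
            result = neutral.copy()
            for target, w in zip(targets, weights):
                result += w * (target - neutral)
            return result

        # Hypothetical four-vertex face patch and two expression targets.
        neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
        smile = neutral + np.array([[0.0, -0.10, 0.05]] * 4)
        brow_raise = neutral + np.array([[0.0, 0.15, 0.00]] * 4)

        # A "pleasantly surprised" expression as a weighted mix of the two targets.
        mesh = apply_blend_shapes(neutral, [smile, brow_raise], weights=[0.6, 0.8])
        print(mesh)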

    Efficient Synthesis of Realistic Facial Animation

    Get PDF
    Degree system: new; report number: Kou 3556; degree type: Doctor of Engineering; date conferred: 2012/2/25; Waseda University diploma number: Shin 589

    Deep Learning Optimizers Comparison in Facial Expression Recognition

    Get PDF
    Artificial intelligence is everywhere we go, whether it is programming an interactive cleaning robot or detecting bank fraud; its rise is inevitable. In the last few decades many new architectures and approaches have been introduced, so it has become hard to know which approach or architecture is best for a given area. One such area is the detection of emotion in the human face, most commonly known as Facial Expression Recognition (FER). In this work we started with an intensive collection of data concerning the theories that explain the existence of emotions, how they are distinguished from one another, and how they are recognized in a human face. After this, we developed deep learning models with different architectures in order to compare their performance when used for Facial Expression Recognition. After developing the models, we took one of them and tested it with different deep learning optimizer algorithms, in order to verify the differences among them and thus determine the best optimization algorithm for this particular case.
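    The abstract does not name the specific optimizers compared; the sketch below shows the general shape of such a comparison, training the same small convolutional architecture with a few common Keras optimizers (SGD, RMSprop, Adam) on stand-in data. The architecture, the optimizer list, and the random data are assumptions for illustration, not the models or dataset used in the thesis.

        import numpy as np
        import tensorflow as tf

        # Hypothetical stand-in for a FER dataset: 48x48 grayscale faces, 7 emotion classes.
        rng = np.random.default_rng(0)
        x = rng.random((256, 48, 48, 1)).astype("float32")
        y = rng.integers(0, 7, size=256)

        def build_model():
            return tf.keras.Sequential([
                tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(48, 48, 1)),
                tf.keras.layers.MaxPooling2D(),
                tf.keras.layers.Flatten(),
                tf.keras.layers.Dense(7, activation="softmax"),
            ])

        # Train the same architecture with different optimizers and compare.
        for name in ["sgd", "rmsprop", "adam"]:
            model = build_model()
            model.compile(optimizer=name, loss="sparse_categorical_crossentropy",
                          metrics=["accuracy"])
            history = model.fit(x, y, epochs=3, batch_size=32, verbose=0)
            print(name, "final training accuracy:", round(history.history["accuracy"][-1], 3))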