2,135 research outputs found

    Nose Heat: Exploring Stress-induced Nasal Thermal Variability through Mobile Thermal Imaging

    Automatically monitoring and quantifying stress-induced thermal dynamic information in real-world settings is an extremely important but challenging problem. In this paper, we explore whether mobile thermal imaging can be used to measure the rich physiological cues of mental stress that can be deduced from a person's nose temperature. To answer this question we build i) a framework for continuously monitoring nasal thermal variability patterns and ii) a novel set of thermal variability metrics to capture the richness of this dynamic information. We evaluated our approach in a series of studies, including laboratory-based psychosocial stress-induction tasks and real-world factory settings. We demonstrate that our approach has the potential to assess stress responses beyond controlled laboratory settings.
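
    The abstract does not spell out the authors' actual variability metrics, so the following is only a minimal Python sketch of what continuous nasal thermal variability tracking might look like, assuming a per-frame mean nose-ROI temperature series has already been extracted from the thermal video; the function name, frame rate, and window length are illustrative assumptions, not the paper's method.

        import numpy as np

        def nasal_variability_metrics(temps, fps=8.0, win_s=10.0):
            """Illustrative variability metrics for a 1-D nose-temperature series.

            temps : per-frame mean temperature of the nose ROI (hypothetical input)
            fps   : thermal camera frame rate (assumed)
            win_s : sliding-window length in seconds (assumed)
            """
            temps = np.asarray(temps, dtype=float)
            win = max(2, int(win_s * fps))
            t = np.arange(len(temps)) / fps
            # Overall linear trend; nasal cooling is a commonly reported stress marker.
            slope = np.polyfit(t, temps, 1)[0]
            # Short-term variability: standard deviation inside each sliding window.
            windows = np.lib.stride_tricks.sliding_window_view(temps, win)
            win_std = windows.std(axis=1)
            # Frame-to-frame dynamics: mean energy of the first difference.
            diff_energy = float(np.mean(np.diff(temps) ** 2))
            return {
                "trend_slope": float(slope),
                "mean_window_std": float(win_std.mean()),
                "max_window_std": float(win_std.max()),
                "diff_energy": diff_energy,
            }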

    Face Recognition Under Varying Illumination

    This study is the result of a successful joint venture with my adviser, Prof. Dr. Muhittin Gökmen. I am thankful to him for his continuous assistance in preparing this project. Special thanks to the assistants of the Computer Vision Laboratory for their steady support and help with many topics related to the project.

    Verification of emotion recognition from facial expression

    Analysis of facial expressions is an active topic of research with many potential applications, since the human face plays a significant role in conveying a person’s mental state. Because of its practical value, scientists and researchers from fields as diverse as psychology, finance, marketing, and engineering have developed significant interest in this area. Hence, there is more need than ever for intelligent tools that can serve an emotional Human-Computer Interface (HCI) by analyzing facial expressions, as a better alternative to traditional devices such as the keyboard and mouse. The face is a window into the human mind, and examining mental states explores a person’s internal cognitive states; a facial emotion recognition system thus has the potential to read people’s minds and interpret their emotional thoughts to the world. Existing efforts have achieved high recognition accuracy for facial emotions on benchmark databases of posed facial emotions. However, such systems are not qualified to interpret a person’s true feelings, even when the posed emotions are recognized; the difference between posed and spontaneous facial emotions has been identified and studied in the literature. One of the most interesting challenges in the field of HCI is to make computers more human-like for more intelligent user interfaces. In this dissertation, a Regional Hidden Markov Model (RHMM) based facial emotion recognition system is proposed. In this system, facial features are extracted from three face regions that convey relevant information regarding facial emotions: the eyebrows, eyes, and mouth. As a marked departure from prior work, RHMMs are trained for the states of these three distinct face regions, rather than for the entire face, for each facial emotion type. In the recognition step, regional features are extracted from test video sequences and processed with the corresponding RHMMs to learn the probabilities of the states of the three face regions; the combination of states identifies the estimated emotion type for a given frame in a video sequence. An experimental framework is established to validate the results of such a system. As a new classifier, the RHMM emphasizes the states of the three facial regions rather than the entire face, and the dissertation proposes a method of forming observation sequences that represent the changes of these regional states for RHMM training and recognition. The proposed method is applicable to various forms of video clips, including real-time video. In contrast to systems built on posed facial emotions, the proposed system shows a human-like capability to infer people’s mental states from the moderate levels of spontaneous facial emotion conveyed in daily life. Moreover, the research associated with the proposed system is extended into the domains of finance and biomedical engineering: a CEO’s fearful facial emotion is found to be a strong, positive predictor of the firm’s stock price in the market, and experimental results demonstrate the similarity between spontaneous facial reactions to stimuli and the inner affective states reflected in brain activity. The results reveal the effectiveness of combining facial features with features extracted from brain-activity signals for multi-signal correlation analysis and affective state classification.
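
    The abstract names the classifier (one HMM per face region per emotion, with region states combined at recognition time) but not its exact formulation, so here is a rough sketch of that region-wise idea using the hmmlearn library's GaussianHMM; the data layout, state count, and the independence assumption across regions are all illustrative assumptions, not the dissertation's actual RHMM.

        import numpy as np
        from hmmlearn import hmm

        REGIONS = ("eyebrows", "eyes", "mouth")

        def train_region_models(train_data, n_states=4):
            """Train one HMM per (emotion, region).

            train_data : {emotion: {region: list of (T_i, d) feature sequences}}
            """
            models = {}
            for emotion, regions in train_data.items():
                models[emotion] = {}
                for region in REGIONS:
                    seqs = regions[region]
                    X = np.concatenate(seqs)          # stack sequences row-wise
                    lengths = [len(s) for s in seqs]  # per-sequence lengths
                    m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
                    m.fit(X, lengths)
                    models[emotion][region] = m
            return models

        def classify(models, test_seqs):
            """test_seqs : {region: (T, d) feature sequence for one clip}."""
            def total_loglik(emotion):
                # Treat regions as independent and sum their log-likelihoods.
                return sum(models[emotion][r].score(test_seqs[r]) for r in REGIONS)
            return max(models, key=total_loglik)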

    Speech-based recognition of self-reported and observed emotion in a dimensional space

    The differences between self-reported and observed emotion have only marginally been investigated in the context of speech-based automatic emotion recognition. We address this issue by comparing self-reported emotion ratings to observed emotion ratings and by examining how differences between these two types of ratings affect the development and performance of automatic emotion recognizers trained on them. A dimensional approach to emotion modeling is adopted: the ratings are based on continuous arousal and valence scales. We describe the TNO-Gaming Corpus, which contains spontaneous vocal and facial expressions elicited via a multiplayer videogame and includes emotion annotations obtained both via self-report and via observation by outside observers. Comparisons show that there are discrepancies between self-reported and observed emotion ratings, which are also reflected in the performance of the emotion recognizers developed from them. Using Support Vector Regression in combination with acoustic and textual features, we develop recognizers of arousal and valence that predict points in a two-dimensional arousal-valence space. Their results show that self-reported emotion is much harder to recognize than observed emotion, and that averaging ratings from multiple observers improves performance.
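
    As a concrete illustration of the modeling step the abstract describes (Support Vector Regression mapping per-clip features to points in the arousal-valence plane), here is a minimal scikit-learn sketch; the feature matrix, kernel, and regularization constant are placeholder assumptions, and averaging multiple observers' ratings would happen before this step.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        def train_av_regressors(features, arousal, valence):
            """Fit one SVR per dimension of the arousal-valence space.

            features : (n_clips, n_features) acoustic/textual feature matrix
            arousal, valence : continuous per-clip ratings (e.g. observer averages)
            """
            models = {}
            for name, y in (("arousal", arousal), ("valence", valence)):
                model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
                model.fit(features, y)
                models[name] = model
            return models

        def predict_av(models, features):
            """Return predicted (arousal, valence) points, one row per clip."""
            return np.column_stack(
                [models["arousal"].predict(features),
                 models["valence"].predict(features)]
            )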

    Radical Recognition in Off-Line Handwritten Chinese Characters Using Non-Negative Matrix Factorization

    In the past decade, handwritten Chinese character recognition has received renewed interest with the emergence of touch-screen devices. Other popular applications include on-line Chinese character dictionary look-up and visual translation in mobile phone applications. Due to the complex structure of Chinese characters, this classification task is far from easy, drawing on knowledge from mathematics, computer science, and linguistics. Given a large image database of handwritten character data, the goal of my senior project is to use Non-Negative Matrix Factorization (NMF), a recent method for finding a suitable parts-based representation of image data, to detect specific sub-components in Chinese characters. Previously, NMF has only been applied to typed (printed) Chinese characters in different fonts; this project focuses specifically on how well NMF works on handwritten characters. In addition, research in Chinese character classification has mainly used holistic approaches, treating each character as an inseparable unit. By using NMF, this project takes a different approach, focusing on a more specific problem in Chinese character classification: radical (sub-component) detection. Finally, a possible application of radical detection is proposed: an interactive tool that could help Chinese language learners better recognize characters by their radicals.
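
    Since the abstract's core technique (NMF producing a parts-based representation of character images) is standard, a short scikit-learn sketch can show the shape of the approach; the component count, activation threshold, and function names are assumptions, and the project's actual preprocessing is not described here.

        import numpy as np
        from sklearn.decomposition import NMF

        def learn_radical_parts(images, n_parts=40):
            """Factorize flattened character images into non-negative parts.

            images : (n_samples, height * width) array, pixel values in [0, 1]
            Returns (W, H): per-image activations and per-part basis images.
            """
            # V ~= W @ H with non-negative entries, so each row of H tends to
            # capture a localized stroke group, ideally a radical-like part.
            model = NMF(n_components=n_parts, init="nndsvda", max_iter=500)
            W = model.fit_transform(images)   # (n_samples, n_parts)
            H = model.components_             # (n_parts, height * width)
            return W, H

        def detect_radical(W, part_index, threshold=0.1):
            """Indices of characters that strongly activate one learned part."""
            return np.flatnonzero(W[:, part_index] > threshold)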