
    Real-time Emotional State Detection from Facial Expression on Embedded Devices

    Over the last decade, research on human facial emotion recognition has shown that computing models built on regression modelling can deliver usable performance. However, many such systems require extensive computing power, which prevents their wide application in platforms such as robots and smart devices. In the proposed system, a real-time automatic facial expression recognition pipeline was designed, implemented, and tested on an embedded device (an FPGA), as a first step toward a dedicated facial expression recognition chip for a social robot. The system was built and simulated in MATLAB and then implemented on the FPGA, where it performs continuous real-time emotional state recognition at 30 fps with 47.44% accuracy. The proposed graphical user interface displays the participant video together with the two-dimensional predicted emotion labels in real time. The research presented in this paper was supported partially by the Slovak Research and Development Agency under the research projects APVV-15-0517 & APPV-15-0731 and by the Ministry of Education, Science, Research and Sport of the Slovak Republic under the project VEGA 1/0075/15
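
The abstract does not disclose the model details; a minimal hypothetical sketch of the regression idea it names — mapping a facial-feature vector to a two-dimensional emotion label such as (valence, arousal) — might look like the following. All names, shapes, and the ridge-regression choice are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

# Hypothetical: 200 training faces, each a flattened vector of 68 2-D
# landmarks, regressed onto a 2-D (valence, arousal) emotion label.
rng = np.random.default_rng(0)
n_samples, n_features = 200, 68 * 2
X = rng.normal(size=(n_samples, n_features))   # stand-in facial features
Y = rng.normal(size=(n_samples, 2))            # stand-in 2-D emotion labels

# Closed-form ridge regression: W = (X^T X + lam*I)^{-1} X^T Y.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

def predict_emotion(features: np.ndarray) -> np.ndarray:
    """Map one feature vector to a point on the 2-D emotion plane."""
    return features @ W

point = predict_emotion(X[0])
```

At 30 fps, the per-frame work reduces to one matrix-vector product, which is the kind of fixed, small computation that maps naturally onto FPGA logic.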

    Design Methodology for Face Detection Acceleration

    A design methodology to accelerate face detection for embedded systems is described, starting from the high level (algorithm optimization) and ending with the low level (software and hardware codesign), addressing the issues and design decisions made at each level based on performance measurements and system limitations. The implemented embedded face detection system consumes very little power compared with traditional PC software implementations while maintaining the same detection accuracy. The proposed face detection acceleration methodology is suitable for real-time applications. Ministerio español de Ciencia y Tecnología TEC2011-24319; Junta de Andalucía FEDER P08-TIC-0367

    A Computation Method/Framework for High Level Video Content Analysis and Segmentation Using Affective Level Information

    Video segmentation facilitates efficient video indexing and navigation in large digital video archives. It is an important process in a content-based video indexing and retrieval (CBVIR) system. Many automated solutions performed segmentation by utilizing information about the "facts" of the video. These "facts" come in the form of labels that describe the objects captured by the camera. This type of solution was able to achieve good and consistent results for some video genres, such as news programs and informational presentations. The content format of these videos is generally quite standard, and automated solutions were designed to follow the format rules. For example, in [1] the presence of news anchor persons was used as a cue to determine the start and end of a meaningful news segment. The same cannot be said for video genres such as movies and feature films, because the makers of such videos utilize different filming techniques to elicit certain affective responses from their target audience. Humans usually perform manual video segmentation by trying to relate changes in time and locale to discontinuities in meaning [2]. As a result, viewers often disagree about the boundary locations of a meaningful video segment because of their different affective responses. This thesis presents an entirely new view of the problem of high-level video segmentation. We developed a novel probabilistic method for affective level video content analysis and segmentation. Our method has two stages. In the first stage, affective content labels are assigned to video shots by means of a dynamic Bayesian network (DBN). A novel hierarchical-coupled dynamic Bayesian network (HCDBN) topology was proposed for this stage. The topology is based on the pleasure-arousal-dominance (P-A-D) model of affect representation [3]. In principle, this model can represent a large number of emotions.
    In the second stage, the visual, audio and affective information of the video is used to compute a statistical feature vector representing the content of each shot. Affective level video segmentation is achieved by applying spectral clustering to the feature vectors. We evaluated the first stage of our proposal by comparing its emotion detection ability with existing works in the field of affective video content analysis. To evaluate the second stage, we used the time adaptive clustering (TAC) algorithm as our performance benchmark. The TAC algorithm was the best high-level video segmentation method [2]; however, it is very computationally intensive. To accelerate its computation, we developed a modified TAC (modTAC) algorithm designed to map easily onto a field programmable gate array (FPGA) device. Both the TAC and modTAC algorithms were used as performance benchmarks for our proposed method. Since affective video content is a perceptual concept, segmentation performance and human agreement rates were used as evaluation criteria. To obtain ground truth data and viewer agreement rates, a pilot panel study based on the work of Gross et al. [4] was conducted. Experiment results show the feasibility of our proposed method: the first stage achieved an average improvement of up to 38% over previous works, and the second stage an improvement of up to 37% over the TAC algorithm
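
The second stage described above — spectral clustering of per-shot feature vectors — can be sketched minimally as follows. The data, sizes, Gaussian affinity bandwidth, and the two-segment split via the sign of the Fiedler vector are all assumptions for illustration; the thesis' actual features and clustering setup are not specified here.

```python
import numpy as np

# Six "shots", each a 3-D feature vector; two well-separated groups
# stand in for two coherent video segments.
rng = np.random.default_rng(1)
shots = np.vstack([rng.normal(0.0, 0.1, (3, 3)),
                   rng.normal(2.0, 0.1, (3, 3))])

# Gaussian affinity between shot feature vectors.
d2 = ((shots[:, None, :] - shots[None, :, :]) ** 2).sum(-1)
A = np.exp(-d2 / 2.0)

# Unnormalized graph Laplacian L = D - A.
L = np.diag(A.sum(axis=1)) - A

# The Fiedler vector (eigenvector of the second-smallest eigenvalue)
# splits the shots into two segments by sign.
eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
labels = (eigvecs[:, 1] > 0).astype(int)
```

Shots whose features are close in the affinity graph end up in the same segment, so segment boundaries fall where consecutive shots differ most.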

    Identification of Autism Disorder Spectrum Based on Facial Expressions

    The study of emotions has always been a matter for philosophers and psychologists. The purpose of this research is to identify and categorize children with autism disorder by diagnosing facial emotion in eight states (neutral, anger, ridicule, hate, fear, happiness, sadness and wonder). The method of this research is descriptive-analytic. To collect samples from 80 children with autism disorder, images were taken in each of the eight states. MATLAB software was used to analyse the photos accurately. The results show that children in the first group according to DSM-5 exhibit at least six of the eight listed modes. Children in the second DSM-5 category experience between three and five feelings. It is also observed that children in the third DSM-5 category can experience only one or two feelings
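
The reported grouping rule — at least six modes for the first DSM-5 group, three to five for the second, one or two for the third — can be encoded directly. The function name and interface are hypothetical; only the thresholds come from the abstract.

```python
def dsm5_group(n_emotions: int) -> int:
    """Map the number of facial emotion modes a child exhibits (1-8)
    to the DSM-5 group reported in the study. Hypothetical encoding
    of the abstract's thresholds, not the authors' actual code."""
    if n_emotions >= 6:
        return 1   # first group: at least six of the eight modes
    if n_emotions >= 3:
        return 2   # second group: three to five feelings
    return 3       # third group: only one or two feelings
```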

    Recent Application in Biometrics

    In recent years, a number of recognition and authentication systems based on biometric measurements have been proposed. Algorithms and sensors have been developed to acquire and process many different biometric traits. Moreover, biometric technology is being used in novel ways, with potential commercial and practical implications for our daily activities. The key objective of the book is to provide a collection of comprehensive references on some recent theoretical developments as well as novel applications in biometrics. The topics covered in this book reflect both aspects of development well. They include biometric sample quality, privacy preserving and cancellable biometrics, contactless biometrics, novel and unconventional biometrics, and the technical challenges in implementing the technology in portable devices. The book consists of 15 chapters. It is divided into four sections, namely, biometric applications on mobile platforms, cancelable biometrics, biometric encryption, and other applications. The book was reviewed by editors Dr. Jucheng Yang and Dr. Norman Poh. We deeply appreciate the efforts of our guest editors: Dr. Girija Chetty, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park and Dr. Sook Yoon, as well as a number of anonymous reviewers

    Comprehensive Survey: Biometric User Authentication Application, Evaluation, and Discussion

    This paper conducts an extensive review of biometric user authentication literature, addressing three primary research questions: (1) commonly used biometric traits and their suitability for specific applications, (2) performance factors such as security, convenience, and robustness, and potential countermeasures against cyberattacks, and (3) factors affecting biometric system accuracy and potential improvements. Our analysis delves into physiological and behavioral traits, exploring their pros and cons. We discuss factors influencing biometric system effectiveness and highlight areas for enhancement. Our study differs from previous surveys by extensively examining biometric traits, exploring various application domains, and analyzing measures to mitigate cyberattacks. This paper aims to inform researchers and practitioners about the biometric authentication landscape and guide future advancements

    Sensing technologies and machine learning methods for emotion recognition in autism: Systematic review

    Background: Human Emotion Recognition (HER) has been a popular field of study in the past years. Despite the great progress made so far, relatively little attention has been paid to the use of HER in autism. People with autism are known to face problems with daily social communication and with the prototypical interpretation of emotional responses, which are most frequently expressed via facial expressions. This poses significant practical challenges to the application of regular HER systems, which are normally developed for and by neurotypical people. Objective: This study reviews the literature on the use of HER systems in autism, particularly with respect to sensing technologies and machine learning methods, so as to identify existing barriers and possible future directions. Methods: We conducted a systematic review of articles published between January 2011 and June 2023 according to the 2020 PRISMA guidelines. Manuscripts were identified by searching the Web of Science and Scopus databases. Manuscripts were included when they related to emotion recognition, used sensors and machine learning techniques, and involved children, young people, or adults with autism. Results: The search yielded 346 articles. A total of 65 publications met the eligibility criteria and were included in the review. Conclusions: Studies predominantly used facial expression techniques as the emotion recognition method. Consequently, video cameras were the most widely used devices across studies, although a growing trend in the use of physiological sensors has been observed lately. Happiness, sadness, anger, fear, disgust, and surprise were the most frequently addressed emotions. Classical supervised machine learning techniques were primarily used at the expense of unsupervised approaches or more recent deep learning models. Studies focused on autism in a broad sense, but limited efforts have been directed towards more specific disorders of the spectrum.
Privacy or security issues were seldom addressed, and if so, at a rather insufficient level of detail. This research has been partially funded by the Spanish project “Advanced Computing Architectures and Machine Learning-Based Solutions for Complex Problems in Bioinformatics, Biotechnology, and Biomedicine (RTI2018-101674-B-I00)” and the Andalusian project “Integration of heterogeneous biomedical information sources by means of high performance computing. Application to personalized and precision medicine (P20_00163)”. Funding for this research is provided by the EU Horizon 2020 Pharaon project ‘Pilots for Healthy and Active Ageing’ (no. 857188). Moreover, this research has received funding under the REMIND project Marie Sklodowska-Curie EU Framework for Research and Innovation Horizon 2020 (no. 734355). This research has been partially funded by the BALLADEER project (PROMETEO/2021/088) from the Conselleria de Innovación, Universidades, Ciencia y Sociedad Digital, Generalitat Valenciana. Furthermore, it has been partially funded by the AETHER-UA (PID2020-112540RB-C43) project from the Spanish Ministry of Science and Innovation. This work has also been partially funded by “La Conselleria de Innovación, Universidades, Ciencia y Sociedad Digital”, under the project “Development of an architecture based on machine learning and data mining techniques for the prediction of indicators in the diagnosis and intervention of autism spectrum disorder. AICO/2020/117”. This study was also funded by the Colombian Government through Minciencias grant number 860 “international studies for doctorate”. This research has been partially funded by the Spanish Government by the project PID2021-127275OB-I00, FEDER “Una manera de hacer Europa”. Moreover, this contribution has been supported by the Spanish Institute of Health ISCIII through the DTS21-00047 project.
Furthermore, this work was funded by COST Actions “HARMONISATION” (CA20122) and “A Comprehensive Network Against Brain Cancer” (Net4Brain - CA22103). Sandra Amador is granted by the Generalitat Valenciana and the European Social Fund (CIACIF/ 2022/233)

    Quarterly Research Output Reports

    These reports summarize research outputs published in each quarter by academic staff at the University of Lincoln. The lists include substantive research outputs first appearing "in published form" (or the equivalent for non-textual outputs) during the period. The lists have been generated automatically from data stored in the Lincoln Repository (http://eprints.lincoln.ac.uk/). Tables summarize the volume of outputs recorded by School

    A Review on Video Classification with Methods, Findings, Performance, Challenges, Limitations and Future Work

    In recent years, there has been rapid growth in web users and available bandwidth. Low-cost Internet connectivity makes sharing information (text, audio, and video) more common and faster. This video content needs to be analyzed to predict its class for the users' different purposes, and many machine learning approaches have been developed to classify video and save people time and energy. There are many existing review papers on video classification, but they have limitations: narrow analysis, poor structure, and failure to mention research gaps, findings, advantages, disadvantages, or future work. Our review paper largely overcomes these limitations. This study attempts to review existing video-classification procedures, to examine existing video-classification methods comparatively and critically, and to recommend the most effective and productive process. First, our analysis examines video classification with taxonomical details, the latest applications, processes, and dataset information. Secondly, it covers overall inconveniences, difficulties, shortcomings and potential work, data, and performance measurements, with recent related work in data science, deep learning, and machine learning models. A key task of this review is also to study video classification systems through their tools, benefits, drawbacks, and other features, in order to compare the techniques they have used. Lastly, we present a quick summary table based on selected features. In terms of precision and independent feature extraction, the RNN (Recurrent Neural Network), CNN (Convolutional Neural Network) and combined approaches perform better than the CNN-dependent method
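
The CNN+RNN combination the review favours typically extracts a feature vector per frame with a CNN and aggregates the sequence with a recurrent network. The sketch below illustrates only the recurrent aggregation step; random vectors stand in for CNN features, and all sizes and weights are assumptions for illustration, not any reviewed model.

```python
import numpy as np

rng = np.random.default_rng(2)
T, feat_dim, hid_dim, n_classes = 16, 128, 32, 5

# Stand-in per-frame CNN feature vectors for one video of T frames.
frames = rng.normal(size=(T, feat_dim))

# Randomly initialized weights of a simple tanh RNN plus a linear head.
W_xh = rng.normal(scale=0.1, size=(feat_dim, hid_dim))
W_hh = rng.normal(scale=0.1, size=(hid_dim, hid_dim))
W_hy = rng.normal(scale=0.1, size=(hid_dim, n_classes))

# Recurrence over frames: the hidden state accumulates temporal context.
h = np.zeros(hid_dim)
for x in frames:
    h = np.tanh(x @ W_xh + h @ W_hh)

# The final hidden state yields video-level class probabilities.
scores = h @ W_hy
probs = np.exp(scores) / np.exp(scores).sum()
```

The per-frame CNN captures spatial appearance while the recurrence captures temporal order, which is why the combined approach tends to outperform a CNN applied to frames in isolation.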