20 research outputs found

    Simple MoCap System for Home Usage

    Many MoCap systems exist nowadays. Generating 3D facial animation of characters is currently realized using motion capture data (MoCap data), obtained by tracking facial markers on an actor or actress. In general, this is a professional solution that is sophisticated and costly. This paper presents an inexpensive alternative: a new, easy-to-use system for home usage with which we create character animation. In its implementation, we paid attention to eliminating the errors of previous solutions. The authors describe a method for motion-capturing characters on a treadmill, as well as their own Java application that processes the video for further use in Cinema 4D. The paper describes the implementation of this sensing technology in such a way that the animated character authentically imitates human movement on a treadmill.

    Real Time Facial Expression Recognition Using Webcam and SDK Affectiva

    Facial expression is an essential part of communication. For this reason, the evaluation of human emotions by computer is a very interesting topic that has gained more and more attention in recent years. It is mainly related to the possibility of applying facial expression recognition in many fields, such as HCI, video games, virtual reality, and analysing customer satisfaction. Emotion recognition is often performed in three basic phases: face detection, facial feature extraction, and, as the last stage, expression classification. Most often one encounters the so-called Ekman classification of 6 emotional expressions (or 7, with the neutral expression), as well as other types of classification, such as the Russell circumplex model, which contains up to 24 emotions, or Plutchik's Wheel of Emotions. The methods used in the three phases of the recognition process have not only improved over the last 60 years; new methods and algorithms have also emerged that achieve greater accuracy and lower computational demands than the Viola-Jones detector. Various solutions are therefore currently available in the form of Software Development Kits (SDKs). In this publication, we present the design and creation of our system for real-time emotion classification. Our intention was to create a system that covers all three phases of the recognition process and works quickly and stably in real time. That is why we decided to take advantage of the existing Affectiva SDK. Using an ordinary webcam, we can automatically detect facial landmarks in the image with the Affectiva SDK. A geometric feature-based approach is used for feature extraction: the distances between landmarks serve as features, and a brute-force method is used to select an optimal feature set. The proposed system uses a neural network for classification. It recognizes 6 (respectively 7) facial expressions, namely anger, disgust, fear, happiness, sadness, surprise, and neutral. We do not want to report only the success rate of our solution; we also want to show how these measurements were determined, the results we achieved, and how those results have significantly influenced our future research direction.
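The landmark-distance features and brute-force feature selection described above can be sketched as follows. This is a minimal illustration, not the Affectiva-based implementation: the landmark coordinates are made up, and the scoring function is a toy stand-in for classifier cross-validation accuracy.

```python
from itertools import combinations
from math import dist

# Hypothetical 2D facial landmarks (x, y), standing in for SDK output.
landmarks = [(30, 40), (70, 40), (50, 60), (35, 80), (65, 80)]

# Geometric features: Euclidean distance between every pair of landmarks.
features = [dist(a, b) for a, b in combinations(landmarks, 2)]

def brute_force_select(feature_indices, k, score):
    """Try every k-sized subset of features and keep the best-scoring one."""
    return max(combinations(feature_indices, k), key=score)

# Toy score: in a real system this would be the classifier's validation
# accuracy on the chosen subset; here we just prefer larger distances.
best = brute_force_select(range(len(features)), 2,
                          score=lambda subset: sum(features[i] for i in subset))
```

Brute-force selection is only feasible because the number of landmark pairs (and thus candidate features) stays small.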

    Are Instructed Emotional States Suitable for Classification? Demonstration of How They Can Significantly Influence the Classification Result in an Automated Recognition System

    At present, various freely available or commercial solutions are used to classify a subject's emotional state. Classification of the emotional state helps us understand how subjects feel and what they are experiencing in a particular situation. It can thus be used in various areas of our lives, from neuromarketing, through the automotive industry (determining how emotions affect driving), to implementing such a system in the learning process. The learning process, which is the (mutual) interaction between teacher and learner, is an interesting area in which individual emotional states can be explored. Several research studies have been carried out in this pedagogical-psychological area, and some of them demonstrated the important impact of the emotional state on students' results. However, for comparison and unambiguous classification of the emotional state, most of these studies used the instructed (even constructed) stereotypical facial expressions of well-known test databases (JAFFE is a typical example). Such facial expressions are highly standardized, and software can recognize them with a fairly high success rate, but this does not necessarily reflect the actual success rate of classifying the subject's emotions, because the similarity of such expressions to real emotional expressions remains unknown. Therefore, we examined facial expressions in real situations and subsequently compared them with the instructed expressions of the same emotions (the JAFFE database). The overall average classification score for real facial expressions was 94.58%.

    Evaluating the Emotional State of a User Using a Webcam

    In online learning it is more difficult for teachers to see how individual students behave. Students' emotions, such as self-esteem, motivation, commitment, and others believed to be determinant in student performance, cannot be ignored, as they (together with affective states and learning styles) are known to greatly influence student learning. The ability of a computer to evaluate the emotional state of the user is receiving growing attention. By evaluating the emotional state, there is an attempt to overcome the barrier between man and the non-emotional machine. Real-time emotion recognition in e-learning using webcams has been a research area over the last decade. Improving learning through webcams and microphones offers relevant feedback based upon the learner's facial expressions and verbalizations. The majority of current software does not work in real time: it scans the face and progressively evaluates its features. The designed software uses neural networks in real time, which enables it to be applied in various fields of our lives and thus actively influence their quality. The face emotion recognition software was validated using annotations from various experts, and these expert findings were contrasted with the software results. The overall accuracy of our software, based on the requested emotions and the recognized emotions, is 78%. Online evaluation of emotions is an appropriate technology for enhancing the quality and efficacy of e-learning by including the learner's emotional states.
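An overall accuracy like the 78% reported above is obtained by comparing the requested (expert-annotated) emotions with those the software recognized. A minimal sketch of that comparison, with made-up labels rather than the study's data:

```python
# Hypothetical annotation vs. recognition results for six trials.
requested  = ["happy", "sad", "angry", "neutral", "happy", "surprised"]
recognized = ["happy", "sad", "happy", "neutral", "happy", "surprised"]

# Accuracy = fraction of trials where the recognized label matches.
matches = sum(req == rec for req, rec in zip(requested, recognized))
accuracy = matches / len(requested)
```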

    Implementation of Innovative Technologies in the Fields of Electronic Locks

    Almost every institution currently uses an attendance system that maintains control over the attendance of employees, students, and other persons. Using an attendance system, we can grant the right to enter certain rooms only to designated people. On the basis of reports from the attendance system, we can evaluate the monthly attendance of employees and thereby determine their real movement within the institution. Today this system is the usual standard in every medium and large institution, for example businesses, schools, universities, and many others. The price of such a system, however, is often too high, so companies also opt for other alternatives. Our task was to create a working prototype of such a system. At a minimum, it must provide a function for recording the arrival and departure of employees, so that the time spent in the workplace can be determined. For this purpose we used the Arduino microcontroller platform with several basic sensors and the Arduino IDE. In this paper we present the results achieved with different access cards.
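The minimal record-keeping such a prototype needs can be sketched as follows. This is an illustration of the arrival/departure pairing logic only, with invented card IDs and times; in the actual prototype the swipes would come from the Arduino's card reader.

```python
from datetime import datetime, timedelta

# Hypothetical swipe log: alternating arrival/departure events per card.
swipes = {
    "CARD-0042": [datetime(2018, 5, 2, 8, 0),  datetime(2018, 5, 2, 12, 0),
                  datetime(2018, 5, 2, 12, 30), datetime(2018, 5, 2, 16, 30)],
}

def time_on_site(events):
    """Sum (departure - arrival) over consecutive swipe pairs."""
    total = timedelta()
    for arrive, depart in zip(events[::2], events[1::2]):
        total += depart - arrive
    return total

stay = time_on_site(swipes["CARD-0042"])  # two intervals of 4 h each
```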

    Biometrics Authentication of Fingerprint with Using Fingerprint Reader and Microcontroller Arduino

    The idea of security is as old as humanity itself. Among the oldest security methods were simple mechanical locks, whose authentication element was the key: at first a universal, simple type, later unique to each lock. For a long time, mechanical locks were the sole option for protection against unauthorized access. The boom of biometrics came in the 20th century, and especially in recent years biometrics has expanded into various areas of our lives. Compared with traditional security methods such as passwords, access cards, and hardware keys, it offers many benefits, chiefly uniqueness and the impossibility of loss. In this paper we therefore focus on the design of a low-cost biometric fingerprint system and its subsequent implementation in practice. Our main goal was to create a system capable of recognizing a user's fingerprints and then processing them. The main part of this system is the Arduino Yun microcontroller with an external fingerprint-scanning interface, the Adafruit R305 reader. This microcontroller communicates with an external database, which ensures the exchange of data between the Arduino Yun and the user application. The application was created for the (currently) most widespread mobile operating system, Android.

    The Possibilities of Classification of Emotional States Based on User Behavioral Characteristics

    The classification of users' emotions based on their behavioral characteristics, namely their keyboard typing and mouse usage patterns, is an effective and non-invasive way of gathering user data without imposing any limitations on their ability to perform tasks. To gather data for the classifier we used an application, Emotnizer, which we had developed for this purpose. The output of the classification is categorized into 4 emotional categories from Russell's circumplex model: happiness, anger, sadness, and the state of relaxation. The reference database sample consisted of 50 students. Multiple regression analyses gave us a model that allowed us to predict the valence and arousal of the subject based on keyboard and mouse input. Upon re-testing with another group of 50 students and processing the data, we found that our Emotnizer program can classify emotional states with an average success rate of 82.31%.
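The regression idea above can be sketched with made-up numbers: predicting valence from a single behavioral feature (mean typing speed). The paper's model is a multiple regression over several keyboard and mouse features, but the least-squares principle is the same.

```python
# Hypothetical training data: one feature, one target.
typing_speed = [3.1, 4.0, 2.5, 5.2, 3.8]    # keystrokes per second
valence      = [0.2, 0.5, -0.1, 0.7, 0.4]   # annotated valence in [-1, 1]

# Ordinary least squares for a single predictor:
# slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x).
n = len(typing_speed)
mean_x = sum(typing_speed) / n
mean_y = sum(valence) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(typing_speed, valence))
         / sum((x - mean_x) ** 2 for x in typing_speed))
intercept = mean_y - slope * mean_x

# Predict valence for a previously unseen typing speed.
predicted_valence = slope * 4.5 + intercept
```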

    Voice Analysis Using PRAAT Software and Classification of User Emotional State

    In recent decades, the field of IT has seen incredibly rapid development. This development has shown that it is important not only to push performance and functional boundaries but also to adapt human-computer interaction to modern needs. One possibility for interaction is voice control, which nowadays cannot be restricted to direct commands only. The goal of adaptive interaction between man and computer is understanding human needs. The paper deals with classifying the user's emotional state based on voice-track analysis and describes the authors' own solution: the measurement and selection of appropriate voice characteristics using ANOVA, the use of the PRAAT software to analyse many aspects of the voice, and the implementation of their own application to classify the user's emotional state from his or her voice. The paper presents the results of testing the created application and the possibilities for further expansion and improvement of this solution.
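The ANOVA-based feature selection mentioned above amounts to ranking each voice characteristic by how well it separates the emotion classes. A minimal sketch with an invented feature (mean pitch in Hz, as PRAAT could export) and illustrative numbers, not measured data:

```python
# Hypothetical values of one voice feature per emotion class.
groups = {
    "neutral": [110, 115, 112],
    "happy":   [140, 150, 145],
    "angry":   [170, 165, 175],
}

def f_statistic(groups):
    """One-way ANOVA F: between-group variance over within-group variance."""
    all_vals = [v for g in groups.values() for v in g]
    grand = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups.values())
    ss_within = sum((v - sum(g) / len(g)) ** 2
                    for g in groups.values() for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

F = f_statistic(groups)  # large F -> feature separates the classes well
```

Features with the highest F (and significant p-values) would be kept as inputs to the emotion classifier.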
