
    Measuring Human Comprehension from Nonverbal Behaviour using Artificial Neural Networks

    This paper presents the adaptation and application of Silent Talker, a psychological profiling system, to the measurement of human comprehension through the monitoring of multiple channels of facial nonverbal behaviour using Artificial Neural Networks (ANNs). Everyday human interactions are abundant with almost unconscious nonverbal behaviours, accounting for approximately 93% of communication and providing a potentially rich source of information once decoded. Existing comprehension assessment techniques are inhibited by inconsistencies, limited to the verbal communication dimension, and often time-consuming with delayed feedback. Humans are hindered as accurate decoders of nonverbal behaviour: they are error-prone, inconsistent and poor at simultaneously focusing on multiple channels. Furthermore, human decoders are susceptible to fatigue and require training, resulting in a costly, time-consuming process. ANNs are powerful, adaptable, scalable computational models that can overcome these human decoder and pattern classification weaknesses. Therefore, the neural-network-based Silent Talker system has been trained and validated in the measurement of human comprehension using videotaped participant nonverbal behaviour from an informed consent field study. A series of experiments on training backpropagation ANNs with different topologies was conducted. The results show that comprehension and non-comprehension patterns exist within the monitored multiple channels of facial nonverbal behaviour, with both experiments consistently yielding classification accuracies above 80%.
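As a minimal illustrative sketch (not the authors' implementation), a single-hidden-layer network trained with backpropagation can separate two classes of nonverbal-behaviour feature vectors. The feature values, labels and topology below are synthetic and arbitrary:

```python
# Minimal backpropagation sketch: one hidden layer, sigmoid activations.
# All data here is synthetic; nothing is taken from the Silent Talker study.
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyANN:
    def __init__(self, n_in, n_hidden):
        # small random weights; each layer has an extra bias input
        self.w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
                   for _ in range(n_hidden)]
        self.w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]

    def forward(self, x):
        self.h = [sigmoid(sum(w * v for w, v in zip(row, x + [1.0])))
                  for row in self.w1]
        self.out = sigmoid(sum(w * v for w, v in zip(self.w2, self.h + [1.0])))
        return self.out

    def train(self, x, target, lr=0.5):
        out = self.forward(x)
        d_out = (out - target) * out * (1 - out)          # output-layer delta
        for j, hj in enumerate(self.h):                    # hidden-layer deltas
            d_h = d_out * self.w2[j] * hj * (1 - hj)
            for i, xi in enumerate(x + [1.0]):
                self.w1[j][i] -= lr * d_h * xi
        for j, hj in enumerate(self.h + [1.0]):            # then output weights
            self.w2[j] -= lr * d_out * hj

# synthetic "comprehension" (1) vs "non-comprehension" (0) feature vectors
data = [([0.9, 0.8, 0.1], 1), ([0.8, 0.9, 0.2], 1),
        ([0.1, 0.2, 0.9], 0), ([0.2, 0.1, 0.8], 0)]
net = TinyANN(n_in=3, n_hidden=4)
for _ in range(2000):
    for x, t in data:
        net.train(x, t)

accuracy = sum((net.forward(x) > 0.5) == bool(t) for x, t in data) / len(data)
print(accuracy)
```

In the paper, a series of topologies (hidden-layer sizes) would be compared on held-out data rather than fixed in advance as here.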

    Detecting human comprehension from nonverbal behaviour using artificial neural networks

    Every day, communication between humans is abundant with an array of nonverbal behaviours. Nonverbal behaviours are signals emitted without using words, such as facial expressions, eye gaze and body movement. Nonverbal behaviours have been used in previous research to identify a person’s emotional state. With nonverbal behaviour being continuously available and almost unconscious, it provides a potentially rich source of knowledge once decoded. Humans are weak decoders of nonverbal behaviour, being error-prone, susceptible to fatigue and poor at simultaneously monitoring numerous nonverbal behaviours. Human comprehension is primarily assessed from written and spoken language, and existing comprehension assessment tools are inhibited by inconsistencies and are often time-consuming with delayed feedback. Therefore, there is a niche for attempting to detect human comprehension from nonverbal behaviour using artificially intelligent computational models such as Artificial Neural Networks (ANNs), which are inspired by the structure and behaviour of biological neural networks such as those found within the human brain. This Thesis presents a novel adaptable system known as FATHOM, which has been developed to detect human comprehension and non-comprehension by monitoring multiple nonverbal behaviours using ANNs. FATHOM’s Comprehension Classifier ANN was trained and validated on human comprehension detection using the error-backpropagation learning algorithm and cross-validation, in a series of experiments with nonverbal datasets extracted from two independent comprehension studies in which each participant was digitally video recorded: (1) during a mock informed consent field study and (2) in a learning environment. The Comprehension Classifier ANN repeatedly achieved averaged testing classification accuracies (CA) above 84% in the first phase of the mock informed consent field study. In the learning environment study, the optimised Comprehension Classifier ANN achieved a 91.385% averaged testing CA. Overall, the findings revealed that human comprehension and non-comprehension patterns can be automatically detected from multiple nonverbal behaviours using ANNs.
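The cross-validation procedure mentioned above can be sketched in outline. This is an assumed workflow, not FATHOM's code: k-fold cross-validation partitions the dataset, fits a model on each training split, and averages the per-fold testing classification accuracy (CA):

```python
# Generic k-fold cross-validation sketch; the "model" used in the demo at
# the bottom is a trivial majority-class stand-in, not a neural network.
def k_fold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k contiguous folds."""
    fold = n // k
    for i in range(k):
        test = list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
        train = [j for j in range(n) if j not in test]
        yield train, test

def cross_validate(samples, labels, fit, predict, k=10):
    accuracies = []
    for train, test in k_fold_indices(len(samples), k):
        model = fit([samples[i] for i in train], [labels[i] for i in train])
        hits = sum(predict(model, samples[i]) == labels[i] for i in test)
        accuracies.append(hits / len(test))
    return sum(accuracies) / len(accuracies)   # averaged testing CA

# degenerate toy check: all-one-class data with a majority-class "model"
xs = list(range(20))
ys = [1] * 20
avg_ca = cross_validate(xs, ys,
                        fit=lambda X, Y: max(set(Y), key=Y.count),
                        predict=lambda m, x: m, k=5)
print(avg_ca)
```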

    FATHOM: A Neural Network-based Non-verbal Human Comprehension Detection System for Learning Environments

    This paper presents the application of FATHOM, a computerised non-verbal comprehension detection system, to distinguishing participant comprehension levels in an interactive tutorial. FATHOM detects high and low levels of human comprehension by concurrently tracking multiple non-verbal behaviours using artificial neural networks. Presently, human comprehension is predominantly monitored from written and spoken language. Therefore, a large niche exists for exploring human comprehension detection from a non-verbal behavioural perspective using artificially intelligent computational models such as neural networks. In this paper, FATHOM was applied to a video-recorded exploratory study containing a learning task designed to elicit high and low comprehension states from the learner. The learning task comprised watching a video on termites, suitable for the general public, followed by an interviewer-led question-and-answer session. This paper describes how FATHOM’s comprehension classifier artificial neural network was trained and validated in comprehension detection using the standard backpropagation algorithm. The results show that high and low comprehension states can be detected from learners’ non-verbal behavioural cues, with testing classification accuracies above 76%.

    A hybrid model combining neural networks and decision tree for comprehension detection

    The Artificial Neural Network is generally considered to be an effective classifier, but it is also a “Black Box” component whose internal behaviour cannot be understood by human users. This lack of transparency forms a barrier to acceptance in high-stakes applications by the general public. This paper investigates a hybrid model comprising multiple artificial neural networks with a final C4.5 decision tree classifier, exploring the potential to explain the classification decision through production rules. Two large datasets collected from comprehension studies are used to assess the value of the C4.5 decision tree as the overall comprehension classifier in terms of accuracy and decision transparency. Empirical trials show that higher accuracies are achieved by using a decision tree classifier, but the significant tree size calls into question the transparency of the rules to a human.
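The hybrid shape described above can be sketched as follows. This is an assumed, heavily simplified illustration: several per-channel neural classifiers emit scores in [0, 1], and a final decision tree over those scores yields both a class and a human-readable production rule. The tree here is hand-built and the channel names are hypothetical; C4.5 would induce its split thresholds from data:

```python
# Hybrid pipeline sketch: per-channel scores -> decision tree -> (label, rule).
def channel_scores(sample):
    # stand-ins for trained per-channel neural classifiers
    return {"eye_gaze": sample["gaze"], "head_move": sample["head"]}

def tree_classify(scores):
    # tiny hand-built tree; each leaf carries its production rule as text
    if scores["eye_gaze"] > 0.6:
        if scores["head_move"] > 0.5:
            return ("comprehension",
                    "IF eye_gaze>0.6 AND head_move>0.5 THEN comprehension")
        return ("non-comprehension",
                "IF eye_gaze>0.6 AND head_move<=0.5 THEN non-comprehension")
    return ("non-comprehension",
            "IF eye_gaze<=0.6 THEN non-comprehension")

label, rule = tree_classify(channel_scores({"gaze": 0.8, "head": 0.7}))
print(label, "|", rule)
```

The transparency concern in the abstract maps directly onto tree size: each additional level multiplies the number of such rules, so a large induced tree produces rules far less readable than this two-level toy.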

    Near real-time comprehension classification with artificial neural networks: decoding e-Learner non-verbal behaviour

    Comprehension is an important cognitive state for learning. Human tutors recognise comprehension and non-comprehension states by interpreting learner non-verbal behaviour (NVB). Experienced tutors adapt pedagogy, materials and instruction to provide additional learning scaffolding in the context of perceived learner comprehension. Near real-time assessment of e-learner comprehension of on-screen information could provide a powerful tool both for adaptation within intelligent e-learning platforms and for appraisal of tutorial content for learning analytics. However, the literature suggests that no existing method for automatic classification of learner comprehension by analysis of NVB provides a practical solution in an e-learning, on-screen context. This paper presents the design, development and evaluation of COMPASS, a novel near real-time comprehension classification system for detecting learner comprehension of on-screen information during e-learning activities. COMPASS uses a novel descriptive analysis of learner behaviour, image processing techniques and artificial neural networks to model and classify authentic comprehension-indicative non-verbal behaviour. This paper presents a study in which 44 undergraduate students answered on-screen multiple-choice questions relating to computer programming. Using a front-facing USB web camera, the behaviour of each learner was recorded during reading and appraisal of on-screen information. The resultant dataset of non-verbal behaviour and question-answer scores was used to train an artificial neural network (ANN) to classify comprehension and non-comprehension states in near real time. The trained comprehension classifier achieved a normalised classification accuracy of 75.8%.
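A near real-time pipeline of the kind described above can be sketched as follows. This is an assumed pipeline, not COMPASS itself: per-frame behaviour features are scored by a trained classifier, and scores are smoothed over a short sliding window so a class label can be reported with minimal delay. The scoring function here is a placeholder:

```python
# Near real-time classification sketch: frame features -> score -> smoothed label.
from collections import deque

def classify_frame(features, score=lambda f: sum(f) / len(f)):
    """Return a raw comprehension score in [0, 1] for one frame (placeholder)."""
    return score(features)

def stream_labels(frames, window=3, threshold=0.5):
    recent = deque(maxlen=window)   # sliding window of recent scores
    labels = []
    for f in frames:
        recent.append(classify_frame(f))
        smoothed = sum(recent) / len(recent)
        labels.append("comprehension" if smoothed > threshold
                      else "non-comprehension")
    return labels

# synthetic feature vectors: two early "comprehending" frames, then a drop
frames = [[0.9, 0.8], [0.7, 0.9], [0.2, 0.1], [0.1, 0.2], [0.1, 0.1]]
labels = stream_labels(frames)
print(labels)
```

The window length trades responsiveness against stability: a longer window suppresses per-frame noise but delays the point at which a genuine change of state is reported.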

    GDPR Impact on Computational Intelligence Research

    The General Data Protection Regulation (GDPR) becomes a legal requirement from 25th May 2018 for all organizations in Europe that collect and process data. One of the major changes, detailed in Article 22 of the GDPR, concerns the right of an individual not to be subject to automated decision-making, which includes profiling, unless explicit consent is given. Individuals who are subject to such decision-making have the right to ask for an explanation of how the decision was reached, and organizations must utilize appropriate mathematical and statistical procedures. All data collection, including that for research projects, requires a privacy-by-design approach, and the data controller must complete a Data Protection Impact Assessment in addition to gaining ethical approval. This paper discusses the impact of the GDPR on research projects which contain elements of computational intelligence undertaken within a University or with an Academic Partner.

    Modelling e-learner comprehension within a conversational intelligent tutoring system

    Conversational Intelligent Tutoring Systems (CITS) are agent-based e-learning systems which deliver tutorial content through discussion, asking and answering questions, identifying gaps in knowledge and providing feedback in natural language. Personalisation and adaptation for CITS are current research focuses in the field. Classroom studies have shown that experienced human tutors automatically estimate, through experience, a learner’s level of subject comprehension during interactions, and modify lesson content, activities and pedagogy in response. This paper introduces Hendrix 2.0, a novel CITS capable of classifying e-learner comprehension in real time from webcam images. Hendrix 2.0 integrates a novel image processing and machine learning algorithm, COMPASS, which rapidly detects a broad range of non-verbal behaviours, producing a time series of comprehension estimates on a scale from -1.0 to +1.0. This paper reports an empirical study of comprehension classification accuracy, during which 51 students at Manchester Metropolitan University undertook conversational tutoring with Hendrix 2.0. The authors evaluate the accuracy of strong comprehension and strong non-comprehension classifications during conversational questioning. The results show that the COMPASS comprehension classifier achieved a normalised classification accuracy of 75%.
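The time-series output described above can be illustrated with a short sketch (assumed, following the abstract's description): estimates arrive on a -1.0 to +1.0 scale, and only "strong" readings beyond some threshold are kept for evaluation. The 0.6 cut-off below is illustrative, not the authors' value:

```python
# Filter a -1.0..+1.0 comprehension time series down to "strong" readings.
STRONG = 0.6   # hypothetical cut-off for a "strong" classification

def strong_classifications(estimates):
    """Return (time_index, label) pairs for estimates beyond +/-STRONG."""
    out = []
    for t, e in enumerate(estimates):
        if e >= STRONG:
            out.append((t, "strong comprehension"))
        elif e <= -STRONG:
            out.append((t, "strong non-comprehension"))
    return out

# synthetic estimate series from a hypothetical tutoring session
series = [0.1, 0.7, 0.9, -0.2, -0.8, 0.3]
events = strong_classifications(series)
print(events)
```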

    A Review of Evaluation Practices of Gesture Generation in Embodied Conversational Agents

    Embodied Conversational Agents (ECAs) take on different forms, including virtual avatars or physical agents such as humanoid robots. ECAs are often designed to produce nonverbal behaviour to complement or enhance their verbal communication. One form of nonverbal behaviour is co-speech gesturing: movements that the agent makes with its arms and hands, paired with verbal communication. Co-speech gestures for ECAs can be created using different generation methods, such as rule-based and data-driven processes. However, reports on gesture generation methods use a variety of evaluation measures, which hinders comparison. To address this, we conducted a systematic review of co-speech gesture generation methods for iconic, metaphoric, deictic or beat gestures, including their evaluation methods. We reviewed 22 studies in which an ECA with a human-like upper body used co-speech gesturing in a social human-agent interaction, each including a user study to evaluate its performance. We found that most studies used a within-subject design and relied on a form of subjective evaluation, but lacked a systematic approach. Overall, methodological quality was low to moderate and few systematic conclusions could be drawn. We argue that the field requires rigorous and uniform tools for the evaluation of co-speech gesture systems. We propose recommendations for future empirical evaluation, including standardised phrases and test scenarios for testing generative models, and a research checklist that can be used to report relevant information for the evaluation of generative models as well as to evaluate co-speech gesture use.