98 research outputs found

    FATHOM: A Neural Network-based Non-verbal Human Comprehension Detection System for Learning Environments

    This paper presents the application of FATHOM, a computerised non-verbal comprehension detection system, to distinguishing participant comprehension levels in an interactive tutorial. FATHOM detects high and low levels of human comprehension by concurrently tracking multiple non-verbal behaviours using artificial neural networks. At present, human comprehension is predominantly monitored through written and spoken language, so a large niche exists for exploring human comprehension detection from a non-verbal behavioural perspective using artificially intelligent computational models such as neural networks. In this paper, FATHOM was applied to a video-recorded exploratory study containing a learning task designed to elicit high and low comprehension states from the learner. The learning task comprised watching a video on termites, suitable for the general public, followed by an interviewer-led question-and-answer session. This paper describes how FATHOM's comprehension classifier artificial neural network was trained and validated for comprehension detection using the standard backpropagation algorithm. The results show that high and low comprehension states can be detected from learners' non-verbal behavioural cues, with testing classification accuracies above 76%.
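    The training-and-testing pipeline described above can be sketched in outline. The snippet below is a minimal illustration using scikit-learn's MLPClassifier (a feed-forward network trained by backpropagation) on synthetic stand-in data; FATHOM's actual features, architecture and datasets are not public and are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_samples, n_channels = 400, 20          # 20 hypothetical nonverbal channels
X = rng.normal(size=(n_samples, n_channels))
# Synthetic label: 1 = high comprehension, 0 = low (invented rule)
y = (X[:, :5].sum(axis=1) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# A small feed-forward network trained with backpropagation
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"testing classification accuracy: {clf.score(X_te, y_te):.2f}")
```

    The held-out test score plays the role of the paper's "testing classification accuracy"; real nonverbal feature vectors would replace the random matrix.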

    Airport artificial intelligence can detect deception: or am I lying?


    Measuring Human Comprehension from Nonverbal Behaviour using Artificial Neural Networks

    This paper presents the adaptation and application of Silent Talker, a psychological profiling system, to the measurement of human comprehension through the monitoring of multiple channels of facial nonverbal behaviour using Artificial Neural Networks (ANNs). Everyday human interactions are abundant with almost unconscious nonverbal behaviours, which account for approximately 93% of communication and provide a potentially rich source of information once decoded. Existing comprehension assessment techniques are inhibited by inconsistencies, limited to the verbal communication dimension, and often time-consuming with delayed feedback. Humans are hindered as accurate decoders of nonverbal behaviour: they are error-prone, inconsistent and poor at focusing on multiple channels simultaneously. Furthermore, human decoders are susceptible to fatigue and require training, resulting in a costly, time-consuming process. ANNs are powerful, adaptable, scalable computational models that can overcome these human decoding and pattern classification weaknesses. Therefore, the neural-network-based Silent Talker system has been trained and validated in the measurement of human comprehension using videotaped participant nonverbal behaviour from an informed consent field study. A series of experiments on training backpropagation ANNs with different topologies was conducted. The results show that comprehension and non-comprehension patterns exist within the monitored multiple channels of facial nonverbal behaviour, with the experiments consistently yielding classification accuracies above 80%.
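    As a rough sketch of what "experiments with different topologies" can look like in practice, the snippet below compares several hidden-layer configurations of a backpropagation network on synthetic stand-in data; the Silent Talker feature set and labels are assumed, not reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 16))                     # stand-in feature vectors
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)  # invented labelling rule
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=4)

for hidden in [(4,), (8,), (16,), (8, 4)]:         # candidate topologies
    clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=2000,
                        random_state=4).fit(X_tr, y_tr)
    print(hidden, f"{clf.score(X_te, y_te):.2f}")
```

    Comparing held-out accuracy across topologies like this is one conventional way to select a network architecture.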

    Are developmental disorders like cases of adult brain damage? Implications from connectionist modelling

    It is often assumed that similar domain-specific behavioural impairments found in cases of adult brain damage and developmental disorders correspond to similar underlying causes, and can serve as convergent evidence for the modular structure of the normal adult cognitive system. We argue that this correspondence is contingent on an unsupported assumption that atypical development can produce selective deficits while the rest of the system develops normally (Residual Normality), and that this assumption tends to bias data collection in the field. Based on a review of connectionist models of acquired and developmental disorders in the domains of reading and past tense, as well as on new simulations, we explore the computational viability of Residual Normality and the potential role of development in producing behavioural deficits. Simulations demonstrate that damage to a developmental model can produce very different effects depending on whether it occurs prior to or following the training process. Because developmental disorders typically involve damage prior to learning, we conclude that the developmental process is a key component of the explanation of end-state impairments in such disorders. Further simulations demonstrate that in simple connectionist learning systems, the assumption of Residual Normality is undermined by processes of compensation or alteration elsewhere in the system. We outline the precise computational conditions required for Residual Normality to hold in development, and suggest that in many cases it is an unlikely hypothesis. We conclude that in developmental disorders, inferences from behavioural deficits to underlying structure crucially depend on developmental conditions, and that the process of ontogenetic development cannot be ignored in constructing models of developmental disorders.

    Detecting human comprehension from nonverbal behaviour using artificial neural networks

    Every day, communication between humans is abundant with an array of nonverbal behaviours. Nonverbal behaviours are signals emitted without words, such as facial expressions, eye gaze and body movement. Nonverbal behaviours have been used to identify a person's emotional state in previous research. Being continuously available and almost unconscious, nonverbal behaviour provides a potentially rich source of knowledge once decoded. Humans are weak decoders of nonverbal behaviour: they are error-prone, susceptible to fatigue and poor at simultaneously monitoring numerous nonverbal behaviours. Human comprehension is primarily assessed from written and spoken language, and existing comprehension assessment tools are inhibited by inconsistencies and are often time-consuming with delayed feedback. Therefore, there is a niche for attempting to detect human comprehension from nonverbal behaviour using artificially intelligent computational models such as Artificial Neural Networks (ANNs), which are inspired by the structure and behaviour of biological neural networks such as those found within the human brain. This thesis presents a novel adaptable system known as FATHOM, developed to detect human comprehension and non-comprehension by monitoring multiple nonverbal behaviours using ANNs. FATHOM's Comprehension Classifier ANN was trained and validated for human comprehension detection using the error-backpropagation learning algorithm and cross-validation, in a series of experiments with nonverbal datasets extracted from two independent comprehension studies in which each participant was digitally video recorded: (1) a mock informed consent field study and (2) a learning environment. The Comprehension Classifier ANN repeatedly achieved averaged testing classification accuracies (CA) above 84% in the first phase of the mock informed consent field study. In the learning environment study, the optimised Comprehension Classifier ANN achieved a 91.385% averaged testing CA. Overall, the findings revealed that human comprehension and non-comprehension patterns can be automatically detected from multiple nonverbal behaviours using ANNs.
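    The "averaged testing CA" figures quoted above come from cross-validation: the data are split into folds, the network is trained on all but one fold and tested on the held-out fold, and the test accuracies are averaged. A minimal sketch of that evaluation loop, on synthetic data rather than the thesis's nonverbal datasets, might look like:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 12))           # stand-in nonverbal feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # invented labelling rule

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=1)
scores = cross_val_score(clf, X, y, cv=5)  # 5 train/test folds
print(f"averaged testing CA: {scores.mean():.3f}")
```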

    Models of atypical development must also be models of normal development

    Functional magnetic resonance imaging studies of developmental disorders and normal cognition that include children are becoming increasingly common and represent part of the newly expanding field of developmental cognitive neuroscience. These studies have illustrated the importance of the process of development in understanding the brain mechanisms underlying cognition, and of including children in the study of the etiology of developmental disorders.

    A hybrid model combining neural networks and decision tree for comprehension detection

    The Artificial Neural Network is generally considered to be an effective classifier, but also a “Black Box” component whose internal behaviour cannot be understood by human users. This lack of transparency forms a barrier to acceptance by the general public in high-stakes applications. This paper investigates a hybrid model comprising multiple artificial neural networks with a final C4.5 decision tree classifier, to explore the potential of explaining the classification decision through production rules. Two large datasets collected from comprehension studies are used to assess the value of the C4.5 decision tree as the overall comprehension classifier in terms of accuracy and decision transparency. Empirical trials show that higher accuracies are achieved by using a decision tree classifier, but the substantial tree size calls into question the transparency of the rules to a human.
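    The hybrid idea can be sketched as follows: each ANN scores a subset of input channels, and its output probability becomes a feature for a final decision tree, whose splits can be printed as human-readable rules. The snippet uses CART (scikit-learn's tree algorithm) as a stand-in for C4.5, which scikit-learn does not implement, and an invented channel split on synthetic data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 12))
y = (X[:, 0] - X[:, 6] > 0).astype(int)   # invented labelling rule

# One small ANN per channel group; its output becomes a tree feature
groups = [slice(0, 6), slice(6, 12)]
nets = [MLPClassifier(hidden_layer_sizes=(6,), max_iter=2000,
                      random_state=2).fit(X[:, g], y) for g in groups]
meta = np.column_stack([net.predict_proba(X[:, g])[:, 1]
                        for net, g in zip(nets, groups)])

# Final classifier: a shallow tree over the ANN outputs, rules printable
tree = DecisionTreeClassifier(max_depth=3, random_state=2).fit(meta, y)
print(export_text(tree))
```

    Capping the tree depth is one way to keep the extracted rules small enough to read, at some cost in accuracy; the paper's observation is precisely that unconstrained trees grow too large to stay transparent.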

    Building bridges for better machines : from machine ethics to machine explainability and back

    Be it nursing robots in Japan, self-driving buses in Germany or automated hiring systems in the USA, complex artificial computing systems have become an indispensable part of our everyday lives. Two major challenges arise from this development: machine ethics and machine explainability. Machine ethics deals with behavioral constraints on systems to ensure restricted, morally acceptable behavior; machine explainability affords the means to satisfactorily explain the actions and decisions of systems so that human users can understand these systems and, thus, be assured of their socially beneficial effects. Machine ethics and machine explainability prove particularly effective only in symbiosis. In this context, this thesis demonstrates how machine ethics requires machine explainability and how machine explainability includes machine ethics. We develop these two facets using examples from the scenarios above. Based on these examples, we argue for a specific view of machine ethics and suggest how it can be formalized in a theoretical framework. In terms of machine explainability, we outline how our proposed framework, by using an argumentation-based approach for decision making, can provide a foundation for machine explanations. Beyond the framework, we also clarify the notion of machine explainability as a research area, charting its diverse and often confusing literature. To this end, we outline what, exactly, machine explainability research aims to accomplish. Finally, we use all these considerations as a starting point for developing evaluation criteria for good explanations, such as comprehensibility, assessability, and fidelity. Evaluating our framework using these criteria shows that it is a promising approach, one likely to outperform many of the explainability approaches developed so far.
    Funding: DFG CRC 248 (Center for Perspicuous Computing); VolkswagenStiftung: Explainable Intelligent System

    Supervised Learning Models to Predict Stock Direction Within Different Sectors in a Bull and Bear Market

    Forecasting stock market price movement is a well-researched and alluring topic within the machine learning and financial realms. Supervised machine learning algorithms such as Random Forest (RF) and Support Vector Machines (SVM) have been used independently to gain insight into the market. Given such volatility in the market, this study applies RF and SVM to a highly volatile market to determine whether these models perform at a high level, or outperform each other, in both bull and bear markets. This comparative study is performed on 16 stocks in 4 different sectors over the bear market "housing crash" of 2008. The models use technical indicators as their respective input parameters to assist in predicting stock price movement. Although the No Free Lunch Theorem states that no one model can outperform another in general, the study displayed higher accuracy for the RF model. Each model was evaluated using the confusion matrix to calculate precision, recall, and F1 score.
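    The comparison described above can be sketched as training both models on the same feature matrix and reporting confusion-matrix-derived metrics. The snippet below uses random stand-ins for the technical indicators (e.g. RSI, MACD, moving averages) and an invented up/down labelling rule; the study's actual stock data are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 8))                  # stand-ins for indicators
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = price up, 0 = down

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)
for name, model in [("RF", RandomForestClassifier(random_state=3)),
                    ("SVM", SVC(random_state=3))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    p, r, f1, _ = precision_recall_fscore_support(y_te, pred,
                                                  average="binary")
    print(f"{name}: precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```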