5,858 research outputs found

    Designing Human-Centered Collective Intelligence

    Human-Centered Collective Intelligence (HCCI) is an emergent research area that seeks to bring together major research areas like machine learning, statistical modeling, information retrieval, market research, and software engineering to address challenges pertaining to deriving intelligent insights and solutions through the collaboration of several intelligent sensors, devices and data sources. An archetypal contextual CI scenario might be concerned with deriving affect-driven intelligence through multimodal emotion detection sources in a bid to determine the likability of one movie trailer over another. On the other hand, the key tenets of designing robust and evolutionary software and infrastructure architecture models to address cross-cutting quality concerns are of keen interest in the “Cloud” age of today. Some of the key quality concerns of interest in CI scenarios span the gamut of security and privacy, scalability, performance, fault-tolerance, and reliability. I present recent advances in CI system design with a focus on highlighting optimal solutions for the aforementioned cross-cutting concerns. I also describe a number of design challenges and a framework that I have determined to be critical to designing CI systems. With inspiration from machine learning, computational advertising, ubiquitous computing, and sociable robotics, this work incorporates theories and concepts from various viewpoints to empower the collective intelligence engine, ZOEI, to discover affective state and emotional intent across multiple mediums. The discerned affective state is used in recommender systems, among others, to support content personalization. I dive into the design of optimal architectures that allow humans and intelligent systems to work collectively to solve complex problems. I present an evaluation of various studies that leverage the ZOEI framework to design collective intelligence systems.
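As a toy illustration of the kind of multimodal fusion such a CI pipeline performs (the abstract does not describe ZOEI's internals, so the function, modality labels, weights and numbers below are hypothetical), per-source affect readings can be combined into one collective likability estimate:

```python
# Hypothetical late-fusion sketch: each sensor/source reports a
# confidence-weighted valence score for a trailer; the "collective"
# estimate is the confidence-weighted mean across sources.
def fuse_affect(readings):
    """readings: list of (valence in [-1, 1], confidence in [0, 1])."""
    total_conf = sum(conf for _, conf in readings)
    if total_conf == 0:
        return 0.0
    return sum(v * c for v, c in readings) / total_conf

trailer_a = [(+0.8, 0.9), (+0.4, 0.5), (-0.1, 0.2)]  # e.g. face, voice, posture
trailer_b = [(+0.2, 0.9), (+0.6, 0.3)]
print(fuse_affect(trailer_a) > fuse_affect(trailer_b))  # True here
```

A production system would replace the weighted mean with a learned fusion model, but the aggregation step has this shape.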

    Eye quietness and quiet eye in expert and novice golf performance: an electrooculographic analysis

    Quiet eye (QE) is the final ocular fixation on the target of an action (e.g., the ball in golf putting). Camera-based eye-tracking studies have consistently found longer QE durations in experts than novices; however, the mechanisms underlying QE are not known. To offer a new perspective we examined the feasibility of measuring the QE using electrooculography (EOG) and developed an index to assess ocular activity across time: eye quietness (EQ). Ten expert and ten novice golfers putted 60 balls to a 2.4 m distant hole. Horizontal EOG (2 ms resolution) was recorded from two electrodes placed on the outer sides of the eyes. QE duration was measured using an EOG voltage threshold and comprised the sum of the pre-movement and post-movement initiation components. EQ was computed as the standard deviation of the EOG in 0.5 s bins from –4 to +2 s, relative to backswing initiation: lower values indicate less movement of the eyes, hence greater quietness. Finally, we measured club-ball address and swing durations. T-tests showed that total QE did not differ between groups (p = .31); however, experts had marginally shorter pre-movement QE (p = .08) and longer post-movement QE (p < .001) than novices. A group × time ANOVA revealed that experts had less EQ before backswing initiation and greater EQ after backswing initiation (p = .002). QE durations were inversely correlated with EQ from –1.5 to 1 s (rs = –.48 to –.90, ps = .03 to .001). Experts had longer swing durations than novices (p = .01) and, importantly, swing durations correlated positively with post-movement QE (r = .52, p = .02) and negatively with EQ from 0.5 to 1 s (r = –.63, p = .003). This study demonstrates the feasibility of measuring ocular activity using EOG and validates EQ as an index of ocular activity. Its findings challenge the dominant perspective on QE and provide new evidence that expert-novice differences in ocular activity may reflect differences in the kinematics of how experts and novices execute skills.
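The EQ computation described above (standard deviation of the horizontal EOG in 0.5 s bins over a window from –4 to +2 s around backswing initiation, 2 ms sampling) can be sketched as follows; the function name and the synthetic trace are illustrative, not the study's actual code:

```python
import numpy as np

def eye_quietness(eog, fs=500, t_start=-4.0, t_end=2.0, bin_s=0.5):
    """Eye quietness: SD of the horizontal EOG in fixed-width bins.

    eog : 1-D voltage trace covering [t_start, t_end) relative to
          backswing initiation, sampled at fs Hz (2 ms resolution => 500 Hz).
    Returns one SD per 0.5 s bin; lower values = quieter eyes.
    """
    samples_per_bin = int(bin_s * fs)
    n_bins = int((t_end - t_start) / bin_s)
    eog = np.asarray(eog)[: n_bins * samples_per_bin]
    return eog.reshape(n_bins, samples_per_bin).std(axis=1)

# Illustrative trace: quiet eyes (low-amplitude noise) before backswing
# initiation, a saccade-like high-amplitude burst afterwards.
rng = np.random.default_rng(0)
trace = np.concatenate([rng.normal(0, 1, 2000),    # -4 to 0 s
                        rng.normal(0, 20, 1000)])  # 0 to +2 s
eq = eye_quietness(trace)
print(len(eq))  # 12 bins of 0.5 s
```

On such a trace the first eight bins (pre-movement) come out far quieter than the last four, which is exactly the contrast the group × time ANOVA tests.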

    Affective automotive user interfaces

    Technological progress in the fields of ubiquitous sensing and machine learning has been fueling the development of user-aware human-computer interaction in recent years. Especially natural user interfaces, like digital voice assistants, can benefit from understanding their users in order to provide a more naturalistic experience. Such systems can, for example, detect the emotional state of users and accordingly act in an empathic way. One major research field working on this topic is Affective Computing, where psycho-physiological measures, speech input, and facial expressions are used to sense human emotions. Affective data allows natural user interfaces to respond to emotions, providing promising perspectives not only for user experience design but also for safety aspects. In automotive environments, informed estimations of the driver’s state can potentially avoid dangerous errors, and evoking positive emotions can improve the experience of driving. This dissertation explores Affective Automotive User Interfaces using two basic interaction paradigms: firstly, emotion regulation systems react to the current emotional state of the user based on live sensing data, allowing for quick interventions. Secondly, emotional interaction synthesizes experiences which resonate with the user on an emotional level. The stated goals of these two interaction approaches are the promotion of safe behavior and an improvement of user experience. Promoting safe behavior through emotion regulation: Systems which detect and react to the driver’s state are expected to have great potential for improving road safety. This work presents a model and methods needed to investigate such systems and an exploration of several approaches to keep the driver in a safe state. The presented methods include techniques to induce emotions and to sample the emotional state of drivers. 
Three driving simulator studies investigate the impacts of emotion-aware interventions in the form of implicit cues, visual mirroring and empathic speech synthesis. We envision emotion-awareness as a safety feature which can detect if a driver is unfit or in need of support, contingent on the proliferation of robust emotion detection technology. Improving user experience with emotional interaction: Emotional perception is an essential part of user experience. This thesis entails methods to build emotional experiences derived from a variety of lab and simulator studies, expert feedback, car-storming sessions and design thinking workshops. Systems capable of adapting to the user’s preferences and traits in order to create an emotionally satisfactory user experience do not require the input of emotion detection. They rather create value through general knowledge about the user by adapting the output they generate. During this research, cultural and generational influences became evident, which have to be considered when implementing affective automotive user interfaces in future cars. We argue that the future of user-aware interaction lies in adapting not only to the driver’s preferences and settings but also to their current state. This paves the way for the regulation of safe behavior, especially in safety-critical environments like cars, and an improvement of the driving experience.
Recent advances in machine learning and ubiquitous computing make it possible to build adaptive human-machine interfaces today. Natural interaction in particular, as we know it from voice assistants, benefits from a better understanding of user behavior. With information about the user's emotional state, for example, an assistant can interact more naturally, perhaps even show empathy. Affective Computing is the associated research field, concerned with recognizing human emotions by observing physiological data, speech and facial expressions. Emotion recognition thus enables natural interaction based on the driver's state, which is promising not only for the design of the user experience but also has applications in road safety. Deployed in the driving context, it could reduce avoidable accidents while delighting drivers through emotional interaction. This dissertation examines Affective Automotive User Interfaces along two pillars: first, we use emotion-regulation approaches to intervene in the case of dangerous driver states. Second, we create emotionally charged interactions to improve the user experience. Increased safety through emotion regulation: Emotion-adaptive systems are credited with great potential for improving road safety. We present a model and the methods needed to investigate such systems, and explore approaches that serve to keep drivers in an emotional state which permits safe behavior. The methods presented include approaches to emotion induction and recognition, as well as three driving simulator studies on influencing drivers through implicit cues, mirroring of emotions and empathic speech interaction. In the future, emotion-adaptive safety systems can support impaired drivers and thus make traffic safer, provided the technical foundations of emotion recognition mature. 
Improving the user experience through emotional interaction: Emotions contribute a large part of the user experience, so it is only sensible to place the second focus of this work on system-initiated emotional interaction. We present the results of user-centered ideation and of several evaluation studies of the resulting systems. Adapting to users' preferences and traits does not strictly require emotion recognition; the added value of such systems lies rather in enabling an emotionally engaging experience on the basis of available behavioral data. In our work we also encounter cultural and demographic influences that must be taken into account when designing emotion-adaptive user interfaces. We see the future of user-adaptive interaction in the vehicle not in purely behavior-based adaptation, but expect emotion-related innovations as well. In this way, future systems can regulate safety-relevant behavior while preserving the joy of driving.
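The emotion-regulation paradigm described above (sense the driver's state from live data, then intervene quickly) can be sketched as a simple control loop; the state labels, thresholds and interventions below are illustrative placeholders, not the dissertation's actual implementation:

```python
# Hypothetical sketch of an emotion-aware intervention loop. A real
# system would feed multimodal sensor data into an emotion classifier;
# here the classifier is a stub over a (valence, arousal) pair.
INTERVENTIONS = {
    "angry":  "empathic speech: acknowledge frustration, suggest a break",
    "drowsy": "implicit cue: brighten ambient light, open air vent",
    "calm":   None,  # driver is in a safe state; no intervention
}

def classify(valence, arousal):
    """Map a valence/arousal estimate to a coarse driver state."""
    if arousal > 0.7 and valence < 0.0:
        return "angry"
    if arousal < 0.2:
        return "drowsy"
    return "calm"

def regulate(valence, arousal):
    """Return the intervention for the current state, if any."""
    return INTERVENTIONS[classify(valence, arousal)]

print(regulate(-0.5, 0.9))  # high-arousal negative state -> empathic speech
```

The two intervention styles mirror the abstract's examples (empathic speech synthesis and implicit cues); the thresholds would come from the emotion detection model in practice.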

    Learning through assessment

    This book aims to contribute to the discourse of learning through assessment within a self-directed learning environment. It adds to the scholarship of assessment and self-directed learning within face-to-face and online learning environments. As part of the NWU Self-Directed Learning Book Series, this book is devoted to scholarship in the field of self-directed learning, focusing on ongoing and envisaged assessment practices for self-directed learning through which learning within the 21st century can take place. This book acknowledges and emphasises the role of assessment as a pedagogical tool to foster self-directed learning during face-to-face and online learning situations. The way in which higher education conceptualises teaching, learning and assessment has been inevitably changed by the COVID-19 pandemic, and now more than ever we need learners to be self-directed in their learning. Assessment plays a key role in learning and, therefore, we have to identify innovative ways in which learning can be assessed, which are likely to become the new norm even after the pandemic has been brought under control. The goal of this book, consisting of original research, is to assist with the paradigm shift regarding the purpose of assessment, as well as to provide new ideas on assessment strategies, methods and tools appropriate to foster self-directed learning in all modes of delivery.

    Motivation Modelling and Computation for Personalised Learning of People with Dyslexia

    The increasing development of e-learning systems in recent decades has benefited ubiquitous computing and education by providing freedom of choice to satisfy various needs and preferences about learning places and paces. Automatic recognition of learners’ states is necessary for personalised services or interventions to be provided in e-learning environments. In the current literature, assessment of learners’ motivation for personalised learning based on motivational states is lacking. An effective learning environment needs to address learners’ motivational needs, particularly for those with dyslexia. Dyslexia or other learning difficulties can cause young people not to engage fully with the education system or to drop out for complex reasons: in addition to the learning difficulties related to reading, writing or spelling, psychological difficulties such as lower academic self-worth and a lack of learning motivation, caused by these unavoidable learning difficulties, are more likely to be ignored. Associated with both cognitive processes and emotional states, motivation is a multifaceted concept that results in the continued intention to use an e-learning system and thus a better chance of learning effectiveness and success. It consists of factors of intrinsic motivation, driven by learners’ inner feelings of interest or challenge, and factors of extrinsic motivation, associated with external rewards or compliments. These factors represent learners’ various motivational needs; understanding them therefore requires a multidisciplinary approach. Combining different perspectives of knowledge on psychological theories and technology acceptance models with the empirical findings from a qualitative study with dyslexic students conducted in the present research project, motivation modelling for people with dyslexia using a hybrid approach is the main focus of this thesis. 
Specifically, in addition to the qualitative conceptual motivation model and the ontology-based computational model that formally expresses the motivational factors affecting users’ continued intention to use e-learning systems, this thesis also conceives a quantitative approach to motivation modelling. A multi-item motivation questionnaire is designed and employed in a quantitative study with dyslexic students, and structural equation modelling techniques are used to quantify the influences of the motivational factors on continued use intention and their interrelationships in the model. In addition to the traditional approach to motivation computation that relies on learners’ self-reported data, this thesis also employs dynamic sensor data and develops classification models using logistic regression for real-time assessment of motivational states. A rule-based reasoning mechanism for personalising motivational strategies and a framework for motivationally personalised e-learning systems are introduced to apply the research findings to e-learning systems in real-world scenarios. The motivation model, sensor-based computation and rule-based personalisation have been applied to a practical scenario, with an essential part incorporated in the prototype of a gaze-based learning application that outputs personalised motivational strategies during the learning process according to real-time assessment of learners’ motivational states, based on both eye-tracking data and users’ self-reported data. Evaluation results indicate the advantage of this application over a traditional one that neither monitors learners’ motivational states with gaze data nor generates personalised feedback. 
In summary, the present research project has: 1) developed a conceptual motivation model for students with dyslexia defining the motivational factors that influence their continued intention to use e-learning systems, based on both a qualitative empirical study and prior research and theories; 2) developed an ontology-based motivation model in which user profiles, factors in the motivation model and personalisation options are structured as a hierarchy of classes; 3) designed a multi-item questionnaire, conducted a quantitative empirical study, and used structural equation modelling to further explore and confirm the quantified impacts of motivational factors on continued use intention and the quantified relationships between the factors; 4) conducted an experiment to exploit sensors for motivation computation, and developed classification models for real-time assessment of the motivational states pertaining to each factor in the motivation model based on empirical sensor data including eye gaze data and EEG data; 5) proposed a sensor-based motivation assessment system architecture, with emphasis on the use of ontologies for a computational representation of the sensor features used for motivation assessment in addition to the representation of the motivation model, and described the semantic rule-based personalisation of motivational strategies; 6) proposed a framework for motivationally personalised e-learning systems based on the present research, with the prototype of a gaze-based learning application designed, implemented and evaluated to guide future work.
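Points 4 and 5 above (logistic-regression assessment of a motivational state from gaze features, followed by rule-based selection of a motivational strategy) can be sketched as follows; the feature names, coefficients and rules are hypothetical placeholders, not the thesis's fitted model:

```python
import math

# Hypothetical fitted coefficients for one motivational factor
# (e.g. engagement) over two gaze features: mean fixation duration (ms)
# and saccade rate (per second). A real model is trained on labelled data.
WEIGHTS = {"fixation_ms": 0.004, "saccades_per_s": -0.5}
BIAS = -0.8

def motivated_probability(features):
    """Logistic regression: P(motivated) = sigmoid(w . x + b)."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def personalise(features, threshold=0.5):
    """Rule-based strategy selection from the assessed state."""
    if motivated_probability(features) < threshold:
        return "show encouraging feedback and an easier sub-task"
    return "continue with current task difficulty"

print(personalise({"fixation_ms": 120, "saccades_per_s": 3.0}))
```

In the thesis's architecture the rule step is expressed as semantic rules over an ontology rather than a Python conditional, but the flow from sensor features to assessed state to strategy is the same.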