
    How robust is the language architecture? The case of mood

    In neurocognitive research on language, the processing principles of the system at hand are usually assumed to be relatively invariant. However, research on attention, memory, decision-making, and social judgment has shown that mood can substantially modulate how the brain processes information. For example, in a bad mood, people typically have a narrower focus of attention and rely less on heuristics. In the face of such pervasive mood effects elsewhere in the brain, it seems unlikely that language processing would remain untouched. In an EEG experiment, we manipulated the mood of participants just before they read texts that confirmed or disconfirmed verb-based expectations about who would be talked about next (e.g., that “David praised Linda because … ” would continue about Linda, not David), or that respected or violated a syntactic agreement rule (e.g., “The boys turns”). ERPs showed that mood had little effect on syntactic parsing, but did substantially affect referential anticipation: whereas readers anticipated information about a specific person when they were in a good mood, a bad mood completely abolished such anticipation. A behavioral follow-up experiment suggested that a bad mood did not interfere with verb-based expectations per se, but prevented readers from using that information rapidly enough to predict upcoming reference on the fly, as the sentence unfolds. In all, our results reveal that background mood, a rather unobtrusive affective state, selectively changes a crucial aspect of real-time language processing. This observation fits well with other observed interactions between language processing and affect (emotions, preferences, attitudes, mood), and more generally testifies to the importance of studying “cold” cognitive functions in relation to “hot” aspects of the brain

    Know Thyself: Improving Interoceptive Ability Through Ambient Biofeedback in the Workplace

    Interoception, the perception of the body’s internal state, is intimately connected to self-regulation and wellbeing. Grounded in the affective science literature, we design an ambient biofeedback system called Soni-Phy and a lab study to investigate whether, when and how an unobtrusive biofeedback system can be used to improve interoceptive sensibility and accuracy by amplifying a user’s internal state. This research has practical significance for the design and improvement of assistive technologies for the workplace.

    Embedding mobile learning into everyday life settings

    The increasing ubiquity of smartphones has changed the way we interact with information and acquire new knowledge. The prevalence of personal mobile devices in our everyday lives creates new opportunities for learning that exceed the narrow boundaries of a school’s classroom and provide the foundations for lifelong learning. Learning can now happen whenever and wherever we are; whether on the sofa at home, on the bus during our commute, or on a break at work. However, the flexibility offered by mobile learning also creates its challenges. Being able to learn anytime and anywhere does not necessarily result in learning uptake. Without the school environment’s controlled schedule and teacher guidance, the learners must actively initiate learning activities, keep up repetition schedules, and cope with learning in interruption-prone everyday environments. Both interruptions and infrequent repetition can harm the learning process and long-term memory retention. We argue that current mobile learning applications insufficiently support users in coping with these challenges. In this thesis, we explore how we can utilize the ubiquity of mobile devices to ensure frequent engagement with the content, focusing primarily on language learning and supporting users in dealing with learning breaks and interruptions. Following a user-centered design approach, we first analyzed mobile learning behavior in everyday settings. Based on our findings, we proposed concepts and designs, developed research prototypes, and evaluated them in laboratory and field evaluations with a specific focus on user experience. To better understand users’ learning behavior with mobile devices, we first characterized their interaction with mobile learning apps through a detailed survey and a diary study. Both methods confirmed the enormous diversity in usage situations and preferences. 
We observed that learning often happens unplanned, infrequently, in the company of friends or family, or while simultaneously performing secondary tasks such as watching TV or eating. The studies further uncovered a significant prevalence of interruptions in everyday settings that affected users’ learning behavior, often leading to suspension and termination of the learning activities. We derived design implications to support learning in diverse situations, particularly aimed at mitigating the adverse effects of multitasking and interruptions. The proposed strategies should help designers and developers create mobile learning applications that adapt to the opportunities and challenges of learning in everyday mobile settings. We explored four main challenges, emphasizing that (1) we need to consider that Learning in Everyday Settings is Diverse and Interruption-prone, (2) learning performance is affected by Irregular and Infrequent Practice Behavior, (3) we need to move From Static to Personalized Learning, and (4) that Interruptions and Long Learning Breaks can Negatively Affect Performance. To tackle these challenges, we propose to embed learning into everyday smartphone interactions, which could foster frequent engagement with – and implicitly personalize – learning content (according to users’ interests and skills). Further, we investigate how memory cues could be applied to support task resumption after interruptions in mobile learning. To confirm that our idea of embedding learning into everyday interactions can increase exposure, we developed an application integrating learning tasks into the smartphone authentication process. Since unlocking the smartphone is a frequently performed action without any other purpose, our subjects appreciated the idea of utilizing this process to perform quick and simple learning interactions.
Evidence from a comparative user study showed that embedding learning tasks into the unlocking mechanism led to significantly more interactions with the learning content without impairing the learning quality. We further explored a method for embedding language comprehension assessment into users’ digital reading and listening activities. By applying physiological measurements as implicit input, we reliably detected unknown words during laboratory evaluations. Identifying such knowledge gaps could be used for the provision of in-situ support and to inform the generation of personalized language learning content tailored to users’ interests and proficiency levels. To investigate memory cueing as a concept to support task resumption after interruptions, we complemented a theoretical literature analysis of existing applications with two research probes implementing and evaluating promising design concepts. We showed that displaying memory cues when the user resumes the learning activity after an interruption improves their subjective user experience. A subsequent study presented an outlook on the generalizability of memory cues beyond the narrow use case of language learning. We observed that the helpfulness of memory cues for reflecting on prior learning is highly dependent on the design of the cues, particularly the granularity of the presented information. We consider interactive cues for specific memory reactivation (e.g., through multiple-choice questions) a promising scaffolding concept for connecting individual micro-learning sessions when learning in everyday settings. The tools and applications described in this thesis are a starting point for designing applications that support learning in everyday settings. We broaden the understanding of learning behavior and highlight the impact of interruptions in our busy everyday lives. 
While this thesis focuses mainly on language learning, the concepts and methods have the potential to be generalized to other domains, such as STEM learning. We reflect on the limitations of the presented concepts and outline future research perspectives that utilize the ubiquity of mobile devices to design mobile learning interactions for everyday settings.
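The repetition-schedule and micro-learning challenges described above can be made concrete with a small scheduler. The following sketch is illustrative only; the `Card` class and the interval values are our assumptions, not a system from the thesis. It implements a Leitner-style spaced-repetition policy: correct answers promote a card to a box with a longer review interval, failures demote it to the first box.

```python
from dataclasses import dataclass

# Review interval (in days) for each Leitner box; values are illustrative.
INTERVALS_DAYS = [0, 1, 3, 7, 14]

@dataclass
class Card:
    prompt: str
    answer: str
    box: int = 0      # current Leitner box
    due_day: int = 0  # day index when the card is next due

def review(card: Card, today: int, correct: bool) -> None:
    """Update box and due date after a review on day `today`."""
    card.box = min(card.box + 1, len(INTERVALS_DAYS) - 1) if correct else 0
    card.due_day = today + INTERVALS_DAYS[card.box]

def due_cards(deck: list[Card], today: int) -> list[Card]:
    """Cards whose review is due on or before `today`."""
    return [c for c in deck if c.due_day <= today]
```

A micro-learning app could call `due_cards` at each opportune moment (e.g. at smartphone unlock) and present only what the schedule demands, keeping each session short.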

    Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants

    The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry researchers from Europe, the US, and Asia with diverse backgrounds, including wearable and ubiquitous computing, computer vision, developmental psychology, optics, and human-computer interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to reduce the time of talks to one half-day and to leave the rest of the week for hands-on sessions, group work, general discussions, and socialising. The key results of this seminar are 1) the identification of key research challenges and summaries of breakout groups on multimodal eyewear computing, egocentric vision, security and privacy issues, skill augmentation and task guidance, eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4) an article in ACM Interactions entitled “Eyewear Computers for Human-Computer Interaction”, as well as 5) two follow-up workshops on “Egocentric Perception, Interaction, and Computing” at the European Conference on Computer Vision (ECCV) as well as “Eyewear Computing” at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).

    Motivation Modelling and Computation for Personalised Learning of People with Dyslexia

    The increasing development of e-learning systems in recent decades has benefited ubiquitous computing and education by providing freedom of choice to satisfy various needs and preferences about learning places and paces. Automatic recognition of learners’ states is necessary for personalised services or interventions to be provided in e-learning environments. In the current literature, assessment of learners’ motivation for personalised learning based on motivational states is lacking. An effective learning environment needs to address learners’ motivational needs, particularly for those with dyslexia. Dyslexia or other learning difficulties can cause young people not to engage fully with the education system or to drop out for complex reasons: in addition to the learning difficulties related to reading, writing or spelling, psychological difficulties, such as lower academic self-worth and a lack of learning motivation caused by these unavoidable learning difficulties, are more likely to be ignored. Associated with both cognitive processes and emotional states, motivation is a multifaceted concept that results in the continued intention to use an e-learning system and thus a better chance of learning effectiveness and success. It comprises factors from intrinsic motivation, driven by learners’ inner feelings of interest or challenge, and factors from extrinsic motivation, associated with external rewards or compliments. These factors represent learners’ various motivational needs; understanding them thus requires a multidisciplinary approach. Combining different perspectives of knowledge on psychological theories and technology acceptance models with the empirical findings from a qualitative study with dyslexic students conducted in the present research project, motivation modelling for people with dyslexia using a hybrid approach is the main focus of this thesis.
Specifically, in addition to the contribution of a qualitative conceptual motivation model and an ontology-based computational model that formally expresses the motivational factors affecting users’ continued intention to use e-learning systems, this thesis also conceives a quantitative approach to motivation modelling. A multi-item motivation questionnaire is designed and employed in a quantitative study with dyslexic students, and structural equation modelling techniques are used to quantify the influences of the motivational factors on continued use intention and their interrelationships in the model. In addition to the traditional approach to motivation computation, which relies on learners’ self-reported data, this thesis also employs dynamic sensor data and develops classification models using logistic regression for real-time assessment of motivational states. A rule-based reasoning mechanism for personalising motivational strategies and a framework of motivationally personalised e-learning systems are introduced to apply the research findings to e-learning systems in real-world scenarios. The motivation model, sensor-based computation and rule-based personalisation have been applied in a practical scenario, with an essential part incorporated in the prototype of a gaze-based learning application that can output personalised motivational strategies during the learning process according to the real-time assessment of learners’ motivational states based on eye-tracking data in addition to users’ self-reported data. Evaluation results indicate the advantage of this application over a traditional counterpart that does not incorporate the present research findings, both in monitoring learners’ motivational states with gaze data and in generating personalised feedback.
In summary, the present research project has: 1) developed a conceptual motivation model for students with dyslexia defining the motivational factors that influence their continued intention to use e-learning systems based on both a qualitative empirical study and prior research and theories; 2) developed an ontology-based motivation model in which user profiles, factors in the motivation model and personalisation options are structured as a hierarchy of classes; 3) designed a multi-item questionnaire, conducted a quantitative empirical study, used structural equation modelling to further explore and confirm the quantified impacts of motivational factors on continued use intention and the quantified relationships between the factors; 4) conducted an experiment to exploit sensors for motivation computation, and developed classification models for real-time assessment of the motivational states pertaining to each factor in the motivation model based on empirical sensor data including eye gaze data and EEG data; 5) proposed a sensor-based motivation assessment system architecture with emphasis on the use of ontologies for a computational representation of the sensor features used for motivation assessment in addition to the representation of the motivation model, and described the semantic rule-based personalisation of motivational strategies; 6) proposed a framework of motivationally personalised e-learning systems based on the present research, with the prototype of a gaze-based learning application designed, implemented and evaluated to guide future work
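The logistic-regression-based real-time assessment mentioned above can be illustrated with a toy classifier. This is a minimal sketch under assumed gaze features (mean fixation duration and normalized pupil size); the feature names, data, and training procedure are hypothetical illustrations, not the thesis’s actual model.

```python
import math

def sigmoid(z: float) -> float:
    """Logistic function mapping a linear score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Fit logistic-regression weights (last entry is the bias) by
    batch gradient descent on binary labels y in {0, 1}."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi + [1.0])))
            for j, xj in enumerate(xi + [1.0]):
                grad[j] += (p - yi) * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, x) -> bool:
    """True if the model assigns probability >= 0.5 to the positive
    (e.g. 'motivated') state."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x + [1.0]))) >= 0.5
```

With synthetic samples of [mean fixation duration, pupil size] such as `X = [[0.2, 0.9], [0.3, 0.8], [0.8, 0.2], [0.9, 0.1]]` and labels `y = [1, 1, 0, 0]`, the trained model separates the two assumed motivational states; a real deployment would of course use validated sensor features and held-out evaluation.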

    Workload-aware systems and interfaces for cognitive augmentation

    In today's society, our cognition is constantly influenced by information intake, attention switching, and task interruptions. This increases the difficulty of a given task, adding to the existing workload and leading to compromised cognitive performance. The human body expresses the use of cognitive resources through physiological responses when confronted with high cognitive workload. It temporarily mobilizes additional resources to deal with the workload at the cost of accelerated mental exhaustion. We predict that recent developments in physiological sensing will increasingly enable user interfaces that are aware of the user’s cognitive capacities and hence able to intervene when high or low states of cognitive workload are detected. In this thesis, we initially focus on determining opportune moments for cognitive assistance. Subsequently, we investigate, in a user-centric design process, which feedback modalities are desirable for cognitive assistance. We present design requirements for how cognitive augmentation can be achieved using interfaces that sense cognitive workload. We then investigate different physiological sensing modalities to enable suitable real-time assessments of cognitive workload. We provide empirical evidence that the human brain is sensitive to fluctuations in cognitive resting states, hence making cognitive effort measurable. Firstly, we show that electroencephalography is a reliable modality for assessing the mental workload generated during user interface operation. Secondly, we use eye tracking to evaluate changes in eye movements and pupil dilation to quantify different workload states. The combination of machine learning and physiological sensing yielded suitable real-time assessments of cognitive workload. The use of physiological sensing enables us to derive when cognitive augmentation is suitable. Based on our inquiries, we present applications that regulate cognitive workload in home and work settings.
We deployed an assistive system in a field study to investigate the validity of our derived design requirements. Having found that workload was mitigated, we investigated how cognitive workload can be visualized to the user. We present an implementation of a biofeedback visualization that helps to improve the understanding of brain activity. A final study shows how cognitive workload measurements can be used to predict the efficiency of information intake through reading interfaces. Here, we conclude with use cases and applications which benefit from cognitive augmentation. This thesis investigates how assistive systems can be designed to implicitly sense and utilize cognitive workload for input and output. To do so, we measure cognitive workload in real time by collecting behavioral and physiological data from users and analyze this data to support users through assistive systems that adapt their interfaces according to the currently measured workload. Our overall goal is to extend new and existing context-aware applications with cognitive workload as an additional factor. We envision Workload-Aware Systems and Workload-Aware Interfaces as an extension of the context-aware paradigm. To this end, we conducted eight research inquiries during this thesis to investigate how to design and create workload-aware systems. Finally, we present our vision of future workload-aware systems and workload-aware interfaces. Due to the scarce availability of open physiological data sets, reference implementations, and methods, previous context-aware systems were limited in their ability to utilize cognitive workload for user interaction. Together with the collected data sets, we expect this thesis to pave the way for methodical and technical tools that integrate workload-awareness as a factor for context-aware systems.
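The pupil-dilation-based workload quantification described above can be sketched as a baseline-relative index: compare the current pupil-diameter window against a resting baseline and flag large deviations. The z-score scheme and thresholds here are illustrative assumptions, not the thesis's actual method.

```python
from statistics import mean, stdev

def workload_index(baseline_mm: list[float], window_mm: list[float]) -> float:
    """z-score of the current pupil-diameter window (mm) against a
    rest-state baseline recording."""
    mu, sigma = mean(baseline_mm), stdev(baseline_mm)
    return (mean(window_mm) - mu) / sigma

def classify(z: float, high: float = 1.5, low: float = -1.5) -> str:
    """Map the index to a coarse workload state; thresholds are illustrative."""
    if z >= high:
        return "high"
    if z <= low:
        return "low"
    return "normal"
```

A workload-aware interface could poll `classify` on a sliding window and trigger assistance only on sustained "high" readings, since pupil diameter also responds to luminance and other confounds.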

    Neuroeconomics: How Neuroscience Can Inform Economics

    Neuroeconomics uses knowledge about brain mechanisms to inform economic analysis, and roots economics in biology. It opens up the "black box" of the brain, much as organizational economics adds detail to the theory of the firm. Neuroscientists use many tools, including brain imaging, behavior of patients with localized brain lesions, animal behavior, and recording single-neuron activity. The key insight for economics is that the brain is composed of multiple systems which interact. Controlled systems ("executive function") interrupt automatic ones. Emotions and cognition both guide decisions. Just as prices and allocations emerge from the interaction of two processes, supply and demand, individual decisions can be modeled as the result of two (or more) processes interacting. Indeed, "dual-process" models of this sort are better rooted in neuroscientific fact, and more empirically accurate, than single-process models (such as utility maximization). We discuss how brain evidence complicates standard assumptions about basic preferences, extending them to include homeostasis and other kinds of state-dependence. We also discuss applications to intertemporal choice, risk and decision making, and game theory. Intertemporal choice appears to be domain-specific and heavily influenced by emotion. The simplified β-δ model of quasi-hyperbolic discounting is supported by activation in distinct regions of limbic and cortical systems. In risky decision making, imaging data tentatively support the idea that gains and losses are coded separately, and that ambiguity is distinct from risk, because it activates fear and discomfort regions. (Ironically, lesion patients who do not receive fear signals in prefrontal cortex are "rationally" neutral toward ambiguity.) Game theory studies show the effect of brain regions implicated in "theory of mind", correlates of strategic skill, and effects of hormones and other biological variables. Finally, economics can contribute to neuroscience because simple rational-choice models are useful for understanding highly evolved behavior, such as motor actions that earn rewards and Bayesian integration of sensorimotor information.
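The β-δ quasi-hyperbolic discounting mentioned in the abstract has a compact form: immediate utility is undiscounted, while every delayed utility carries an extra present-bias factor β ≤ 1 on top of exponential discounting, U = u₀ + β Σ_{t≥1} δᵗ uₜ. A minimal sketch (the function names and example values are ours, for illustration):

```python
def quasi_hyperbolic(utils: list[float], beta: float, delta: float) -> float:
    """Beta-delta discounted utility: utils[0] is consumed now,
    utils[t] after t periods; beta < 1 models present bias."""
    return utils[0] + beta * sum(
        delta ** t * u for t, u in enumerate(utils[1:], start=1)
    )

def exponential(utils: list[float], delta: float) -> float:
    """Standard exponential discounting for comparison (beta = 1 case)."""
    return sum(delta ** t * u for t, u in enumerate(utils))
```

For example, a reward of 10 delayed one period with β = 0.7 and δ = 0.95 is worth 0.7 × 0.95 × 10 = 6.65, versus 9.5 under pure exponential discounting, which captures the disproportionate penalty on any delay at all.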

    Emerging ExG-based NUI Inputs in Extended Realities: A Bottom-up Survey

    Incremental and quantitative improvements of two-way interactions with extended realities (XR) are contributing toward a qualitative leap into a state of XR ecosystems being efficient, user-friendly, and widely adopted. However, there are multiple barriers on the way toward the omnipresence of XR; among them are the following: computational and power limitations of portable hardware, social acceptance of novel interaction protocols, and usability and efficiency of interfaces. In this article, we overview and analyse novel natural user interfaces based on sensing electrical bio-signals that can be leveraged to tackle the challenges of XR input interactions. Electroencephalography-based brain-machine interfaces that enable thought-only hands-free interaction, myoelectric input methods that track body gestures employing electromyography, and gaze-tracking electrooculography input interfaces are examples of electrical bio-signal sensing technologies united under the collective concept of ExG. ExG signal acquisition modalities provide a way to interact with computing systems using natural, intuitive actions, enriching interactions with XR. This survey provides a bottom-up overview starting from (i) underlying biological aspects and signal acquisition techniques, (ii) ExG hardware solutions, (iii) ExG-enabled applications, (iv) a discussion of the social acceptance of such applications and technologies, as well as (v) research challenges, application directions, and open problems, evidencing the benefits that ExG-based natural user interface inputs can introduce to the area of XR.
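The myoelectric input methods surveyed above commonly start from a rectified, smoothed signal envelope before any gesture classification. The following sketch shows that first stage for an already band-passed EMG trace; the window size and threshold are illustrative assumptions, not values from the survey.

```python
def envelope(signal: list[float], window: int = 5) -> list[float]:
    """Rectify the signal and smooth it with a trailing moving average,
    yielding a simple amplitude envelope."""
    rect = [abs(s) for s in signal]
    return [
        sum(rect[max(0, i - window + 1): i + 1]) / min(window, i + 1)
        for i in range(len(rect))
    ]

def active_samples(signal: list[float], threshold: float) -> list[int]:
    """Indices where the smoothed envelope exceeds `threshold`,
    i.e. candidate muscle-activation (gesture onset) samples."""
    return [i for i, e in enumerate(envelope(signal)) if e > threshold]
```

A real ExG pipeline would add band-pass filtering, per-user calibration of the threshold, and a classifier over windowed features, but the rectify-and-smooth step above is the common front end.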