41 research outputs found

    Analyzing motivating functions of consumer behavior: Evidence from attention and neural responses to choices and consumption

    Academia and business have shown an increased interest in using neurophysiological methods, such as eye-tracking and electroencephalography (EEG), to assess consumer motivation. The current research contributes to this literature by verifying whether these methods can predict the effects of antecedent events as motivating functions of attention, neural responses, choice, and consumption. Antecedent motivational factors are discussed, with a specific focus on deprivation as such a situational factor. Thirty-two participants were randomly assigned to the experimental and control conditions. Water deprivation of 11–12 h was used as an establishing operation to increase the reinforcing effectiveness of water. We designed three experimental sessions to capture the complexity of the relationship between antecedents and consumer behavior. Experimental manipulations in session 1 established the effectiveness of water for the experimental group and abolished it for the control group. Results from session 2 show that participants in the experimental group had significantly higher average fixation duration for the image of water. Their frontal asymmetry did not provide significant evidence of greater left frontal activation toward the water image. Session 3 demonstrated that choice and consumption behavior of the relevant reinforcer was significantly higher for participants in the experimental group. These early findings highlight the potential application of a multi-method approach using neurophysiological tools in consumer research, which provides a comprehensive picture of the functional relationship between motivating events, behavior (attention, neural responses, choice, and consumption), and consequences.
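    The frontal-asymmetry measure referred to here is conventionally computed as the difference in log alpha-band power between homologous right and left frontal electrodes (e.g. F4 and F3). A minimal sketch of that convention, assuming raw single-channel signals and a simple periodogram estimate (function names are illustrative, not the study's code):

```python
import numpy as np

def alpha_power(signal, fs, lo=8.0, hi=13.0):
    # Alpha-band (8-13 Hz) power from a simple periodogram
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return psd[(freqs >= lo) & (freqs <= hi)].sum()

def frontal_asymmetry(f3, f4, fs):
    """ln(right alpha) - ln(left alpha). Because alpha power is
    inversely related to cortical activation, positive values index
    relatively greater LEFT frontal activation (approach motivation)."""
    return np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))
```

    Under this convention, a reliably positive score while viewing the water image would have been the evidence of greater left frontal activation that the study did not find.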

    Designing Text Entry Methods for Non-Verbal Vocal Input

    Department of Computer Graphics and Interaction (Katedra počítačové grafiky a interakce)

    Interaction Design for Digital Musical Instruments

    The thesis aims to elucidate the process of designing interactive systems for musical performance that combine software and hardware in an intuitive and elegant fashion. The original contribution to knowledge consists of: (1) a critical assessment of recent trends in digital musical instrument design, (2) a descriptive model of interaction design for the digital musician and (3) a highly customisable multi-touch performance system that was designed in accordance with the model. Digital musical instruments are composed of a separate control interface and a sound generation system that exchange information. When designing the way in which a digital musical instrument responds to the actions of a performer, we are creating a layer of interactive behaviour that is abstracted from the physical controls. Often, the structure of this layer depends heavily upon:
    1. The accepted design conventions of the hardware in use
    2. Established musical systems, acoustic or digital
    3. The physical configuration of the hardware devices and the grouping of controls that such configuration suggests
    This thesis proposes an alternative way to approach the design of digital musical instrument behaviour – examining the implicit characteristics of its composite devices. When we separate the conversational ability of a particular sensor type from its hardware body, we can look in a new way at the actual communication tools at the heart of the device. We can subsequently combine these separate pieces using a series of generic interaction strategies in order to create rich interactive experiences that are not immediately obvious or directly inspired by the physical properties of the hardware. This research ultimately aims to enhance and clarify the existing toolkit of interaction design for the digital musician.
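    The separation argued for here, a sensor's "conversational ability" decoupled from its hardware body and recombined through generic interaction strategies, can be illustrated with a deliberately tiny sketch (hypothetical names, not the thesis's actual system): any sensor reduced to a normalised 0-1 stream can be paired with any strategy.

```python
def make_strategy(kind):
    """Return a hardware-agnostic mapping from a normalised sensor
    value in [0, 1] to a musical parameter. The same strategy works
    for a fader, an accelerometer axis, or a touch position."""
    if kind == "scale":      # continuous control, e.g. a filter cutoff in Hz
        return lambda v: 200.0 + v * 1800.0
    if kind == "toggle":     # threshold crossing as an on/off gesture
        return lambda v: v > 0.5
    raise ValueError(f"unknown strategy: {kind}")

cutoff = make_strategy("scale")   # could be driven by any continuous sensor
gate = make_strategy("toggle")    # could be driven by the same sensor
```

    The point of the sketch is that neither mapping knows which physical device feeds it, which is what lets the strategies be recombined freely.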

    Efficient human-machine control with asymmetric marginal reliability input devices

    Input devices such as motor-imagery brain-computer interfaces (BCIs) are often unreliable. In theory, channel coding can be used in the human-machine loop to robustly encapsulate intention through noisy input devices, but standard feedforward error correction codes cannot be practically applied. We present a practical and general probabilistic user interface for binary input devices with very high noise levels. Our approach allows any level of robustness to be achieved, regardless of noise level, wherever reliable feedback such as a visual display is available. In particular, we show efficient zooming interfaces based on feedback channel codes for two-class binary problems with noise levels characteristic of modalities such as motor-imagery based BCI, with accuracy below 75%. We outline general principles based on separating channel, line and source coding in human-machine loop design. We develop a novel selection mechanism which can achieve arbitrarily reliable selection with a noisy two-state button. We show automatic online adaptation to changing channel statistics, and operation without precise calibration of error rates. A range of visualisations are used to construct user interfaces which implicitly code for these channels in a way that is transparent to users. We validate our approach with a set of Monte Carlo simulations, and with empirical results from a human-in-the-loop experiment showing that the approach operates at 50-70% of the theoretical optimum across a range of channel conditions.
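    The core idea, arbitrarily reliable selection through an unreliable binary channel when feedback is available, can be sketched as a posterior update over two options that stops once a confidence threshold is reached. A toy Monte Carlo simulation under assumed parameters (not the paper's implementation):

```python
import random

def noisy_select(target, accuracy=0.7, confidence=0.999, rng=random):
    """Simulate selecting one of two options with a button that
    registers the wrong state with probability 1 - accuracy.
    Feedback lets the user keep pressing for the target until the
    posterior for one option crosses the confidence threshold."""
    p = [0.5, 0.5]                       # uniform prior over the two options
    presses = 0
    while max(p) < confidence:
        observed = target if rng.random() < accuracy else 1 - target
        presses += 1
        # Bayes update with a binary symmetric channel likelihood
        like = [accuracy if observed == k else 1.0 - accuracy for k in (0, 1)]
        z = like[0] * p[0] + like[1] * p[1]
        p = [like[0] * p[0] / z, like[1] * p[1] / z]
    return p.index(max(p)), presses
```

    Raising `confidence` buys reliability at the cost of more presses, which mirrors the claim that any robustness level is achievable regardless of the noise level.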

    Improved Brain-Computer Interface Methods with Application to Gaming


    Supporting the Development Process of Multimodal and Natural Automotive User Interfaces

    Nowadays, driving a car places multi-faceted demands on the driver that go beyond maneuvering a vehicle through road traffic. The number of additional functions for entertainment, infotainment and comfort has increased rapidly in recent years. Each new function in the car is designed to make driving as pleasant as possible but also increases the risk that the driver will be distracted from the primary driving task. One of the most important goals for designers of new and innovative automotive user interfaces is therefore to keep driver distraction to a minimum while providing appropriate support to the driver. This goal can be achieved by providing tools and methods that support a human-centred development process. In this dissertation, a design space will be presented that helps to analyze the use of context, to generate new ideas for automotive user interfaces and to document them. Furthermore, new opportunities for rapid prototyping will be introduced. To be able to evaluate new automotive user interfaces and interaction concepts regarding their effect on driving performance, a driving simulation software was developed within the scope of this dissertation. In addition, research results in the field of multimodal, implicit and eye-based interaction in the car are presented. The different case studies mentioned illustrate the systematic and comprehensive research on the opportunities of these kinds of interaction, as well as their effects on driving performance. We developed a prototype of a vibration steering wheel that communicates navigation instructions. Another prototype of a steering wheel has a display integrated in the middle and enables handwriting input. A further case study explores a visual placeholder concept to assist drivers when using in-car displays while driving.
    When a driver looks at a display and then at the street, the last gaze position on the display is highlighted to assist the driver when switching attention back to the display. This speeds up the process of resuming an interrupted task. In another case study, we compared gaze-based interaction with touch and speech input. In the last case study, a driver-passenger video link system is introduced that enables the driver to have eye contact with the passenger without turning their head. On the whole, this dissertation shows that by using a new human-centred development process, modern interaction concepts can be developed in a meaningful way.

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature that suggests physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
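    FLAME's emotion component appraises in-game events with fuzzy rules rather than physiological signals. A toy appraisal in that spirit, assuming a single "desirability" input in [-1, 1] (illustrative only; FLAME itself is considerably richer, with learning and multiple appraisal variables):

```python
def tri(x, a, b, c):
    # Triangular fuzzy membership on [a, c], peaking at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def appraise(desirability):
    """Fuzzify event desirability into emotion memberships and
    return the dominant emotion label with its degree."""
    memberships = {
        "distress": tri(desirability, -1.5, -1.0, 0.0),
        "neutral":  tri(desirability, -1.0,  0.0, 1.0),
        "joy":      tri(desirability,  0.0,  1.0, 1.5),
    }
    label = max(memberships, key=memberships.get)
    return label, memberships[label]
```

    In a game, the desirability input would itself be derived from events such as taking damage or scoring, which is what makes the approach software-only.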

    From sequences to cognitive structures : neurocomputational mechanisms

    Ph.D. Thesis. Understanding how the brain forms representations of structured information distributed in time is a challenging neuroscientific endeavour, necessitating computationally and neurobiologically informed study. Human neuroimaging evidence demonstrates engagement of a fronto-temporal network, including ventrolateral prefrontal cortex (vlPFC), during language comprehension. Corresponding regions are engaged when processing dependencies between word-like items in Artificial Grammar (AG) paradigms. However, the neurocomputations supporting dependency processing and sequential structure-building are poorly understood. This work aimed to clarify these processes in humans, integrating behavioural, electrophysiological and computational evidence. I devised a novel auditory AG task to assess simultaneous learning of dependencies between adjacent and non-adjacent items, incorporating learning aids including prosody, feedback, delineated sequence boundaries, staged pre-exposure, and variable intervening items. Behavioural data obtained in 50 healthy adults revealed strongly bimodal performance despite these cues. Notably, however, reaction times revealed sensitivity to the grammar even in low performers. Behavioural and intracranial electrode data were subsequently obtained in 12 neurosurgical patients performing this task. Despite chance behavioural performance, time- and time-frequency domain electrophysiological analysis revealed selective responsiveness to sequence grammaticality in regions including vlPFC. I developed a novel neurocomputational model (VS-BIND: "Vector-symbolic Sequencing of Binding INstantiating Dependencies"), triangulating evidence to clarify putative mechanisms in the fronto-temporal language network. I then undertook multivariate analyses on the AG task neural data, revealing responses compatible with the presence of ordinal codes in vlPFC, consistent with VS-BIND.
    I also developed a novel method of causal analysis on multivariate patterns, representational Granger causality, capable of detecting the flow of distinct representations within the brain. This pointed to top-down transmission of syntactic predictions during the AG task, from vlPFC to auditory cortex, largely in the opposite direction to stimulus encodings, consistent with predictive coding accounts. It finally suggested roles for the temporoparietal junction and frontal operculum during grammaticality processing, congruent with prior literature. This work provides novel insights into the neurocomputational basis of cognitive structure-building, generating hypotheses for future study, and potentially contributing to AI and translational efforts. Funding: Wellcome Trust, European Research Council.
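    The classical bivariate idea underlying representational Granger causality is that x "Granger-causes" y if x's past improves prediction of y beyond y's own past; the representational variant applies the same logic to decoded pattern time courses rather than raw signals. A minimal least-squares sketch of the classical test (illustrative, not the thesis's code):

```python
import numpy as np

def granger_stat(x, y, lag=2):
    """Log ratio of residual variance: y predicted from its own past
    vs. from its own past plus x's past. Values well above 0 indicate
    that x's history carries predictive information about y."""
    n = len(y)
    Y = y[lag:]
    # Lagged design matrices: columns are y[t-1..t-lag] and x[t-1..t-lag]
    own = np.column_stack([y[lag - k:n - k] for k in range(1, lag + 1)])
    cross = np.column_stack([x[lag - k:n - k] for k in range(1, lag + 1)])

    def rss(X):
        X = np.column_stack([np.ones(len(Y)), X])   # intercept term
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return resid @ resid

    return np.log(rss(own) / rss(np.hstack([own, cross])))
```

    Comparing the statistic in both directions (x to y vs. y to x) gives the kind of directional asymmetry used to argue for top-down versus bottom-up flow.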