
    Target Detection in Video Feeds with Selected Dyads and Groups Assisted by Collaborative Brain-Computer Interfaces

    We present a collaborative Brain-Computer Interface (cBCI) to aid group decision-making based on realistic video feeds. The cBCI combines neural features extracted from EEG with response times to estimate the decision confidence of users. Confidence estimates are used to weigh individual responses and obtain group decisions. Results obtained with 10 participants indicate that cBCI groups are significantly more accurate than equally sized groups using standard majority voting. Moreover, selecting dyads on the basis of the average performance of their members and then assisting them with our cBCI halves the error rates with respect to majority-based performance. This selection strategy also allows most participants to be included in at least one selected dyad, making it quite inclusive. These results indicate that the strategy makes cBCIs even more effective as a method for human augmentation in realistic scenarios.
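
    As a minimal sketch of the confidence-weighted voting idea described above (not the authors' implementation), the snippet below combines binary individual responses using per-member confidence estimates; all names and values are illustrative, and in practice the confidences would come from the cBCI's EEG and response-time features.

        # Sketch of confidence-weighted group decision-making; illustrative only.
        def group_decision(responses, confidences):
            """Combine binary responses (+1 / -1) into a group decision,
            weighting each member by the estimated confidence of their answer."""
            weighted_sum = sum(r * c for r, c in zip(responses, confidences))
            if weighted_sum == 0:
                weighted_sum = sum(responses)  # fall back to a plain majority
            return 1 if weighted_sum > 0 else -1

        # Example: in this dyad the more confident member determines the outcome.
        print(group_decision(responses=[+1, -1], confidences=[0.4, 0.9]))  # -> -1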

    Subject- and task-independent neural correlates and prediction of decision confidence in perceptual decision making

    Objective. In many real-world decision tasks, the information available to the decision maker is incomplete. To account for this uncertainty, we associate a degree of confidence to every decision, representing the likelihood of that decision being correct. In this study, we analyse electroencephalography (EEG) data from 68 participants undertaking eight different perceptual decision-making experiments. Our goals are to investigate (1) whether subject- and task-independent neural correlates of decision confidence exist, and (2) to what degree it is possible to build brain-computer interfaces that can estimate confidence on a trial-by-trial basis. The experiments cover a wide range of perceptual tasks, which allowed us to separate the task-related, decision-making features from the task-independent ones. Approach. Our systems train artificial neural networks to predict the confidence in each decision from EEG data and response times. We compare the decoding performance with three training approaches: (1) single subject, where both training and testing data were acquired from the same person; (2) multi-subject, where all the data pertained to the same task, but the training and testing data came from different users; and (3) multi-task, where the training and testing data came from different tasks and subjects. Finally, we validated our multi-task approach using data from two additional experiments, in which confidence was not reported. Main results. We found significant differences in the EEG data for different confidence levels in both stimulus-locked and response-locked epochs. All our approaches were able to predict confidence between 15% and 35% better than the corresponding reference baselines. Significance. Our results suggest that confidence in perceptual decision-making tasks can be reconstructed from neural signals even when using transfer learning approaches. These confidence estimates are based on the decision-making process rather than just the confidence-reporting process.
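
    As an illustration of the multi-subject (transfer) setting described above, the sketch below trains a small regressor on data from all but one subject and predicts trial-by-trial confidence for the held-out subject. It is not the authors' pipeline; the arrays are random placeholders standing in for EEG features plus response times.

        # Illustrative cross-subject confidence decoding with placeholder data.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        # 4 subjects x 100 trials; 32 EEG features + 1 response time per trial.
        X = {s: rng.normal(size=(100, 33)) for s in range(4)}
        y = {s: rng.uniform(0, 1, size=100) for s in range(4)}  # reported confidence

        test_subject = 3
        X_train = np.vstack([X[s] for s in X if s != test_subject])
        y_train = np.concatenate([y[s] for s in y if s != test_subject])

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(64,), max_iter=500,
                                           random_state=0))
        model.fit(X_train, y_train)
        confidence_estimates = model.predict(X[test_subject])  # one value per trial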

    A Design Exploration of Affective Gaming

    Physiological sensing has been a prominent fixture in games user research (GUR) since the late 1990s, when researchers began to explore its potential to enhance and understand experience within digital game play. Since these early days, it has been widely argued that “affective gaming”—in which gameplay is influenced by a player’s emotional state—can enhance player experience by integrating physiological sensors into play. In this thesis, I conduct a design exploration of the field of affective gaming by, first, systematically exploring the field and creating a framework (the affective game loop) to classify existing literature and, second, presenting two design probes, In the Same Boat and Commons Sense, to explore the design space of affective games contextualized within the affective game loop. The systematic review explored this unique design space of affective gaming, opening up future avenues for exploration. The affective game loop was created as a way to classify the physiological signals and sensors most commonly used in prior literature within the context of how they are mapped into the gameplay itself. Findings suggest that physiological input mappings can be more action-based (e.g., affecting mechanics in the game such as the movement of the character) or more context-based (e.g., affecting things like environmental or difficulty variables in the game). Findings also suggested that while the field has been around for decades, there have yet to be any commercial successes, which raises the question of whether physiological interaction really heightens player experience. This question instigated the design of the two probes, exploring ways to implement these mappings and effectively heighten player experience. In the Same Boat (Design Probe One) is an embodied mirroring game designed to promote an intimate interaction, using players’ breathing rate and facial expressions to control the movement of a canoe down a river. Findings suggest that playing In the Same Boat fostered the development of affiliation between the players, and that while embodied controls were less intuitive, people enjoyed them more, indicating the potential of embodied controls to foster social closeness in synchronized play over a distance. Commons Sense (Design Probe Two) is a communication modality intended to heighten audience engagement and effectively capture and communicate the audience experience, using webcam-based heart rate detection software that takes the average of the spectators’ heart rates as input to affect in-game variables such as lighting, sound design, and game difficulty. Findings suggest that Commons Sense successfully facilitated the communication of audience response in an online entertainment context—where these social cues and signals are inherently diminished. In addition, Commons Sense is a communication modality that can both enhance a play experience and offer a novel way to communicate. Overall, findings from this design exploration show that affective games offer a novel way to deliver a rich gameplay experience for the player.
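
    As a minimal, hypothetical sketch of the context-based mapping used by Commons Sense (averaging spectators' heart rates and feeding the result into in-game variables), the snippet below shows one way such a mapping could look; the thresholds, ranges, and parameter names are invented for illustration.

        # Illustrative context-based physiological mapping; values are made up.
        def map_audience_heart_rate(heart_rates, resting_hr=60.0, max_hr=120.0):
            """Derive example game parameters from the audience's mean heart rate."""
            mean_hr = sum(heart_rates) / len(heart_rates)
            # Normalise the mean heart rate to a 0..1 "arousal" estimate.
            arousal = min(max((mean_hr - resting_hr) / (max_hr - resting_hr), 0.0), 1.0)
            return {
                "lighting_intensity": 0.3 + 0.7 * arousal,   # brighter lighting when excited
                "music_tempo_scale": 1.0 + 0.5 * arousal,    # faster soundtrack
                "difficulty_level": 1 + round(2 * arousal),  # 1 (calm) to 3 (excited)
            }

        print(map_audience_heart_rate([72, 88, 95, 81]))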

    Language learning and technology

    By and large, languages, whether first, second, or foreign, remain among the most important core subjects at every educational level. In the early stages, their inclusion in the curriculum is intricately connected with (pre-)literacy practices, but it also serves as a main driver for the successful integration of minority students learning a second language. In addition, the attainment of a certain level of a foreign language by the end of compulsory education is a common goal in most educational systems around the globe. Arguably, the key drivers of success in learning a language range from the motivational to the attitudinal, but ultimately they also have to do with the amount of target-language use, the access to quality input, and especially language teachers' readiness to incorporate the latest educational trends effectively in the language classroom, educational technologies amongst them.

    Instrumenting the Musician: Measuring and Enhancing Affective and Behavioural Interaction During Collaborative Music Making

    Modern sensor technologies facilitate the measurement and interpretation of human affective and behavioural signals, and have consequently become widely used tools in the fields of affective computing, social signal processing and psychophysiology. This thesis investigates the use and development of these tools for measuring and enhancing affective and behavioural interaction during collaborative music making. Drawing upon work in the aforementioned fields, an exploratory study is designed, where self-report and continuous behavioural and physiological measures are collected from pairs of improvising percussionists. The findings lead to the selection of gaze, motion, and cardiac activity as input measures in the design of a device to enhance affective and behavioural interaction between co-present musicians. The device provides musicians with real-time visual feedback on the glances or body motions of their co-performers, whilst also recording cardiac activity as a potential measure of musical decision making processes. Quantitative evidence is found for the effects of this device on the communicative behaviours of collaborating musicians during an experiment designed to test the device in a controlled environment. This study also reports findings on discrete and time series relationships between cardiac activity and musical decision-making. A further, qualitative study is designed to evaluate the appropriation and impact of the device during long-term use in naturalistic settings. The results provide insights into earlier findings and contribute towards an empirical understanding of affective and behavioural interaction during collaborative music making, as well as implications for the design and deployment of sensor-based technologies to enhance such interactions. This thesis advances the dominant single-user paradigm within human-computer interaction and affective computing research, towards multi-user scenarios, where the concern is human-human interaction. It achieves this by focusing on the emotionally rich, and under-studied context of co-present musical collaboration; contributing new methods and findings that pave the way for further research and real-world applications. This work was funded by the Engineering and Physical Sciences Research Council (EPSRC) as part of the Centre for Doctoral Training in Media and Arts Technology at Queen Mary University of London (ref: EP/G03723X/1).
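
    As an illustrative sketch only (not the device built in this thesis), the snippet below shows the kind of real-time loop such a system needs: a co-performer's motion signal is thresholded to drive a visual cue while cardiac samples are logged for later analysis. The threshold and signal values are invented.

        # Hypothetical real-time feedback loop; threshold and data are illustrative.
        MOTION_THRESHOLD = 0.6  # normalised motion magnitude

        def update_feedback(motion_sample, heart_rate_sample, cardiac_log):
            """Log a cardiac sample and report whether the partner's motion
            exceeds the threshold, i.e. whether the visual cue should light up."""
            cardiac_log.append(heart_rate_sample)
            return motion_sample > MOTION_THRESHOLD

        cardiac_log = []
        for motion, heart_rate in [(0.2, 71), (0.8, 74), (0.5, 76)]:
            if update_feedback(motion, heart_rate, cardiac_log):
                print("show motion cue to co-performer")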

    Development of actuated Tangible User Interfaces: new interaction concepts and evaluation methods

    Riedenklau E. Development of actuated Tangible User Interfaces: new interaction concepts and evaluation methods. Bielefeld: Universität Bielefeld; 2016. Making information understandable and literally graspable is the main goal of tangible interaction research. By giving digital data physical representations (Tangible User Interface Objects, or TUIOs), they can be used and manipulated like everyday objects with the users’ natural manipulation skills. Such physical interaction is basically uni-directional, directed from the user to the system, limiting the possible interaction patterns. In other words, the system has no means to actively support the physical interaction. Within the frame of tabletop tangible user interfaces, this problem was addressed by the introduction of actuated TUIOs that are controllable by the system. Within the frame of this thesis, we present the development of our own actuated TUIOs and address multiple interaction concepts we identified as research gaps in the literature on actuated Tangible User Interfaces (TUIs). Gestural interaction is a natural means for humans to communicate non-verbally using their hands. TUIs should be able to support gestural interaction, since our hands are already heavily involved in the interaction. This has rarely been investigated in the literature. For a tangible social network client application, we investigate two methods for collecting user-defined gestures that our system should be able to interpret for triggering actions. Versatile systems often understand a wide palette of commands. Another approach for triggering actions is the use of menus. We explore the design space of menu metaphors used in TUIs and present our own actuated dial-based approach. Rich interaction modalities may support the understandability of the represented data and make the interaction with them more appealing, but they also place high demands on real-time processing. We highlight new research directions for integrated, feature-rich, and multi-modal interaction, such as graphical display, sound output, tactile feedback, our actuated menu, and automatically maintained relations between actuated TUIOs within a remote collaboration application. We also tackle the introduction of further sophisticated measures for the evaluation of TUIs to provide further evidence for the theories on tangible interaction. We tested our enhanced measures within a comparative study. Since one of the key factors in effective manual interaction is speed, we benchmark the human hand’s manipulation speed and compare it with the capabilities of our own implementation of actuated TUIOs and of the systems described in the literature. After briefly discussing applications that lie beyond the scope of this thesis, we conclude with a collection of design guidelines gathered in the course of this work and integrate them together with our findings into a larger frame.
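
    As a hypothetical sketch of the kind of closed-loop control an actuated TUIO requires (not the implementation described in the thesis), the snippet below moves a tracked object towards a target tabletop position in bounded steps; the step size and coordinates are invented.

        # Illustrative control loop for an actuated tangible object.
        import math

        MAX_STEP = 5.0  # maximum movement per control cycle (mm), illustrative

        def control_step(current, target):
            """Return the next (x, y) position, moving at most MAX_STEP towards target."""
            dx, dy = target[0] - current[0], target[1] - current[1]
            distance = math.hypot(dx, dy)
            if distance <= MAX_STEP:
                return target
            scale = MAX_STEP / distance
            return (current[0] + dx * scale, current[1] + dy * scale)

        position, goal = (0.0, 0.0), (40.0, 30.0)
        while position != goal:
            position = control_step(position, goal)
        print("TUIO reached", position)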

    Negative Multiplicity: Forecasting the Future Impact of Emerging Technologies on International Stability and Human Security

    We asked 30 experts to forecast the developmental trajectories of twelve emerging technologies in the United States, Russia, and China until 2040 and to score their possible future impact on arms race stability, crisis stability, and humanitarian principles. The results reveal that, on average, their impact is expected to be negative, with some technologies negatively affecting all three dependent variables. We used a machine learning algorithm to cluster the technologies according to their anticipated impact. This process identified technology clusters composed of diverse high-impact technologies that share key impact characteristics but do not necessarily share technical characteristics. We refer to these combined effects as ‘negative multiplicity’, reflecting the predominantly negative, concurrent, and in some cases similar first- and second-order effects that emerging technologies are expected to have on international stability and human security. The expected alignment of the technology development trajectories of the United States, Russia, and China by 2040, in combination with the negative environment created by geopolitical competition, points to a nascent technological arms race that threatens to seriously impede international arms control efforts to regulate emerging technologies.
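
    As an illustration of the clustering step described above, the sketch below groups technologies by placeholder impact-score profiles using k-means; the study does not specify this particular algorithm, and the scores are invented, not the experts' data.

        # Illustrative clustering of technologies by expert impact scores.
        import numpy as np
        from sklearn.cluster import KMeans

        technologies = ["tech_A", "tech_B", "tech_C", "tech_D"]
        # Placeholder mean scores per technology on (arms race stability,
        # crisis stability, humanitarian principles); negative = harmful impact.
        impact_scores = np.array([
            [-1.2, -0.8, -1.5],
            [-0.3, -0.2, -0.4],
            [-1.0, -1.1, -1.3],
            [-0.1, -0.5, -0.2],
        ])

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(impact_scores)
        for tech, label in zip(technologies, labels):
            print(tech, "-> cluster", label)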

    Enhancing Free-text Interactions in a Communication Skills Learning Environment

    Learning environments frequently use gamification to enhance user interactions. Virtual characters with whom players engage in simulated conversations often employ pre-scripted dialogues; however, free user inputs enable deeper immersion and higher-order cognition. In our learning environment, experts developed a scripted scenario as a sequence of potential actions, and we explore possibilities for enhancing interactions by enabling users to type free inputs that are matched to the pre-scripted statements using Natural Language Processing techniques. In this paper, we introduce a clustering mechanism that provides recommendations for fine-tuning the pre-scripted answers in order to better match user inputs.
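
    As one common way to implement the matching step described above (the paper states only that NLP techniques are used), the sketch below matches a free-text user input to the closest pre-scripted statement via TF-IDF and cosine similarity; the statements are toy examples, not the expert-authored scenario.

        # Illustrative free-text-to-script matching; statements are toy examples.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        scripted = [
            "Ask the patient how they are feeling today.",
            "Explain the purpose of the examination.",
            "Reassure the patient about the procedure.",
        ]

        vectorizer = TfidfVectorizer()
        scripted_vectors = vectorizer.fit_transform(scripted)

        def match(user_input):
            """Return the pre-scripted statement most similar to the free input."""
            similarities = cosine_similarity(vectorizer.transform([user_input]),
                                             scripted_vectors)
            return scripted[similarities.argmax()]

        print(match("how do you feel right now?"))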