50 research outputs found

    Low-cost methodologies and devices applied to measure, model and self-regulate emotions for Human-Computer Interaction

    In this thesis, the different methodologies for analyzing user experience (UX) are explored from a user-centered perspective. These classical and well-founded methodologies only allow the extraction of cognitive data, that is, the data that the user is capable of consciously communicating. The objective of the thesis is to propose a model based on the extraction of biometric data to complement the aforementioned cognitive information with emotional (and formal) data. The thesis is not only theoretical: alongside the proposed model (and its evolution), it presents the tests, validations and studies in which the model has been applied, often successfully and in collaboration with research groups from other fields.
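    As a rough illustration of combining consciously reported (cognitive) data with biometric (emotional) data, the sketch below merges per-task questionnaire scores with simple features computed from an electrodermal activity trace. The feature names, threshold and table layout are assumptions for the example, not the model proposed in the thesis.

    import numpy as np
    import pandas as pd

    def biometric_features(eda_trace: np.ndarray, fs: float) -> dict:
        """Summarize an electrodermal activity (EDA) trace for one task.

        Returns crude tonic/phasic proxies; a real pipeline would use dedicated
        EDA decomposition, this is only an illustration.
        """
        return {
            "eda_mean": float(np.mean(eda_trace)),   # tonic level proxy
            "eda_std": float(np.std(eda_trace)),     # variability
            "eda_rises_per_min": float(              # crude arousal-event rate
                np.sum(np.diff(eda_trace) > 0.05) / (len(eda_trace) / fs) * 60
            ),
        }

    def merge_cognitive_and_emotional(questionnaire: pd.DataFrame,
                                      eda_by_task: dict, fs: float) -> pd.DataFrame:
        """Join consciously reported scores with biometric features, per task."""
        rows = [{"task": task_id, **biometric_features(np.asarray(trace), fs)}
                for task_id, trace in eda_by_task.items()]
        return questionnaire.merge(pd.DataFrame(rows), on="task")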

    ICT and gamified learning in tourism education: a case of South African secondary schools

    Tourism is often introduced as a subject in formal education curricula because of the increasing and significant economic contribution of the tourism industry to the private and public sector. This is especially the case in emerging economies in Asia and Africa (Hsu, 2015; Mayaka & Akama, 2015; Cuffy et al., 2012). Tourism in South Africa – which is the geographical setting of this research – is recognised as a key economic sector. At secondary level, tourism has been widely introduced in schools throughout South Africa since 2000 and has experienced significant growth (Umalusi, 2014). Furthermore, information and communication technology (ICT) has rapidly penetrated the country's public and private sectors. ICT affords novel opportunities for social and economic development, especially in the fields of tourism and education (Anwar et al., 2014; Vandeyar, 2015). Yet the many uses and implications of ICT for tourism education in South Africa are unclear and under-theorised as a research area (Adukaite, Van Zyl, & Cantoni, 2016). Moreover, engagement has been identified as a significant indicator of student success in South Africa (Council for Higher Education, 2010), and lack of engagement contributes to poor graduation rates at secondary and tertiary institutions (Strydom et al., 2010; Titus & Ng’ambi, 2014). A common strategy to address lack of student engagement is to introduce game elements into the learning process: the so-called gamification of learning (Kapp, 2012). The majority of research in this field has been conducted in more economically advanced regions, and there is a paucity of research in emerging-country contexts. It is argued that gamification can also be used effectively in these contexts to address learner engagement and motivation. This study aims to contribute in this respect: firstly, by investigating the extent to which ICT supports tourism education in South African high schools through the lenses of Technology Domestication Theory (Habib, 2005; Haddon, 2006) and Social Cognitive Theory (Bandura, 1977); secondly, by examining the acceptance of gamified learning within tourism education in a developing-country context. The research comprises three separate studies.
    Study 1. The Role of Digital Technology in Tourism Education: A Case Study of South African Secondary Schools. The study was designed as an exploratory analysis based on 24 in-depth interviews (n=24) with high school tourism teachers and government officials. The analysis reveals that teachers recognise ICT as essential in exposing students to the tourism industry, especially in under-resourced schools where learners do not have the financial means to participate in tourism activities. However, ICT is still poorly integrated as a pedagogical support tool. The major obstacles to integration include technology anxiety, lack of training, availability of resources, and learner resistance to using their personal mobile devices.
    Study 2. Raising Awareness and Promoting Informal Learning on World Heritage in Southern Africa: The Case of WHACY, a Gamified ICT-enhanced Tool. The goal of the study was to present the World Heritage Awareness Campaign for Youth (WHACY) in Southern Africa, a campaign dedicated to raising awareness and fostering informal learning among Southern African youth about heritage and sustainable tourism. The campaign employed an online and offline gamified learning platform, supported by a dedicated website, Facebook page, wiki and offline materials. In one year of operation the campaign reached an audience of more than 100,000. For the evaluation of the campaign a mixed-methods approach was used: focus groups with students (n=9), interviews (n=19) and a survey with teachers (n=209). The study assessed user experience in terms of engagement and conduciveness to learning and explored whether a gamified application could be integrated into the existing high school tourism curriculum, considering the perspectives of both South African tourism students and teachers.
    Study 3. Teacher Perceptions on the Use of Digital Gamified Learning in Tourism Education: The Case of South African Secondary Schools. The study is quantitative in nature and investigated the behavioural intention of South African tourism teachers to integrate a gamified application within secondary tourism education. Data collected from 209 teachers were tested against the research model using a structural equation modelling approach. The study investigated the extent to which six predictors (perceived playfulness, curriculum relatedness, learning opportunities, challenge, self-efficacy and computer anxiety) influence the acceptance of a gamified application by South African tourism teachers. The study may prove useful to educators and practitioners in understanding which determinants may influence the introduction of gamification into formal secondary education.
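    As a minimal sketch of the kind of analysis reported in Study 3, the snippet below specifies a structural model in which six predictors explain behavioural intention and fits it with the semopy package. The column names (playfulness, curriculum, learning, challenge, efficacy, anxiety, intention) and the single-equation model are illustrative assumptions; the published study defines its own constructs and a full measurement plus structural model.

    import pandas as pd
    from semopy import Model

    # Hypothetical column names standing in for the six predictors and intention.
    MODEL_DESC = """
    intention ~ playfulness + curriculum + learning + challenge + efficacy + anxiety
    """

    def fit_acceptance_model(df: pd.DataFrame) -> pd.DataFrame:
        """Fit a simplified structural model of teacher acceptance."""
        model = Model(MODEL_DESC)
        model.fit(df)            # estimates the regression paths
        return model.inspect()   # table of path estimates, standard errors, p-values

    # Usage (assuming a CSV with the columns above):
    # estimates = fit_acceptance_model(pd.read_csv("teacher_survey.csv"))
    # print(estimates)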

    Serious Games and Mixed Reality Applications for Healthcare

    Virtual reality (VR) and augmented reality (AR) have long histories in the healthcare sector, offering the opportunity to develop a wide range of tools and applications aimed at improving the quality of care and the efficiency of services for professionals and patients alike. The best-known examples of VR–AR applications in the healthcare domain include surgical planning and medical training by means of simulation technologies. Techniques used in surgical simulation have also been applied to cognitive and motor rehabilitation, pain management, and patient and professional education. Serious games are games whose main goal is not entertainment but a serious purpose, ranging from the acquisition of knowledge to interactive training. These games are attracting growing attention in healthcare because of their several benefits: motivation, interactivity, adaptation to user competence level, flexibility in time, repeatability, and continuous feedback. Recently, healthcare has also become one of the biggest adopters of mixed reality (MR), which merges real and virtual content to generate novel environments where physical and digital objects not only coexist but are also capable of interacting with each other in real time, encompassing both VR and AR applications. This Special Issue aims to gather and publish original scientific contributions exploring opportunities and addressing challenges in both the theoretical and applied aspects of VR–AR and MR applications in healthcare.

    Standardization of Protocol Design for User Training in EEG-based Brain-Computer Interface

    Brain-computer interfaces (BCIs) are systems that enable a person to interact with a machine using only neural activity. Such interaction can be non-intuitive for the user, hence training methods are developed to increase the user's understanding, confidence and motivation, which would, in parallel, increase system performance. To clearly address the current issues in BCI user training protocol design, it is here divided into an introductory period and a BCI interaction period. First, the introductory period (before BCI interaction) must be considered as important for user training as the BCI interaction itself. To support this claim, a review of papers shows that BCI performance can depend on the methodologies used in this introductory period. To standardize its design, the literature from human-computer interaction (HCI) is adapted to the BCI context. Second, during user-BCI interaction, the interface can take a large spectrum of forms (2D, 3D, size, color, etc.) and modalities (visual, auditory, haptic, etc.) without following any design standard or guidelines. Notably, studies that explore perceptual affordance on neural activity show that motor neurons can be triggered by the simple observation of certain objects, and that neural reactions can vary greatly depending on the objects' properties (size, location, etc.). Surprisingly, the effects of perceptual affordance have not been investigated in the BCI context. Both inconsistent introductions to BCI and variable interface designs make it difficult to reproduce experiments, predict their outcomes and compare results between them. To address these issues, a protocol design standardization for user training is proposed.
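    To make the idea of a standardized protocol description more concrete, the sketch below encodes both the introductory period and the interface properties as a structured, machine-readable record. The fields and default values are illustrative assumptions, not the paper's actual specification.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class IntroductoryPeriod:
        """What the participant is told and shown before interacting with the BCI."""
        explanation_of_bci: str = "verbal"      # e.g. verbal, video, live demo
        familiarization_minutes: int = 10
        instructions_standardized: bool = True  # same script for every participant

    @dataclass
    class InterfaceDesign:
        """Properties of the feedback interface during BCI interaction."""
        dimensionality: str = "2D"              # "2D" or "3D"
        feedback_modalities: List[str] = field(default_factory=lambda: ["visual"])
        object_size_deg: float = 2.0            # visual angle of the feedback object
        object_color: str = "blue"
        affordance_controlled: bool = False     # is object affordance a controlled variable?

    @dataclass
    class TrainingProtocol:
        introduction: IntroductoryPeriod
        interface: InterfaceDesign
        runs: int = 5
        trials_per_run: int = 40

    # Reporting a protocol this way would make experiments easier to reproduce and compare.
    protocol = TrainingProtocol(IntroductoryPeriod(), InterfaceDesign())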

    A Design Exploration of Affective Gaming

    Physiological sensing has been a prominent fixture in games user research (GUR) since the late 1990s, when researchers began to explore its potential to enhance and understand experience within digital game play. Since these early days, it has been widely argued that “affective gaming” (in which gameplay is influenced by a player’s emotional state) can enhance player experience by integrating physiological sensors into play. In this thesis, I conduct a design exploration of the field of affective gaming: first, by systematically exploring the field and creating a framework (the affective game loop) to classify existing literature; and second, by presenting two design probes, In the Same Boat and Commons Sense, that explore the design space of affective games contextualized within the affective game loop. The systematic review explored this unique design space and opened up future avenues for exploration. The affective game loop was created to classify the physiological signals and sensors most commonly used in prior literature according to how they are mapped into gameplay. Findings suggest that physiological input mappings can be more action-based (e.g., affecting mechanics in the game such as the movement of the character) or more context-based (e.g., affecting environmental or difficulty variables in the game). Findings also suggest that although the field has existed for decades, there have been no commercial successes yet, which raises the question: does physiological interaction really heighten player experience? This question instigated the design of the two probes, which explore ways to implement these mappings and effectively heighten player experience. In the Same Boat (Design Probe One) is an embodied mirroring game designed to promote an intimate interaction, using players’ breathing rate and facial expressions to control the movement of a canoe down a river. Findings suggest that playing In the Same Boat fostered the development of affiliation between the players, and that while the embodied controls were less intuitive, people enjoyed them more, indicating the potential of embodied controls to foster social closeness in synchronized play over a distance. Commons Sense (Design Probe Two) is a communication modality intended to heighten audience engagement and to capture and communicate the audience experience, using webcam-based heart rate detection software that takes the average of each spectator’s heart rate as input to affect in-game variables such as lighting, sound design, and game difficulty. Findings suggest that Commons Sense successfully facilitated the communication of audience response in an online entertainment context, where social cues and signals are inherently diminished, and that it can both enhance a play experience and offer a novel way to communicate. Overall, findings from this design exploration show that affective games offer a novel way to deliver a rich gameplay experience for the player.
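    As a small illustration of the two mapping styles in the affective game loop, the sketch below maps a heart-rate reading to an action-based variable (character speed) and to context-based variables (difficulty and lighting). The thresholds and variable names are invented for the example and are not taken from either probe.

    from dataclasses import dataclass

    @dataclass
    class GameState:
        character_speed: float = 1.0   # action-based: directly drives a mechanic
        difficulty: float = 0.5        # context-based: tunes the surrounding game
        ambient_light: float = 1.0     # context-based: environmental variable

    def update_from_heart_rate(state: GameState, hr_bpm: float,
                               resting_bpm: float = 70.0) -> GameState:
        """Map a physiological reading into action- and context-based variables."""
        arousal = max(0.0, min(1.0, (hr_bpm - resting_bpm) / 50.0))  # crude 0..1 scale

        # Action-based mapping: arousal directly modulates a player mechanic.
        state.character_speed = 1.0 + arousal          # move faster when aroused

        # Context-based mappings: arousal shifts the game's surrounding variables.
        state.difficulty = 0.3 + 0.6 * arousal         # harder when the audience is excited
        state.ambient_light = 1.0 - 0.4 * arousal      # dim the lights to heighten tension
        return state

    # Example tick of the loop:
    state = update_from_heart_rate(GameState(), hr_bpm=95.0)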

    Améliorer les interactions homme-machine et la présence sociale avec l’informatique physiologique

    This thesis explores how physiological computing can contribute to human-computer interaction (HCI) and foster new communication channels among the general public. We investigated how physiological sensors, such as electroencephalography (EEG), could be employed to assess the mental state of users and how they relate to other evaluation methods. We created the first brain-computer interface that could sense visual comfort during the viewing of stereoscopic images, and shaped a framework that could help to assess the overall user experience by monitoring workload, attention and error recognition. To lower the barrier between end users and physiological sensors, we participated in the software integration of a low-cost and open-hardware EEG device; used off-the-shelf webcams to measure heart rate remotely; and crafted wearables that users can quickly put on so that electrocardiography, electrodermal activity or EEG may be measured during public exhibitions. We envisioned new usages for our sensors that would increase social presence. In a study of human-agent interaction, participants tended to prefer virtual avatars that mirrored their own internal state. A follow-up study focused on interactions between users to describe how physiological monitoring could alter our relationships. Advances in HCI enabled us to seamlessly integrate biofeedback into the physical world. We developed Teegi, a puppet that lets novices discover their brain activity by themselves. Finally, with Tobe, a toolkit that encompasses more sensors and gives more freedom over their visualizations, we explored how such a proxy shifts our representations of ourselves as well as of others.
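    As an illustration of the remote heart-rate measurement mentioned above, the sketch below estimates a pulse rate from the per-frame mean green-channel intensity of a face region, using a simple spectral peak. It assumes NumPy and a face region already located by some detector; it is a bare-bones approximation of the photoplethysmography idea, not the thesis's actual pipeline.

    import numpy as np

    def heart_rate_from_green_trace(green_means: np.ndarray, fps: float) -> float:
        """Estimate heart rate (BPM) from the mean green value of a face ROI per frame.

        green_means: 1-D array, one value per frame (e.g. np.mean(roi[:, :, 1])).
        fps: webcam frame rate.
        """
        signal = green_means - np.mean(green_means)      # remove the DC component
        signal = signal * np.hanning(len(signal))        # reduce spectral leakage
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

        # Keep only plausible pulse frequencies (0.7-4 Hz, i.e. 42-240 BPM).
        band = (freqs >= 0.7) & (freqs <= 4.0)
        peak_freq = freqs[band][np.argmax(spectrum[band])]
        return float(peak_freq * 60.0)

    # Usage: collect ~10 s of frames, average the green channel over the face box,
    # then call heart_rate_from_green_trace(np.array(green_means), fps=30.0).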

    The usage of fully immersive head-mounted displays in social everyday contexts

    Technology often evolves from decades of research in university and industrial laboratories and changes people's lives when it becomes available to the masses. In the interaction between technology and consumer, designs established in the laboratory environment must be adapted to the needs of everyday life. This thesis deals with the challenges arising from moving fully immersive head-mounted displays (HMDs) out of the laboratory and into everyday contexts. Research on virtual reality (VR) technologies spans over 50 years and covers a wide field of topics, e.g., technology, system design, user interfaces, user experience and human perception. Other disciplines, such as psychology or the teleoperation of robots, are examples of users of VR technology. The work in these examples was mainly carried out in laboratories or highly specialized environments, with the main goal of generating systems that are ideal for a single user conducting a particular task in VR. The newly emerging environments for the use of HMDs range from private homes to offices to convention halls; even in public spaces such as public transport, cafés or parks, immersive experiences are possible. However, current VR systems are not yet designed for these environments. Previous work on problems in the everyday environment deals with challenges such as preventing the user from colliding with physical objects, but current research does not take into account the new social context an HMD user faces in these environments. Several people with different roles surround the user in these contexts. In contrast to laboratory scenarios, a non-HMD user, for example, neither shares the task with the HMD user nor is aware of the HMD user's state in VR. This thesis addresses the challenges introduced by this social context: I offer solutions to overcome the visual separation of the HMD user, and I suggest methods for investigating and evaluating the use of HMDs suitable for everyday contexts.
    First, we present concepts and insights to overcome the challenges arising from the HMD covering the user's face. In the private context, e.g., living rooms, one of the main challenges is that the HMD user needs to take off the HMD to communicate with others. Reasons for taking off the HMD are the visual exclusion of the surrounding world for the HMD user and the HMD covering the user's face, hindering communication. Additionally, non-HMD users do not know about the virtual world the HMD user is acting in. Previous work suggests visualizing the bystanding non-HMD user or their actions in VR to address such challenges. The biggest advantage of a fully immersive experience, however, is the full separation from the physical surroundings, with the ultimate goal of being at another place. Therefore, I argue against integrating non-HMD users directly into VR. Instead, I introduce the approach of a shared surface that provides a common basis for information and interaction between a non-HMD and an HMD user; such a surface can be realized with a smartphone. The same information is presented to the HMD user in VR and to the non-HMD user on the shared surface at the same physical position, enabling joint interaction at the surface (a coordinate-mapping sketch follows this abstract). By examining four feedback modalities, we provide design guidelines that support touch interaction with such a shared surface by an HMD user. Further, we explore the possibility of informing the non-HMD user about the HMD user's state during mixed-presence collaboration, e.g., when the HMD user is inattentive to the real world; for this purpose I use a display attached to the front of the HMD. In particular, we explore the challenges of disturbed socialness and reduced collaboration quality when presenting the user's state on the front-facing display. In summary, our concepts and studies explore the use of a shared surface to overcome challenges in co-located mixed-presence collaboration.
    Second, we look at the previously unconsidered challenges of using HMDs in public environments. The use of HMDs in these environments is becoming a reality due to the current development of HMDs that contain all necessary hardware in one portable device. Related work, in particular the work on public displays, already addresses interaction with technology in public environments. The form factor of the HMD, the need to put an HMD on the head, and especially the visual and mental exclusion of the HMD user are new and not yet understood challenges in these environments. We propose a problem space for semi-public (e.g., conference rooms) and public environments (e.g., market places). With an explorative field study, we gain insight into the effects of the visual and physical separation of an HMD user from surrounding non-HMD users. Further, we present a method that helps to design and evaluate the unsupervised usage of HMDs in public environments, the audience funnel flow model for HMDs.
    Third, we look into methods suitable for monitoring and evaluating HMD-based experiences in everyday contexts. One core measure is the experience of being present in the virtual world, i.e., the feeling of "being there". Consumer-grade HMDs are already able to create highly immersive experiences, leading to a strong presence experience in VR; hence, we argue it is important to find and understand the remaining disturbances during the experience. Existing methods from the laboratory context are either not precise enough to find these disturbances (e.g., questionnaires) or require high effort in their application and evaluation (e.g., physiological measures). In a literature review, we show that current research relies heavily on questionnaire-based approaches. I improve current qualitative approaches -- interviews and questionnaires -- to make the temporal variation of a VR experience assessable. I propose a drawing method that recognizes breaks in the presence experience, helps the user reflect on an HMD-based experience, and supports communication between an interviewer and the HMD user. In the same paper, we propose a descriptive model that allows the objective description of the temporal variations of a presence experience from beginning to end. Further, I present and explore the concept of using electroencephalography to detect an HMD user's visual stress objectively; objective detection supports the usage of HMDs in private and industrial contexts, as it helps protect the health of the user.
    With my work, I would like to draw attention to the new challenges of using virtual reality technologies in everyday life. I hope that my concepts, methods and evaluation tools will serve research and development on the usage of HMDs. In particular, I would like to promote their use in the everyday social context and thereby create an enriching experience for all.
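    The following sketch illustrates the shared-surface idea referenced above: a normalized touch point reported by the smartphone is transformed into the world-space position of the corresponding point on the virtual surface, so the HMD user and the non-HMD user see the interaction at the same physical location. The vector math and parameter names are illustrative assumptions, not the thesis's implementation.

    import numpy as np

    def touch_to_world(touch_uv: tuple, surface_origin: np.ndarray,
                       surface_x_axis: np.ndarray, surface_y_axis: np.ndarray,
                       width_m: float, height_m: float) -> np.ndarray:
        """Map a normalized touch point (u, v in [0, 1]) on the tracked smartphone
        screen to a 3-D world position on the virtual copy of that surface.

        surface_origin: world position of the screen's lower-left corner.
        surface_x_axis, surface_y_axis: unit vectors spanning the screen plane.
        """
        u, v = touch_uv
        return (surface_origin
                + u * width_m * surface_x_axis
                + v * height_m * surface_y_axis)

    # Example: a touch in the middle of a 0.07 m x 0.15 m phone lying flat on a table.
    origin = np.array([0.0, 0.75, 0.0])
    x_axis = np.array([1.0, 0.0, 0.0])
    y_axis = np.array([0.0, 0.0, 1.0])
    world_point = touch_to_world((0.5, 0.5), origin, x_axis, y_axis, 0.07, 0.15)
    # The VR renderer would draw the touch cursor at world_point, while the phone
    # shows the same content at that physical position for the non-HMD user.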

    Leveraging EEG-based speech imagery brain-computer interfaces

    Speech Imagery Brain-Computer Interfaces (BCIs) provide an intuitive and flexible way of interaction via brain activity recorded during imagined speech. Imagined speech can be decoded in the form of syllables or words and captured even with non-invasive measurement methods such as electroencephalography (EEG). Over the last decade, research in this field has made tremendous progress, and prototypical implementations of EEG-based Speech Imagery BCIs are numerous. However, most work is still conducted in controlled laboratory environments with offline classification and does not find its way into real online scenarios. In this thesis we identify three main reasons for this, namely the mentally and physically exhausting training procedures, insufficient classification accuracies, and cumbersome EEG setups with usually high-resolution headsets. We furthermore elaborate on possible solutions to overcome these problems and present and evaluate new methods in each of the domains. In detail, we introduce two new training concepts for imagined speech BCIs, one based on EEG activity recorded during silent reading and the other on activity recorded during the overt speaking of certain words. Insufficient classification accuracies are addressed by introducing the concept of a Semantic Speech Imagery BCI, which classifies the semantic category of an imagined word prior to the word itself to increase the performance of the system. Finally, we investigate different techniques for electrode reduction in Speech Imagery BCIs and aim at finding a suitable subset of electrodes for EEG-based imagined speech detection, thereby simplifying the cumbersome setups. All of our results, together with general remarks on experiences and best practice for study setups concerning imagined speech, are summarized and intended to act as guidelines for further research in the field, thereby leveraging Speech Imagery BCIs towards real-world application.
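    As an illustration of the two-stage semantic idea described above, the sketch below first predicts the semantic category of an imagined word from EEG features and then applies a category-specific word classifier. It uses scikit-learn with invented category and word labels; it is a minimal sketch of the concept, not the thesis's actual decoding pipeline.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    class SemanticSpeechImageryDecoder:
        """Two-stage decoder: semantic category first, then word within category."""

        def __init__(self):
            self.category_clf = LinearDiscriminantAnalysis()
            self.word_clfs = {}                      # one word classifier per category

        def fit(self, X: np.ndarray, categories: np.ndarray, words: np.ndarray):
            """X: (n_trials, n_features) EEG feature matrix (e.g. band powers)."""
            self.category_clf.fit(X, categories)
            for cat in np.unique(categories):
                mask = categories == cat
                clf = LinearDiscriminantAnalysis()
                clf.fit(X[mask], words[mask])        # only words of this category
                self.word_clfs[cat] = clf
            return self

        def predict(self, X: np.ndarray) -> list:
            """Return (category, word) predictions for each trial."""
            cats = self.category_clf.predict(X)
            return [(c, self.word_clfs[c].predict(x.reshape(1, -1))[0])
                    for c, x in zip(cats, X)]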