
    Low-Cost Assessment of User eXperience Through EEG Signals

    EEG signals are an important tool for monitoring a person's brain activity, but equipment, expertise and infrastructure are required. EEG technologies are generally expensive, so few people can normally use them. However, some low-cost technologies are now available. One of these is OpenBCI, but it does not yet seem to be widely employed in Human-Computer Interaction. In this study, we used OpenBCI technology to capture EEG signals linked to brain activity in ten subjects as they interacted with two video games, Candy Crush and Geometry Dash, in several situations. The results show differences due to the absence or presence of sound: players appear to be more relaxed without sound. In addition, analysis of the EEG, meCue 2.0 and SAM data showed high consistency. The evidence demonstrates that interesting results can be gathered with low-cost EEG signal-based technologies.
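A common proxy for the kind of relaxation effect described above is relative alpha-band (8–13 Hz) power. The sketch below is purely illustrative (a naive DFT over a raw signal); real EEG pipelines use Welch's method, windowing and artifact rejection, and the band limits are conventional values, not taken from this study:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Sum of DFT power in [f_lo, f_hi] Hz (naive O(n^2) DFT;
    illustrative only -- real pipelines use Welch's method)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            # Real and imaginary parts of the k-th DFT coefficient.
            re = sum(x * math.cos(2 * math.pi * k * i / n)
                     for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n)
                     for i, x in enumerate(signal))
            power += (re * re + im * im) / n
    return power

def relative_alpha(signal, fs):
    """Alpha (8-13 Hz) power relative to broadband (1-40 Hz) power;
    higher values are often read as a more relaxed state."""
    total = band_power(signal, fs, 1.0, 40.0)
    return band_power(signal, fs, 8.0, 13.0) / total if total else 0.0
```

For example, one second of a pure 10 Hz sine sampled at 128 Hz yields a relative alpha close to 1.0, since nearly all signal power falls inside the alpha band.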

    Objective and automated assessment of surgical technical skills with IoT systems: A systematic literature review

    The assessment of the surgical technical skills to be acquired by novice surgeons has traditionally been done by an expert surgeon and is therefore of a subjective nature. Nevertheless, recent advances in IoT, the possibility of incorporating sensors into objects and environments in order to collect large amounts of data, and progress in machine learning are facilitating a more objective and automated assessment of surgical technical skills. This paper presents a systematic literature review of papers published after 2013 discussing the objective and automated assessment of surgical technical skills. 101 out of an initial list of 537 papers were analyzed to identify: 1) the sensors used; 2) the data collected by these sensors and the relationship between these data, surgical technical skills and surgeons' levels of expertise; 3) the statistical methods and algorithms used to process these data; and 4) the feedback provided based on the outputs of these statistical methods and algorithms.
Particularly, 1) mechanical and electromagnetic sensors are widely used for tool tracking, while inertial measurement units are widely used for body tracking; 2) path length, number of sub-movements, smoothness, fixation, saccade and total time are the main indicators obtained from raw data and serve to assess surgical technical skills such as economy, efficiency, hand tremor, or mind control, and to distinguish between two or three levels of expertise (novice/intermediate/advanced surgeons); 3) SVMs (Support Vector Machines) and neural networks are the preferred statistical methods and algorithms for processing the collected data, while new opportunities are opening up to combine various algorithms and use deep learning; and 4) feedback is provided by matching performance indicators to a lexicon of words and visualizations, although there is considerable room for research on feedback and visualizations, taking, for example, ideas from learning analytics.

This work was supported in part by the FEDER/Ministerio de Ciencia, Innovación y Universidades; Agencia Estatal de Investigación, through the Smartlet Project under Grant TIN2017-85179-C3-1-R, and in part by the Madrid Regional Government through the e-Madrid-CM Project under Grant S2018/TCS-4307, a project which is co-funded by the European Structural Funds (FSE and FEDER). Partial support has also been received from the European Commission through Erasmus+ Capacity Building in the Field of Higher Education projects, more specifically through projects LALA (586120-EPP-1-2017-1-ES-EPPKA2-CBHE-JP), InnovaT (598758-EPP-1-2018-1-AT-EPPKA2-CBHE-JP), and PROF-XXI (609767-EPP-1-2019-1-ES-EPPKA2-CBHE-JP).
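To illustrate how indicators such as path length and total time can separate expertise levels, the sketch below uses a simple nearest-centroid classifier on entirely synthetic values. The feature values and the classifier choice are hypothetical stand-ins; the reviewed studies typically train SVMs or neural networks on real sensor data:

```python
def centroid(rows):
    """Component-wise mean of a list of feature tuples."""
    n = len(rows)
    return tuple(sum(col) / n for col in zip(*rows))

def nearest_centroid_predict(sample, centroids):
    """Return the label whose class centroid is closest (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Synthetic (path_length_cm, total_time_s) trials -- purely hypothetical values:
# experts tend to show shorter paths and lower total times than novices.
NOVICE_TRIALS = [(310.0, 95.0), (290.0, 88.0)]
EXPERT_TRIALS = [(140.0, 42.0), (155.0, 47.0)]
CENTROIDS = {
    "novice": centroid(NOVICE_TRIALS),
    "expert": centroid(EXPERT_TRIALS),
}
```

A new trial is then labeled by its nearest centroid, e.g. `nearest_centroid_predict((150.0, 45.0), CENTROIDS)` falls on the expert side of the feature space.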

    Effective Vocabulary Learning in Multimedia CALL Environments: Psychological Evidence

    A wide range of technologies are now applied in the field of second language (L2) vocabulary acquisition. Nevertheless, intentional, language-focused vocabulary CALL software has not been proven to effectively operationalise working memory. The research presented in this thesis contributes to the existing literature by identifying coding features from cutting-edge multimedia technologies that relate to L2 learning and memory research. The study participants were fifty undergraduate students from the University of York, UK. Their individual differences and memory abilities were assessed using the Automated Working Memory Assessment (AWMA). Initially, the participants were exposed to novel L2 words via the Computer-Assisted Vocabulary Acquisition software (CAVA) through three interactive interfaces: a verbal-based menu-driven interface (L2-L1: MDI), a visual-based graphical user interface (L2-Picture: GUI) and a visuospatial-based zoomable user interface (L2-Context: ZUI), and immediate and delayed post-tests were conducted. The results of the first study revealed that ZUI correlated significantly with AWMA and tended to be the most effective multimedia learning method in the immediate post-test, compared with GUI and MDI. However, in the delayed post-test, ZUI's effect declined dramatically, while GUI tended to be the most effective. In the second study, the participants were exposed to a second version of CAVA. Their accuracy and response times during a translation recognition task were measured and analysed, as were their pupillary responses. The findings revealed that the participants were significantly more accurate and faster when judging the "No" translation pairs than the "Yes" ones. Of the multimedia representations, responses to MDI words were significantly faster and more accurate than to GUI and ZUI words. Moreover, participants with high verbal short-term memory were significantly faster and more accurate, and showed a relatively reduced pupil size.

    Haptic-Enhanced Learning in Preclinical Operative Dentistry

    Background: Virtual reality haptic simulators represent a new paradigm in dental education that may potentially impact the rate and efficiency of basic skill acquisition, as well as pedagogically influence various aspects of students' preclinical experience. However, the evidence to support their efficiency and inform their implementation is still limited. Objectives: This thesis set out to empirically examine how a haptic VR simulator (Simodont®) can enhance the preclinical dental education experience, particularly in the context of operative dentistry. We specify four distinct research themes: simulator validity (face, content and predictive), human factors in 3D stereoscopic display, motor skill acquisition, and curriculum integration. Methods: Chapter 3 explores the face and content validity of the Simodont® haptic dental simulator among a group of postgraduate dental students. Chapter 4 examines the utility of Simodont® in predicting subsequent preclinical and clinical performance; the results indicate the simulator's potential for predicting future clinical dental performance among undergraduate students. Chapter 5 investigates the role of stereopsis in dentistry from two different perspectives via two studies. Chapter 6 explores the effect of qualitatively different types of pedagogical feedback on the training, transfer and retention of basic manual dexterity dental skills; the results indicate that the acquisition and retention of basic dental motor skills in novice trainees is best optimised through a combination of instructor feedback and visual-display VR-driven feedback. A pedagogical model for the integration of the haptic dental simulator into the dental curriculum is proposed in Chapter 7. Conclusion: The findings of this thesis provide new insights into the utility of haptic virtual reality simulators in undergraduate preclinical dental education. Haptic simulators have promising potential as a pedagogical tool in undergraduate dentistry, complementing existing simulation methods. The integration of haptic VR simulators into the dental curriculum has to be informed by sound pedagogical principles and mapped to specific learning objectives.

    A Neurophysiologic Study Of Visual Fatigue In Stereoscopic Related Displays

    Two tasks were investigated in this study. The first study investigated the effects of alignment display errors on visual fatigue. The experiment revealed the following results. First, the EEG data suggested cognitively induced time-compensation changes, reflected in real-time brain activity as the eyes tried to compensate for the misalignment. The magnification-difference error showed the most significant effects across all EEG frequency bands, an indication of likely visual fatigue consistent with the increases in simulator sickness questionnaire (SSQ) scores across all task levels. Vertical shift errors were prevalent in the theta and beta bands, which probably induced alertness (in the theta band) as a result of possible stress. Rotation errors were significant in the gamma band, implying a likelihood of cognitive decline because of theta-band influence. Second, the hemodynamic responses revealed significant differences between the left and right dorsolateral prefrontal cortices due to alignment errors. There was also a significant difference between the main effect for power-band hemisphere and the ATC task sessions, and the analyses revealed significant differences between the dorsal frontal lobes in task processing as well as interaction effects between the processing lobes and the tasks. The second study investigated the effects of cognitive response variables on visual fatigue. Third, the physiologic indicator of pupil dilation peaked at 0.95 mm at a mean time of 38.1 min, after which pupil dilation began to decrease. After an average saccade rest time of 33.71 min, saccade speeds tended to decrease, a possible sign of fatigue onset. Fourth, a neural network classifier identified eye-movement data as the best predictor of visual fatigue, with a classification accuracy of 90.42%. Experimental data confirmed that 11.43% of the participants actually experienced visual fatigue symptoms after the prolonged task.

    The medical pause in simulation training

    The medical pause, stopping current performance for a short time for additional cognitive activities, can potentially advance patient safety and learning in medicine. Yet, to date, we have neither a theoretical understanding of why pausing should be taught as a professional skill, nor empirical evidence of how pausing affects performance and learning. To address this gap, this thesis investigates the effects of pausing in medical training both theoretically and empirically. For the empirical investigation, a computer-based simulation was used as the task environment, and eye-tracking and log data were used to assess performance.

    Identifying Relationships between Physiological Measures and Evaluation Metrics for 3D Interaction Techniques

    This project aims to present a methodology for studying the relationships between physiological measures and evaluation metrics for 3D interaction techniques, using methods for multivariate data analysis. Physiological responses, such as heart rate and skin conductance, offer objective data about user stress during interaction. This could be useful, for instance, to evaluate qualitative aspects of interaction techniques without relying solely on subjective data. Moreover, these data could contribute to improving task performance analysis by measuring different responses to 3D interaction techniques. With this in mind, we propose a methodology that defines a testing protocol, a normalization procedure and statistical techniques, considering the use of physiological measures during the evaluation process. A case study comparing two 3D interaction techniques (ray-casting and HOMER) shows promising results, pointing to heart rate variability, as measured by the NN50 parameter, as a potential index of task performance. Further studies are needed to establish guidelines for evaluation processes based on well-defined associations between human behaviors and human actions realized in 3D user interfaces.
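The NN50 parameter mentioned above is a standard time-domain heart rate variability index: the count of successive normal-to-normal (NN) heartbeat intervals differing by more than 50 ms, often reported as a percentage (pNN50). A minimal sketch (the interval values in the example are illustrative, not from this study):

```python
def nn50(rr_intervals_ms):
    """Count successive normal-to-normal (NN) interval pairs
    that differ by more than 50 ms."""
    return sum(
        1
        for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])
        if abs(b - a) > 50
    )

def pnn50(rr_intervals_ms):
    """NN50 as a percentage of all successive NN interval pairs."""
    pairs = len(rr_intervals_ms) - 1
    return 100.0 * nn50(rr_intervals_ms) / pairs if pairs > 0 else 0.0
```

For the interval sequence `[800, 860, 870, 940, 945]` (ms), the successive differences are 60, 10, 70 and 5 ms, so NN50 is 2 and pNN50 is 50%.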

    Relevant abuse? Investigating the effects of an abusive subtitling procedure on the perception of TV anime using eye tracker and questionnaire

    The storage capacity of DVDs means multiple subtitle streams can be included on one disc. This has allowed some producers to include subtitle streams with experimental procedures that we term "abusive" subtitles (Nornes 1999). Abusive subtitles break subtitling norms in an attempt to be more faithful to the source text and to increase the translator's visibility. This thesis focuses on one such abusive procedure, the pop-up gloss: pop-up notes that explain culturally marked items appearing in any of the semiotic channels. Already popular with amateur subtitlers of anime (Japanese animation), the pop-up gloss has percolated into commercially released anime DVDs. This thesis investigates what effect the use of pop-up gloss has on viewer perception of TV anime in terms of positive cognitive effects (PCEs) and processing effort. A second question concerns the validity of pupillometric measurements for measuring the processing effort experienced while viewing subtitled AV content. A novel methodology is applied in which PCEs are measured using traditional questionnaire data, while processing effort is measured using a combination of questionnaire data and fixation-based and pupillometric data gathered with an eye tracker. A study with 20 subjects indicates that the use of pop-up gloss does increase the PCEs experienced by subjects regarding the items the gloss describes, although more processing effort is required when pop-up gloss is used. The analysis of the pupillometric data suggests that such data are suitable for measuring processing effort during the viewing of subtitled AV content.
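Pupillometric effort measures of this kind are typically baseline-corrected: the mean pupil diameter in a pre-stimulus window is subtracted from the mean diameter in the task window, and a positive difference is read as increased processing effort. A minimal sketch, with hypothetical sample values (the thesis does not specify its exact correction procedure):

```python
def baseline_corrected_dilation(baseline_mm, task_mm):
    """Mean task-window pupil diameter minus mean baseline diameter (mm).
    Positive values are commonly interpreted as increased processing effort."""
    base = sum(baseline_mm) / len(baseline_mm)
    task = sum(task_mm) / len(task_mm)
    return task - base
```

For example, a baseline window averaging 3.0 mm and a task window averaging 3.4 mm yields a corrected dilation of 0.4 mm.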

    Automotive user interfaces for the support of non-driving-related activities

    Driving a car has changed a lot since the first car was invented. Today, drivers not only maneuver the car to their destination but also perform a multitude of additional activities in the car. These include, for instance, activities related to assistive functions that are meant to increase driving safety and reduce the driver's workload. However, since drivers spend a considerable amount of time in the car, they often want to perform non-driving-related activities as well, in particular activities related to entertainment, communication, and productivity. The driver's need for such activities has vastly increased, particularly due to the success of smartphones and other mobile devices. As long as the driver is in charge of the actual driving task, such activities can distract the driver and may result in severe accidents. Due to these special requirements of the driving environment, the driver ideally performs such activities using appropriately designed in-vehicle systems. The challenge for such systems is to enable flexible and easily usable non-driving-related activities while maintaining and increasing driving safety at the same time. The main contribution of this thesis is a set of guidelines and exemplary concepts for automotive user interfaces that offer safe, diverse, and easy-to-use means to perform non-driving-related activities besides the regular driving tasks. Using empirical methods that are common in human-computer interaction, we investigate various aspects of automotive user interfaces with the goal of supporting the design and development of future interfaces that facilitate non-driving-related activities. The first aspect concerns using physiological data to infer information about the driver's workload. As a second aspect, we propose a multimodal interaction style to facilitate interaction with multiple activities in the car. In addition, we introduce two concepts for the support of commonly used and demanded non-driving-related activities: for communication with the outside world, we investigate the driver's needs with regard to sharing ride details with remote persons in order to increase driving safety; and we present a concept of time-adjusted activities (e.g., entertainment and productivity) which enables the driver to make use of periods in which only little attention is required. Starting with manual, non-automated driving, we also consider the rise of automated driving modes.

When cars were invented, they allowed the driver and potential passengers to get to a distant location. The only activities the driver was able and supposed to perform were related to maneuvering the vehicle, i.e., accelerating, decelerating, and steering the car. Today, drivers perform many activities that go beyond these driving tasks, including, for example, activities related to driving assistance, location-based information and navigation, entertainment, communication, and productivity. To perform these activities, drivers use functions provided by in-vehicle information systems. Many of these functions are meant to increase driving safety or to make the ride more enjoyable. The latter is important since people spend a considerable amount of time in their cars and want to perform activities similar to those they are accustomed to from using mobile devices. However, as long as the driver is responsible for driving, these activities can be distracting and put the driver, passengers, and the environment at risk. One goal for the development of automotive user interfaces is therefore to enable an easy and appropriate operation of in-vehicle systems such that driving tasks and non-driving-related activities can be performed easily and safely.

The main contribution of this thesis is a set of guidelines and exemplary concepts for automotive user interfaces that offer safe, diverse, and easy-to-use means to perform non-driving-related activities while driving. Using empirical methods that are common in human-computer interaction, we approach various aspects of automotive user interfaces in order to support the design and development of future interfaces that also enable non-driving-related activities. Starting with manual, non-automated driving, we also consider the transition towards automated driving modes. As a first part, we look at the prerequisites that enable non-driving-related activities in the car. We propose guidelines for the design and development of automotive user interfaces that also support non-driving-related activities, including, for instance, rules on how to adapt or interrupt activities when the level of automation changes. To enable activities in the car, we propose a novel interaction concept that facilitates multimodal interaction by combining speech interaction and touch gestures. Moreover, we show how to infer information about the driver's state (especially mental workload) from physiological data: we conducted a real-world driving study to extract a data set with physiological and context data. This can help to better understand the driver's state, to adapt interfaces to the driver and the driving situation, and to adapt the route selection process. Second, we propose two concepts for supporting non-driving-related activities that are frequently used and demanded in the car. For telecommunication, we propose a concept to increase driving safety when communicating with the outside world. This concept enables the driver to share different types of information with remote parties.
Thereby, the driver can choose between different levels of detail, ranging from abstract information such as "Alice is driving right now" up to sharing a video of the driving scene. We investigated drivers' needs on the go and derived guidelines for the design of communication-related functions in the car through an online survey and in-depth interviews. As a second aspect, we present an approach to offer time-adjusted entertainment and productivity tasks to the driver. The idea is to allow time-adjusted tasks during periods in which the demand for the driver's attention is low, for instance at traffic lights or during a highly automated ride. Findings from a web survey and a case study demonstrate the feasibility of this approach. With the findings of this thesis we aim to provide a basis for future research and development in the domain of automotive user interfaces and non-driving-related activities in the transition from manual driving to highly and fully automated driving.

When the car was invented, it mainly allowed its occupants to reach distant places. The only activities drivers could and were supposed to perform during the ride concerned controlling the vehicle. Today, drivers carry out diverse activities that go beyond these original tasks and are not necessarily related to driving itself, including driver assistance, location-based information and navigation, entertainment, communication, and productivity. In-vehicle information systems provide drivers with functions to perform these tasks while driving. Many of these functions improve driving safety or make the ride more pleasant. The latter is becoming ever more important, as people now spend considerable time in the car and do not want to give up the activities and functions they are accustomed to from using smartphones and tablets. As long as drivers must drive themselves, such activities can distract from the driving task and endanger the occupants or the surroundings. One goal in the development of automotive user interfaces is therefore a simple, adequate operation of such systems, so that the driving task and secondary activities can be performed well and, above all, safely. The main contribution of this work comprises guidelines and exemplary concepts for automotive user interfaces that allow safe, varied, and easy performance of activities beyond the actual driving task. Based on empirical methods of human-computer interaction, we present various solutions that support the development and design of such user interfaces. Starting from today's usual non-automated driving, we also consider aspects of automated driving.

First, we examine the prerequisites for enabling activities beyond the driving task. We present guidelines for the design and development of automotive user interfaces that allow secondary tasks, including, for example, advice on how activities can be adapted or interrupted when the level of automation changes during the ride. To support activities in the car, we present a novel interaction concept that enables multimodal in-vehicle interaction with voice commands and touch gestures. For automated vehicle systems, and for adapting interaction options to the driving situation, the driver's state (especially mental workload) is an important piece of information. Through a driving study in real road traffic, we generated a data set comprising physiological data and context information, allowing inferences about the driver's state. With this information it becomes possible to better understand the driver's state, to adapt user interfaces to the current driving situation, and to adapt route selection. We also present two concrete concepts for supporting secondary activities that are already regularly performed or demanded while driving. In the area of telecommunication, we present a concept that increases driving safety when communicating with people outside the car. The concept allows the driver to share different types of context information with communication partners, ranging from the abstract information that one is currently driving up to sharing a live video of the current driving situation. In this regard, we collected and evaluated users' needs through a web survey and in-depth interviews, and we present a prototype concept as well as guidelines for how future in-vehicle communication tasks should be designed. As a second concept, we consider time-limited entertainment and productivity tasks in the vehicle. The idea here is to allow time-limited tasks during periods of low workload, for example while waiting at a traffic light or during a highly automated (partial) ride. Results from a web survey and a case study demonstrate the feasibility of this approach. The results of this work are intended to lay a foundation for future research and development on automotive user interfaces, in particular for supporting non-driving-related tasks in the transition from manual to highly automated driving.