487 research outputs found

    Digital ink and differentiated subjective ratings for cognitive load measurement in middle childhood

    Background: New methods are constantly being developed to adapt cognitive load measurement to different contexts. However, research on cognitive load measurement in middle childhood students is rare. Research indicates that the three cognitive load dimensions (intrinsic, extraneous, and germane) can be measured well in adults and teenagers using differentiated subjective rating instruments. Moreover, digital ink recorded by smartpens could serve as an indicator of cognitive load in adults. Aims: With the present research, we aimed to investigate the relation between subjective cognitive load ratings, velocity and pressure measures recorded with a smartpen, and performance in standardized sketching tasks in middle childhood students. Sample: Thirty-six children (age 7–12) participated at the university's laboratory. Methods: The children performed two standardized sketching tasks, each in two versions; the induced intrinsic or extraneous cognitive load was varied between the versions. Digital ink was recorded while the children drew with a smartpen on real paper, and after each task they were asked to report their perceived intrinsic and extraneous cognitive load using a newly developed 5-item scale. Results: Cognitive load ratings as well as velocity and pressure measures were substantially related to the induced cognitive load and to performance in both sketching tasks. However, cognitive load ratings and smartpen measures were not substantially related to each other. Conclusions: Both subjective ratings and digital ink hold potential for cognitive load and performance measurement. However, it is questionable whether they measure exactly the same constructs.
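    As an illustration of the kind of velocity and pressure measures that can be derived from digital ink, the following sketch computes per-stroke summaries. The `(x, y, t, pressure)` sample format is an assumption for illustration, not the study's actual data schema.

```python
import math

def stroke_features(samples):
    """Summarize velocity and pen pressure over one stroke.

    `samples` is a list of (x, y, t, pressure) tuples with t in seconds;
    this format is an illustrative assumption, not the study's schema.
    """
    velocities = []
    for (x0, y0, t0, _), (x1, y1, t1, _) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt > 0:
            # Euclidean distance between consecutive samples over elapsed time
            velocities.append(math.hypot(x1 - x0, y1 - y0) / dt)
    pressures = [p for (_, _, _, p) in samples]
    return {
        "mean_velocity": sum(velocities) / len(velocities) if velocities else 0.0,
        "mean_pressure": sum(pressures) / len(pressures),
    }
```

    Aggregating such per-stroke features over a whole task yields the kind of velocity and pressure indicators the study relates to induced cognitive load.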

    Proceedings of the 2nd IUI Workshop on Interacting with Smart Objects

    These are the Proceedings of the 2nd IUI Workshop on Interacting with Smart Objects. Objects that we use in our everyday life are expanding beyond their restricted interaction capabilities and provide functionality that goes far beyond their original purpose. They feature computing capabilities and are thus able to capture information, process and store it, and interact with their environments, turning them into smart objects.

    Multi-modal post-editing of machine translation

    As MT quality continues to improve, more and more translators are switching from traditional translation from scratch to post-editing (PE) of MT output, which has been shown to save time and reduce errors. Instead of mainly generating text, translators are now asked to correct errors within otherwise helpful translation proposals, where repetitive MT errors make the process tiresome and hard-to-spot errors make PE a cognitively demanding activity. Our contribution is three-fold: first, we explore whether interaction modalities other than mouse and keyboard could better support PE by creating and testing the MMPE translation environment. MMPE allows translators to cross out or hand-write text, drag and drop words for reordering, use spoken commands or hand gestures to manipulate text, or combine any of these input modalities. Second, our interviews revealed that translators see value in automatically receiving additional translation support when a high cognitive load (CL) is detected during PE. We therefore developed a sensor framework using a wide range of physiological and behavioral data to estimate perceived CL and tested it in three studies, showing that multi-modal eye, heart, and skin measures can be used to make translation environments cognition-aware. Third, we present two multi-encoder Transformer architectures for automatic post-editing (APE) and discuss how these can adapt MT output to a domain and thereby avoid correcting repetitive MT errors. Funded by Deutsche Forschungsgemeinschaft (DFG), Projekt MMP
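    To illustrate how multi-modal eye, heart, and skin measures might be fused into a single cognitive-load estimate, here is a minimal sketch. The feature names, z-score normalization, and uniform weighting are illustrative assumptions, not the dissertation's actual sensor model.

```python
from statistics import mean, stdev

def estimate_load(windows, weights=None):
    """Fuse multi-modal sensor features into a per-window load score.

    `windows` maps a feature name (e.g. pupil diameter, heart rate,
    skin conductance) to an equal-length list of per-window values.
    Each feature is z-normalized, then a weighted average is taken;
    uniform weights are used unless others are given.
    """
    names = list(windows)
    weights = weights or {n: 1.0 for n in names}
    z = {}
    for n in names:
        vals = windows[n]
        m, s = mean(vals), stdev(vals)
        # Guard against constant signals (zero standard deviation)
        z[n] = [(v - m) / s if s else 0.0 for v in vals]
    total = sum(weights[n] for n in names)
    n_windows = len(windows[names[0]])
    return [sum(weights[n] * z[n][i] for n in names) / total
            for i in range(n_windows)]
```

    A cognition-aware editor could then trigger additional translation support whenever the fused score for the current window exceeds a calibrated threshold.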

    Handwriting Differences in Individuals with Presence and Absence of Antisocial Behaviour

    Every literate person has their own distinctive, deeply ingrained handwriting characteristics. Handwriting can be seen as a picture of ongoing inner conflicts, and it might be used as a projective test to examine aspects that a person resists sharing or is generally unaware of. This study attempts to find out whether it is possible to identify criminal tendencies from a person's handwriting, because antisocial behaviours are highly prevalent among children and adolescents as well as adults. Once these behaviours reach clinical significance, they place a high burden on the individual, their immediate surroundings, and society in general; better insight into the correlates of antisocial behaviour is therefore required in order to develop adequate prevention and intervention methods matched to an individual's personal risk of engaging in antisocial behaviour and the associated risk factors. In this study, 25 handwriting samples (22 males and 3 females) of individuals with high antisocial behaviour were analysed and compared with those of people with low antisocial behaviour. The study concludes that, from a graphologist's point of view, the writing of an individual with high antisocial behaviour can typically be described as rather trite, with little rhythm, inflexible, dull, and abundant in abnormalities.

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that makes it accessible, interpretable, and applicable for system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.

    Diverse Contributions to Implicit Human-Computer Interaction

    When people interact with computers, much information is provided unintentionally. By studying these implicit interactions it is possible to understand which user interface features are beneficial (or not), thereby deriving implications for the design of future interactive systems. The main advantage of leveraging implicit user data in computer applications is that any interaction with the system can contribute to improving its usefulness. Moreover, such data remove the cost of having to interrupt users so that they explicitly submit information on a topic that, in principle, need not be related to their intention to use the system. On the other hand, implicit interactions sometimes do not provide clear, concrete data, so special attention must be paid to how this source of information is managed. The purpose of this research is twofold: 1) to apply a new vision to both the design and the development of applications that can react appropriately to users' implicit interactions, and 2) to provide a set of methodologies for the evaluation of such interactive systems. Five scenarios illustrate the feasibility and suitability of the thesis framework. Empirical results with real users show that leveraging implicit interaction is both an adequate and a convenient means of improving interactive systems in multiple ways. Leiva Torres, LA. (2012). Diverse Contributions to Implicit Human-Computer Interaction [unpublished doctoral thesis]. Universitat PolitĂšcnica de ValĂšncia. https://doi.org/10.4995/Thesis/10251/17803
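    One concrete example of an implicit interaction signal is hover dwell time: how long the pointer rests over a UI element without the user explicitly providing feedback. The sketch below accumulates dwell per element as an implicit interest score; the event format (element id plus enter/leave timestamps) is an illustrative assumption, not the thesis's actual instrumentation.

```python
from collections import defaultdict

class DwellTracker:
    """Accumulate hover dwell time per UI element as an implicit
    interest signal, without interrupting the user."""

    def __init__(self):
        self.dwell = defaultdict(float)

    def record(self, element, enter_ts, leave_ts):
        # Timestamps in seconds; negative intervals are clamped to zero.
        self.dwell[element] += max(0.0, leave_ts - enter_ts)

    def ranked(self):
        # Elements with the most accumulated dwell first.
        return sorted(self.dwell, key=self.dwell.get, reverse=True)
```

    Feeding such scores back into the interface (e.g. promoting frequently dwelled-on items) is one way any interaction with the system can contribute to improving its usefulness.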

    Multimodal interaction for deliberate practice


    The Dollar General: Continuous Custom Gesture Recognition Techniques At Everyday Low Prices

    Humans use gestures to emphasize ideas and disseminate information. Their importance is apparent in how we continuously augment social interactions with motion—gesticulating in harmony with nearly every utterance to ensure observers understand that which we wish to communicate, and their relevance has not escaped the HCI community's attention. For almost as long as computers have been able to sample human motion at the user interface boundary, software systems have been made to understand gestures as command metaphors. Customization, in particular, has great potential to improve user experience, whereby users map specific gestures to specific software functions. However, custom gesture recognition remains a challenging problem, especially when training data is limited, input is continuous, and designers who wish to use customization in their software are limited by mathematical attainment, machine learning experience, domain knowledge, or a combination thereof. Data collection, filtering, segmentation, pattern matching, synthesis, and rejection analysis are all non-trivial problems a gesture recognition system must solve. To address these issues, we introduce The Dollar General (TDG), a complete pipeline composed of several novel continuous custom gesture recognition techniques. Specifically, TDG comprises an automatic low-pass filter tuner that we use to improve signal quality, a segmenter for identifying gesture candidates in a continuous input stream, a classifier for discriminating gesture candidates from non-gesture motions, and a synthetic data generation module we use to train the classifier. Our system achieves high recognition accuracy with as little as one or two training samples per gesture class, is largely input device agnostic, and does not require advanced mathematical knowledge to understand and implement. In this dissertation, we motivate the importance of gestures and customization, describe each pipeline component in detail, and introduce strategies for data collection and prototype selection.
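    To make the template-matching idea behind such few-shot custom recognizers concrete, here is a minimal sketch in the spirit of the $-family of recognizers: resample each stroke to a fixed number of points, normalize for position and scale, and pick the template with the smallest average point-to-point distance. This is a generic illustration, not TDG's actual pipeline (which additionally handles filtering, segmentation, rejection, and synthetic training data).

```python
import math

def _resample(points, n=32):
    """Resample a stroke to n roughly equidistant points along its arc length."""
    lengths = [math.dist(a, b) for a, b in zip(points, points[1:])]
    total = sum(lengths)
    if total == 0:
        return [points[0]] * n
    step = total / (n - 1)
    pts, out, acc, i = list(points), [points[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def _normalize(points):
    """Translate the centroid to the origin and scale to unit size."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    size = max(max(abs(x) for x, _ in pts),
               max(abs(y) for _, y in pts)) or 1.0
    return [(x / size, y / size) for x, y in pts]

def classify(candidate, templates):
    """Return the label of the template closest to the candidate stroke."""
    c = _normalize(_resample(candidate))
    best, best_d = None, float("inf")
    for label, tpl in templates.items():
        t = _normalize(_resample(tpl))
        d = sum(math.dist(p, q) for p, q in zip(c, t)) / len(c)
        if d < best_d:
            best, best_d = label, d
    return best
```

    Because each class needs only one stored template, this style of matcher works with a single training sample per gesture, which is the regime the dissertation targets; continuous input additionally requires the segmentation and rejection stages described above.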
