7 research outputs found

    HUMAN: Hierarchical Universal Modular ANnotator

    Many real-world phenomena are complex and cannot be captured by single-task annotations. This creates a need for subsequent annotations with interdependent questions and answers that describe the nature of the subject at hand. Even when a phenomenon is easily captured by a single task, the high specialisation of most annotation tools can mean having to switch to another tool if the task changes only slightly. We introduce HUMAN, a novel web-based annotation tool that addresses these problems by a) covering a variety of annotation tasks on both textual and image data, and b) using an internal deterministic state machine that allows the researcher to chain different annotation tasks in an interdependent manner. Further, the modular nature of the tool makes it easy to define new annotation tasks and integrate machine learning algorithms, e.g. for active learning. HUMAN comes with an easy-to-use graphical user interface that simplifies the annotation task and its management. (Comment: 7 pages, 4 figures, EMNLP 2020 Demonstrations)
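
    A minimal sketch of how such state-machine-driven chaining of interdependent annotation tasks could look (task names, questions, and transitions are purely illustrative; this is not HUMAN's actual API):

```python
# Minimal sketch of chaining interdependent annotation tasks with a
# deterministic state machine (hypothetical; not HUMAN's actual API).
from dataclasses import dataclass, field

@dataclass
class AnnotationState:
    name: str                                         # e.g. "label_sentiment"
    question: str                                      # prompt shown to the annotator
    transitions: dict = field(default_factory=dict)    # answer -> next state name

STATES = {
    "is_relevant": AnnotationState(
        "is_relevant", "Is this text relevant?",
        {"yes": "label_sentiment", "no": "done"}),
    "label_sentiment": AnnotationState(
        "label_sentiment", "What sentiment does it express?",
        {"positive": "done", "negative": "done", "neutral": "done"}),
    "done": AnnotationState("done", "", {}),
}

def annotate(sample, answer_fn, start="is_relevant"):
    """Walk the state machine for one sample; answer_fn stands in for the annotator."""
    state, labels = STATES[start], {}
    while state.name != "done":
        answer = answer_fn(sample, state.question)
        labels[state.name] = answer
        state = STATES[state.transitions[answer]]
    return labels

# Example: a scripted "annotator" for demonstration only.
print(annotate("Great product!", lambda s, q: "yes" if "relevant" in q else "positive"))
```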

    ForDigitStress: A multi-modal stress dataset employing a digital job interview scenario

    We present a multi-modal stress dataset that uses digital job interviews to induce stress. The dataset provides multi-modal data from 40 participants, including audio, video (motion capturing, facial recognition, eye tracking), and physiological information (photoplethysmography, electrodermal activity). In addition, the dataset contains time-continuous annotations for stress and for the emotions that occurred (e.g. shame, anger, anxiety, surprise). To establish a baseline, five different machine learning classifiers (Support Vector Machine, K-Nearest Neighbors, Random Forest, Long Short-Term Memory network) were trained and evaluated on the proposed dataset for a binary stress classification task. The best-performing classifier achieved an accuracy of 88.3% and an F1-score of 87.5%.
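
    A minimal sketch of one such baseline (a Random Forest on pre-extracted physiological features for binary stress classification; the feature matrix and labels are synthetic placeholders, not the dataset's actual format):

```python
# Sketch of a binary stress-classification baseline in the spirit of the
# reported experiments (Random Forest shown; features are placeholders,
# e.g. statistics of PPG and EDA windows per sample).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 16))            # placeholder feature vectors
y = rng.integers(0, 2, size=400)          # 0 = no stress, 1 = stress

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.3f}  f1={f1_score(y_te, pred):.3f}")
```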

    Behavioral patterns in robotic collaborative assembly: comparing neurotypical and Autism Spectrum Disorder participants

    Introduction: In Industry 4.0, collaborative tasks often involve operators working with collaborative robots (cobots) in shared workspaces. Many aspects of the operator's well-being within this environment still need in-depth research, and these aspects are expected to differ between neurotypical (NT) and Autism Spectrum Disorder (ASD) operators.
    Methods: This study examines behavioral patterns in 16 participants (eight neurotypical, eight with high-functioning ASD) during an assembly task in an industry-like, lab-based robotic collaborative cell, enabling the detection of potential risks to their well-being during industrial human-robot collaboration. Each participant worked on the task for five consecutive days, 3.5 h per day. During these sessions, six video clips of 10 min each were recorded for each participant. The videos were used to extract quantitative behavioral data with the NOVA annotation tool and were analyzed qualitatively using an ad-hoc observational grid. The researchers also took unstructured notes of the observed behaviors during the work sessions; these notes were analyzed qualitatively.
    Results: The two groups differ mainly in behavior (e.g., prioritizing the robot partner, gaze patterns, facial expressions, multi-tasking, and personal space), in how they adapt to the task over time, and in the resulting overall performance.
    Discussion: This result confirms that NT and ASD participants in a collaborative shared workspace have different needs and that the working experience should be tailored to the end-user's characteristics. The findings of this study represent a starting point for further efforts to promote well-being in the workplace. To the best of our knowledge, this is the first work comparing NT and ASD participants in a collaborative industrial scenario.

    Affective Game Computing: A Survey

    This paper surveys the current state of the art in affective computing principles, methods, and tools as applied to games. We review this emerging field, namely affective game computing, through the lens of the four core phases of the affective loop: game affect elicitation, game affect sensing, game affect detection, and game affect adaptation. In addition, we provide a taxonomy of terms, methods, and approaches used across the four phases of the affective game loop and situate the field within this taxonomy. We continue with a comprehensive review of available affect data collection methods with regard to gaming interfaces, sensors, annotation protocols, and available corpora. The paper concludes with a discussion of the current limitations of affective game computing and our vision for the most promising future research directions in the field.
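
    As a rough illustration of the four-phase loop described above, the following skeleton wires elicitation, sensing, detection, and adaptation together (all names and the toy signal are hypothetical and do not come from the survey):

```python
# Illustrative skeleton of the affective game loop:
# elicitation -> sensing -> detection -> adaptation (names are placeholders).
import random

def elicit(game_state):
    """Game affect elicitation: the game presents a stimulus (e.g. a difficulty spike)."""
    game_state["difficulty"] += 1
    return game_state

def sense():
    """Game affect sensing: read raw signals (here a fake arousal measurement)."""
    return {"arousal": random.random()}

def detect(signals):
    """Game affect detection: map signals to an affective state."""
    return "frustrated" if signals["arousal"] > 0.7 else "engaged"

def adapt(game_state, affect):
    """Game affect adaptation: adjust the game in response to detected affect."""
    if affect == "frustrated":
        game_state["difficulty"] = max(1, game_state["difficulty"] - 1)
    return game_state

game_state = {"difficulty": 3}
for _ in range(5):                       # a few iterations of the loop
    game_state = elicit(game_state)
    affect = detect(sense())
    game_state = adapt(game_state, affect)
    print(affect, game_state)
```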

    Towards Robust and Deployable Gesture and Activity Recognisers

    Smartphones and wearables have become an extension of one's self, with gestures providing quick access to command execution and activity tracking helping users log their daily life. Recent research in gesture recognition shows that common events, such as a user re-wearing or readjusting their smartwatch, deteriorate recognition accuracy significantly. Further, the available state-of-the-art deep learning models for gesture or activity recognition are too large and computationally heavy to be deployed and run continuously in the background. This problem of engineering robust yet deployable gesture recognisers for use in wearables is open-ended. This thesis provides a review of known approaches in machine learning and human activity recognition (HAR) for addressing model robustness. It also proposes variations of convolution-based models for use with raw or spectrogram sensor data. Finally, a cross-validation-based evaluation approach for quantifying individual and situational variabilities is used to demonstrate that, with an application-oriented design, models can be made two orders of magnitude smaller while improving on both recognition accuracy and robustness.
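
    A minimal sketch of a compact convolution-based recogniser over raw inertial sensor windows (window length, layer sizes, and class count are illustrative assumptions, not the thesis's actual models):

```python
# Sketch of a small 1D-CNN gesture recogniser over raw IMU windows
# (all shapes and hyperparameters are illustrative only).
import torch
import torch.nn as nn

class TinyGestureNet(nn.Module):
    def __init__(self, channels=6, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # keeps model size independent of window length
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = TinyGestureNet()
window = torch.randn(8, 6, 128)        # 8 windows of 128 samples, 6 IMU axes
logits = model(window)                 # -> (8, 10)
print(sum(p.numel() for p in model.parameters()), "parameters")
```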

    NOVA - a tool for eXplainable Cooperative Machine Learning

    In this paper, we introduce a next-generation annotation tool called NOVA, which implements a workflow that interactively incorporates the `human in the loop'. In particular, NOVA offers a collaborative annotation backend where multiple annotators can join forces. A main aspect of NOVA is the possibility of applying semi-supervised active learning, where machine learning techniques are used already during the annotation process to pre-label data automatically. Furthermore, NOVA implements recent eXplainable AI (XAI) techniques to provide users with both a confidence value for the automatically predicted annotations and a visual explanation. This way, annotators get to understand whether they can trust their ML models or whether more annotated data is necessary.
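
    A minimal sketch of the pre-labelling idea: a model trained on the already-annotated portion of the data proposes labels for the rest, together with a confidence value the annotator can use to decide whether to trust the suggestion (illustrative only; not NOVA's actual implementation):

```python
# Sketch of confidence-aware pre-labelling for cooperative annotation
# (synthetic data; not NOVA's actual implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y_true = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)

labelled = np.arange(40)                  # what annotators have labelled so far
unlabelled = np.arange(40, 200)

model = LogisticRegression().fit(X[labelled], y_true[labelled])
proba = model.predict_proba(X[unlabelled])
pre_labels = proba.argmax(axis=1)
confidence = proba.max(axis=1)

# Only suggestions above a confidence threshold are shown as pre-labels;
# low-confidence samples are routed back to the human annotator.
threshold = 0.8
for idx, lab, conf in zip(unlabelled[:5], pre_labels[:5], confidence[:5]):
    status = "pre-labelled" if conf >= threshold else "needs manual annotation"
    print(f"sample {idx}: label={lab} confidence={conf:.2f} -> {status}")
```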