
    ASMRcade: interactive audio triggers for an autonomous sensory meridian response

    Autonomous Sensory Meridian Response (ASMR) is a sensory phenomenon involving pleasurable tingling sensations in response to stimuli such as whispering, tapping, and hair brushing. It is increasingly used to promote health and well-being, help with sleep, and reduce stress and anxiety. ASMR triggers are both highly individual and highly varied. Consequently, finding or identifying suitable ASMR content, e.g., by searching online platforms, can take time and effort. This work addresses this challenge by introducing a novel interactive approach that lets users generate personalized ASMR sounds. The presented system utilizes a generative adversarial network (GAN) for sound generation and a graphical user interface (GUI) for user control. Our system allows users to create and manipulate audio samples by interacting with a visual representation of the GAN’s latent input vector. Finally, we present the results of a first user study, which indicate that our approach is suitable for triggering ASMR experiences.
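
    To illustrate the interaction concept described above, here is a minimal sketch of manipulating a GAN's latent input vector in PyTorch. The generator is an untrained stand-in, and all names, shapes, and the slider mapping are hypothetical rather than the paper's actual model:

```python
import torch
import torch.nn as nn

# Untrained stand-in for an audio GAN generator (assumption: the real
# model maps a latent vector to a fixed-length waveform).
class ToyAudioGenerator(nn.Module):
    def __init__(self, latent_dim=128, n_samples=16384):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512),
            nn.ReLU(),
            nn.Linear(512, n_samples),
            nn.Tanh(),  # waveform amplitudes in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

generator = ToyAudioGenerator()
z = torch.randn(1, 128)  # initial latent vector

def move_slider(z, dim, value):
    """A GUI control could map user input to one latent dimension like this."""
    z = z.clone()
    z[0, dim] = value
    return z

z_edited = move_slider(z, dim=7, value=2.5)
with torch.no_grad():
    waveform = generator(z_edited)  # new sound candidate
print(waveform.shape)  # torch.Size([1, 16384])
```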

    This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning

    With the ongoing rise of machine learning, the need for methods that explain decisions made by artificial intelligence systems is becoming an increasingly important topic. Especially for image classification tasks, many state-of-the-art tools for explaining such classifiers rely on visual highlighting of important areas of the input data. In contrast, counterfactual explanation systems aim to enable counterfactual reasoning by modifying the input image such that the classifier would have made a different prediction. In doing so, counterfactual explanation systems equip users with a fundamentally different kind of explanatory information. However, methods for generating realistic counterfactual explanations for image classifiers are still rare. In this work, we present a novel approach to generating such counterfactual image explanations based on adversarial image-to-image translation techniques. Additionally, we conduct a user study to evaluate our approach in a use case inspired by a healthcare scenario. Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems that work with saliency maps, namely LIME and LRP.
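
    As a rough illustration of the inference side of such a system, the sketch below shows how a translation-based counterfactual could be produced and checked against a classifier. Both networks are untrained stand-ins with hypothetical shapes; this is not the approach's actual architecture:

```python
import torch
import torch.nn as nn

# Untrained stand-ins: a binary image classifier and a generator that
# translates an image toward the opposite class (assumption: the real
# system trains the translator adversarially on image-to-image pairs).
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
translator = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1), nn.Tanh())

x = torch.rand(1, 3, 64, 64)  # input image

with torch.no_grad():
    original_pred = classifier(x).argmax(dim=1)
    x_cf = translator(x)                      # candidate counterfactual
    cf_pred = classifier(x_cf).argmax(dim=1)  # should flip after training

# A valid counterfactual explanation flips the classifier's decision
# while staying visually close to the original input.
print(f"original: {original_pred.item()}, counterfactual: {cf_pred.item()}")
```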

    XR composition in the wild: the impact of user environments on creativity, UX and flow during music production in augmented reality

    With the advent of HMD-based Mixed Reality, or “Spatial Computing” as framed by Apple, creativity- and productivity-related use cases in XR, such as music production, are rising in popularity. However, even though the importance of environments for creativity is well understood, XR applications for creative use cases are often evaluated in laboratories. While the mismatch between familiar creative spaces and lab environments matters little in VR, its effect on evaluation metrics for AR applications remains unclear. To this end, we conducted an experiment in which participants composed and produced music on an AR HMD in both their preferred creative environments and a typical laboratory environment. The questionnaire data showed similar scores for user experience, flow, and creative experience in both conditions, suggesting that supervised evaluation of AR-based creativity support applications requires neither mobile demonstrators nor freely selectable environments. Based on qualitative feedback and overall high scores for UX and flow, we discuss our observations and their implications and emphasize the need for field studies in the XR creativity domain.

    Flow with the beat! Human-centered design of virtual environments for musical creativity support in VR

    As previous studies have shown, the environment of creative people can have a significant impact on their creative process and thus on their creations. However, with the advent of digital tools such as virtual instruments and digital audio workstations, more and more creative work is digital and decoupled from the creator’s environment. Virtual Reality technologies open up new possibilities here, as creative tools can seamlessly merge with any virtual environment users find themselves in. This paper reports on the human-centered design process of a VR application that aims to meet users’ individual needs and support their creativity while composing percussive beats in virtual environments. For this purpose, we derived factors that influence creativity from the literature and conducted focus group interviews to learn how virtual environments and 3D user interfaces can be designed for creativity support. In a subsequent laboratory study, we let users interact with a virtual step sequencer UI in virtual environments that were either customizable or fixed. By analyzing post-test ratings from music experts, self-report questionnaires, and user behavior data, we examined the effects of customizable virtual environments on user creativity, user experience, flow, and subjective creativity support scales. While we did not observe a significant impact of this independent variable on user creativity, user experience, or flow, we found that users had specific individual needs regarding their virtual surroundings and strongly preferred customizable virtual environments, even though the fixed virtual environment was designed to be creatively stimulating. We also observed consistently high flow and user experience ratings, which supports the human-centered design of VR-based creativity support tools in a musical context.
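
    For readers unfamiliar with step sequencers, the sketch below shows the kind of data structure and playback loop that typically backs such a UI. It is a generic illustration with hypothetical track names, not the study's VR implementation:

```python
import time

# Minimal step sequencer: a 4-track x 16-step boolean grid.
TRACKS = ["kick", "snare", "hat", "clap"]
pattern = {t: [False] * 16 for t in TRACKS}
pattern["kick"][0] = pattern["kick"][8] = True
pattern["hat"] = [i % 2 == 0 for i in range(16)]

bpm = 120
step_seconds = 60 / bpm / 4  # one step per 16th note

# Playback loop: at each step, trigger every track whose cell is active.
for step in range(16):
    hits = [t for t in TRACKS if pattern[t][step]]
    if hits:
        print(f"step {step:2d}: trigger {', '.join(hits)}")
    time.sleep(step_seconds)
```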

    The AffectToolbox: Affect Analysis for Everyone

    In the field of affective computing, where research continually advances at a rapid pace, the demand for user-friendly tools has become increasingly apparent. In this paper, we present the AffectToolbox, a novel software system that aims to support researchers in developing affect-sensitive studies and prototypes. The proposed system addresses the challenges posed by existing frameworks, which often require profound programming knowledge and cater primarily to power users or skilled developers. To facilitate ease of use, the AffectToolbox requires no programming knowledge and offers its functionality for reliably analyzing the affective state of users through an accessible graphical user interface. The architecture encompasses a variety of models for emotion recognition across multiple affective channels and modalities, as well as an elaborate fusion system to merge multi-modal assessments into a unified result. The entire system is open-sourced and will be publicly available to ensure easy integration into more complex applications through a well-structured, Python-based code base, thereby marking a substantial contribution toward advancing affective computing research and fostering a more collaborative and inclusive environment within this interdisciplinary field.
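
    The abstract does not specify the fusion scheme, so purely as an illustration, the sketch below shows one plausible approach: confidence-weighted averaging of per-modality valence/arousal estimates. The names and API are hypothetical, not the AffectToolbox's actual interface:

```python
from dataclasses import dataclass

@dataclass
class AffectEstimate:
    valence: float     # -1 (negative) .. 1 (positive)
    arousal: float     # -1 (calm) .. 1 (excited)
    confidence: float  # 0 .. 1, per-modality reliability

def fuse(estimates):
    """Confidence-weighted late fusion of per-modality affect estimates."""
    total = sum(e.confidence for e in estimates)
    if total == 0:
        return AffectEstimate(0.0, 0.0, 0.0)
    v = sum(e.valence * e.confidence for e in estimates) / total
    a = sum(e.arousal * e.confidence for e in estimates) / total
    return AffectEstimate(v, a, total / len(estimates))

# e.g. face, voice, and text channels disagreeing slightly:
face = AffectEstimate(valence=0.6, arousal=0.2, confidence=0.9)
voice = AffectEstimate(valence=0.3, arousal=0.5, confidence=0.6)
text = AffectEstimate(valence=-0.1, arousal=0.0, confidence=0.3)
print(fuse([face, voice, text]))
```

    Weighting by per-modality confidence is a common baseline for late fusion; the toolbox's own fusion system is described as more elaborate and may well differ.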

    Anonymization of faces: technical and legal perspectives

    This paper explores face anonymization techniques in the context of the General Data Protection Regulation (GDPR) amidst growing privacy concerns due to the widespread use of personal data in machine learning. We focus on unstructured data, specifically facial data, and discuss two approaches to assessing re-identification risks: the risk-based approach supported by the GDPR and the zero-risk, or strict, approach. Emphasizing a process-oriented perspective, we argue that face anonymization should consider the overall data processing context, including the actors involved and the measures taken, to achieve legally secure anonymization under the GDPR’s stringent requirements.

    On the potential of modular voice conversion for virtual agents


    A machine learning-driven interactive training system for extreme vocal techniques

    The scarcity of vocal instructors proficient in extreme vocal techniques and the lack of individualized feedback present challenges for novices learning these techniques. Therefore, this work explores the use of neural networks to provide real-time feedback for extreme vocal techniques within an interactive training system. An Extreme-Vocal dataset created for this purpose served as the basis for training a model capable of classifying False-Cord screams, Fry screams, and a residual class. The neural network achieved an overall accuracy of 0.83. We integrated the model into a user application to enable real-time visualization of classification results. In a first qualitative user study involving 12 participants, we investigated whether interacting with the training system could enhance self-efficacy regarding the correct application of extreme vocal techniques. Our study participants indicated that they found the training system helpful for learning and categorizing extreme vocal techniques.
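
    A minimal sketch of the real-time feedback loop implied above, using a toy feature extractor and an untrained stand-in classifier; only the three class labels come from the abstract, everything else is a hypothetical illustration:

```python
import numpy as np

CLASSES = ["false_cord_scream", "fry_scream", "other"]  # from the abstract

def extract_features(frame: np.ndarray) -> np.ndarray:
    """Toy per-frame features (assumption: the real system likely uses
    spectral features such as MFCCs rather than these statistics)."""
    return np.array([frame.mean(), frame.std(), np.abs(np.diff(frame)).mean()])

# Untrained stand-in for the trained neural network: a random linear layer.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))

def classify(frame: np.ndarray) -> str:
    logits = W @ extract_features(frame)
    return CLASSES[int(np.argmax(logits))]

# Simulated real-time loop over 1024-sample frames of microphone input;
# a real application would display this label in its visualization UI.
for _ in range(4):
    frame = rng.normal(size=1024).astype(np.float32)
    print("feedback:", classify(frame))
```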

    Socially-aware personality adaptation


    Jamming in MR: towards real-time music collaboration in mixed reality
