6,901 research outputs found

    Towards virtual communities on the Web: Actors and audience

    We report on ongoing research in a virtual reality environment where visitors can interact with agents that help them obtain information, perform certain transactions, and collaborate with them in order to get tasks done. Our environment models a theatre in our hometown. We discuss attempts to let this environment evolve into a theatre community in which we have not only goal-directed visitors, but also visitors who are not sure whether they want to buy or just want information, and visitors who simply want to look around. We show that a multi-user and multi-agent environment is needed to realize our goals. Since our environment models a theatre, it is also interesting to investigate the roles of performers and audience in this environment. For that reason we discuss capabilities and personalities of agents. Some notes on the historical development of networked communities are included

    Can't read my broker face?—Tracing a motif and metaphor of expert knowledge through audiovisual images of the financial crisis

    Based on the question of the representability of economy and economics in audiovisual media, developments on the financial markets have often been discussed as a depiction problem. The abstractness and complexity of economic interrelations seem to defy classical modes of storytelling and dramatization. Nevertheless, public opinion about economic changes and dependencies crucially relies on audiovisual media. But how can the public communicate in images, sounds, and words about forces that are out of sight and out of reach, and can supposedly only be adequately grasped by experts? In a case study on audiovisual images of the global financial crisis (2007–), this paper tracks and analyzes a recurring motif: the staging of expert knowledge as close-ups of expressive faces vis-à-vis computer screens in television news, documentaries, as well as feature films. It draws on the use of digital tools for corpus exploration (reverse image search) and the visualization of video annotations. By relating and comparing different staging strategies by which these “broker faces” become embodiments of turbulent market dynamics, the paper proposes to not regard them as repeated instantiations of the same metaphor, but as a developing web of cinematic metaphors. Different perspectives (news of market developments or historical accounts of crisis developments) and affective stances toward the global financial crisis are expressed in these variations of the face-screen constellation. The paper thus presents a selection of different appearances of “broker faces” as a medium for an audiovisual discourse of the global financial crisis. A concluding analysis of a scene from Margin Call focuses on its specific intertwining of expert and screen as an ambivalent movement figuration of staging insight. Between the feeling of discovery (of a potential future threat) and the sense of being haunted (by a menacing force), the film stages the emergence of a “broker face” in an atmospheric tension between suspense and melancholy. We argue that the film thereby reframes the motif and poses questions of agency, temporality, and expert knowledge

    (re)new configurations: Beyond the HCI/Art Challenge: Curating re-new 2011


    Re-new - IMAC 2011 Proceedings


    An Interactive Narrative Architecture Based on Filmmaking Theory

    Designing and developing an interactive narrative experience includes developing the story content as well as a visual composition plan for realizing that content on screen. Theatre directors, filmmakers, and animators have emphasized the importance of visual design: choices of character placement, lighting configuration, and camera movement have been documented to have a direct impact on communicating the narrative, evoking emotions and moods, and engaging viewers. Many research projects have focused on adapting the narrative content to the interaction, yet little attention has been given to adapting the visual presentation. In this paper, I present a new approach to interactive narrative, one based on filmmaking theory. I propose an interactive narrative architecture that, in addition to dynamically selecting narrative events to suit the continuously changing situation, automatically reconfigures the visual design in real time, integrating camera movements, lighting modulation, and character movements. The architecture utilizes rules extracted from filmmaking, cinematography, and visual arts theories. I argue that such adaptation will lead to increased engagement and an enriched interactive narrative experience
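    A minimal sketch of what one rule-based visual reconfiguration step of this kind could look like is given below; the narrative-state fields (tension, focus character, mood), the thresholds, and the staging heuristics are invented for illustration and are not taken from the paper's architecture.

        # Hypothetical sketch of a single visual reconfiguration step driven by
        # filmmaking-style rules. All names, fields, and thresholds are illustrative.
        from dataclasses import dataclass

        @dataclass
        class NarrativeState:
            tension: float        # 0.0 (calm) to 1.0 (climactic)
            focus_character: str  # character the current event centres on
            mood: str             # e.g. "hopeful", "ominous"

        @dataclass
        class VisualPlan:
            camera: str
            lighting: str
            blocking: str

        def reconfigure_visuals(state: NarrativeState) -> VisualPlan:
            """Map the current narrative state to camera, lighting, and staging choices."""
            if state.tension > 0.7:
                # Cinematography heuristic: tighter framing and isolation at high tension.
                camera = f"close-up on {state.focus_character}"
                blocking = "isolate the focus character downstage"
            else:
                camera = f"medium shot of {state.focus_character}"
                blocking = "group characters at conversational distance"
            # Lighting heuristic: low-key lighting for ominous moods, high-key otherwise.
            lighting = "low-key, hard shadows" if state.mood == "ominous" else "high-key, soft fill"
            return VisualPlan(camera, lighting, blocking)

        print(reconfigure_visuals(NarrativeState(0.8, "Ada", "ominous")))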

    Adoption of AI Technology in the Music Mixing Workflow: An Investigation

    The integration of artificial intelligence (AI) technology in the music industry is driving a significant change in the way music is composed, produced, and mixed. This study investigates the current state of AI in mixing workflows and its adoption by different user groups. Through semi-structured interviews, a questionnaire-based study, and an analysis of web forums, the study confirms three user groups: amateurs, pro-ams, and professionals. Our findings show that while AI mixing tools can simplify the process and provide decent results for amateurs, pro-ams seek precise control and customization options, and professionals desire such control and customization in addition to assistive and collaborative technologies. The study provides strategies for designing effective AI mixing tools for different user groups and outlines future directions

    Universal Sleep Decoder: Aligning awake and sleep neural representation across subjects

    Decoding memory content from brain activity during sleep has long been a goal in neuroscience. While spontaneous reactivation of memories during sleep in rodents is known to support memory consolidation and offline learning, capturing memory replay in humans is challenging due to the absence of well-annotated sleep datasets and the substantial differences in neural patterns between wakefulness and sleep. To address these challenges, we designed a novel cognitive neuroscience experiment and collected a comprehensive, well-annotated electroencephalography (EEG) dataset from 52 subjects during both wakefulness and sleep. Leveraging this benchmark dataset, we developed the Universal Sleep Decoder (USD) to align neural representations between wakefulness and sleep across subjects. Our model achieves up to 16.6% top-1 zero-shot accuracy on unseen subjects, comparable to the decoding performance obtained with individual sleep data. Furthermore, fine-tuning USD on test subjects raises top-1 accuracy to 25.9%, a substantial improvement over the 6.7% chance level. Model comparison and ablation analyses reveal that our design choices, including (i) an additional contrastive objective to integrate awake and sleep neural signals and (ii) the pretrain-finetune paradigm to incorporate different subjects, contribute significantly to this performance. Collectively, our findings and methodologies represent a significant advancement in the field of sleep decoding
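    The two design choices named in the abstract can be illustrated with a short PyTorch sketch, shown below under stated assumptions: the toy encoder, window shapes, and batch size are invented, and the pairing of the i-th awake and i-th sleep window as positives is only the generic InfoNCE-style pattern, not the USD architecture itself. Fine-tuning would simply reuse the pretrained encoder on a new subject's data.

        # Toy sketch: (i) contrastive alignment of awake and sleep EEG embeddings,
        # (ii) a pretrained encoder that can later be fine-tuned on a new subject.
        # Encoder size, window shape, and batch size are illustrative assumptions.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class EEGEncoder(nn.Module):
            """Stand-in encoder: flattens an EEG window and projects it to a unit-norm embedding."""
            def __init__(self, n_channels=64, n_samples=250, dim=128):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(n_channels * n_samples, 256),
                    nn.ReLU(),
                    nn.Linear(256, dim),
                )

            def forward(self, x):
                return F.normalize(self.net(x), dim=-1)

        def contrastive_align(awake_z, sleep_z, tau=0.1):
            """InfoNCE-style loss: the i-th awake and i-th sleep window form a positive pair."""
            logits = awake_z @ sleep_z.t() / tau      # (B, B) cosine-similarity matrix
            targets = torch.arange(awake_z.size(0))   # matching pairs lie on the diagonal
            return F.cross_entropy(logits, targets)

        encoder = EEGEncoder()
        awake = torch.randn(8, 64, 250)   # batch of awake EEG windows
        sleep = torch.randn(8, 64, 250)   # sleep windows assumed to share content labels
        loss = contrastive_align(encoder(awake), encoder(sleep))
        loss.backward()                   # pretraining step; fine-tuning would reuse `encoder`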

    Metadata annotation for dramatic texts

    This paper addresses the problem of metadata annotation for dramatic texts. Metadata for drama describe the dramatic qualities of a text, connecting them with the linguistic expressions. Relying on an ontological representation of the dramatic qualities, the paper presents a proposal for the creation of a corpus of annotated dramatic texts
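    As a rough illustration of the kind of record such an annotated corpus might contain, the sketch below ties one linguistic expression to an ontology term for a dramatic quality; the field names, the example text, and the ontology IRI are invented placeholders, not the representation proposed in the paper.

        # Hypothetical annotation record linking a dramatic quality (ontology term)
        # to the linguistic expression that conveys it in a specific text.
        from dataclasses import dataclass

        @dataclass
        class DramaAnnotation:
            text_id: str      # identifier of the dramatic text in the corpus
            span: tuple       # (start, end) character offsets of the annotated expression
            expression: str   # the linguistic expression itself
            quality: str      # ontology term for the dramatic quality (placeholder IRI)

        example = DramaAnnotation(
            text_id="hamlet_act3_scene1",
            span=(0, 41),
            expression="To be, or not to be, that is the question",
            quality="http://example.org/drama-ontology#InnerConflict",
        )
        print(example)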

    Emotional avatars
