    Moveable worlds/digital scenographies

    The mixed reality choreographic installation UKIYO explored in this article reflects an interest in scenographic practices that connect physical space to virtual worlds and explore how performers can move between material and immaterial spaces. The spatial design for UKIYO is inspired by Japanese hanamichi and western fashion runways, emphasizing the research production company's commitment to creative crossovers between movement languages, innovative wearable design for interactive performance, acoustic and electronic sound processing, and digital image objects that have a plastic as well as an immaterial/virtual dimension. The work integrates various forms of making art in order to visualize things that are not in themselves visual, or which connect visual and kinaesthetic/tactile/auditory experiences. The ‘Moveable Worlds’ in this essay are also reflections of the narrative spaces, subtexts and auditory relationships in the mutating matrix of an installation-space that invites the audience to move around and follow its sensorial experiences, drawn near to the bodies of the dancers.

    Under construction – contemporary opera in the crossroads between new aesthetics, techniques, and technologies

    Despite its long history, opera as an art form is constantly evolving. Composers have never lost their fascination with it and keep exploring innovative aesthetics, techniques, and modes of expression. New technologies such as Virtual Reality (VR), Robotics, and Artificial Intelligence (AI) are steadily having an impact upon the world of opera. The evolving use of performance-based software such as Ableton Live and Max/MSP has created new and exciting compositional techniques that intertwine theatrical and musical performance. This paper presents initial work on the development of an opera using such technologies that is being composed by Kallionpää and Chamberlain. Furthermore, it presents two composition case studies by Kallionpää, “She” (2017) and the puppet opera “Croak” (2018), as well as their documentation in the world's first 360° 3D VR recordings with full spatial audio in third-order Ambisonics and the application of an unmixing paradigm for focusing on and isolating individual voices.
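
    A side note on the spatial-audio format mentioned above: a full-sphere (3D) Ambisonic stream of order N carries (N + 1)^2 spherical-harmonic channels, so the third-order recordings cited here are 16-channel. A minimal Python illustration of that relationship (the function name is ours, not from the paper):

        def ambisonic_channel_count(order: int) -> int:
            """Channels in a full-sphere (periphonic) Ambisonic stream.

            An order-N 3D Ambisonic signal carries (N + 1) ** 2
            spherical-harmonic channels.
            """
            return (order + 1) ** 2

        for order in range(1, 5):
            print(f"order {order}: {ambisonic_channel_count(order)} channels")
        # order 3 -> 16 channels: the third-order format cited in the abstract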

    Affective Medicine: a review of Affective Computing efforts in Medical Informatics

    Background: Affective computing (AC) is concerned with emotional interactions performed with and through computers. It is defined as “computing that relates to, arises from, or deliberately influences emotions”. AC enables investigation and understanding of the relation between human emotions and health, as well as the application of assistive and useful technologies in the medical domain. Objectives: 1) To review the general state of the art in AC and its applications in medicine, and 2) to establish synergies between the research communities of AC and medical informatics. Methods: Aspects related to the human affective state as a determinant of human health are discussed, coupled with an illustration of significant AC research and related literature output. Moreover, affective communication channels are described and their range of application fields is explored through illustrative examples. Results: The presented conferences, European research projects and research publications illustrate the recent increase of interest in the AC area by the medical community. Tele-home healthcare, ambient intelligence (AmI), ubiquitous monitoring, e-learning and virtual communities with emotionally expressive characters for elderly or impaired people are a few areas where the potential of AC has been realized and applications have emerged. Conclusions: A number of gaps can potentially be overcome through the synergy of AC and medical informatics. The application of AC technologies parallels the advancement of the existing state of the art and the introduction of new methods. The body of work and the projects reviewed in this paper point to an ambitious and optimistic synergetic future for the field of affective medicine.

    Aesthetic potential of human-computer interaction in performing arts

    Human-computer interaction (HCI) is a multidisciplinary area that studies the communication between users and computers. In this thesis, we examine if and how HCI, when incorporated into staged performances, can generate new possibilities for artistic expression on stage. We define and study four areas of technology-enhanced performance that were strongly influenced by HCI techniques: multimedia expression, body representation, body augmentation and interactive environments. We trace relevant artistic practices that contributed to the exploration of these topics and then present new forms of creative expression that emerged after the incorporation of HCI techniques. We present and discuss novel practices such as: the performer and the media as one responsive entity, real-time control of virtual characters, on-body projections, body augmentation through human-machine systems and interactive stage design. The thesis concludes by showing some concrete examples of these novel practices implemented in performance pieces. We present and discuss technology-augmented dance pieces developed during this master’s degree. We also present a software tool for aesthetic visualisation of movement data and discuss its application in video creation, staged performances and interactive installations.
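
    The abstract closes with a software tool for aesthetic visualisation of movement data. As a rough illustration of one common idiom in that space, the sketch below draws a motion trail whose segments fade with age; the synthetic trajectory and all names in it are our assumptions, since the thesis's actual tool and input format are not described here:

        # Motion-trail sketch: draw a 2D joint trajectory with older
        # segments rendered more transparent than recent ones.
        import numpy as np
        import matplotlib.pyplot as plt

        t = np.linspace(0, 4 * np.pi, 400)
        x = np.sin(t) + 0.3 * np.sin(3 * t)   # stand-in for a joint's x track
        y = np.cos(t) + 0.3 * np.cos(5 * t)   # stand-in for a joint's y track

        fig, ax = plt.subplots(figsize=(5, 5))
        for i in range(1, len(t)):
            age = i / len(t)                  # 0 = oldest, 1 = newest sample
            ax.plot(x[i - 1:i + 1], y[i - 1:i + 1],
                    color=(0.1, 0.4, 0.9, age))  # alpha grows toward "now"
        ax.set_axis_off()
        plt.show()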

    Spatial Motion Doodles: Sketching Animation in VR Using Hand Gestures and Laban Motion Analysis

    We present a method for easily drafting expressive character animation by playing with instrumented rigid objects. We parse the input 6D trajectories (position and orientation over time), called spatial motion doodles, into sequences of actions and convert them into detailed character animations using a dataset of parameterized motion clips which are automatically fitted to the doodles in terms of global trajectory and timing. Moreover, we capture the expressiveness of user manipulation by analyzing Laban effort qualities in the input spatial motion doodles and transferring them to the synthetic motions we generate. We validate the ease of use of our system and the expressiveness of the resulting animations through a series of user studies, demonstrating the value of our approach for interactive digital storytelling applications dedicated to children and non-expert users, as well as for providing fast drafting tools for animators.
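
    For orientation, Laban effort qualities (weight, time, space, flow) are often approximated from kinematic features of a trajectory. The sketch below computes three common heuristic proxies from a sampled 3D position track; these formulas are generic motion-analysis heuristics with our own naming, not the authors' exact method:

        import numpy as np

        def effort_proxies(positions: np.ndarray, fps: float) -> dict:
            """positions: (T, 3) array of a tracked point over time."""
            dt = 1.0 / fps
            vel = np.gradient(positions, dt, axis=0)
            acc = np.gradient(vel, dt, axis=0)
            speed = np.linalg.norm(vel, axis=1)

            path_len = speed.sum() * dt              # arc length travelled
            chord = np.linalg.norm(positions[-1] - positions[0])
            return {
                # Weight: strong movement has high kinetic-energy-like magnitude.
                "weight": float(np.mean(speed ** 2)),
                # Time: sudden movement shows large accelerations.
                "time": float(np.mean(np.linalg.norm(acc, axis=1))),
                # Space: direct paths have a chord/arc-length ratio near 1.
                "space": float(chord / max(path_len, 1e-9)),
            }

        # A synthetic straight-line motion should score as very "direct":
        traj = np.linspace([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 120)
        print(effort_proxies(traj, fps=60.0))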

    Acting rehearsal in collaborative multimodal mixed reality environments

    This paper presents the use of our multimodal mixed reality telecommunication system to support remote acting rehearsal. The rehearsals involved two actors, located in London and Barcelona, and a director in another location in London. This triadic audiovisual telecommunication was performed in a spatial and multimodal collaborative mixed reality environment based on the 'destination-visitor' paradigm, which we define and put into use. We detail our heterogeneous system architecture, which spans the three distributed and technologically asymmetric sites, and features a range of capture, display, and transmission technologies. The actors' and director's experiences of rehearsing a scene via the system are then discussed, exploring successes and failures of this heterogeneous form of telecollaboration. Overall, the common spatial frame of reference presented by the system to all parties was highly conducive to theatrical acting and directing, allowing blocking, gross gesture, and unambiguous instruction to be issued. The relative inexpressivity of the actors' embodiments was identified as the central limitation of the telecommunication, meaning that moments relying on performing and reacting to consequential facial expression and subtle gesture were less successful.

    How Geek Therapy Plays Into Expressive Arts Therapy: A Literature Review

    Within this paper, I explore how geek therapy plays well with the methods of expressive arts therapy. The combination of geek therapy and expressive arts therapy can assist clinicians in immediately connecting with their clients and identifying strength-oriented narratives that honor the client’s preferences, modes of expression, and pop culture affinities. This engagement with expressive approaches utilizing affinity-based interventions can lead to a deeper understanding of the client’s intra-, inter-, and extra-personal relationships. Through this literature review of expressive arts therapy and geek therapy, primarily focusing on video games in therapy, clinicians from all walks of life can explore these techniques with clients in multiple settings and within a variety of age groups. Video games are immersive, multimodal, and interactive digital experiences that can promote wellness by engaging a spectrum of cognitive processes, regulating emotional and physical states, exploring meaning, identity, and expression, and building interpersonal tools through in-person and/or virtual means. This paper explores how video games can impact bio-psycho-sociocultural-spiritual domains, as well as other potentially therapeutic characteristics of video gaming, whether through indirect/direct or active/passive experiences. Through an understanding of gamer motivations, this paper explores player taxonomy models and profilers that can assist in gathering assessment information. Lastly, ethical considerations and the potential for maladaptive behaviors are explored.

    An investigation of performer embodiment and performative interaction on an augmented stage

    This thesis investigates live performance on an augmented stage in front of an audience, where performers witness themselves as projection-mapped virtual characters able to interact with projected virtual scenography. An interactive virtual character is projected onto the body of a performer, its movements congruent with the performer's. Through visual feedback via a Head Mounted Display (HMD), the performer is virtually embodied in that they witness their virtualised body interacting with the virtual scenery and props of the augmented stage. The research is informed by a theoretical framework derived from theory on intermediality and performance, virtual embodiment and performative interaction. A literature review of theatrical productions and performances utilising projection identifies a research gap: providing the performer with a visual perspective of themselves in relation to the projected scenography. The visual perspective delivered via the HMD enables the performer to perform towards the audience and away from the interactive projected backdrop. The resultant ‘turn away’ from facing an interactive screen towards performing for an audience is encapsulated in the concept of the ‘Embodied Performative Turn’. The practice-based research found that changing the visual perspective presented to the performer affected performative interaction and virtual embodiment differently. A second-person or audience perspective, ‘performer-as-observed’, prioritises the perception of the virtual body and enhances performative behaviour but challenges effective performative interaction with the virtual scenography. Conversely, a first-person perspective, ‘performer-as-observer’, prioritises a worldview and enhances performative interaction but negatively impacts performative behaviour with the loss of performer-as-observed. The findings suggest that presenting differing perspectives to the performer can be used to selectively enhance performative interaction and performative behaviour on an augmented stage.

    Expressive movement generation with machine learning

    Movement is an essential aspect of our lives. Not only do we move to interact with our physical environment, but we also express ourselves and communicate with others through our movements. In an increasingly computerized world where various technologies and devices surround us, our movements are essential parts of our interaction with and consumption of computational devices and artifacts. In this context, incorporating an understanding of our movements within the design of the technologies surrounding us can significantly improve our daily experiences. This need has given rise to the field of movement computing – developing computational models of movement that can perceive, manipulate, and generate movements. In this thesis, we contribute to the field of movement computing by building machine-learning-based solutions for automatic movement generation. In particular, we focus on using machine learning techniques and motion capture data to create controllable, generative movement models. We also contribute to the field through the datasets, tools, and libraries that we have developed during our research. We start by reviewing work on building automatic movement generation systems using machine learning techniques and motion capture data. Our review covers background topics such as high-level movement characterization, training data, feature representation, machine learning models, and evaluation methods. Building on our literature review, we present WalkNet, an interactive agent walking movement controller based on neural networks. The expressivity of virtual, animated agents plays an essential role in their believability; WalkNet therefore integrates control over the expressive qualities of movement with the goal-oriented behaviour of an animated virtual agent. It allows us to control the generation in real time based on the valence and arousal levels of affect, the movement's walking direction, and the mover's movement signature. Following WalkNet, we look at controlling movement generation using more complex stimuli such as music represented by audio signals (i.e., non-symbolic music). Music-driven dance generation involves a highly non-linear mapping between temporally dense stimuli (i.e., the audio signal) and movements, which makes the movement modelling problem more challenging. To this end, we present GrooveNet, a real-time machine learning model for music-driven dance generation.
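
    As a rough sketch of the kind of controllable, conditioned generator the thesis describes, the snippet below defines an autoregressive network that maps the current pose plus a control vector (valence, arousal, walking direction) to the next pose. Every dimension, layer size, and name here is our assumption; the abstract does not specify WalkNet's architecture or training:

        import torch
        import torch.nn as nn

        POSE_DIM = 63   # e.g., 21 joints x 3D positions (assumed)
        CTRL_DIM = 3    # valence, arousal, walking direction (from the abstract)

        class ControllablePoseNet(nn.Module):
            def __init__(self, hidden: int = 256):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(POSE_DIM + CTRL_DIM, hidden),
                    nn.ReLU(),
                    nn.Linear(hidden, hidden),
                    nn.ReLU(),
                    nn.Linear(hidden, POSE_DIM),  # predicted next-frame pose
                )

            def forward(self, pose: torch.Tensor, ctrl: torch.Tensor) -> torch.Tensor:
                return self.net(torch.cat([pose, ctrl], dim=-1))

        # Roll out a short motion by feeding predictions back in while holding
        # the control vector fixed; training (e.g., MSE to mocap) is omitted.
        model = ControllablePoseNet()
        pose = torch.zeros(1, POSE_DIM)
        ctrl = torch.tensor([[0.8, 0.2, 0.0]])  # high valence, low arousal, heading 0
        frames = []
        for _ in range(30):
            pose = model(pose, ctrl)
            frames.append(pose.detach())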