
    Full Body Acting Rehearsal in a Networked Virtual Environment - A Case Study

    In order to rehearse for a play or a scene from a movie, it is generally required that the actors are physically present at the same time in the same place. In this paper we present an example and experience of a full body motion shared virtual environment (SVE) for rehearsal. The system allows actors and directors to meet in an SVE in order to rehearse scenes for a play or a movie, that is, to perform some dialogue and blocking (positions, movements, and displacements of actors in the scene) rehearsal through a full body interactive virtual reality (VR) system. The system combines immersive VR rendering techniques and network capabilities with full body tracking. Two actors and a director rehearsed from separate locations. One actor and the director were in London (located in separate rooms) while the second actor was in Barcelona. The Barcelona actor used a wide field-of-view head-tracked head-mounted display, and wore a body suit for real-time motion capture and display. The London actor was in a Cave system, with head and partial body tracking. Each actor was presented to the other as an avatar in the shared virtual environment, and the director could see the whole scenario on a desktop display and intervene by voice commands. The director was also represented by a video stream shown in a window within the virtual environment. The London participant was a professional actor, who afterward commented on the utility of the system for acting rehearsal. It was concluded that full body tracking and corresponding real-time display of all the actors' movements would be a critical requirement, and that blocking was possible down to the level of detail of gestures. Details of the implementation and of the actors' and director's experiences are provided.
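    The paper does not publish its networking code, but the core requirement it identifies, sharing full body tracking data between sites in real time, can be illustrated with a minimal sketch. Everything below (the simplified joint list, packet layout, addresses, and the get_tracked_pose callback) is a hypothetical assumption, not the authors' implementation.

```python
# Hypothetical sketch: broadcasting a locally tracked body pose to remote rehearsal sites.
# Joint names, packet layout, and addresses are illustrative, not from the paper.
import socket
import struct
import time

JOINTS = ["head", "spine", "l_hand", "r_hand", "l_foot", "r_foot"]  # simplified skeleton
REMOTE_SITES = [("192.0.2.10", 9000), ("192.0.2.11", 9000)]         # e.g. London, Barcelona

def pack_pose(actor_id: int, pose: dict) -> bytes:
    """Pack one frame of joint rotations (quaternions) into a compact UDP payload."""
    payload = struct.pack("!Id", actor_id, time.time())
    for joint in JOINTS:
        qx, qy, qz, qw = pose[joint]
        payload += struct.pack("!4f", qx, qy, qz, qw)
    return payload

def stream_pose(get_tracked_pose, actor_id=0, rate_hz=60):
    """Send the local actor's tracked pose to every remote site at a fixed rate."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    frame_time = 1.0 / rate_hz
    while True:
        packet = pack_pose(actor_id, get_tracked_pose())  # get_tracked_pose: capture callback
        for site in REMOTE_SITES:
            sock.sendto(packet, site)
        time.sleep(frame_time)
```

    In practice the remote end would interpolate between received frames to smooth over network jitter before driving the avatar.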

    Repurpose 2D Character Animations for a VR Environment Using BDH Shape Interpolation.

    Virtual Reality technology has spread rapidly in recent years. However, its growth risks ending soon due to the absence of quality content, with only a few exceptions. We present an original framework that allows artists to use 2D characters and animations in a 3D Virtual Reality environment, in order to give easier access to the production of content for the platform. On traditional platforms, 2D animation represents a more economical and immediate alternative to 3D. The challenge in adapting 2D characters to a 3D environment is to interpret the missing depth information. A 2D character is flat: there is no depth information, and every body part lies at the same level as the others. We exploit mesh interpolation, billboarding, and parallax scrolling to simulate the depth between each body segment of the character. We have developed a prototype of the system, and extensive tests with a 2D animation production show the effectiveness of our framework.
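    The abstract names billboarding and parallax scrolling as the mechanisms used to fake depth between flat body segments. The sketch below illustrates that general idea only; it is not the paper's BDH interpolation, and the segment depths, camera model, and scale factors are invented for illustration.

```python
# Minimal sketch of layered billboarding with per-segment parallax on a flat character.
# Segment depths, camera model, and scale factors are illustrative assumptions.
import numpy as np

# Each 2D body segment gets a small assumed depth offset from the character's plane.
SEGMENT_DEPTH = {"torso": 0.00, "head": 0.02, "near_arm": 0.05, "far_arm": -0.05}

def parallax_offset(camera_pos, character_pos, segment_depth, strength=1.0):
    """Shift a segment in the character's plane proportionally to its assumed depth
    and to the camera's lateral offset, creating a parallax cue on a flat character."""
    lateral = np.array(camera_pos[:2]) - np.array(character_pos[:2])  # x, y offset
    return strength * segment_depth * lateral

def billboard_rotation(camera_pos, character_pos):
    """Yaw angle (radians) that keeps the flat character facing the camera."""
    to_cam = np.array(camera_pos) - np.array(character_pos)
    return np.arctan2(to_cam[0], to_cam[2])

# Example: per-frame placement of each segment quad.
camera, character = (0.4, 1.6, 2.0), (0.0, 0.0, 0.0)
yaw = billboard_rotation(camera, character)
offsets = {seg: parallax_offset(camera, character, d) for seg, d in SEGMENT_DEPTH.items()}
```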

    Acting rehearsal in collaborative multimodal mixed reality environments

    This paper presents the use of our multimodal mixed reality telecommunication system to support remote acting rehearsal. The rehearsals involved two actors, located in London and Barcelona, and a director in another location in London. This triadic audiovisual telecommunication was performed in a spatial and multimodal collaborative mixed reality environment based on the 'destination-visitor' paradigm, which we define and put into use. We detail our heterogeneous system architecture, which spans the three distributed and technologically asymmetric sites, and features a range of capture, display, and transmission technologies. The actors' and director's experiences of rehearsing a scene via the system are then discussed, exploring the successes and failures of this heterogeneous form of telecollaboration. Overall, the common spatial frame of reference presented by the system to all parties was highly conducive to theatrical acting and directing, allowing blocking and gross gestures to be performed and unambiguous instructions to be issued. The relative inexpressivity of the actors' embodiments was identified as the central limitation of the telecommunication, meaning that moments relying on performing and reacting to consequential facial expression and subtle gesture were less successful.

    Real-Time Global Illumination for VR Applications


    With you – an experimental end-to-end telepresence system using video-based reconstruction

    We introduce withyou, our telepresence research platform. A systematic explanation of the theory brings together the linked nature of non-verbal communication and how it is influenced by technology. This leads to functional requirements for telepresence, in terms of the balance of visual, spatial, and temporal qualities. The first end-to-end description of withyou covers all major processes and the display and capture environment, including two approaches to reconstructing the human form in 3D from live video. An unprecedented characterization of our approach is given in terms of the above qualities and of how the choice of approach influences them. This leads to non-functional requirements in terms of the number and placement of cameras and the avoidance of a resulting bottleneck. Proposals are given for improved distribution of processes across networks, computers, and multi-core CPUs and GPUs. Simple conservative estimation shows that both approaches should meet our requirements. One is implemented and shown to meet the minimum requirements and come close to the desirable ones.
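    The abstract refers to a simple conservative estimation of whether video-based reconstruction fits within camera, network, and compute budgets. A back-of-the-envelope sketch of that style of estimate follows; the camera count, resolution, frame rate, compression ratio, and link capacity are invented for illustration and are not figures from the paper.

```python
# Back-of-the-envelope bandwidth estimate for streaming multiple camera feeds
# to a reconstruction process. All numbers are illustrative assumptions.
def camera_stream_bandwidth_mbps(num_cameras, width, height, fps,
                                 bits_per_pixel=12, compression_ratio=20):
    """Aggregate bandwidth (Mbit/s) needed to ship all camera feeds,
    assuming simple intra-frame compression."""
    raw_bits_per_sec = num_cameras * width * height * bits_per_pixel * fps
    return raw_bits_per_sec / compression_ratio / 1e6

# Example: 8 cameras at 1280x720, 30 fps, over a gigabit LAN.
needed = camera_stream_bandwidth_mbps(8, 1280, 720, 30)
link_capacity_mbps = 1000
print(f"required ~{needed:.0f} Mbit/s of {link_capacity_mbps} Mbit/s available")
```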

    Bystander responses to a violent incident in an immersive virtual environment

    Under what conditions will a bystander intervene to try to stop a violent attack by one person on another? It is generally believed that the greater the size of the crowd of bystanders, the less the chance that any of them will intervene. A complementary model is that social identity is critical as an explanatory variable. For example, when the bystander shares a common social identity with the victim, the probability of intervention is enhanced, other things being equal. However, it is generally not possible to study such hypotheses experimentally for practical and ethical reasons. Here we show that an experiment that depicts a violent incident at life-size in immersive virtual reality lends support to the social identity explanation. Forty male supporters of Arsenal Football Club in England were recruited for a two-factor between-groups experiment: the victim was either an Arsenal supporter or not (in-group/out-group), and either looked towards the participant for help or not during the confrontation. The response variables were the numbers of verbal and physical interventions by the participant during the violent argument. The number of physical interventions had a significantly greater mean in the in-group condition compared to the out-group. The more that participants perceived that the victim was looking to them for help, the greater the number of interventions in the in-group but not in the out-group. These results are supported by standard statistical analysis of variance, with more detailed findings obtained by a symbolic regression procedure based on genetic programming. Verbal interventions made during the experience, together with analysis of post-experiment interview data, suggest that in-group members were more prone to confrontational intervention, whereas out-group members were more prone to make statements intended to defuse the situation.
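    The primary analysis described is a standard two-factor between-groups analysis of variance over intervention counts. A minimal sketch of that kind of analysis is shown below; the column names and CSV file are hypothetical, since the study's data and code are not included here.

```python
# Minimal sketch of a two-factor between-groups ANOVA like the one described.
# Column names and the CSV file are hypothetical; the study's data are not public.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Expected columns: group ("in" / "out"), looking ("yes" / "no"),
# physical_interventions (count per participant).
data = pd.read_csv("bystander_responses.csv")

# Two-way ANOVA: effects of group membership, victim gaze, and their interaction
# on the number of physical interventions.
model = ols("physical_interventions ~ C(group) * C(looking)", data=data).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)
```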

    Networks for Future Services in a Smart City: Lessons Learned from the Connected OFCity Challenge 2017

    The drive toward ubiquitous communications has long been encompassed by the concept of a connected or smart city. The idea that data transfer and real-time data analysis can enhance the quality of life for urban inhabitants is compelling, and one can easily envision the provision of exciting new services and applications that such an information-driven city could provide. The challenge in achieving a truly smart city stems largely from communications technologies (fixed line, wireless, backhaul, and fronthaul) and how these are combined to provide fast, reliable, and secure communications coverage. Here, we report on the key observations from the Connected OFCity Challenge competition, held at OFC 2017, which addressed the fixed and wireless access network requirements for smart cities. It is shown that, from a technological perspective, future optical networks will be capable of securely supporting extremely low-latency and high-bandwidth applications. However, as shown by using Networked Music Performance as a particularly challenging example application, how readily this is achieved will depend on the interplay between wired and wireless access services.
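    Networked Music Performance is latency-bound: ensemble playing is commonly said to become difficult once one-way delay exceeds roughly 25-30 ms, which is why the interplay of wireless access, optical access, and transport matters. The budget sketch below illustrates that reasoning; every component value is an illustrative assumption, not a measurement from the competition.

```python
# Rough one-way latency budget for a Networked Music Performance session.
# Every component value is an illustrative assumption.
LATENCY_BUDGET_MS = 25.0  # commonly cited threshold for comfortable ensemble playing

components_ms = {
    "audio capture + ADC buffering": 3.0,
    "encoding (low-latency codec)": 2.0,
    "wireless access (uplink scheduling)": 5.0,
    "optical access + metro transport": 2.0,
    "propagation (~200 km of fibre at ~5 us/km)": 1.0,
    "core routing / switching": 2.0,
    "jitter buffer + decoding": 5.0,
    "playback buffering + DAC": 3.0,
}

total = sum(components_ms.values())
headroom = LATENCY_BUDGET_MS - total
print(f"estimated one-way latency: {total:.1f} ms "
      f"({'within' if headroom >= 0 else 'over'} a {LATENCY_BUDGET_MS:.0f} ms budget, "
      f"headroom {headroom:+.1f} ms)")
```

    The point of such an estimate is that the access segments (wireless scheduling and buffering) dominate the budget long before fibre propagation does.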

    The body fades away: investigating the effects of transparency of an embodied virtual body on pain threshold and body ownership

    The feeling of “ownership” over an external dummy/virtual body (or body part) has been proven to have both physiological and behavioural consequences. For instance, the vision of an “embodied” dummy or virtual body can modulate pain perception. However, the impact of partial or total invisibility of the body on physiology and behaviour has hardly been explored, since it presents obvious difficulties in the real world. In this study we explored how body transparency affects both body ownership and pain threshold. By means of virtual reality, we presented healthy participants with a virtual co-located body with four different levels of transparency, while participants were tested for pain threshold by increasing ramps of heat stimulation. We found that the strength of the body ownership illusion decreases when the body gets more transparent. Nevertheless, in the conditions where the body was semi-transparent, higher levels of ownership over a see-through body resulted in an increased pain sensitivity. Virtual body ownership can be used for the development of pain management interventions. However, we demonstrate that providing invisibility of the body does not increase pain threshold. Therefore, body transparency is not a good strategy to decrease pain in clinical contexts, yet this remains to be tested.