2 research outputs found

    Towards achieving convincing live interaction in a mixed reality environment for television studios

    The virtual studio is a form of Mixed Reality environment for creating television programmes, in which the (real) actor appears to exist within an entirely virtual set. The work presented in this thesis evaluates the routes required to develop a virtual studio that extends current architectures by allowing realistic, real-time interaction between the actor and the virtual set. The methodologies and framework presented in this thesis are intended to support future work in this domain. Heuristic investigation is offered as a framework to analyse and provide the requirements for developing interaction within a virtual studio. In this framework, a group of experts participate in case-study scenarios to generate a list of requirements that guide future development of the technology. It is also concluded that this method could be used cyclically to further refine systems post-development. This leads to development in three key areas. Firstly, a feedback system is presented that tracks actor head motion within the studio and provides dynamic visual feedback relative to the actor's current gaze location. Secondly, a real-time actor/virtual-set occlusion system is developed that uses skeletal tracking data and depth information to dynamically change the relative location of virtual set elements. Finally, an interaction system is presented that facilitates real-time interaction between an actor and virtual set objects, providing both single-handed and bimanual interactions. Evaluation of this system highlights some common errors in mixed reality interaction, notably those arising from inaccurate hand placement when actors perform bimanual interactions. A novel two-stage framework is presented that measures the magnitude of the errors in actor hand placement as well as the perceived fidelity of the interaction for a third-person viewer. The first stage of this framework quantifies the actor motion errors while completing a series of interaction tasks under varying controls. The second stage uses examples of these errors to measure the perceptual tolerance of a third-person viewer watching interaction errors in the end broadcast. The results of this two-stage evaluation lead to three methods for mitigating the actor errors, each evaluated against its ability to improve the visual fidelity of the interaction. It was found that adapting the size of the virtual object was effective in improving the quality of the interaction, whereas adapting the colour of any exposed background had no apparent effect. Finally, a set of guidelines based on these findings is provided, recommending solutions for enabling interaction within live virtual studio environments that can easily be adapted to other mixed reality systems.
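
    The occlusion system described above rests on deciding, per pixel, whether the actor or the rendered set element is closer to the camera. The following is a minimal sketch of that decision, assuming aligned per-pixel depth maps from the studio's depth sensor and from the virtual-set renderer; the function and parameter names are hypothetical and are not taken from the thesis.

    import numpy as np

    def composite_with_occlusion(actor_rgb, actor_depth, virtual_rgb, virtual_depth, actor_mask):
        # The actor should hide a virtual element only where the sensed actor
        # depth is closer to the camera than the rendered virtual depth.
        actor_in_front = actor_mask & (actor_depth < virtual_depth)
        # Start from the rendered virtual set and paste the actor over the
        # pixels where the actor wins the depth test.
        frame = virtual_rgb.copy()
        frame[actor_in_front] = actor_rgb[actor_in_front]
        return frame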

    Measurements of live actor motion in mixed reality interaction

    This paper presents a method for measuring the magnitude and impact of errors in mixed reality interactions. We define the errors as measurements of hand placement accuracy and consistency during bimanual movement of an interactive virtual object. First, a study is presented which illustrates the amount of variability between the hands and the mean distance of the hands from the surfaces of a common virtual object. The results allow a discussion of the most significant factors to consider when developing realistic mixed reality interaction systems. The degree of error was found to be independent of interaction speed, whilst the size of the virtual object and the position of the hands are significant. Second, a further study illustrates how perceptible these errors are to a third-person viewer of the interaction (e.g. an audience member). We found that interaction errors arising from overestimation of an object's surface affected the visual credibility for the viewer considerably more than underestimation of the object. This work is presented within the application of a real-time Interactive Virtual Television Studio, which offers convincing real-time interaction for live TV production. We believe the results and methodology presented here could also be applied to designing, implementing and assessing interaction quality in many other Mixed Reality applications. © 2014 IEEE
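
    As a rough illustration of the error measure described above, the sketch below computes the signed distance of each tracked hand from the surface of a virtual object, simplified here to a spherical proxy: a positive value places the hand outside the surface (overestimating the object), a negative value places it inside (underestimating it), and the difference between the two hands gives a simple consistency measure for a bimanual grasp. The proxy shape, names and sign convention are assumptions for illustration, not details from the paper.

    import numpy as np

    def hand_surface_error(hand_pos, obj_centre, obj_radius):
        # Signed distance from a tracked hand position to the surface of a
        # spherical proxy object (positive = outside, negative = inside).
        return float(np.linalg.norm(np.asarray(hand_pos) - np.asarray(obj_centre)) - obj_radius)

    def bimanual_errors(left_pos, right_pos, obj_centre, obj_radius):
        # Per-hand surface errors plus the absolute difference between them,
        # used here as a simple measure of consistency between the two hands.
        e_left = hand_surface_error(left_pos, obj_centre, obj_radius)
        e_right = hand_surface_error(right_pos, obj_centre, obj_radius)
        return e_left, e_right, abs(e_left - e_right)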