7 research outputs found

    Virtual Viewpoint Replay for a Soccer Match by View Interpolation From Multiple Cameras

    Virtual View Image over Wireless Visual Sensor Network

    Wide-baseline object interpolation using shape prior regularization of epipolar plane images

    This paper considers the synthesis of intermediate views of an object captured by two calibrated and widely spaced cameras. Based only on those two very different views, we propose to reconstruct the object's Epipolar Plane Image Volume (EPIV) [1], which describes how the object transforms as the viewpoint of the synthetic view moves continuously between the two reference cameras. The problem is clearly ill-posed, since occlusions and the foreshortening effect make the reference views significantly different when the cameras are far apart. Our main contribution is to disambiguate this ill-posed problem by constraining the interpolated views to be consistent with an object shape prior. This prior is learnt from images captured by the two reference views and consists of a nonlinear shape manifold representing the plausible silhouettes of the object, described by Elliptic Fourier Descriptors. Experiments on both synthetic and natural images show that the proposed method preserves the topological structure of objects during intermediate view synthesis, while dealing effectively with self-occluded regions and with the severe foreshortening associated with wide-baseline camera configurations.
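    The shape prior above encodes silhouettes with Elliptic Fourier Descriptors. As an illustrative sketch only (not the authors' implementation, whose manifold learning is not reproduced here), the standard Kuhl–Giardina coefficients of a closed 2-D contour can be computed as follows:

    ```python
    import numpy as np

    def elliptic_fourier_descriptors(contour, order=10):
        """Kuhl-Giardina elliptic Fourier coefficients of a closed 2-D contour.

        contour: (K, 2) array of (x, y) points along the silhouette.
        order:   number of harmonics to keep.
        Returns an (order, 4) array of (a_n, b_n, c_n, d_n) coefficients.
        """
        # Close the polygon and measure chord lengths (arc-length parameter).
        d = np.diff(np.vstack([contour, contour[:1]]), axis=0)
        dt = np.sqrt((d ** 2).sum(axis=1))
        t = np.concatenate([[0.0], np.cumsum(dt)])
        T = t[-1]
        phi = 2 * np.pi * t / T

        coeffs = np.zeros((order, 4))
        for n in range(1, order + 1):
            c = T / (2 * n ** 2 * np.pi ** 2)
            dcos = np.cos(n * phi[1:]) - np.cos(n * phi[:-1])
            dsin = np.sin(n * phi[1:]) - np.sin(n * phi[:-1])
            coeffs[n - 1] = c * np.array([
                (d[:, 0] / dt * dcos).sum(),  # a_n (x, cosine term)
                (d[:, 0] / dt * dsin).sum(),  # b_n (x, sine term)
                (d[:, 1] / dt * dcos).sum(),  # c_n (y, cosine term)
                (d[:, 1] / dt * dsin).sum(),  # d_n (y, sine term)
            ])
        return coeffs
    ```

    Truncating at a low order yields a compact, smooth silhouette representation, which is what makes such descriptors a convenient coordinate system for a shape manifold. For a unit circle the first harmonic dominates (a_1 and d_1 near 1) and higher harmonics are near zero.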

    An Advanced A-V Player to Support Scalable Personalised Interaction with Multi-Stream Video Content

    PhD thesis. Current Audio-Video (A-V) players are limited to pausing, resuming, selecting and viewing a single video stream of a live broadcast event that is orchestrated by a professional director. The main objective of this research is to investigate how to create a new custom-built interactive A-V player that enables viewers to personalise their own orchestrated views of live events from multiple simultaneous camera streams: interacting with tracked moving objects, zooming in and out of targeted objects, and switching views based upon incidents detected in specific camera views. This involves the research and development of a personalisation framework that creates and maintains user profiles, acquired both implicitly and explicitly, and modelling how this framework supports an evaluation of the effectiveness and usability of personalisation. Personalisation is considered from both an application-oriented and a quality-supervision-oriented perspective within the proposed framework. Personalisation models can be individually or collaboratively linked with specific personalisation usage scenarios. The quality of different personalised interactions, in terms of explicit evaluative metrics such as scalability and consistency, can be monitored and measured using specific evaluation mechanisms.
    This work was supported by the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement No. ICT-215248 and by Queen Mary University of London.