64 research outputs found

    User interface for a better eye contact in videoconferencing


    Situated Displays in Telecommunication

    In face-to-face conversation, numerous cues of attention, eye contact, and gaze direction provide important channels of information. These cues support turn taking, establish a sense of engagement, and indicate the focus of conversation. However, some subtleties of gaze are lost in common videoconferencing systems, because the single perspective view of the camera does not preserve the spatial characteristics of the face-to-face situation. In particular, in group conferencing, the 'Mona Lisa effect' makes all observers feel that they are being looked at when the remote participant looks at the camera. In this thesis, we present designs and evaluations of four novel situated teleconferencing systems, which aim to improve the teleconferencing experience. Firstly, we demonstrate the effectiveness of a spherical video telepresence system in allowing a single observer, from multiple viewpoints, to accurately judge where the remote user is placing their gaze. Secondly, we demonstrate the gaze-preserving capability of a cylindrical video telepresence system, this time for multiple observers at multiple viewpoints. Thirdly, we demonstrate that a random hole autostereoscopic multiview telepresence system further improves the conveyance of gaze by adding stereoscopic cues. Lastly, we investigate the influence of display type and viewing angle on how people place their trust during avatar-mediated interaction. The results show that the spherical avatar telepresence system can be viewed qualitatively similarly from all angles, and demonstrate how trust can be altered depending on how one views the avatar. Together these demonstrations motivate the further study of novel display configurations and suggest parameters for the design of future teleconferencing systems.
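
    To make the 'Mona Lisa effect' concrete, the short Python sketch below (ours, not from the thesis) contrasts an ordinary flat display, where every observer sees the same single-camera image and so perceives the same gaze, with an idealised situated display, where each observer sees the remote head from their own angle. All angles and numbers are illustrative assumptions.

# Illustrative sketch: why a single-camera flat display produces the
# "Mona Lisa effect" while an idealised situated display does not.
# All values are assumptions chosen for illustration.

observer_angles_deg = [-40, 0, 40]   # observers seated around the display
true_gaze_angle_deg = 40             # remote user actually looks at the +40 deg observer

for obs in observer_angles_deg:
    # Flat display: the single camera view is identical for every observer.
    # If the remote user looks into the camera, each observer perceives the
    # gaze as aimed directly at themselves (zero perceived offset).
    flat_offset = 0.0

    # Situated display: each observer sees the head from their own direction,
    # so the perceived offset is the real angular difference between the
    # observer's position and the remote user's gaze target.
    situated_offset = true_gaze_angle_deg - obs

    print(f"observer at {obs:+4d} deg: flat = {flat_offset:+5.1f} deg, "
          f"situated = {situated_offset:+5.1f} deg")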

    Contribution To Signalling Of 3d Video Streams In Communication Systems Using The Session Initiation Protocol

    3D video technologies have been on the rise in recent years, with abundant research advances coupled with widespread adoption by the film industry and a growing importance in consumer electronics. Related to this is the concept of multiview video, which encompasses 3D video and can be defined as a video stream composed of two or more views. Multiview video enables advanced video features such as stereoscopic video, free viewpoint video, improved eye contact through virtual views, or shared virtual environments. The purpose of this thesis is to overcome a considerable obstacle to the use of multiview video in communication systems: the lack of support for this technology in existing signalling protocols, which makes it impossible to set up a session with multiview video through standard mechanisms. Our main objective is therefore the extension of the Session Initiation Protocol (SIP) to support the negotiation of multimedia sessions with multiview video streams. Our work can be summarised in three main contributions. First, we have defined a signalling extension to set up SIP sessions with 3D video. This extension modifies the Session Description Protocol (SDP) to introduce a new media-level attribute and a new type of decoding dependency, which together describe the 3D video formats that can be used in a session, as well as the relationship between the video streams that compose a 3D video stream. The second contribution is an extension to SIP to handle the signalling of videoconferences with multiview video streams. Two new SIP event packages are defined to describe, on the one hand, the capabilities and topology of the conference terminals and, on the other, the spatial configuration and stream mapping of a conference. A mechanism is also described to integrate the exchange of this information into the set-up process of a SIP conference. As the third and final contribution, we introduce the concept of the virtual space of a conference: a coordinate system containing all the relevant objects of the conference (such as capture devices, displays, and users). We explain how the virtual space relates to conference features such as eye contact, video scale and spatial faithfulness, and we provide rules for determining the features of a conference from the analysis of its virtual space, and for generating virtual spaces during conference set-up.
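
    As a rough illustration of the kind of session description such an extension implies, the Python sketch below assembles a hypothetical SDP offer for a two-view stereoscopic session. The attribute name '3dFormat' and the dependency type '3dv' are placeholders invented here for illustration; the abstract does not give the exact syntax defined in the thesis.

# Hypothetical sketch of an SDP body for a 3D video session: two RTP video
# streams (left and right view) that together form one stereoscopic stream.
# "3dFormat" and the dependency type "3dv" are invented placeholders, not
# the actual syntax defined in the thesis.

def make_3d_video_offer(base_port: int = 49170) -> str:
    lines = [
        "v=0",
        "o=alice 2890844526 2890844526 IN IP4 host.example.com",
        "s=3D video call (illustrative)",
        "c=IN IP4 host.example.com",
        "t=0 0",
        # Left view: an ordinary video media description.
        f"m=video {base_port} RTP/AVP 96",
        "a=rtpmap:96 H264/90000",
        "a=mid:L",
        # Right view: depends on the left view to form the 3D stream.
        f"m=video {base_port + 2} RTP/AVP 97",
        "a=rtpmap:97 H264/90000",
        "a=mid:R",
        # Hypothetical media-level attribute describing the 3D format, and a
        # hypothetical decoding dependency of stream R (PT 97) on stream L.
        "a=3dFormat:left-right",
        "a=depend:97 3dv L:96",
    ]
    return "\r\n".join(lines) + "\r\n"


if __name__ == "__main__":
    print(make_3d_video_offer())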

    An Efficient Image-Based Telepresence System for Videoconferencing


    Videos in Context for Telecommunication and Spatial Browsing

    The research presented in this thesis explores the use of videos embedded in panoramic imagery to transmit spatial and temporal information describing remote environments and their dynamics. Virtual environments (VEs) through which users can explore remote locations are rapidly emerging as a popular medium for presence and remote collaboration. However, capturing visual representations of locations to be used in VEs is usually a tedious process that requires either manual modelling of environments or the employment of specific hardware. Capturing environment dynamics is not straightforward either, and is usually performed through specific tracking hardware. Similarly, browsing large unstructured video collections with available tools is difficult, as the abundance of spatial and temporal information makes them hard to comprehend. On a spectrum between 3D VEs and 2D images, panoramas lie in between: they offer the accessibility of 2D images while preserving the surrounding representation of 3D virtual environments. For this reason, panoramas are an attractive basis for videoconferencing and browsing tools, as they can relate several videos temporally and spatially. This research explores methods to acquire, fuse, render and stream data coming from heterogeneous cameras with the help of panoramic imagery. Three distinct but interrelated questions are addressed. First, the thesis considers how spatially localised video can be used to increase the spatial information transmitted during video-mediated communication, and whether this improves the quality of communication. Second, the research asks whether videos in panoramic context can be used to convey the spatial and temporal information of a remote place and the dynamics within it, and whether this improves users' performance in tasks that require spatio-temporal thinking. Finally, the thesis considers whether display type has an impact on reasoning about events within videos in panoramic context. These research questions were investigated over three experiments, covering scenarios common to computer-supported cooperative work and video browsing. To support the investigation, two distinct video+context systems were developed. The first, telecommunication, experiment compared our videos-in-context interface with fully panoramic video and conventional webcam video conferencing in an object placement scenario. The second experiment investigated the impact of videos in panoramic context on the quality of spatio-temporal thinking during localisation tasks. To support the experiment, a novel interface to video collections in panoramic context was developed and compared with common video-browsing tools. The final experimental study investigated the impact of display type on reasoning about events. The study explored three adaptations of our video-collection interface to three display types. The overall conclusion is that videos in panoramic context offer a valid solution for spatio-temporal exploration of remote locations. Our approach presents a richer visual representation in terms of space and time than standard tools, showing that providing panoramic contexts to video collections makes spatio-temporal tasks easier. To this end, videos in context are a suitable alternative to more difficult, and often more expensive, solutions. These findings are beneficial to many applications, including teleconferencing, virtual tourism and remote assistance.
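
    As a concrete illustration of what it takes to place a video spatially within a panorama, the following is a minimal Python sketch (not from the thesis) mapping a camera's viewing direction to pixel coordinates in an equirectangular panorama, a projection commonly used for panoramic imagery. The panorama size and camera orientation are illustrative assumptions.

def direction_to_equirect(yaw_deg: float, pitch_deg: float,
                          pano_width: int, pano_height: int) -> tuple[int, int]:
    """Map a viewing direction (yaw, pitch in degrees) to pixel coordinates
    in an equirectangular panorama: yaw 0 is the centre column, pitch 0 the
    horizon row."""
    u = (yaw_deg + 180.0) / 360.0    # 0..1 across the full horizontal field
    v = (90.0 - pitch_deg) / 180.0   # 0..1 from zenith to nadir
    return int(u * (pano_width - 1)), int(v * (pano_height - 1))


# Illustrative use: anchor a webcam video whose optical axis points 30 degrees
# right and 5 degrees below the panorama centre, in a 4096 x 2048 panorama.
x, y = direction_to_equirect(yaw_deg=30.0, pitch_deg=-5.0,
                             pano_width=4096, pano_height=2048)
print(f"video frame anchored at panorama pixel ({x}, {y})")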

    Telethrone: a situated display using retro-reflection based multi-view toward remote collaboration in small dynamic groups

    This research identifies a gap in tele-communication technology. Several novel technology demonstrators are tested experimentally throughout the research. The final system presented allows a remote participant in a conversation to unambiguously address individual members of a group of 5 people using non-verbal cues. The capability to link less formal groups through technology is the primary contribution. Technology-mediated communication is first reviewed, with attention to the different styles of meeting that are supported. A gap is identified for small informal groups. Small dynamic groups which are convened on demand for the solution of specific problems may be called "ad-hoc". In these meetings it is possible to 'pull up a chair'. This is poorly supported by current tele-communication tools; that is, it is difficult for one or more members to join such a meeting from a remote location. It is also difficult for physically co-located parties to reorient themselves in the meeting as goals evolve. As the major contribution toward addressing this, the 'Telethrone' is introduced. Telethrone projects a remote user onto a chair, bringing them into your space. The chair seems to act as a situated display which can support multi-party head gaze, eye gaze, and body torque; each observer knows where the projected user is looking. It is simpler to implement and cheaper than current comparable systems. The underpinning approach is technology and systems development, with regard to HCI and psychology throughout. Prototypes, refinements, and novel engineered systems are presented. Two experiments to test these systems are peer-reviewed, and further design and experimentation were undertaken based on the positive results. The final paper is pending. An initial version of the new technology approach combined retro-reflective material with aligned pairs of cameras and projectors, connected by IP video. A counterbalanced repeated-measures experiment to analyse gaze interactions was undertaken. Results suggest that the remote user is not excluded from triadic poker game-play. Analysis of the multi-view aspect of the system was inconclusive as to whether it shows an advantage over a set-up which does not support multi-view. User impressions from the questionnaires suggest that the current implementation still gives the impression of being a display despite its situated nature, although participants did feel the remote user was in the space with them. A refinement of the system, using models generated by visual hull reconstruction, can better connect eye gaze. An exploration is made of its ability to allow chairs to be moved around the meeting, and of what this might enable for the participants. The ability to move furniture was earlier identified as an aid to natural interaction, but may also affect highly correlated subgroups in an ad-hoc meeting; this is unsupported by current technologies. Repositioning of several onlooking chairs seems to support 'fault lines'. Performance constraints of the current system are explored. An experiment tests whether it is possible to judge remote participant eye gaze as the viewer changes location, attempting to address concerns raised by the first experiment, in which the physical offsets of the IP camera lenses from the projected eyes of the remote participants (in both directions) may have influenced the perception of attention. A third experiment shows that five participants viewing a remote recording, presented through the Telethrone, can judge the attention of the remote participant accurately when the viewpoint is correctly rendered for their location in the room. This is compared to a control in which spatial discrimination is impossible. A figure for how many optically separate retro-reflected segments can be supported is obtained through spatial analysis and testing. It is possible to render the optical maximum of 5 independent viewpoints, supporting an 'ideal' meeting of 6 people. The tested system uses one computer at the meeting side of the exchange, making it potentially deployable from a small flight case. The thesis presents and tests the utility of elements toward a system, and finds that remote users are in the conversation, spatially segmented with a view for each onlooker; that eye gaze can be reconnected through the system using 3D video; and that performance supports scalability up to the theoretical maximum for the material and an ideal meeting size.
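
    The spatial analysis behind the viewpoint count can be sketched roughly: retro-reflective fabric returns each projector's light in a narrow cone around that projector, so the number of optically separate viewpoints is bounded by how many such cones fit without overlapping across the arc the onlookers occupy. The spread angle and seating arc in the Python sketch below are illustrative assumptions, not the measured values from the thesis.

def max_independent_viewpoints(arc_deg: float,
                               spread_half_angle_deg: float) -> int:
    """Upper bound on optically separate retro-reflected viewpoints: each
    projector/camera pair occupies a cone roughly twice the retro-reflection
    spread half-angle, and adjacent cones must not overlap along the arc
    occupied by the onlookers."""
    cone_width = 2.0 * spread_half_angle_deg
    return int(arc_deg // cone_width)


# Illustrative numbers only (not the thesis measurements): onlookers spread
# over a 150 degree arc around the chair, with retro-reflected light
# spreading about +/- 15 degrees around each projector.
viewpoints = max_independent_viewpoints(arc_deg=150.0,
                                        spread_half_angle_deg=15.0)
print(f"at most {viewpoints} optically separate viewpoints "
      f"-> a meeting of up to {viewpoints + 1} people including the remote user")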

    Presence 2005: the eighth annual international workshop on presence, 21-23 September, 2005 University College London (Conference proceedings)

    OVERVIEW (taken from the CALL FOR PAPERS) Academics and practitioners with an interest in the concept of (tele)presence are invited to submit their work for presentation at PRESENCE 2005 at University College London in London, England, September 21-23, 2005. The eighth in a series of highly successful international workshops, PRESENCE 2005 will provide an open discussion forum to share ideas regarding concepts and theories, measurement techniques, technology, and applications related to presence: the psychological state or subjective perception in which a person fails to accurately and completely acknowledge the role of technology in an experience, including the sense of 'being there' experienced by users of advanced media such as virtual reality. The concept of presence in virtual environments has been around for at least 15 years, and the earlier idea of telepresence at least since Minsky's seminal paper in 1980. Recently, for the first time, there has been a burst of funded research activity in this area with the European FET Presence Research initiative. What do we really know about presence and its determinants? How can presence be successfully delivered with today's technology? This conference invites papers that are based on empirical results from studies of presence and related issues and/or which contribute to the technology for the delivery of presence. Papers that make substantial advances in theoretical understanding of presence are also welcome. The interest is not solely in virtual environments but also in mixed reality environments. Submissions will be reviewed more rigorously than in previous conferences, and high-quality papers are therefore sought which make substantial contributions to the field. Approximately 20 papers will be selected for two successive special issues of the journal Presence: Teleoperators and Virtual Environments. PRESENCE 2005 takes place in London and is hosted by University College London. The conference is organized by ISPR, the International Society for Presence Research, and is supported by the European Commission's FET Presence Research Initiative through the Presencia and IST OMNIPRES projects, and by University College London.