
    Collaborative geographic visualization

    Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa for the degree of Master in Environmental Engineering, Management and Environmental Systems profile. The present document is a review of essential references to take into account when developing ubiquitous Geographical Information Systems (GIS) for collaborative visualization purposes. Its chapters focus, respectively, on general principles of GIS, their multimedia components and ubiquitous practices; geo-referenced information visualization and its graphical components of virtual and augmented reality; collaborative environments, their technological requirements, architectural specificities, and models for collective information management; and some final considerations about the future and challenges of collaborative visualization of GIS in ubiquitous environments.

    Quality of experience in telemeetings and videoconferencing: a comprehensive survey

    Telemeetings such as audiovisual conferences or virtual meetings play an increasingly important role in our professional and private lives. For that reason, system developers and service providers will strive for an optimal experience for the user, while at the same time optimizing technical and financial resources. This leads to the discipline of Quality of Experience (QoE), an active field originating from the telecommunication and multimedia engineering domains, which strives to understand, measure, and design the quality of experience with multimedia technology. This paper provides the reader with an entry point to the large and still growing field of QoE of telemeetings by taking a holistic perspective, considering both technical and non-technical aspects, and focusing on current and near-future services. Addressing both researchers and practitioners, the paper first provides a comprehensive survey of factors and processes that contribute to the QoE of telemeetings, followed by an overview of relevant state-of-the-art methods for QoE assessment. To embed this knowledge into recent technology developments, the paper continues with an overview of current trends, focusing on the field of eXtended Reality (XR) applications for communication purposes. Given the complexity of telemeeting QoE and the current trends, new challenges for QoE assessment of telemeetings are identified. To overcome these challenges, the paper presents a novel Profile Template for characterizing telemeetings from the holistic perspective endorsed in this paper.

    Spatially Aware Computing for Natural Interaction

    Spatial information refers to the location of an object in a physical or digital world. It also includes the relative position of an object with respect to other objects around it. In this dissertation, three systems are designed and developed, all of which apply spatial information in different fields. The ultimate goal is to increase the user-friendliness and efficiency of those applications by utilizing spatial information. The first system is a novel Web page data extraction application, which takes advantage of 2D spatial information to discover structured records from a Web page. The extracted information is useful for re-organizing the layout of a Web page to fit mobile browsing. The second application utilizes the 3D spatial information of a mobile device within a large paper-based workspace to implement interactive paper that combines the merits of paper documents and mobile devices. This application can overlay digital information on top of a paper document based on the location of a mobile device within a workspace. The third application further integrates 3D spatial information with sound detection to realize an automatic camera management system. This application automatically controls multiple cameras in a conference room and creates an engaging video by intelligently switching camera shots among meeting participants based on their activities. Evaluations have been made on all three applications, and the results are promising. In summary, this dissertation comprehensively explores the usage of spatial information in various applications to improve their usability.
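
    The first application's use of 2D layout to discover structured records can be pictured with a small sketch. The Python fragment below is a minimal, hypothetical illustration of layout-driven record grouping; the Element type, its fields, and the gap threshold are illustrative assumptions, not the dissertation's actual extraction algorithm.

        # Minimal, hypothetical sketch: group page elements into candidate
        # records purely from their 2D bounding boxes.
        from dataclasses import dataclass

        @dataclass
        class Element:
            x: float      # left edge of the element's bounding box, in pixels
            y: float      # top edge
            w: float      # width
            h: float      # height
            text: str

        def group_into_records(elements, gap_threshold=24.0):
            """Cluster elements into candidate records by the vertical gap between them."""
            ordered = sorted(elements, key=lambda e: e.y)
            records, current, prev_bottom = [], [], None
            for el in ordered:
                if prev_bottom is not None and el.y - prev_bottom > gap_threshold:
                    records.append(current)   # a large vertical gap starts a new record
                    current = []
                current.append(el)
                bottom = el.y + el.h
                prev_bottom = bottom if prev_bottom is None else max(prev_bottom, bottom)
            if current:
                records.append(current)
            return records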

    Removing spatial boundaries in immersive mobile communications

    Despite a worldwide trend towards mobile computing, current telepresence experiences focus on stationary desktop computers, limiting how, when, and where researched solutions can be used. In this thesis I demonstrate that mobile phones are a capable platform for future research, showing the effectiveness of the communications made possible by their inherent portability and ubiquity. I first describe a framework upon which future systems can be built, which allows two distant users to explore one of several panoramic representations of the local environment by reorienting their device. User experiments demonstrate this framework's ability to induce a sense of presence within the space and between users, and show that capturing this environment live provides no significant benefits over constructing it incrementally. This discovery enables a second application that allows users to explore a three-dimensional representation of their environment. Each user's position is shown as an avatar, with live facial capture to facilitate natural communication. Either user may also see the full environment by occupying the same virtual space. This application is also evaluated and shown to provide efficient communications to its users, providing a novel untethered experience not possible on stationary hardware, despite the limited computational power available on mobile devices.
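
    As a rough sketch of the reorientation idea described above (not the thesis framework itself), the following Python function maps a device's yaw and pitch, e.g. from its rotation sensor, to the pixel of an equirectangular panorama the user is facing; the panorama size in the example is an illustrative assumption.

        import math

        def view_center(yaw, pitch, pano_w, pano_h):
            """Map device orientation (radians) to the equirectangular pixel it faces."""
            # Yaw in [-pi, pi) maps linearly to horizontal position,
            # pitch in [-pi/2, pi/2] maps linearly to vertical position.
            x = (yaw + math.pi) / (2.0 * math.pi) * pano_w
            y = (math.pi / 2.0 - pitch) / math.pi * pano_h
            return x % pano_w, min(max(y, 0.0), pano_h)

        # Example: a 4096x2048 panorama, device turned 90 degrees to the right, level pitch.
        cx, cy = view_center(math.pi / 2.0, 0.0, 4096, 2048)   # -> (3072.0, 1024.0)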

    MAGIC: Manipulating Avatars and Gestures to Improve Remote Collaboration

    Remote collaborative work has become pervasive in many settings, from engineering to medical professions. Users are immersed in virtual environments and communicate through life-sized avatars that enable face-to-face collaboration. Within this context, users often collaboratively view and interact with virtual 3D models, for example, to assist in designing new devices such as customized prosthetics, vehicles, or buildings. However, discussing shared 3D content face-to-face has various challenges, such as ambiguities, occlusions, and different viewpoints, all of which decrease mutual awareness, leading to decreased task performance and increased errors. To address this challenge, we introduce MAGIC, a novel approach for understanding pointing gestures in a face-to-face shared 3D space, improving mutual understanding and awareness. Our approach distorts the remote user's gestures to correctly reflect them in the local user's reference space when face-to-face. We introduce a novel metric called pointing agreement to measure what two users perceive in common when using pointing gestures in a shared 3D space. Results from a user study suggest that MAGIC significantly improves pointing agreement in face-to-face collaboration settings, improving co-presence and awareness of interactions performed in the shared space. We believe that MAGIC improves remote collaboration by enabling simpler communication mechanisms and better mutual awareness. Comment: Presented at IEEE VR 202
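
    As a toy illustration of the kind of gesture retargeting described above (not MAGIC's actual warping model), the sketch below mirrors the remote user's hand position by a 180-degree rotation about the shared model's vertical axis and re-aims the pointing ray at the target point the remote user indicated. Both the mirroring choice and the assumption that the indicated target point is available (e.g. from a ray-model intersection in the remote space) are illustrative.

        import numpy as np

        def retarget_ray(remote_origin, target_point, model_center):
            """Re-aim a remote pointing ray from the mirrored avatar position
            toward the same target point on the shared model."""
            rot = np.diag([-1.0, 1.0, -1.0])   # 180-degree yaw about the model's up axis
            local_origin = model_center + rot @ (remote_origin - model_center)
            direction = target_point - local_origin
            direction /= np.linalg.norm(direction)
            return local_origin, direction

        # Example: the remote user stands 2 m in front of the model and points at a
        # spot on its surface; the retargeted ray starts from the mirrored avatar
        # position and still aims at that spot.
        o, d = retarget_ray(np.array([0.0, 1.6, 2.0]),
                            np.array([0.1, 1.0, 0.3]),
                            np.array([0.0, 1.0, 0.0]))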

    Videos in Context for Telecommunication and Spatial Browsing

    The research presented in this thesis explores the use of videos embedded in panoramic imagery to transmit spatial and temporal information describing remote environments and their dynamics. Virtual environments (VEs) through which users can explore remote locations are rapidly emerging as a popular medium for presence and remote collaboration. However, capturing visual representations of locations to be used in VEs is usually a tedious process that requires either manual modelling of environments or the employment of specific hardware. Capturing environment dynamics is not straightforward either, and it is usually performed through specific tracking hardware. Similarly, browsing large unstructured video collections with available tools is difficult, as the abundance of spatial and temporal information makes them hard to comprehend. On a spectrum between 3D VEs and 2D images, panoramas lie in between: they offer the same accessibility as 2D images while preserving the representation of the surroundings that 3D virtual environments provide. For this reason, panoramas are an attractive basis for videoconferencing and browsing tools, as they can relate several videos temporally and spatially. This research explores methods to acquire, fuse, render and stream data coming from heterogeneous cameras with the help of panoramic imagery. Three distinct but interrelated questions are addressed. First, the thesis considers how spatially localised video can be used to increase the spatial information transmitted during video-mediated communication, and whether this improves the quality of communication. Second, the research asks whether videos in panoramic context can be used to convey spatial and temporal information of a remote place and the dynamics within it, and whether this improves users' performance in tasks that require spatio-temporal thinking. Finally, the thesis considers whether display type has an impact on reasoning about events within videos in panoramic context. These research questions were investigated over three experiments, covering scenarios common to computer-supported cooperative work and video browsing. To support the investigation, two distinct video+context systems were developed. The first experiment, on telecommunication, compared our videos-in-context interface with fully panoramic video and conventional webcam video conferencing in an object placement scenario. The second experiment investigated the impact of videos in panoramic context on the quality of spatio-temporal thinking during localization tasks. To support the experiment, a novel interface to video collections in panoramic context was developed and compared with common video-browsing tools. The final experimental study investigated the impact of display type on reasoning about events, exploring three adaptations of our video-collection interface to three display types. The overall conclusion is that videos in panoramic context offer a valid solution to spatio-temporal exploration of remote locations. Our approach presents a richer visual representation in terms of space and time than standard tools, showing that providing panoramic context to video collections makes spatio-temporal tasks easier. To this end, videos in context are a suitable alternative to more difficult, and often expensive, solutions. These findings are beneficial to many applications, including teleconferencing, virtual tourism and remote assistance.
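
    One simplified way to place a video in panoramic context (a sketch under assumed camera intrinsics, not the thesis' actual rendering pipeline) is to convert each panorama pixel to a viewing direction, project it through the video camera, and sample the frame wherever the projection lands inside it:

        import numpy as np

        def embed_frame(pano, frame, cam_rot, f, cx, cy):
            """Overlay a perspective video frame onto an equirectangular panorama."""
            ph, pw, _ = pano.shape
            fh, fw, _ = frame.shape
            ys, xs = np.mgrid[0:ph, 0:pw]
            lon = xs / pw * 2.0 * np.pi - np.pi              # longitude of each pixel
            lat = np.pi / 2.0 - ys / ph * np.pi              # latitude of each pixel
            dirs = np.stack([np.cos(lat) * np.sin(lon),      # world-space ray directions
                             np.sin(lat),
                             np.cos(lat) * np.cos(lon)], axis=-1)
            cam = dirs @ cam_rot.T                           # rotate rays into camera space
            z = np.where(cam[..., 2] > 0, cam[..., 2], np.inf)
            u = cx + f * cam[..., 0] / z
            v = cy - f * cam[..., 1] / z                     # image y axis points down
            mask = (cam[..., 2] > 0) & (u >= 0) & (u < fw) & (v >= 0) & (v < fh)
            pano[mask] = frame[v[mask].astype(int), u[mask].astype(int)]
            return pano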

    2D–3D spatial registration for remote inspection of power substations

    Remote inspection and supervisory control are critical features for smart factories, civilian surveillance, power systems, and other domains. To reduce the time needed to make decisions, operators must have high situation awareness, implying a considerable amount of data to be presented, with minimal sensory load. Recent research suggests the adoption of computer vision techniques for automatic inspection, as well as virtual reality (VR) as an alternative to traditional SCADA interfaces. Nevertheless, although VR may provide a good representation of a substation's state, it lacks some real-time information available from online field cameras and microphones. Since these two sources of information (VR and field information) are not integrated into one single solution, we miss the opportunity of using VR as a SCADA-aware remote inspection tool during operation and disaster-response routines. This work discusses a method to augment virtual environments of power substations with field images, enabling operators to promptly see a virtual representation of the inspected area's surroundings. The resulting environment is integrated with an image-based state inference machine that continuously checks the inferred states against the ones reported by the SCADA database. Whenever a discrepancy is found, an alarm is triggered and the virtual camera can be immediately teleported to the affected region, speeding up system reestablishment. The solution is based on a client-server architecture and allows multiple cameras deployed in multiple substations. Our results concern the quality of the 2D–3D registration and the rendering framerate for a simple scenario. The collected quantitative metrics suggest good camera pose estimations and registrations, as well as an arguably optimal rendering framerate for substation equipment inspection.
    Doctoral thesis (Tese de Doutorado). Supported by CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior; CEMIG - Companhia Energética de Minas Gerais; CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico; and FAPEMIG - Fundação de Amparo a Pesquisa do Estado de Minas Gerais.
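
    The discrepancy-checking workflow described above can be sketched as follows; the names and interfaces here (FieldCamera, infer_state, scada_state, on_alarm) are hypothetical placeholders rather than the dissertation's actual API.

        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class FieldCamera:
            equipment_id: str
            pose: tuple                      # virtual-camera pose linked to this field camera
            capture: Callable[[], object]    # returns the latest frame

        def check_discrepancies(cameras, infer_state, scada_state, on_alarm):
            """Compare image-inferred equipment states with SCADA-reported ones."""
            for cam in cameras:
                inferred = infer_state(cam.capture())        # e.g. breaker open/closed
                reported = scada_state(cam.equipment_id)
                if inferred != reported:
                    # Raise the alarm; the caller can teleport the virtual camera
                    # to cam.pose, the affected region, as described above.
                    on_alarm(cam.equipment_id, inferred, reported, cam.pose)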

    Presence 2005: the eighth annual international workshop on presence, 21-23 September, 2005 University College London (Conference proceedings)

    OVERVIEW (taken from the CALL FOR PAPERS): Academics and practitioners with an interest in the concept of (tele)presence are invited to submit their work for presentation at PRESENCE 2005 at University College London in London, England, September 21-23, 2005. The eighth in a series of highly successful international workshops, PRESENCE 2005 will provide an open discussion forum to share ideas regarding concepts and theories, measurement techniques, technology, and applications related to presence: the psychological state or subjective perception in which a person fails to accurately and completely acknowledge the role of technology in an experience, including the sense of 'being there' experienced by users of advanced media such as virtual reality. The concept of presence in virtual environments has been around for at least 15 years, and the earlier idea of telepresence at least since Minsky's seminal paper in 1980. Recently there has been a burst of funded research activity in this area for the first time, with the European FET Presence Research Initiative. What do we really know about presence and its determinants? How can presence be successfully delivered with today's technology? This conference invites papers that are based on empirical results from studies of presence and related issues and/or that contribute to the technology for the delivery of presence. Papers that make substantial advances in the theoretical understanding of presence are also welcome. The interest is not solely in virtual environments but also in mixed reality environments. Submissions will be reviewed more rigorously than in previous conferences, and high-quality papers are therefore sought which make substantial contributions to the field. Approximately 20 papers will be selected for two successive special issues of the journal Presence: Teleoperators and Virtual Environments. PRESENCE 2005 takes place in London and is hosted by University College London. The conference is organized by ISPR, the International Society for Presence Research, and is supported by the European Commission's FET Presence Research Initiative through the Presencia and IST OMNIPRES projects and by University College London.

    An extended AI-experience : Industry 5.0 in creative product innovation

    Creativity plays a significant role in competitive product ideation. With the increasing emergence of Virtual Reality (VR) and Artificial Intelligence (AI) technologies, the link between such technologies and product ideation is explored in this research to assist and augment creative scenarios in the engineering field. A bibliographic analysis is performed to review relevant fields and their relationships. This is followed by a review of current challenges in group ideation and of state-of-the-art technologies, with the aim of addressing those challenges in this study. This knowledge is applied to the transformation of current ideation scenarios into a virtual environment using AI. The aim is to augment designers' creative experiences, a core value of Industry 5.0, which focuses on human-centricity and social and ecological benefits. For the first time, this research reclaims brainstorming as a challenging and inspiring activity in which participants are fully engaged through a combination of AI and VR technologies. This activity is enhanced through three key areas: facilitation, stimulation, and immersion. These areas are integrated through intelligent team moderation, enhanced communication techniques, and access to multi-sensory stimuli during the collaborative creative process, thereby providing a platform for future research into Industry 5.0 and smart product development.