256 research outputs found

    Video collections in panoramic contexts


    Videoscapes: Exploring Unstructured Video Collections


    Exploring Campus through Web-Based Immersive Adventures Using Virtual Reality Photography: A Low-Cost Virtual Tour Experience

    This study aims to assess the incorporation of virtual reality (VR) photography into the web-based immersive application “Virtual Interactive Campus Tour” (VICT). This application offers users an immersive experience, allowing them to virtually explore university campuses and access information about the facilities and services available. The VICT application offers a cost-effective, attractive, and sustainable alternative for universities to display their resources and interact with prospective students. Through black-box testing, we conducted user acceptance testing (UAT) and functionality testing, confirming the application’s readiness for deployment and its capability to meet institutional and end-user requirements. This study also examined the potential for universities to use VR to meet the expectations of prospective students. The application is compatible with both desktop and mobile devices. The results indicated an overall average validity score of 0.88, suggesting that the measure is valid, and the validation results proved reliable. This study emphasizes the potential of immersive web-based tours in higher education and aims to bridge the divide between virtual exploration and physical visits. By offering an immersive virtual campus experience, this innovative tool has the potential to revolutionize university marketing strategies, increase student engagement, and transform campus visit approaches.

    Exploring Sparse, Unstructured Video Collections of Places

    The abundance of mobile devices and digital cameras with video capture makes it easy to obtain large collections of video clips that contain the same location, environment, or event. However, such an unstructured collection is difficult to comprehend and explore. We propose a system that analyses collections of unstructured but related video data to create a Videoscape: a data structure that enables interactive exploration of video collections by visually navigating — spatially and/or temporally — between different clips. We automatically identify transition opportunities, or portals, and from these portals we construct the Videoscape: a graph whose edges are video clips and whose nodes are portals between clips. Once structured, the videos can be explored interactively by walking the graph or via a geographic map. Given this system, we gauge preference for different video transition styles in a user study, and generate heuristics that automatically choose an appropriate transition style. We evaluate our system using three further user studies, which allow us to conclude that Videoscapes provide significant benefits over related methods. Our system leads to previously unseen ways of interactive spatio-temporal exploration of casually captured videos, and we demonstrate this on several video collections.
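The abstract describes the Videoscape as a graph with an inverted convention: clips are edges and portals (shared-view transition points) are nodes. The paper does not publish an API, but a minimal sketch of that structure, with all class and field names assumed for illustration, might look like:

```python
# Hypothetical sketch of the Videoscape graph described in the abstract:
# portals are nodes, clips are directed edges between portals.
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class Portal:
    """A transition opportunity: a moment where two clips share a view."""
    portal_id: int


@dataclass
class Clip:
    clip_id: int
    start_portal: Portal  # node where this clip (an edge) begins
    end_portal: Portal    # node where it ends


class Videoscape:
    """Graph whose nodes are portals and whose edges are video clips."""

    def __init__(self):
        # portal -> clips leaving it; walking the graph means repeatedly
        # picking one of these outgoing clips at the current portal
        self.outgoing = defaultdict(list)

    def add_clip(self, clip):
        self.outgoing[clip.start_portal].append(clip)

    def clips_from(self, portal):
        """Clips the viewer can transition into at this portal."""
        return list(self.outgoing[portal])
```

Walking the graph then amounts to calling `clips_from` at the current portal and following the chosen clip to its `end_portal`.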

    Merging Special Collections with GIS Technology to Enhance the User Experience

    This analysis evaluates how PhillyHistory.org merged its unique special-collection materials with progressive geospatial technology to challenge and educate the global community. A new generation of technologically savvy researchers has emerged that expects a more enhanced user experience than earlier generations. To meet these needs, collection managers are collaborating with community and local institutions to increase online access to materials; mixing best metadata practices with custom elements to create map mashups; and merging progressive GIS technology and geospatial applications with their collections to enhance the user experience. The PhillyHistory.org website was analyzed to explore how it used various geospatial technologies to create a new type of digital content management system based on geographical information and make its collections accessible via online software and mobile applications.

    Ego perspective video indexing for life logging videos

    This thesis deals with life-logging videos recorded by head-worn devices. The goal is to develop a method to filter out the important parts of life-logging videos, which first requires determining which parts are important. To do this, we examine how autobiographical memory works and adapt an indexing mechanism that operates on similar principles. To index life-logging videos with expressive metadata, we first need to extract information from the video itself. Since faces are an important cue for autobiographical memory recall, image processing consisting of face detection, tracking, and recognition is used to identify the people in a scene. Location data is obtained from GPS. Once all the information is gathered, it is indexed into so-called events. For each event, we define which people are present, where it takes place, and at what time. To do this, an indexing algorithm was developed that segments the video into smaller parts using faces, location, and time. The result is a prototype algorithm that can be developed further to improve the actual segmentation of life-logging videos. This project serves as an information-collection and creation component for future life-logging video navigation tools.
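The thesis does not publish its segmentation code, but the idea it describes, splitting a video into events whenever the visible people, the location, or the time changes substantially, can be sketched as follows. The `Frame` fields and the `max_gap` threshold are assumptions for illustration:

```python
# Hedged sketch of event segmentation for life-logging video: start a new
# event whenever the set of recognised faces or the location changes, or a
# large time gap occurs. All names and thresholds here are illustrative.
from dataclasses import dataclass


@dataclass
class Frame:
    timestamp: float   # seconds since the start of the recording
    faces: frozenset   # identities recognised in this frame
    location: str      # e.g. a reverse-geocoded GPS cell


def segment_events(frames, max_gap=30.0):
    """Group consecutive frames into events sharing people and place."""
    events = []
    current = []
    for f in frames:
        if current and (f.faces != current[-1].faces
                        or f.location != current[-1].location
                        or f.timestamp - current[-1].timestamp > max_gap):
            events.append(current)  # boundary detected: close the event
            current = []
        current.append(f)
    if current:
        events.append(current)
    return events
```

Each resulting event carries exactly the metadata the abstract names: who is present (faces), where (location), and when (the timestamps it spans).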

    Differentiator factors in the implementation of social network sites

    Internship carried out as a Business Analyst at Documento Crítico - Desenvolvimento de Software, S. A. (Cardmobili), supervised by Eng. Catarina Maia. Integrated master's thesis. Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto. 200

    Asynchronous Visualization of Spatiotemporal Information for Multiple Moving Targets

    In the modern information age, the quantity and complexity of spatiotemporal data is increasing both rapidly and continuously. Sensor systems with multiple feeds that gather multidimensional spatiotemporal data will result in information clusters and overload, as well as a high cognitive load for users of these systems. To meet future safety-critical situations and enhance time-critical decision-making missions in dynamic environments, and to support the easy and effective managing, browsing, and searching of spatiotemporal data in a dynamic environment, we propose an asynchronous, scalable, and comprehensive spatiotemporal data organization, display, and interaction method that allows operators to navigate through spatiotemporal information rather than through the environments being examined, and to maintain all necessary global and local situation awareness. To empirically prove the viability of our approach, we developed the Event-Lens system, which generates asynchronous prioritized images to provide the operator with a manageable, comprehensive view of the information collected by multiple sensors. We designed and conducted a user study and interaction-mode experiments. The Event-Lens system was found to have a consistent advantage across multiple moving-target marking-task performance measures. It was also found that participants’ attentional control, spatial ability, and action video-gaming experience affected their overall performance.
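The abstract's "asynchronous prioritized images" suggest that sensor feeds push snapshots independently while the operator pulls the most urgent one next, regardless of arrival order. Event-Lens's actual scoring is not described, so the following is only a minimal sketch of that pattern using a priority queue, with all names and priorities assumed:

```python
# Illustrative sketch (not the Event-Lens implementation): sensors push
# snapshots with a numeric priority; the operator always pops the most
# urgent snapshot, decoupled from the order in which feeds delivered them.
import heapq
import itertools


class PrioritizedImageQueue:
    def __init__(self):
        self._heap = []
        # monotonically increasing counter breaks ties by arrival order
        self._counter = itertools.count()

    def push(self, image_id, priority):
        # heapq is a min-heap, so negate: higher priority pops first
        heapq.heappush(self._heap, (-priority, next(self._counter), image_id))

    def pop(self):
        """Return the highest-priority image awaiting operator attention."""
        return heapq.heappop(self._heap)[2]
```

Equal-priority snapshots come out in arrival order, so no feed starves another at the same urgency level.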

    Framework for creating augmented reality (AR) experiences

    Master's in Informatics Engineering, Escola Superior de Tecnologia e Gestão, Instituto Politécnico de Viana do Castelo. This work proposes the architecture of a system whose main goal is a complete framework for creating augmented reality (AR) experiences, allowing creators to digitize reality to build their storylines. An internal process then merges and groups multimedia content, enabling clear and intuitive navigation within infinite augmented realities (based on the captured real world). In this way, users can create points of interest within their parallel realities, allowing them to navigate and traverse their new augmented worlds through an AR experience.
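The abstract describes creator-defined points of interest inside each captured reality. The framework's data model is not published; a minimal sketch of one plausible shape for it, with every name and field an assumption, could be:

```python
# Hypothetical sketch of per-reality points of interest (POIs) with
# attached multimedia, as the abstract describes. Not the framework's API.
from dataclasses import dataclass, field


@dataclass
class PointOfInterest:
    name: str
    position: tuple  # (x, y, z) in the captured scene's coordinate frame
    media: list = field(default_factory=list)  # attached multimedia content


@dataclass
class Reality:
    """One digitised capture of the real world, holding creator POIs."""
    name: str
    pois: dict = field(default_factory=dict)

    def add_poi(self, poi):
        self.pois[poi.name] = poi

    def tour(self):
        """POIs in insertion order, as a simple navigation route."""
        return list(self.pois.values())
```

A creator would add POIs while authoring a storyline and the viewer would traverse `tour()` during the AR experience.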
