2,397 research outputs found

    An immersive system for browsing and visualizing surveillance video

    HouseFly is an interactive data browsing and visualization system that synthesizes audio-visual recordings from multiple sensors, as well as the metadata derived from those recordings, into a unified viewing experience. The system is being applied to study human behavior in both domestic and retail situations grounded in longitudinal video recordings. HouseFly uses an immersive video technique to display multiple streams of high-resolution video using a real-time warping procedure that projects the video onto a 3D model of the recorded space. The system interface provides the user with simultaneous control over both playback rate and vantage point, enabling the user to navigate the data spatially and temporally. Beyond applications in video browsing, this system serves as an intuitive platform for visualizing patterns over time in a variety of multi-modal data, including person tracks and speech transcripts. United States. Office of Naval Research (Award no. N000140910187).
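    The rendering step HouseFly describes, projecting live video onto a 3D model of the recorded space, comes down to standard projective texture mapping. The minimal sketch below is not taken from HouseFly; the function name and calibration values are illustrative assumptions showing how mesh vertices could be mapped to texture coordinates in a calibrated camera's video frame.

    ```python
    import numpy as np

    def video_texture_coords(vertices, K, R, t, frame_w, frame_h):
        """Map 3D model vertices (N, 3) to normalized UV coordinates in one
        camera's video frame. K, R, t are the camera's intrinsics and pose,
        assumed known from calibration."""
        cam = R @ vertices.T + t[:, None]      # world space -> camera space
        proj = K @ cam                         # camera space -> image plane
        uv = proj[:2] / proj[2]                # perspective divide
        u = uv[0] / frame_w                    # normalize to [0, 1]
        v = uv[1] / frame_h
        in_front = cam[2] > 0                  # keep only points facing the camera
        return np.stack([u, v], axis=1), in_front

    # Illustrative use: project two points of a room model into a 1920x1080 frame.
    K = np.array([[1000.0, 0.0, 960.0],
                  [0.0, 1000.0, 540.0],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.zeros(3)
    verts = np.array([[0.5, 0.2, 3.0], [-1.0, 0.0, 4.0]])
    uvs, visible = video_texture_coords(verts, K, R, t, 1920, 1080)
    ```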

    Videos in Context for Telecommunication and Spatial Browsing

    The research presented in this thesis explores the use of videos embedded in panoramic imagery to transmit spatial and temporal information describing remote environments and their dynamics. Virtual environments (VEs) through which users can explore remote locations are rapidly emerging as a popular medium for presence and remote collaboration. However, capturing visual representations of locations for use in VEs is usually a tedious process that requires either manual modelling of environments or the use of specialised hardware. Capturing environment dynamics is not straightforward either, and is usually performed with dedicated tracking hardware. Similarly, browsing large unstructured video collections with available tools is difficult, as the abundance of spatial and temporal information makes them hard to comprehend. On a spectrum between 3D VEs and 2D images, panoramas lie in between: they offer the accessibility of 2D images while preserving the surrounding spatial representation of 3D virtual environments. For this reason, panoramas are an attractive basis for videoconferencing and browsing tools, as they can relate several videos temporally and spatially. This research explores methods to acquire, fuse, render and stream data coming from heterogeneous cameras with the help of panoramic imagery. Three distinct but interrelated questions are addressed. First, the thesis considers how spatially localised video can be used to increase the spatial information transmitted during video-mediated communication, and whether this improves the quality of communication. Second, the research asks whether videos in panoramic context can convey spatial and temporal information about a remote place and the dynamics within it, and whether this improves users' performance in tasks that require spatio-temporal thinking. Finally, the thesis considers whether display type has an impact on reasoning about events within videos in panoramic context. These research questions were investigated over three experiments, covering scenarios common to computer-supported cooperative work and video browsing. To support the investigation, two distinct video+context systems were developed. The first, telecommunication experiment compared our videos-in-context interface with fully panoramic video and conventional webcam video conferencing in an object-placement scenario. The second experiment investigated the impact of videos in panoramic context on the quality of spatio-temporal thinking during localization tasks. To support the experiment, a novel interface to video collections in panoramic context was developed and compared with common video-browsing tools. The final experimental study investigated the impact of display type on reasoning about events, exploring three adaptations of our video-collection interface to three display types. The overall conclusion is that videos in panoramic context offer a valid solution for spatio-temporal exploration of remote locations. Our approach presents a richer visual representation in terms of space and time than standard tools, showing that providing panoramic context to video collections makes spatio-temporal tasks easier. As such, videos in context are a suitable alternative to more complex, and often expensive, solutions. These findings benefit many applications, including teleconferencing, virtual tourism and remote assistance.
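    As a rough illustration of the geometric idea behind embedding a spatially localised video in a panorama, a viewing direction must be mapped to equirectangular image coordinates. The sketch below assumes a common y-up, z-forward convention; it is not the thesis implementation.

    ```python
    import math

    def direction_to_equirect(dx, dy, dz, pano_w, pano_h):
        """Map a unit viewing direction to pixel coordinates in an
        equirectangular panorama of size pano_w x pano_h (y up, z forward)."""
        lon = math.atan2(dx, dz)                   # longitude in [-pi, pi]
        lat = math.asin(max(-1.0, min(1.0, dy)))   # latitude in [-pi/2, pi/2]
        u = (lon / (2 * math.pi) + 0.5) * pano_w
        v = (0.5 - lat / math.pi) * pano_h
        return u, v

    # The forward direction (+z) lands at the centre of the panorama.
    print(direction_to_equirect(0.0, 0.0, 1.0, 4096, 2048))  # -> (2048.0, 1024.0)
    ```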

    V-Sphere Rubik's Bookcase Interface for Exploring Content in Virtual Reality Marketplace

    In this work, we developed a new interface concept for content exploration in immersive Virtual Reality environments. In our shopping interface, products are represented as true 3D shapes with global illumination effects. This representation can provide a more realistic and consistent Virtual Reality experience. Our shopping interface is essentially a giant spherical Rubik's cube made up of closed loops of bookshelves or cabinets. Users located inside this spherical Rubik interface feel as if they are standing in front of a spherical bookcase with a seemingly infinite number of rows and columns. They can browse the products simply by sliding rows horizontally and columns vertically. Furthermore, we identified additional scenarios in which users can grab products from a distance and examine their suitability by placing them into the real environment. This new 3D interface concept can help develop a more realistic 3D interactive shopping framework in the future.
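    The row/column interaction described above can be modelled as a cyclic 2D grid whose rings of shelves rotate. The sketch below is a hypothetical illustration of that sliding logic; the class and method names are ours, not the paper's.

    ```python
    from collections import deque

    class SphericalBookcase:
        """A cyclic grid of shelves whose rows and columns wrap around,
        approximating the 'spherical Rubik' bookcase idea."""

        def __init__(self, products, rows, cols):
            # Lay the product list out on a rows x cols grid that wraps around.
            self.rows, self.cols = rows, cols
            self.grid = [[products[(r * cols + c) % len(products)]
                          for c in range(cols)] for r in range(rows)]

        def slide_row(self, r, steps=1):
            """Rotate one horizontal loop of shelves by `steps` positions."""
            d = deque(self.grid[r])
            d.rotate(steps)
            self.grid[r] = list(d)

        def slide_column(self, c, steps=1):
            """Rotate one vertical loop of shelves by `steps` positions."""
            col = deque(self.grid[r][c] for r in range(self.rows))
            col.rotate(steps)
            for r, item in enumerate(col):
                self.grid[r][c] = item

        def visible(self, r, c):
            """Product currently facing the user at shelf (r, c)."""
            return self.grid[r % self.rows][c % self.cols]

    case = SphericalBookcase([f"product-{i}" for i in range(12)], rows=3, cols=4)
    case.slide_row(0, 1)
    case.slide_column(2, -1)
    print(case.visible(0, 0))
    ```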

    VIRTUAL TOURS FOR SMART CITIES: A COMPARATIVE PHOTOGRAMMETRIC APPROACH FOR LOCATING HOT-SPOTS IN SPHERICAL PANORAMAS

    The paper aims to investigate the possibilities of using panorama-based VR to survey data related to the set of planning and management activities for urban areas that belong to Smart Cities strategies. The core of our workflow is to facilitate the visualization of the data produced by Smart Cities infrastructures. A graphical interface based on spherical panoramas, instead of complex three-dimensional models, could help the user/citizen better understand the operation of the control units spread across the urban area. From a methodological point of view, three different kinds of spherical panorama acquisition were tested and compared in order to identify a semi-automatic procedure for locating homologous points on two or more spherical images, starting from a point cloud obtained from the same images. The points thus identified make it possible to quickly locate the same hot-spot on multiple images simultaneously. The comparison shows that all three systems proved useful for the purposes of the research, but only one proved geometrically reliable enough to identify the locators used to build the virtual tour.
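    The geometric core of the hot-spot step, projecting one point of the photogrammetric point cloud into several spherical panoramas at once, can be sketched as follows. Station positions, orientations and the axis convention below are assumptions for illustration, not values from the paper.

    ```python
    import numpy as np

    def project_to_panorama(point, center, R, pano_w, pano_h):
        """Project a world point (3,) into an equirectangular panorama whose
        station is at `center` with orientation `R` (assumed known)."""
        d = R @ (np.asarray(point) - np.asarray(center))   # world -> panorama frame
        d = d / np.linalg.norm(d)                          # unit viewing direction
        lon = np.arctan2(d[0], d[2])
        lat = np.arcsin(np.clip(d[1], -1.0, 1.0))
        u = (lon / (2 * np.pi) + 0.5) * pano_w
        v = (0.5 - lat / np.pi) * pano_h
        return u, v

    # Illustrative use: place one hot-spot on two panorama stations of a tour.
    hotspot = [12.0, 1.5, 8.0]
    stations = [([0.0, 1.6, 0.0], np.eye(3)), ([5.0, 1.6, 2.0], np.eye(3))]
    for center, R in stations:
        print(project_to_panorama(hotspot, center, R, 8192, 4096))
    ```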