
    Virtual Exploration of Underwater Archaeological Sites: Visualization and Interaction in Mixed Reality Environments

    This paper describes the ongoing developments in photogrammetry and mixed reality for the European VENUS project (Virtual ExploratioN of Underwater Sites, http://www.venus-project.eu). The main goal of the project is to provide archaeologists and the general public with virtual and augmented reality tools for exploring and studying deep underwater archaeological sites that are out of reach of divers. These sites have to be reconstructed in terms of environment (seabed) and content (artifacts) by performing bathymetric and photogrammetric surveys of the real site and matching points between geolocalized pictures. The idea behind using mixed reality techniques is to offer both archaeologists and the general public new insights into the reconstructed sites: archaeologists can study directly from within the virtual site, while the general public can immersively explore a realistic reconstruction. Both activities are based on the same VR engine but differ drastically in how they present information. The general-public activities emphasize the visual and auditory realism of the reconstruction, whereas the archaeologists' activities emphasize functional aspects focused on the cargo study rather than realism, which led to the development of two parallel VR demonstrators. This paper focuses on several key points developed for the reconstruction process, as well as on issues in both VR demonstrators (archaeological and general public). The first key point concerns the densification of the seabed points obtained through photogrammetry in order to achieve a high-quality terrain reproduction. The second concerns the development of the virtual and augmented reality (VR/AR) demonstrators for archaeologists, designed to exploit the results of the photogrammetric reconstruction. The third concerns the development of the VR demonstrator for the general public, aimed at creating awareness of both the artifacts that were found and the process by which they were discovered, by recreating the dive from ship to seabed.
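    The abstract does not spell out the densification algorithm itself, but the underlying step, turning a sparse photogrammetric point set into a dense terrain grid, can be sketched with generic scattered-data interpolation. A minimal sketch in Python follows; the function name, grid resolution, and cubic interpolation method are all illustrative assumptions, not the project's implementation.

    import numpy as np
    from scipy.interpolate import griddata

    def densify_seabed(points_xyz, resolution=0.5):
        """Interpolate scattered seabed samples onto a regular grid.

        points_xyz : (N, 3) array of x, y, depth samples from photogrammetry
        resolution : grid spacing, assumed here to be in metres
        """
        xy, z = points_xyz[:, :2], points_xyz[:, 2]
        xs = np.arange(xy[:, 0].min(), xy[:, 0].max(), resolution)
        ys = np.arange(xy[:, 1].min(), xy[:, 1].max(), resolution)
        gx, gy = np.meshgrid(xs, ys)
        # Cubic interpolation gives a smooth, densified terrain surface;
        # grid cells outside the surveyed area come back as NaN.
        gz = griddata(xy, z, (gx, gy), method="cubic")
        return gx, gy, gz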

    Digital video-data handling

    Digital procedures for handling television pictures, applied to video data obtained by the Mariner and Ranger spacecraft.

    3D tomography of cells in micro-channels

    We combine confocal imaging, microfluidics, and image analysis to record 3D images of cells in flow. This enables us to recover the full 3D representation of several hundred living cells per minute. Whereas 3D confocal imaging has thus far been limited to static specimens, we overcome this restriction and present a method to access the 3D shape of moving objects. The key to our approach is a tilted arrangement of the micro-channel with respect to the focal plane of the microscope, which forces cells to traverse the focal plane in an inclined manner. As a consequence, individual layers of passing cells are recorded and can then be assembled into a volumetric representation. The full 3D information allows for detailed comparisons with theoretical and numerical predictions that are infeasible with, e.g., 2D imaging. We exemplify our technique by studying red blood cells flowing in a micro-channel under conditions reflecting those prevailing in the microvasculature. We observe two very different types of shapes: 'croissants' and 'slippers'. Additionally, we perform 3D numerical simulations of our experiment to confirm the observations. Since 3D confocal imaging of cells in flow has not previously been realized, we see high potential in the field of flow cytometry, where cell classification thus far relies mostly on 1D scattering and fluorescence signals.
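    To make the slice-stacking idea concrete, here is a minimal sketch of how frames of a cell crossing the tilted focal plane could be assembled into a volume: each frame is a new optical section, the in-plane flow motion is undone by a pixel shift, and the channel tilt sets the z-spacing between sections. The parameter names and the simple shift-based registration are assumptions for illustration, not the authors' pipeline.

    import numpy as np

    def assemble_volume(frames, v_um_per_frame, tilt_deg, px_um):
        """Stack 2D confocal frames of a moving cell into a 3D volume.

        frames         : list of equally sized 2D arrays (one cell, consecutive frames)
        v_um_per_frame : in-plane advection of the cell per frame, in um
        tilt_deg       : channel tilt relative to the focal plane, in degrees
        px_um          : pixel size in um
        """
        shift_px = int(round(v_um_per_frame / px_um))
        # z-step between consecutive sections, set by the channel tilt
        dz_um = v_um_per_frame * np.tan(np.radians(tilt_deg))
        # np.roll stands in for proper registration (which would pad/crop
        # instead of wrapping around the image border)
        volume = np.stack(
            [np.roll(f, -i * shift_px, axis=1) for i, f in enumerate(frames)]
        )
        return volume, dz_um  # voxel spacing along z is dz_um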

    WAYLA - Generating Images from Eye Movements

    We present a method for reconstructing the images viewed by observers based only on their eye movements. By exploring the relationships between gaze patterns and image stimuli, the "What Are You Looking At?" (WAYLA) system learns to synthesize photo-realistic images that are similar to the original pictures being viewed. The WAYLA approach is based on the Conditional Generative Adversarial Network (Conditional GAN) image-to-image translation technique of Isola et al. We consider two specific applications: the first reconstructs newspaper images from gaze heat maps, and the second performs detailed reconstruction of images containing only text. The newspaper image reconstruction is divided into two image-to-image translation operations: the first maps gaze heat maps to image segmentations, and the second maps the generated segmentation to a newspaper image. We validate the performance of our approach using various evaluation metrics, along with human visual inspection. All results confirm the ability of our network to perform image generation tasks using eye-tracking data.
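    One concrete piece of such a pipeline is turning raw fixations into the gaze heat maps that condition the GAN. The abstract does not give the heat-map parameters, so the sketch below, with its Gaussian kernel width and duration weighting, is only a plausible assumption rather than the paper's preprocessing.

    import numpy as np

    def gaze_heatmap(fixations, shape, sigma=25.0):
        """Accumulate (x, y, duration) fixations into a normalized heat map.

        fixations : iterable of (x_px, y_px, duration_s) tuples
        shape     : (height, width) of the viewed image
        sigma     : Gaussian spread in pixels (an assumed value)
        """
        h, w = shape
        yy, xx = np.mgrid[0:h, 0:w]
        heat = np.zeros(shape, dtype=np.float64)
        for x, y, dur in fixations:
            # Each fixation adds a Gaussian blob weighted by its duration.
            heat += dur * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
        return heat / heat.max() if heat.max() > 0 else heat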

    Conceptual design of an airborne laser Doppler velocimeter system for studying wind fields associated with severe local storms

    An airborne laser Doppler velocimeter was evaluated for diagnostics of the wind field associated with an isolated severe thunderstorm. Two scanning configurations were identified. The first is a long-range (out to 10-20 km), roughly horizontal-plane mode intended to allow probing of the velocity field around the storm at higher altitudes (4-10 km). The second is a shorter-range (out to 1-3 km) mode in which a vertical or horizontal plane is scanned for velocity (and possibly turbulence), intended for diagnostics of the lower-altitude region below the storm and in the outflow region. It was concluded that aircraft flight velocities are high enough, and severe-storm lifetimes long enough, that a single airborne Doppler system operating at a range of less than about 20 km can view the storm area from two or more different aspects before the storm characteristics change appreciably.
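    The abstract omits the measurement relation, but coherent laser Doppler velocimetry standardly recovers line-of-sight wind speed from the round-trip Doppler shift, v = lambda * delta_f / 2. The sketch below assumes a 10.6 um CO2 laser wavelength, typical for airborne systems of that era; the paper's actual wavelength is not stated here.

    def los_velocity(doppler_shift_hz, wavelength_m=10.6e-6):
        """Line-of-sight velocity in m/s from the measured Doppler shift.

        The factor 1/2 accounts for the round trip: a target moving at
        speed v along the beam shifts the return by 2 * v / wavelength.
        """
        return 0.5 * wavelength_m * doppler_shift_hz

    # Example: a 1 MHz shift at 10.6 um corresponds to about 5.3 m/s.
    print(los_velocity(1.0e6))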