Virtual Exploration of Underwater Archaeological Sites: Visualization and Interaction in Mixed Reality Environments
This paper describes the ongoing developments in photogrammetry and mixed reality for the VENUS European project (Virtual ExploratioN of Underwater Sites, http://www.venus-project.eu). The main goal of the project is to provide archaeologists and the general public with virtual and augmented reality tools for exploring and studying deep underwater archaeological sites that are out of reach of divers. These sites have to be reconstructed in terms of environment (seabed) and content (artifacts) by performing bathymetric and photogrammetric surveys on the real site and matching points between geolocalized pictures. The idea behind using mixed reality techniques is to offer archaeologists and the general public new insights into the reconstructed archaeological sites, allowing archaeologists to study directly from within the virtual site and allowing the general public to immersively explore a realistic reconstruction of the sites. Both activities are based on the same VR engine but differ drastically in the way they present information: general-public activities emphasize the visual and auditory realism of the reconstruction, while archaeologists' activities emphasize functional aspects focused on the cargo study rather than realism, which led to the development of two parallel VR demonstrators. This paper focuses on several key points developed for the reconstruction process, as well as on issues in both VR demonstrators (archaeological and general public). The first key point concerns the densification of seabed points obtained through photogrammetry in order to obtain a high-quality terrain reproduction. The second concerns the development of the virtual and augmented reality (VR/AR) demonstrators for archaeologists, designed to exploit the results of the photogrammetric reconstruction. The third concerns the development of the VR demonstrator for the general public, aimed at creating awareness both of the artifacts that were found and of the process by which they were discovered, by recreating the dive from ship to seabed.
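The densification step can be pictured as scatter-to-grid interpolation: sparse, irregularly placed photogrammetric soundings are turned into a dense, regular terrain grid. The following is a minimal sketch using inverse-distance weighting, one common choice for this kind of task; the function name, parameters, and data are illustrative and not taken from the VENUS pipeline.

```python
import numpy as np

def densify_seabed(xy, z, grid_x, grid_y, power=2.0, eps=1e-9):
    """Interpolate sparse seabed points onto a dense regular grid
    using inverse-distance weighting (IDW).

    xy: (N, 2) array of surveyed point coordinates
    z:  (N,)   array of measured depths
    grid_x, grid_y: 1D arrays defining the output grid axes
    """
    gx, gy = np.meshgrid(grid_x, grid_y)            # dense target grid
    targets = np.stack([gx.ravel(), gy.ravel()], axis=1)
    # pairwise distances between every grid node and every survey point
    d = np.linalg.norm(targets[:, None, :] - xy[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)                    # closer points weigh more
    z_dense = (w @ z) / w.sum(axis=1)               # weighted average depth
    return z_dense.reshape(gy.shape)

# Example: 4 sparse soundings densified onto a 10x10 terrain grid
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
depths = np.array([-40.0, -42.0, -41.0, -43.0])
terrain = densify_seabed(pts, depths,
                         np.linspace(0, 1, 10), np.linspace(0, 1, 10))
```

Because IDW is a convex combination of the input depths, every interpolated value stays within the range of the measured ones, which makes it a safe (if smooth) baseline for terrain reproduction.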
Digital video-data handling
Digital procedures for handling television pictures applied to video data obtained by Mariner and Ranger spacecraft.
3D tomography of cells in micro-channels
We combine confocal imaging, microfluidics and image analysis to record 3D images of cells in flow. This enables us to recover the full 3D representation of several hundred living cells per minute. Whereas 3D confocal imaging has thus far been limited to steady specimens, we overcome this restriction and present a method to access the 3D shape of moving objects. The key to our approach is a tilted arrangement of the micro-channel with respect to the focal plane of the microscope. This forces cells to traverse the focal plane in an inclined manner. As a consequence, individual layers of passing cells are recorded, which can then be assembled to obtain the volumetric representation. The full 3D information allows for detailed comparisons with theoretical and numerical predictions that are unfeasible with, e.g., 2D imaging. Our technique is exemplified by studying red blood cells flowing in a micro-channel reflecting the conditions prevailing in the microvasculature. We observe two very different types of shapes: 'croissants' and 'slippers'. Additionally, we perform 3D numerical simulations of our experiment to confirm the observations. Since 3D confocal imaging of cells in flow has not yet been realized, we see high potential in the field of flow cytometry, where cell classification has thus far mostly relied on 1D scattering and fluorescence signals.
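The layer-assembly idea can be illustrated with a toy example: because the channel is tilted, frame k images the cell at a different depth, and undoing the known in-flow displacement between frames lets the slices stack into a volume. The sketch below uses synthetic data; the slice count and pixels-per-frame shift are made-up parameters, not values from the paper.

```python
import numpy as np

def assemble_volume(frames, shift_per_frame):
    """Stack 2D focal-plane slices of a moving cell into a 3D volume.

    frames: list of (H, W) images; frame k shows the slice at depth z_k
    shift_per_frame: integer pixels the cell advances along x per frame,
                     undone here so all slices align in the cell's frame
    """
    aligned = [np.roll(f, -k * shift_per_frame, axis=1)
               for k, f in enumerate(frames)]
    return np.stack(aligned, axis=0)                # shape (depth, H, W)

# Synthetic cell: a bright square that moves 2 px to the right per frame
H, W, n_slices, shift = 16, 32, 5, 2
frames = []
for k in range(n_slices):
    img = np.zeros((H, W))
    img[6:10, 4 + k * shift:8 + k * shift] = 1.0    # per-frame position
    frames.append(img)

volume = assemble_volume(frames, shift)             # square aligned at cols 4:8
```

In practice the per-cell displacement would be measured from the images themselves (e.g. by tracking the cell centroid) rather than assumed constant, but the stacking principle is the same.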
A holographic system for subsea recording and analysis of plankton and other marine particles
We report here details of the design, development, initial testing and field deployment of the HOLOMAR system for in-situ subsea holography and analysis of marine plankton and non-living particles. HOLOMAR comprises a submersible holographic camera ("HoloCam") able to record in-line and off-axis holograms at depths down to 100 m, together with specialised reconstruction hardware ("HoloScan") linked to custom image processing and classification software. The HoloCam consists of a laser and power supply, holographic recording optics and holographic plate holders, a water-tight housing and a support frame. It utilises two basic holographic geometries, in-line and off-axis, so that a wide range of species, sizes and concentrations can be recorded. After holograms have been recorded and processed, they are reconstructed in full three-dimensional detail in air in a dedicated replay facility. A computer-controlled microscope, using video cameras to record the image at a given depth, is used to digitise the scene. Specially written software extracts a binarised image of an object in its true focal plane, which is then classified using a neural network. The HoloCam was deployed on two separate cruises in a Scottish sea loch (Loch Etive) to a depth of 100 m, and over 300 holograms were recorded.
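The "binarised image in its true focal plane" step can be sketched as: scan the digitised through-focus stack, pick the slice where the object is sharpest, and threshold it. The focus metric and threshold below are generic illustrative choices, not the HOLOMAR software's actual algorithm.

```python
import numpy as np

def sharpness(img):
    """Focus metric: variance of a 5-point Laplacian response.
    An in-focus silhouette has strong edges, so the in-focus
    slice maximises this value."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def extract_binary_object(stack, threshold=0.5):
    """From a through-focus stack of digitised slices, select the
    sharpest (true focal) plane and return its binarised image."""
    best = max(range(len(stack)), key=lambda k: sharpness(stack[k]))
    return best, (stack[best] > threshold).astype(np.uint8)

# Synthetic stack: slice 1 is 'in focus' (sharp disc), 0 and 2 are blurred
yy, xx = np.mgrid[0:32, 0:32]
disc = ((yy - 16) ** 2 + (xx - 16) ** 2 < 36).astype(float)
blurred = 0.25 * (np.roll(disc, 2, 0) + np.roll(disc, -2, 0)
                  + np.roll(disc, 2, 1) + np.roll(disc, -2, 1))
stack = [blurred, disc, blurred]
plane, mask = extract_binary_object(stack)          # plane == 1
```

The binary mask from the sharpest plane is what would then be handed to a classifier such as the neural network mentioned above.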
ToScA North America (6–8 June 2017, The University of Texas, Austin, TX) Program
ToScA North America will address key areas of science, including Multi-modal Imaging, Geosciences, Forensics, Increasing Contrast, Educational Outreach, Data, Materials Science, and Medical and Biological Science. Hosts: University of Texas High-Resolution X-ray CT Facility (UTCT); Jackson School of Geosciences, The University of Texas at Austin; Natural History Museum (London); Royal Microscopical Society (Oxford, UK).
WAYLA - Generating Images from Eye Movements
We present a method for reconstructing images viewed by observers based only on their eye movements. By exploring the relationships between gaze patterns and image stimuli, the "What Are You Looking At?" (WAYLA) system learns to synthesize photo-realistic images that are similar to the original pictures being viewed. The WAYLA approach is based on the conditional generative adversarial network (conditional GAN) image-to-image translation technique of Isola et al. We consider two specific applications: the first, reconstructing newspaper images from gaze heat maps, and the second, detailed reconstruction of images containing only text. The newspaper image reconstruction process is divided into two image-to-image translation operations, the first mapping gaze heat maps into image segmentations, and the second mapping the generated segmentation into a newspaper image. We validate the performance of our approach using various evaluation metrics, along with human visual inspection. All results confirm the ability of our network to perform image generation tasks using eye-tracking data.
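In the Isola et al. pix2pix formulation that WAYLA builds on, each translation stage trains a generator against two terms: a conditional-GAN term (fool the discriminator) plus an L1 reconstruction term weighted by a factor λ. A minimal numeric sketch of that objective, applied to the two chained stages described above, is shown below; the arrays and discriminator outputs are toy values, not trained models.

```python
import numpy as np

def generator_loss(d_fake, fake, target, lam=100.0, eps=1e-12):
    """Pix2pix-style generator objective: fool the discriminator
    (cGAN term) while staying close to ground truth (L1 term).

    d_fake: discriminator probabilities on generated patches, in (0, 1)
    fake, target: generated and ground-truth images
    lam: weight of the L1 term (100 in the pix2pix paper)
    """
    cgan = -np.mean(np.log(d_fake + eps))           # -log D(x, G(x))
    l1 = np.mean(np.abs(fake - target))             # pixel-wise fidelity
    return cgan + lam * l1

# Two-stage WAYLA pipeline, each stage a translation with its own loss:
#   stage 1: gaze heat map -> layout segmentation
#   stage 2: layout segmentation -> newspaper image
rng = np.random.default_rng(0)
seg_fake, seg_true = rng.random((8, 8)), rng.random((8, 8))
img_fake, img_true = rng.random((8, 8)), rng.random((8, 8))
d1 = np.full((4, 4), 0.5)                           # toy patch discriminator
d2 = np.full((4, 4), 0.5)
total = (generator_loss(d1, seg_fake, seg_true)
         + generator_loss(d2, img_fake, img_true))
```

A perfect generator that also fully convinces the discriminator drives both terms toward zero, which is the training target for each stage.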
Conceptual design of an airborne laser Doppler velocimeter system for studying wind fields associated with severe local storms
An airborne laser Doppler velocimeter was evaluated for diagnostics of the wind field associated with an isolated severe thunderstorm. Two scanning configurations were identified. The first is a long-range (out to 10-20 km), roughly horizontal plane mode intended to allow probing of the velocity field around the storm at higher altitudes (4-10 km). The other is a shorter-range (out to 1-3 km) mode in which a vertical or horizontal plane is scanned for velocity (and possibly turbulence), intended for diagnostics of the lower-altitude region below the storm and in the out-flow region. It was concluded that aircraft flight velocities are high enough, and severe-storm lifetimes long enough, that a single airborne Doppler system operating at a range of less than about 20 km can view the storm area from two or more different aspects before the storm characteristics change appreciably.
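The velocity measurement underlying such a system is the standard lidar Doppler relation: for backscattered light the round trip doubles the shift, so Δf = 2v/λ and v = λΔf/2. A one-line sketch follows; the 10.6 µm CO2-laser wavelength is an assumption for illustration, as the abstract does not state the laser type.

```python
# Line-of-sight wind speed from the Doppler shift of backscattered light.
# For a monostatic lidar the round trip doubles the shift: delta_f = 2 v / lambda.
WAVELENGTH_M = 10.6e-6   # assumed CO2 laser wavelength (not stated in the abstract)

def radial_velocity(doppler_shift_hz, wavelength_m=WAVELENGTH_M):
    """Velocity component along the beam, in m/s."""
    return 0.5 * wavelength_m * doppler_shift_hz

# Example: a 3.77 MHz shift corresponds to roughly 20 m/s along the beam
v = radial_velocity(3.77e6)
```

Viewing the storm from two or more aspects, as the abstract proposes, is what allows these one-dimensional line-of-sight components to be combined into a horizontal wind vector.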