Virtuality Supports Reality for e-Health Applications
Strictly speaking, the word “virtuality” or the expression “virtual reality” refers to things simulated or created by a computer, which do not really exist. More and more often, such things are referred to with the adjectives “virtual” or “digital”, or with the prefixes “e-” or “cyber-”. We speak, for instance, of virtual, digital, e-, or cyber- communities, cash, business, greetings, books, and even pets.
Virtuality offers interesting advantages with respect to “simple” reality, since it can reproduce, augment, and even overcome reality.
Reproduction no longer means, as it has until recently, that a camera films a scene from a fixed point of view and a player shows it: today the scene can be reproduced dynamically, moving the point of view in practically any direction, so that “real” becomes “realistic”.
Virtuality can augment reality in the sense that graphics are pulled out of a television screen (or computer, laptop, or handheld display) and integrated with real-world environments. In this way useful, and often essential, information is added for the user. For example, apps are now available even for iPhone users that overlay graphical information on the live camera view of the surroundings, so that the heights of mountains, the names of streets, or the alignment of satellites can be read directly over the real mountains, the real streets, and the real sky.
But virtuality can even overcome reality, since it can produce and make visible hidden, inaccessible, or past realities, and even provide an alternative, non-real world. We can virtually look deep into matter down to atomic dimensions, take a virtual tour through a past century, or give visibility to hypothetical worlds otherwise difficult or impossible simply to describe.
These are the fundamental reasons for the naturally growing interest in “producing” virtuality. Here we discuss some of the available methods for producing virtuality, in particular pointing out the steps necessary for “crossing” from reality “towards” virtuality. Between these two parallel worlds, the real and the virtual, interactions can also exist, and these can lead to further advantages.
We treat both “production” and “interaction”, with the aim of focusing attention on how virtuality can be applied in biomedical fields, since virtual reality has been shown to bring important and relevant benefits to e-health applications.
As an example, virtual tomography combines 3D anatomical features reconstructed from several CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) images with a computer-generated kinesthetic interface, yielding a useful tool for diagnosis and treatment. With new endovascular simulation techniques, a head-mounted display superimposes 3D images on the patient’s skin to guide implantable devices inside blood vessels.
Among all the possibilities, we chose to investigate the fields where we believe virtual applications can bring the most meaningful advantages: surgery simulation, cognitive and neurological rehabilitation, postural and motor training, and brain-computer interfaces. We give the reader a necessarily partial but nevertheless fundamental view of what virtual reality can do to improve medical treatment and so, in the end, the quality of our lives.
Depth Measurement in Integral Images
The development of a satisfactory three-dimensional imaging system is a constant pursuit of the scientific community and the entertainment industry. Among the many different methods of producing three-dimensional images, integral imaging is a technique that is capable of creating and encoding a true volume spatial optical model of the object scene in the form of a planar intensity distribution by using unique optical components. The generation of depth maps from three-dimensional integral images is of major importance for modern electronic display systems, enabling content-based interactive manipulation and content-based image coding. The aim of this work is to address the particular issue of analyzing integral images in order to extract depth information from the planar recorded integral image.
To develop a way of extracting depth information from the integral image, the unique characteristics of the three-dimensional integral image data have been analyzed, and a high correlation has been found between pixels spaced one microlens pitch apart. A new method of extracting depth information from viewpoint image extraction is developed. A viewpoint image is formed by sampling the pixels at the same local position under each microlens; each viewpoint image is thus a two-dimensional parallel projection of the three-dimensional scene. Through geometrical analysis of the integral recording process, a depth equation is derived which describes the mathematical relationship between object depth and the displacement between corresponding viewpoint images. With the depth equation, depth estimation is then converted to the task of disparity analysis. A correlation-based block-matching approach is chosen to find the disparity among viewpoint images.
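The viewpoint-image extraction step described above can be sketched as follows. This is a minimal illustration, assuming a square microlens array whose pitch is a whole number of pixels; the function name and parameters are assumptions, not the thesis’s actual implementation:

```python
import numpy as np

def extract_viewpoint_images(integral_img, pitch):
    """Split a planar integral image into viewpoint images.

    Each viewpoint image gathers the pixels that share the same local
    position (u, v) under every microlens, i.e. it samples the integral
    image at one-pitch intervals, yielding a parallel projection of the
    scene per (u, v)."""
    h, w = integral_img.shape[:2]
    h = (h // pitch) * pitch          # crop to a whole number of lenses
    w = (w // pitch) * pitch
    img = integral_img[:h, :w]
    views = {}
    for u in range(pitch):
        for v in range(pitch):
            views[(u, v)] = img[u::pitch, v::pitch]
    return views
```

Disparity between any two such viewpoint images can then be estimated with standard correlation-based block matching, and converted to depth through the derived depth equation.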
To improve the performance of depth estimation from the extracted viewpoint images, a modified multi-baseline algorithm is developed, followed by a neighborhood constraint and relaxation technique to improve the disparity analysis. To deal with homogeneous regions and object borders, where correct depth estimation from disparity analysis alone is almost impossible, two further techniques, viz. Feature Block Pre-selection and Consistency Post-screening, are applied. The final depth maps generated from the available integral image data achieve very good visual results.
Acceleration Techniques for Photo Realistic Computer Generated Integral Images
The research work presented in this thesis has approached the task of accelerating the generation of photo-realistic integral images produced by integral ray tracing. Ray tracing is a computationally expensive algorithm that spawns one or more rays through each pixel of the image into the space containing the scene. Ray tracing integral images consumes more processing time than ordinary images. The unique characteristics of the 3D integral camera model have been analysed, and it has been shown that different coherency aspects than in normal ray tracing can be exploited in order to accelerate the generation of photo-realistic integral images.
The image-space coherence has been analysed, describing the relation between rays and projected shadows in the rendered scene. The shadow cache algorithm has been adapted to minimise shadow intersection tests in integral ray tracing; shadow intersection tests make up the majority of the intersection tests in ray tracing. Novel pixel-tracing styles are developed specifically for integral ray tracing to improve the image-space coherence and the performance of the shadow cache algorithm. Accelerating the generation of photo-realistic integral images using the image-space coherence between shadows and rays has achieved time savings of up to 41%. It has also been shown that applying the new pixel-tracing styles does not affect the scalability of integral ray tracing on parallel computers.
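The shadow-cache idea can be illustrated with a short sketch. This is not the thesis’s implementation; the class, the per-light cache layout, and the `intersects` predicate are illustrative assumptions. The point is that coherent successive rays are usually blocked by the same occluder, so testing the cached object first avoids most intersection tests:

```python
class ShadowCache:
    """Remembers the last occluder per light source. Because successive
    pixels traced coherently tend to see the same occluder, the cached
    object is tested first; a cache hit costs one intersection test
    instead of a full scene traversal."""
    def __init__(self):
        self.last_occluder = {}       # light id -> last blocking object

    def in_shadow(self, shadow_ray, light_id, objects):
        cached = self.last_occluder.get(light_id)
        if cached is not None and cached.intersects(shadow_ray):
            return True               # cache hit: scene traversal skipped
        for obj in objects:
            if obj is cached:
                continue              # already tested above
            if obj.intersects(shadow_ray):
                self.last_occluder[light_id] = obj
                return True
        self.last_occluder[light_id] = None
        return False
```

The pixel-tracing styles mentioned above aim precisely at ordering the traced pixels so that consecutive shadow rays stay coherent and this cache hits as often as possible.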
A novel integral reprojection algorithm has been developed through geometrical analysis of integral image generation, in order to exploit the temporal-spatial coherence between integral frames. A new derivation of the integral projection matrix, for projecting points through an axial model of a lenticular lens, has been established. Rapid generation of 3D photo-realistic integral frames has been achieved, at a speed four times faster than normal generation.
Use of Depth Perception for the Improved Understanding of Hydrographic Data
This thesis has reviewed how increased depth perception can be used to improve the understanding of hydrographic data. First, visual cues and various visual displays and techniques were investigated. From this investigation, 3D stereoscopic techniques proved superior in improving the depth perception and understanding of spatially related data, and a further investigation of current 3D stereoscopic visualisation techniques was carried out. After reviewing how hydrographic data is currently visualised, the chromo stereoscopic visualisation technique was chosen for further research on selected hydrographic data models. A novel chromo stereoscopic application was developed, and the results of its evaluation on selected hydrographic data models clearly show improved depth perception and understanding of the data models.
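The core of a chromo stereoscopic display is a depth-to-colour transfer function: with ChromaDepth-style prism glasses, red appears closest to the viewer and blue farthest, so depth (e.g. water depth) is mapped onto the red-to-blue hue range. The sketch below uses a simple linear hue ramp, which is an illustrative assumption, not the transfer function used in the thesis:

```python
import colorsys

def chromo_stereo_colour(depth, d_min, d_max):
    """Map a depth value to an RGB colour for chromostereoscopic viewing.

    Depth is normalised to [0, 1] and mapped linearly onto the hue range
    red (0.0, perceived near) to blue (~0.67, perceived far)."""
    t = (depth - d_min) / (d_max - d_min)   # 0 = shallow, 1 = deep
    t = min(max(t, 0.0), 1.0)               # clamp out-of-range depths
    hue = 0.67 * t                          # red -> green -> blue ramp
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)
```

Rendering each sounding or grid cell with such a colour lets the prism glasses turn the hue gradient into apparent depth, without requiring a two-view stereo display.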
A review of snapshot multidimensional optical imaging: Measuring photon tags in parallel
Multidimensional optical imaging has seen remarkable growth in the past decade. Rather than measuring only the two-dimensional spatial distribution of light, as in conventional photography, multidimensional optical imaging captures light in up to nine dimensions, providing unprecedented information about incident photons’ spatial coordinates, emittance angles, wavelength, time, and polarization. Multidimensional optical imaging can be accomplished either by scanning or by parallel acquisition. Compared with scanning-based imagers, parallel acquisition, also dubbed snapshot imaging, has a prominent advantage in maximizing optical throughput, particularly when measuring a datacube of high dimensions. Here, we first categorize snapshot multidimensional imagers based on their acquisition and image-reconstruction strategies, then highlight the snapshot advantage in the context of optical throughput, and finally discuss their state-of-the-art implementations and applications.
An overview of the holographic display related tasks within the European 3DTV project
A European consortium has been working since September 2004 on all video-based technical aspects of three-dimensional television. The group has structured its technical activities under five technical committees, focusing on capturing 3D live scenes, converting the captured scenes to abstract 3D representations, transmitting the 3D visual information, displaying the 3D video, and processing signals for the conversion of the abstract 3D video into the signals needed to drive the display. The display of 3D video signals by holographic means is highly desirable. Synthesis of high-resolution computer-generated holograms with high spatial-frequency content, using fast algorithms, is crucial. The Fresnel approximation with its fast implementations, fast superposition of zone-lens terms, and look-up tables of pre-computed holoprimitives are reported in the literature. Phase-retrieval methods are also under investigation. Successful solutions to this problem will benefit from proper utilization and adaptation of signal-processing tools such as wavelets, fresnelets, chirplets, and atomic decompositions, and of optimization algorithms such as matching pursuit or simulated annealing.
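One of the fast Fresnel implementations mentioned above can be sketched with the standard FFT-based transfer-function method: the field is propagated a distance z by multiplying its spectrum with the Fresnel transfer function. This is a generic textbook sketch, not the consortium’s code, and the sampling parameters are illustrative assumptions:

```python
import numpy as np

def fresnel_propagate(field, wavelength, z, dx):
    """Propagate a complex optical field over distance z using the
    Fresnel approximation in the frequency domain (one FFT pair).

    field      : square complex array sampled at pitch dx [m]
    wavelength : optical wavelength [m]
    z          : propagation distance [m]"""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function: constant phase times quadratic chirp
    H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because the transfer function is a pure phase factor, the propagation is unitary (energy-preserving), and a full hologram frame costs only two FFTs, which is what makes this family of methods attractive for computer-generated hologram synthesis.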
Augmented Reality and Its Application
Augmented Reality (AR) is a discipline concerned with the interactive experience of a real-world environment in which real-world objects and elements are enhanced with computer-generated perceptual information. It has many potential applications in education, medicine, and engineering, among other fields. This book explores these potential uses, presenting case studies and investigations of AR for vocational training, emergency response, interior design, architecture, and much more.