Holoscopic 3D imaging and display technology: Camera/ processing/ display
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Holoscopic 3D imaging, also known as "integral imaging", was first proposed by Lippmann in 1908. It has become an attractive technique for creating full-colour 3D scenes that exist in space. It uses a single camera aperture to record the spatial information of a real scene, together with a regularly spaced microlens array that mimics the fly's-eye principle, creating a physical duplicate of the light field and hence a true 3D imaging technique.
While stereoscopic and multiview 3D imaging systems, which mimic human binocular vision, are widely available in the commercial market, holoscopic 3D imaging technology is still in the research phase. The aim of this research is to investigate the spatial resolution of holoscopic 3D imaging and display technology, covering the holoscopic 3D camera, processing and display.
A smart microlens array architecture is proposed that doubles the horizontal spatial resolution of the holoscopic 3D camera by trading horizontal resolution against vertical resolution; in particular, it overcomes the unbalanced pixel aspect ratio of unidirectional holoscopic 3D images. In addition, omnidirectional holoscopic 3D computer graphics rendering techniques are proposed that reduce rendering complexity and facilitate holoscopic 3D content generation.
A holoscopic 3D image stitching algorithm is proposed that widens the overall viewing angle of the holoscopic 3D camera aperture, and pre-processing filters for holoscopic 3D images are proposed for spatial data alignment and 3D image data processing. In addition, a dynamic hyperlinker tool is developed that makes interactive holoscopic 3D video content searchable and browsable.
Novel pixel mapping techniques are proposed that improve spatial resolution and visual definition in space. For instance, 4D-DSPM increases the horizontal 3D pixel density from 44 3D-PPI to 176 3D-PPI and achieves a spatial resolution of 1365 × 384 3D pixels, whereas the traditional mapping yields 341 × 1536 3D pixels. In addition, distributed pixel mapping is proposed that improves the quality of the holoscopic 3D scene in space by creating RGB colour-channel elemental images.
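As an illustrative sketch of the resolution trade quoted above, the snippet below exchanges vertical 3D-pixel samples for horizontal ones by a factor of four; the factor and the helper function are assumptions made for illustration and do not reproduce the actual 4D-DSPM mapping.

```python
# Illustrative arithmetic only: exchange vertical 3D-pixel samples for
# horizontal ones by an assumed factor of 4. This is not the thesis's
# actual 4D-DSPM pixel mapping, just the bookkeeping behind the numbers.

def trade_resolution(h_px, v_px, h_ppi, factor=4):
    """Return (horizontal px, vertical px, horizontal 3D-PPI) after the trade."""
    return h_px * factor, v_px // factor, h_ppi * factor

h, v, ppi = trade_resolution(341, 1536, 44)
print(f"{h} x {v} 3D pixels at {ppi} 3D-PPI horizontally")
# -> 1364 x 384 3D pixels at 176 3D-PPI, close to the 1365 x 384 quoted above
```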
On the Hardware/Software Design and Implementation of a High Definition Multiview Video Surveillance System
Virtual camera synthesis for soccer game replays
In this paper, we present a set of tools developed during the creation of a platform that allows the automatic generation of virtual views in live soccer game production. Observing the scene through a multi-camera system, a 3D approximation of the players is computed and used for the synthesis of virtual views. The system is suitable both for static scenes, to create bullet-time effects, and for video applications, where the virtual camera moves as the game plays.
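As a minimal illustration of the view-synthesis step described above, the sketch below projects a reconstructed 3D player position into a virtual camera using a standard pinhole model; the intrinsics, pose and point are placeholder values chosen for the example, not data from the paper.

```python
import numpy as np

# Minimal pinhole projection of a reconstructed 3D point into a virtual
# camera. K, R, t and the player position are placeholders for illustration.

K = np.array([[1200.0,    0.0, 960.0],   # focal lengths and principal point (pixels)
              [   0.0, 1200.0, 540.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                            # virtual camera orientation (world -> camera)
t = np.array([0.0, -1.5, 20.0])          # virtual camera translation (metres)

def project(point_world):
    """Project a 3D world point into the virtual image plane."""
    p_cam = R @ point_world + t          # world coordinates -> camera coordinates
    u, v, w = K @ p_cam                  # homogeneous pixel coordinates
    return u / w, v / w

player = np.array([2.0, 0.9, 5.0])       # example reconstructed player point (metres)
print(project(player))                   # pixel coordinates in the virtual view
```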
Methods for reducing visual discomfort in stereoscopic 3D: A review
This work was supported by EPSRC Grant EP/M01469X/1, "Geometric Evaluation of Stereoscopic Video".
Model based 3D vision synthesis and analysis for production audit of installations.
One of the challenging problems in the aerospace industry is to design an automated 3D vision system that can sense the installation components in an assembly environment and check that certain safety constraints are duly respected. This thesis describes a concept application to aid a safety engineer in performing an audit of a production aircraft against safety-driven installation requirements such as segregation, proximity, orientation and trajectory. The capability is achieved using the following steps. The initial step is to capture images of a product and measure distances between datum points within the product, with or without reference to a planar surface; this gives the safety engineer a means to perform measurements on a set of captured images of the equipment of interest. The next step is to reconstruct a digital model of the fabricated product by using the multiple captured images to reposition parts according to the actual build. The safety-related installation constraints are then projected onto the 3D digital reconstruction, respecting the original intent of the constraints as defined in the digital mock-up. The differences between the 3D reconstruction of the actual product and the design-time digital mock-up are identified, and finally the differences and non-conformances that are relevant to the safety-driven installation requirements, with reference to the original safety intent, are identified. Together, these steps give the safety engineer the ability to overlay a digital reconstruction that is as true to the fabricated product as possible, so that they can see how the product conforms or does not conform to the safety-driven installation requirements. The work has produced a concept demonstrator that will be further developed in future work to address accuracy, workflow and process efficiency.
A new depth-based segmentation technique, GrabcutD, is proposed as an improvement to the existing Grabcut graph-cut segmentation method. Conventional Grabcut relies only on color information to achieve segmentation; however, in stereo or multiview analysis there is additional information that can also be used to improve segmentation. Depth-based approaches carry the discriminative power to ascertain whether an object is nearer or farther. We show the usefulness of the approach when stereo information is available and evaluate it against state-of-the-art results using standard datasets.
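As a minimal sketch of the depth-informed idea described above, the snippet below seeds OpenCV's standard GrabCut with a mask derived from a depth map, so that nearer pixels start as probable foreground; this illustrates the general principle of adding depth cues to GrabCut and is not the thesis's GrabcutD formulation.

```python
import numpy as np
import cv2

# Sketch: seed OpenCV's standard GrabCut with a mask derived from depth,
# so nearer pixels start as probable foreground and the rest as probable
# background. Illustrates depth-informed initialisation only; it is not
# the GrabcutD algorithm proposed in the thesis.

def depth_seeded_grabcut(image_bgr, depth, near_thresh, iters=5):
    mask = np.full(depth.shape, cv2.GC_PR_BGD, dtype=np.uint8)
    mask[depth < near_thresh] = cv2.GC_PR_FGD      # nearer pixels: probable foreground

    bgd_model = np.zeros((1, 65), np.float64)      # GMM parameters used internally
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, None, bgd_model, fgd_model,
                iters, cv2.GC_INIT_WITH_MASK)

    # Collapse the four GrabCut labels into a binary foreground mask.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)

# Example usage (file names are placeholders):
# img = cv2.imread("left.png")
# depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
# fg = depth_seeded_grabcut(img, depth, near_thresh=80)
```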
A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging
Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data derived from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, has confronted users with an ever-growing amount of data, with terabytes of imaging data created within a day. With the possibility of gentler and higher-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed into or embedded in full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers, is becoming an invaluable tool.
In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh data and large volumetric data with multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features, and discuss its use with VR/AR hardware and in distributed rendering.
In addition to the visualisation framework, we present a series of case studies where scenery provides tangible benefit in developmental and systems biology. With Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets by tracking eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. We further introduce ideas for moving towards virtual reality-based laser ablation and perform a user study to gain insight into performance, acceptance and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation, and finally we present sciview, an ImageJ2/Fiji plugin making the features of scenery available to a wider audience.
Abstract
Foreword and Acknowledgements
Overview and Contributions
Part I - Introduction
1 Fluorescence Microscopy
2 Introduction to Visual Processing
3 A Short Introduction to Cross Reality
4 Eye Tracking and Gaze-based Interaction
Part II - VR and AR for Systems Biology
5 scenery — VR/AR for Systems Biology
6 Rendering
7 Input Handling and Integration of External Hardware
8 Distributed Rendering
9 Miscellaneous Subsystems
10 Future Development Directions
Part III - Case Studies
11 Bionic Tracking: Using Eye Tracking for Cell Tracking
12 Towards Interactive Virtual Reality Laser Ablation
13 Rendering the Adaptive Particle Representation
14 sciview — Integrating scenery into ImageJ2 & Fiji
Part IV - Conclusion
15 Conclusions and Outlook
Backmatter & Appendices
A Questionnaire for VR Ablation User Study
B Full Correlations in VR Ablation Questionnaire
C Questionnaire for Bionic Tracking User Study
List of Tables
List of Figures
Bibliography
Selbstständigkeitserklärung (Declaration of Authorship)
End-to-end 3D video communication over heterogeneous networks
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Three-dimensional technology, more commonly referred to as 3D technology, has revolutionised many fields, including entertainment, medicine and communications. In addition to 3D films, games and sports channels, 3D perception has made tele-medicine a reality, and consumer electronics manufacturers predicted that, by 2015, 30% of all HD panels in the home would be 3D enabled. Stereoscopic cameras, a comparatively mature technology compared with other 3D systems, are now being used by ordinary citizens to produce 3D content and share it at the click of a button, just as they do with their 2D counterparts via sites like YouTube. But technical challenges still exist, including with autostereoscopic multiview displays. Because of its increased amount of data, 3D content requires many complex decisions for transmission or storage, including how to represent it and which compression format is best; any decision must be taken in the light of the available bandwidth or storage capacity, quality and user expectations. Free-viewpoint navigation also remains partly unsolved. The most pressing issue standing in the way of widespread uptake of consumer 3D systems is the ability to deliver 3D content to heterogeneous consumer displays over heterogeneous networks. Optimising 3D video communication must therefore consider the entire pipeline, from the video source through transmission to the end display. Multi-view video offers the most compelling solution for 3D, providing motion parallax without the need to wear headgear for 3D perception, and optimising multi-view video for delivery and display could increase the demand for true 3D in the consumer market. This thesis focuses on end-to-end quality optimisation in 3D video communication and transmission, offering solutions for optimisation at the compression, transmission, and decoder levels.
Brunel University - Isambard Research Scholarship.
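The remark above about the increased amount of 3D data can be made concrete with a small back-of-the-envelope calculation; the resolution, frame rate, bit depth and view count below are illustrative assumptions, not figures from the thesis.

```python
# Back-of-the-envelope raw data rate for an uncompressed multi-view 3D stream.
# All parameters below are illustrative assumptions, not figures from the thesis.

width, height = 1920, 1080   # HD resolution per view
fps = 30                     # frames per second
bits_per_pixel = 24          # 8 bits each for R, G and B
views = 8                    # number of camera views in the multi-view stream

per_view = width * height * bits_per_pixel * fps          # bits per second per view
total = per_view * views
print(f"per view: {per_view / 1e9:.2f} Gbit/s, total: {total / 1e9:.1f} Gbit/s")
# About 1.49 Gbit/s per view and roughly 11.9 Gbit/s for eight views, far beyond
# typical consumer bandwidth, which is why compression and transmission
# optimisation dominate the end-to-end pipeline.
```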