Voxel-based 3D Visualization in OpenGL
This thesis deals with volume rendering in OpenGL and looks at the different areas in which 3D modeling and visualization are used. The thesis focuses on voxel rendering, its advantages and drawbacks, and the implementation of such a renderer.
The aim of the thesis was to develop a voxel renderer in OpenGL from scratch into a fully functional application that could visualize different 3D data sets, generated for example by Diffpack. The data sets are scalar fields, which are visualized by assigning transparency and color to voxels based on the values in the data sets.
There are multiple ways to visualize voxels; I have mainly used a method that maps textures onto 2D planes, which are then assembled into a 3D voxel set. This method is supported by common 3D hardware.
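The slice-based method above amounts to blending semi-transparent 2D slices back to front. As a rough illustration only (the thesis uses OpenGL textures and hardware blending; the function names and the toy ramp transfer function here are invented for this sketch), the following NumPy code shows the "over" compositing and the assignment of color and opacity from scalar values:

```python
import numpy as np

def transfer_function(scalar):
    """Toy transfer function: map scalars in [0, 1] to RGB and opacity.
    (Invented ramp for illustration; a real renderer uses a user-defined map.)"""
    rgb = np.stack([scalar, 0.2 * scalar, 1.0 - scalar], axis=-1)
    alpha = scalar * 0.5  # semi-transparent voxels
    return rgb, alpha

def composite_slices(volume):
    """Back-to-front 'over' compositing along z -- the same blend that
    texture-mapped slice rendering performs on the GPU."""
    h, w = volume.shape[1:]
    out = np.zeros((h, w, 3))
    for z in range(volume.shape[0]):  # iterate back to front
        rgb, alpha = transfer_function(volume[z])
        out = rgb * alpha[..., None] + out * (1.0 - alpha[..., None])
    return out

vol = np.zeros((4, 2, 2))
vol[1] = 1.0  # one fully 'dense' slice inside the volume
image = composite_slices(vol)
```

In hardware the same blend is configured once (e.g. source alpha / one-minus-source alpha) and the GPU performs it per fragment while drawing the textured planes.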
To get maximum performance from the different 3D graphics cards that are available, you can use different graphics libraries. For PCs there are two low-level graphics libraries to choose from: OpenGL and DirectX. OpenGL was developed by Silicon Graphics and is compatible with a number of different operating systems. DirectX is developed by Microsoft and is only supported on Microsoft Windows. For this thesis I chose OpenGL. OpenGL is a powerful software library that utilizes modern graphics hardware; through it you get access to most of the graphics card's functions. OpenGL is, however, a low-level library and demands considerable fundamental knowledge to visualize complex objects and scenes. I therefore take a closer look at how OpenGL works and at the theory on which it is built.
I also look at the opportunities, advantages, and drawbacks of voxel rendering, and at the requirements the hardware must meet to solve these tasks efficiently. I conclude the thesis by comparing my OpenGL voxel renderer with other available voxel renderers, such as The Visualization Toolkit (VTK).
VolumeEVM: A new surface/volume integrated model
Volume visualization is a very active research area in the field of scientific visualization. The Extreme Vertices Model (EVM) has proven to be a complete intermediate model for visualizing and manipulating volume data using a surface rendering approach. However, integrating the advantages of surface rendering with the superior visual exploration offered by volume rendering would produce a very complete visualization and editing system for volume data. We therefore define an enhanced EVM-based model that incorporates the volumetric information required to achieve a nearly direct volume visualization technique. Thus, VolumeEVM was designed to maintain the same EVM-based data structure plus a sorted list of density values corresponding to the interior voxels of the EVM-based VoIs. It was mandatory to define a function relating the interior voxels of the EVM to the set of densities. This report presents the definition of this new surface/volume integrated model, based on the well-known EVM encoding, and proposes implementations of the main software-based direct volume rendering techniques through the proposed model.
Fast and robust curve skeletonization for real-world elongated objects
We consider the problem of extracting curve skeletons of three-dimensional,
elongated objects given a noisy surface, which has applications in agricultural
contexts such as extracting the branching structure of plants. We describe an
efficient and robust method based on breadth-first search that can determine
curve skeletons in these contexts. Our approach is capable of automatically
detecting junction points as well as spurious segments and loops. All of that
is accomplished with only one user-adjustable parameter. The run time of our
method ranges from hundreds of milliseconds to less than four seconds on large,
challenging datasets, which makes it appropriate for situations where real-time
decision making is needed. Experiments on synthetic models as well as on data
from real world objects, some of which were collected in challenging field
conditions, show that our approach compares favorably to classical thinning
algorithms as well as to recent contributions to the field. (47 pages; IEEE WACV 2018, main paper and supplementary material)
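The breadth-first-search core of such a method can be illustrated on a toy voxel grid. The paper's actual algorithm additionally detects junctions, spurious segments, and loops; the sketch below (hypothetical helper names, 6-connectivity assumed) only shows how a BFS over occupied voxels yields a center path for a simple elongated object:

```python
from collections import deque
import numpy as np

def bfs_distances(voxels, seed):
    """Breadth-first search over 6-connected occupied voxels; returns hop
    distance from the seed and the BFS predecessor of each voxel."""
    occupied = set(map(tuple, np.argwhere(voxels)))
    dist, parent = {seed: 0}, {seed: None}
    queue = deque([seed])
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
            n = (x + dx, y + dy, z + dz)
            if n in occupied and n not in dist:
                dist[n] = dist[(x, y, z)] + 1
                parent[n] = (x, y, z)
                queue.append(n)
    return dist, parent

def curve_skeleton(voxels, seed):
    """Trace back from the farthest voxel to the seed; for a tube-like
    object this path approximates the curve skeleton."""
    dist, parent = bfs_distances(voxels, seed)
    node = max(dist, key=dist.get)
    path = []
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]

# a 1x1x5 'tube' of occupied voxels
tube = np.zeros((1, 1, 5), dtype=bool)
tube[0, 0, :] = True
skeleton = curve_skeleton(tube, (0, 0, 0))
```

For branching objects the same BFS distance field can be segmented into levels whose connected components reveal junctions, which is closer in spirit to what the paper describes.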
Weakly supervised 3D Reconstruction with Adversarial Constraint
Supervised 3D reconstruction has witnessed significant progress through the
use of deep neural networks. However, this increase in performance requires
large scale annotations of 2D/3D data. In this paper, we explore inexpensive 2D
supervision as an alternative for expensive 3D CAD annotation. Specifically, we
use foreground masks as weak supervision through a raytrace pooling layer that
enables perspective projection and backpropagation. Additionally, since the 3D
reconstruction from masks is an ill-posed problem, we propose to constrain the
3D reconstruction to the manifold of unlabeled realistic 3D shapes that match
mask observations. We demonstrate that learning a log-barrier solution to this
constrained optimization problem resembles the GAN objective, enabling the use
of existing tools for training GANs. We evaluate and analyze the manifold
constrained reconstruction on various datasets for single and multi-view
reconstruction of both synthetic and real images.
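The mask-supervision signal can be made concrete with a minimal sketch. The paper's raytrace pooling layer handles perspective projection differentiably; the toy below (invented names, orthographic rays, plain NumPy) only shows the idea that a predicted occupancy grid is projected to a silhouette and compared against a foreground mask:

```python
import numpy as np

def silhouette(occupancy, axis=0):
    """Orthographic silhouette of a soft occupancy grid: a pixel is covered
    if any voxel along its ray is occupied (max over the ray)."""
    return occupancy.max(axis=axis)

def mask_loss(occupancy, target_mask, axis=0):
    """Mean squared error between the projected silhouette and a 2D
    foreground mask -- the weak supervision signal."""
    return float(((silhouette(occupancy, axis) - target_mask) ** 2).mean())

# a 2x2x2 occupancy grid with a single occupied voxel, and the mask it implies
grid = np.zeros((2, 2, 2))
grid[0, 1, 1] = 1.0
mask = np.array([[0.0, 0.0], [0.0, 1.0]])
loss = mask_loss(grid, mask)  # → 0.0 for a consistent reconstruction
```

Because many occupancy grids project to the same silhouette, this loss alone is under-constrained, which is exactly why the paper adds the adversarial manifold constraint.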
Reconstruction of hidden 3D shapes using diffuse reflections
We analyze multi-bounce propagation of light in an unknown hidden volume and
demonstrate that the reflected light contains sufficient information to recover
the 3D structure of the hidden scene. We formulate the forward and inverse
theory of secondary and tertiary scattering reflection using ideas from energy
front propagation and tomography. We show that using careful choice of
approximations, such as Fresnel approximation, greatly simplifies this problem
and the inversion can be achieved via a backpropagation process. We provide a
theoretical analysis of the invertibility, uniqueness and choices of
space-time-angle dimensions using synthetic examples. We show that a 2D streak
camera can be used to discover and reconstruct hidden geometry. Using a 1D high
speed time-of-flight camera, we show that our method can be used to recover 3D
shapes of objects "around the corner".
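The inversion-via-backprojection idea can be shown in a deliberately simplified form. The paper works with streak-camera measurements, multi-bounce light, and Fresnel approximations; the 2D toy below (all names and the setup invented for illustration) assumes single-bounce travel times and simply lets each sensor's arrival time vote for consistent scene points:

```python
import numpy as np

def backproject(sensors, times, grid, c=1.0, tol=1e-6):
    """Each (sensor, arrival-time) measurement votes for every candidate
    point whose distance from that sensor matches the travel time; hidden
    geometry accumulates votes from all sensors."""
    votes = np.zeros(len(grid))
    for s, t in zip(sensors, times):
        d = np.linalg.norm(grid - s, axis=1)
        votes += (np.abs(d - c * t) < tol).astype(float)
    return votes

# toy 2D setup: hidden point at (1, 1), three sensors on the x axis
hidden = np.array([1.0, 1.0])
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
times = np.linalg.norm(sensors - hidden, axis=1)  # simulated arrival times
grid = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
votes = backproject(sensors, times, grid)
recovered = grid[np.argmax(votes)]  # the hidden point collects the most votes
```

The real inverse problem replaces this exhaustive voting with the tomography-style backpropagation the abstract describes, but the consistency principle is the same.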
Hierarchical Surface Prediction for 3D Object Reconstruction
Recently, Convolutional Neural Networks have shown promising results for 3D
geometry prediction. They can make predictions from very little input data such
as a single color image. A major limitation of such approaches is that they
only predict a coarse resolution voxel grid, which does not capture the surface
of the objects well. We propose a general framework, called hierarchical
surface prediction (HSP), which facilitates prediction of high resolution voxel
grids. The main insight is that it is sufficient to predict high resolution
voxels around the predicted surfaces. The exterior and interior of the objects
can be represented with coarse resolution voxels. Our approach is not dependent
on a specific input type. We show results for geometry prediction from color
images, depth images and shape completion from partial voxel grids. Our
analysis shows that our high resolution predictions are more accurate than low
resolution predictions. (3DV 201)
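The key insight, predicting at high resolution only near the surface, can be illustrated without the CNN. HSP itself uses an octree-style hierarchical decoder; the sketch below (invented names and thresholds) only shows the selection rule: cells whose coarse occupancy probability is ambiguous are flagged for refinement, while confident interior/exterior cells are merely upsampled:

```python
import numpy as np

def refine_near_surface(coarse, lo=0.2, hi=0.8):
    """Upsample a coarse occupancy grid 2x per axis, and flag the cells
    whose probability is ambiguous (i.e. near the surface) as the only
    ones that would be re-predicted at high resolution."""
    fine = np.repeat(np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1), 2, axis=2)
    needs_refinement = (coarse > lo) & (coarse < hi)
    return fine, needs_refinement

# coarse 2x1x2 grid: empty, ambiguous, full, and nearly full cells
coarse = np.array([[[0.0, 0.5]], [[1.0, 0.9]]])
fine, refine_mask = refine_near_surface(coarse)
```

Since surface area grows much more slowly than volume, refining only these ambiguous cells is what makes high-resolution grids tractable.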
Design of a multimodal rendering system
This paper addresses the rendering of aligned regular multimodal
datasets. It presents a general framework of multimodal data fusion
that includes several data merging methods. We also analyze the
requirements of a rendering system able to provide these different
fusion methods. On the basis of these requirements, we propose a novel
design for a multimodal rendering system. The design has been
implemented and has proven to be efficient and flexible.
Visualization and Correction of Automated Segmentation, Tracking and Lineaging from 5-D Stem Cell Image Sequences
Results: We present an application that enables the quantitative analysis of
multichannel 5-D (x, y, z, t, channel) and large montage confocal fluorescence
microscopy images. The image sequences show stem cells together with blood
vessels, enabling quantification of the dynamic behaviors of stem cells in
relation to their vascular niche, with applications in developmental and cancer
biology. Our application automatically segments, tracks, and lineages the image
sequence data and then allows the user to view and edit the results of
automated algorithms in a stereoscopic 3-D window while simultaneously viewing
the stem cell lineage tree in a 2-D window. Using the GPU to store and render
the image sequence data enables a hybrid computational approach. An
inference-based approach utilizing user-provided edits to automatically correct
related mistakes executes interactively on the system CPU while the GPU handles
3-D visualization tasks. Conclusions: By exploiting commodity computer gaming
hardware, we have developed an application that can be run in the laboratory to
facilitate rapid iteration through biological experiments. There is a pressing
need for visualization and analysis tools for 5-D live cell image data. We
combine accurate unsupervised processes with an intuitive visualization of the
results. Our validation interface allows for each data set to be corrected to
100% accuracy, ensuring that downstream data analysis is accurate and
verifiable. Our tool is the first to combine all of these aspects, leveraging
the synergies obtained by utilizing validation information from stereo
visualization to improve the low-level image processing tasks. (BioVis 2014 conference)