Using high resolution displays for high resolution cardiac data
The ability to perform fast, accurate, high resolution visualization is fundamental
to improving our understanding of anatomical data. As the volumes of data
increase from improvements in scanning technology, the methods applied to rendering
and visualization must evolve. In this paper we address the interactive display of
data from high resolution MRI scanning of a rabbit heart and subsequent histological
imaging. We describe a visualization environment involving a tiled LCD panel
display wall and associated software which provide an interactive and intuitive user
interface.
The oView software is an OpenGL application which is written for the VRJuggler
environment. This environment abstracts displays and devices away from the
application itself, aiding portability between different systems, from desktop PCs to
multi-tiled display walls. Portability between display walls has been demonstrated
through its use on walls at both Leeds and Oxford Universities. We discuss important
factors to be considered for interactive 2D display of large 3D datasets,
including the use of intuitive input devices and level-of-detail techniques.
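The level-of-detail consideration can be illustrated with a minimal sketch. This is a generic mip-level selection scheme under assumed names (`choose_lod` and its parameters are hypothetical; the abstract does not describe oView's actual LOD logic): pick the coarsest precomputed resolution level of the dataset that still meets the pixel density of the display.

```python
import math

def choose_lod(dataset_px_per_mm, screen_px_per_mm):
    """Pick the coarsest mip level whose resolution still matches
    the on-screen pixel density. Each level halves the resolution
    of the one below it (level 0 = full resolution).

    Hypothetical helper for illustration, not the oView implementation.
    """
    ratio = dataset_px_per_mm / max(screen_px_per_mm, 1e-9)
    return max(0, int(math.floor(math.log2(ratio))))

# A 40 px/mm histology scan viewed at 5 px/mm on the wall:
# the data is 8x denser than needed, so level 3 suffices.
print(choose_lod(40, 5))   # prints 3
# Zoomed in past native resolution, stay at full detail:
print(choose_lod(40, 60))  # prints 0
```

The point of such a scheme is that a tiled display wall never needs to stream more voxels than its combined panels can actually show.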
CNN-based Real-time Dense Face Reconstruction with Inverse-rendered Photo-realistic Face Images
With the power of convolutional neural networks (CNNs), CNN-based face
reconstruction has recently shown promising performance in reconstructing
detailed face shape from 2D face images. The success of CNN-based methods
relies on a large amount of labeled data. The state-of-the-art synthesizes such
data using a coarse morphable face model, which, however, has difficulty
generating detailed photo-realistic images of faces (with wrinkles). This paper
presents a novel face data generation method. Specifically, we render a large
number of photo-realistic face images with different attributes based on
inverse rendering. Furthermore, we construct a fine-detailed face image dataset
by transferring different scales of details from one image to another. We also
construct a large number of video-type adjacent frame pairs by simulating the
distribution of real video data. With these nicely constructed datasets, we
propose a coarse-to-fine learning framework consisting of three convolutional
networks. The networks are trained for real-time detailed 3D face
reconstruction from monocular video as well as from a single image. Extensive
experimental results demonstrate that our framework can produce high-quality
reconstruction but with much less computation time compared to the
state-of-the-art. Moreover, our method is robust to pose, expression and
lighting due to the diversity of data.
Comment: Accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence, 201
Proof of concept of a workflow methodology for the creation of basic canine head anatomy veterinary education tool using augmented reality
Neuroanatomy can be challenging to both teach and learn within the undergraduate veterinary medicine and surgery curriculum. Traditional techniques have been used for many years, but there has now been a progression towards alternative digital and interactive 3D models to engage the learner. However, digital innovations in teaching have typically involved the medical curriculum rather than the veterinary curriculum. Therefore, we aimed to create a simple workflow methodology that demonstrates how straightforward it can be to build a mobile augmented reality application of basic canine head anatomy. Using canine CT and MRI scans and widely available software programs, we show how to create an interactive model of head anatomy. This was applied to augmented reality on a popular Android mobile device to demonstrate the user-friendly interface. Here we present the processes, challenges and resolutions involved in creating a highly accurate, data-based anatomical model that could potentially be used in the veterinary curriculum. This proof-of-concept study provides an excellent framework for the creation of augmented reality training products for veterinary education. The lack of similar resources within this field provides the ideal platform to extend this work into other areas of veterinary education and beyond.
Frequency Analysis of Gradient Estimators in Volume Rendering
Gradient information is used in volume rendering to classify and color samples along a ray. In this paper, we present an analysis of the theoretically ideal gradient estimator and compare it to some commonly used gradient estimators. A new method is presented to calculate the gradient at arbitrary sample positions, using the derivative of the interpolation filter as the basis for the new gradient filter. As an example, we discuss the use of the derivative of the cubic spline. Comparisons with several other methods are demonstrated. Computational efficiency can be realized since parts of the interpolation computation can be leveraged in the gradient estimation.
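The core idea, estimating the gradient with the derivative of the interpolation filter, can be sketched in 1D. This is an illustrative example using the cubic B-spline kernel (one common choice of cubic spline; the function names are our own, not the paper's code): reconstructing the signal uses the kernel itself, while the gradient reuses the same sample weights structure with the kernel's analytic derivative.

```python
import numpy as np

def bspline3(t):
    """Cubic B-spline interpolation kernel (support [-2, 2])."""
    t = abs(t)
    if t < 1:
        return (4 - 6 * t**2 + 3 * t**3) / 6
    if t < 2:
        return (2 - t)**3 / 6
    return 0.0

def bspline3_deriv(t):
    """Analytic derivative of the cubic B-spline kernel."""
    s = np.sign(t)
    t = abs(t)
    if t < 1:
        return s * (-12 * t + 9 * t**2) / 6
    if t < 2:
        return s * (-3 * (2 - t)**2) / 6
    return 0.0

def interp(samples, x):
    """Reconstruct the signal at position x from discrete samples."""
    i0 = int(np.floor(x))
    return sum(samples[i] * bspline3(x - i)
               for i in range(i0 - 1, i0 + 3)
               if 0 <= i < len(samples))

def gradient(samples, x):
    """Gradient filter = derivative of the interpolation filter:
    same neighborhood and weights structure as interp(), so the
    index and offset computations can be shared between the two."""
    i0 = int(np.floor(x))
    return sum(samples[i] * bspline3_deriv(x - i)
               for i in range(i0 - 1, i0 + 3)
               if 0 <= i < len(samples))

# Sanity check on f(x) = x^2, whose derivative is 2x:
xs = np.arange(16, dtype=float)
f = xs**2
print(gradient(f, 7.5))  # prints 15.0
```

In 3D volume data the same construction applies per axis (kernel derivative along one axis, the plain kernel along the other two), which is where sharing the interpolation computation pays off.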