A comparative study of breast surface reconstruction for aesthetic outcome assessment
Breast cancer is the most prevalent cancer type in women, and while its
survival rate is generally high, the aesthetic outcome is an increasingly
important factor when evaluating different treatment alternatives. 3D scanning
and reconstruction techniques offer a flexible tool for building detailed and
accurate 3D breast models that can be used both pre-operatively for surgical
planning and post-operatively for aesthetic evaluation. This paper aims at
comparing the accuracy of low-cost 3D scanning technologies with the
significantly more expensive state-of-the-art 3D commercial scanners in the
context of breast 3D reconstruction. We present results from 28 synthetic and
clinical RGBD sequences, including 12 unique patients and an anthropomorphic
phantom, demonstrating the applicability of low-cost RGBD sensors to real
clinical cases. Body deformation and homogeneous skin texture pose challenges
to the studied reconstruction systems. Although these should be addressed
appropriately if higher model quality is warranted, we observe that low-cost
sensors are able to obtain valuable reconstructions comparable to the
state-of-the-art within an error margin of 3 mm.Comment: This paper has been accepted to MICCAI201
Isotropic reconstruction of 3D fluorescence microscopy images using convolutional neural networks
Fluorescence microscopy images usually show severe anisotropy in axial versus
lateral resolution. This hampers downstream processing, e.g. the automatic
extraction of quantitative biological data. While deconvolution methods and
other techniques to address this problem exist, they are either time consuming
to apply or limited in their ability to remove anisotropy. We propose a method
to recover isotropic resolution from readily acquired anisotropic data. We
achieve this using a convolutional neural network that is trained end-to-end
from the same anisotropic body of data we later apply the network to. The
network effectively learns to restore the full isotropic resolution by
restoring the image under a trained, sample-specific image prior. We apply our
method to synthetic and real datasets and show that our results improve
on results from deconvolution and state-of-the-art super-resolution techniques.
Finally, we demonstrate that a standard 3D segmentation pipeline achieves
comparable accuracy on the output of our network as on the fully isotropic
data.
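The self-supervised idea described above can be sketched as follows: degrade laterally well-resolved slices to mimic the poor axial resolution, and use the (degraded, original) pairs to train a restoration network. The crude box-blur degradation and function names below are illustrative assumptions, not the paper's exact pipeline.

```python
# Hedged sketch of training-pair generation for isotropic restoration:
# simulate the axial degradation on high-resolution lateral slices, so a
# network can learn to invert it without separately acquired ground truth.
import numpy as np

def simulate_axial_degradation(lateral_slice, factor=4):
    """Blur and subsample one axis to mimic poor axial resolution."""
    h, w = lateral_slice.shape
    w_trim = w - (w % factor)
    trimmed = lateral_slice[:, :w_trim]
    # Average non-overlapping windows along one axis (crude PSF model;
    # the real method would use the microscope's measured PSF).
    return trimmed.reshape(h, w_trim // factor, factor).mean(axis=2)

def make_training_pair(lateral_slice, factor=4):
    """(degraded network input, high-resolution target) pair."""
    target = lateral_slice
    degraded = simulate_axial_degradation(lateral_slice, factor)
    # Nearest-neighbour upsampling back to target size as network input.
    network_input = np.repeat(degraded, factor, axis=1)[:, :target.shape[1]]
    return network_input, target

rng = np.random.default_rng(0)
slice_xy = rng.random((64, 64))
x, y = make_training_pair(slice_xy)
print(x.shape, y.shape)  # (64, 64) (64, 64)
```

The trained network is then applied along the axial direction of the real anisotropic stack, where the same degradation occurred physically.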
A preliminary approach to intelligent x-ray imaging for baggage inspection at airports
Identifying explosives in baggage at airports relies on being able to characterize the materials that make up an X-ray image. If a suspicion is generated during the imaging process (step 1), the image data could be enhanced by adapting the scanning parameters (step 2). This paper addresses the first part of this problem and uses textural signatures to recognize and characterize materials, thereby enabling system control. Directional Gabor-type filtering was applied to a series of different X-ray images. Images were processed in such a way as to simulate a line scanning geometry. Based on our experiments with images of industrial standards and our own samples, it was found that different materials could be characterized in terms of the frequency range and orientation of the filters. It was also found that the signal strength generated by the filters could be used as an indicator of visibility and optimum imaging conditions predicted.
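The directional Gabor-type filtering described above can be sketched as a small filter bank whose responses at several frequencies and orientations form a texture signature. This is an illustrative sketch under assumed parameters, not the authors' implementation.

```python
# Illustrative Gabor filter bank (not the paper's code): the per-filter
# response strength forms a texture signature, and the dominant response
# indicates the characteristic frequency and orientation of a region.
import numpy as np

def gabor_kernel(size, frequency, theta, sigma=None):
    """Real-valued Gabor kernel: Gaussian envelope times a cosine carrier."""
    sigma = sigma if sigma is not None else size / 6.0
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * frequency * x_rot)

def texture_signature(patch, frequencies, orientations, size=15):
    """Absolute filter response per (frequency, orientation) pair."""
    sig = np.zeros((len(frequencies), len(orientations)))
    for i, f in enumerate(frequencies):
        for j, th in enumerate(orientations):
            k = gabor_kernel(size, f, th)
            # One centred response for this sketch; a full version would
            # convolve over the whole image.
            sig[i, j] = abs(np.sum(patch[:size, :size] * k))
    return sig

# Toy patch: stripes varying along x at frequency 0.2 cycles/pixel should
# respond most to the matching frequency at orientation 0.
cols = np.arange(32)
patch = np.tile(np.cos(2 * np.pi * 0.2 * cols), (32, 1))
sig = texture_signature(patch, [0.1, 0.2], [0.0, np.pi / 2])
print(np.unravel_index(sig.argmax(), sig.shape))
```

In a line-scanning geometry, the same bank would be applied to successive scan lines, with the response strength serving as the visibility indicator mentioned above.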
Eye Tracker Accuracy: Quantitative Evaluation of the Invisible Eye Center Location
Purpose. We present a new method to evaluate the accuracy of an eye tracker
based eye localization system. Measuring the accuracy of an eye tracker's
primary intention, the estimated point of gaze, is usually done with volunteers
and a set of fixation points used as ground truth. However, verifying the
accuracy of the location estimate of a volunteer's eye center in 3D space is
not easily possible. This is because the eye center is an intangible point
hidden by the iris. Methods. We evaluate the eye location accuracy by using an
eye phantom instead of eyes of volunteers. For this, we developed a testing
stage with a realistic artificial eye and a corresponding kinematic model,
which we trained with µCT data. This enables us to precisely evaluate the
eye location estimate of an eye tracker. Results. We show that the proposed
testing stage with the corresponding kinematic model is suitable for such a
validation. Further, we evaluate a particular eye tracker based navigation
system and show that this system is able to successfully determine the eye
center with sub-millimeter accuracy. Conclusions. We show the suitability of
the evaluated eye tracker for eye interventions, using the proposed testing
stage and the corresponding kinematic model. The results further enable
targeted enhancements of the navigation system to achieve potentially even
better results.