Towards segmentation and spatial alignment of the human embryonic brain using deep learning for atlas-based registration
We propose an unsupervised deep learning method for atlas-based registration to achieve segmentation and spatial alignment of the embryonic brain in a single framework. Our approach consists of two sequential networks with a specifically designed loss function to address the challenges of 3D first-trimester ultrasound. The first network learns the affine transformation and the second learns the voxelwise nonrigid deformation between the target image and the atlas. We trained this network end-to-end and validated it against a ground truth on synthetic datasets designed to resemble the challenges present in 3D first-trimester ultrasound. The method was tested on a dataset of human embryonic ultrasound volumes acquired at 9 weeks gestational age, which showed alignment of the brain in some cases and gave insight into open challenges for the proposed method. We conclude that our method is a promising approach towards fully automated spatial alignment and segmentation of embryonic brains in 3D ultrasound.
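The abstract above describes a two-stage composition: a global affine transform followed by a voxelwise nonrigid displacement. As a minimal illustrative sketch (not the authors' network; all names are hypothetical, and a learned model is replaced here by plain functions), the composition of the two stages on a single 3D point looks like this:

```python
# Hypothetical sketch of the two-stage alignment: stage 1 applies a global
# affine transform, stage 2 adds a local (voxelwise) nonrigid displacement.

def apply_affine(point, matrix, translation):
    """Map a 3D point through an affine transform (stage 1)."""
    x, y, z = point
    return tuple(
        matrix[i][0] * x + matrix[i][1] * y + matrix[i][2] * z + translation[i]
        for i in range(3)
    )

def apply_nonrigid(point, displacement_field):
    """Add the local displacement looked up for this point (stage 2)."""
    dx, dy, dz = displacement_field(point)
    return (point[0] + dx, point[1] + dy, point[2] + dz)

def register(point, matrix, translation, displacement_field):
    """Compose the two stages: affine first, then nonrigid refinement."""
    return apply_nonrigid(apply_affine(point, matrix, translation),
                          displacement_field)

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
shift = (1.0, 0.0, 0.0)
warp = lambda p: (0.0, 0.5, 0.0)  # toy constant displacement field
print(register((2.0, 3.0, 4.0), identity, shift, warp))  # (3.0, 3.5, 4.0)
```

In the paper the affine parameters and the displacement field are both predicted by networks; here they are fixed inputs purely to show the order of composition.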
A surgical system for automatic registration, stiffness mapping and dynamic image overlay
In this paper we develop a surgical system using the da Vinci research kit
(dVRK) that is capable of autonomously searching for tumors and dynamically
displaying the tumor location using augmented reality. Such a system has the
potential to quickly reveal the location and shape of tumors and visually
overlay that information to reduce the cognitive overload of the surgeon. We
believe that our approach is one of the first to incorporate state-of-the-art
methods in registration, force sensing and tumor localization into a unified
surgical system. First, the preoperative model is registered to the
intra-operative scene using a Bingham distribution-based filtering approach. An
active level set estimation is then used to find the location and the shape of
the tumors. We use a recently developed miniature force sensor to perform the
palpation. The estimated stiffness map is then dynamically overlaid onto the
registered preoperative model of the organ. We demonstrate the efficacy of our system by performing experiments on phantom prostate models with embedded stiff inclusions.

Comment: International Symposium on Medical Robotics (ISMR 2018)
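The stiffness-mapping step described above can be sketched in miniature: each palpated cell yields force/indentation samples, the local stiffness is a Hooke-like force-per-displacement slope, and a stiff inclusion appears as a cell with a higher value. This is an illustrative assumption of how such a map is built, not the authors' implementation (their system uses a Bingham-filter registration and active level set estimation on top of this).

```python
# Illustrative stiffness map from palpation samples (hypothetical names).

def estimate_stiffness(forces, depths):
    """Least-squares slope of force vs. indentation depth (Hooke-like)."""
    n = len(forces)
    mean_f = sum(forces) / n
    mean_d = sum(depths) / n
    num = sum((d - mean_d) * (f - mean_f) for d, f in zip(depths, forces))
    den = sum((d - mean_d) ** 2 for d in depths)
    return num / den

def stiffness_map(samples):
    """samples: {(row, col): (forces, depths)} -> {(row, col): stiffness}."""
    return {cell: estimate_stiffness(f, d) for cell, (f, d) in samples.items()}

soft = ([0.1, 0.2, 0.3], [1.0, 2.0, 3.0])    # ~0.1 N/mm: background tissue
stiff = ([0.5, 1.0, 1.5], [1.0, 2.0, 3.0])   # ~0.5 N/mm: likely inclusion
m = stiffness_map({(0, 0): soft, (0, 1): stiff})
```

Cells whose stiffness clearly exceeds the background level are the candidates that would be highlighted in the dynamic overlay.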
Grid simulation services for the medical community
The first part of this paper presents a selection of medical simulation applications, including image reconstruction, near real-time registration for neurosurgery, enhanced dose distribution calculation for radiotherapy, inhaled drug delivery prediction, plastic surgery planning and cardiovascular system simulation. The latter two topics are discussed in some detail. In the second part, we show how such services can be made available to the clinical practitioner using Grid technology. We discuss the developments and experience gained during the EU project GEMSS, which provides reliable, efficient, secure and lawful medical Grid services.
Towards automated visual flexible endoscope navigation
Background:
The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards a wider application of flexible endoscopes, with an increasing role in complex intraluminal therapeutic procedures. The nonintuitive and nonergonomic steering mechanism now forms a barrier to the extension of flexible endoscope applications. Automating the navigation of endoscopes could be a solution to this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research.
Methods:
A systematic literature search was performed using three general search terms in two medical–technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed. Ultimately, 26 were included.
Results:
Navigation is often based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date.
Conclusions:
Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.
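Of the two techniques the review names, lumen centralization is the simpler to sketch. Under the common assumption that the lumen appears as the darkest region of the endoscopic image, a minimal version locates the darkest pixel and steers the tip toward it relative to the image center. All names below are illustrative; a real system would work on smoothed regions, not single pixels, and must handle the artifacts the review mentions.

```python
# Minimal lumen-centralization sketch on a 2D intensity grid (hypothetical).

def lumen_center(image):
    """Return (row, col) of the darkest pixel, taken as the lumen."""
    best, best_val = None, float("inf")
    for r, row in enumerate(image):
        for c, val in enumerate(row):
            if val < best_val:
                best_val, best = val, (r, c)
    return best

def steering_command(image):
    """Offset from image center to the lumen: the correction to steer by."""
    rows, cols = len(image), len(image[0])
    lr, lc = lumen_center(image)
    return (lr - rows // 2, lc - cols // 2)

frame = [
    [200, 200, 200],
    [200, 180,  40],   # dark spot right of center
    [200, 200, 200],
]
print(steering_command(frame))  # (0, 1): steer one step to the right
```

A (0, 0) command means the lumen is already centered; anything else is the direction in which the endoscope tip should be deflected.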
3D Object Reconstruction from Hand-Object Interactions
Recent advances have enabled 3D object reconstruction approaches using a single off-the-shelf RGB-D camera. Although these approaches are successful for a wide range of object classes, they rely on stable and distinctive geometric or texture features. Many objects like mechanical parts, toys, household or decorative articles, however, are textureless and characterized by minimalistic shapes that are simple and symmetric. Existing in-hand scanning systems and 3D reconstruction techniques fail for such symmetric objects in the absence of highly distinctive features. In this work, we show that extracting 3D hand motion for in-hand scanning effectively facilitates the reconstruction of even featureless and highly symmetric objects, and we present an approach that fuses the rich additional information of hands into a 3D reconstruction pipeline, significantly contributing to the state of the art of in-hand scanning.

Comment: International Conference on Computer Vision (ICCV) 2015, http://files.is.tue.mpg.de/dtzionas/In-Hand-Scannin
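The core idea above is that when the object itself offers no usable features, the tracked rigid motion of the hand holding it supplies the frame-to-frame transform needed to merge scans. A hedged, much-simplified 2D sketch (hypothetical names, not the paper's pipeline): a known rotation and translation from hand tracking maps points observed in one frame into the next frame's coordinates before fusion.

```python
# Toy 2D rigid alignment using an externally supplied (hand-tracked) motion.
import math

def transform(points, angle_deg, translation):
    """Apply a rigid motion: rotation about the origin, then translation."""
    a = math.radians(angle_deg)
    ca, sa = math.cos(a), math.sin(a)
    tx, ty = translation
    return [(ca * x - sa * y + tx, sa * x + ca * y + ty) for x, y in points]

# Points seen in frame t; the inter-frame motion comes from the hand tracker,
# not from matching features on the (featureless) object itself.
frame_t = [(1.0, 0.0), (0.0, 1.0)]
aligned = transform(frame_t, 90.0, (0.0, 0.0))
```

In the actual system this prior would seed or constrain the alignment of full 3D depth frames; the point of the sketch is only that the transform originates from hand motion rather than object features.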