17,914 research outputs found
Learning Deep Similarity Metric for 3D MR-TRUS Registration
Purpose: The fusion of transrectal ultrasound (TRUS) and magnetic resonance
(MR) images for guiding targeted prostate biopsy has significantly improved the
biopsy yield of aggressive cancers. A key component of MR-TRUS fusion is image
registration. However, it is very challenging to obtain a robust automatic
MR-TRUS registration due to the large appearance difference between the two
imaging modalities. The work presented in this paper aims to tackle this
problem by addressing two challenges: (i) the definition of a suitable
similarity metric and (ii) the determination of a suitable optimization
strategy.
Methods: This work proposes the use of a deep convolutional neural network to
learn a similarity metric for MR-TRUS registration. We also use a composite
optimization strategy that explores the solution space in order to search for a
suitable initialization for the second-order optimization of the learned
metric. Further, a multi-pass approach is used in order to smooth the metric
for optimization.
Results: The learned similarity metric outperforms the classical mutual
information and also the state-of-the-art MIND feature-based methods. The
results indicate that the overall registration framework has a large capture
range. The proposed deep similarity metric based approach obtained a mean TRE
of 3.86 mm (with an initial TRE of 16 mm) for this challenging problem.
Conclusion: A similarity metric that is learned using a deep neural network
can be used to assess the quality of any given image registration and can be
used in conjunction with the aforementioned optimization framework to perform
automatic registration that is robust to poor initialization.Comment: To appear on IJCAR
Anatomically Constrained Video-CT Registration via the V-IMLOP Algorithm
Functional endoscopic sinus surgery (FESS) is a surgical procedure used to
treat acute cases of sinusitis and other sinus diseases. FESS is fast becoming
the preferred choice of treatment due to its minimally invasive nature.
However, due to the limited field of view of the endoscope, surgeons rely on
navigation systems to guide them within the nasal cavity. State-of-the-art
navigation systems report registration accuracies of over 1 mm, which is large
compared to the size of the nasal airways. We present an anatomically
constrained video-CT registration algorithm that incorporates multiple video
features. Our algorithm is robust in the presence of outliers. We also test our
algorithm on simulated and in-vivo data, and test its accuracy against
degrading initializations.
Comment: 8 pages, 4 figures, MICCA
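The abstract emphasizes robustness in the presence of outliers. A common way to obtain such robustness in rigid registration, shown here only as an illustrative sketch and not as the V-IMLOP algorithm itself, is to re-estimate the transform from a trimmed set of best-matching correspondences:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def trimmed_register(P, Q, keep=0.8, iters=10):
    """Re-estimate the rigid transform from only the best-matching fraction
    of correspondences, so gross outliers are discarded each round."""
    R, t = np.eye(3), np.zeros(3)
    k = int(keep * len(P))
    for _ in range(iters):
        residuals = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inliers = np.argsort(residuals)[:k]
        R, t = kabsch(P[inliers], Q[inliers])
    return R, t
```

With a few grossly corrupted correspondences, the trimming step keeps the estimate anchored to the consistent majority.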
A new mini-navigation tool allows accurate component placement during anterior total hip arthroplasty.
Introduction: Computer-assisted navigation systems have been explored in total hip arthroplasty (THA) to improve component positioning. While these systems traditionally rely on anterior pelvic plane registration, variances in soft tissue thickness overlying anatomical landmarks can lead to registration error, and the supine coronal plane has instead been proposed. The purpose of this study was to evaluate the accuracy of a novel navigation tool, using registration of the anterior pelvic plane or supine coronal plane during simulated anterior THA.
Methods: Benchtop phantoms and target measurement values commonly seen in surgery were used for analysis. Measurements of the acetabular component position (anteversion and inclination) and of changes in leg length and offset were recorded by the navigation tool and compared with the known target values of the simulation. Pearson's correlation was used to assess the relationship between measured and target values.
Results: The device accurately measured cup position and leg length to within 1° and 1 mm of the known target values, respectively. Across all simulations, there was a strong, positive relationship between the values obtained by the device and the known target values.
Conclusion: The preliminary findings of this study suggest that the novel navigation tool tested is a potentially viable means of improving the accuracy of component placement during THA using the anterior approach.
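The correlation analysis behind the reported device-vs-target relationship takes only a few lines; the paired measurement values below are invented for illustration, not the study's data:

```python
import numpy as np

# Hypothetical paired readings: navigation-tool measurements vs. the known
# simulation targets (values invented; the study reports agreement within
# 1 degree and 1 mm).
target = np.array([15.0, 20.0, 25.0, 30.0, 35.0, 40.0])  # e.g. inclination, deg
device = np.array([15.4, 19.8, 25.2, 29.7, 35.3, 39.9])

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D samples."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

r = pearson_r(target, device)  # close to 1 for a strong positive relationship
```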
An open environment CT-US fusion for tissue segmentation during interventional guidance.
Therapeutic ultrasound (US) can be noninvasively focused to activate drugs, ablate tumors and deliver drugs beyond the blood brain barrier. However, well-controlled guidance of US therapy requires fusion with a navigational modality, such as magnetic resonance imaging (MRI) or X-ray computed tomography (CT). Here, we developed and validated tissue characterization using a fusion between US and CT. The performance of the CT/US fusion was quantified by the calibration error, target registration error and fiducial registration error. Met-1 tumors in the fat pads of 12 female FVB mice provided a model of developing breast cancer with which to evaluate CT-based tissue segmentation. Hounsfield units (HU) within the tumor and surrounding fat pad were quantified, validated with histology and segmented for parametric analysis (fat: -300 to 0 HU, protein-rich: 1 to 300 HU, and bone: HU > 300). Our open source CT/US fusion system differentiated soft tissue, bone and fat with a spatial accuracy of ~1 mm. Region of interest (ROI) analysis of the tumor and surrounding fat pad using a 1 mm² ROI resulted in mean HU of 68 ± 44 within the tumor and -97 ± 52 within the fat pad adjacent to the tumor (p < 0.005). The tumor area measured by CT and histology was correlated (r² = 0.92), while the area designated as fat decreased with increasing tumor size (r² = 0.51). Analysis of CT and histology images of the tumor and surrounding fat pad revealed an average percentage of fat of 65.3% vs. 75.2%, 36.5% vs. 48.4%, and 31.6% vs. 38.5% for tumors <75 mm³, 75-150 mm³ and >150 mm³, respectively. Further, CT mapped bone-soft tissue interfaces near the acoustic beam during real-time imaging. Combined CT/US is a feasible method for guiding interventions by tracking the acoustic focus within a pre-acquired CT image volume and characterizing tissues proximal to and surrounding the acoustic focus.
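The parametric tissue segmentation uses the HU thresholds stated in the abstract (fat: -300 to 0 HU, protein-rich: 1 to 300 HU, bone: > 300 HU); a minimal sketch of that labeling step:

```python
import numpy as np

def segment_hu(hu):
    """Label voxels by the abstract's Hounsfield-unit ranges:
    fat -300..0 HU, protein-rich 1..300 HU, bone > 300 HU."""
    labels = np.full(hu.shape, "other", dtype=object)
    labels[(hu >= -300) & (hu <= 0)] = "fat"
    labels[(hu >= 1) & (hu <= 300)] = "protein-rich"
    labels[hu > 300] = "bone"
    return labels

# Values resembling the reported means: tumor ~68 HU, adjacent fat pad ~ -97 HU.
print(segment_hu(np.array([-97, 68, 500, -800])))
```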
Relational Reasoning Network (RRN) for Anatomical Landmarking
Accurately identifying anatomical landmarks is a crucial step in deformation
analysis and surgical planning for craniomaxillofacial (CMF) bones. Available
methods require segmentation of the object of interest for precise landmarking.
Unlike those, our purpose in this study is to perform anatomical landmarking
using the inherent relation of CMF bones without explicitly segmenting them. We
propose a new deep network architecture, called relational reasoning network
(RRN), to accurately learn the local and the global relations of the landmarks.
Specifically, we are interested in learning landmarks in CMF region: mandible,
maxilla, and nasal bones. The proposed RRN works in an end-to-end manner,
utilizing learned relations of the landmarks based on dense-block units and
without the need for segmentation. Given a few landmarks as input, the
proposed system accurately and efficiently localizes the remaining landmarks on
the aforementioned bones. For a comprehensive evaluation of RRN, we used
cone-beam computed tomography (CBCT) scans of 250 patients. The proposed system
identifies the landmark locations very accurately even when there are severe
pathologies or deformations in the bones. The proposed RRN has also revealed
unique relationships among the landmarks that allow us to reason about the
informativeness of the landmark points. RRN is invariant to the order of the
landmarks, and it allowed us to discover the optimal configurations (number and
location) of the landmarks to be localized within the object of interest
(mandible) or nearby objects (maxilla and nasal bones). To the best of our
knowledge, this is the first algorithm of its kind to find anatomical relations
of the objects using deep learning.
Comment: 10 pages, 6 figures, 3 tables
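As a toy illustration of the underlying idea, that inter-landmark relations can localize a missing landmark from a few given ones, the sketch below uses fixed mean pairwise offsets in place of the learned dense-block relations. The five-landmark `base` layout and the noise model are invented, and the real RRN learns far richer relations than a mean offset.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "training" subjects: five landmarks sharing a layout, plus noise.
base = np.array([[0, 0, 0], [10, 0, 0], [10, 8, 0], [0, 8, 0], [5, 4, 6]], float)
subjects = base + rng.normal(scale=0.3, size=(100, 5, 3))

# "Learned" relation: mean offset from landmark i to landmark j.
mean_offsets = (subjects[:, None, :, :] - subjects[:, :, None, :]).mean(axis=0)

def predict_missing(known_idx, known_pos, target_idx):
    """Average x_i + offset(i -> target) over the given landmarks."""
    preds = [p + mean_offsets[i, target_idx]
             for i, p in zip(known_idx, known_pos)]
    return np.mean(preds, axis=0)

# Given landmarks 0-3 of a new subject, localize landmark 4.
new = base + rng.normal(scale=0.3, size=(5, 3))
estimate = predict_missing([0, 1, 2, 3], new[:4], 4)
```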
Image-guided port placement for minimally invasive cardiac surgery
Minimally invasive surgery is becoming popular for a number of interventions. Use of robotic surgical systems in coronary artery bypass intervention offers many benefits to patients, but is however limited by remaining challenges in port placement. Choosing the entry ports for the robotic tools has a large impact on the outcome of the surgery, and can be assisted by pre-operative planning and intra-operative guidance techniques. In this thesis, pre-operative 3D computed tomography (CT) imaging is used to plan minimally invasive robotic coronary artery bypass (MIRCAB) surgery. From a patient database, port placement optimization routines are implemented and validated. Computed port placement configurations approximated past expert-chosen configurations with an error of 13.7 ± 5.1 mm. Following optimization, statistical classification was used to assess patient candidacy for MIRCAB. Various pattern recognition techniques were used to predict MIRCAB success, and could be used in the future to reduce conversion rates to conventional open-chest surgery. Gaussian, Parzen window, and nearest neighbour classifiers all proved able to detect “candidate” and “non-candidate” MIRCAB patients. Intra-operative registration and laser projection of port placements were validated on a phantom and then evaluated in four patient cases. An image-guided laser projection system was developed to map port placement plans from pre-operative 3D images. Port placement mappings on the phantom setup were accurate with an error of 2.4 ± 0.4 mm. In the patient cases, projections remained within 1 cm of computed port positions. Misregistered port placement mappings in human trials were due mainly to the rigid-body registration assumption and can be improved by non-rigid techniques. Overall, this work presents an integrated approach for: 1) pre-operative port placement planning and classification of incoming MIRCAB patients; and 2) intra-operative guidance of port placement.
Effective translation of these techniques to the clinic will enable MIRCAB as a more efficacious and accessible procedure.
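Of the three classifiers compared, the nearest-neighbour rule is the simplest to sketch. The two-dimensional feature vectors and labels below are hypothetical stand-ins for the thesis' patient features, not its data:

```python
import numpy as np

# Hypothetical features summarizing port-placement optimization results for
# past patients (invented numbers); labels: 1 = MIRCAB candidate, 0 = not.
X_train = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.3],
                    [4.0, 5.0], [4.5, 4.8], [3.8, 5.4]])
y_train = np.array([1, 1, 1, 0, 0, 0])

def nearest_neighbour(x):
    """1-NN rule: label a new patient by the closest training example."""
    distances = np.linalg.norm(X_train - x, axis=1)
    return int(y_train[np.argmin(distances)])
```

The Gaussian and Parzen-window classifiers mentioned in the abstract replace the hard nearest-example vote with parametric and kernel density estimates, respectively.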
A Multi-Robot Cooperation Framework for Sewing Personalized Stent Grafts
This paper presents a multi-robot system for manufacturing personalized
medical stent grafts. The proposed system adopts a modular design, which
includes: a (personalized) mandrel module, a bimanual sewing module, and a
vision module. The mandrel module incorporates the personalized geometry of
patients, while the bimanual sewing module adopts a learning-by-demonstration
approach to transfer human hand-sewing skills to the robots. The human
demonstrations were firstly observed by the vision module and then encoded
using a statistical model to generate the reference motion trajectories. During
autonomous robot sewing, the vision module plays the role of coordinating
multi-robot collaboration. Experiment results show that the robots can adapt to
generalized stent designs. The proposed system can also be used for other
manipulation tasks, especially for flexible production of customized products
and where bimanual or multi-robot cooperation is required.
Comment: 10 pages, 12 figures, accepted by IEEE Transactions on Industrial Informatics. Keywords: modularity, medical device customization, multi-robot system, robot learning, visual servoing, robot sewing
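As a simplified stand-in for encoding human demonstrations into reference motion trajectories, the sketch below averages several noisy synthetic demonstrations that are already resampled to a common length. The stitch path is invented, and the paper's statistical model (learned from vision-module observations) is richer than a plain time-aligned mean.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
# Invented "ideal" stitch path and eight noisy demonstrations of it (real
# demonstrations would come from the vision module and need alignment first).
ideal = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)
demos = ideal + rng.normal(scale=0.05, size=(8, 50, 2))

# Simplest statistical encoding: the time-aligned mean trajectory, used as
# the reference motion for autonomous sewing.
reference = demos.mean(axis=0)
```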
Development of a robot-based autonomous mandibular bone-harvesting osteotomy method and evaluation of its accuracy
Thesis (Ph.D.) -- Graduate School of Seoul National University, School of Dentistry, Department of Dental Science, February 2019. Advisor: 김성민.
Objectives: An autonomous robot osteotomy system using direct coordinate determination was developed in this study. Registration accuracy was evaluated by measuring the fiducial localization error (FLE) and the target registration error (TRE), and the accuracy of the designed osteotomy method along a preprogrammed plan was assessed. Furthermore, the accuracy of the robotic osteotomy was compared with that of a manual osteotomy with regard to cut position, length, angle and depth.
Methods: A lightweight robot with an electric gripper was used in this study. A direct coordinate determination method, using three points on the teeth, was developed for registration and for determination of the FLE and TRE, as measured on a mandible model. Sixteen landmarks on the mandible were prepared with holes and zirconia beads, and the TRE was computed in ten repeated measurements using the robot. The same three-point direct coordinate determination was used to register twenty stone models (7 cm x 7 cm x 3 cm). The osteotomy line was designed to match the ramal bone graft (2 cm x 1 cm x 0.5 cm). To evaluate accuracy, we measured position (how accurately the robot arm is located), length (how accurately the robot arm moves while cutting), angle (the angle at which the robot arm is located), and depth (the depth of the disc cut) errors. Sixteen mandible phantoms were used to simulate the osteotomy for the ramal bone graft. An image of each phantom was obtained by three-dimensional camera scanning, and a virtual ramal bone graft was designed with computer software. To evaluate accuracy and precision, the mandible phantoms were scanned with cone-beam computed tomography (CBCT). Cut position, length, angle and depth errors were measured, and the results of the robotic surgery were compared with those of manual surgery.
Results: The mean FLE was 0.84 ± 0.38 mm, and the third reference point, which detected the lingual fossa of the right second molar, had a larger error than the other reference points. The mean TRE was 1.69 ± 0.82 mm, with significant differences between the anterior body, posterior body, and coronoid/condyle groups. Landmarks at the anterior body had the lowest TRE (0.96 ± 0.47 mm) and landmarks on the coronoid and condyle had the highest TRE (2.12 ± 0.99 mm). An autonomous robot osteotomy with direct coordinate determination using three points was successfully achieved. In the stone-model ramal bone graft (RBG) osteotomy, the absolute mean errors were 0.77 ± 0.32 mm for the posterior cut, 0.82 ± 0.43 mm for the anterior cut, 0.76 ± 0.38 mm for the inferior cut, and 1.37 ± 0.83 mm for the superior cut. The absolute mean osteotomy errors for position, length, angle, and depth were 0.93 ± 0.45 mm, 0.81 ± 0.34 mm, 1.26 ± 1.35°, and 1.19 ± 0.73 mm, respectively. The position and length errors were significantly lower than the angle and depth errors. Comparing robotic and manual surgery, there were significant differences in absolute mean value and variance in all categories. For the robotic surgery, the cut position, length, angle and depth errors were 0.70 ± 0.34 mm, 0.35 ± 0.19 mm, 1.32 ± 0.96° and 0.59 ± 0.46 mm, respectively. For the manual surgery, they were 1.83 ± 0.65 mm, 0.62 ± 0.37 mm, 5.96 ± 3.47° and 0.40 ± 0.31 mm, respectively. The robotic surgery had significantly higher accuracy and lower variance for the cut position, length and angle errors; its depth error, however, had a significantly higher absolute mean value and variance than that of the manual surgery.
Conclusions: An autonomous robot osteotomy scheme was developed using direct coordinate determination by three points on the teeth, and it proved to be an accurate method for registration. The incisal edge or buccal pit of the teeth were more suitable reference points than the fossa of the teeth. The measured RMS of the TRE increased as the target moved away from the reference points. The robotic surgery showed high accuracy and precision in positioning but somewhat lower accuracy in controlling the depth of the disc sawing. Comparing robotic and manual surgery, the robotic surgery was superior in accuracy and precision for position, length and angle; however, the manual surgery had higher accuracy and precision in depth.
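The direct coordinate determination by three points maps naturally to a Gram-Schmidt frame construction: the first point is the origin, the second fixes the x-axis, and the third fixes the xy-plane. A minimal sketch (the example point coordinates are hypothetical):

```python
import numpy as np

def frame_from_three_points(p0, p1, p2):
    """Coordinate frame from three reference points, as in the thesis'
    registration: p0 is the origin, p0->p1 defines the x-axis, and p2 fixes
    the xy-plane (Gram-Schmidt orthonormalization)."""
    x = p1 - p0
    x = x / np.linalg.norm(x)
    v = p2 - p0
    y = v - (v @ x) * x              # project out the x-component
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)               # completes a right-handed frame
    R = np.column_stack([x, y, z])   # model axes in robot coordinates
    return R, p0

def to_model_coords(q, R, origin):
    """Express a robot-frame point q in the model frame."""
    return R.T @ (q - origin)

# Hypothetical tooth reference points measured in the robot frame (mm).
R, origin = frame_from_three_points(np.array([0.0, 0.0, 0.0]),
                                    np.array([30.0, 0.0, 0.0]),
                                    np.array([10.0, 20.0, 5.0]))
```

Once R and the origin are known, preplanned cut coordinates in the model frame can be transformed into robot coordinates (the inverse mapping, `origin + R @ q_model`).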
- β¦