Automated quantification and evaluation of motion artifact on coronary CT angiography images
Abstract

Purpose
This study developed and validated a Motion Artifact Quantification algorithm to automatically quantify the severity of motion artifacts on coronary computed tomography angiography (CCTA) images. The algorithm was then used to develop a Motion IQ Decision method to automatically identify whether a CCTA dataset is of sufficient diagnostic image quality or requires further correction.

Method
The Motion Artifact Quantification algorithm includes steps to identify the right coronary artery (RCA) regions of interest (ROIs), to segment the vessel and shading artifacts, and to calculate the motion artifact score (MAS) metric. The segmentation algorithms were verified against ground-truth manual segmentations, and additionally by comparing the MAS calculated from ground-truth segmentations with the MAS calculated from algorithm-generated segmentations. The Motion IQ Decision algorithm first identifies slices with unsatisfactory image quality using a MAS threshold. It then uses an artifact-length threshold to determine whether the degraded vessel segment is long enough to render the dataset nondiagnostic. An observer study on 30 clinical CCTA datasets provided the ground-truth decisions on whether each dataset was of sufficient image quality. Five-fold cross-validation was used to identify the thresholds and to evaluate the Motion IQ Decision algorithm.

Results
The automated segmentation algorithms in the Motion Artifact Quantification algorithm achieved Dice coefficients of 0.84 for the segmented vessel regions and 0.75 for the segmented shading-artifact regions. The MAS calculated using the automated algorithm was within 10% of the values obtained using ground-truth segmentations. ROC analysis determined the MAS threshold and artifact-length threshold to be 0.6 and 6.25 mm in all folds. The Motion IQ Decision algorithm demonstrated 100% sensitivity, 66.7% ± 27.9% specificity, and a total accuracy of 86.7% ± 12.5% for identifying datasets in which the RCA required correction. It demonstrated 91.3% sensitivity, 71.4% specificity, and a total accuracy of 86.7% for identifying CCTA datasets that need correction for any of the three main vessels.

Conclusion
The Motion Artifact Quantification algorithm calculated accurate
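The two-threshold decision rule described in the Method can be sketched as follows. This is a minimal illustration, assuming per-slice MAS values and uniform slice spacing; the function and parameter names are illustrative, not taken from the paper's implementation.

```python
def motion_iq_decision(mas_per_slice, slice_spacing_mm,
                       mas_threshold=0.6, length_threshold_mm=6.25):
    """Flag a CCTA dataset as needing motion correction when the longest
    contiguous run of degraded slices (MAS above threshold) spans more than
    the artifact-length threshold. Default thresholds are the values the
    paper's ROC analysis identified (0.6 and 6.25 mm)."""
    longest = run = 0
    for mas in mas_per_slice:
        run = run + 1 if mas > mas_threshold else 0
        longest = max(longest, run)
    return longest * slice_spacing_mm > length_threshold_mm
```

For example, at 0.625 mm slice spacing, a run of 12 degraded slices spans 7.5 mm and is flagged as nondiagnostic, while a single degraded slice is not.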
Evaluation of Motion Artifact Metrics for Coronary CT Angiography
Purpose
This study quantified the performance of coronary artery motion artifact metrics relative to human observer ratings. Motion artifact metrics have been used as part of motion-correction and best-phase-selection algorithms for coronary computed tomography angiography (CCTA). However, the lack of ground truth makes it difficult to validate how well the metrics quantify the level of motion artifact. This study investigated five motion artifact metrics, including two novel metrics, using a dynamic phantom, clinical CCTA images, and an observer study that provided ground-truth motion artifact scores from a series of pairwise comparisons.

Method
Five motion artifact metrics were calculated for the coronary artery regions on both phantom and clinical CCTA images: positivity, entropy, normalized circularity, Fold Overlap Ratio (FOR), and Low-Intensity Region Score (LIRS). CT images were acquired of a dynamic cardiac phantom that simulated cardiac motion and contained six iodine-filled vessels of varying diameter, with regions of soft plaque and calcifications. Scans were repeated with different gantry start angles. Images were reconstructed at five phases of the motion cycle. Clinical images were acquired from 14 CCTA exams with patient heart rates ranging from 52 to 82 bpm. The vessel and shading artifacts were manually segmented by three readers and combined to create ground-truth artifact regions. Motion artifact levels were also assessed by readers using a pairwise comparison method to establish a ground-truth reader score. Kendall's Tau coefficients were calculated to evaluate the statistical agreement in ranking between the motion artifact metrics and the reader scores. Linear regression between the reader scores and the metrics was also performed.

Results
On phantom images, the Kendall's Tau coefficients of the five motion artifact metrics were 0.50 (normalized circularity), 0.35 (entropy), 0.82 (positivity), 0.77 (FOR), and 0.77 (LIRS), where a higher Kendall's Tau signifies higher agreement. The FOR, LIRS, and transformed positivity (the fourth root of positivity) were further evaluated on the clinical images, where the Kendall's Tau coefficients were 0.59 (FOR), 0.53 (LIRS), and 0.21 (transformed positivity). On the clinical data, a Motion Artifact Score, defined as the product of the FOR and LIRS metrics, further improved agreement with the reader scores, with a Kendall's Tau coefficient of 0.65.

Conclusion
The FOR, LIRS, and the product of the two metrics provided the highest agreement in motion artifact ranking relative to the readers, and the highest linear correlation to the reader scores. The validated motion artifact metrics may be useful for developing and evaluating methods to reduce motion artifacts in CCTA images.
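The rank-agreement evaluation above can be illustrated with a small computation. This is a simplified sketch: a tie-unaware Kendall's tau-a (published analyses typically use tie-corrected variants), and the combined metric is simply the per-image product of FOR and LIRS; all names are illustrative.

```python
from itertools import combinations

def kendall_tau(scores_a, scores_b):
    """Kendall's tau-a: (concordant - discordant) pairs over all pairs.
    +1 means identical ranking, -1 means a fully reversed ranking."""
    concordant = discordant = 0
    for i, j in combinations(range(len(scores_a)), 2):
        s = (scores_a[i] - scores_a[j]) * (scores_b[i] - scores_b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(scores_a) * (len(scores_a) - 1) // 2
    return (concordant - discordant) / n_pairs

def motion_artifact_score(fold_overlap_ratio, low_intensity_region_score):
    """The combined metric from the study: the product of FOR and LIRS."""
    return fold_overlap_ratio * low_intensity_region_score
```

A metric that ranks images identically to the readers yields tau = 1.0; a fully reversed ranking yields -1.0.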
SPRK: A Low-Cost Stewart Platform For Motion Study In Surgical Robotics
To simulate body organ motion due to breathing, heart beats, or peristaltic
movements, we designed a low-cost, miniaturized SPRK (Stewart Platform Research
Kit) to translate and rotate phantom tissue. This platform is 20cm x 20cm x
10cm to fit in the workspace of a da Vinci Research Kit (DVRK) surgical robot
and costs $250, two orders of magnitude less than a commercial Stewart
platform. The platform has a range of motion of +/- 1.27 cm in translation
along x, y, and z directions and has motion modes for sinusoidal motion and
breathing-inspired motion. Modular platform mounts were also designed for
pattern cutting and debridement experiments. The platform's positional
controller has a time-constant of 0.2 seconds and the root-mean-square error is
1.22 mm, 1.07 mm, and 0.20 mm in x, y, and z directions respectively. All the
details, CAD models, and control software for the platform are available at
github.com/BerkeleyAutomation/sprk.
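The sinusoidal motion mode can be roughly illustrated by generating per-axis position setpoints clamped to the stated +/- 1.27 cm translational range. This is a sketch under assumptions: the 50 Hz setpoint rate and all names are hypothetical, not taken from the SPRK control software.

```python
import math

def sinusoidal_setpoints(amplitude_cm, freq_hz, duration_s,
                         rate_hz=50, limit_cm=1.27):
    """Generate a sinusoidal position trajectory for one translation axis,
    clamping the amplitude to the platform's +/- 1.27 cm range."""
    amp = min(abs(amplitude_cm), limit_cm)
    n = int(duration_s * rate_hz)
    return [amp * math.sin(2 * math.pi * freq_hz * i / rate_hz)
            for i in range(n)]
```

A requested 2.0 cm amplitude, for instance, would be clamped so every setpoint stays within the platform's range of motion.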
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for registering multi-modal patient-specific data, both to enhance the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and to provide intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
Robust automatic target tracking based on a Bayesian ego-motion compensation framework for airborne FLIR imagery
Automatic target tracking in airborne FLIR imagery is currently a challenge due to camera ego-motion. This phenomenon distorts the spatio-temporal correlation of the video sequence, which dramatically reduces tracking performance. Several works address this problem using ego-motion compensation strategies that compensate for the camera motion deterministically, assuming a specific model of geometric transformation. However, in real sequences a single geometric transformation cannot accurately describe the camera ego-motion for the whole sequence, and as a consequence the performance of the tracking stage can decrease significantly, or even fail completely. The optimal transformation for each pair of consecutive frames depends on the relative depth of the elements that compose the scene and on their degree of texturization. In this work, a novel Particle Filter framework is proposed to efficiently manage several hypotheses of geometric transformation: Euclidean, affine, and projective. Each type of transformation is used to compute candidate locations of the object in the current frame, and each candidate is then evaluated by the measurement model of the Particle Filter using appearance information. This approach is able to adapt to different camera ego-motion conditions and thus perform the tracking satisfactorily. The proposed strategy has been tested on the AMCOM FLIR dataset, showing high efficiency in tracking different types of targets under real working conditions.
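The predict/update cycle with multiple transformation hypotheses can be sketched as below. This is a minimal illustration, assuming each particle carries a 2D location, a hypothesized transformation type, and a weight; the actual transformation models and appearance likelihood are supplied as callables, and all names are illustrative rather than from the paper.

```python
TRANSFORM_HYPOTHESES = ("euclidean", "affine", "projective")

def predict(particles, apply_transform):
    """Propagate each particle: its hypothesized transformation type selects
    the geometric model used to map the target location into the new frame."""
    out = []
    for (x, y, ttype, w) in particles:
        nx, ny = apply_transform(ttype, x, y)
        out.append((nx, ny, ttype, w))
    return out

def update(particles, appearance_likelihood):
    """Reweight each candidate location with the appearance-based measurement
    model, then normalize so the weights form a probability distribution."""
    weighted = [(x, y, t, appearance_likelihood(x, y))
                for (x, y, t, _) in particles]
    total = sum(w for (_, _, _, w) in weighted) or 1.0
    return [(x, y, t, w / total) for (x, y, t, w) in weighted]
```

Because hypotheses of all three transformation types coexist in the particle set, the filter can favor whichever model best explains the current frame pair.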
Aggregated motion estimation for real-time MRI reconstruction
Real-time magnetic resonance imaging (MRI) methods generally shorten the
measuring time by acquiring less data than needed according to the sampling
theorem. In order to obtain a proper image from such undersampled data, the
reconstruction is commonly defined as the solution of an inverse problem, which
is regularized by a priori assumptions about the object. While practical
realizations have hitherto been surprisingly successful, strong assumptions
about the continuity of image features may affect the temporal fidelity of the
estimated images. Here we propose a novel approach for the reconstruction of
serial real-time MRI data which integrates the deformations between nearby
frames into the data consistency term. The deformations are not required to be
affine or rigid, and no additional measurements are needed. Moreover, it handles
multi-channel MRI data by simultaneously determining the image and its coil
sensitivity profiles in a nonlinear formulation which also adapts to
non-Cartesian (e.g., radial) sampling schemes. Experimental results of a motion
phantom with controlled speed and in vivo measurements of rapid tongue
movements demonstrate image improvements in preserving temporal fidelity and
removing residual artifacts.
Comment: This is a preliminary technical report. A polished version is
published by Magnetic Resonance in Medicine.
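The deformation-aware data consistency idea can be sketched as a residual that warps the current image estimate onto each neighboring frame before applying the forward operator. This is a toy stand-in for the paper's nonlinear multi-channel formulation: the warp and forward operators are supplied as callables, and all names are illustrative.

```python
def deformed_data_consistency(image, measured_frames, warps, forward_op):
    """Sum of squared residuals sum_k || A(W_k(x)) - y_k ||^2, where W_k
    deforms the estimate x onto neighboring frame k and A is the
    (undersampled) forward operator producing the measured data y_k."""
    total = 0.0
    for y_k, warp_k in zip(measured_frames, warps):
        predicted = forward_op(warp_k(image))
        total += sum((p - y) ** 2 for p, y in zip(predicted, y_k))
    return total
```

If the estimate, once warped by W_k, exactly predicts frame k's data, that frame contributes zero to the residual; minimizing this term over x (jointly with coil sensitivities and regularization, in the paper's setting) yields the reconstruction.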