
    Relational Reasoning Network (RRN) for Anatomical Landmarking

    Accurately identifying anatomical landmarks is a crucial step in deformation analysis and surgical planning for craniomaxillofacial (CMF) bones. Available methods require segmentation of the object of interest for precise landmarking. Unlike those, our purpose in this study is to perform anatomical landmarking using the inherent relations of CMF bones without explicitly segmenting them. We propose a new deep network architecture, called the relational reasoning network (RRN), to accurately learn the local and global relations of the landmarks. Specifically, we are interested in learning landmarks in the CMF region: the mandible, maxilla, and nasal bones. The proposed RRN works in an end-to-end manner, utilizing learned relations of the landmarks based on dense-block units and without the need for segmentation. Given a few landmarks as input, the proposed system accurately and efficiently localizes the remaining landmarks on the aforementioned bones. For a comprehensive evaluation of RRN, we used cone-beam computed tomography (CBCT) scans of 250 patients. The proposed system identifies landmark locations very accurately even when there are severe pathologies or deformations in the bones. The proposed RRN has also revealed unique relationships among the landmarks that help us draw inferences about the informativeness of the landmark points. RRN is invariant to the order of the landmarks, and it allowed us to discover the optimal configurations (number and location) of landmarks to be localized within the object of interest (mandible) or nearby objects (maxilla and nasal bones). To the best of our knowledge, this is the first algorithm of its kind to find anatomical relations of objects using deep learning. Comment: 10 pages, 6 figures, 3 tables
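The abstract does not give the RRN architecture, but the core relational-reasoning idea it builds on (pairwise relation features aggregated with an order-invariant sum) can be illustrated with a minimal numpy sketch. All weights and dimensions below are made up for demonstration; this is not the paper's network.

```python
# Minimal illustration of relational reasoning over landmarks (not the paper's RRN):
# each output is computed from pairwise relation features g(x_i, x_j), summed over
# all pairs so the result is invariant to the order of the input landmarks.
import numpy as np

def relate(landmarks, w_pair, w_out):
    """landmarks: (n, d) landmark features; w_pair: (2*d, h); w_out: (h, d)."""
    n, _ = landmarks.shape
    relations = []
    for i in range(n):
        for j in range(n):
            if i != j:
                pair = np.concatenate([landmarks[i], landmarks[j]])
                relations.append(np.maximum(pair @ w_pair, 0))  # g with ReLU
    pooled = np.sum(relations, axis=0)   # order-invariant sum over all pairs
    return pooled @ w_out                # f: e.g. predict a missing landmark

rng = np.random.default_rng(3)
x = rng.normal(size=(5, 3))              # five input landmarks in 3D
w_pair = rng.normal(size=(6, 8))
w_out = rng.normal(size=(8, 3))
out = relate(x, w_pair, w_out)
print(out.shape)                         # a single 3D prediction
```

Because the pairwise features are summed rather than concatenated, permuting the input landmarks leaves the output unchanged, mirroring the order-invariance the abstract highlights.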

    Ball-Scale Based Hierarchical Multi-Object Recognition in 3D Medical Images

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image, so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) a semi-automatic way of constructing a multi-object shape model assembly; (b) a novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images; (c) a hierarchical mechanism of positioning the model, in a one-shot way, in a given image from knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate the following: (1) incorporating a large number of objects improves the recognition accuracy dramatically; (2) the recognition algorithm can be thought of as a hierarchical framework in which quick placement of the model assembly constitutes coarse recognition and delineation itself constitutes the finest recognition; (3) scale yields useful information about the relationship between the model assembly and any given image, such that recognition results in a placement of the model close to the actual pose without any elaborate searches or optimization; (4) effective object recognition can make delineation most accurate. Comment: This paper was published and presented in SPIE Medical Imaging 201

    'Direct DICOM slice landmarking': a novel research technique to quantify skeletal changes in orthognathic surgery

    The limitations of current methods of quantifying the surgical movements of facial bones inspired this study. The aim of this study was to assess the accuracy and reproducibility of direct landmarking of 3D DICOM (Digital Imaging and Communications in Medicine) images to quantify the changes in the jaw bones following surgery. The study was carried out on a plastic skull to simulate the surgical movements of the jaw bones. Cone-beam CT scans were taken at 3 mm, 6 mm, and 9 mm maxillary advancement, together with a 2 mm, 4 mm, 6 mm, and 8 mm "down graft", which in total generated 12 different positions of the maxilla for the analysis. The movements of the maxilla were calculated using two methods: the standard approach, where distances between surface landmarks on the jaw bones were measured, and the novel approach, where measurements were taken directly from the internal structures of the corresponding 3D DICOM slices. A one-sample t-test showed that there was no statistically significant difference between the two methods of measurement for the y and z directions; however, the x direction showed a significant difference. The mean differences between the two absolute measurements were 0.34±0.20 mm, 0.22±0.16 mm, and 0.18±0.13 mm in the y, z, and x directions respectively. In conclusion, direct landmarking of 3D DICOM image slices is a reliable, reproducible, and informative method for assessment of 3D skeletal changes. The method has a clear clinical application, which includes the analysis of jaw movements in orthognathic surgery for the correction of facial deformities.
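The measurement underlying both approaches is the 3D displacement of a landmark between two scan positions. A minimal sketch, with purely illustrative landmark names and coordinates (not data from the study):

```python
# Hypothetical example: quantifying a simulated maxillary advancement as the
# per-axis and Euclidean displacement of one landmark between two scans (mm).
import math

def displacement(p, q):
    """Per-axis and Euclidean displacement between two 3D landmark positions."""
    dx, dy, dz = q[0] - p[0], q[1] - p[1], q[2] - p[2]
    return (dx, dy, dz), math.sqrt(dx * dx + dy * dy + dz * dz)

pre = {"A-point": (10.0, 52.0, 33.0)}    # landmark before simulated movement
post = {"A-point": (10.0, 55.0, 33.0)}   # same landmark after a 3 mm advancement

(dx, dy, dz), total = displacement(pre["A-point"], post["A-point"])
print(dx, dy, dz, total)   # 0.0 3.0 0.0 3.0
```

Comparing such displacements computed from surface landmarks versus from internal structures on the DICOM slices is exactly the two-method comparison the abstract evaluates with the t-test.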

    Synergistic Visualization And Quantitative Analysis Of Volumetric Medical Images

    The medical diagnosis process starts with an interview with the patient and continues with the physical exam. In practice, the medical professional may require additional screenings to diagnose precisely. Medical imaging is one of the most frequently used non-invasive screening methods to acquire insight into the human body. Medical imaging is not only essential for accurate diagnosis; it can also enable early prevention. Medical data visualization refers to projecting the medical data into a human-understandable format on mediums such as 2D or head-mounted displays, without performing any interpretation that may lead to clinical intervention. In contrast to medical visualization, quantification refers to extracting the information in the medical scan to enable clinicians to make fast and accurate decisions. Despite the extraordinary progress in both medical visualization and quantitative radiology, efforts to improve these two complementary fields are often performed independently, and their synergistic combination is under-studied. Existing image-based software platforms mostly fail to be used in routine clinics due to the lack of a unified strategy that guides clinicians both visually and quantitatively. Hence, there is an urgent need for a bridge connecting medical visualization and automatic quantification algorithms in the same software platform. In this thesis, we aim to fill this research gap by visualizing medical images interactively from anywhere and performing fast, accurate, and fully automatic quantification of the medical imaging data. To this end, we propose several innovative and novel methods. Specifically, we solve the following sub-problems of the ultimate goal: (1) direct web-based out-of-core volume rendering; (2) robust, accurate, and efficient learning-based algorithms to segment highly pathological medical data; (3) automatic landmarking for aiding diagnosis and surgical planning; and (4) novel artificial intelligence algorithms to determine the sufficient and necessary data to derive large-scale problems.

    Accuracy of automated 3D cephalometric landmarks by deep learning algorithms: systematic review and meta-analysis

    Objectives: The aim of the present systematic review and meta-analysis is to assess the accuracy of automated landmarking using deep learning in comparison with manual tracing for cephalometric analysis of 3D medical images. Methods: PubMed/Medline, IEEE Xplore, Scopus, and arXiv electronic databases were searched. Selection criteria were: ex vivo and in vivo volumetric data images suitable for 3D landmarking (Problem); a minimum of five automated landmarkings performed by a deep learning method (Intervention); manual landmarking (Comparison); and mean accuracy, in mm, between manual and automated landmarking (Outcome). QUADAS-2 was adapted for quality analysis. Meta-analysis was performed on studies that reported as outcomes the mean values and standard deviations of the difference (error) between manual and automated landmarking. Linear regression plots were used to analyze correlations between mean accuracy and year of publication. Results: The initial electronic screening yielded 252 papers published between 2020 and 2022. A total of 15 studies were included in the qualitative synthesis, whereas 11 studies were used for the meta-analysis. The overall random-effects model revealed a mean value of 2.44 mm, with high heterogeneity (I² = 98.13%, τ² = 1.018, p-value < 0.001); risk of bias was high due to the presence of issues in several domains per study. Meta-regression indicated a significant relation between mean error and year of publication (p-value = 0.012). Conclusion: Deep learning algorithms showed an excellent accuracy for automated 3D cephalometric landmarking. In the last two years, promising algorithms have been developed, and improvements in landmark annotation accuracy have been made.
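The I² statistic reported above quantifies how much of the between-study variation exceeds what sampling error alone would produce. A small sketch of its standard computation from Cochran's Q, using invented study means and standard errors (not the review's data):

```python
# Illustrative computation of I^2 heterogeneity from inverse-variance-weighted
# study estimates. The eleven "study" means (mm) and standard errors are made up.
def i_squared(means, std_errs):
    w = [1.0 / se ** 2 for se in std_errs]                    # inverse-variance weights
    pooled = sum(wi * m for wi, m in zip(w, means)) / sum(w)  # fixed-effect pooled mean
    q = sum(wi * (m - pooled) ** 2 for wi, m in zip(w, means))  # Cochran's Q
    df = len(means) - 1
    return max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0

means = [1.2, 2.8, 1.9, 3.5, 2.2, 1.4, 2.9, 3.1, 2.0, 2.6, 1.7]
ses = [0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 0.2, 0.3, 0.1, 0.2, 0.1]
print(round(i_squared(means, ses), 1))   # large value -> substantial heterogeneity
```

Values near 100%, as in the review (I² = 98.13%), indicate that almost all observed variability reflects genuine between-study differences rather than chance.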

    Fully automated landmarking and facial segmentation on 3D photographs

    Three-dimensional facial stereophotogrammetry provides a detailed representation of craniofacial soft tissue without the use of ionizing radiation. While manual annotation of landmarks serves as the current gold standard for cephalometric analysis, it is a time-consuming process and is prone to human error. The aim of this study was to develop and evaluate an automated cephalometric annotation method using a deep learning-based approach. Ten landmarks were manually annotated on 2897 3D facial photographs by a single observer. The automated landmarking workflow involved two successive DiffusionNet models and additional algorithms for facial segmentation. The dataset was randomly divided into a training and a test dataset. The training dataset was used to train the deep learning networks, whereas the test dataset was used to evaluate the performance of the automated workflow. The precision of the workflow was evaluated by calculating the Euclidean distances between the automated and manual landmarks, and compared to the intra-observer and inter-observer variability of manual annotation and the semi-automated landmarking method. The workflow was successful in 98.6% of all test cases. The deep learning-based landmarking method achieved precise and consistent landmark annotation. The mean precision of 1.69 (±1.15) mm was comparable to the inter-observer variability (1.31 ±0.91 mm) of manual annotation. The Euclidean distance between the automated and manual landmarks was within 2 mm in 69% of cases. Automated landmark annotation on 3D photographs was achieved with the DiffusionNet-based approach. The proposed method allows quantitative analysis of large datasets and may be used in diagnosis, follow-up, and virtual surgical planning. Comment: 13 pages, 4 figures, 7 tables; repository: https://github.com/rumc3dlab/3dlandmarkdetection
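The precision metrics above (mean ± SD distance, and the fraction of landmarks within 2 mm) are straightforward to compute from paired landmark sets. A minimal sketch with simulated coordinates, not the study's data:

```python
# Sketch of landmarking-precision evaluation: per-landmark Euclidean distance
# between automated and manual annotations, plus a clinical 2 mm success rate.
import numpy as np

def precision_report(auto_pts, manual_pts, threshold_mm=2.0):
    d = np.linalg.norm(auto_pts - manual_pts, axis=1)   # per-landmark distance (mm)
    return d.mean(), d.std(), (d <= threshold_mm).mean()

rng = np.random.default_rng(0)
manual = rng.uniform(0, 100, size=(10, 3))              # ten manual landmarks (mm)
auto = manual + rng.normal(0, 1.0, size=(10, 3))        # simulated automated output

mean_d, sd_d, frac_ok = precision_report(auto, manual)
print(f"{mean_d:.2f} ± {sd_d:.2f} mm, within 2 mm: {frac_ok:.0%}")
```

Reporting both the mean ± SD and the within-threshold fraction, as the abstract does, captures average accuracy and the tail of large outliers separately.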

    Articulated Statistical Shape Modelling of the Shoulder Joint

    The shoulder joint is the most mobile and unstable joint in the human body. This makes it vulnerable to soft tissue pathologies and dislocation. Insight into the kinematics of the joint may enable improved diagnosis and treatment of different shoulder pathologies. Shoulder joint kinematics can be influenced by the articular geometry of the joint. The aim of this project was to develop an analysis framework for shoulder joint kinematics via the use of articulated statistical shape models (ASSMs). Articulated statistical shape models extend conventional statistical shape models by combining the shape variability of anatomical objects collected from different subjects (statistical shape models) with the physical variation of pose between the same objects (articulation). The developed pipeline involved manual annotation of anatomical landmarks selected on 3D surface meshes of scapulae and humeri, and establishing dense surface correspondence across these data through a registration process. The registration was performed using a Gaussian process morphable model fitting approach. In order to register the two objects separately, while keeping their shape and kinematic relationship intact, one of the objects (scapula) was fixed, leaving the other (humerus) mobile. All pairs of registered humeri and scapulae were brought back to their native imaged position using the inverse of the associated registration transformation. The glenohumeral rotational center and the local anatomic coordinate systems of the humeri and scapulae were determined using the definitions suggested by the International Society of Biomechanics. Three motions (flexion, abduction, and internal rotation) were generated using Euler angle sequences. The ASSM was built using principal component analysis and validated. The validation results show that the model adequately estimated the shape and pose encoded in the training data.
Developing an ASSM of the shoulder joint helps to define the statistical shape and pose parameters of the glenohumeral articulating surfaces. An ASSM of the shoulder joint has potential applications in the analysis and investigation of population-wide joint posture variation and kinematics. Such analyses may include determining and quantifying abnormal articulation of the joint based on the range of motion; understanding detailed glenohumeral joint function and internal joint measurements; and diagnosing shoulder pathologies. Future work will involve developing a protocol for encoding the shoulder ASSM with real, rather than handcrafted, pose variation.
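The PCA step at the heart of a statistical shape model can be sketched compactly: stack corresponding landmark coordinates per subject, subtract the mean, and take the principal modes of variation. The training shapes below are synthetic stand-ins, not the registered scapula/humerus data from the thesis:

```python
# Hedged sketch of building a statistical shape model via PCA over aligned
# landmark configurations, then synthesizing a new shape from mode coefficients.
import numpy as np

def build_ssm(shapes):
    """shapes: (n_subjects, n_points*3) matrix of corresponding landmark coords."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # Principal modes = right singular vectors of the centered data (SVD is
    # numerically preferable to forming the covariance matrix explicitly).
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = s ** 2 / (len(shapes) - 1)   # variance explained per mode
    return mean, vt, variances

def synthesize(mean, modes, b):
    """Generate a shape from the first len(b) mode coefficients b."""
    return mean + b @ modes[: len(b)]

rng = np.random.default_rng(1)
training = rng.normal(0, 1, size=(20, 30))   # 20 subjects, 10 3D points each
mean, modes, var = build_ssm(training)
new_shape = synthesize(mean, modes, np.array([2.0, -1.0]))
print(new_shape.shape)                       # a full 30-coordinate shape
```

An articulated SSM additionally parameterizes the rigid pose (e.g. ISB Euler-angle sequences) of each bone on top of these shape coefficients.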

    3D approximation of scapula bone shape from 2D X-ray images using landmark-constrained statistical shape model fitting

    Two-dimensional X-ray imaging is the dominant imaging modality in low-resource countries despite the existence of three-dimensional (3D) imaging modalities. This is because fewer hospitals in low-resource countries can afford 3D imaging systems, as their acquisition and operation costs are higher. However, 3D images are desirable in a range of clinical applications, for example surgical planning. The aim of this research was to develop a tool for 3D approximation of the scapula bone from 2D X-ray images using landmark-constrained statistical shape model fitting. First, X-ray stereophotogrammetry was used to reconstruct the 3D coordinates of points located on 2D X-ray images of the scapula, acquired from two perspectives. A suitable calibration frame was used to map the image coordinates to their corresponding 3D real-world coordinates. The 3D point localization yielded average errors of (0.14, 0.07, 0.04) mm in the X, Y, and Z coordinates respectively, and an absolute reconstruction error of 0.19 mm. The second phase assessed the reproducibility of the scapula landmarks reported by Ohl et al. (2010) and Borotikar et al. (2015). Only three (the inferior angle, the acromion, and the coracoid process) of the eight reproducible landmarks considered were selected, as these were identifiable from the two different perspectives required for X-ray stereophotogrammetry in this project. For the last phase, an approximation of a scapula was produced with the aid of a statistical shape model (SSM) built from a training dataset of 84 CT scapulae. This involved constraining an SSM to the 3D reconstructed coordinates of the selected reproducible landmarks from 2D X-ray images. Comparison of the approximate model with a CT-derived ground-truth 3D segmented volume resulted in surface-to-surface average distances of 4.28 mm and 3.20 mm, using three and sixteen landmarks respectively.
Hence, increasing the number of landmarks produces a posterior model that makes better predictions of patient-specific reconstructions. An average Euclidean distance of 1.35 mm was obtained between the three selected landmarks on the approximation and the corresponding landmarks on the CT image. Conversely, a Euclidean distance of 5.99 mm was obtained between the three selected landmarks on the original SSM and the corresponding landmarks on the CT image. The Euclidean distances confirm that a posterior model moves closer to the CT image, and hence it reduces the search space for a more exact patient-specific 3D reconstruction by other fitting algorithms.
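Constraining an SSM to a handful of reconstructed landmarks amounts to solving for the mode coefficients that best explain the observed coordinates, then reading the full shape off the model. The sketch below uses a simple least-squares fit on synthetic data as a stand-in for the posterior-model fitting described above; it is not the thesis code:

```python
# Simplified landmark-constrained SSM fit: solve for shape coefficients b such
# that the model matches the observed landmark coordinates, then reconstruct
# the full shape. Mean shape, modes, and landmarks are all synthetic.
import numpy as np

def fit_to_landmarks(mean, modes, obs_idx, obs_vals, n_modes=3):
    """mean: (n*3,) mean shape; modes: (k, n*3) PCA modes;
    obs_idx: indices of observed coordinates; obs_vals: their measured values."""
    A = modes[:n_modes, obs_idx].T          # modes restricted to observed coords
    r = obs_vals - mean[obs_idx]            # residual the modes must explain
    b, *_ = np.linalg.lstsq(A, r, rcond=None)
    return mean + b @ modes[:n_modes]       # full reconstructed shape

rng = np.random.default_rng(2)
mean = rng.normal(size=30)                               # 10 3D points, flattened
modes = np.linalg.qr(rng.normal(size=(30, 5)))[0].T      # 5 orthonormal modes
true = mean + np.array([1.5, -0.5, 0.8]) @ modes[:3]     # "patient" shape
obs_idx = np.arange(9)                                   # three observed landmarks
recon = fit_to_landmarks(mean, modes, obs_idx, true[obs_idx])
print(np.linalg.norm(recon[obs_idx] - true[obs_idx]))    # observed coords matched
```

With more observed landmarks the system is better constrained, which matches the abstract's finding that sixteen landmarks yield a smaller surface-to-surface error than three.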