
    Medical Image Segmentation with Deep Learning

    Medical imaging is the technique and process of creating visual representations of the body of a patient for clinical analysis and medical intervention. Healthcare professionals rely heavily on medical images and image documentation for proper diagnosis and treatment. However, manual interpretation and analysis of medical images is time-consuming and can be inaccurate when the interpreter is not well trained. Fully automatic segmentation of the region of interest from medical images has been researched for years to enhance the efficiency and accuracy of understanding such images. With the advance of deep learning, various neural network models have achieved great success in semantic segmentation and sparked research interest in medical image segmentation using deep learning. We propose two convolutional frameworks to segment tissues from different types of medical images. Comprehensive experiments and analyses are conducted on various segmentation neural networks to demonstrate the effectiveness of our methods. Furthermore, the datasets built for training our networks and the full implementations are published.

    Medical Image Segmentation with Deep Convolutional Neural Networks

    Medical imaging is the technique and process of creating visual representations of the body of a patient for clinical analysis and medical intervention. Healthcare professionals rely heavily on medical images and image documentation for proper diagnosis and treatment. However, manual interpretation and analysis of medical images are time-consuming and can be inaccurate when the interpreter is not well trained. Fully automatic segmentation of the region of interest from medical images has been researched for years to enhance the efficiency and accuracy of understanding such images. With the advance of deep learning, various neural network models have achieved great success in semantic segmentation and sparked research interest in medical image segmentation using deep learning. We propose three convolutional frameworks to segment tissues from different types of medical images. Comprehensive experiments and analyses are conducted on various segmentation neural networks to demonstrate the effectiveness of our methods. Furthermore, the datasets built for training our networks and the full implementations are published.
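Neither abstract names its evaluation metric, but the standard way to quantify how well a predicted segmentation matches a ground-truth mask is the Dice overlap coefficient. The following sketch shows the usual NumPy formulation; the toy 4×4 masks are purely illustrative, not data from these works:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy masks: the prediction recovers 2 of the 4 target pixels
# and adds one false positive.
target = np.zeros((4, 4), dtype=np.uint8)
target[1:3, 1:3] = 1                      # 4-pixel square
pred = np.zeros((4, 4), dtype=np.uint8)
pred[1:3, 1:2] = 1                        # 2 true positives
pred[3, 3] = 1                            # 1 false positive
score = dice_coefficient(pred, target)    # 2*2 / (3 + 4) ≈ 0.571
```

For multi-class tissue segmentation, the same formula is typically applied per class and averaged.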

    CAD-Based Porous Scaffold Design of Intervertebral Discs in Tissue Engineering

    With the development and maturity of three-dimensional (3D) printing technology over the past decade, 3D printing has been widely investigated and applied in the field of tissue engineering to repair damaged tissues or organs, such as muscles, skin, and bones. Although a number of automated fabrication methods have been developed to create superior bio-scaffolds with specific surface properties and porosity, a major challenge remains: how to fabricate 3D natural biodegradable scaffolds with tailored properties such as intricate architecture, porosity, and interconnectivity, in order to provide the needed structural integrity, strength, transport, and an ideal microenvironment for cell and tissue growth. In this dissertation, a robust pipeline for fabricating bio-functional porous scaffolds of intervertebral discs based on different innovative porous design methodologies is illustrated. First, a triply periodic minimal surface (TPMS) based parameterization method, which overcomes the integrity problem of the traditional TPMS method, is presented in Chapter 3. Then, an implicit surface modeling (ISM) approach using tetrahedral implicit surfaces (TIS) is demonstrated and compared with the TPMS method in Chapter 4. In Chapter 5, we present an advanced porous design method with higher flexibility using anisotropic radial basis functions (ARBF) and volumetric meshes. Based on these advanced porous design methods, the 3D model of a bio-functional porous intervertebral disc scaffold can be easily designed, and its physical model can be manufactured through 3D printing. However, due to the unique shape of each intervertebral disc and the intricate topological relationship between the intervertebral discs and the spine, accurate localization and segmentation of dysfunctional discs are another obstacle to fabricating porous 3D disc models.
To that end, we discuss in Chapter 6 a technique for segmenting intervertebral discs from CT-scanned medical images using deep convolutional neural networks. Additionally, some examples of applying the different porous designs to the segmented intervertebral disc models are demonstrated in Chapter 6.
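The abstract does not reproduce the TPMS formulation from Chapter 3. As a minimal sketch, one classic TPMS widely used for porous scaffolds is the gyroid, defined implicitly by sin x cos y + sin y cos z + sin z cos x = t; shifting the level-set threshold t trades off porosity against strut thickness. The grid resolution and threshold below are illustrative assumptions, not values from the dissertation:

```python
import numpy as np

def gyroid(x, y, z):
    """Implicit gyroid TPMS: the zero level set approximates a minimal surface."""
    return (np.sin(x) * np.cos(y)
            + np.sin(y) * np.cos(z)
            + np.sin(z) * np.cos(x))

# Sample one periodic cell on a voxel grid and estimate porosity as the
# fraction of voxels on one side of the level set gyroid(x, y, z) < t.
n = 64
coords = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
t = 0.0                                    # t = 0 splits the cell into two equal phases
porosity = np.mean(gyroid(x, y, z) < t)    # ≈ 0.5 by gyroid symmetry
```

In a scaffold-design pipeline, the voxel field would then be meshed (e.g. by marching cubes) and exported for 3D printing.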

    Learning-based fully automated prediction of lumbar disc degeneration progression with specified clinical parameters and preliminary validation

    Background: Lumbar disc degeneration (LDD) may be related to aging, biomechanical, and genetic factors. Despite extensive work on understanding its etiology, there is currently no automated tool for accurately predicting its progression. / Purpose: We aim to establish a novel deep learning-based pipeline to predict the progression of LDD-related findings using lumbar MRIs. / Materials and methods: We utilized our dataset of MRIs acquired from 1,343 individual participants (taken at baseline and at the 5-year follow-up timepoint), with progression assessments (the Schneiderman score, disc bulging, and Pfirrmann grading) labelled by spine specialists with over ten years' clinical experience. Our new pipeline was realized by integrating MRI-SegFlow and the Visual Geometry Group-Medium (VGG-M) network for automated disc region detection and LDD progression prediction, respectively. LDD progression was quantified by comparing the Schneiderman score, disc bulging, and Pfirrmann grading at baseline and at follow-up. A fivefold cross-validation was conducted to assess the predictive performance of the new pipeline. / Results: Our pipeline achieved strong performance on LDD progression prediction, with high accuracy for the Schneiderman score (90.2% ± 0.9%), disc bulging (90.4% ± 1.1%), and Pfirrmann grading (89.9% ± 2.1%). / Conclusion: This is the first attempt to use deep learning to predict LDD progression on a large dataset with 5-year follow-up. Requiring no human interference, our pipeline can potentially achieve similar predictive performance in new settings with minimal effort.
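The accuracies above are reported as mean ± standard deviation over fivefold cross-validation. A minimal sketch of that aggregation is shown below, with a noisy stand-in predictor in place of the actual MRI-SegFlow + VGG-M pipeline; all names, label counts, and the 10% error rate are illustrative assumptions:

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and split them into k disjoint folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)

def cross_validated_accuracy(labels, predict_fn, k=5):
    """Mean and std of per-fold accuracy, as reported in the abstract."""
    accs = []
    for test_idx in kfold_indices(len(labels), k):
        preds = predict_fn(test_idx)           # model trained on the other folds
        accs.append(np.mean(preds == labels[test_idx]))
    return np.mean(accs), np.std(accs)

# Stand-in "model" that is right ~90% of the time on 5-level grades.
rng = np.random.default_rng(1)
labels = rng.integers(0, 5, size=1000)         # e.g. Pfirrmann grades 1-5
def noisy_oracle(test_idx):
    preds = labels[test_idx].copy()
    flip = rng.random(len(test_idx)) < 0.1     # corrupt ~10% of predictions
    preds[flip] = (preds[flip] + 1) % 5
    return preds

mean_acc, std_acc = cross_validated_accuracy(labels, noisy_oracle)
```

In the real pipeline, each fold's predictor would be retrained from scratch on the remaining four folds before being evaluated on the held-out fold.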

    Development of a software system for surgical robots based on multimodal image fusion: study protocol

    Background: Surgical robots are gaining popularity because of their capability to improve the precision of pedicle screw placement. However, current surgical robots rely on unimodal computed tomography (CT) images as baseline images, limiting their visualization to vertebral bone structures and excluding soft-tissue structures such as intervertebral discs and nerves. This inherent limitation significantly restricts the applicability of surgical robots. To address this issue and further enhance the safety and accuracy of robot-assisted pedicle screw placement, this study will develop a software system for surgical robots based on multimodal image fusion. Such a system can extend the application range of surgical robots to operations such as surgical channel establishment and nerve decompression. / Methods: Initially, imaging data of the patients included in the study are collected. Professional workstations are employed to establish, train, validate, and optimize algorithms for vertebral bone segmentation in CT and magnetic resonance (MR) images, intervertebral disc segmentation in MR images, nerve segmentation in MR images, and registration and fusion of CT and MR images. Subsequently, a spine application model containing independent modules for vertebrae, intervertebral discs, and nerves is constructed, and a software system for surgical robots based on multimodal image fusion is designed. Finally, the software system is clinically validated. / Discussion: We will develop a software system based on multimodal image fusion for surgical robots that can be applied not only to robot-assisted screw placement but also to surgical access establishment, nerve decompression, and other operations. The development of this software system is important. First, it can improve the accuracy of pedicle screw placement, percutaneous vertebroplasty, percutaneous kyphoplasty, and other surgeries.
Second, it can reduce the number of fluoroscopies, shorten the operation time, and reduce surgical complications. In addition, it would help expand the application range of surgical robots by providing key imaging data for realizing surgical channel establishment, nerve decompression, and other operations.
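The protocol does not detail how CT and MR images will be registered. The standard similarity measure driving multimodal registration is mutual information (MI), since it assumes only a statistical (not linear) relationship between the two modalities' intensities. The sketch below computes histogram-based MI on synthetic stand-in images; the bin count and image sizes are illustrative assumptions:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images (in nats).
    Higher MI indicates better spatial alignment of the two modalities."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0                              # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

# A synthetic "CT" slice and a nonlinearly remapped "MR" counterpart:
rng = np.random.default_rng(0)
ct = rng.random((64, 64))
mr = np.sqrt(ct)                              # same anatomy, different contrast
noise = rng.random((64, 64))                  # unrelated image
aligned = mutual_information(ct, mr)
unrelated = mutual_information(ct, noise)     # aligned > unrelated
```

A registration loop would repeatedly transform the MR image and keep the transform that maximizes MI against the fixed CT image.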