IMAGE PROCESSING, SEGMENTATION AND MACHINE LEARNING MODELS TO CLASSIFY AND DELINEATE TUMOR VOLUMES TO SUPPORT MEDICAL DECISION
Techniques for processing and analysing images and medical data have become
central to translational applications and research in clinical and pre-clinical
environments. The advantages of these techniques are improved diagnostic
accuracy and efficient assessment of treatment response by means of quantitative
biomarkers. In the era of personalized medicine, early and accurate prediction
of therapy response in patients is still a critical issue.
In radiation therapy planning, Magnetic Resonance Imaging (MRI) provides high
quality detailed images and excellent soft-tissue contrast, while Computerized
Tomography (CT) images provide attenuation maps and very good hard-tissue
contrast. In this context, Positron Emission Tomography (PET) is a non-invasive
imaging technique which has the advantage, over morphological imaging techniques,
of providing functional information about the patient’s disease.
In the last few years, several criteria to assess therapy response in oncological
patients have been proposed, ranging from anatomical to functional assessments.
Changes in tumour size are not necessarily correlated with changes in tumour
viability and outcome. In addition, morphological changes resulting from therapy
occur more slowly than functional changes. Inclusion of PET images in radiotherapy
protocols is desirable because it is predictive of treatment response and provides
crucial information to accurately target the oncological lesion and to escalate the
radiation dose without increasing normal tissue injury. For this reason, PET may be
used for improving the Planning Target Volume (PTV). Nevertheless, due to the
nature of PET images (low spatial resolution, high noise and weak boundary),
metabolic image processing is a critical task.
The aim of this Ph.D. thesis is to develop smart methodologies for the
medical imaging field to address different kinds of problems related to medical
image and data analysis, working closely with radiologists.
Various issues in the clinical environment have been addressed and a number of
improvements have been produced in various fields, such as organ and tissue
segmentation and classification to delineate tumour volumes using machine learning
techniques to support medical decisions.
In particular, the following topics have been the object of this study:
• Technique for Crohn’s Disease Classification using a Kernel-Based Support
Vector Machine;
• Automatic Multi-Seed Detection For MR Breast Image Segmentation;
• Tissue Classification in PET Oncological Studies;
• KSVM-Based System for the Definition, Validation and Identification of the
Incisional Hernia Recurrence Risk Factors;
• A smart and operator independent system to delineate tumours in Positron
Emission Tomography scans;
• Active Contour Algorithm with Discriminant Analysis for Delineating
Tumors in Positron Emission Tomography;
• K-Nearest Neighbor driving Active Contours to Delineate Biological Tumor
Volumes;
• Tissue Classification to Support Local Active Delineation of Brain Tumors;
• A fully automatic system of Positron Emission Tomography Study
segmentation.
This work has been developed in collaboration with the medical staff and
colleagues at the:
• Dipartimento di Biopatologia e Biotecnologie Mediche e Forensi
(DIBIMED), University of Palermo
• Cannizzaro Hospital of Catania
• Istituto di Bioimmagini e Fisiologia Molecolare (IBFM) Centro Nazionale
delle Ricerche (CNR) of CefalĂą
• School of Electrical and Computer Engineering at Georgia Institute of
Technology
The proposed contributions have produced scientific publications in indexed
computer science and medical journals and conferences. They are very useful in
terms of PET and MRI image segmentation and may be used daily as a Medical
Decision Support System to enhance the current methodology performed by
healthcare operators in radiotherapy treatments.
The future developments of this research concern the integration of data acquired
by image analysis with the management and processing of big data coming from a
wide range of heterogeneous sources.
An Automated Liver Vasculature Segmentation from CT Scans for Hepatic Surgical Planning
Liver vasculature segmentation is a crucial step for liver surgical planning and an important part of the 3D visualisation of the liver anatomy. The spatial relationship between vessels and other liver structures, such as tumors and liver anatomic segments, helps in reducing surgical treatment risks. However, liver vessel segmentation is a challenging task due to low contrast with the neighboring parenchyma, the complex anatomy, and the very thin branches and very small vessels. This paper introduces a fully automated framework consisting of four steps to segment the vessels inside the liver. Firstly, in the preprocessing step, a combination of two filtering techniques is used to extract and enhance vessels inside the liver region: the vesselness filter is used to extract the vessel structures, and then the anisotropic coherence enhancing diffusion (CED) filter is used to enhance the intensity within the tubular vessel structures. This step is followed by a smart multiple thresholding to extract the initial vasculature segmentation. The liver vasculature structures, including the hepatic veins connected to the inferior vena cava and the portal veins, are then extracted. Finally, the inferior vena cava is segmented and excluded from the vessel segmentation, as it is not considered part of the liver vasculature. The method is validated on the publicly available 3DIRCAD datasets. The Dice coefficient (DSC) is used to evaluate the method, and the average DSC achieved is 68.5%. The proposed approach succeeded in accurately segmenting the liver vasculature from the liver envelope, which makes it a potential tool for clinical preoperative planning.
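The Dice coefficient used for validation above has a direct implementation; the following is a minimal sketch with toy binary masks (the arrays are illustrative, not data from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 2D masks standing in for a 3D vessel segmentation
pred  = np.array([[1, 1, 0],
                  [0, 1, 0]])
truth = np.array([[1, 0, 0],
                  [0, 1, 1]])
print(dice_coefficient(pred, truth))  # 2*2/(3+3) = 0.666...
```

The same function applies unchanged to 3D vessel masks, since the computation only involves voxel-wise overlap counts.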
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Segmentation of striatal brain structures from high resolution PET images
Dissertation presented at the Faculty of Science and Technology of the New University of Lisbon in fulfillment of the requirements for the Masters degree in Electrical Engineering and Computers. We propose and evaluate fully automatic segmentation methods for the extraction of striatal brain surfaces (caudate, putamen, ventral striatum and white matter) from high resolution positron emission tomography (PET) images. In the preprocessing steps, both the right and the left striata were segmented from the high resolution PET images. This segmentation was achieved by delineating the brain surface, finding the plane that maximizes the reflective symmetry of the brain (mid-sagittal plane) and, finally, extracting the right and left striata from both hemisphere images. The delineation of the brain surface and the extraction of the striata were achieved using the DSM-OS (Surface Minimization – Outer Surface) algorithm. The segmentation of striatal brain surfaces from the striatal images can be separated into two sub-processes: the construction of a graph (named the “voxel affinity matrix”) and the graph clustering. The voxel affinity matrix was built using a set of image features that accurately informs the clustering method of the relationship between image voxels. The features defining the similarity of pairwise voxels were spatial connectivity, intensity values, and Euclidean distances. The clustering process is treated as a graph partition problem using two methods, a spectral one (multiway normalized cuts) and a non-spectral one (weighted kernel k-means). The normalized cuts algorithm relies on the computation of the graph eigenvalues to partition the graph into connected regions. However, this method fails when applied to high resolution PET images due to the high computational requirements arising from the image size. The weighted kernel k-means, on the other hand, iteratively classifies, with the aid of the image features, a given data set into a predefined number of clusters.
The weighted kernel k-means and the normalized cuts algorithms are mathematically similar. After finding the optimal initial parameters of the weighted kernel k-means for this type of image, no further tuning is necessary for subsequent images. Our results showed that the putamen and ventral striatum were accurately segmented, while the caudate and white matter appeared to be merged in the same cluster. The putamen was divided into anterior and posterior areas. All the experiments resulted in the same type of segmentation, validating the reproducibility of our results.
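The weighted kernel k-means clustering described above can be sketched as follows. This is a minimal illustration operating on a precomputed affinity matrix; the RBF kernel, the deterministic balanced initialization, and the toy 1D data are assumptions for the example, not the dissertation's actual features or parameters:

```python
import numpy as np

def weighted_kernel_kmeans(K, weights, n_clusters, n_iter=50):
    """Weighted kernel k-means on a precomputed affinity (kernel) matrix K.

    The kernel distance from point i to cluster c is
      K_ii - 2 * sum_{j in c} w_j K_ij / s_c
           + sum_{j,l in c} w_j w_l K_jl / s_c^2,
    where s_c is the total weight of cluster c.
    """
    n = K.shape[0]
    labels = np.arange(n) % n_clusters  # deterministic balanced start
    diag = np.diag(K)
    for _ in range(n_iter):
        dist = np.empty((n, n_clusters))
        for c in range(n_clusters):
            mask = labels == c
            w = weights[mask]
            s = w.sum()
            if s == 0:  # an emptied cluster stays unattractive
                dist[:, c] = np.inf
                continue
            second = K[:, mask] @ w / s
            third = w @ K[np.ix_(mask, mask)] @ w / s**2
            dist[:, c] = diag - 2 * second + third
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Two well-separated 1D point groups, RBF (Gaussian) affinity
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
K = np.exp(-(x[:, None] - x[None, :]) ** 2)
labels = weighted_kernel_kmeans(K, np.ones(len(x)), n_clusters=2)
print(labels)  # the two point groups end up in two different clusters
```

Because only the affinity matrix is consulted, the same routine applies to a voxel affinity matrix built from spatial connectivity, intensities, and distances, as in the dissertation.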
Automatic Segmentation of Mandible from Conventional Methods to Deep Learning-A Review
Medical imaging techniques, such as (cone beam) computed tomography and magnetic resonance imaging, have proven to be a valuable component of oral and maxillofacial surgery (OMFS). Accurate segmentation of the mandible from head and neck (H&N) scans is an important step in building a personalized 3D digital mandible model for 3D printing and treatment planning of OMFS. Segmented mandible structures are used to effectively visualize the mandible volumes and to evaluate particular mandible properties quantitatively. However, mandible segmentation is always challenging for both clinicians and researchers, due to complex structures and high-attenuation materials, such as teeth (fillings) or metal implants, which easily lead to high noise and strong artifacts during scanning. Moreover, the size and shape of the mandible vary to a large extent between individuals. Therefore, mandible segmentation is a tedious and time-consuming task and requires adequate training to be performed properly. With the advancement of computer vision approaches, researchers have developed several algorithms to automatically segment the mandible during the last two decades. The objective of this review was to present the available fully and semi-automatic mandible segmentation methods published in scientific articles. This review provides a vivid description of the scientific advancements to clinicians and researchers in this field to help them develop novel automatic methods for clinical applications.
Computational methods for the analysis of functional 4D-CT chest images.
Medical imaging is an important emerging technology that has been used intensively in the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too much to be fully exploited by radiologists and physicians. Therefore, the design of a computer-aided diagnostic (CAD) system, which can be used as an assistive tool for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for patients with lung cancer, which remains the leading cause of cancer-related death in the USA. In 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypotheses that irradiated lung tissues may be affected and suffer a decrease of their functionality as a side effect of radiation therapy treatment.
These hypotheses have been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases, used to estimate elasticity, ventilation, and texture features, provide discriminatory descriptors for early detection of radiation-induced lung injury. The proposed methodologies will lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed, requiring three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today’s clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from the radiation therapy. After the segmentation of the VOI, a lung registration framework is introduced to perform a crucial step that ensures the co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heart beats, and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately. The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose.
Finally, the detection of radiation-induced lung injury is introduced, combining the previous two medical image processing and analysis steps with the feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that can accurately model the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are calculated from the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (the air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues’ elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
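The Jacobian-based ventilation feature described above can be sketched in 2D as follows; the uniform synthetic displacement field is an illustrative assumption, not the dissertation's 4D-CT data:

```python
import numpy as np

def jacobian_ventilation_2d(ux, uy, spacing=(1.0, 1.0)):
    """Voxel-wise fractional volume change from a 2D displacement field.

    J = det(I + grad u); J - 1 approximates the local ventilation
    (volume gain or loss) between two registered respiratory phases.
    """
    dux_dy, dux_dx = np.gradient(ux, *spacing)
    duy_dy, duy_dx = np.gradient(uy, *spacing)
    J = (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx
    return J - 1.0

# Synthetic uniform 5% expansion along x: u_x = 0.05 * x, u_y = 0
ny, nx = 8, 8
y, x = np.mgrid[0:ny, 0:nx].astype(float)
vent = jacobian_ventilation_2d(0.05 * x, np.zeros_like(x))
print(vent.mean())  # ~0.05: a uniform 5% volume gain
```

On real 4D-CT data the same determinant is computed on the 3D deformation field, adding the z-terms of the 3x3 Jacobian.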
AUTOMATIC RECOGNITION OF DENTAL PATHOLOGIES AS PART OF A CLINICAL DECISION SUPPORT PLATFORM
The current work was done within the context of the Romanian National Program II (PNII) research project "Application for Using Image Data Mining and 3D Modeling in Dental Screening" (AIMMS). The AIMMS project aims to design a program that can detect anatomical information and possible pathological formations in a collection of digital imaging and communications in medicine (DICOM) images. The main function of the AIMMS platform is to give the user the opportunity to use an integrated dental support platform based on image processing techniques and 3D modeling. From the literature review, it can be seen that existing studies on the detection and classification of teeth and dental pathologies are in their infancy; therefore, the work reported in this article makes a scientific contribution to this field. This article presents the relevant literature review and the algorithms that were created for the detection of dental pathologies in the context of the AIMMS research project.
Development of advanced 3D medical analysis tools for clinical training, diagnosis and treatment
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The objective of this PhD research was the development of novel 3D interactive medical platforms for medical image analysis, simulation and visualisation, with a focus on oncology images, to support clinicians in managing the increasing amount of data provided by several medical image modalities.
DoctorEye and Automatic Tumour Detector platforms were developed through constant interaction and feedback from expert clinicians, integrating a number of innovations in algorithms and methods, concerning image handling, segmentation, annotation, visualisation and plug-in technologies. DoctorEye is already being used in a related tumour modelling EC project (ContraCancrum) and offers several robust algorithms and tools for fast annotation, 3D visualisation and measurements to assist the clinician in better understanding the pathology of the brain area and define the treatment. It is free to use upon request and offers a user friendly environment for clinicians as it simplifies the implementation of complex algorithms and methods. It integrates a sophisticated, simple-to-use plug-in technology allowing researchers to add algorithms and methods (e.g. tumour growth and simulation algorithms for improving therapy planning) and interactively check the results. Apart from diagnostic and research purposes, it supports clinical training as it allows an expert clinician to evaluate a clinical delineation by different clinical users. The Automatic Tumour Detector focuses on abdominal images, which are more complex than those of the brain. It supports full automatic 3D detection of kidney pathology in real-time as well as 3D advanced visualisation and measurements. This is achieved through an innovative method implementing Templates. They contain rules and parameters for the Automatic Recognition Framework defined interactively by engineers based on clinicians’ 3D Golden Standard models. The Templates enable the automatic detection of kidneys and their possible abnormalities (tumours, stones and cysts). The system also supports the transmission of these Templates to another expert for a second opinion. 
Future versions of the proposed platforms could integrate even more sophisticated algorithms and tools and offer fully computer-aided identification of a variety of other organs and their dysfunctions.
Coronary Artery Segmentation and Motion Modelling
Conventional coronary artery bypass surgery requires an invasive sternotomy and the
use of a cardiopulmonary bypass, which leads to a long recovery period and has high
infectious potential. Totally endoscopic coronary artery bypass (TECAB) surgery,
based on image-guided robotic surgical approaches, has been developed to allow
clinicians to conduct the bypass surgery off-pump with only three pinhole incisions
in the chest cavity, through which two robotic arms and one stereo endoscopic camera
are inserted. However, the restricted field of view of the stereo endoscopic images leads
to possible vessel misidentification and coronary artery mis-localization. This results
in 20-30% conversion rates from TECAB surgery to the conventional approach.
We have constructed patient-specific 3D + time coronary artery and left ventricle
motion models from preoperative 4D Computed Tomography Angiography (CTA)
scans. Through temporally and spatially aligning this model with the intraoperative
endoscopic views of the patient's beating heart, this work assists the surgeon to identify
and locate the correct coronaries during the TECAB procedures. Thus this work has
the prospect of reducing the conversion rate from TECAB to conventional coronary
bypass procedures.
This thesis mainly focuses on designing segmentation and motion tracking methods
for the coronary arteries in order to build pre-operative patient-specific motion models.
Various vessel centreline extraction and lumen segmentation algorithms are presented,
including intensity-based approaches, a geometric model matching method and a
morphology-based method. A probabilistic atlas of the coronary arteries is formed
from a group of subjects to facilitate the vascular segmentation and registration procedures.
A non-rigid registration framework based on a free-form deformation model
and multi-level multi-channel large deformation diffeomorphic metric mapping are
proposed to track the coronary motion. The methods are applied to 4D CTA images
acquired from various groups of patients and quantitatively evaluated.
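As a sketch of the intensity-based vesselness idea underlying such centreline and lumen extraction approaches, a single-scale 2D Frangi-style filter built from Hessian eigenvalues can be written as follows. The thesis works on 3D multi-scale data; the scale and response parameters here are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(image, sigma=2.0, beta=0.5, c=15.0):
    """Single-scale 2D Frangi-style vesselness from Hessian eigenvalues.

    Bright tubular structures have one small eigenvalue (along the vessel)
    and one large negative eigenvalue (across it); the response combines a
    blobness ratio with a second-order structure-strength term.
    """
    # Scale-normalized second derivatives via Gaussian derivative filters
    Hxx = gaussian_filter(image, sigma, order=(0, 2)) * sigma**2
    Hyy = gaussian_filter(image, sigma, order=(2, 0)) * sigma**2
    Hxy = gaussian_filter(image, sigma, order=(1, 1)) * sigma**2
    # Eigenvalues of the symmetric 2x2 Hessian, ordered so |l1| <= |l2|
    tmp = np.sqrt((Hxx - Hyy) ** 2 + 4 * Hxy**2)
    mu1 = (Hxx + Hyy + tmp) / 2
    mu2 = (Hxx + Hyy - tmp) / 2
    swap = np.abs(mu1) > np.abs(mu2)
    l1 = np.where(swap, mu2, mu1)
    l2 = np.where(swap, mu1, mu2)
    rb2 = (l1 / (l2 + 1e-12)) ** 2          # blobness ratio (squared)
    s2 = l1**2 + l2**2                      # structure strength (squared)
    v = np.exp(-rb2 / (2 * beta**2)) * (1 - np.exp(-s2 / (2 * c**2)))
    v[l2 > 0] = 0.0                         # bright vessels need l2 < 0
    return v

# Synthetic bright horizontal line on a dark background
img = np.zeros((40, 40))
img[20, 5:35] = 100.0
v = vesselness_2d(img)
# The response peaks on the line and vanishes on the flat background
```

A full Frangi filter takes the maximum response over several scales sigma; 3D variants, as used for coronary and hepatic vessels, use the three eigenvalues of the 3x3 Hessian instead.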
- …