
    Doctor of Philosophy

    Image segmentation entails the partitioning of an image domain, usually in two or three dimensions, so that each partition or segment has some meaning that is relevant to the application at hand. Accurate image segmentation is a crucial challenge in many disciplines, including medicine, computer vision, and geology. In some applications, heterogeneous pixel intensities; noisy, ill-defined, or diffusive boundaries; and irregular shapes with high variability can make it challenging to meet accuracy requirements. Various segmentation approaches tackle such challenges by casting the segmentation problem as an energy-minimization problem and solving it using efficient optimization algorithms. These approaches are broadly classified as either region-based or edge (surface)-based, depending on the features on which they operate. The focus of this dissertation is on the development of a surface-based energy model, the design of efficient formulations of optimization frameworks to incorporate such energy, and the solution of the energy-minimization problem using graph cuts. This dissertation comprises a set of four papers whose motivation is the efficient extraction of the left atrium wall from the late gadolinium enhancement magnetic resonance imaging (LGE-MRI) image volume. It also applies these energy formulations to other applications, including contact lens segmentation in optical coherence tomography (OCT) data and the extraction of geologic features in seismic data. Chapters 2 through 5 (papers 1 through 4) explore building a surface-based image segmentation model by progressively adding components to improve its accuracy and robustness. The first paper defines a parametric search space and its discrete formulation in the form of a multilayer three-dimensional mesh model within which the segmentation takes place. It includes a generative intensity model, and we optimize using a graph formulation of the surface net problem.
The second paper proposes a Bayesian framework with a Markov random field (MRF) prior that gives rise to another class of surface nets, which provides better segmentation with smooth boundaries. The third paper presents a maximum a posteriori (MAP)-based surface estimation framework that relies on a generative image model and incorporates global shape priors, in addition to the MRF, within the Bayesian formulation. Thus, the resulting surface not only depends on the learned model of shapes, but also accommodates irregularities in the test data through smooth deviations from these priors. Further, the paper proposes a new closed-form shape parameter estimation scheme for segmentation as part of the optimization process. Finally, the fourth paper (under review at the time of this document) presents an extensive analysis of the MAP framework and introduces improved mesh generation and generative intensity models. It also performs a thorough analysis of the segmentation results that demonstrates the effectiveness of the proposed method qualitatively, quantitatively, and clinically. Chapter 6, consisting of unpublished work, demonstrates the application of an MRF-based Bayesian framework to segment coupled surfaces of contact lenses in optical coherence tomography images. This chapter also shows an application to the extraction of geological structures in seismic volumes. Due to the large sizes of seismic volume datasets, we also present fast, approximate surface-based energy-minimization strategies that achieve substantial speed-ups and reduced memory consumption.
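The surface-extraction idea underlying this line of work — a data cost for each candidate surface position plus a smoothness penalty between neighbouring columns — can be illustrated with a toy sketch. This is not the dissertation's graph-cut formulation: it solves a 1D analogue by Viterbi-style dynamic programming, and the `extract_surface` helper and its cost arrays are illustrative assumptions.

```python
import numpy as np

def extract_surface(data_cost, smooth_weight=1.0):
    """Pick one height per column minimizing data cost plus a
    smoothness penalty on adjacent-column height differences."""
    n_cols, n_heights = data_cost.shape
    cost = data_cost[0].astype(float).copy()
    back = np.zeros((n_cols, n_heights), dtype=int)
    heights = np.arange(n_heights)
    for x in range(1, n_cols):
        # pairwise smoothness penalty between every (prev, cur) height pair
        trans = smooth_weight * np.abs(heights[:, None] - heights[None, :])
        total = cost[:, None] + trans          # indexed [prev, cur]
        back[x] = np.argmin(total, axis=0)     # best predecessor per height
        cost = total[back[x], heights] + data_cost[x]
    # backtrack the optimal surface
    surface = np.empty(n_cols, dtype=int)
    surface[-1] = int(np.argmin(cost))
    for x in range(n_cols - 1, 0, -1):
        surface[x - 1] = back[x, surface[x]]
    return surface
```

With a data cost that is low along one row, the recovered surface follows that row; raising `smooth_weight` trades data fidelity for flatness, which is the same trade-off the dissertation's energy encodes in three dimensions.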

    IMAGE PROCESSING, SEGMENTATION AND MACHINE LEARNING MODELS TO CLASSIFY AND DELINEATE TUMOR VOLUMES TO SUPPORT MEDICAL DECISION

    Techniques for processing and analysing images and medical data have become central to translational applications and research in clinical and pre-clinical environments. These techniques improve diagnostic accuracy and enable efficient assessment of treatment response by means of quantitative biomarkers. In the era of personalized medicine, early and accurate prediction of therapy response in patients is still a critical issue. In radiation therapy planning, Magnetic Resonance Imaging (MRI) provides high-quality detailed images and excellent soft-tissue contrast, while Computerized Tomography (CT) images provide attenuation maps and very good hard-tissue contrast. In this context, Positron Emission Tomography (PET) is a non-invasive imaging technique which has the advantage, over morphological imaging techniques, of providing functional information about the patient’s disease. In the last few years, several criteria to assess therapy response in oncological patients have been proposed, ranging from anatomical to functional assessments. Changes in tumour size are not necessarily correlated with changes in tumour viability and outcome. In addition, morphological changes resulting from therapy occur more slowly than functional changes. Inclusion of PET images in radiotherapy protocols is desirable because PET is predictive of treatment response and provides crucial information to accurately target the oncological lesion and to escalate the radiation dose without increasing normal-tissue injury. For this reason, PET may be used for improving the Planning Target Volume (PTV). Nevertheless, due to the nature of PET images (low spatial resolution, high noise, and weak boundaries), metabolic image processing is a critical task.
The aim of this Ph.D. thesis is to develop smart methodologies for the medical imaging field that address various problems in medical image and data analysis, working closely with radiologist physicians. Several issues in the clinical environment have been addressed, and improvements have been produced in fields such as organ and tissue segmentation and classification to delineate tumor volumes using machine learning techniques to support medical decisions. In particular, the following topics have been the object of this study: • Technique for Crohn’s Disease Classification using Kernel Support Vector Machines; • Automatic Multi-Seed Detection for MR Breast Image Segmentation; • Tissue Classification in PET Oncological Studies; • KSVM-Based System for the Definition, Validation and Identification of the Incisional Hernia Recurrence Risk Factors; • A Smart and Operator-Independent System to Delineate Tumours in Positron Emission Tomography Scans; • Active Contour Algorithm with Discriminant Analysis for Delineating Tumors in Positron Emission Tomography; • K-Nearest Neighbor Driving Active Contours to Delineate Biological Tumor Volumes; • Tissue Classification to Support Local Active Delineation of Brain Tumors; • A Fully Automatic System for Positron Emission Tomography Study Segmentation. This work has been developed in collaboration with the medical staff and colleagues at the: • Dipartimento di Biopatologia e Biotecnologie Mediche e Forensi (DIBIMED), University of Palermo; • Cannizzaro Hospital of Catania; • Istituto di Bioimmagini e Fisiologia Molecolare (IBFM), Consiglio Nazionale delle Ricerche (CNR) of Cefalù; • School of Electrical and Computer Engineering at Georgia Institute of Technology. The proposed contributions have produced scientific publications in indexed computer science and medical journals and conferences.
They are very useful for PET and MRI image segmentation and may be used daily as Medical Decision Support Systems to enhance the current methodology performed by healthcare operators in radiotherapy treatments. The future developments of this research concern the integration of data acquired by image analysis with the management and processing of big data coming from a wide range of heterogeneous sources.

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.

    Proceedings Virtual Imaging Trials in Medicine 2024

    This submission comprises the proceedings of the 1st Virtual Imaging Trials in Medicine conference, organized by Duke University on April 22-24, 2024. The listed authors serve as the program directors for this conference. The VITM conference is a pioneering summit uniting experts from academia, industry, and government in the fields of medical imaging and therapy to explore the transformative potential of in silico virtual trials and digital twins in revolutionizing healthcare. The proceedings are categorized by the respective days of the conference: Monday presentations, Tuesday presentations, and Wednesday presentations, followed by the abstracts for the posters presented on Monday and Tuesday.

    NON-RIGID BODY MECHANICAL PROPERTY RECOVERY FROM IMAGES AND VIDEOS

    Material properties are of great importance in surgical simulation and virtual reality. The mechanical properties of human soft tissue are critical to characterizing the tissue deformation of each patient. Studies have shown that the tissue stiffness described by these properties may indicate an abnormal pathological process. The recovered elasticity parameters can assist surgeons in better pre-operative surgical planning and enable medical robots to carry out personalized surgical procedures. Traditional elasticity parameter estimation methods rely largely on known external forces measured by special devices and on strain fields estimated from landmarks on the deformable bodies, or they are limited to mechanical property estimation for quasi-static deformation. For virtual reality applications such as virtual try-on, garment material capture is of equal significance to geometry reconstruction. In this thesis, I present novel approaches for automatically estimating the material properties of soft bodies from images or from a video capturing the motion of the deformable body. I use a coupled simulation-optimization-identification framework to deform one soft body at its original, non-deformed state to match the deformed geometry of the same object in its deformed state. The optimal set of material parameters is thereby determined by minimizing an error metric function. This method can simultaneously recover the elasticity parameters of multiple regions of soft bodies using Finite Element Method-based simulation (of either linear or nonlinear materials undergoing large deformation) and particle-swarm optimization methods. I demonstrate the effectiveness of this approach on real-time interaction with virtual organs in patient-specific surgical simulation, using parameters acquired from low-resolution medical images. With the recovered elasticity parameters and the age of the prostate cancer patients as features, I build a cancer grading and staging classifier.
The classifier achieves accuracy of up to 91% for predicting cancer T-stage and 88% for predicting Gleason score. To recover the mechanical properties of soft bodies from a video, I propose a method which couples a statistical graphical model with FEM simulation. Using this method, I can recover the material properties of a soft ball from a high-speed camera video that captures the motion of the ball. Furthermore, I extend the material recovery framework to fabric material identification. I propose a novel method for garment material extraction from a single-view image and a learning-based cloth material recovery method from a video recording the motion of the cloth. Most recent garment capturing techniques rely on acquiring multiple views of clothing, which may not always be readily available, especially in the case of pre-existing photographs from the web. As an alternative, I propose a method that can compute a 3D model of a human body and its outfit from a single photograph with little human interaction. My proposed learning-based cloth material type recovery method exploits a simulated dataset and a deep neural network. I demonstrate the effectiveness of my algorithms by re-purposing the reconstructed garments for virtual try-on, garment transfer, and cloth animation on digital characters. With the recovered mechanical properties, one can construct a virtual world with soft objects exhibiting real-world behaviors.
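The coupled simulation-optimization-identification loop described above can be sketched in miniature. The real framework drives an FEM simulator; here a toy analytic spring model (`simulate`, an assumption for illustration only) stands in for the forward simulation, and a basic particle swarm searches for the stiffness whose predicted displacement matches the observation.

```python
import random

def simulate(stiffness, force=10.0):
    # Toy stand-in for the FEM forward simulation:
    # displacement of a linear spring under a known force, u = F / k.
    return force / stiffness

def recover_stiffness(observed_u, n_particles=20, n_iters=60, seed=0):
    """Particle-swarm search for the stiffness minimizing the
    squared error between simulated and observed displacement."""
    rng = random.Random(seed)
    lo, hi = 0.1, 100.0
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                                   # per-particle best
    perr = [(simulate(p) - observed_u) ** 2 for p in pos]
    g = min(range(n_particles), key=lambda i: perr[i])
    gbest, gerr = pbest[g], perr[g]                  # swarm-wide best
    for _ in range(n_iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # standard velocity update: inertia + cognitive + social terms
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], lo), hi)
            err = (simulate(pos[i]) - observed_u) ** 2
            if err < perr[i]:
                pbest[i], perr[i] = pos[i], err
                if err < gerr:
                    gbest, gerr = pos[i], err
    return gbest
```

Because the swarm only needs forward evaluations of the simulator, the same loop applies unchanged when `simulate` is replaced by an FEM solve, which is the design choice the thesis exploits.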

    Proceedings of the First International Workshop on Mathematical Foundations of Computational Anatomy (MFCA'06) - Geometrical and Statistical Methods for Modelling Biological Shape Variability

    Non-linear registration and shape analysis are well-developed research topics in the medical image analysis community. There is nowadays a growing number of methods that can faithfully deal with the underlying biomechanical behaviour of intra-subject shape deformations. However, it is more difficult to relate the anatomical shapes of different subjects. The goal of computational anatomy is to analyse and to statistically model this specific type of geometrical information. In the absence of any justified physical model, a natural approach is to explore very general mathematical methods, for instance diffeomorphisms. However, working with such infinite-dimensional spaces raises some deep computational and mathematical problems. In particular, one of the key problems is doing statistics. Likewise, modelling the variability of surfaces requires shape spaces that are much more complex than those for curves. To cope with these difficulties, different methodological and computational frameworks have been proposed. The goal of the workshop was to foster interactions between researchers investigating the combination of geometry and statistics for modelling biological shape variability from images and surfaces. A special emphasis was put on theoretical developments, with applications and results welcomed as illustrations. Contributions were solicited in the following areas: * Riemannian and group-theoretical methods on non-linear transformation spaces * Advanced statistics on deformations and shapes * Metrics for computational anatomy * Geometry and statistics of surfaces. 26 submissions of very high quality were received and reviewed by two members of the program committee. 12 papers were finally selected for oral presentations and 8 for poster presentations. 16 of these papers are published in these proceedings, and 4 papers are published in the proceedings of MICCAI'06 (for copyright reasons, only extended abstracts are provided here).

    Multi-criteria optimization algorithms for high dose rate brachytherapy

    The overall purpose of this thesis is to use the knowledge of radiation physics, computer programming and computing hardware to improve cancer treatments.
In particular, designing a treatment plan in radiation therapy can be complex and user-dependent, and this thesis aims to simplify current treatment planning in high dose rate (HDR) prostate brachytherapy. This project started from a widely used inverse planning algorithm, Inverse Planning Simulated Annealing (IPSA). To eventually arrive at an ultra-fast and automatic inverse planning algorithm, three multi-criteria optimization (MCO) algorithms were implemented. With the MCO algorithms, a desirable plan was selected after computing a set of treatment plans with various trade-offs. In the first study, an MCO algorithm was introduced to explore the Pareto surfaces in HDR brachytherapy. The algorithm was inspired by the MCO feature integrated in the Raystation system (RaySearch Laboratories, Stockholm, Sweden). For each case, 300 treatment plans were serially generated to obtain a uniform approximation of the Pareto surface. Each Pareto-optimal plan was computed with IPSA, and each new plan was added to the portion of the Pareto surface where the distance between its upper boundary and its lower boundary was the largest. In a companion (second) study, a knowledge-based MCO (kMCO) algorithm was implemented to shorten the computation time of the MCO algorithm. To achieve this, two strategies were implemented: a prediction of the clinically relevant solution space from previous knowledge, and parallel computation of treatment plans with two six-core CPUs. As a result, a small plan dataset (14 plans) was created, and one plan was selected as the kMCO plan. The planning efficiency and the dosimetric performance were compared between the physician-approved plans and the kMCO plans for 236 cases. The third and final study of this thesis was conducted in cooperation with Cédric Bélanger. A graphics processing unit (GPU)-based MCO (gMCO) algorithm was implemented to further speed up the computation.
Furthermore, a quasi-Newton optimization engine was implemented to replace the simulated annealing used in the first and second studies. In this way, one thousand IPSA-equivalent treatment plans with various trade-offs were computed in parallel. One plan was selected as the gMCO plan from the computed plan dataset. The planning time and the dosimetric results were compared between the physician-approved plans and the gMCO plans for 457 cases. A large-scale comparison against the physician-approved plans shows that our latest MCO algorithm (gMCO) can result in improved treatment planning efficiency (from minutes to 9.4 s) as well as improved treatment plan dosimetric quality (Radiation Therapy Oncology Group (RTOG) acceptance rate from 92.6% to 99.8%). With three implemented MCO algorithms, this thesis represents a sustained effort to develop an ultra-fast, automatic and robust inverse planning algorithm for HDR brachytherapy.
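The front-filling strategy of the first study — repeatedly solving a scalarized problem and adding each new plan where the current approximation has its largest gap — can be sketched on a toy convex bi-objective problem. The `solve_weighted` routine below is an illustrative stand-in for one IPSA-style optimization (its closed-form trade-off is an assumption for this example), not the thesis implementation.

```python
import math

def solve_weighted(w):
    # Toy stand-in for one scalarized optimization: minimizing
    # w*f1 + (1-w)*f2 over a convex trade-off curve f1=t^2, f2=(1-t)^2
    # has the closed-form optimum t = 1 - w.
    t = 1.0 - w
    return (t * t, (1.0 - t) ** 2)    # (f1, f2) of the optimal plan

def approximate_front(n_plans=10):
    """Grow a Pareto-front approximation by always refining the
    largest remaining gap between adjacent plans in objective space."""
    weights = [0.0, 1.0]              # start from the two extreme plans
    plans = [solve_weighted(w) for w in weights]
    while len(plans) < n_plans:
        # locate the adjacent pair with the largest objective-space gap
        gaps = [math.dist(plans[i], plans[i + 1])
                for i in range(len(plans) - 1)]
        i = max(range(len(gaps)), key=gaps.__getitem__)
        # solve a new scalarized problem with the midpoint weight
        w_new = 0.5 * (weights[i] + weights[i + 1])
        weights.insert(i + 1, w_new)
        plans.insert(i + 1, solve_weighted(w_new))
    return plans
```

Spending each new optimization where the approximation is coarsest is what yields a uniform front with a fixed plan budget, rather than clustering plans near one extreme.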

    Book of Abstracts 15th International Symposium on Computer Methods in Biomechanics and Biomedical Engineering and 3rd Conference on Imaging and Visualization

    In this edition, the two events will run together as a single conference, highlighting the strong connection with the Taylor & Francis journals: Computer Methods in Biomechanics and Biomedical Engineering (John Middleton and Christopher Jacobs, Eds.) and Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization (João Manuel R.S. Tavares, Ed.). The conference has become a major international meeting on computational biomechanics, imaging and visualization. In this edition, the main program includes 212 presentations. In addition, sixteen renowned researchers will give plenary keynotes, addressing current challenges in computational biomechanics and biomedical imaging. In Lisbon, for the first time, a session dedicated to awarding the winner of the Best Paper in CMBBE Journal will take place. We believe that CMBBE2018 will have a strong impact on the development of computational biomechanics and biomedical imaging and visualization, identifying emerging areas of research and promoting collaboration and networking between participants. This impact is evidenced through the well-known research groups, commercial companies and scientific organizations who continue to support and sponsor the CMBBE meeting series. In fact, the conference is enriched with five workshops on specific scientific topics and commercial software.