A method and software for segmentation of anatomic object ensembles by deformable m-reps
Deformable shape models (DSMs) comprise a general approach that shows great promise for automatic image segmentation. Published studies by others and our own research results strongly suggest that segmentation of a normal or near-normal object from 3D medical images will be most successful when the DSM approach uses 1) knowledge of the geometry of not only the target anatomic object but also the ensemble of objects providing context for the target object and 2) knowledge of the image intensities to be expected relative to the geometry of the target and contextual objects. The segmentation will be most efficient when the deformation operates at multiple object-related scales and uses deformations that include not just local translations but the biologically important transformations of bending and twisting, i.e., local rotation, and local magnification. In computer vision an important class of DSM methods uses explicit geometric models in a Bayesian statistical framework to provide a priori information used in posterior optimization to match the DSM against a target image. In this approach a DSM of the object to be segmented is placed in the target image data and undergoes a series of rigid and non-rigid transformations that deform the model to closely match the target object. The deformation process is driven by optimizing an objective function that has terms for the geometric typicality and model-to-image match for each instance of the deformed model. The success of this approach depends strongly on the object representation, i.e., the structural details and parameter set for the DSM, which in turn determines the analytic form of the objective function. This paper describes a form of DSM called m-reps that has or allows these properties, and a method of segmentation consisting of large to small scale posterior optimization of m-reps. 
Segmentation by deformable m-reps, together with the appropriate data representations, visualizations, and user interface, has been implemented in software that accomplishes 3D segmentations in a few minutes. Software for building and training models has also been developed. The methods underlying this software and its abilities are the subject of this paper.
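The posterior optimization described above (geometric typicality plus model-to-image match, optimized from large to small scale) can be sketched in miniature. Everything below is an illustrative stand-in, not the m-reps parameterization: the typicality term is a toy Gaussian log-density, the image match a quadratic penalty, and the weight schedule a schematic for the scale hierarchy.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative stand-ins: a model instance is a small parameter vector m,
# its typicality a Gaussian log-density learned from training models.
mean = np.zeros(4)
cov_inv = np.eye(4)

def log_typicality(m):
    d = m - mean
    return -0.5 * d @ cov_inv @ d

def log_image_match(m, target):
    # Stand-in for a profile-based intensity match against the image.
    return -np.sum((m - target) ** 2)

def neg_log_posterior(m, target, w):
    # Posterior objective: geometric typicality + weighted image match.
    return -(log_typicality(m) + w * log_image_match(m, target))

target = np.array([0.5, -0.2, 0.1, 0.0])
m = np.zeros(4)
# Large-to-small-scale optimization, mimicked here by re-weighting the
# image term at each stage and warm-starting from the previous solution.
for w in (0.5, 1.0, 2.0):
    m = minimize(neg_log_posterior, m, args=(target, w),
                 method="Nelder-Mead").x
```

With these quadratic stand-ins the final optimum is 2w/(1 + 2w) times the target, so the fitted model is pulled further toward the image evidence as the match weight grows.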
The Probabilistic Active Shape Model: From Model Construction to Flexible Medical Image Segmentation
Automatic processing of three-dimensional image data acquired with computed tomography or magnetic resonance imaging plays an increasingly important role in medicine. For example, the automatic segmentation of anatomical structures in tomographic images makes it possible to generate three-dimensional visualizations of a patient's anatomy and thereby supports surgeons during the planning of various kinds of surgeries.
Because organs in medical images often exhibit low contrast to adjacent structures, and because image quality may be hampered by noise or other acquisition artifacts, developing segmentation algorithms that are both robust and accurate is very challenging. To increase robustness, model-based algorithms are essential, for example algorithms that incorporate prior knowledge about an organ's shape into the segmentation process. Recent research has shown that Statistical Shape Models are especially appropriate for robust medical image segmentation. In these models, the typical shape of an organ is learned from a set of training examples. However, Statistical Shape Models have two major disadvantages: the construction of the models is relatively difficult, and the models are often used too restrictively, so that the resulting segmentation does not delineate the organ exactly.
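The core of a Statistical Shape Model can be sketched with plain PCA on aligned landmark vectors (a minimal illustration with synthetic data, not the thesis implementation): the mean shape and the leading modes of variation are learned from training shapes, and new plausible shapes are generated by bounded combinations of the modes.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training set: 20 aligned shapes, each flattened from
# 10 corresponding 2-D landmarks into a 20-dimensional vector.
shapes = rng.normal(size=(20, 20))

mean_shape = shapes.mean(axis=0)
X = shapes - mean_shape

# SVD of the centred data gives the modes of variation (rows of Vt)
# and the variance explained by each mode.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
eigvals = s ** 2 / (len(shapes) - 1)

# A plausible new shape: mean plus a combination of the first k modes,
# with coefficients typically bounded by +/- 3 std. deviations per mode.
k = 5
b = np.zeros(k)
b[0] = 2.0 * np.sqrt(eigvals[0])
new_shape = mean_shape + b @ Vt[:k]
```

Truncating to the first k modes keeps most of the training variance while constraining segmentations to shapes the model considers plausible.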
This thesis addresses both problems: The first part of the thesis introduces new methods for establishing correspondence between training shapes, which is a necessary prerequisite for shape model learning. The developed methods include consistent parameterization algorithms for organs with spherical and genus 1 topology, as well as a nonrigid mesh registration algorithm for shapes with arbitrary topology. The second part of the thesis presents a new shape model-based segmentation algorithm that allows for an accurate delineation of organs. In contrast to existing approaches, it is possible to integrate not only linear shape models into the algorithm, but also nonlinear shape models, which allow for a more specific description of an organās shape variation.
The proposed segmentation algorithm is evaluated in three applications to medical image data: Liver and vertebra segmentation in contrast-enhanced computed tomography scans, and prostate segmentation in magnetic resonance images
Multi-Atlas Segmentation of Biomedical Images: A Survey
Multi-atlas segmentation (MAS), first introduced and popularized by the pioneering work of Rohlfing
Kernel-Based Learning Methods for Virtual Screening (Kern-basierte Lernverfahren für das virtuelle Screening)
We investigate the utility of modern kernel-based machine learning methods for ligand-based virtual screening. In particular, we introduce a new graph kernel based on iterative graph similarity and optimal assignments, apply kernel principal component analysis to projection error-based novelty detection, and discover a new selective agonist of the peroxisome proliferator-activated receptor gamma using Gaussian process regression. Virtual screening, the computational ranking of compounds with respect to a predicted property, is a cheminformatics problem relevant to the hit generation phase of drug development. Its ligand-based variant relies on the similarity principle, which states that (structurally) similar compounds tend to have similar properties. We describe the kernel-based machine learning approach to ligand-based virtual screening; in this, we stress the role of molecular representations, including the (dis)similarity measures defined on them, investigate effects in high-dimensional chemical descriptor spaces and their consequences for similarity-based approaches, review literature recommendations on retrospective virtual screening, and present an example workflow. Graph kernels are formal similarity measures that are defined directly on graphs, such as the annotated molecular structure graph, and correspond to inner products. We review graph kernels, in particular those based on random walks, subgraphs, and optimal vertex assignments. Combining the latter with an iterative graph similarity scheme, we develop the iterative similarity optimal assignment graph kernel, give an iterative algorithm for its computation, prove convergence of the algorithm and the uniqueness of the solution, and provide an upper bound on the number of iterations necessary to achieve a desired precision. In a retrospective virtual screening study, our kernel consistently improved performance over chemical descriptors as well as other optimal assignment graph kernels.
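The optimal-assignment idea can be sketched as follows: given any vertex-level similarity (here a hypothetical RBF on vertex feature vectors), match the vertices of two molecular graphs so that the summed similarity is maximal. The iterative graph-similarity refinement of the actual ISOA kernel is omitted from this sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def vertex_similarity(a_feats, b_feats):
    # RBF similarity between vertex feature vectors (hypothetical
    # features; real ones would encode atom types, bonds, etc.).
    d = np.linalg.norm(a_feats[:, None, :] - b_feats[None, :, :], axis=-1)
    return np.exp(-d ** 2)

def optimal_assignment_similarity(a_feats, b_feats):
    S = vertex_similarity(a_feats, b_feats)
    # Hungarian algorithm: maximal-weight vertex matching.
    rows, cols = linear_sum_assignment(S, maximize=True)
    # Normalise by the larger graph so self-similarity is 1.
    return S[rows, cols].sum() / max(len(a_feats), len(b_feats))
```

Normalizing by the larger graph makes self-similarity 1; note that optimal-assignment similarities are not guaranteed to be positive semidefinite kernels in general.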
Chemical data sets often lie on manifolds of lower dimensionality than the embedding chemical descriptor space. Dimensionality reduction methods try to identify these manifolds, effectively providing descriptive models of the data. For spectral methods based on kernel principal component analysis, the projection error is a quantitative measure of how well new samples are described by such models. This can be used for the identification of compounds structurally dissimilar to the training samples, leading to projection error-based novelty detection for virtual screening using only positive samples. We provide proof of principle by using principal component analysis to learn the concept of fatty acids. The peroxisome proliferator-activated receptor (PPAR) is a nuclear transcription factor that regulates lipid and glucose metabolism, playing a crucial role in the development of type 2 diabetes and dyslipidemia. We establish a Gaussian process regression model for PPAR gamma agonists using a combination of chemical descriptors and the iterative similarity optimal assignment kernel via multiple kernel learning. Screening of a vendor library and subsequent testing of 15 selected compounds in a cell-based transactivation assay resulted in 4 active compounds. One compound, a natural product with a cyclobutane scaffold, is a full selective PPAR gamma agonist (EC50 = 10 ± 0.2 µM, inactive on PPAR alpha and PPAR beta/delta at 10 µM). The study delivered a novel PPAR gamma agonist, de-orphanized a natural bioactive product, and hints at the natural product origins of pharmacophore patterns in synthetic ligands.
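The projection-error criterion can be illustrated with plain (linear) principal component analysis, matching the fatty-acid proof of principle; the kernelized version replaces the subspace projection with one in feature space. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical descriptors: training compounds lying exactly on a
# 2-D subspace of a 10-D descriptor space.
basis = rng.normal(size=(2, 10))
train = rng.normal(size=(100, 2)) @ basis

mean = train.mean(axis=0)
# Principal subspace of the training data (first 2 right singular vectors).
Vt = np.linalg.svd(train - mean, full_matrices=False)[2][:2]

def projection_error(x):
    # Distance from x to its projection onto the principal subspace:
    # small for samples described by the model, large for novel ones.
    c = x - mean
    return np.linalg.norm(c - (c @ Vt.T) @ Vt)

inlier = rng.normal(size=2) @ basis   # lies on the learned manifold
outlier = rng.normal(size=10) * 5     # structurally dissimilar sample
```

Thresholding the projection error then flags compounds dissimilar to the (positive-only) training set as novel.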
Graphlet-adjacencies provide complementary views on the functional organisation of the cell and cancer mechanisms
Recent biotechnological advances have led to a wealth of biological network data. Topological analysis of these networks (i.e., the analysis of their structure) has led to breakthroughs in biology and medicine. The state-of-the-art topological node and network descriptors are based on graphlets, induced connected subgraphs of different shapes (e.g., paths, triangles). However, current graphlet-based methods ignore neighbourhood information (i.e., what nodes are connected). Therefore, to capture topology and connectivity information simultaneously, I introduce graphlet adjacency, which considers two nodes adjacent based on their frequency of co-occurrence on a given graphlet. I use graphlet adjacency to generalise spectral methods and apply these on molecular networks. I show that, depending on the chosen graphlet, graphlet spectral clustering uncovers clusters enriched in different biological functions, and graphlet diffusion of gene mutation scores predicts different sets of cancer driver genes. This demonstrates that graphlet adjacency captures topology-function and topology-disease relationships in molecular networks.
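For the simplest nontrivial case, the triangle graphlet, the graphlet-adjacency weight of an edge (the number of triangles the two endpoints co-occur on) can be computed directly from the ordinary adjacency matrix. This small sketch covers only that special case, not the general construction over all graphlet shapes:

```python
import numpy as np

# Toy undirected graph: nodes 0-1-2 form a triangle; node 3 is a
# pendant attached to node 2 and lies on no triangle.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

# Triangle-based graphlet adjacency: (A @ A)[u, v] counts common
# neighbours of u and v; masking with A keeps only actual edges, so
# each entry is the number of triangles the edge (u, v) lies on.
A_tri = (A @ A) * A
```

Spectral methods (clustering, diffusion) can then be run on `A_tri` instead of `A`, which is the generalisation the thesis exploits.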
To further detail these relationships, I take a pathway-focused approach. To enable this investigation, I introduce graphlet eigencentrality to compute the importance of a gene in a pathway either from the local pathway perspective or from the global network perspective. I show that pathways are best described by the graphlet adjacencies that capture the importance of their functionally critical genes. I also show that cancer driver genes characteristically perform hub roles between pathways.
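A minimal sketch of eigencentrality on a graphlet adjacency: compute the leading eigenvector, here by power iteration on the triangle-based adjacency of a toy graph. This illustrates only the global-network perspective, not the thesis's pathway-restricted variant.

```python
import numpy as np

# Triangle-based graphlet adjacency of a toy graph: nodes 0-2 form a
# triangle; node 3 hangs off node 2 and lies on no triangle.
A_tri = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0]], dtype=float)

def eigencentrality(M, iters=200):
    # Power iteration for the leading eigenvector of a nonnegative matrix.
    v = np.ones(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v

c = eigencentrality(A_tri)
```

Node 3 receives zero centrality because it participates in no triangle, showing how the chosen graphlet changes which genes look important.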
Given the latter finding, I hypothesise that cancer pathways should be identified by changes in their pathway-pathway relationships. Within this context, I propose pathway-driven non-negative matrix tri-factorisation (PNMTF), which fuses molecular network data and pathway annotations to learn an embedding space that captures the organisation of a network as a composition of subnetworks. In this space, I measure the functional importance of a pathway or gene in the cell and its functional disruption in cancer. I apply this method to predict genes and pathways involved in four major cancers. By using graphlet adjacency, I can exploit the tendency of cancer-related genes to perform hub roles to improve the prediction accuracy.
Regional Appearance Modeling based on the Clustering of Intensity Profiles
Model-based image segmentation is a popular approach for the segmentation of anatomical structures from medical images because it includes prior knowledge about the shape and appearance of structures of interest. This paper focuses on the formulation of a novel appearance prior that can cope with large variability between subjects, for instance due to the presence of pathologies. Instead of relying on Principal Component Analysis as in Statistical Appearance Models, our approach relies on a multimodal intensity profile atlas from which a point may be assigned to several profile modes, each consisting of a mean profile and its covariance matrix. These profile modes are first estimated without any intra-subject registration through a boosted EM classification based on spectral clustering. Then, they are projected on a reference mesh whose role is to store the appearance information in a common geometric representation. We show that this prior leads to better performance than the classical monomodal Principal Component Analysis approach while relying on fewer profile modes
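The multimodal profile atlas can be approximated in miniature with an EM-fitted Gaussian mixture over intensity profiles. The paper's boosted EM with spectral-clustering initialization is replaced here by a plain `GaussianMixture`, and the profiles are synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Hypothetical intensity profiles sampled along surface normals
# (length 7), drawn from two appearance modes, e.g. an organ/air
# boundary versus an organ/soft-tissue boundary.
mode1 = rng.normal(loc=np.linspace(0, 1, 7), scale=0.05, size=(50, 7))
mode2 = rng.normal(loc=np.linspace(1, 0, 7), scale=0.05, size=(50, 7))
profiles = np.vstack([mode1, mode2])

# EM clustering into profile modes; each mode keeps a mean profile
# and a full covariance matrix, as in the multimodal profile atlas.
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(profiles)
means = gmm.means_       # (2, 7) mean profiles
covs = gmm.covariances_  # (2, 7, 7) covariance matrices
```

Each fitted component supplies one profile mode (a mean profile plus covariance), and a surface point can then be softly assigned to several modes.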
Quantifying, Understanding and Predicting Differences Between Planned and Delivered Dose to Organs at Risk in Head & Neck Cancer Patients Undergoing Radical Radiotherapy to Promote Intelligently Targeted Adaptive Radiotherapy
Introduction: Radical radiotherapy (RT) is an effective but toxic treatment for head and neck cancer (HNC). Contemporary radiotherapy techniques sculpt dose to target disease and avoid organs at risk (OARs), but anatomical change during treatment means that the radiation dose delivered to the patient, the delivered dose (DA), differs from that anticipated at planning, the planned dose (DP). Modifying the RT plan during treatment, known as adaptive radiotherapy (ART), could mitigate these risks by reducing dose to OARs. However, clinical data to guide patient selection for, and timing of, ART are lacking.
Methods: 337 patients with HNC were recruited to the Cancer Research UK VoxTox study. Demographic, disease and treatment data were collated, and both DP and DA to organs at risk (OARs) were computed from daily megavoltage CT image guidance scans, using an open-source deformable image registration package (Elastix). Toxicity data were prospectively collected. Relationships between DP, DA and late toxicities were investigated with univariate and logistic-regression normal tissue complication probability (NTCP) modelling approaches. A sub-study of VoxTox recruited 18 patients who had MRI scans before RT fractions 1, 6, 16, and 26. Changes in salivary gland volumes and relative apparent diffusion coefficient (ADC) values were measured and related to toxicity events.
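A logistic-regression NTCP model has the standard sigmoid form: the probability of a toxicity event rises smoothly with OAR dose. The coefficients below are illustrative placeholders, not fitted VoxTox values:

```python
import numpy as np

def ntcp(dose, b0, b1):
    # Logistic NTCP: probability of a complication given an OAR dose.
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * dose)))

# Placeholder coefficients: chosen so the complication probability
# reaches 50% at 40 Gy (purely illustrative).
b0, b1 = -4.0, 0.1
doses = np.array([20.0, 40.0, 60.0])
probs = ntcp(doses, b0, b1)
```

Fitting b0 and b1 separately to DP-based and DA-based dose metrics is what allows the planned- versus delivered-dose toxicity models to be compared.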
Results: Spinal cord dose differences were small and not predicted by weight loss or shape change. Mean DA to all other OARs was higher than DP; factors predicting higher DA included primary disease site, concomitant therapy, shape change and advanced neck disease. Nine patients (3.7%) saw DA exceed DP by 2 Gy in more than half of the OARs assessed; all had received bilateral neck RT for N-stage 2b oropharyngeal cancer. Strong uni- and multivariate relationships between OAR dose and toxicity were seen. Differences between DA- and DP-based dose-toxicity models were minimal and not statistically significant. On MRI, both parotid and submandibular glands shrank during treatment, whilst relative ADC rose. Relationships with toxicity were inconclusive.
Conclusions: Small differences between OAR DP and DA mean that DA-based toxicity prediction models confer negligible additional benefit at the population level. Factors such as primary disease sub-site, concomitant systemic therapy, staging and shape change may help to select the patients who do develop clinically significant dose differences and who would benefit most from ART for toxicity reduction.
Development of computer-based algorithms for unsupervised assessment of radiotherapy contouring
INTRODUCTION: Despite the advances in radiotherapy treatment delivery, target volume delineation remains one of the greatest sources of error in the radiotherapy delivery process, which can lead to poor tumour control probability and impact clinical outcome. Contouring assessments are performed to ensure high quality of target volume definition in clinical trials, but this can be subjective and labour-intensive.
This project addresses the hypothesis that computational segmentation techniques, with a given prior, can be used to develop an image-based tumour delineation process for contour assessments. This thesis focuses on the exploration of segmentation techniques to develop an automated method for generating reference delineations in the setting of advanced lung cancer. The novelty of this project is in the use of the initial clinician outline as a prior for image segmentation.
METHODS: Automated segmentation processes were developed for stage II and III non-small cell lung cancer using the IDEAL-CRT clinical trial dataset. Marker-controlled watershed segmentation, two active contour approaches (edge- and region-based) and graph-cut applied on superpixels were explored. k-nearest neighbour (k-NN) classification of tumour from normal tissues based on texture features was also investigated.
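The k-NN texture classification step can be sketched with synthetic feature vectors; the feature dimensionality, class separation and neighbour count below are placeholders, not the thesis's actual texture descriptors:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
# Hypothetical texture feature vectors per ROI (e.g., grey-level
# statistics), for tumour (label 1) and normal tissue (label 0).
tumour = rng.normal(loc=1.0, scale=0.3, size=(40, 5))
normal = rng.normal(loc=0.0, scale=0.3, size=(40, 5))
X = np.vstack([tumour, normal])
y = np.array([1] * 40 + [0] * 40)

# k-NN classifier: a new ROI takes the majority label of its
# k nearest training ROIs in feature space.
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
pred = knn.predict(rng.normal(loc=1.0, scale=0.3, size=(1, 5)))
```

In the thesis setting the same scheme is applied per ROI (16- or 8-pixel), which is where the reported misclassification rates come from.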
RESULTS: 63 cases were used for development and training. Segmentation and classification performance were evaluated on an independent test set of 16 cases. Edge-based active contour segmentation achieved the highest Dice similarity coefficient of 0.80 ± 0.06, followed by graph-cut at 0.76 ± 0.06, watershed at 0.72 ± 0.08 and region-based active contour at 0.71 ± 0.07, with mean computational times of 192 ± 102 s, 834 ± 438 s, 21 ± 5 s and 45 ± 18 s per case respectively. Errors in accuracy for irregularly shaped lesions and segmentation leakages at the mediastinum were observed.
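The Dice similarity coefficient reported above is, for two binary masks, twice the overlap divided by the summed volumes; a minimal implementation:

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary masks:
    # 2|A ∩ B| / (|A| + |B|), in [0, 1].
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy masks: a 6x6 automated contour against a shifted 6x6 reference.
auto = np.zeros((10, 10), dtype=bool); auto[2:8, 2:8] = True
ref = np.zeros((10, 10), dtype=bool); ref[3:9, 3:9] = True
score = dice(auto, ref)
```

Here the masks overlap in a 5x5 block, giving 2*25 / (36 + 36) ≈ 0.694, i.e. roughly the agreement level of the weaker methods above.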
In the distinction of tumour from non-tumour regions, misclassification errors of 14.5% and 15.5% were achieved using 16- and 8-pixel regions of interest (ROIs) respectively. Higher misclassification errors of 24.7% and 26.9% for 16- and 8-pixel ROIs were obtained in the analysis of the tumour boundary.
CONCLUSIONS: Conventional image-based segmentation techniques with the application of priors are useful for automatic segmentation of tumours, although further developments are required to improve their performance. Texture classification can be useful in distinguishing tumour from non-tumour tissue, but the segmentation task at the tumour boundary is more difficult. Future work should explore deep-learning segmentation approaches.
Funded by the National Radiotherapy Trials Quality Assurance (RTTQA) group