122 research outputs found
Multiple ocular diseases detection by graph regularized multi-label learning
We develop a general framework for multiple ocular disease diagnosis based on Graph Regularized Multi-label Learning (GRML). Glaucoma, Pathological Myopia (PM), and Age-related Macular Degeneration (AMD) are three leading ocular diseases worldwide. By exploiting the correlations among these three diseases, a novel GRML scheme is investigated for the simultaneous detection of all three in a given fundus image. We validate our GRML framework through extensive experiments on the SiMES dataset. The results show that the area under the receiver operating characteristic curve (AUC) for multiple ocular disease detection is substantially higher than that of popular traditional algorithms. The method could be used for glaucoma, PM, and AMD diagnosis.
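As a concrete illustration of the kind of objective such a scheme optimizes, the following minimal sketch couples per-disease linear classifiers through the Laplacian of a label-correlation graph; the linear model, the closed-form Sylvester-equation solution, and all names (fit_grml, alpha, beta) are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.linalg import solve_sylvester

def fit_grml(X, Y, L, alpha=1.0, beta=0.1):
    """Graph-regularized multi-label least squares (illustrative sketch).
    X: (n, d) fundus-image features, Y: (n, k) disease labels in {0, 1},
    L: (k, k) Laplacian of a label-correlation graph.
    Minimizes ||X W - Y||_F^2 + alpha * tr(W L W^T) + beta * ||W||_F^2,
    whose optimum solves (X^T X + beta I) W + W (alpha L) = X^T Y."""
    d = X.shape[1]
    A = X.T @ X + beta * np.eye(d)   # (d, d) data + ridge term
    B = alpha * L                    # (k, k) label-graph coupling
    C = X.T @ Y                      # (d, k) right-hand side
    return solve_sylvester(A, B, C)  # W: (d, k), one column per disease

# Hypothetical toy usage: three coupled labels (glaucoma, PM, AMD).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
Y = rng.integers(0, 2, size=(100, 3)).astype(float)
adjacency = np.ones((3, 3)) - np.eye(3)
L = np.diag(adjacency.sum(axis=1)) - adjacency
W = fit_grml(X, Y, L)
scores = X @ W                       # per-disease decision scores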
Nonlocal Graph-PDEs and Riemannian Gradient Flows for Image Labeling
In this thesis, we focus on the image labeling problem, i.e., the task of making unique pixel-wise label decisions that simplify the image while reducing its redundant information. We build upon a recently introduced geometric approach for data labeling by assignment flows [APSS17], which comprises a smooth dynamical system for data processing on weighted graphs. We pursue two lines of research that provide new application-oriented and theoretical insights into the underlying segmentation task.
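For orientation, the assignment flow of [APSS17] can be summarized, in simplified notation, as a Riemannian ODE on the product of probability simplices (one simplex of label-assignment probabilities per pixel); the precise definitions of the similarity map S and the replicator map R are given in the cited work, so the following display is only a schematic reminder:

\dot{W}(t) \;=\; R_{W(t)}\big(S(W(t))\big), \qquad W(0) \;=\; \mathbb{1}_{\mathcal{W}} \;(\text{barycenter}),

where W(t) collects the per-pixel assignment vectors, S(W) averages data likelihoods over the weighted neighborhood graph, and R_W maps the result onto the tangent space of the assignment manifold; label decisions emerge as the flow converges toward integral (0/1) assignments.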
Using the example of Optical Coherence Tomography (OCT), the most widely used non-invasive method for acquiring large volumetric scans of human retinal tissue, we demonstrate how incorporating constraints on the geometry of the statistical manifold results in a novel, purely data-driven geometric approach to order-constrained segmentation of volumetric data in any metric space. In particular, diagnostic analysis of human eye diseases requires decisive information in the form of exact measurements of retinal layer thicknesses, which have to be obtained for each patient separately, a demanding and time-consuming task. To ease clinical diagnosis, we introduce a fully automated segmentation algorithm that achieves high segmentation accuracy together with a high level of built-in parallelism. As opposed to many established retinal layer segmentation methods, we use only local information as input, without incorporating additional global shape priors. Instead, we enforce the physiological order of retinal cell layers and membranes through a new formulation of ordered pairs of distributions in a smoothed energy term. This systematically avoids bias pertaining to global shape and is hence suited for detecting anatomical changes of retinal tissue structure. To assess the performance of our approach, we compare two different choices of features on a data set of manually annotated 3-D OCT volumes of healthy human retina and evaluate our method against the state of the art in automatic retinal layer segmentation, as well as against manually annotated ground-truth data, using different metrics.
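The physiological order constraint mentioned above can be phrased, per image column (A-scan), as requiring the detected layer interfaces to be monotonically ordered in depth. The toy helper below merely repairs an unordered estimate with a running maximum; it only illustrates the constraint itself and is not the thesis's approach, which encodes the ordering directly in a smoothed energy term over ordered pairs of distributions.

import numpy as np

def repair_order(boundaries):
    """boundaries: (K, W) array; boundaries[k, x] is the depth (row index) of
    the k-th retinal interface in column x. Returns a column-wise non-decreasing
    configuration via a running maximum (a simple repair, not the Euclidean
    projection, which would require isotonic regression)."""
    return np.maximum.accumulate(boundaries, axis=0)

cols = np.array([[10.0, 11.0,  9.5],   # interface 1
                 [ 9.0, 12.0, 12.0],   # interface 2 (violates order in column 0)
                 [15.0, 14.0, 13.0]])  # interface 3
print(repair_order(cols))              # column 0 becomes 10, 10, 15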
We generalize the recent work [SS21] on a variational perspective on assignment flows and introduce a novel nonlocal partial difference equation (G-PDE) for labeling metric data on graphs. The G-PDE is derived as a nonlocal reparametrization of the assignment flow approach that was introduced in J. Math. Imaging & Vision 58(2), 2017. Due to this parametrization, solving the G-PDE numerically is shown to be equivalent to computing the Riemannian gradient flow with respect to a nonconvex potential. We devise an entropy-regularized difference-of-convex-functions (DC) decomposition of this potential and show that the basic geometric Euler scheme for integrating the assignment flow is equivalent to solving the G-PDE by an established DC programming scheme. Moreover, the viewpoint of geometric integration reveals a basic way to exploit higher-order information of the vector field that drives the assignment flow in order to devise a novel accelerated DC programming scheme. A detailed convergence analysis of both numerical schemes is provided and illustrated by numerical experiments.
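For reference, the established DC programming scheme alluded to above follows the generic pattern below (stated in standard DCA form, not in the thesis's exact notation): split the nonconvex potential into a difference of convex functions and, in each iteration, linearize the concave part,

J(W) \;=\; g(W) - h(W), \qquad g,\, h \ \text{convex},
\qquad
W^{(k+1)} \;\in\; \arg\min_{W}\; g(W) \;-\; \big\langle \nabla h(W^{(k)}),\, W \big\rangle,

which guarantees the monotone decrease J(W^{(k+1)}) \le J(W^{(k)}); the thesis shows that the geometric Euler step for the assignment flow realizes such an iteration for an entropy-regularized splitting.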
Probabilistic Intra-Retinal Layer Segmentation in 3-D OCT Images Using Global Shape Regularization
With the introduction of spectral-domain optical coherence tomography (OCT),
resulting in a significant increase in acquisition speed, the fast and accurate
segmentation of 3-D OCT scans has become ever more important. This paper
presents a novel probabilistic approach that models the appearance of retinal
layers as well as the global shape variations of layer boundaries. Given an OCT
scan, the full posterior distribution over segmentations is approximately
inferred using a variational method enabling efficient probabilistic inference
in terms of computationally tractable model components: Segmenting a full 3-D
volume takes around a minute. Accurate segmentations demonstrate the benefit of
using global shape regularization: We segmented 35 fovea-centered 3-D volumes
with an average unsigned error of 2.46 ± 0.22 µm as well as 80 normal
and 66 glaucomatous 2-D circular scans with errors of 2.92 ± 0.53 µm
and 4.09 ± 0.98 µm, respectively. Furthermore, we utilized the inferred
posterior distribution to rate the quality of the segmentation, point out
potentially erroneous regions and discriminate normal from pathological scans.
No pre- or postprocessing was required and we used the same set of parameters
for all data sets, underlining the robustness and out-of-the-box nature of our
approach. Comment: Accepted for publication in Medical Image Analysis (MIA), Elsevier.
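The variational method referred to in the abstract can be summarized generically as follows (a standard variational-Bayes statement, not the paper's exact factorization): the intractable posterior over segmentations s and shape variables b given the scan y is replaced by a tractable approximation q, chosen by minimizing a KL divergence, or equivalently by maximizing the evidence lower bound:

q^{*} \;=\; \arg\min_{q \in \mathcal{Q}} \ \mathrm{KL}\big( q(s, b) \,\big\|\, p(s, b \mid y) \big)
\;=\; \arg\max_{q \in \mathcal{Q}} \ \mathbb{E}_{q}\big[ \log p(y, s, b) \big] + \mathrm{H}[q],

where restricting \mathcal{Q} to factorized, computationally tractable distributions is what makes segmenting a full 3-D volume feasible in about a minute, and the resulting q also supplies the uncertainty used to rate segmentation quality and flag potentially erroneous regions.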
Rapid Segmentation Techniques for Cardiac and Neuroimage Analysis
Recent technological advances in medical imaging have allowed for the quick acquisition of highly resolved data to aid in the diagnosis and characterization of diseases or to guide interventions. In order to be integrated into a clinical workflow, accurate and robust methods of analysis must be developed which manage this increase in data. Recent improvements in inexpensive, commercially available graphics hardware and General-Purpose Programming on Graphics Processing Units (GPGPU) have allowed many large-scale data analysis problems to be addressed in meaningful time and will continue to do so as parallel computing technology improves. In this thesis we propose methods to tackle two clinically relevant image segmentation problems: a user-guided segmentation of myocardial scar from Late-Enhancement Magnetic Resonance Images (LE-MRI) and a multi-atlas segmentation pipeline to automatically segment and partition brain tissue from multi-channel MRI. Both methods are based on recent advances in computer vision, in particular max-flow optimization that aims at solving the segmentation problem in continuous space. This allows (approximately) globally optimal solvers to be employed in multi-region segmentation problems, without the particular drawbacks of their discrete counterparts, graph cuts, which typically present with metrication artefacts. Max-flow solvers are generally able to produce robust results, but are known for being computationally expensive, especially with large datasets such as volume images. Additionally, we propose two new deformable registration methods based on Gauss-Newton optimization and smooth the resulting deformation fields via total-variation regularization to guarantee that the problem is mathematically well-posed. We compare the performance of these two methods against four highly ranked and well-known deformable registration methods on four publicly available databases and demonstrate highly accurate performance with low run times. The best-performing variant is subsequently used in a multi-atlas segmentation pipeline for the segmentation of brain tissue and facilitates fast run times for this computationally expensive approach. All proposed methods are implemented using GPGPU for a substantial increase in computational performance and so facilitate deployment into clinical workflows. We evaluate all proposed algorithms in terms of run times, accuracy, repeatability, and errors arising from user interactions, and we demonstrate that these methods are able to outperform established methods. The presented approaches demonstrate high performance in comparison with established methods in terms of accuracy and repeatability while largely reducing run times due to the employment of GPU hardware.
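For context, the continuous min-cut model that such max-flow solvers address can be written in its standard two-region form as the convex problem below (a generic textbook statement, not necessarily the exact energies used in the thesis):

\min_{u : \Omega \to [0, 1]} \ \int_{\Omega} \big( u(x)\, C_t(x) + (1 - u(x))\, C_s(x) \big)\, dx \;+\; \alpha \int_{\Omega} \lvert \nabla u(x) \rvert \, dx,

where C_s and C_t are voxel-wise costs for the two regions and the total-variation term penalizes boundary length; because the relaxation is convex, a global optimum of the relaxed problem can be computed and thresholded to obtain a segmentation, avoiding the metrication artefacts of discrete graph cuts.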
Methods and Techniques for Clinical Text Modeling and Analytics
Nowadays, a large portion of clinical data exists only in free text. The wide adoption of Electronic Health Records (EHRs) has greatly increased access to clinical documents, which provides both challenges and opportunities for clinical Natural Language Processing (NLP) researchers. Given free-text clinical notes as input, an ideal system for clinical text understanding should have the ability to support clinical decisions. At the corpus level, the system should recommend similar notes based on disease or patient types, and provide medication recommendations, or any other type of recommendation, based on patients' symptoms and other similar medical cases. At the document level, it should return a list of important clinical concepts. Moreover, the system should be able to make diagnostic inferences over clinical concepts and output a diagnosis. Unfortunately, current work has not systematically studied such a system. This study focuses on developing and applying methods and techniques for different aspects of the system for clinical text understanding, at both the corpus and the document level. We deal with two major research questions. First, how can we model the underlying relationships in a corpus of clinical notes? Document clustering methods can group clinical notes into meaningful clusters, which can assist physicians and patients in understanding medical conditions and diseases from clinical notes. We use Nonnegative Matrix Factorization (NMF) and Multi-view NMF to cluster clinical notes based on extracted medical concepts. The clustering results reveal latent patterns among clinical notes, and our method provides a feasible way to visualize a corpus of clinical documents. Based on the extracted concepts, we further build a symptom-medication (Symp-Med) graph to model Symp-Med relations in the clinical notes corpus, and we develop two Symp-Med matching algorithms to predict and recommend medications for patients based on their symptoms. Second, how can we integrate structured knowledge with unstructured text to improve results on clinical NLP tasks? On the one hand, unstructured clinical text contains a wealth of information about medical conditions; on the other hand, structured Knowledge Bases (KBs) are frequently used to support clinical NLP tasks. We propose graph-regularized word embedding models to integrate knowledge from both KBs and free text. We evaluate our models on standard datasets and biomedical NLP tasks, and the results show encouraging improvements on both. We further apply the graph-regularized word embedding models and present a novel approach to automatically infer the most probable diagnosis from a given clinical narrative. Ph.D., Information Studies -- Drexel University, 201
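A minimal sketch of the corpus-level clustering step is shown below, assuming a notes-by-concepts matrix has already been extracted; the use of scikit-learn's NMF and all variable names are illustrative assumptions rather than the dissertation's actual pipeline (which additionally employs Multi-view NMF).

import numpy as np
from sklearn.decomposition import NMF

def cluster_notes(X, n_clusters=5, random_state=0):
    """X: (n_notes, n_concepts) nonnegative matrix of extracted medical
    concept counts or TF-IDF weights. Returns (labels, W, H) with X ~ W @ H,
    where W holds note-topic weights and H holds topic-concept weights."""
    model = NMF(n_components=n_clusters, init="nndsvda", random_state=random_state)
    W = model.fit_transform(X)     # (n_notes, n_clusters)
    H = model.components_          # (n_clusters, n_concepts)
    labels = W.argmax(axis=1)      # hard cluster assignment per note
    return labels, W, H

# Hypothetical toy usage with random nonnegative data.
X = np.abs(np.random.default_rng(0).normal(size=(30, 100)))
labels, W, H = cluster_notes(X, n_clusters=3)
print(labels)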
Synergizing human-machine intelligence: Visualizing, labeling, and mining the electronic health record
We live in a world where data surround us in every aspect of our lives. The key challenge for humans and machines is how we can make better use of such data. Imagine what would happen if you were to have intelligent machines that could give you insight into the data: insight that will enable you to better 1) reason about, 2) learn, and 3) understand the underlying phenomena that produced the data. The possibilities of combined human-machine intelligence are endless and will impact our lives in ways we cannot even imagine today.
Synergistic human-machine intelligence aims to facilitate the analytical reasoning and inference process of humans by creating machines that maximize a human's ability to 1) reason about, 2) learn, and 3) understand large, complex, and heterogeneous data. Combined human-machine intelligence is a powerful symbiosis of mutual benefit, in which we depend on the computational capabilities of the machine for the tasks we are not good at, and the machine requires human intervention for the tasks it performs poorly on.
This relationship provides a compelling alternative to either approach in isolation for solving today's and tomorrow's emerging data challenges. In this regard, this dissertation proposes a diverse analytical framework that leverages synergistic human-machine intelligence to maximize a human's ability to better 1) reason about, 2) learn, and 3) understand different biomedical imaging and healthcare data present in the patient's electronic health record (EHR). Correspondingly, we approach the data analysis problem from the 1) visualization, 2) labeling, and 3) mining perspective and demonstrate the efficacy of our analytics on specific application scenarios and various data domains.
In the first part of this dissertation we explore the question of how we can build intelligent imaging analytics that are commensurate with human capabilities and constraints, specifically for optimizing data visualization and automated labeling workflows. Our journey starts with heuristic rule-based analytical models that are derived from task-specific human knowledge. From this experience, we move on to data-driven analytics, where we adapt and combine the intelligence of the model based on prior information provided by the human and synthetic knowledge learned from partial data observations. Within this realm, we propose a novel Bayesian transductive Markov random field model that requires minimal human intervention and is able to cope with scarce label information to learn and infer object shapes in complex spatial, multimodal, spatio-temporal, and longitudinal data. We then study the question of how machines can learn discriminative object representations from dense, human-provided label information by investigating learning and inference mechanisms that make use of deep learning architectures. The developed analytics can aid visualization and labeling tasks, which enables the interpretation and quantification of clinically relevant image information.
The second part explores the question of how we can build data-driven analytics for exploratory analysis of longitudinal event data that are commensurate with human capabilities and constraints. We propose human-intuitive analytics that enable the representation and discovery of interpretable event patterns to ease knowledge absorption and comprehension of the employed analytics model and the underlying data. We propose a novel doubly-constrained convolutional sparse-coding framework that learns interpretable and shift-invariant latent temporal event patterns, and we apply the model to mine complex event data in EHRs. By mapping the event space to heterogeneous patient encounters in the EHR, we explore the linkage between healthcare resource utilization (HRU) and disease severity. This linkage may help to better understand how disease-specific co-morbidities and their clinical attributes incur different HRU patterns. Such insight helps to characterize the patient's care history, which then enables comparison against clinical practice guidelines, the discovery of prevailing practices based on common HRU group patterns, and the identification of outliers that might indicate poor patient management.
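The convolutional sparse-coding idea referenced above can be stated generically as follows (the doubly-constrained formulation in the dissertation adds further constraints that are omitted here): given an event-count time series x for a patient, learn short temporal patterns d_k and sparse activation codes z_k whose convolutions reconstruct the series,

\min_{\{d_k\}, \{z_k\}} \ \frac{1}{2} \Big\| x - \sum_{k=1}^{K} d_k * z_k \Big\|_2^2 \;+\; \lambda \sum_{k=1}^{K} \| z_k \|_1
\quad \text{s.t.} \quad \| d_k \|_2 \le 1, \ k = 1, \dots, K,

where * denotes temporal convolution, the \ell_1 penalty yields sparse and shift-invariant activations (a pattern may occur at any time point), and the norm constraint on each d_k removes the scaling ambiguity between patterns and codes.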
Face Mining in Wikipedia Biographies
This thesis presents a number of research contributions related to the theme of creating an automated system for extracting faces from Wikipedia biography pages. The first major contribution of this work is the formulation of a solution to the problem based on a novel probabilistic graphical modeling technique. We use probabilistic inference to make structured predictions in dynamically constructed models so as to identify true examples of faces corresponding to the subject of a biography among all detected faces. Our probabilistic model takes into account information from multiple sources, including: visual comparisons between detected faces, meta-data about facial images and their detections, parent images, image locations, image file names, and caption texts. We believe this research is also unique in that we are the first to present a complete system and an experimental evaluation for the task of mining wild human faces on the scale of over 50,000 identities.
The second major contribution of this work is the development of a new class of discriminative probabilistic models based on a novel generalized Beta-Bernoulli logistic function. Through our generalized Beta-Bernoulli formulation, we provide both a new smooth 0-1 loss approximation method and a new class of probabilistic classifiers. We present experiments using this technique for: 1) a new form of Logistic Regression which we call generalized Beta-Bernoulli Logistic Regression, 2) a kernelized version of the aforementioned technique, and 3) our probabilistic face mining model, which can be regarded as a structured prediction technique that combines information from multimedia sources. Through experiments, we show that the different forms of this novel Beta-Bernoulli formulation improve upon the performance of both widely used Logistic Regression methods and state-of-the-art linear and non-linear Support Vector Machine techniques for binary classification. To evaluate our technique, we have performed tests using a number of widely used benchmarks with different properties, ranging from those that are comparatively small to those that are comparatively large in size, as well as problems with both sparse and dense features. Our analysis shows that the generalized Beta-Bernoulli model improves upon the analogous forms of classical Logistic Regression and Support Vector Machine models and that, when our evaluations are performed on larger-scale datasets, the results are statistically significant. Another finding is that the approach is also robust when dealing with outliers. Furthermore, our face mining model achieves its best performance when its sub-component consisting of a discriminative Maximum Entropy Model is replaced with our generalized Beta-Bernoulli Logistic Regression model. This shows the general applicability of our proposed approach for a structured prediction task. To the best of our knowledge, this represents the first time that a smooth approximation to the 0-1 loss has been used for structured predictions.
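For intuition, a generic smooth approximation of the 0-1 loss replaces the step function by a steep sigmoid, as shown below; the generalized Beta-Bernoulli logistic function developed in this work is a different, more flexible parametric family, so the display only illustrates the idea of smoothing the 0-1 loss, not the thesis's exact function:

\ell_{0\text{-}1}\big(y\, f(x)\big) \;=\; \mathbb{1}\big[\, y\, f(x) \le 0 \,\big] \;\approx\; \sigma\big(-\gamma\, y\, f(x)\big) \;=\; \frac{1}{1 + e^{\gamma\, y\, f(x)}},

for a label y \in \{-1, +1\} and decision score f(x); as the steepness \gamma \to \infty the approximation approaches the exact 0-1 loss, while for finite \gamma it stays differentiable and can be optimized with gradient-based methods.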
Finally, we have explored in more depth an important problem related to our face extraction task: the localization of dense keypoints on human faces. We have developed a complete pipeline that solves the keypoint localization problem using an adaptively estimated, locally linear subspace technique. Our keypoint localization model performs on par with state-of-the-art methods.