A Taxonomy of Deep Convolutional Neural Nets for Computer Vision
Traditional architectures for solving computer vision problems and the degree
of success they enjoyed have been heavily reliant on hand-crafted features.
However, of late, deep learning techniques have offered a compelling
alternative -- that of automatically learning problem-specific features. With
this new paradigm, every problem in computer vision is now being re-examined
from a deep learning perspective. Therefore, it has become important to
understand what kind of deep networks are suitable for a given problem.
Although general surveys of this fast-moving paradigm (i.e., deep networks)
exist, a survey specific to computer vision is missing. We specifically
consider one form of deep network widely used in computer vision:
convolutional neural networks (CNNs). We start with "AlexNet" as our base CNN
and then examine the broad variations proposed over time to suit different
applications. We hope that our recipe-style survey will serve as a guide,
particularly for novice practitioners intending to use deep learning techniques
for computer vision.
Comment: Published in Frontiers in Robotics and AI (http://goo.gl/6691Bm)
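To make the starting point concrete, here is a minimal sketch of an AlexNet-style CNN. This is an illustration written for context, not code from the survey; the choice of PyTorch and all hyperparameters are our own assumptions, with layer shapes following the original AlexNet.

```python
# A minimal AlexNet-style CNN, sketched in PyTorch for illustration only.
import torch
import torch.nn as nn

class AlexNetSketch(nn.Module):
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        # Convolutional feature extractor: learned, problem-specific features
        # replace the hand-crafted ones used by traditional pipelines.
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        # Fully connected classifier head.
        self.classifier = nn.Sequential(
            nn.Dropout(), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
            nn.Dropout(), nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)  # (N, 256, 6, 6) for a 224x224 input
        return self.classifier(torch.flatten(x, 1))

logits = AlexNetSketch()(torch.randn(1, 3, 224, 224))  # -> shape (1, 1000)
```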
Deep Learning in Cardiology
The medical field is creating large amounts of data that physicians are unable
to decipher and use efficiently. Moreover, rule-based expert systems are
inefficient at solving complicated medical tasks and at creating insights from
big data. Deep learning has emerged as a more accurate and effective technology
for a wide range of medical problems such as diagnosis, prediction and
intervention. Deep learning is a representation learning method that consists
of layers that transform the data non-linearly, thus revealing hierarchical
relationships and structures. In this review we survey deep learning
application papers that use structured data, signal, and imaging modalities from
cardiology. We discuss the advantages and limitations of applying deep learning
in cardiology, many of which also apply to medicine in general, and propose
certain directions as the most viable for clinical use.
Comment: 27 pages, 2 figures, 10 tables
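As a concrete illustration of "layers that transform the data non-linearly" (our own sketch, not from the review), the following minimal PyTorch model stacks affine maps and ReLU nonlinearities; the dimensions, e.g. 12 input features suggestive of a 12-lead ECG summary, are invented for illustration.

```python
# A minimal sketch of layered non-linear transformation: each layer is an
# affine map followed by a nonlinearity, so stacking layers yields
# increasingly abstract representations. All shapes are illustrative.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(12, 64), nn.ReLU(),   # low-level representation
    nn.Linear(64, 32), nn.ReLU(),   # higher-level representation
    nn.Linear(32, 2),               # e.g., normal vs. abnormal
)
```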
Unsupervised deep learning of human brain diffusion magnetic resonance imaging tractography data
Diffusion magnetic resonance imaging is a non-invasive technique providing insights into the organizational microstructure of biological tissues. The computational methods that exploit the orientational preference of the diffusion in restricted structures to reveal the brain's white matter axonal pathways are called tractography. In recent years, a variety of tractography methods have been successfully used to uncover the brain's white matter architecture. Yet, these reconstruction techniques suffer from a number of shortcomings derived from fundamental ambiguities inherent to the orientation information. This has dramatic consequences, since current tractography-based white matter connectivity maps are dominated by false positive connections. Thus, the large proportion of invalid pathways recovered remains one of the main challenges to be solved by tractography to obtain a reliable anatomical description of the white matter. Innovative methodological approaches are required to help solve these questions.
Recent advances in computational power and data availability have made it possible to successfully apply modern machine learning approaches to a variety of problems, including computer vision and image analysis tasks. These methods model and learn the underlying patterns in the data, and enable accurate predictions on new data. Similarly, they can yield compact representations of the intrinsic features of the data of interest. Modern data-driven approaches, grouped under the family of deep learning methods, are being adopted to solve medical imaging data analysis tasks, including tractography. In this context, the proposed methods are less dependent on the constraints imposed by current tractography approaches. Hence, deep learning-inspired methods are suited to the required paradigm shift, may open new modeling possibilities, and may thus improve the state of the art in tractography.
In this thesis, a new paradigm based on representation learning techniques is proposed to generate and to analyze tractography data. By harnessing autoencoder architectures, this work explores their ability to find an optimal code to represent the features of the white matter fiber pathways. The contributions exploit such representations for a variety of tractography-related tasks, including efficient (i) filtering and (ii) clustering of results generated by other methods, as well as (iii) the white matter pathway reconstruction itself using a generative method. The methods issued from this thesis have been named (i) FINTA (Filtering in Tractography using Autoencoders), (ii) CINTA (Clustering in Tractography using Autoencoders), and (iii) GESTA (Generative Sampling in Bundle Tractography using Autoencoders), respectively. The proposed methods' performance is assessed against current state-of-the-art methods on synthetic data and healthy adult human brain in vivo data. Results show that (i) the introduced filtering method has superior sensitivity and specificity over other state-of-the-art methods; (ii) the clustering method groups streamlines into anatomically coherent bundles with a high degree of consistency; and (iii) the generative streamline sampling technique successfully improves the white matter coverage in hard-to-track bundles. In summary, this thesis unlocks the potential of deep autoencoder-based models for white matter data analysis, and paves the way towards delivering more reliable tractography data.
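The sketch below is a rough illustration of the filtering idea, not the published FINTA implementation: an autoencoder over streamlines resampled to a fixed number of 3D points, with candidates scored by reconstruction error. All sizes and the threshold are assumptions, and the published method exploits the learned latent space more directly.

```python
# Illustrative only: score streamlines with an autoencoder and keep
# those the model reconstructs well (i.e., that resemble its training
# distribution of plausible streamlines).
import torch
import torch.nn as nn

class StreamlineAE(nn.Module):
    def __init__(self, n_points: int = 32, latent_dim: int = 16):
        super().__init__()
        d = n_points * 3  # flattened (x, y, z) coordinates
        self.encoder = nn.Sequential(nn.Linear(d, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, d))

    def forward(self, x):
        return self.decoder(self.encoder(x))

@torch.no_grad()
def filter_streamlines(model, streamlines, threshold=0.05):
    """streamlines: (N, 32, 3); keep those with low reconstruction error."""
    x = streamlines.flatten(1)               # (N, 32, 3) -> (N, 96)
    err = ((model(x) - x) ** 2).mean(dim=1)  # per-streamline MSE
    return streamlines[err < threshold]

# After training the autoencoder on a reference tractogram:
kept = filter_streamlines(StreamlineAE().eval(), torch.randn(100, 32, 3))
```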
Masked Autoencoder for Unsupervised Video Summarization
Summarizing a video requires a diverse understanding of the video, ranging
from recognizing scenes to evaluating how essential each frame is to the
summary. Self-supervised learning (SSL) is acknowledged for its robustness and
flexibility across multiple downstream tasks, but video SSL has not yet shown
its value for dense understanding tasks like video summarization. We claim that
an unsupervised autoencoder with sufficient self-supervised learning can be
used as a video summarization model without any extra downstream architecture
design or fine-tuning of weights. The proposed method evaluates the importance
score of each frame by taking advantage of the reconstruction score of the
autoencoder's decoder. We evaluate the method on major unsupervised video
summarization benchmarks to show its effectiveness under various experimental
settings.
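To make the scoring idea concrete, here is a minimal sketch (our own assumptions, not the paper's code) of deriving per-frame importance from masked reconstruction, assuming per-frame features and a generic sequence autoencoder:

```python
# Sketch: mask one frame at a time and measure how well the autoencoder
# reconstructs it from the surrounding context. The mapping "hard to
# reconstruct -> important" is an assumption made for illustration.
import torch

@torch.no_grad()
def frame_importance(autoencoder, frames: torch.Tensor) -> torch.Tensor:
    """frames: (T, D) per-frame features; returns one score per frame."""
    scores = torch.empty(frames.shape[0])
    for t in range(frames.shape[0]):
        masked = frames.clone()
        masked[t] = 0.0                                 # mask out frame t
        recon = autoencoder(masked.unsqueeze(0)).squeeze(0)
        # Frames the context cannot predict carry information the
        # context lacks, so they score higher.
        scores[t] = torch.norm(recon[t] - frames[t])
    return scores

# A summary could then be the top-k frames by importance score.
```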
Data-Driven Modeling For Decision Support Systems And Treatment Management In Personalized Healthcare
The massive amount of electronic medical records (EMRs) accumulating from patients and populations motivates clinicians and data scientists to collaborate on advanced analytics, creating the knowledge needed to deliver personalized insights to patients, clinicians, providers, scientists, and health policy makers. Learning from large and complicated data is used extensively in marketing and commercial enterprises to generate personalized recommendations. Recently, the medical research community has focused on harnessing big data analytics and moving toward personalized (precision) medicine, making this a pivotal period in healthcare's transition to a new paradigm. There is a notable opportunity to implement a learning healthcare system and data-driven healthcare to make better medical decisions, produce better personalized predictions, and discover risk factors and their interactions more precisely. In this research we focus on data-driven approaches for personalized medicine. We propose a research framework that emphasizes three main phases: 1) predictive modeling, 2) patient subgroup analysis, and 3) treatment recommendation. Our goal is to develop novel methods for each phase and apply them in real-world applications.
In the first phase, we develop a new predictive approach based on feature representation using deep feature learning and word embedding techniques. Our method uses different deep architectures (stacked autoencoders, deep belief networks, and variational autoencoders) to represent features at higher levels of abstraction, obtaining effective and more robust features from EMRs, and then builds prediction models on top of them. Our approach is particularly useful when unlabeled data is abundant and labeled data is scarce. We investigate the performance of representation learning through a supervised approach. We apply our method to several small and large datasets. Finally, we provide a comparative study showing that our predictive approach leads to better results than the alternatives.
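A hedged sketch of this first phase, with invented dimensions and a plain autoencoder standing in for the stacked and variational variants above: pre-train on abundant unlabeled EMR vectors, then build the prediction model on top of the learned encoder.

```python
# Sketch: unsupervised pre-training on unlabeled EMR vectors, then a
# supervised head on the learned representation. All sizes are assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(200, 64), nn.ReLU(), nn.Linear(64, 16))
decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 200))
autoencoder = nn.Sequential(encoder, decoder)

def pretrain(unlabeled: torch.Tensor, epochs: int = 10):
    """Reconstruction objective on the abundant unlabeled instances."""
    opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(autoencoder(unlabeled), unlabeled)
        opt.zero_grad(); loss.backward(); opt.step()

# Prediction model built on top of the learned features, trained on the
# scarce labeled subset:
classifier = nn.Sequential(encoder, nn.Linear(16, 2))  # 2 outcome classes
```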
In the second phase, we propose a novel patient subgroup detection method, called Supervised Biclustering (SUBIC), using convex optimization, and apply it to detect patient subgroups and prioritize risk factors for hypertension (HTN) in a vulnerable demographic subgroup (African-Americans). Our approach not only finds patient subgroups under the guidance of a clinically relevant target variable but also identifies and prioritizes risk factors by pursuing sparsity of the input variables and encouraging similarity among the input variables and between the input and target variables.
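SUBIC itself is a convex biclustering formulation; as a stand-in for just one of its ingredients, pursuing sparsity to prioritize risk factors against a clinical target, here is an L1-regularized logistic regression on toy data. Everything here (the data, the variable names, the choice of scikit-learn) is illustrative, not the method's implementation.

```python
# Sketch of sparsity-driven risk-factor prioritization: an L1 penalty
# zeroes out uninformative inputs, and surviving weights rank the rest.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.rand(500, 30)                 # 30 candidate risk factors (toy)
y = (X[:, 3] + X[:, 7] > 1.0).astype(int)   # hypothetical HTN target

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
ranked = np.argsort(-np.abs(model.coef_[0]))  # factors ordered by |weight|
print("top risk factors:", ranked[:5])
```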
Finally, in the third phase, we introduce a new survival analysis framework using deep learning and active learning with a novel sampling strategy. First, our approach learns a lower-dimensional, more informative representation of the clinical features from both labeled (time-to-event) and unlabeled (censored) instances, and then actively trains the survival model by labeling the censored data through an oracle. As a clinical assistive tool, we propose a simple yet effective treatment recommendation approach based on our survival model. In the experimental study, we apply our approach to SEER-Medicare data on prostate cancer among African-American and white patients. The results indicate that our approach significantly outperforms baseline models.
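A schematic sketch of the active-learning loop in this third phase, under our own assumptions rather than the thesis's exact sampling strategy: repeatedly query an oracle for the censored instances about which the current survival model is least certain. The fit, risk, and oracle functions are caller-supplied placeholders.

```python
# Sketch of uncertainty-driven active learning over censored instances.
# fit_fn, risk_fn, and oracle are hypothetical, caller-supplied callables.
import numpy as np

def active_survival_training(labeled, censored, fit_fn, risk_fn, oracle,
                             rounds=5, k=10):
    for _ in range(rounds):
        model = fit_fn(labeled)                     # e.g., a deep survival net
        risk = np.asarray([risk_fn(model, c) for c in censored])
        # Least-confident sampling: scores nearest the cohort median.
        query = np.argsort(np.abs(risk - np.median(risk)))[:k].tolist()
        labeled = labeled + [(censored[i], oracle(censored[i])) for i in query]
        censored = [c for i, c in enumerate(censored) if i not in query]
    return fit_fn(labeled)
```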