On Multilabel Classification Methods of Incompletely Labeled Biomedical Text Data
Multilabel classification is often hindered by incompletely labeled training datasets: for some items of such a dataset (or even for all of them), some labels may be omitted. In that case, we cannot know whether any item is labeled fully and correctly, and a classifier trained directly on an incompletely labeled dataset performs poorly. To overcome this problem, we add an extra step, training-set modification, before training a classifier. In this paper, we try two algorithms for training-set modification: weighted k-nearest neighbors (WkNN) and soft supervised learning (SoftSL). Both approaches are based on similarity measurements between data vectors. We performed experiments on AgingPortfolio (a text dataset) and then rechecked them on Yeast (nontext genetic data). We tried SVM and RF classifiers on the original datasets and then on the modified ones. For each dataset, our experiments demonstrated that both classification algorithms performed considerably better when preceded by the training-set modification step.
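The training-set modification idea can be illustrated with a minimal numpy sketch of a weighted k-NN label-completion step. The function name, the cosine-similarity weighting, and the 0.5 voting threshold here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def complete_labels(X, Y, k=3, threshold=0.5):
    """Add plausibly omitted labels by a weighted k-NN vote (sketch).

    X: (n, d) feature vectors; Y: (n, m) binary label matrix where a 0
    may mean "negative" or merely "unlabeled".
    """
    # Cosine similarity between all items; self-similarity zeroed out.
    E = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = E @ E.T
    np.fill_diagonal(S, 0.0)

    Y_new = Y.astype(float).copy()
    for i in range(len(X)):
        nn = np.argsort(S[i])[-k:]           # k most similar items
        w = S[i, nn]
        if w.sum() == 0:
            continue
        vote = (w @ Y[nn]) / w.sum()         # similarity-weighted label vote
        # Only ever ADD labels; existing positives are preserved.
        Y_new[i] = np.maximum(Y[i], vote >= threshold)
    return Y_new
```

A classifier (e.g. SVM or RF, as in the paper) would then be trained on `Y_new` instead of the original incomplete `Y`.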
Label self-advised support vector machine (LSA-SVM)-automated classification of foot drop rehabilitation case study
© 2019 Veterinary World. All rights reserved. Stroke represents a major health problem in our society. One of its effects is foot drop (FD), a weakness of specific muscles in the ankle and foot, such as the tibialis anterior, gastrocnemius, plantaris, and soleus. Foot flexion and extension are normally driven by lower motor neurons (LMN), and the affected muscles impair both downward and upward motion of the ankle and foot. One possible approach to FD is to investigate movement through the biosignals (myoelectric signals) of the muscles. Biosignal control systems such as electromyography (EMG) are used in rehabilitation devices for foot drop; one such system is functional electrical stimulation (FES). This paper proposes new methods and algorithms to improve the performance of myoelectric pattern recognition (M-PR) and of automated rehabilitation devices, and tests these methodologies on offline and real-time experimental datasets. Label classification is a predictive data-mining task with many applications, including automatic labeling of resources such as videos, music, images, and texts. We combine a label classification method with the self-advised support vector machine (SA-SVM) to create an adapted label classification method, named the label self-advised support vector machine (LSA-SVM). For the experimental data, we collected sEMG recordings from foot drop patients at the Metro Rehabilitation Hospital in Sydney, Australia, under ethical approval (UTS HREC no. ETH15-0152). The experimental results on the EMG dataset and benchmark datasets exhibit its benefits, and the results on UCI datasets indicate that LSA-SVM achieves the best performance when compared with SA-SVM and SVM. This paper describes state-of-the-art procedures for M-PR and studies all the conceivable structures.
The Emerging Trends of Multi-Label Learning
Exabytes of data are generated daily by humans, leading to the growing need
for new efforts in dealing with the grand challenges for multi-label learning
brought by big data. For example, extreme multi-label classification is an
active and rapidly growing research area that deals with classification tasks
with an extremely large number of classes or labels; utilizing massive data
with limited supervision to build a multi-label classification model becomes
valuable for practical applications, etc. Besides these, there are tremendous
efforts on how to harvest the strong learning capability of deep learning to
better capture the label dependencies in multi-label learning, which is the key
for deep learning to address real-world classification tasks. However, it is
noted that there has been a lack of systematic studies that focus explicitly on
analyzing the emerging trends and new challenges of multi-label learning in the
era of big data. It is imperative to call for a comprehensive survey to fulfill
this mission and delineate future research directions and new applications.
Comment: Accepted to TPAMI 202
CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison
Large, labeled datasets have driven deep learning methods to achieve
expert-level performance on a variety of medical imaging tasks. We present
CheXpert, a large dataset that contains 224,316 chest radiographs of 65,240
patients. We design a labeler to automatically detect the presence of 14
observations in radiology reports, capturing uncertainties inherent in
radiograph interpretation. We investigate different approaches to using the
uncertainty labels for training convolutional neural networks that output the
probability of these observations given the available frontal and lateral
radiographs. On a validation set of 200 chest radiographic studies which were
manually annotated by 3 board-certified radiologists, we find that different
uncertainty approaches are useful for different pathologies. We then evaluate
our best model on a test set composed of 500 chest radiographic studies
annotated by a consensus of 5 board-certified radiologists, and compare the
performance of our model to that of 3 additional radiologists in the detection
of 5 selected pathologies. On Cardiomegaly, Edema, and Pleural Effusion, the
model ROC and PR curves lie above all 3 radiologist operating points. We
release the dataset to the public as a standard benchmark to evaluate
performance of chest radiograph interpretation models.
The dataset is freely available at
https://stanfordmlgroup.github.io/competitions/chexpert .
Comment: Published in AAAI 201
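The CheXpert paper compares policies for handling the labeler's uncertain outputs during training, including mapping all uncertain labels to 0 (U-Zeros) or to 1 (U-Ones), or masking them out (U-Ignore). A minimal sketch of that label mapping follows; the function name and the NaN masking convention are our own, not the paper's code:

```python
import numpy as np

# Labeler output per observation: 1 (positive), 0 (negative),
# -1 (uncertain).  Uncertainty policies rewrite the -1 entries
# before the CNN is trained on the resulting targets.
def apply_policy(labels, policy):
    labels = np.array(labels, dtype=float)
    if policy == "U-Zeros":
        labels[labels == -1] = 0.0       # treat uncertain as negative
    elif policy == "U-Ones":
        labels[labels == -1] = 1.0       # treat uncertain as positive
    elif policy == "U-Ignore":
        labels[labels == -1] = np.nan    # mask: contributes no loss
    else:
        raise ValueError(f"unknown policy: {policy}")
    return labels
```

As the abstract notes, which policy works best varies by pathology, so the mapping is typically chosen per observation on a validation set.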
Classification of medical prescriptions in Spanish
This work describes the problem of classifying free-text medical documents written in Spanish, and proposes a solution based on the text-classification algorithms Multinomial Naïve Bayes (NBM) and Support Vector Machines (SVMs), justifying these choices and presenting the results obtained with both methods.
Track: XV Workshop de Agentes y Sistemas Inteligentes
Red de Universidades con Carreras de Informática (RedUNCI)
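Multinomial Naive Bayes, one of the two classifiers the abstract proposes, can be written compactly over token-count vectors. This is a minimal from-scratch sketch with Laplace smoothing, not the authors' implementation; the class and variable names are ours:

```python
import numpy as np

class TinyMultinomialNB:
    """Multinomial Naive Bayes over token counts (minimal sketch)."""

    def fit(self, X, y):
        # X: (n_docs, n_terms) term-count matrix; y: class ids.
        self.classes = np.unique(y)
        self.log_prior = np.log(np.array(
            [(y == c).mean() for c in self.classes]))
        counts = np.array([X[y == c].sum(axis=0) for c in self.classes])
        smoothed = counts + 1.0                       # Laplace smoothing
        self.log_like = np.log(
            smoothed / smoothed.sum(axis=1, keepdims=True))
        return self

    def predict(self, X):
        # Log-posterior up to a constant: counts . log P(term|class) + prior.
        scores = X @ self.log_like.T + self.log_prior
        return self.classes[np.argmax(scores, axis=1)]
```

In practice one would use a mature implementation (e.g. scikit-learn's `MultinomialNB`); the sketch only shows why term counts and smoothing suffice for free-text prescriptions.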
Classification of the various datasets used to evaluate knowledge-extraction methods built for the Web
Several articles have used different test texts as input data to measure the performance of methods for extracting semantic relations for the Web (OIE): ReVerb and ClausIE. However, these texts have never been analyzed to understand whether or not they share similarities, have a common language, or belong to the same domain. The aim of this work is to analyze these texts using different classification algorithms, and to determine whether they can be grouped coherently, so that one could identify a priori which texts work best with ClausIE and which with ReVerb.
XIII Workshop Bases de Datos y Minería de Datos (WBDMD)
Red de Universidades con Carreras en Informática (RedUNCI)
Machine learning for the classification of atrial fibrillation utilizing seismo- and gyrocardiogram
A significant number of deaths worldwide are attributed to cardiovascular diseases (CVDs), accounting for approximately one-third of the total mortality in 2019, with an estimated 18 million deaths. The prevalence of CVDs has risen due to the increasing elderly population and improved life expectancy. Consequently, there is an escalating demand for higher-quality healthcare services. Technological advancements, particularly the use of wearable devices for remote patient monitoring, have significantly improved the diagnosis, treatment, and monitoring of CVDs.
Atrial fibrillation (AFib), an arrhythmia associated with severe complications and potential fatality, necessitates prolonged monitoring of heart activity for accurate diagnosis and severity assessment. Remote heart monitoring, facilitated by ECG Holter monitors, has become a popular approach in many cardiology clinics. However, in the absence of an ECG Holter monitor, other remote and widely available technologies can prove valuable. The seismo- and gyrocardiogram signals (SCG and GCG) provide information about the mechanical function of the heart, enabling AFib monitoring within or outside clinical settings. SCG and GCG signals can be conveniently recorded using smartphones, which are affordable and ubiquitous in most countries.
This doctoral thesis investigates the utilization of signal processing, feature engineering, and supervised machine learning techniques to classify AFib using short SCG and GCG measurements captured by smartphones. Multiple machine learning pipelines are examined, each designed to address specific objectives. The first objective (O1) involves evaluating the performance of supervised machine learning classifiers in detecting AFib using measurements conducted by physicians in a clinical setting. The second objective (O2) is similar to O1, but this time utilizing measurements taken by patients themselves. The third objective (O3) explores the performance of machine learning classifiers in detecting acute decompensated heart failure (ADHF) using the same measurements as O1, which were primarily collected for AFib detection. Lastly, the fourth objective (O4) delves into the application of deep neural networks for automated feature learning and classification of AFib.
These investigations have shown that AFib detection is achievable by capturing a joint SCG and GCG recording and applying machine learning methods, yielding satisfactory performance outcomes. The primary focus of the examined approaches encompassed (1) feature engineering coupled with supervised classification, and (2) automated end-to-end feature learning and classification using deep convolutional-recurrent neural networks.
The key finding from these studies is that SCG and GCG signals reliably capture the heart’s beating pattern, irrespective of the operator. This allows for the detection of irregular rhythm patterns, making this technology suitable for monitoring AFib episodes outside of hospital settings as a remote monitoring solution for individuals suspected to have AFib. This thesis demonstrates the potential of smartphone-based AFib detection using built-in inertial sensors. Notably, a short recording duration of 10 to 60 seconds yields clinically relevant results. However, it is important to recognize that the results for ADHF did not match the state-of-the-art achievements due to the limited availability of ADHF data combined with arrhythmias as well as the lack of a cardiopulmonary exercise test in the measurement setting.
Finally, it is important to recognize that SCG and GCG are not intended to replace clinical ECG measurements or long-term ambulatory Holter ECG recordings. Instead, within the scope of our current understanding, they should be regarded as complementary technologies for cardiovascular monitoring.
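The key finding above, that SCG/GCG capture the beating pattern well enough to reveal irregular rhythm, is the basis of classic feature engineering for AFib. A purely illustrative sketch of one such irregularity feature (RMSSD over inter-beat intervals) follows; the thesis's actual feature set, beat-detection stage, and decision thresholds are not reproduced here, and the 0.1 s cutoff is a hypothetical placeholder:

```python
import numpy as np

def rmssd(beat_times):
    """Root mean square of successive differences of inter-beat intervals.

    beat_times: detected beat timestamps in seconds (from SCG/GCG
    beat detection, which this sketch does not implement).
    """
    ibi = np.diff(beat_times)                 # inter-beat intervals (s)
    return float(np.sqrt(np.mean(np.diff(ibi) ** 2)))

def looks_irregular(beat_times, threshold=0.1):
    """Flag a rhythm as irregular when RMSSD exceeds a hypothetical
    threshold; real pipelines learn the decision from labeled data."""
    return rmssd(beat_times) > threshold
```

In a supervised pipeline such features would be fed to a trained classifier rather than compared against a fixed threshold.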
Learning Medical Concept and Patient Representations with Deep Neural Networks, and Applications to Healthcare Problems
Doctoral dissertation -- Seoul National University, College of Engineering, Department of Electrical and Computer Engineering, August 2022. Advisor: Kyomin Jung.
This dissertation proposes deep neural network-based methods for learning medical concept and patient representations, and for solving healthcare problems, using the National Sample Cohort DB, a nationwide health-insurance claims database. We first propose a recurrent neural network model that learns patient representations from sequential medical records and personal profile information and predicts the likelihood of future disease diagnoses. Introducing a structure that efficiently fuses heterogeneous patient information yields a large performance gain, and representing the medical codes in the records as distributed embeddings brings a further improvement. This confirmed that the distributed code embeddings carry important temporal information, and in a follow-up study we introduced a graph structure to reinforce it: we constructed a graph from the similarities between code embeddings together with statistical information, and used a graph neural network to obtain code representations enriched with temporal and statistical information. A model built on these code vectors to detect potential adverse-effect signals of marketed drugs was shown to predict even cases absent from existing adverse-effect databases. Finally, to overcome the sparsity of key information in medical records, we augmented them with prior medical knowledge from a knowledge graph: we extracted only the part of the knowledge graph relevant to each patient's records to form a personalized knowledge graph and obtained its representation vector with a graph neural network. The patient representation summarizing the sequential records was then used together with the personalized-knowledge representation for future disease and diagnosis prediction.
This dissertation proposes deep neural network-based medical concept and patient representation learning methods using medical claims data to solve two healthcare tasks, i.e., clinical outcome prediction and post-marketing adverse drug reaction (ADR) signal detection. First, we propose SAF-RNN, a Recurrent Neural Network (RNN)-based model that learns a deep patient representation based on the clinical sequences and patient characteristics. Our proposed model fuses different types of patient records using feature-based gating and self-attention. We demonstrate that high-level associations between two heterogeneous records are effectively extracted by our model, thus achieving state-of-the-art performances for predicting the risk probability of cardiovascular disease. Secondly, based on the observation that the distributed medical code embeddings represent temporal proximity between the medical codes, we introduce a graph structure to enhance the code embeddings with such temporal information. We construct a graph using the distributed code embeddings and the statistical information from the claims data.
We then propose Graph Neural Network (GNN)-based representation learning for post-marketing ADR detection. Our model shows competitive performances and provides valid ADR candidates. Finally, rather than using patient records alone, we utilize a knowledge graph to augment the patient representation with prior medical knowledge. Using SAF-RNN and GNN, the deep patient representation is learned from the clinical sequences and the personalized medical knowledge. It is then used to predict clinical outcomes, i.e., next diagnosis prediction and CVD risk prediction, resulting in state-of-the-art performances.
1 Introduction
2 Background
2.1 Medical Concept Embedding
2.2 Encoding Sequential Information in Clinical Records
3 Deep Patient Representation with Heterogeneous Information
3.1 Related Work
3.2 Problem Statement
3.3 Method
3.3.1 RNN-based Disease Prediction Model
3.3.2 Self-Attentive Fusion (SAF) Encoder
3.4 Dataset and Experimental Setup
3.4.1 Dataset
3.4.2 Experimental Design
3.4.3 Implementation Details
3.5 Experimental Results
3.5.1 Evaluation of CVD Prediction
3.5.2 Sensitivity Analysis
3.5.3 Ablation Studies
3.6 Further Investigation
3.6.1 Case Study: Patient-Centered Analysis
3.6.2 Data-Driven CVD Risk Factors
3.7 Conclusion
4 Graph-Enhanced Medical Concept Embedding
4.1 Related Work
4.2 Problem Statement
4.3 Method
4.3.1 Code Embedding Learning with Skip-gram Model
4.3.2 Drug-disease Graph Construction
4.3.3 A GNN-based Method for Learning Graph Structure
4.4 Dataset and Experimental Setup
4.4.1 Dataset
4.4.2 Experimental Design
4.4.3 Implementation Details
4.5 Experimental Results
4.5.1 Evaluation of ADR Detection
4.5.2 Newly-Described ADR Candidates
4.6 Conclusion
5 Knowledge-Augmented Deep Patient Representation
5.1 Related Work
5.1.1 Incorporating Prior Medical Knowledge for Clinical Outcome Prediction
5.1.2 Inductive KGC based on Subgraph Learning
5.2 Method
5.2.1 Extracting Personalized KG
5.2.2 KA-SAF: Knowledge-Augmented Self-Attentive Fusion Encoder
5.2.3 KGC as a Pre-training Task
5.2.4 Subgraph Infomax: SGI
5.3 Dataset and Experimental Setup
5.3.1 Clinical Outcome Prediction
5.3.2 Next Diagnosis Prediction
5.4 Experimental Results
5.4.1 Cardiovascular Disease Prediction
5.4.2 Next Diagnosis Prediction
5.4.3 KGC on SemMed KG
5.5 Conclusion
6 Conclusion
Abstract (In Korean)
Acknowledgement
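The abstract above describes constructing a drug-disease graph from distributed code embeddings and claims statistics. A hedged numpy sketch of the similarity-based edge rule follows; the cosine criterion, the 0.8 cutoff, and the function name are illustrative assumptions, not the dissertation's actual construction:

```python
import numpy as np

def build_similarity_graph(emb, codes, threshold=0.8):
    """Connect medical codes whose embedding cosine similarity exceeds
    `threshold` (an illustrative cutoff; the real graph also uses
    statistical information from the claims data, omitted here)."""
    E = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    S = E @ E.T                                   # pairwise cosine similarity
    return [(codes[i], codes[j])
            for i in range(len(codes))
            for j in range(i + 1, len(codes))
            if S[i, j] > threshold]
```

A GNN would then propagate information along these edges to enrich each code's representation, as the abstract describes.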
Towards Interpretable Machine Learning in Medical Image Analysis
Over the past few years, ML has demonstrated human-expert-level performance in many medical image analysis tasks. However, due to the black-box nature of classic deep ML models, translating these models from the bench to the bedside to support the corresponding stakeholders in the desired tasks brings substantial challenges. One solution is interpretable ML, which attempts to reveal the working mechanisms of complex models. From a human-centered design perspective, interpretability is not a property of the ML model but an affordance, i.e., a relationship between algorithm and user. Thus, prototyping and user evaluations are critical to attaining solutions that afford interpretability. Following human-centered design principles in highly specialized, high-stakes domains such as medical image analysis is challenging due to the limited access to end users, a dilemma further exacerbated by the high knowledge imbalance between ML designers and end users. To overcome this predicament, we first define 4 levels of clinical evidence that can be used to justify interpretability when designing ML models. We argue that designing ML models with two of these levels of clinical evidence, namely 1) commonly used clinical evidence, such as clinical guidelines, and 2) clinical evidence developed iteratively with end users, is more likely to yield models that are indeed interpretable to end users. In this dissertation, we first address how to design interpretable ML in medical image analysis that affords interpretability with these two different levels of clinical evidence. We further recommend formative user research as the first step of interpretable model design, to understand user needs and domain requirements, and we highlight the importance of empirical user evaluation to support transparent ML design choices and facilitate the adoption of human-centered design principles.
All these aspects increase the likelihood that the algorithms afford interpretability and enable stakeholders to capitalize on the benefits of interpretable ML. In detail, we first propose neural-symbolic reasoning to incorporate public clinical evidence into the designed models for various routinely performed clinical tasks. We utilize the routinely applied clinical taxonomy for abnormality classification in chest x-rays, and we establish a spleen injury grading system that strictly follows the clinical guidelines for symbolic reasoning over the detected and segmented salient clinical features.
Then, we propose an entire interpretable pipeline for UM prognostication with cytopathology images. We first performed formative user research and found that pathologists believe cell composition is informative for UM prognostication; we therefore built a model that analyzes cell composition directly. Finally, we conducted a comprehensive user study to assess the human factors of human-machine teaming with the designed model, e.g., whether the proposed model indeed affords interpretability to pathologists. The model resulting from this human-centered design process proved to be interpretable to pathologists for UM prognostication. All in all, this dissertation introduces a comprehensive human-centered design approach for interpretable ML solutions in medical image analysis that afford interpretability to end users.
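The symbolic-reasoning idea described above, encoding clinical guidelines as explicit rules over detected and segmented features, can be sketched as a tiny rule engine. The feature names, thresholds, and grades below are entirely hypothetical placeholders for illustration; they are not the dissertation's spleen-injury grading system or any real clinical guideline:

```python
def grade_injury(features):
    """Toy guideline-style grading over detected image features.

    All rules here are hypothetical: the point is only that each
    output grade is traceable to a human-readable rule, which is
    what makes the symbolic layer interpretable to clinicians.
    """
    if features.get("vascular_injury"):          # rule 1: overrides depth
        return 4
    depth = features.get("laceration_depth_cm", 0.0)
    if depth > 3.0:                              # rule 2: deep laceration
        return 3
    if depth > 1.0:                              # rule 3: moderate laceration
        return 2
    return 1                                     # default: minor finding
```

In the neural-symbolic setting, the `features` dictionary would be populated by detection/segmentation networks, while the rules remain inspectable.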