1,679 research outputs found

    Applying Machine Learning Algorithms for the Analysis of Biological Sequences and Medical Records

    Modern sequencing technology has revolutionized genomic research and triggered explosive growth in DNA, RNA, and protein sequence data. Inferring structure and function from biological sequences is a fundamentally important task in genomics and proteomics. With the development of statistical and machine learning methods, an integrated, user-friendly tool containing state-of-the-art data mining methods is needed. Here, we propose SeqFea-Learn, a comprehensive Python pipeline that integrates multiple steps: feature extraction, dimensionality reduction, feature selection, and the construction of predictive models based on machine learning and deep learning approaches for sequence analysis. We used enhancer, RNA N6-methyladenosine site, and protein-protein interaction datasets to validate the tool. The results show that the tool can effectively perform biological sequence analysis and classification tasks. This dissertation also covers applying machine learning algorithms to electronic medical record (EMR) data. Chronic kidney disease (CKD) is prevalent across the world and is characterized by an estimated glomerular filtration rate (eGFR). The progression of kidney disease can be predicted if future eGFR can be accurately estimated using predictive analytics. Thus, I present an eGFR prediction model built using Random Forest regression. The dataset includes demographic, clinical, and laboratory information from a regional primary health care clinic. The final model included eGFR, age, gender, body mass index (BMI), obesity, hypertension, and diabetes, and achieved a mean coefficient of determination of 0.95. The estimated eGFRs were then used to classify patients into CKD stages with high macro-averaged and micro-averaged metrics.
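A minimal, hypothetical sketch of the eGFR workflow described above: a Random Forest regressor (here scikit-learn) trained on synthetic demographic and clinical features, with predictions mapped to CKD stages. The feature set and the synthetic data are assumptions for illustration; only the stage cut-offs (KDIGO G1-G5) are standard.

```python
# Illustrative sketch, not the dissertation's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def ckd_stage(egfr):
    """Map an eGFR value (mL/min/1.73 m^2) to a KDIGO CKD stage."""
    if egfr >= 90: return "G1"
    if egfr >= 60: return "G2"
    if egfr >= 45: return "G3a"
    if egfr >= 30: return "G3b"
    if egfr >= 15: return "G4"
    return "G5"

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: baseline eGFR, age, BMI, hypertension, diabetes.
X = np.column_stack([
    rng.uniform(15, 120, n),   # baseline eGFR
    rng.uniform(20, 90, n),    # age
    rng.uniform(18, 40, n),    # BMI
    rng.integers(0, 2, n),     # hypertension flag
    rng.integers(0, 2, n),     # diabetes flag
])
# Synthetic "future eGFR": decline driven by baseline, age and comorbidity.
y = 0.9 * X[:, 0] - 0.1 * (X[:, 1] - 40) - 3 * X[:, 3] - 4 * X[:, 4] \
    + rng.normal(0, 3, n)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:400], y[:400])
r2 = model.score(X[400:], y[400:])   # held-out coefficient of determination
stages = [ckd_stage(v) for v in model.predict(X[400:])]
```

With the fixed seed the held-out R² is high because the synthetic target is nearly linear in the features; the 0.95 reported above applies to the dissertation's clinical dataset, not to this toy.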

    Monitoring thermal ablation via microwave tomography. An ex vivo experimental assessment

    Thermal ablation treatments are gaining considerable attention in clinical practice thanks to their reduced invasiveness and their ability to treat non-surgical patients. The effectiveness of these treatments, and their impact on hospital routine, would increase significantly if they were paired with a monitoring technique able to track the evolution of the treated area in real time. This is particularly relevant in microwave thermal ablation, where the capability of treating larger tumors in a shorter time calls for proper monitoring. Current diagnostic imaging techniques do not provide effective solutions to this issue for a number of reasons, including economic sustainability and safety. Hence, the development of alternative modalities is of interest. Microwave tomography, which aims at imaging the electromagnetic properties of a target under test, has recently been proposed for this purpose, given the significant temperature-dependent changes that thermal ablation induces in the dielectric properties of human tissues. In this paper, the outcomes of the first ex vivo experimental study, performed to assess the expected potential of microwave tomography, are presented. The paper describes the validation study dealing with the imaging of the changes occurring during thermal ablation treatments. The experimental test was carried out on two ex vivo bovine liver samples, and the reported results show the capability of microwave tomography to image the transition between ablated and untreated tissue. Moreover, the discussion section provides some guidelines for improving the achievable performance.

    The role of the hippocampus in generalizing configural relationships

    The hippocampus has been implicated in integrating information across separate events in support of mnemonic generalizations. These generalizations may be underpinned by processes at both encoding (linking similar information across events) and retrieval ("on-the-fly" generalization). However, the relative contribution of the hippocampus to encoding- and retrieval-based generalizations is poorly understood. Using fMRI in humans, we investigated the hippocampal role in gradually learning a set of spatial discriminations and subsequently generalizing them in an acquired equivalence task. We found a highly significant correlation between individuals' performance on a generalization test and hippocampal activity during the test, providing evidence that hippocampal processes support on-the-fly generalization at retrieval. Within the same hippocampal region there was also a correlation between activity during the final stage of learning (when all associations had been learnt but no generalization was required) and subsequent generalization performance. We suggest that the hippocampus spontaneously retrieves prior events that share overlapping features with the current event. This process may also support the creation of generalized representations during encoding. These findings support the view that the hippocampus contributes to both encoding- and retrieval-based generalization via the same basic mechanism: retrieval of similar events sharing common features.
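The key analysis above is an across-participants correlation between test-phase hippocampal activity and generalization performance. A small sketch of that computation follows, with made-up per-subject values; a real analysis would correlate fMRI beta estimates with behavioral accuracy.

```python
# Pearson correlation across participants (toy data, for illustration only).
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-subject generalization accuracy and hippocampal ROI activity.
accuracy = [0.55, 0.60, 0.70, 0.72, 0.80, 0.85, 0.90]
activity = [0.10, 0.15, 0.30, 0.28, 0.45, 0.50, 0.62]
r = pearson_r(accuracy, activity)
```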

    Deep Neural Network-Based Learning of Medical Concept and Patient Representations with Applications to Healthcare Problems

    Doctoral dissertation (Ph.D.) -- Seoul National University Graduate School, Department of Electrical and Computer Engineering, August 2022. Advisor: Kyomin Jung. This dissertation proposes deep neural network-based medical concept and patient representation learning methods, built on national health insurance claims data (a nationwide sample cohort DB), to solve two healthcare tasks: clinical outcome prediction and post-marketing adverse drug reaction (ADR) signal detection. First, we propose SAF-RNN, a Recurrent Neural Network (RNN)-based model that learns a deep patient representation from clinical sequences and patient characteristics. Our proposed model fuses different types of patient records using feature-based gating and self-attention. We demonstrate that our model effectively extracts high-level associations between the two heterogeneous record types, achieving state-of-the-art performance in predicting the risk of cardiovascular disease (CVD). Secondly, based on the observation that distributed medical code embeddings capture temporal proximity between medical codes, we introduce a graph structure to enhance the code embeddings with this temporal information. We construct a graph using the distributed code embeddings and statistical information from the claims data, and then propose Graph Neural Network (GNN)-based representation learning for post-marketing ADR detection. Our model shows competitive performance and provides valid ADR candidates, including cases absent from existing ADR databases. Finally, rather than using patient records alone, we utilize a knowledge graph to augment the patient representation with prior medical knowledge: the subgraph corresponding to a patient's records is extracted to form a personalized knowledge graph, whose representation is learned with a GNN. Using SAF-RNN and the GNN, a deep patient representation is learned from the clinical sequences and the personalized medical knowledge.
It is then used to predict clinical outcomes, i.e., next diagnosis prediction and CVD risk prediction, resulting in state-of-the-art performance.
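The graph-construction step in the ADR work can be sketched as follows: medical codes are linked whenever their embedding vectors are sufficiently similar. The code names, toy vectors, and 0.7 similarity threshold below are assumptions for illustration; in the dissertation, the embeddings come from a skip-gram model over claims sequences, and the graph additionally uses statistical information from the data.

```python
# Hedged sketch: build a drug-disease graph by thresholding cosine
# similarity between (made-up) medical-code embedding vectors.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def build_similarity_graph(embeddings, threshold=0.7):
    """Return an undirected edge set linking codes whose embedding
    cosine similarity exceeds `threshold`."""
    codes = sorted(embeddings)
    edges = set()
    for i, a in enumerate(codes):
        for b in codes[i + 1:]:
            if cosine(embeddings[a], embeddings[b]) > threshold:
                edges.add((a, b))
    return edges

# Hypothetical embeddings for two drug codes and two diagnosis codes.
emb = {
    "drug:metformin":    [0.9, 0.1, 0.0],
    "dx:diabetes":       [0.8, 0.2, 0.1],
    "drug:statin":       [0.0, 0.9, 0.3],
    "dx:hyperlipidemia": [0.1, 0.8, 0.4],
}
edges = build_similarity_graph(emb)
```

In this toy example only the two clinically related drug-diagnosis pairs end up connected, which is the property the GNN then exploits to propagate information between related codes.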

    Methodological challenges and analytic opportunities for modeling and interpreting Big Healthcare Data

    Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analysis of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data, we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy, and its hallmark will be 'team science'.
    http://deepblue.lib.umich.edu/bitstream/2027.42/134522/1/13742_2016_Article_117.pd

    Analysis of Dimensionality Reduction Techniques on Big Data

    Due to digitization, a huge volume of data is being generated across several sectors such as healthcare, production, sales, IoT devices, the Web, and organizations. Machine learning algorithms are used to uncover patterns among the attributes of this data. Hence, they can be used to make predictions that medical practitioners and people at the managerial level can use to make executive decisions. Not all attributes in the generated datasets are important for training the machine learning algorithms. Some attributes might be irrelevant, and some might not affect the outcome of the prediction. Ignoring or removing these irrelevant or less important attributes reduces the burden on machine learning algorithms. In this work, two prominent dimensionality reduction techniques, Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA), are investigated on four popular Machine Learning (ML) algorithms, Decision Tree Induction, Support Vector Machine (SVM), Naive Bayes Classifier and Random Forest Classifier, using the publicly available Cardiotocography (CTG) dataset from the University of California, Irvine Machine Learning Repository. The experimental results show that PCA outperforms LDA on all measures, and that the performance of the Decision Tree and Random Forest classifiers is not much affected by using PCA or LDA. To further analyze the performance of PCA and LDA, the experimentation was carried out on the Diabetic Retinopathy (DR) and Intrusion Detection System (IDS) datasets. These results show that ML algorithms with PCA produce better results when the dimensionality of the datasets is high; when the dimensionality is low, the ML algorithms without dimensionality reduction yield better results.
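For reference, PCA itself reduces to centering the data and keeping the leading singular vectors. A minimal numpy sketch follows, using a toy matrix (an assumption, not the CTG data) in which three informative dimensions are padded with low-variance noise:

```python
# Minimal PCA via SVD: center, decompose, keep the top components.
import numpy as np

def pca(X, n_components):
    """Project X onto its top n_components principal components;
    also return the explained-variance ratio of those components."""
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S ** 2) / (S ** 2).sum()        # variance ratio per PC
    return Xc @ Vt[:n_components].T, explained[:n_components]

rng = np.random.default_rng(0)
# Toy data: 3 informative dimensions plus 7 low-variance noise dimensions.
base = rng.normal(size=(200, 3))
X = np.hstack([base @ rng.normal(size=(3, 3)),
               0.01 * rng.normal(size=(200, 7))])
Z, ratio = pca(X, 2)   # 10-dimensional data projected to 2 components
```

On data like this, a couple of components capture most of the variance, which is the high-dimensional regime in which the abstract reports PCA helping the downstream classifiers most.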