12 research outputs found

    Analysis of WEKA data mining algorithms Bayes net, random forest, MLP and SMO for heart disease prediction system: A case study in Iraq

    Data mining is defined as a search through large amounts of data for valuable information. Association rules, grouping, clustering, prediction, and sequence modelling are some of the most essential and general strategies for data extraction. Data processing plays a major role in disease detection in the healthcare industry. A variety of examinations is normally required to diagnose a patient; data mining strategies can reduce the number of examinations needed, which matters for both time and outcomes. Heart disease is a life-threatening disorder, and such health problems are now widespread and occur in many combinations of conditions. Today, hidden information in healthcare data is important for decision-making. For the prediction of cardiovascular problems, the Weka 3.8.3 toolkit is used in this analysis with data mining algorithms such as sequential minimal optimization (SMO), multilayer perceptron (MLP), random forest, and Bayes net. The collected results comprise prediction accuracy, the receiver operating characteristic (ROC) curve, and the precision-recall curve (PRC) value. Bayes net (94.5%) and random forest (94%) show better performance than the sequential minimal optimization (SMO) and multilayer perceptron (MLP) methods.
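    A minimal sketch of this kind of comparison, using scikit-learn analogues of the four WEKA classifiers rather than Weka itself (GaussianNB standing in for Bayes net, SVC for SMO); the dataset file and the "target" column name are placeholders, not the study's actual data.

```python
# Compare four classifier families on a heart-disease table, reporting
# cross-validated accuracy and ROC AUC, as in the abstract above.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

df = pd.read_csv("heart_disease.csv")              # placeholder dataset
X, y = df.drop(columns=["target"]), df["target"]   # placeholder label column

models = {
    "bayes_like": GaussianNB(),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "mlp": MLPClassifier(max_iter=1000, random_state=0),
    "smo_like_svm": SVC(probability=True, random_state=0),
}

for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean()
    print(f"{name}: accuracy={acc:.3f}  ROC AUC={auc:.3f}")
```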

    Proposta de um sistema de monitoramento e diagnóstico de eletrocardiograma portátil.

    This work presents a signal-processing proposal demonstrated in the implementation of a system for diagnosing heart disease. The system receives the lead signals of an electrocardiogram supplied by a database. These signals are filtered and processed so that the system's output is a possible diagnosis for the patient under analysis. The adopted method is based on fuzzy clustering, which reduces the number of data samples to be processed in order to simplify the processing and reduce the hardware dedicated to this task. Fuzzy clustering extracts the main characteristics, or clusters, of an input signal; through these characteristics and the membership functions that relate the clusters, it provides rules that can be used in many kinds of control and decision-making applications. This work demonstrates that fuzzy clustering can be used to generate the clusters that represent the main characteristics of an electrocardiogram signal, thereby enabling a diagnosis of probable heart diseases. To generate the cardiopathy diagnosis, correlation is also used to compare the clusters of an unknown electrocardiogram signal with the clusters of electrocardiogram signals from a database of known diagnoses. The comparison with the highest correlation value is taken as the patient's probable diagnosis. The work reports the result of applying fuzzy clustering to reduce the number of samples to be processed, through simulations of a system created to validate the technique, as well as the result of implementing this system on an FPGA. The reduced number of samples produced by the clustering process makes the processing simpler and simplifies the hardware implementation. In the tests, the method achieved 85% correct diagnoses.
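    A minimal sketch of the two-stage idea described above, with synthetic signals standing in for real ECG leads: a hand-rolled fuzzy c-means reduces each signal to a few cluster centres, and Pearson correlation against centres from signals of known diagnosis picks the closest match. The cluster count and the reference labels are illustrative assumptions, not the dissertation's configuration.

```python
import numpy as np

def fuzzy_cmeans(samples, n_clusters=4, m=2.0, n_iter=100):
    """Minimal fuzzy c-means over 1-D samples; returns sorted cluster centres."""
    rng = np.random.default_rng(0)
    u = rng.random((n_clusters, samples.size))
    u /= u.sum(axis=0)                              # memberships sum to 1 per sample
    for _ in range(n_iter):
        um = u ** m
        centres = um @ samples / um.sum(axis=1)     # membership-weighted means
        dist = np.abs(samples[None, :] - centres[:, None]) + 1e-9
        u = 1.0 / dist ** (2.0 / (m - 1.0))
        u /= u.sum(axis=0)
    return np.sort(centres)

# Placeholder signals: one unknown recording, two references with known labels.
t = np.linspace(0, 1, 500)
unknown = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
references = {"normal": np.sin(2 * np.pi * 5 * t), "abnormal": np.sin(2 * np.pi * 9 * t)}

unknown_centres = fuzzy_cmeans(unknown)
scores = {label: np.corrcoef(unknown_centres, fuzzy_cmeans(sig))[0, 1]
          for label, sig in references.items()}
print("most likely diagnosis:", max(scores, key=scores.get))   # highest correlation wins
```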

    Advanced Signal Processing in Wearable Sensors for Health Monitoring

    Smart wearable devices on a miniature scale are becoming increasingly available, typically in the form of smart watches and other connected devices. Consequently, devices to assist in measurements such as electroencephalography (EEG), electrocardiogram (ECG), electromyography (EMG), blood pressure (BP), photoplethysmography (PPG), heart rhythm, respiration rate, apnoea, and motion detection are becoming more common, and play a significant role in healthcare monitoring. The industry is placing great emphasis on making these devices and technologies available on smart devices such as phones and watches. Such measurements are clinically and scientifically useful for real-time monitoring, long-term care, and diagnostic and therapeutic techniques. A persistent issue, however, is that recorded data are usually noisy, contain many artefacts, and are affected by external factors such as movement and physical conditions. In order to obtain accurate and meaningful indicators, the signal has to be processed and conditioned so that the measurements are accurate and free from noise and disturbances. In this context, many researchers have used recent technological advances in wearable sensors and signal processing to develop smart and accurate wearable devices for clinical applications. The processing and analysis of physiological signals is a key issue for these smart wearable devices. Consequently, ongoing work in this field includes research on filtration, quality checking, signal transformation and decomposition, feature extraction and, most recently, machine learning-based methods.
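    A minimal illustration of the signal-conditioning step discussed above: a zero-phase band-pass filter removing baseline wander and high-frequency noise from a synthetic ECG-like trace. The sampling rate and cut-off frequencies are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                                    # assumed wearable sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t)           # stand-in for the cardiac component
wander = 0.5 * np.sin(2 * np.pi * 0.2 * t)    # baseline wander (motion artefact)
noisy = clean + wander + 0.2 * np.random.default_rng(0).normal(size=t.size)

# 4th-order Butterworth band-pass, applied forward and backward (zero phase).
b, a = butter(N=4, Wn=[0.5, 40.0], btype="bandpass", fs=fs)
conditioned = filtfilt(b, a, noisy)
print("residual RMS error vs. clean signal:", np.sqrt(np.mean((conditioned - clean) ** 2)))
```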

    Assessment and Control of a Cavitation-Enabled Therapy for Minimally Invasive Myocardial Reduction

    Hypertrophic cardiomyopathy (HCM), which occurs in about 1 in 500 individuals globally, can lead to sudden death in adults without prior symptoms. Echocardiography is commonly used to diagnose hypertrophic cardiomyopathy. Current treatment involves invasive, high-risk procedures such as surgery or catheter-based ablation of the septum to potentially prevent left ventricular outflow tract obstruction. A novel technique, called Myocardial Cavitation-Enabled Therapy (MCET), has been proposed as a means to achieve minimally invasive myocardial reduction, i.e. heart tissue ablation. MCET aims to target hypertrophic heart muscle over time without substantial tissue scarring. The treatment employs contrast echocardiography at higher-than-diagnostic pressure amplitudes to produce scattered microlesions (clusters of dead cells) by cavitating contrast agent microbubbles. The assessment and control of MCET are explored in three contexts. First, a computer-aided 3-D quantitative evaluation scheme for acute studies is developed to characterize macrolesions (the targeted region for treatment) based on histology sections, including lesion size and lesion density. The characterization is based on brightfield and fluorescence histological images as available in acute preclinical studies. The radially symmetric model employed to characterize macrolesion density is suited to studies that use a single focused beam for treatment. This methodology provides a volume-oriented, quantity-sensitive therapy evaluation. Results from parametric studies of MCET demonstrate that the quantitative scoring scheme reduces visual scoring ambiguity, overcomes the limitations of traditional visual scoring, and works for cases with a large histologically identified lesion count, i.e. it has an appropriate dynamic range for evaluating therapeutic applications. The results presented here show that MCET-induced macrolesions grow radially as the acoustic pressure amplitude increases, and that using a swept beam appears to shorten treatment time. Second, MCET shows great potential as a minimally invasive myocardial tissue reduction therapy after long-term healing. Six-week chronic studies show the maturation of MCET-induced microlesions, with quantitative results for the fibrotic tissue fraction, and tissue reduction in an HCM rat model is demonstrated by heart muscle wall shrinkage of about 16%, a therapeutically useful magnitude. The MCET method may be improved by adjuvant treatment with steroids and hypertension medication to aid fibrosis reduction. Further development and refinement of MCET treatment for HCM in larger animals should fill the need for a new clinical treatment option. Third, the feasibility of detecting, quantifying, and localizing microbubble cavitation is investigated for treatment monitoring and control. A passive cavitation imaging algorithm and variations of this algorithm provide spatial information on the extent of cavitation events, and cavitation sites can be localized with reasonable spatial resolution. The described passive imaging algorithm applies to two configurations: a Verasonics system (an ultrasound research platform) that both transmits high-intensity focused ultrasound (HIFU) and receives signals carrying cavitation signatures, and a Verasonics system used only for passive reception while a separate HIFU system delivers the therapeutic exposure.
    The overall therapy-monitoring scheme is able to adequately delineate the spatial location of triggered microbubble dynamics for real-time monitoring of microlesion accumulation. PhD, Biomedical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/136970/1/zhuyiy_1.pd
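    A much-simplified, time-domain delay-and-sum sketch of the passive mapping idea: each candidate pixel is scored by the energy of the delay-aligned sum of the element signals, and the brightest pixel estimates the cavitation source. The array geometry, sound speed, and the simulated emitter are placeholder assumptions, not the dissertation's Verasonics implementation.

```python
import numpy as np

c, fs = 1540.0, 20e6                          # sound speed (m/s), sampling rate (Hz)
elem_x = np.linspace(-0.01, 0.01, 64)         # 64-element linear array along x (m)
src = np.array([0.002, 0.03])                 # true emitter position (x, z) in metres

t = np.arange(2048) / fs
emission = np.sin(2 * np.pi * 2e6 * t) * np.exp(-((t - 20e-6) * 4e5) ** 2)

# Synthesise element data: the emission delayed by the travel time to each element.
rf = [np.interp(t - np.hypot(src[0] - x, src[1]) / c, t, emission) for x in elem_x]

xs, zs = np.linspace(-0.005, 0.005, 40), np.linspace(0.02, 0.04, 40)
image = np.zeros((zs.size, xs.size))
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        delays = np.hypot(x - elem_x, z) / c
        aligned = [np.interp(t + d, t, ch) for d, ch in zip(delays, rf)]
        image[iz, ix] = np.sum(np.sum(aligned, axis=0) ** 2)   # beamformed energy

iz, ix = np.unravel_index(image.argmax(), image.shape)
print(f"estimated source: x = {xs[ix]*1e3:.1f} mm, z = {zs[iz]*1e3:.1f} mm")
```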

    Elicitation of relevant information from medical databases: application to the encoding of secondary diagnoses

    In this thesis, we focus on encoding the inpatient episode into standard codes, a highly sensitive medical task in French hospitals requiring minute detail and accuracy, since the hospital's income directly depends on it. Encoding an inpatient episode includes encoding the primary diagnosis that motivates the hospital stay and the secondary diagnoses that occur during the stay. Unlike the primary diagnosis, encoding secondary diagnoses is prone to human error, due to the difficulty of collecting relevant data from different medical sources, or to the outright absence of relevant data that would help encode the diagnosis. We propose a retrospective analysis, using machine learning methods, of the encoding task for selected secondary diagnoses. The PMSI database, a large medical database that documents all information on hospital stays in France, is analysed in order to extract, from previously encoded inpatient episodes, the decisive features for encoding a difficult secondary diagnosis that occurred together with a frequent primary diagnosis. Consequently, at the end of an encoding session, once all the features are available, we propose to help the coders by suggesting a list of relevant encodings together with the features used to predict these encodings. A number of challenges must be addressed to develop an efficient encoding help system: expert knowledge of the medical domain, and an efficient methodology for exploiting the medical database with machine learning methods. Regarding medical domain knowledge, we collaborate with expert coders at a local hospital in order to obtain expert insight on some secondary diagnoses that are difficult to encode and to evaluate the results of the proposed methodology. Regarding the exploitation of medical databases with machine learning, and more specifically with Feature Selection (FS) methods, we focus on resolving several issues: the incompatible format of medical databases, the excessive number of features in medical databases, and the unstable features extracted from them. To address the incompatible format, which stems from relational databases, we propose a series of transformations that make the database and its features exploitable by any FS method. To limit the effect of the excessive number of features, usually driven by the number of diagnoses and medical procedures, we group these features into an appropriate representation level and study which representation level works best. Finally, the datasets linked to the studied diagnoses are highly imbalanced, because the positive and negative examples are unequally represented; most existing FS methods do not perform well on such data, even when sampling strategies are used, and the extracted features are unstable.
    To resolve this last issue, we propose a methodology that extracts a stable set of features by sampling the dataset multiple times and extracting the relevant features from each sampled dataset, regardless of the sampling method and the FS method used. We evaluate the methodology by building a classification model that predicts the studied diagnoses from the extracted features; the performance of the classification model indicates the quality of the extracted features, since good-quality features produce a good classification model. Two scales of the PMSI database are used: local and regional. The classification model is built using the local scale of the PMSI and tested using both the local and regional scales. The evaluations showed that the extracted features are good features for encoding the secondary diagnoses. We therefore propose applying our methodology to increase the integrity of the encoded diagnoses and to prevent missing important encodings that affect the hospital's budget, by providing the coders with the potential encodings of the secondary diagnoses as well as the features that lead to these encodings.
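    A hedged sketch of the stable-feature idea summarised above: the imbalanced dataset is resampled several times, a feature selector runs on each balanced sample, and only features chosen in most rounds are kept. The selector, the balancing scheme, and the thresholds are illustrative choices, not the thesis's exact pipeline, and a synthetic dataset stands in for the PMSI data.

```python
import numpy as np
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.utils import resample

# Imbalanced stand-in for a diagnosis-encoding dataset (5% positive class).
X, y = make_classification(n_samples=2000, n_features=50, n_informative=8,
                           weights=[0.95, 0.05], random_state=0)

counts, n_rounds = Counter(), 30
for seed in range(n_rounds):
    # Balanced bootstrap: keep all positives, subsample an equal number of negatives.
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    idx = np.concatenate([pos, resample(neg, n_samples=pos.size, random_state=seed)])
    selector = SelectKBest(f_classif, k=10).fit(X[idx], y[idx])
    counts.update(np.where(selector.get_support())[0].tolist())

stable = sorted(f for f, c in counts.items() if c >= 0.8 * n_rounds)   # kept in >=80% of rounds
print("stable feature indices:", stable)
```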

    Klassifikation morphologischer und pathologischer Strukturen in koronaren Gefäßen auf Basis intravaskulärer Ultraschallaufnahmen zur klinischen Anwendung in einem IVB-System

    Diseases of the cardiovascular system are responsible for almost 50% of deaths in Germany, with atherosclerosis (colloquially, "hardening of the arteries") a dominant clinical picture. It is therefore not surprising that atherosclerosis has been a field of extensive investigation since the beginnings of scientific medicine. Technical progress in imaging methods in particular has made it possible to develop novel diagnostic and therapeutic methods. Intravascular ultrasound has become a gold standard in the diagnosis of atherosclerotic disease and, in combination with intravascular brachytherapy, a promising basic technique for therapeutic measures. A basic prerequisite of almost every image-based intervention, however, is the separation of the image data into anatomically and pathologically differentiated, salient regions. Given ever larger volumes of data, such preparation can only be guaranteed with computer support, using classification algorithms adapted to the problem. The goal of this work was therefore to provide new methods for feature extraction and algorithms for the classification of morphological and pathological structures in coronary vessels. It also quickly became clear from the initial question that the research project has points of contact with further highly relevant inter- and intradisciplinary research topics, for example histology, systems biology, and chemical engineering. On the side of the application scenarios, too, partly completely new and innovative paths were taken; one example is an e-learning approach for "translating" digital image data into haptically perceivable reliefs for blind and visually impaired pupils. In view of these partially divergent perspectives, a generalised implementation abstracted from the explicit question was also an orientation of the work. Following this intention, three major methodological and conceptual developments were realised within the work: an expert system for approximating arterial compartments by means of fuzzy elliptical templates, a novel, efficient approach to the signal-theoretic extraction of textural features, and the establishment of machine learning methods integrating a priori knowledge. Through the consistent integration of statistical quality measures, a strong feedback loop between classification and evaluation approaches was also ensured. Common to all approaches is the aim of maintaining portability despite highly application-oriented implementations. At a higher level of abstraction, the intention of the work can thus also be understood as the "generalised use of signal-theoretic features for the classification of heterogeneous compartments, differentiated by their textural characteristics, by means of machine learning methods".
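    A small sketch of the general texture-classification idea: simple hand-rolled texture descriptors are computed on image patches and fed to a standard classifier. The descriptors and the synthetic patches are illustrative only; the thesis develops its own signal-theoretic feature extraction and learning methods.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def texture_features(patch):
    """Per-patch descriptors: mean, variance, gradient energy, histogram entropy."""
    hist, _ = np.histogram(patch, bins=32, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    gy, gx = np.gradient(patch.astype(float))
    return [patch.mean(), patch.var(), np.mean(gx**2 + gy**2), -np.sum(p * np.log(p))]

rng = np.random.default_rng(0)
# Synthetic stand-ins for two tissue classes: smooth vs. strongly speckled 32x32 patches.
smooth  = [rng.normal(0.4, 0.05, (32, 32)).clip(0, 1) for _ in range(100)]
speckle = [rng.normal(0.6, 0.20, (32, 32)).clip(0, 1) for _ in range(100)]
X = np.array([texture_features(p) for p in smooth + speckle])
y = np.array([0] * 100 + [1] * 100)

print("cross-validated accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())
```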

    Quantifying atherosclerosis in vasculature using ultrasound imaging

    Cerebrovascular disease accounts for approximately 30% of the global burden associated with cardiovascular diseases [1]. According to the World Stroke Organisation, there are approximately 13.7 million new stroke cases annually, and just under six million people will die from stroke each year [2]. The underlying cause of this disease is atherosclerosis – a vascular pathology which is characterised by thickening and hardening of blood vessel walls. When fatty substances such as cholesterol accumulate on the inner linings of an artery, they cause a progressive narrowing of the lumen referred to as a stenosis. Localisation and grading of the severity of a stenosis is important for practitioners to assess the risk of rupture, which leads to stroke. Ultrasound imaging is popular for this purpose: it is low cost, non-invasive, and permits a quick assessment of vessel geometry and stenosis by measuring the intima-media thickness. Research is showing that 3D monitoring of plaque progression may provide a better indication of sites which are at risk of rupture. Various metrics have been proposed. Among these, the quantification of plaques by measuring vessel wall volume (VWV) using the segmented media-adventitia boundaries (MAB) and lumen-intima boundaries (LIB) has been shown to be sensitive to temporal changes in carotid plaque burden. Thus, methods to segment these boundaries are required to help generate VWV measurements with high accuracy, less user interaction, and increased robustness to variability in different user acquisition protocols. This work proposes three novel methods to address these requirements, ultimately producing a highly accurate, fully automated segmentation algorithm which works on intensity-invariant data. The first method generates a novel, intensity-invariant representation of ultrasound data by creating phase-congruency maps from raw, unprocessed radio-frequency ultrasound information. Experiments showed that this representation retains the necessary anatomical structural information to facilitate segmentation, while being invariant to changes in amplitude introduced by the user. The second method is the novel application of Deep Convolutional Networks (DCN) to carotid ultrasound images to achieve fully automatic delineation of the MAB boundaries, in addition to the use of a novel fusion of amplitude and phase-congruency data as an image source. Experiments showed that the DCN produces highly accurate and automated results, and that the fusion of amplitude and phase yields superior results to either one alone. The third method is a new geometrically constrained objective function for the network's Stochastic Gradient Descent optimisation, tuning it to the segmentation problem at hand, while also developing the network further to concurrently delineate both the MAB and LIB to produce vessel wall contours. Experiments carried out here also show that the novel geometric constraints improve the segmentation results on both MAB and LIB contours. In conclusion, the presented work provides significant novel contributions to the field of carotid ultrasound segmentation and, with future work, could lead to implementations which facilitate plaque progression analysis for the end user.
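    A hedged sketch of one way such a geometric constraint can be added to a segmentation objective: alongside Dice terms for the two boundary masks, lumen-intima (LIB) probability mass falling outside the media-adventitia (MAB) mask is penalised, encoding the prior that the LIB lies inside the MAB. The tensor shapes, the weighting, and the exact penalty are assumptions for illustration, not the thesis's objective function.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a probability map against a binary mask."""
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def constrained_loss(pred_mab, pred_lib, gt_mab, gt_lib, lam=0.1):
    seg = dice_loss(pred_mab, gt_mab) + dice_loss(pred_lib, gt_lib)
    containment = (pred_lib * (1 - pred_mab)).mean()   # LIB predicted outside MAB
    return seg + lam * containment

# Toy usage with random probability maps and masks of shape (batch, 1, H, W).
pred_mab, pred_lib = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
gt_mab = (torch.rand(1, 1, 64, 64) > 0.5).float()
gt_lib = gt_mab * (torch.rand(1, 1, 64, 64) > 0.5).float()
print(constrained_loss(pred_mab, pred_lib, gt_mab, gt_lib))
```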

    Collected Papers (on Neutrosophics, Plithogenics, Hypersoft Set, Hypergraphs, and other topics), Volume X

    This tenth volume of Collected Papers includes 86 papers in English and Spanish languages comprising 972 pages, written between 2014-2022 by the author alone or in collaboration with the following 105 co-authors (alphabetically ordered) from 26 countries: Abu Sufian, Ali Hassan, Ali Safaa Sadiq, Anirudha Ghosh, Assia Bakali, Atiqe Ur Rahman, Laura Bogdan, Willem K.M. Brauers, Erick González Caballero, Fausto Cavallaro, Gavrilă Calefariu, T. Chalapathi, Victor Christianto, Mihaela Colhon, Sergiu Boris Cononovici, Mamoni Dhar, Irfan Deli, Rebeca Escobar-Jara, Alexandru Gal, N. Gandotra, Sudipta Gayen, Vassilis C. Gerogiannis, Noel Batista Hernández, Hongnian Yu, Hongbo Wang, Mihaiela Iliescu, F. Nirmala Irudayam, Sripati Jha, Darjan Karabašević, T. Katican, Bakhtawar Ali Khan, Hina Khan, Volodymyr Krasnoholovets, R. Kiran Kumar, Manoranjan Kumar Singh, Ranjan Kumar, M. Lathamaheswari, Yasar Mahmood, Nivetha Martin, Adrian Mărgean, Octavian Melinte, Mingcong Deng, Marcel Migdalovici, Monika Moga, Sana Moin, Mohamed Abdel-Basset, Mohamed Elhoseny, Rehab Mohamed, Mohamed Talea, Kalyan Mondal, Muhammad Aslam, Muhammad Aslam Malik, Muhammad Ihsan, Muhammad Naveed Jafar, Muhammad Rayees Ahmad, Muhammad Saeed, Muhammad Saqlain, Muhammad Shabir, Mujahid Abbas, Mumtaz Ali, Radu I. Munteanu, Ghulam Murtaza, Munazza Naz, Tahsin Oner, Gabrijela Popović, Surapati Pramanik, R. Priya, S.P. Priyadharshini, Midha Qayyum, Quang-Thinh Bui, Shazia Rana, Akbara Rezaei, Jesús Estupiñán Ricardo, Rıdvan Sahin, Saeeda Mirvakili, Said Broumi, A. A. Salama, Flavius Aurelian Sârbu, Ganeshsree Selvachandran, Javid Shabbir, Shio Gai Quek, Son Hoang Le, Florentin Smarandache, Dragiša Stanujkić, S. Sudha, Taha Yasin Ozturk, Zaigham Tahir, The Houw Iong, Ayse Topal, Alptekin Ulutaș, Maikel Yelandi Leyva Vázquez, Rizha Vitania, Luige Vlădăreanu, Victor Vlădăreanu, Ștefan Vlăduțescu, J. Vimala, Dan Valeriu Voinea, Adem Yolcu, Yongfei Feng, Abd El-Nasser H. Zaied, Edmundas Kazimieras Zavadskas.