
    A novel two-stage heart arrhythmia ensemble classifier

    Atrial fibrillation (AF) and ventricular arrhythmia (Arr) are among the most common and fatal cardiac arrhythmias in the world. Electrocardiogram (ECG) data collected as part of the UK Biobank represent an opportunity for analysis and classification of these two diseases in the UK. The main objective of our study is to investigate a two-stage model for the classification of individuals with AF and Arr in the UK Biobank dataset. The current literature addresses heart arrhythmia classification extensively; however, the data used by most researchers lack enough instances of these common diseases. Moreover, by proposing the two-stage model and separating normal from abnormal cases, we have improved the performance of the classifiers in detecting each specific disease. Our approach consists of two stages of classification. In the first stage, features of the ECG input are classified into two main classes: normal and abnormal. In the second stage, ECGs categorised as abnormal are further classified into one of the two diseases, AF or Arr. A diverse set of ECG features, such as the QRS duration, PR interval and RR interval, as well as covariates such as sex, BMI, age and other factors, are used in the modelling process. For both stages, we use the XGBoost classifier algorithm. The healthy population present in the data has been undersampled to tackle the class imbalance. This technique has been applied and evaluated using the UK Biobank resting ECG dataset. The main results of our paper are as follows: classification performance for the proposed approach has been measured using the F1 score, sensitivity (recall) and specificity. The results of the proposed system are 87.22%, 88.55% and 85.95% for average F1 score, average sensitivity and average specificity, respectively.
Contribution and significance: The performance level indicates that automatic detection of AF and Arr in participants present in the UK Biobank is more precise and efficient when done in a two-stage manner. Automatic detection and classification of AF and Arr individuals in this way would enable early diagnosis and prevent more serious consequences later in their lives.
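
    The two-stage routing described above can be sketched as follows. The rule-based stand-in classifiers, the feature names (`qrs_ms`, `rr_std_ms`) and the cutoff values are illustrative assumptions only; the study itself trains an XGBoost model at each stage on ECG features and covariates.

```python
# Minimal sketch of the two-stage classification scheme. The thresholds and
# rule-based stand-in classifiers below are hypothetical; the paper trains
# an XGBoost classifier at each stage.

def stage1_is_abnormal(features):
    # Stage 1: separate normal from abnormal ECGs.
    # Stand-in rule: flag a long QRS or a large RR spread (hypothetical cutoffs).
    return features["qrs_ms"] > 120 or features["rr_std_ms"] > 150

def stage2_label(features):
    # Stage 2: among abnormal ECGs, distinguish AF from other arrhythmia (Arr).
    # Stand-in rule: AF tends to show high RR variability (hypothetical cutoff).
    return "AF" if features["rr_std_ms"] > 150 else "Arr"

def classify(features):
    if not stage1_is_abnormal(features):
        return "normal"
    return stage2_label(features)

print(classify({"qrs_ms": 90, "rr_std_ms": 40}))    # normal
print(classify({"qrs_ms": 95, "rr_std_ms": 200}))   # AF
print(classify({"qrs_ms": 140, "rr_std_ms": 60}))   # Arr
```

    The point of the structure is that the stage-2 model only ever sees abnormal cases, so it can specialise in separating AF from Arr instead of also having to reject the (much larger) healthy population.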

    Research on Personalized Learning Resource Recommendation Based on Knowledge Graph Technology

    Facing the dilemma of learners' learning loss and information overload in information resources, a personalized learning resource recommendation algorithm is proposed, based on in-depth and extensive research on knowledge graphs. The algorithm relies on the similarity or correlation between learners' characteristics and course knowledge (learning resources) to make recommendations. It analyzes learners' characteristics in depth from four aspects: data collection and processing, model construction, resource and path recommendation, and model application, and establishes a multi-layered dynamic feature model for learners. It also analyzes the core elements of the curriculum knowledge graph, decomposes the curriculum knowledge into fine-grained knowledge units, and constructs a curriculum knowledge graph model. The experimental results indicate that this algorithm improves learners' learning efficiency and promotes their personalized development.
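
    The core recommendation step, ranking resources by their similarity to a learner's feature vector, can be sketched as below. The feature vectors, resource names and the use of cosine similarity are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch of similarity-based recommendation: learners and learning
# resources are represented as feature vectors over shared knowledge points,
# and resources are ranked by cosine similarity to the learner. All vectors
# and resource names here are hypothetical.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(learner_vec, resources, top_k=2):
    # resources: {resource_id: feature_vector}; rank by similarity to learner.
    ranked = sorted(resources,
                    key=lambda r: cosine(learner_vec, resources[r]),
                    reverse=True)
    return ranked[:top_k]

learner = [0.9, 0.1, 0.4]  # e.g. mastery/interest weights over three knowledge points
resources = {"intro_video": [1.0, 0.0, 0.2],
             "quiz_set":    [0.1, 1.0, 0.0],
             "case_study":  [0.8, 0.1, 0.5]}
print(recommend(learner, resources))
```

    In a knowledge-graph setting, the vector components would come from the graph itself (e.g. which knowledge units a resource covers and which units the learner has or has not mastered), rather than being set by hand.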

    Automated Arrhythmia Detection Based on RR Intervals

    Abnormal heart rhythms, also known as arrhythmias, can be life-threatening. Atrial fibrillation (AFIB) and atrial flutter (AFL) are examples of arrhythmia that affect a growing number of patients. This paper describes a method that can support clinicians during arrhythmia diagnosis. We propose a deep learning algorithm to discriminate AFIB, AFL, and Normal Sinus Rhythm (NSR) RR interval signals. The algorithm was designed with data from 4051 subjects. With 10-fold cross-validation, the algorithm achieved the following results: ACC = 99.98%, SEN = 100.00%, and SPE = 99.94%. These results are significant because they show that it is possible to automate arrhythmia detection in RR interval signals. Such a detection method makes economic sense because RR interval signals are cost-effective to measure, communicate, and process. Having such a cost-effective solution might lead to widespread long-term monitoring, which can help detect arrhythmia earlier. Detection can lead to treatment, which improves outcomes for patients.
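
    The input representation used above, the RR interval signal, is simply the sequence of times between consecutive R peaks. A minimal sketch of deriving it from R-peak timestamps (the timestamps below are made up; the paper's deep learning model itself is not reproduced):

```python
# Derive an RR interval signal from R-peak times. The peak times (seconds)
# are illustrative; irregular spacing like this is what an AFIB recording
# tends to produce.

def rr_intervals(r_peak_times):
    # RR interval = time between consecutive R peaks, in milliseconds.
    return [round((b - a) * 1000) for a, b in zip(r_peak_times, r_peak_times[1:])]

peaks = [0.00, 0.80, 1.62, 2.41, 3.05, 4.10]
print(rr_intervals(peaks))  # [800, 820, 790, 640, 1050]
```

    Because only peak times need to be stored and transmitted, rather than the full ECG waveform, this representation is what makes the monitoring cost-effective to measure, communicate, and process.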

    Hybrid Decision Support to Monitor Atrial Fibrillation for Stroke Prevention

    In this paper, we discuss hybrid decision support to monitor atrial fibrillation for stroke prevention. Hybrid decision support takes the form of human experts and machine algorithms working cooperatively on a diagnosis. The link to stroke prevention comes from the fact that patients with Atrial Fibrillation (AF) have a fivefold increased stroke risk. Early diagnosis, which leads to adequate AF treatment, can decrease the stroke risk by 66% and thereby prevent stroke. The monitoring service is based on Heart Rate (HR) measurements. The resulting signals are communicated and stored with Internet of Things (IoT) technology. A Deep Learning (DL) algorithm automatically estimates the AF probability. Based on this technology, we can offer four distinct services to healthcare providers: (1) universal access to patient data; (2) automated AF detection and alarm; (3) physician support; and (4) feedback channels. These four services create an environment where physicians can work symbiotically with machine algorithms to establish and communicate a high-quality AF diagnosis.

    A Study of R-R Interval Transition Matrix Features for Machine Learning Algorithms in AFib Detection

    Atrial Fibrillation (AFib) is a heart condition that occurs when electrophysiological malformations within heart tissues cause the atria to lose coordination with the ventricles, resulting in “irregularly irregular” heartbeats. Because symptoms are subtle and unpredictable, AFib diagnosis is often difficult or delayed. One possible solution is to build a system which predicts AFib based on the variability of R-R intervals (the distances between two R-peaks). This research aims to incorporate the transition matrix as a novel measure of R-R variability, while combining three segmentation schemes and two feature importance measures to systematically analyze the significance of individual features. The MIT-BIH dataset was first divided according to three segmentation schemes, consisting of 5-s, 10-s, and 25-s subsets. In total, 21 features, including the transition matrix features, were extracted from these subsets and used to train 11 machine learning classifiers. Next, permutation importance and tree-based feature importance calculations determined the most predictive features for each model. In summary, with Leave-One-Person-Out Cross Validation, classifiers under the 25-s segmentation scheme produced the best accuracies; specifically, Gradient Boosting (96.08%), Light Gradient Boosting (96.11%), and Extreme Gradient Boosting (96.30%). Among the eleven classifiers, the three gradient boosting models and Random Forest exhibited the highest overall performance across all segmentation schemes. Moreover, the permutation and tree-based importance results demonstrated that the transition matrix features were most significant with longer subset lengths.
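
    A transition matrix over R-R intervals can be sketched as follows: quantize each interval into a bin and count how often the rhythm moves from one bin to another between consecutive beats. The three-bin scheme and the edge values (700 ms and 1000 ms) are illustrative assumptions, not the paper's exact binning.

```python
# Sketch of a transition-matrix feature for R-R intervals: quantize each
# interval into short/medium/long bins (hypothetical edges at 700 ms and
# 1000 ms), count transitions between consecutive bins, and normalize each
# row into transition probabilities.

def transition_matrix(rr_ms, edges=(700, 1000)):
    def bin_of(x):
        # Number of edges the value meets or exceeds: 0=short, 1=medium, 2=long.
        return sum(x >= e for e in edges)

    n = len(edges) + 1
    counts = [[0] * n for _ in range(n)]
    for a, b in zip(rr_ms, rr_ms[1:]):
        counts[bin_of(a)][bin_of(b)] += 1
    rows = []
    for row in counts:
        s = sum(row)
        rows.append([c / s if s else 0.0 for c in row])
    return rows

# An irregular R-R sequence (ms) jumps between bins often, as in AFib,
# spreading probability mass off the matrix diagonal.
print(transition_matrix([650, 680, 1200, 660, 900, 950]))
```

    The flattened matrix entries then serve as classifier features: a steady rhythm concentrates mass on the diagonal, while an "irregularly irregular" rhythm spreads it across off-diagonal cells.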

    Advanced Information Processing Methods and Their Applications

    This Special Issue collects and presents breakthrough research on information processing methods and their applications. Particular attention is paid to the study of the mathematical foundations of information processing methods, quantum computing, artificial intelligence, digital image processing, and the use of information technologies in medicine.

    A machine learning taxonomic classifier for science publications

    Master's dissertation in Engineering and Management of Information Systems. The evolution of scientific production, associated with the growing cross-domain collaboration of knowledge and the increasing co-authorship of scientific works, remains supported by manual, highly subjective classification processes that are prone to misinterpretation. The very taxonomy on which this classification process is based is not consensual: governmental organizations resort to taxonomies that do not keep up with changes in scientific areas, while indexers and repositories seek to keep up with those changes. We find a reality distinct from what is expected, in which the domains under which scientific work is recorded can easily misrepresent the work itself. The taxonomy applied today by governmental bodies, such as the one that regulates scientific production in Portugal, is insufficient and limiting, and promotes classification into areas that are merely close to the intended ones, with great potential for error. An automatic classification process based on machine learning algorithms presents itself as a possible solution to the subjectivity problem in classification, and while it does not solve the issue of taxonomy mismatch, this work demonstrates the possibility with proven results. In this work, we propose a classification taxonomy and develop a process based on machine learning algorithms to solve the classification problem. We also present a set of directions for future work towards an increasingly representative classification of the evolution of science, one that is not intended to be airtight, but flexible and perhaps increasingly based on phenomena and not just disciplines.
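
    The basic shape of such an automatic taxonomic classifier can be sketched as below: build a word-frequency profile per taxonomy class from labeled abstracts, then assign a new publication to the class whose profile its words overlap most. The class labels, training texts and scoring rule are illustrative assumptions, not the dissertation's actual pipeline.

```python
# Minimal sketch of taxonomic classification of publications by text:
# a bag-of-words profile is built per class from example abstracts, and a
# new text is assigned to the class with the highest word-overlap score.
# Labels and example texts are hypothetical.
from collections import Counter

def tokens(text):
    return [w for w in text.lower().split() if w.isalpha()]

def class_profile(texts):
    # Aggregate word counts over all example texts of one class.
    total = Counter()
    for t in texts:
        total.update(tokens(t))
    return total

def classify(text, profiles):
    words = set(tokens(text))
    # Score each taxonomy class by how often the text's words occur
    # in that class's profile.
    scores = {label: sum(p[w] for w in words) for label, p in profiles.items()}
    return max(scores, key=scores.get)

profiles = {
    "computer_science": class_profile([
        "machine learning algorithms classify data",
        "neural networks learn from training data"]),
    "medicine": class_profile([
        "patients treatment diagnosis clinical trial",
        "heart disease diagnosis and treatment of patients"]),
}
print(classify("deep learning algorithms for data classification", profiles))
```

    A production system would replace the raw counts with TF-IDF weighting and a trained classifier, but the overall flow (labeled corpus per taxonomy node, text features, highest-scoring class) stays the same.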