
    Machine learning Ensemble Modelling to classify caesarean section and vaginal delivery types using cardiotocography traces

    Human visual inspection of Cardiotocography traces is used to monitor the foetus during labour and avoid neonatal mortality and morbidity. The problem, however, is that visual interpretation of Cardiotocography traces is subject to high inter- and intra-observer variability. Incorrect decisions, caused by misinterpretation, can lead to adverse perinatal outcomes and, in severe cases, death. This study presents a review of human Cardiotocography trace interpretation and argues that machine learning, used as a decision support system by obstetricians and midwives, may provide an objective measure alongside normal practices. This will help to increase predictive capacity and reduce negative outcomes. A robust methodology is presented for feature set engineering using an open database comprising 552 intrapartum recordings. State-of-the-art signal processing techniques are applied to raw Cardiotocography foetal heart rate traces to extract 13 features. Those with low discriminative capacity are removed using Recursive Feature Elimination. The dataset is imbalanced, with significant differences between the prior probabilities of normal deliveries and those delivered by caesarean section. This issue is addressed by oversampling the training instances using a synthetic minority oversampling technique to provide a balanced class distribution. Several simple, yet powerful, machine-learning algorithms are trained using the feature set, and their performance is evaluated with real test data. The results are encouraging for an ensemble classifier comprising Fisher's Linear Discriminant Analysis, Random Forest and Support Vector Machine classifiers: 87% (95% Confidence Interval: 86%, 88%) Sensitivity, 90% (95% CI: 89%, 91%) Specificity, 96% (95% CI: 96%, 97%) Area Under the Curve, and 9% (95% CI: 9%, 10%) Mean Square Error.
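A minimal sketch of the pipeline this abstract describes, on synthetic data: the 552-recording CTG dataset and its 13 engineered features are not reproduced here, and a hand-rolled k-NN interpolation stands in for the SMOTE implementation; only the structure (oversample the training set, then soft-vote an ensemble of Fisher's LDA, Random Forest and SVM) mirrors the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def smote(X_min, n_new, k=5):
    """Synthesize n_new minority samples by interpolating k-NN pairs (SMOTE-style)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    idx = nn.kneighbors(X_min, return_distance=False)[:, 1:]  # drop self-match
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = idx[i, rng.integers(k)]
        new.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.array(new)

# Imbalanced stand-in: ~10% minority ("caesarean") class, 13 features.
X, y = make_classification(n_samples=1000, n_features=13,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training set; the test set keeps the real distribution.
n_needed = (y_tr == 0).sum() - (y_tr == 1).sum()
X_bal = np.vstack([X_tr, smote(X_tr[y_tr == 1], n_needed)])
y_bal = np.concatenate([y_tr, np.ones(n_needed, dtype=int)])

ensemble = VotingClassifier(
    estimators=[("lda", LinearDiscriminantAnalysis()),
                ("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft").fit(X_bal, y_bal)
auc = roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1])
```

Soft voting averages the three classifiers' class probabilities, which is why the SVM is fitted with `probability=True`.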

    Machine Learning to Classify Cardiotocography for Fetal Hypoxia Detection

    Fetal hypoxia can have damaging consequences for babies, such as stillbirth and cerebral palsy. Cardiotocography (CTG) has been used to detect intrapartum fetal hypoxia during labor. It is a non-invasive technique that measures the fetal heart rate and uterine contractions. Visual CTG interpretation suffers from inconsistencies among clinicians, which can delay interventions. Machine learning (ML) has shown potential in classifying abnormal CTG, allowing automatic interpretation. In the absence of a gold standard, researchers have used various surrogate biomarkers to classify CTG, some of which were clinically irrelevant. We proposed using Apgar scores as the surrogate benchmark of babies' ability to recover from birth. Apgar scores assess newborns' ability to recover from active uterine contraction across five components: appearance, pulse, grimace, activity and respiration. The higher the Apgar score, the healthier the baby. We employed signal processing methods to pre-process 552 raw CTG recordings and extract validated features. We also included CTG-specific characteristics as outlined in the NICE guidelines. We employed ML techniques using 22 features and compared performance across ML classifiers. While we found that ML can distinguish CTG with low Apgar scores, results for the lowest Apgar scores, which are rare in the dataset we used, would benefit from more CTG data. An external dataset is needed to validate our model for generalizability and to ensure that it does not overfit a specific population. Clinical Relevance: This study demonstrated the potential of using a clinically relevant benchmark for classifying CTG to allow automatic early detection of hypoxia and reduce decision-making time in maternity units.
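The labelling idea is simple to sketch. The code below turns Apgar scores into a binary surrogate label (Apgar < 7 is the commonly used clinical cutoff for a low score; the paper's exact threshold is not stated here) and computes two illustrative FHR summary characteristics from a synthetic trace, not the study's 22 validated features.

```python
import numpy as np

# Toy Apgar scores for six deliveries; 1 = low Apgar (surrogate "at risk" label).
apgar = np.array([9, 8, 4, 10, 6, 7])
low_apgar = (apgar < 7).astype(int)

# Synthetic FHR trace: 10 minutes sampled at 4 Hz around a 140 bpm baseline.
rng = np.random.default_rng(0)
fhr = 140 + 5 * rng.standard_normal(4 * 60 * 10)

baseline = np.median(fhr)      # baseline heart rate estimate (bpm)
variability = fhr.std()        # crude proxy for overall FHR variability (bpm)
```

In the study itself, one feature vector per recording would be paired with its delivery's Apgar-derived label before training the classifiers.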

    A Strategy for Classification of “Vaginal vs. Cesarean Section” Delivery: Bivariate Empirical Mode Decomposition of Cardiotocographic Recordings

    We propose objective and robust measures for the classification of “vaginal vs. cesarean section” delivery by investigating temporal dynamics and complex interactions between fetal heart rate (FHR) and maternal uterine contraction (UC) recordings from cardiotocographic (CTG) traces. A multivariate extension of empirical mode decomposition (EMD) yields the intrinsic scales embedded in UC-FHR recordings while also retaining inter-channel (UC-FHR) coupling at multiple scales. The mode-alignment property of EMD results in a matched signal decomposition, in terms of frequency content, which paves the way for the selection of robust and objective time-frequency features for the problem at hand. Specifically, the instantaneous amplitude and instantaneous frequency of multivariate intrinsic mode functions are used to construct a class of features that capture nonlinear and nonstationary interactions in UC-FHR recordings. The proposed features are fed to a variety of modern machine learning classifiers (decision tree, support vector machine, AdaBoost) to delineate vaginal and cesarean dynamics. We evaluate the performance of the different classifiers on a real-world dataset using the following measures: sensitivity, specificity, area under the ROC curve (AUC) and mean squared error (MSE). Using all 40 proposed features, the AdaBoost classifier performs best, with 91.8% sensitivity, 95.5% specificity, 98% AUC, and 5% MSE. To conclude, using all proposed time-frequency features as input to machine learning classifiers can benefit clinical obstetric practitioners through a robust and automatic approach to the classification of fetal dynamics.
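The bivariate EMD step itself is not reproduced here, but the quantities the features are built from are standard: once a mode has been extracted, its instantaneous amplitude and instantaneous frequency come from the Hilbert transform of that mode. A sketch on a synthetic amplitude-modulated oscillation standing in for one intrinsic mode function:

```python
import numpy as np
from scipy.signal import hilbert

fs = 100.0                                  # sampling rate (Hz), illustrative
t = np.arange(0, 10, 1 / fs)
# Amplitude-modulated 2 Hz oscillation standing in for one intrinsic mode.
imf = (1 + 0.5 * np.sin(2 * np.pi * 0.2 * t)) * np.sin(2 * np.pi * 2.0 * t)

analytic = hilbert(imf)                     # analytic signal x(t) + i*H[x](t)
inst_amp = np.abs(analytic)                 # instantaneous amplitude (envelope)
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)  # instantaneous frequency, Hz
```

For this mode the instantaneous frequency hovers around 2 Hz while the envelope tracks the 0.2 Hz modulation; summary statistics of such curves are the kind of time-frequency feature fed to the classifiers.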

    A Comprehensive Review of Techniques for Processing and Analyzing Fetal Heart Rate Signals

    The availability of standardized guidelines regarding the use of electronic fetal monitoring (EFM) in clinical practice has not effectively solved the main drawbacks of fetal heart rate (FHR) surveillance methodology, which still presents inter- and intra-observer variability as well as uncertainty in the classification of unreassuring or risky FHR recordings. Given the clinical relevance of the interpretation of FHR traces, as well as the role of FHR as a marker of fetal wellbeing and autonomic nervous system development, many different approaches to the computerized processing and analysis of FHR patterns have been proposed in the literature. The objective of this review is to describe the techniques, methodologies, and algorithms proposed in this field so far, reporting their main achievements and discussing the value they brought to the scientific and clinical community. The review explores the two main approaches to the processing and analysis of FHR signals: traditional (or linear) methodologies, namely time- and frequency-domain analysis, and less conventional (or nonlinear) techniques. In this scenario, the emerging role of, and opportunities offered by, Artificial Intelligence tools, representing the future direction of EFM, are also discussed, with a specific focus on Artificial Neural Networks, whose application to the analysis of accelerations in FHR signals is examined in a case study conducted by the authors.
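As a small illustration of the frequency-domain branch the review covers, the sketch below estimates the power spectrum of a synthetic FHR trace with Welch's method and locates the dominant variability component; the 4 Hz sampling rate and the 0.05 Hz oscillation are illustrative choices, not values taken from the review.

```python
import numpy as np
from scipy.signal import welch

fs = 4.0                                    # common CTG sampling rate (Hz)
t = np.arange(0, 600, 1 / fs)               # 10 minutes of synthetic FHR
rng = np.random.default_rng(0)
# Baseline plus a 0.05 Hz oscillation standing in for slow FHR variability.
fhr = 140 + 3 * np.sin(2 * np.pi * 0.05 * t) + rng.standard_normal(t.size)

# Remove the mean so the DC baseline does not dominate the spectrum.
f, psd = welch(fhr - fhr.mean(), fs=fs, nperseg=1024)
dominant = f[np.argmax(psd)]                # frequency of the strongest component
```

In practice, band powers integrated over clinically defined frequency bands, rather than a single peak, are the usual frequency-domain FHR features.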

    Continuous Glucose Monitoring for the diagnosis of Gestational Diabetes Mellitus.

    Gestational Diabetes Mellitus (GDM) incidence and negative outcomes are increasing worldwide. The validity of the oral glucose tolerance test (OGTT) for GDM diagnosis remains contested. Continuous Glucose Monitoring (CGM) could represent a more acceptable and replicable test. The aim of this project was to assess CGM for GDM diagnosis. This PhD thesis is based on five projects: a systematic review of the diagnostic indicators of GDM, an online questionnaire to recruit women at high and low risk of GDM, a retrospective cohort study on the use of the Medtronic iPro2 CGM device for GDM diagnosis, a prospective cohort study on the use of the Abbott Freestyle Libre PRO 2 CGM, and a survey study on women's and healthcare providers' perceptions of both methods. CGM data were analysed as distribution parameters (mean, CV, SD, maximum value), variability parameters (MAGE and MODD) and time spent in the recommended range, then combined into a CGM Score of Variability (CGMSV). The systematic review included 174 full-text articles on blood, ultrasound, post-natal and amniotic fluid biomarkers. The ultrasound gestational diabetic score (UGDS) was the most promising biomarker for triangulation. In the GDM risk questionnaire (n=45), triangulation of a composite risk factor score (RFS) with CGMSV and OGTT results highlighted six possible OGTT misdiagnoses (discordant with RFS and CGMSV). In the Medtronic pilot study (n=73), GDM women (n=33) had significantly higher RFS and CGMSV. The triangulation analysis (n=60) suggested 12 probable misdiagnoses. In the Abbott pilot study (n=87), no significant difference in demographics or CGM data was found between NGT and GDM women, possibly due to the small GDM sample size (n=13). With triangulation, 11 OGTT results were potentially false. UGDS (n=22) was positive in only one woman, who was otherwise considered a true negative.
In the survey study, women reported significantly higher acceptability of CGM versus OGTT (n=70 and n=60, respectively), and 94% would recommend CGM for GDM diagnosis. HCPs (n=30) scored CGM acceptability significantly lower than women and expressed doubts about the correlation between CGM data and perinatal outcomes. CGM represents a more acceptable alternative to OGTT for GDM diagnosis. HCPs expressed doubt about CGM accuracy, and issues of establishing superiority to OGTT remain. Further research on larger cohorts of patients, with additional triangulation elements, is needed to confirm CGM acceptability and accuracy and refine its use.
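The distribution and variability parameters named above are straightforward to compute from a CGM trace. The sketch below uses a toy glucose series; the 3.5–7.8 mmol/L time-in-range window is the commonly cited pregnancy target, assumed here rather than taken from the thesis, and MAGE is omitted for brevity while MODD is shown on paired readings 24 h apart.

```python
import numpy as np

glucose = np.array([4.2, 5.1, 6.8, 8.4, 7.2, 5.9, 4.8, 6.1])  # mmol/L, toy trace

mean = glucose.mean()
sd = glucose.std(ddof=1)
cv = 100 * sd / mean                            # coefficient of variation, %
lo, hi = 3.5, 7.8                               # assumed pregnancy target range
tir = 100 * np.mean((glucose >= lo) & (glucose <= hi))  # time in range, %

# MODD: mean absolute difference between readings taken 24 hours apart.
day1 = np.array([5.0, 6.2, 7.1, 5.4])
day2 = np.array([5.3, 6.0, 7.5, 5.1])
modd = np.abs(day2 - day1).mean()
```

A composite score like the CGMSV would then combine such parameters into a single value per woman for triangulation against the RFS and OGTT result.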

    Performance Evaluation of Smart Decision Support Systems on Healthcare

    Medical activity demands responsibility not only in clinical knowledge and skill but also in the management of an enormous amount of information related to patient care. It is through the proper treatment of information that experts can consistently build a sound wellness policy. The primary objective in developing decision support systems (DSSs) is to provide information to specialists when and where it is needed. These systems provide information, models, and data-manipulation tools to help experts make better decisions in a variety of situations. Most of the challenges that smart DSSs face come from the great difficulty of dealing with large volumes of information, continuously generated by the most diverse types of devices and equipment and requiring high computational resources. This situation makes such systems liable to fail to retrieve information quickly enough for decision making. As a result of this adversity, information quality and the provision of an infrastructure capable of promoting integration and articulation among different health information systems (HIS) become promising research topics in the field of electronic health (e-health), and for this same reason they are addressed in this research. The work described in this thesis is motivated by the need to propose novel approaches to deal with problems inherent to the acquisition, cleaning, integration, and aggregation of data obtained from different sources in e-health environments, as well as their analysis. To ensure the success of data integration and analysis in e-health environments, it is essential that machine-learning (ML) algorithms ensure system reliability. However, in this type of environment a fully reliable scenario cannot be guaranteed, which makes smart DSSs susceptible to predictive failures that severely compromise overall system performance.
On the other hand, systems can also have their performance compromised by the information overload they must support. To address some of these problems, this thesis presents several proposals and studies on the impact of ML algorithms in the monitoring and management of hypertensive disorders related to high-risk pregnancy. The primary goal of the proposals presented in this thesis is to improve the overall performance of health information systems. In particular, ML-based methods are exploited to improve prediction accuracy and optimize the use of monitoring-device resources. It was demonstrated that this type of strategy and methodology contributes to a significant increase in the performance of smart DSSs, not only in precision but also in reducing the computational cost of the classification process. The observed results seek to contribute to advancing the state of the art in AI-based methods and strategies that aim to overcome challenges arising from the integration and performance of smart DSSs. With AI-based algorithms, it is possible to analyze a larger volume of complex data quickly and automatically and to focus on more accurate results, providing high-value predictions for better decision making in real time and without human intervention.