80 research outputs found
Feature extraction from ear-worn sensor data for gait analysis
Gait analysis plays a significant role in assessing human walking patterns. It is widely used in sports science to understand body mechanics, and it is also used to monitor gait abnormalities related to neurological disorders in patients. Traditional marker-based systems are well established for tracking gait parameters; however, they require long set-up times and are therefore difficult to apply to everyday real-time monitoring. There is ever-growing interest in developing portable devices, together with supporting software and novel algorithms, for gait pattern analysis. The aim of this research is to investigate novel gait pattern detection algorithms for accelerometer-based sensors. In particular, we used the e-AR sensor, an ear-worn sensor that registers body motion via its embedded 3-D accelerometer. Gait data were semantically annotated using a pressure mat as well as real-time video recording. Important time stamps within a gait cycle, which are essential for extracting meaningful gait parameters, were identified. Furthermore, an advanced signal-processing algorithm was applied to perform automatic feature extraction by signal decomposition and reconstruction. Analysis of real-world data has demonstrated the potential of an accelerometer-based sensor system and its ability to extract meaningful gait parameters.
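The abstract does not name the decomposition it uses, but a wavelet decomposition/reconstruction pass followed by peak picking is one plausible reading of "signal decomposition and reconstruction". The sketch below illustrates the idea in Python; the wavelet ('db4'), decomposition level, sampling rate, and the synthetic accelerometer trace are all assumptions for illustration.

```python
# Hypothetical sketch: smooth a vertical accelerometer channel by wavelet
# decomposition/reconstruction, then pick peaks as candidate gait events.
import numpy as np
import pywt
from scipy.signal import find_peaks

def reconstruct_low_frequency(signal, wavelet="db4", level=4):
    """Decompose, zero the detail bands, and reconstruct the smooth trend."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    smoothed = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    return pywt.waverec(smoothed, wavelet)[: len(signal)]

fs = 100.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
acc = np.sin(2 * np.pi * 1.8 * t) + 0.3 * np.random.randn(t.size)  # toy signal

smooth = reconstruct_low_frequency(acc)
# Candidate heel strikes: peaks at least ~0.4 s apart (a plausible step time).
peaks, _ = find_peaks(smooth, distance=int(0.4 * fs))
step_intervals = np.diff(peaks) / fs         # a basic gait parameter (seconds)
```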
Electrocardiography monitoring system and method
Systems and methods for electrocardiography monitoring use multiple capacitive sensors to obtain reliable measurements of a patient's electrophysiological information. Relative coupling strength and/or reliability is used to dynamically select which sensors to use to determine, in particular, an electrocardiogram of the patient.
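The patent summary does not disclose how coupling strength or reliability is quantified. As a rough illustration of the selection step only, the sketch below scores each capacitive channel with an assumed in-band power heuristic and keeps the best k channels per window; the metric is a stand-in, not the patented method.

```python
# Illustrative dynamic sensor selection, assuming a simple spectral
# quality score in place of the (undisclosed) coupling-strength metric.
import numpy as np
from scipy.signal import welch

def channel_quality(x, fs):
    """Fraction of spectral power in a rough ECG band (~0.5-40 Hz)."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 512))
    band = (f >= 0.5) & (f <= 40.0)
    return pxx[band].sum() / (pxx.sum() + 1e-12)

def select_sensors(window, fs, k=3):
    """window: (n_sensors, n_samples) array; return indices of the k best channels."""
    scores = np.array([channel_quality(ch, fs) for ch in window])
    return np.argsort(scores)[::-1][:k]

window = np.random.randn(8, 1024)   # stand-in for 8 capacitive channels
print(select_sensors(window, fs=256.0))
```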
Machine Learning for Benchmarking Critical Care Outcomes
Objectives: Enhancing critical care efficacy involves evaluating and improving system functioning. Benchmarking, a retrospective comparison of results against standards, aids risk-adjusted assessment and helps healthcare providers identify areas for improvement based on observed and predicted outcomes. The last two decades have seen the development of several models using machine learning (ML) for clinical outcome prediction. ML is a field of artificial intelligence focused on creating algorithms that enable computers to learn from data and make predictions or decisions based on it. This narrative review centers on key discoveries and outcomes to aid clinicians and researchers in selecting the optimal methodology for critical care benchmarking using ML.
Methods: We used PubMed to search the literature from 2003 to 2023 for predictive models using ML for mortality (592 articles), length of stay (143 articles), or mechanical ventilation (195 articles). We supplemented the PubMed search with Google Scholar to ensure relevant articles were included. Given the narrative style, papers in the cohort were manually curated for a comprehensive reader perspective.
Results: Our report presents comparative results for benchmarked outcomes and emphasizes advancements in feature types, preprocessing, model selection, and validation. It showcases instances where ML effectively tackled critical care outcome-prediction challenges, including nonlinear relationships, class imbalance, missing data, and documentation variability, leading to improved results.
Conclusions: Although ML has provided novel tools to improve the benchmarking of critical care outcomes, areas requiring further research include class imbalance, fairness, improved calibration, generalizability, and long-term validation of published models.
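Benchmarking against observed and predicted outcomes typically reduces to an observed-over-expected comparison such as the standardized mortality ratio (SMR). A minimal sketch follows; the predicted risks here are invented, and in practice they would come from a calibrated ML mortality model of the kind the review surveys.

```python
# Minimal sketch of risk-adjusted benchmarking via a standardized
# mortality ratio: observed deaths divided by the sum of predicted risks.
import numpy as np

observed = np.array([0, 1, 0, 0, 1, 0, 1, 0])                    # 1 = death
predicted = np.array([0.05, 0.60, 0.10, 0.08, 0.75, 0.15, 0.40, 0.12])

smr = observed.sum() / predicted.sum()
# SMR < 1: fewer deaths than expected for this case mix; SMR > 1: more.
# The comparison is only as trustworthy as the model's calibration,
# which is why calibration and validation recur as open issues above.
print(f"SMR = {smr:.2f}")
```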
Implementation of an automated early warning scoring system in a surgical ward: practical use and effects on patient outcomes
Introduction: Early warning scores (EWS) are being increasingly embedded in hospitals around the world because of their promise to reduce adverse events and improve patient outcomes. The aim of this study was to evaluate the clinical use of an automated modified EWS (MEWS) for patients after surgery.
Methods: This study conducted a retrospective before-and-after comparative analysis of non-automated and automated MEWS for patients admitted to the surgical high-dependency unit of a tertiary hospital. Operational outcomes included the number of recorded assessments of the individual MEWS elements, the number of complete MEWS assessments, and the adherence rate to related protocols. Clinical outcomes included hospital length of stay, in-hospital and 28-day mortality, and ICU readmission rate.
Results: Recordings in the electronic medical record from the control period contained 7929 assessments of MEWS elements, performed in 320 patients. Recordings from the intervention period contained 8781 assessments of MEWS elements in 273 patients, of which 3418 were performed with the automated EWS system. During the control period, 199 (2.5%) complete MEWS were recorded, versus 3991 (45.5%) during the intervention period. With the automated MEWS system, the percentage of missing assessments and the time until the next assessment for patients with a MEWS of 2 decreased significantly. Protocol adherence improved from 1.1% during the control period to 25.4% when the automated MEWS system was involved. There were no significant differences in clinical outcomes.
Conclusion: Implementation of an automated EWS system in a surgical high-dependency unit increases the number of complete MEWS assessments, registered vital signs, and adherence to the hospital EWS protocol. However, this positive effect did not translate into a significant decrease in mortality, hospital length of stay, or ICU readmissions. Future research and development on automated EWS systems should focus on data management and technology interoperability.
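For readers unfamiliar with MEWS, the sketch below computes a score from one widely cited scheme (Subbe et al., 2001). The study above used a locally modified MEWS whose exact cut-offs are not given here, and hospitals vary these tables, so treat the thresholds as illustrative rather than as the trial's protocol.

```python
# Illustrative MEWS computation following one common published table.
def mews(sbp, hr, rr, temp, avpu):
    """sbp mmHg, hr bpm, rr breaths/min, temp deg C, avpu in {'A','V','P','U'}."""
    score = 0
    # systolic blood pressure
    if sbp <= 70: score += 3
    elif sbp <= 80: score += 2
    elif sbp <= 100: score += 1
    elif sbp >= 200: score += 2
    # heart rate
    if hr < 40: score += 2
    elif hr <= 50: score += 1
    elif hr <= 100: pass
    elif hr <= 110: score += 1
    elif hr < 130: score += 2
    else: score += 3
    # respiratory rate
    if rr < 9: score += 2
    elif rr <= 14: pass
    elif rr <= 20: score += 1
    elif rr <= 29: score += 2
    else: score += 3
    # temperature
    if temp < 35.0 or temp >= 38.5: score += 2
    # consciousness (AVPU scale)
    score += {"A": 0, "V": 1, "P": 2, "U": 3}[avpu]
    return score

print(mews(sbp=95, hr=115, rr=22, temp=38.6, avpu="A"))  # -> 7
```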
GenHPF: General Healthcare Predictive Framework with Multi-task Multi-source Learning
Despite the remarkable progress in the development of predictive models for healthcare, applying these algorithms at scale has been challenging. Algorithms trained on a particular task, based on specific data formats available in a set of medical records, tend not to generalize well to other tasks or databases in which the data fields may differ. To address this challenge, we propose the General Healthcare Predictive Framework (GenHPF), which is applicable to any EHR with minimal preprocessing for multiple prediction tasks. GenHPF resolves heterogeneity in medical codes and schemas by converting EHRs into a hierarchical textual representation while incorporating as many features as possible. To evaluate the efficacy of GenHPF, we conduct multi-task learning experiments in single-source and multi-source settings on three publicly available EHR datasets with different schemas for 12 clinically meaningful prediction tasks. Our framework significantly outperforms baseline models that utilize domain knowledge in multi-source learning, improving average AUROC by 1.2 percentage points in pooled learning and 2.6 percentage points in transfer learning, while also showing comparable results when trained on a single EHR dataset. Furthermore, we demonstrate that self-supervised pretraining using multi-source datasets is effective when combined with GenHPF, resulting in a 0.6-percentage-point AUROC improvement compared to models without pretraining. By eliminating the need for preprocessing and feature engineering, we believe this work offers a solid framework for multi-task and multi-source learning that can be leveraged to speed up the scaling and use of predictive algorithms in healthcare.
Comment: Accepted by IEEE Journal of Biomedical and Health Informatics.
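The core idea of the textual representation can be sketched in a few lines: rather than mapping each EHR's codes into a fixed feature space, every event row is flattened into text that a sequence model can consume regardless of schema. The field names, table names, and separator token below are hypothetical; GenHPF's actual tokenization is specified in the paper.

```python
# Hypothetical sketch of schema-agnostic EHR serialization in the spirit
# of GenHPF: one event row becomes "<table> col value col value ..." text.
def serialize_event(table: str, event: dict) -> str:
    """Flatten one EHR row into a space-separated text snippet."""
    parts = [table]
    for col, val in event.items():
        parts.extend([col, str(val)])
    return " ".join(parts)

events = [
    ("labevents", {"itemid": "creatinine", "value": 1.4, "uom": "mg/dL"}),
    ("prescriptions", {"drug": "heparin", "dose": 5000, "uom": "units"}),
]
patient_text = " [SEP] ".join(serialize_event(t, e) for t, e in events)
# A text encoder (e.g., a Transformer) then consumes `patient_text`,
# so differing source schemas only change the words, not the pipeline.
print(patient_text)
```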
Neoadjuvant Chemoimmunotherapy for NSCLC: A Systematic Review and Meta-Analysis
IMPORTANCE: To date, no meta-analyses have comprehensively assessed the association of neoadjuvant chemoimmunotherapy with clinical outcomes in non-small cell lung cancer (NSCLC) in randomized and nonrandomized settings. In addition, there exists controversy concerning the efficacy of neoadjuvant chemoimmunotherapy for patients with NSCLC with programmed cell death 1 ligand 1 (PD-L1) levels less than 1%.
OBJECTIVE: To compare neoadjuvant chemoimmunotherapy with chemotherapy by adverse events and surgical, pathological, and efficacy outcomes using recently published randomized clinical trials and nonrandomized trials.
DATA SOURCES: MEDLINE and Embase were systematically searched from January 1, 2013, to October 25, 2023, for all clinical trials of neoadjuvant chemoimmunotherapy and chemotherapy that included at least 10 patients.
STUDY SELECTION: Observational studies and trials reporting the use of neoadjuvant radiotherapy, including chemoradiotherapy, molecular targeted therapy, or immunotherapy monotherapy, were excluded.
MAIN OUTCOMES AND MEASURES: Surgical, pathological, and efficacy end points and adverse events were pooled using a random-effects meta-analysis.
RESULTS: Among 43 eligible trials comprising 5431 patients (4020 males [74.0%]; median age range, 55-70 years), there were 8 randomized clinical trials with 3387 patients. For randomized clinical trials, pooled overall survival (hazard ratio, 0.65; 95% CI, 0.54-0.79; I2 = 0%), event-free survival (hazard ratio, 0.59; 95% CI, 0.52-0.67; I2 = 14.9%), major pathological response (risk ratio, 3.42; 95% CI, 2.83-4.15; I2 = 31.2%), and complete pathological response (risk ratio, 5.52; 95% CI, 4.25-7.15; I2 = 27.4%) favored neoadjuvant chemoimmunotherapy over neoadjuvant chemotherapy. For patients with baseline tumor PD-L1 levels less than 1%, there was a significant benefit in event-free survival for neoadjuvant chemoimmunotherapy compared with chemotherapy (hazard ratio, 0.74; 95% CI, 0.62-0.89; I2 = 0%).
CONCLUSIONS AND RELEVANCE: This study found that neoadjuvant chemoimmunotherapy was superior to neoadjuvant chemotherapy across surgical, pathological, and efficacy outcomes. These findings suggest that patients with resectable NSCLC with tumor PD-L1 levels less than 1% may derive an event-free survival benefit from neoadjuvant chemoimmunotherapy.
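The abstract states that end points were pooled with a random-effects meta-analysis but does not name the estimator; the DerSimonian-Laird method below is the classic choice and serves as a sketch of the pooling step. The per-study hazard ratios and variances are invented for illustration, not taken from the 43 trials.

```python
# Sketch of DerSimonian-Laird random-effects pooling of log hazard ratios.
import numpy as np

def dersimonian_laird(log_effects, variances):
    """Pool log-scale effects; return pooled HR, 95% CI bounds, and I^2 (%)."""
    y, v = np.asarray(log_effects), np.asarray(variances)
    w = 1.0 / v                                    # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)                # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                        # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return np.exp(pooled), np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se), i2

hr, lo, hi, i2 = dersimonian_laird(
    log_effects=np.log([0.62, 0.70, 0.55, 0.68]),  # made-up per-study HRs
    variances=[0.010, 0.015, 0.020, 0.012],        # variances of the log HRs
)
print(f"pooled HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.0f}%")
```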
Advocacy at the Eighth World Congress of Pediatric Cardiology and Cardiac Surgery
The Eighth World Congress of Pediatric Cardiology and Cardiac Surgery (WCPCCS) will be held in Washington DC, USA, from Saturday, 26 August, 2023 to Friday, 1 September, 2023, inclusive. The Eighth World Congress of Pediatric Cardiology and Cardiac Surgery will be the largest and most comprehensive scientific meeting dedicated to paediatric and congenital cardiac care ever held. At the time of the writing of this manuscript, the Eighth World Congress of Pediatric Cardiology and Cardiac Surgery has 5,037 registered attendees (and rising) from 117 countries, a truly diverse and international faculty of over 925 individuals from 89 countries, over 2,000 individual abstracts and poster presenters from 101 countries, and a Best Abstract Competition featuring 153 oral abstracts from 34 countries. For information about the Eighth World Congress of Pediatric Cardiology and Cardiac Surgery, please visit the following website: www.WCPCCS2023.org. The purpose of this manuscript is to review the activities related to global health and advocacy that will occur at the Eighth World Congress of Pediatric Cardiology and Cardiac Surgery. Acknowledging the need for urgent change, we wanted to take the opportunity to bring a common voice to the global community and issue the Washington DC WCPCCS Call to Action on Addressing the Global Burden of Pediatric and Congenital Heart Diseases. A copy of this Washington DC WCPCCS Call to Action is provided in the Appendix of this manuscript. This Washington DC WCPCCS Call to Action is an initiative aimed at increasing awareness of the global burden, promoting the development of sustainable care systems, and improving access to high-quality and equitable healthcare for children with heart disease as well as adults with congenital heart disease worldwide.
Learning from sonar data for the classification of underwater seabeds
The increased use of sonar surveys for both industrial and leisure activities has motivated research into cost-effective, automated processes for seabed classification. Seabed classification is essential for many fields, including dredging, environmental studies, fisheries research, pipeline and cable route surveys, marine archaeology, and autonomous underwater vehicles. Advances in both sonar technology and sonar data storage have led to large quantities of sonar data being collected per survey. The challenge, however, is to derive relevant features that can summarise these large amounts of data and provide discrimination between the several seabed types present in each survey.
The main aim of this work is to classify sidescan bathymetric datasets. However, in most sidescan bathymetric surveys, only a few ground-truthed areas (if any) are available. Since sidescan 'ground-truthed' areas were also provided for this work, they were used to test feature extraction, selection, and classification algorithms. Backscattering amplitude, after using bathymetric data to correct for variations, did not provide enough discrimination between sediment classes in this work, which led to the investigation of other features. The variation of backscattering amplitude at different scales corresponds to variations in both micro-bathymetry and large-scale bathymetry. A method that can derive multiscale features from signals was needed, and the wavelet method proved an efficient way of doing so. Wavelets are used for feature extraction in 1-D sidescan bathymetry survey data, and both the feature selection and classification stages are automated. The method is tested on areas of known types, and in general the features show good correlation with sediment types in both types of survey.
The main disadvantage of this method, however, is that signal features are calculated per swathe (or received signal). Thus, sediment boundaries within the same swathe are not detected. To solve this problem, information present in consecutive pings of data can be used, leading to 2-D feature extraction.
Several textural classification methods are investigated for the segmentation of sidescan sonar images. These include 2-D wavelets and Gabor filters. The effects of filter orientation, filter scale, and window size are observed in both cases and validated on given sonar images.
For sidescan bathymetric datasets, a novel method of classification using both sidescan images and depth maps is investigated. Backscattering amplitude and bathymetry images are both used for feature extraction. Features include amplitude-dependent features, textural features, and bathymetric-variation features. The method makes use of grab samples available in given areas of the survey for training the classifiers. Alternatively, clustering techniques are used to group the data. The results of applying the method to sidescan bathymetric surveys correlate with the available grab samples as well as the user-classified areas.
An automatic method for sidescan bathymetric classification offers a cost-effective approach to classifying large areas of seabed with fewer grab samples. This work sheds light on areas of feature extraction, selection, and classification of sonar data.
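In the spirit of the Gabor-filter experiments described above, the sketch below builds a small filter bank, convolves an image patch, and summarizes each response by its mean magnitude, yielding one texture feature per (frequency, orientation) pair. The scales and orientations are assumed values, not those tuned in the thesis.

```python
# Illustrative Gabor texture features for a sidescan image window.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma=4.0, size=31):
    """Real-valued Gabor kernel; `freq` is spatial frequency in cycles/pixel."""
    half = size // 2
    y, x = np.mgrid[-half : half + 1, -half : half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def texture_features(patch, freqs=(0.1, 0.2), thetas=(0, np.pi / 4, np.pi / 2)):
    """Mean response magnitude per (frequency, orientation) filter."""
    feats = []
    for f in freqs:
        for th in thetas:
            resp = fftconvolve(patch, gabor_kernel(f, th), mode="same")
            feats.append(np.abs(resp).mean())
    return np.array(feats)

patch = np.random.rand(64, 64)   # stand-in for a sidescan image window
print(texture_features(patch))   # 6 texture features to feed a classifier
```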