49 research outputs found

    Comparative Analysis of Electrodermal Activity Decomposition Methods in Emotion Detection Using Machine Learning

    Electrodermal activity (EDA) reflects sympathetic nervous system activity through sweating-related changes in skin conductance. Decomposition analysis is used to deconvolve EDA into slow-varying tonic and fast-varying phasic activity. In this study, we used machine learning models to compare the performance of two EDA decomposition algorithms in detecting emotions such as amusement, boredom, relaxation, and fear. The EDA data considered in this study were obtained from the publicly available Continuously Annotated Signals of Emotion (CASE) dataset. Initially, we pre-processed and deconvolved the EDA data into tonic and phasic components using the cvxEDA and BayesianEDA decomposition methods. Further, 12 time-domain features were extracted from the phasic component of the EDA data. Finally, we applied machine learning algorithms such as logistic regression (LR) and support vector machine (SVM) to evaluate the performance of each decomposition method. Our results imply that the BayesianEDA decomposition method outperforms cvxEDA.
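The abstract does not enumerate the 12 time-domain features, so the sketch below uses a few common, hypothetical choices (mean, standard deviation, maximum, mean absolute first derivative, and a simple peak count as a proxy for skin-conductance responses) computed on a synthetic phasic trace:

```python
import statistics
from math import exp

def phasic_time_features(phasic, fs=4.0):
    """Illustrative time-domain features of a phasic EDA component.
    These are common choices, not the paper's actual feature set."""
    d1 = [(b - a) * fs for a, b in zip(phasic, phasic[1:])]  # first derivative
    n_peaks = sum(
        1 for i in range(1, len(phasic) - 1)
        if phasic[i - 1] < phasic[i] > phasic[i + 1]  # local maxima as a crude
    )                                                 # proxy for SCR events
    return {
        "mean": statistics.fmean(phasic),
        "std": statistics.pstdev(phasic),
        "max": max(phasic),
        "mean_abs_d1": statistics.fmean(abs(x) for x in d1),
        "n_peaks": n_peaks,
    }

# Synthetic phasic trace sampled at 4 Hz: two skin-conductance-response-like bumps.
t = [i * 0.25 for i in range(80)]
phasic = [exp(-(x - 5) ** 2) + 0.5 * exp(-(x - 12) ** 2) for x in t]
feats = phasic_time_features(phasic)
```

In a full pipeline, such feature vectors (one per EDA segment) would be the inputs to the LR and SVM classifiers.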

    A protocol for a systematic review of electronic early warning/track-and-trigger systems (EW/TTS) to predict clinical deterioration: Focus on automated features, technologies, and algorithms

    Background This is a systematic review protocol to identify automated features, applied technologies, and algorithms in electronic early warning/track-and-trigger systems (EW/TTS) developed to predict clinical deterioration (CD). Methodology This study will be conducted using the PubMed, Scopus, and Web of Science databases to evaluate EW/TTS in terms of their automated features, technologies, and algorithms. To this end, we will include any English-language articles reporting an EW/TTS, without time limitation. Retrieved records will be independently screened by two authors, and relevant data will be extracted from the included studies and abstracted for further analysis. The included articles will be evaluated independently by two researchers using the JBI critical appraisal checklist. Discussion This study is an effort to catalogue the automated features available in electronic EW/TTS and to shed light on the applied technologies, the level of automation, and the algorithms used, in order to smooth the road toward fully automated EW/TTS as a potential solution for preventing CD and its adverse consequences. Trial registration Systematic review registration: PROSPERO CRD42022334988.

    FAIR4Health: Findable, Accessible, Interoperable and Reusable data to foster Health Research

    Due to the nature of health data, its sharing and reuse for research are limited by ethical, legal, and technical barriers. The FAIR4Health project facilitated and promoted the application of the FAIR principles to health research data derived from publicly funded health research initiatives, to make them Findable, Accessible, Interoperable, and Reusable (FAIR). To confirm the feasibility of the FAIR4Health solution, we performed two pathfinder case studies that ran federated machine learning algorithms on FAIRified datasets from five health research organizations. The case studies demonstrated the potential impact of the FAIR4Health solution on health outcomes and social care research. Finally, we promoted the sharing and reuse of the FAIRified data within the European Union health research community, defining an effective EU-wide strategy for the use of the FAIR principles in health research and preparing the ground for a roadmap for health research institutions. This scientific report presents a general overview of the FAIR4Health solution, from the design of the FAIRification workflow that translates raw data/metadata into FAIR data/metadata in the health research domain, to the performance of the FAIR4Health demonstrators. This research was financially supported by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 824666 (project FAIR4Health). It was also co-supported by the Carlos III National Institute of Health, through the IMPaCT Data project (code IMP/00019) and through the Platform for Dynamization and Innovation of the Spanish National Health System industrial capacities and their effective transfer to the productive sector (code PT20/00088), both co-funded by the European Regional Development Fund (FEDER), ‘A way of making Europe’. Peer reviewed.

    Search for Eccentric Black Hole Coalescences during the Third Observing Run of LIGO and Virgo

    Despite the growing number of confident binary black hole coalescences observed through gravitational waves so far, the astrophysical origin of these binaries remains uncertain. Orbital eccentricity is one of the clearest tracers of binary formation channels. Identifying binary eccentricity, however, remains challenging due to the limited availability of gravitational waveforms that include effects of eccentricity. Here, we present observational results for a waveform-independent search sensitive to eccentric black hole coalescences, covering the third observing run (O3) of the LIGO and Virgo detectors. We identified no new high-significance candidates beyond those already found by searches focusing on quasi-circular binaries. We determine the sensitivity of our search to high-mass (total mass M > 70 M⊙) binaries covering eccentricities up to 0.3 at 15 Hz orbital frequency, and use this to compare model predictions to search results. Assuming all detections are indeed quasi-circular, for our fiducial population model we place an upper limit on the merger rate density of high-mass binaries with eccentricities 0 < e ≀ 0.3 of 0.33 Gpc⁻³ yr⁻č at the 90% confidence level.
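For a search that detects zero eccentric candidates, a 90% merger-rate upper limit of this kind typically follows the zero-event Poisson construction; as a hedged sketch (the collaboration's exact sensitivity estimate and statistical method may differ):

```latex
% Zero-detection Poisson upper limit at 90% confidence, given a
% population-averaged sensitive time-volume <VT> for the target binaries:
R_{90\%} = \frac{-\ln(1 - 0.9)}{\langle VT \rangle}
         \approx \frac{2.303}{\langle VT \rangle}
```

A larger surveyed time-volume therefore directly translates into a tighter (smaller) rate upper limit.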

    Open data from the third observing run of LIGO, Virgo, KAGRA and GEO

    The global network of gravitational-wave observatories now includes five detectors, namely LIGO Hanford, LIGO Livingston, Virgo, KAGRA, and GEO 600. These detectors collected data during their third observing run, O3, composed of three phases: O3a starting in April of 2019 and lasting six months, O3b starting in November of 2019 and lasting five months, and O3GK starting in April of 2020 and lasting two weeks. In this paper we describe these data and various other science products that can be freely accessed through the Gravitational Wave Open Science Center at https://gwosc.org. The main dataset, consisting of the gravitational-wave strain time series that contains the astrophysical signals, is released together with supporting data useful for their analysis, as well as documentation, tutorials, and analysis software packages.

    A data driven learning approach for the assessment of data quality

    Background Data quality assessment is important but complex and task dependent. Identifying suitable measurement methods and reference ranges for assessing their results is challenging. Manually inspecting measurement results, as well as current data-driven approaches for learning which results indicate data quality issues, have considerable limitations, e.g. in identifying task-dependent thresholds for measurement results that indicate data quality issues. Objectives To explore the applicability and potential benefits of a data-driven approach to learn task-dependent knowledge about suitable measurement methods and the assessment of their results. Such knowledge could be useful for others to determine whether a local data stock is suitable for a given task. Methods We started by creating artificial data with previously defined data quality issues and applied a set of generic measurement methods to these data (e.g. a method to count the number of values in a certain variable, or their mean value). We trained decision trees on the exported measurement methods' results and corresponding outcome data (data indicating the data's suitability for a use case). For evaluation, we derived rules for potential measurement methods and reference values from the decision trees and compared them regarding their coverage of the true data quality issues artificially created in the dataset. Three researchers independently derived these rules: one with knowledge of the present data quality issues and two without. Results Our self-trained decision trees were able to indicate rules for 12 of 19 previously defined data quality issues. The learned knowledge about measurement methods and their assessment was complementary to manual interpretation of the measurement methods' results. Conclusions Our data-driven approach derives sensible knowledge for task-dependent data quality assessment and complements other current approaches. Based on labeled measurement methods' results as training data, our approach successfully suggested applicable rules for checking data quality characteristics that determine whether a dataset is suitable for a given task.
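The core idea, learning a reference range for a measurement method's result from suitability labels, can be illustrated with a one-level decision tree (a threshold rule). Everything below is a hypothetical miniature: the measurement (fraction of non-missing values per dataset), the labels, and the learned threshold are illustrative, not the paper's data:

```python
# Miniature of the paper's idea: learn a threshold ("reference value") for one
# measurement method's result from datasets labeled suitable/unsuitable.
def learn_threshold_rule(results, suitable):
    """Return the threshold t such that the rule "result >= t predicts suitable"
    classifies the most training examples correctly (a one-level decision tree)."""
    best_thr, best_correct = None, -1
    for thr in sorted(set(results)):
        correct = sum((r >= thr) == s for r, s in zip(results, suitable))
        if correct > best_correct:
            best_thr, best_correct = thr, correct
    return best_thr

# Hypothetical measurement: completeness (fraction of non-missing values).
completeness = [0.99, 0.95, 0.90, 0.60, 0.40, 0.98]
is_suitable  = [True, True, True, False, False, True]
thr = learn_threshold_rule(completeness, is_suitable)
```

A real implementation would train full decision trees over many measurement methods at once; the derived rules then serve as transferable knowledge for judging whether another data stock fits the task.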

    Automatic Detection of Atrial Fibrillation in ECG Using Co-Occurrence Patterns of Dynamic Symbol Assignment and Machine Learning

    Early detection of atrial fibrillation from the electrocardiogram (ECG) plays a vital role in the timely prevention and diagnosis of cardiovascular diseases. Various algorithms have been proposed; however, they fall short in handling varied-length signals, morphological transitions, and abnormalities over long-term recordings. We propose dynamic symbolic assignment (DSA) to differentiate normal sinus rhythm (SR) from paroxysmal atrial fibrillation (PAF). We use ECG signals and their interbeat (RR) intervals from two public databases, namely the AF Prediction Challenge Database (AFPDB) and the AF Termination Challenge Database (AFTDB). We transform RR intervals into a symbolic representation and compute co-occurrence matrices. The DSA feature is extracted for varied symbol lengths V and word sizes W, and fed to five machine learning algorithms for classification. We test five hypotheses: (i) DSA captures the dynamics of the series, (ii) DSA is a reliable technique across databases, (iii) optimal parameters improve DSA's performance, (iv) DSA is consistent for variable signal lengths, and (v) DSA supports cross-data analysis. Our method captures the transition patterns of the RR intervals. The DSA feature exhibits a statistically significant difference between SR and PAF conditions (p < 0.005). The DSA feature with W = 3 and V = 3 yields maximum performance. In terms of F-measure (F), the rotation forest and ensemble learning classifiers are the most accurate for AFPDB (F = 94.6%) and AFTDB (F = 99.8%), respectively. Our method is effective for short-length signals and supports cross-data analysis. DSA is capable of capturing the dynamics of varied-length ECG signals. In particular, the optimal-parameter DSA feature combined with ensemble learning could help detect PAF in long-term ECG signals. Our method maps time series into a symbolic representation and identifies abnormalities in noisy, varied-length, and pathological ECG signals.
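The symbolization and co-occurrence steps can be sketched as follows. The parameter names V (number of symbols) and W (word size) follow the abstract, but the uniform binning of RR intervals is an assumption, since the paper's exact assignment scheme is not given there:

```python
# Hedged sketch of symbolic assignment and co-occurrence counting for RR intervals.
def symbolize(rr, V=3):
    """Map RR intervals (seconds) to integer symbols 0..V-1 by uniform binning
    over the observed range (an assumed, illustrative scheme)."""
    lo, hi = min(rr), max(rr)
    width = (hi - lo) / V or 1.0          # guard against a constant series
    return [min(int((x - lo) / width), V - 1) for x in rr]

def cooccurrence(symbols, V=3):
    """Count transitions between consecutive symbols (word size W = 2 here)."""
    C = [[0] * V for _ in range(V)]
    for a, b in zip(symbols, symbols[1:]):
        C[a][b] += 1
    return C

# Toy RR-interval series (seconds); a real pipeline would derive these from ECG.
rr = [0.80, 0.82, 0.60, 1.05, 0.78, 0.95]
C = cooccurrence(symbolize(rr), V=3)
```

Features summarizing such a matrix (e.g. entropy or energy of the transition counts) would then feed the classifiers; regular rhythms and erratic AF-like rhythms populate the matrix differently.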

    Ultra-performance liquid chromatography determination of related compounds of molindone in drug substances

    Effective chromatographic separation was achieved on a phenyl-hexyl stationary phase (50 × 2.1 mm, 1.9 ”m particles) with an economical and straightforward mobile phase combination delivered in isocratic mode at a flow rate of 0.6 mL/min, with detection at 254 nm, using an ultra-performance liquid chromatography (UPLC) system. In the developed method, the resolution between molindone and its related compounds was greater than 2.0. Regression analysis shows an rÂČ (coefficient of determination) value greater than 0.999 for molindone and its associated compounds. The method could detect related compounds of molindone at levels below 0.009% with respect to a test concentration of 500 ”g/mL for a 2.0 ”L injection volume. The method showed good, consistent recoveries for related compounds (90-110%). The test solution was found to be stable in the diluent for 48 hours. The drug was subjected to stress conditions, and the mass balance was found to be close to 99.3%.
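The detection level can be translated into absolute terms. The concentration and injection volume below come from the abstract; the derived on-column mass is our own back-of-envelope figure, not stated there:

```python
# Back-of-envelope conversion of the reported detection level.
test_conc_ug_per_ml = 500.0   # test concentration (from the abstract)
detection_pct = 0.009         # detection level, % of test concentration
injection_ul = 2.0            # injection volume

# Detection level as an absolute concentration (”g/mL).
detection_conc = test_conc_ug_per_ml * detection_pct / 100.0

# Mass of impurity on column at that level (”L -> mL, then ”g -> ng).
on_column_ng = detection_conc * (injection_ul / 1000.0) * 1000.0
```

So 0.009% of a 500 ”g/mL test solution corresponds to 0.045 ”g/mL, i.e. roughly 0.09 ng of a related compound injected on column.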
