206 research outputs found

    A survey on online active learning

    Online active learning is a machine learning paradigm that aims to select the most informative data points to label from a data stream. The problem of minimizing the cost of collecting labeled observations has gained considerable attention in recent years, particularly in real-world applications where data is only available in unlabeled form. Annotating each observation can be time-consuming and costly, making it difficult to obtain large amounts of labeled data. To overcome this issue, many active learning strategies have been proposed over the last decades, aiming to select the most informative observations for labeling and thereby improve the performance of machine learning models. These approaches can be broadly divided into two categories: static pool-based and stream-based active learning. Pool-based active learning involves selecting a subset of observations from a closed pool of unlabeled data and has been the focus of many surveys and literature reviews. However, the growing availability of data streams has led to an increase in the number of approaches that focus on online active learning, which involves continuously selecting and labeling observations as they arrive in a stream. This work provides an overview of the most recently proposed approaches for selecting the most informative observations from data streams in the context of online active learning. We review the various techniques that have been proposed and discuss their strengths and limitations, as well as the challenges and opportunities in this area of research, with the aim of giving a comprehensive and up-to-date overview of the field and highlighting directions for future work.
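    As a minimal illustration of the stream-based setting the survey covers, the sketch below implements one common strategy, uncertainty sampling with a fixed confidence threshold, on a synthetic binary stream; the model, threshold and data here are hypothetical choices and are not drawn from the survey itself.

```python
# Minimal sketch of stream-based (online) active learning via uncertainty sampling.
# Synthetic setup: labels are requested only when the current model is uncertain
# about the incoming observation.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])
threshold = 0.7          # query a label when the max class probability falls below this
labels_requested = 0

# Warm-start on a handful of labelled points so predict_proba is available.
X_init = rng.normal(size=(10, 2))
y_init = (X_init[:, 0] + X_init[:, 1] > 0).astype(int)
model.partial_fit(X_init, y_init, classes=classes)

for t in range(1000):
    x = rng.normal(size=(1, 2))                # next observation from the stream
    confidence = model.predict_proba(x).max()
    if confidence < threshold:                 # informative point: ask the oracle
        y = int(x[0, 0] + x[0, 1] > 0)         # oracle label (synthetic ground truth)
        model.partial_fit(x, [y])
        labels_requested += 1

print(f"labels requested for 1000 stream points: {labels_requested}")
```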

    Development of a Novel Dataset and Tools for Non-Invasive Fetal Electrocardiography Research

    This PhD thesis presents the development of a novel open multi-modal dataset for advanced studies on fetal cardiological assessment, along with a set of signal processing tools for its exploitation. The Non-Invasive Fetal Electrocardiography (ECG) Analysis (NInFEA) dataset features multi-channel electrophysiological recordings characterized by high sampling frequency and digital resolution, a maternal respiration signal, synchronized fetal trans-abdominal pulsed-wave Doppler (PWD) recordings and clinical annotations provided by expert clinicians at the time of signal collection. To the best of our knowledge, no similar dataset is available. The signal processing tools target both the PWD and the non-invasive fetal ECG, exploiting the recorded dataset. Regarding the former, the study focuses on the processing required to prepare the signal for the automatic measurement of relevant morphological features already adopted in clinical practice for cardiac assessment. To this aim, a relevant step is the automatic identification of the complete and measurable cardiac cycles in the PWD videos: a rigorous methodology was deployed for the analysis of the different processing steps involved in the automatic delineation of the PWD envelope, followed by the implementation of different approaches for the supervised classification of the cardiac cycles, discriminating between complete and measurable cycles and malformed or incomplete ones. Finally, preliminary measurement algorithms were also developed to extract clinically relevant parameters from the PWD. Regarding the fetal ECG, the thesis concentrates on a systematic analysis of the performance of adaptive filters for non-invasive fetal ECG extraction, identified as the reference tool throughout the thesis. Two further studies are then reported: one on the wavelet-based denoising of the extracted fetal ECG and another on fetal ECG quality assessment from the analysis of the raw abdominal recordings. Overall, the thesis represents an important milestone in the field, promoting the open-data approach and introducing automated analysis tools that could be easily integrated into future medical devices.
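    As a rough illustration of the adaptive-filtering approach the thesis takes as its reference tool, the sketch below runs a basic LMS filter on synthetic maternal and fetal signals; the signal model, filter length and step size are illustrative assumptions and do not reproduce the NInFEA processing pipeline.

```python
# Minimal LMS adaptive-filter sketch for non-invasive fetal ECG extraction.
# Synthetic illustration: the abdominal lead is modelled as a delayed, scaled
# maternal ECG plus a weak fetal component; the thoracic lead is the reference
# input whose maternal contribution is adaptively cancelled.
import numpy as np

fs, dur = 500, 10                            # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)

def synth_ecg(hr_bpm, amp):
    """Crude periodic spike train standing in for an ECG (illustrative only)."""
    period = int(fs * 60 / hr_bpm)
    sig = np.zeros_like(t)
    sig[::period] = amp
    return np.convolve(sig, np.hanning(25), mode="same")

maternal = synth_ecg(80, 1.0)                # reference: maternal thoracic ECG
fetal = synth_ecg(140, 0.2)                  # target: weaker, faster fetal ECG
abdominal = 0.7 * np.roll(maternal, 3) + fetal + 0.01 * np.random.randn(len(t))

# LMS: adapt FIR weights so the filtered reference matches the maternal part of
# the abdominal signal; the residual (error) approximates the fetal ECG.
L, mu = 16, 0.05
w = np.zeros(L)
fetal_est = np.zeros(len(t))
for n in range(L, len(t)):
    x = maternal[n - L:n][::-1]              # reference taps (most recent first)
    e = abdominal[n] - w @ x                 # residual = fetal estimate
    w += mu * e * x                          # LMS weight update
    fetal_est[n] = e

print("correlation with true fetal component:",
      np.corrcoef(fetal_est[L:], fetal[L:])[0, 1].round(3))
```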

    Multi-modal and multi-model interrogation of large-scale functional brain networks

    Existing whole-brain models are generally tailored to the modelling of a particular data modality (e.g., fMRI or MEG/EEG). We propose that, despite the differing aspects of neural activity each modality captures, they originate from shared network dynamics. Building on the universal principles of self-organising delay-coupled nonlinear systems, we aim to link distinct features of brain activity, captured across modalities, to the dynamics unfolding on a macroscopic structural connectome. To jointly predict connectivity, spatiotemporal and transient features of distinct signal modalities, we consider two large-scale models, the Stuart-Landau (SL) and Wilson-Cowan (WC) models, which generate short-lived 40 Hz oscillations with varying levels of realism. To this end, we measure features of functional connectivity and metastable oscillatory modes (MOMs) in fMRI and MEG signals and compare them against simulated data. We show that both models can represent MEG functional connectivity (FC) and functional connectivity dynamics (FCD) and can generate MOMs to a comparable degree. This is achieved by adjusting the global coupling and mean conduction time delay and, in the WC model, through the inclusion of balance between excitation and inhibition. For both models, the omission of delays dramatically decreased performance. For fMRI, the SL model performed worse for FCD and MOMs, highlighting the importance of balanced dynamics for the emergence of spatiotemporal and transient patterns of ultra-slow dynamics. Notably, optimal working points varied across modalities, and no model was able to achieve a correlation with empirical FC higher than 0.4 across modalities for the same set of parameters. Nonetheless, both displayed the emergence of FC patterns that extended beyond the constraints of the anatomical structure. Finally, we show that both models can generate MOMs with empirical-like properties such as size (number of brain regions engaging in a mode) and duration (continuous time interval during which a mode appears). Our results demonstrate the emergence of static and dynamic properties of neural activity at different timescales from networks of delay-coupled oscillators at 40 Hz. Given the higher dependence of simulated FC on the underlying structural connectivity, we suggest that mesoscale heterogeneities in neural circuitry may be critical for the emergence of parallel cross-modal functional networks and should be accounted for in future modelling endeavours.
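    For readers unfamiliar with this model class, the sketch below simulates a small network of delay-coupled Stuart-Landau oscillators at 40 Hz and computes a toy functional connectivity matrix from the simulated signals; the connectome, coupling strength and delay are placeholders rather than the parameters fitted in this work.

```python
# Sketch of a network of delay-coupled Stuart-Landau oscillators, the kind of
# large-scale model discussed above. The connectome, global coupling K and mean
# conduction delay are illustrative placeholders, not fitted values.
import numpy as np

rng = np.random.default_rng(1)
N = 10                                       # toy "brain regions"
C = rng.random((N, N)); np.fill_diagonal(C, 0)
C /= C.sum(axis=1, keepdims=True)            # row-normalised structural weights

dt = 1e-4                                    # integration step (s)
omega = 2 * np.pi * 40.0                     # intrinsic 40 Hz frequency, per the abstract
a = -5.0                                     # bifurcation parameter (damped oscillations)
K = 50.0                                     # global coupling (illustrative)
D = int(0.010 / dt)                          # 10 ms mean delay, in steps (illustrative)

steps = 20000
z = np.zeros((steps, N), dtype=complex)
z[:D + 1] = 0.1 * (rng.standard_normal((D + 1, N))
                   + 1j * rng.standard_normal((D + 1, N)))

for n in range(D, steps - 1):
    coupling = C @ (z[n - D] - z[n])         # delayed diffusive coupling
    dz = z[n] * (a + 1j * omega - np.abs(z[n]) ** 2) + K * coupling
    z[n + 1] = z[n] + dt * dz + np.sqrt(dt) * 0.01 * (
        rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Toy "functional connectivity": correlation of the real parts of the signals.
fc = np.corrcoef(z[D:].real.T)
print("mean off-diagonal simulated FC:", fc[~np.eye(N, dtype=bool)].mean().round(3))
```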

    Computational Intelligence in Electromyography Analysis

    Electromyography (EMG) is a technique for evaluating and recording the electrical activity produced by skeletal muscles. EMG may be used clinically for the diagnosis of neuromuscular problems and for assessing biomechanical and motor control deficits and other functional disorders. Furthermore, it can be used as a control signal for interfacing with orthotic and/or prosthetic devices or other rehabilitation aids. This book presents an updated overview of signal processing applications and recent developments in EMG from a number of diverse aspects and various applications in clinical and experimental research. It provides readers with a detailed introduction to EMG signal processing techniques and applications, while presenting several new results and explanations of existing algorithms. The book is organized into 18 chapters, covering current theoretical and practical approaches to EMG research.
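    As a concrete taste of the kind of processing covered in such chapters, the sketch below applies a standard band-pass filter and RMS envelope to a synthetic surface-EMG signal; the signal and all parameter choices are illustrative only and are not taken from the book.

```python
# Sketch of a routine EMG pre-processing step: band-pass filtering of the raw
# signal followed by a moving RMS envelope, a common input for both clinical
# assessment and prosthesis control. The signal here is synthetic.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000                                    # sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)
burst = (np.sin(2 * np.pi * 0.5 * t) > 0).astype(float)   # on/off "contraction"
emg = burst * np.random.randn(len(t)) * 0.5 + 0.02 * np.random.randn(len(t))

# 20-450 Hz band-pass, the range where most surface-EMG power lies.
b, a = butter(4, [20, 450], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, emg)

# Moving RMS envelope over a 100 ms window.
win = int(0.1 * fs)
rms = np.sqrt(np.convolve(filtered ** 2, np.ones(win) / win, mode="same"))

print("mean envelope during contraction:", rms[burst > 0].mean().round(3))
print("mean envelope at rest           :", rms[burst == 0].mean().round(3))
```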

    The blessings of explainable AI in operations & maintenance of wind turbines

    Wind turbines play an integral role in generating clean energy, but regularly suffer from operational inconsistencies and failures, leading to unexpected downtimes and significant Operations & Maintenance (O&M) costs. Condition-Based Monitoring (CBM) has been utilised in the past to monitor operational inconsistencies in turbines by applying signal processing techniques to vibration data. The last decade has witnessed growing interest in leveraging Supervisory Control & Data Acquisition (SCADA) data from turbine sensors for CBM. Machine Learning (ML) techniques have been utilised to predict incipient faults in turbines and to forecast vital operational parameters with high accuracy by leveraging SCADA data and alarm logs. More recently, Deep Learning (DL) methods have outperformed conventional ML techniques, particularly for anomaly prediction. Despite demonstrating immense promise in transitioning to Artificial Intelligence (AI), such models are generally black boxes that cannot provide rationales behind their predictions, hampering the ability of turbine operators to rely on automated decision making. We aim to help combat this challenge by providing a novel perspective on Explainable AI (XAI) for trustworthy decision support. This thesis revolves around three key strands of XAI, namely DL, Natural Language Generation (NLG) and Knowledge Graphs (KGs), which are investigated using data from an operational turbine. We leverage DL and NLG to predict incipient faults and alarm events in the turbine in natural language, as well as to generate human-intelligible O&M strategies that assist engineers in fixing or averting the faults. We also propose specialised DL models which can predict causal relationships in SCADA features and quantify the importance of vital parameters leading to failures. The thesis culminates with an interactive Question-Answering (QA) system for automated reasoning that leverages multimodal domain-specific information from a KG, enabling engineers to retrieve O&M strategies with natural language questions. By helping make turbines more reliable, we envisage wider adoption of wind energy sources towards tackling climate change.
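    As a simplified stand-in for the importance analysis described above, the sketch below quantifies which SCADA-style parameters drive a fault classifier using permutation importance; the feature names, data and model are hypothetical and this is not the specialised DL architecture proposed in the thesis.

```python
# Hypothetical sketch: rank synthetic SCADA parameters by their contribution to a
# fault/no-fault prediction using permutation importance.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
X = pd.DataFrame({
    "wind_speed":     rng.normal(8, 3, n),
    "rotor_speed":    rng.normal(12, 2, n),
    "gearbox_temp":   rng.normal(65, 8, n),
    "generator_temp": rng.normal(70, 10, n),
    "pitch_angle":    rng.normal(2, 1, n),
})
# Synthetic rule: faults are driven mainly by gearbox temperature and rotor speed.
y = ((X["gearbox_temp"] > 75) | (X["rotor_speed"] > 15)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: accuracy drop when each feature is shuffled on held-out data.
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean),
                        key=lambda p: -p[1]):
    print(f"{name:15s} {imp:.3f}")
```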

    Supporting Quantitative Visual Analysis in Medicine and Biology in the Presence of Data Uncertainty


    Fingerprint-based biometric recognition allied to fuzzy-neural feature classification.

    The research investigates fingerprint recognition as one of the most reliable biometric identification methods. Automatic identification of humans based on fingerprints requires the input fingerprint to be matched against a large number of fingerprints in a database. To reduce the search time and computational complexity, it is desirable to classify the database of fingerprints in an accurate and consistent manner so that the input fingerprint is matched only against a subset of the fingerprints in the database. In this regard, the research addressed fingerprint classification, with the goal of improving the accuracy and speed of existing automatic fingerprint identification algorithms. The investigation is based on analysis of fingerprint characteristics and feature classification using neural network and fuzzy-neural classifiers. The methodology developed comprises image processing, computation of a directional field image, singular-point detection, and feature vector encoding. The statistical distribution of the feature vectors was analysed using SPSS. Three types of classifiers, namely multi-layer perceptron (MLP), radial basis function (RBF) and fuzzy-neural network (FNN) methods, were implemented. The developed classification systems were tested and evaluated on 4,000 fingerprint images from the NIST-4 database. For the five-class problem, classification accuracies of 96.2% for the FNN, 96.07% for the MLP and 84.54% for the RBF were achieved, without any rejection. The FNN and MLP classification results are significant in comparison with the existing studies that have been reviewed.
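    As a minimal sketch of the classification stage only, the code below trains an MLP on placeholder five-class feature vectors; in the thesis the inputs are the encoded directional-field and singular-point features, and the accuracies above come from the full pipeline, not from this toy example.

```python
# Minimal sketch of the five-class classification stage using an MLP on
# placeholder feature vectors (class-dependent Gaussian clusters stand in for
# the encoded directional-field/singular-point features).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 4000, 64, 5   # e.g. arch, tented arch, left loop, right loop, whorl

y = rng.integers(0, n_classes, n_samples)
centers = rng.normal(size=(n_classes, n_features))
X = centers[y] + 0.8 * rng.standard_normal((n_samples, n_features))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)
print("hold-out accuracy on synthetic features:", round(mlp.score(X_te, y_te), 3))
```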