
    Recent Applications in Graph Theory

    Graph theory, a rigorously investigated field of combinatorial mathematics, has been adopted by a wide variety of disciplines to address a plethora of real-world applications. Advances in graph algorithms and software implementations have made graph theory accessible to a larger community of interest. Ever-increasing interest in machine learning and model deployment for network data demands a coherent selection of topics and rewards a fresh, up-to-date summary of the theory and its fruitful applications to probe further. This volume is a small yet unique contribution to graph theory applications and modeling with graphs. The subjects discussed include information hiding using graphs, dynamic graph-based systems for modeling and controlling cyber-physical systems, graph reconstruction, average distance neighborhood graphs, and pure and mixed-integer linear programming formulations for clustering networks.

    Brain-Computer Interface

    Brain-computer interfacing (BCI), coupled with advanced artificial-intelligence-based identification, is a rapidly growing technology that allows the brain to silently command devices ranging from smartphones to advanced articulated robotic arms when physical control is not possible. BCI can be viewed as a collaboration between the brain and a device via the direct passage of electrical signals from neurons to an external system. The book provides a comprehensive summary of conventional and novel methods for processing brain signals. The chapters cover a range of topics including noninvasive and invasive signal acquisition, signal processing methods, deep learning approaches, and the implementation of BCI in experimental problems.

    Intelligent Biosignal Processing in Wearable and Implantable Sensors

    This reprint provides a collection of papers illustrating the state of the art in smart processing of data coming from wearable, implantable, or portable sensors. Each paper presents the design, the databases used, the methodological background, the results obtained, and their interpretation for biomedical applications. Revealing examples include brain–machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool based on ECG data to diagnose COVID-19, machine learning-based hypertension risk assessment by means of photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, a thorough analysis of compressive sensing of ECG signals, the development of a nanotechnology application for decoding vagus-nerve activity, detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. Thus, this reprint presents significant clinical applications as well as valuable new research issues, providing current illustrations of this new field of research by addressing the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI through 16 different pertinent studies. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or master's students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine.

    Towards developing a reliable medical device for automated epileptic seizure detection in the ICU

    Epilepsy is a prevalent neurological disorder that affects millions of people globally, and its diagnosis typically involves laborious manual inspection of electroencephalography (EEG) data. Automated detection of epileptic seizures in EEG signals could improve diagnostic accuracy and reduce diagnosis time, but special attention must be paid to the number of false alarms in order to avoid unnecessary treatments and costs. This research presents a study on machine learning techniques for EEG seizure detection, investigating the effectiveness of different algorithms for feature extraction, selection, pre-processing, classification, and post-processing, with the aim of achieving high sensitivity and low false alarm rates in a medical device for detecting seizure activity in EEG data. Current state-of-the-art methods that have been clinically validated on large amounts of data are introduced. The study focuses on identifying suitable machine learning methods, considering k-NN, SVM, decision trees, and random forests, and compares their performance on the seizure detection task using features introduced in the literature. Using ensemble methods, namely bootstrapping and majority voting, we achieved a sensitivity of 0.80, a false alarm rate of 2.10 per hour (FAR/h), an accuracy of 97.1%, and a specificity of 98.2%. Overall, the findings of this study can be useful for developing more accurate and efficient algorithms for a medical device for EEG seizure detection, which can contribute to the early diagnosis and treatment of epilepsy in critically ill patients in the intensive care unit.
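
    As a point of reference for the ensemble step described above, the following is a minimal sketch of bootstrapping plus majority voting over the four classifier families named in the abstract. It is illustrative only, not the authors' implementation: the feature matrix X (one row per EEG window) and the labels y are assumed to come from a separate feature-extraction stage, and post-processing is omitted.

        # Sketch: train each base classifier on its own bootstrap resample,
        # then label a window as seizure when at least half the models agree.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.utils import resample

        def fit_bootstrap_ensemble(X, y, seed=0):
            """X: (n_windows, n_features); y: 0/1 seizure labels per window."""
            base = [KNeighborsClassifier(), SVC(), DecisionTreeClassifier(),
                    RandomForestClassifier(n_estimators=100)]
            models = []
            for i, clf in enumerate(base):
                Xb, yb = resample(X, y, random_state=seed + i)  # bootstrap sample
                models.append(clf.fit(Xb, yb))
            return models

        def majority_vote(models, X):
            """Predict 1 (seizure) for windows where most base models agree."""
            votes = np.stack([m.predict(X) for m in models])
            return (votes.mean(axis=0) >= 0.5).astype(int)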

    Emotion and Stress Recognition Related Sensors and Machine Learning Technologies

    This book includes impactful chapters that present scientific concepts, frameworks, architectures, and ideas on sensing technologies and machine learning techniques. These are relevant in tackling the following challenges: (i) the field readiness and use of intrusive sensor systems and devices for capturing biosignals, including EEG sensor systems, ECG sensor systems and electrodermal activity sensor systems; (ii) the quality assessment and management of sensor data; (iii) data preprocessing, noise filtering and calibration concepts for biosignals; (iv) the field readiness and use of nonintrusive sensor technologies, including visual sensors, acoustic sensors, vibration sensors and piezoelectric sensors; (v) emotion recognition using mobile phones and smartwatches; (vi) body area sensor networks for emotion and stress studies; (vii) the use of experimental datasets in emotion recognition, including dataset generation principles and concepts, quality assurance and emotion elicitation material and concepts; (viii) machine learning techniques for robust emotion recognition, including graphical models, neural network methods, deep learning methods, statistical learning and multivariate empirical mode decomposition; (ix) subject-independent emotion and stress recognition concepts and systems, including facial expression-based systems, speech-based systems, EEG-based systems, ECG-based systems, electrodermal activity-based systems, multimodal recognition systems and sensor fusion concepts; and (x) emotion and stress estimation and forecasting from a nonlinear dynamical system perspective.

    Context-dependent fusion with application to landmine detection.

    Traditional machine learning and pattern recognition systems use a feature descriptor to describe the sensor data and a particular classifier (also called an expert or learner) to determine the true class of a given pattern. However, for complex detection and classification problems involving data with large intra-class variations and noisy inputs, no single source of information can provide a satisfactory solution. As a result, the combination of multiple classifiers is playing an increasing role in solving these complex pattern recognition problems and has proven to be a viable alternative to using a single classifier. In this thesis we introduce a new Context-Dependent Fusion (CDF) approach, which we use to fuse multiple algorithms that apply different types of features and different classification methods to multiple sensor data. The proposed approach is motivated by the observation that no single algorithm can consistently outperform all others; in fact, the relative performance of different algorithms can vary significantly depending on several factors, such as the extracted features and the characteristics of the target class. The CDF method is a local approach that adapts the fusion method to different regions of the feature space. The goal is to exploit the strengths of a few algorithms in different regions of the feature space without being affected by the weaknesses of the other algorithms, while also avoiding the loss of potentially valuable information provided by weak classifiers by considering their output as well. The proposed fusion has three main interacting components. The first component, called Context Extraction, partitions the composite feature space into groups of similar signatures, or contexts. The second component assigns an aggregation weight to each detector's decision in each context based on its relative performance within that context. The third component combines the multiple decisions, using the learned weights, to make a final decision. For the Context Extraction component, a novel algorithm that performs clustering and feature discrimination is used to cluster the composite feature space and identify the relevant features for each cluster. For the fusion component, six different methods were proposed and investigated. The proposed approaches were applied to the problem of landmine detection. Detection and removal of landmines is a serious problem affecting civilians and soldiers worldwide, and several landmine detection algorithms have been proposed. Extensive testing of these methods has shown that the relative performance of different detectors can vary significantly depending on the mine type, geographical site, soil and weather conditions, burial depth, etc. Therefore, multi-algorithm and multi-sensor fusion is a critical component in landmine detection. Results on large and diverse real data collections show that the proposed method can identify meaningful and coherent clusters and that different expert algorithms can be identified for the different contexts. Our experiments have also indicated that context-dependent fusion outperforms all individual detectors and several global fusion methods.
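
    The three CDF components lend themselves to a compact illustration. The sketch below is a simplified stand-in, not the thesis's method: plain k-means replaces the joint clustering and feature-discrimination algorithm for context extraction, and each detector's aggregation weight in a context is simply its normalised training accuracy there (the thesis investigates six fusion variants).

        # detector_scores: (n_samples, n_detectors) soft outputs in [0, 1];
        # labels: 0/1 ground truth; features: the composite feature space.
        import numpy as np
        from sklearn.cluster import KMeans

        def train_cdf(features, detector_scores, labels, n_contexts=5):
            # 1) Context extraction: partition the composite feature space.
            ctx_model = KMeans(n_clusters=n_contexts, n_init=10).fit(features)
            ctx = ctx_model.labels_
            weights = np.zeros((n_contexts, detector_scores.shape[1]))
            for c in range(n_contexts):
                m = ctx == c
                # 2) Weight each detector by its accuracy within this context.
                acc = ((detector_scores[m] >= 0.5) == labels[m, None]).mean(axis=0)
                weights[c] = acc / acc.sum()
            return ctx_model, weights

        def fuse(ctx_model, weights, features, detector_scores):
            # 3) Combine detector outputs with the learned per-context weights.
            ctx = ctx_model.predict(features)
            return (weights[ctx] * detector_scores).sum(axis=1)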

    Digital neuromorphic auditory systems

    This dissertation presents several digital neuromorphic auditory systems. Neuromorphic systems can run in real time at a smaller computing cost and with lower power consumption than widely available general-purpose computers. These auditory systems are considered neuromorphic because they are modelled after computational models of the mammalian auditory pathway and can run on digital hardware, more specifically on a field-programmable gate array (FPGA). The models introduced fall into three parts: a cochlear model, an auditory pitch model, and a functional primary auditory cortical (A1) model. The cochlear model is the primary interface for an input sound signal and transmits the 2D time-frequency representation of the sound to the pitch model as well as to the A1 model. In the pitch model, pitch information is extracted from the sound signal in the form of a fundamental frequency. From the A1 model, timbre information is extracted in the form of the time-frequency envelope of the sound signal. Since these computational auditory models must be implemented on FPGAs, which possess fewer computational resources than general-purpose computers, the algorithms in the models are optimised so that each fits on a single FPGA. The optimisation includes using simplified, hardware-implementable signal processing algorithms. Computational resource information for each model on the FPGA is extracted to establish the minimum resources required to run it, including the number of logic modules and registers utilised and the power consumption. Similarity comparisons are also made between the output responses of the computational auditory models in software and in hardware, using pure tones, chirp signals, frequency-modulated signals, moving ripple signals, and musical signals as input. The limitations of the models' responses to musical signals at multiple intensity levels are also presented, along with the use of an automatic gain control algorithm to alleviate them. With real-world musical signals as input, the responses of the models are also tested using classifiers: the response of the auditory pitch model is used to classify monophonic musical notes, and the response of the A1 model is used to classify musical instruments from their respective monophonic signals. Classification accuracy results are shown for model output responses in both software and hardware. With the hardware-implementable auditory pitch model, the classification accuracy is 100% for musical notes from the 4th and 5th octaves, covering 24 classes of notes. With the hardware-implementable auditory timbre model, the classification accuracy is 92% for 12 classes of musical instruments. Also presented is the difference in memory requirements of the model output responses in software and hardware: the pitch and timbre responses used for the classification exercises occupy 24 and 2 times less memory, respectively, in hardware than in software.
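
    For orientation on the quantity the pitch model extracts, the snippet below estimates a fundamental frequency with plain autocorrelation. This is a conventional software baseline used here only to illustrate what "pitch as fundamental frequency" means; it is not the neuromorphic, FPGA-oriented pitch model of the dissertation.

        import numpy as np

        def estimate_f0(frame, fs, fmin=50.0, fmax=2000.0):
            """Crude F0 estimate (Hz) from one audio frame via autocorrelation."""
            x = frame - frame.mean()
            ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
            lo, hi = int(fs / fmax), int(fs / fmin)            # admissible lag range
            return fs / (lo + np.argmax(ac[lo:hi]))

        # A 440 Hz tone at fs = 16 kHz should come out near 440 Hz.
        fs = 16000
        t = np.arange(int(0.05 * fs)) / fs
        print(estimate_f0(np.sin(2 * np.pi * 440 * t), fs))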

    Decoding Neural Signals with Computational Models: A Systematic Review of Invasive BMI

    There are significant milestones in modern human civilization at which mankind stepped into a different level of life, with a new spectrum of possibilities and comfort. From fire-lighting technology and wheeled wagons to writing, electricity, and the Internet, each one changed our lives dramatically. In this paper, we take a deep look into the invasive Brain Machine Interface (BMI), an ambitious and cutting-edge technology that has the potential to be another important milestone in human civilization. Not only beneficial for patients with severe medical conditions, invasive BMI technology can significantly impact many other technologies and almost every aspect of human life. We review the biological and engineering concepts that underpin the implementation of BMI applications. Various essential techniques are necessary for making invasive BMI applications a reality. We review these by providing an analysis of (i) possible applications of invasive BMI technology, (ii) the methods and devices for detecting and decoding brain signals, and (iii) possible options for delivering stimulation signals to the human brain. Finally, we discuss the challenges and opportunities of invasive BMI for further development in the area.

    Wearable in-ear pulse oximetry: theory and applications

    Wearable health technology, most commonly in the form of the smartwatch, is employed by millions of users worldwide. These devices generally exploit photoplethysmography (PPG), the non-invasive use of light to measure blood volume, in order to track physiological metrics such as pulse and respiration. PPG is also commonly used in hospitals in the form of pulse oximetry, which measures light absorbance by the blood at different wavelengths in order to estimate blood oxygen saturation (SpO2). This thesis aims to demonstrate that, despite its widespread usage over many decades, this sensor still possesses a wealth of untapped value. Through a combination of advanced signal processing and the use of the ear as a location for wearable sensing, this thesis introduces several novel, high-impact applications of in-ear pulse oximetry and photoplethysmography. The aims of the thesis are accomplished through a three-pronged approach: rapid detection of hypoxia, tracking of cognitive workload and fatigue, and detection of respiratory disease. By means of simultaneous recordings of in-ear and finger pulse oximetry at rest and during breath-hold tests, it was found that in-ear SpO2 responds on average 12.4 seconds faster than finger SpO2. This is likely due in part to the ear being in close proximity to the brain, making it a priority for oxygenation and thus making wearable in-ear SpO2 a good proxy for core blood oxygen. Next, the low latency of in-ear SpO2 was further exploited in the novel application of classifying cognitive workload. It was found that in-ear pulse oximetry was able to robustly detect tiny decreases in blood oxygen during increased cognitive workload, likely caused by increased brain metabolism. This thesis demonstrates that in-ear SpO2 can be used to accurately distinguish between different levels of an N-back memory task, representing different levels of mental effort. This concept was further validated through its application to gaming and then extended to the detection of driver fatigue, where it was found that features derived from SpO2 and PPG were predictive of absolute steering wheel angle, which acts as a proxy for fatigue. The strength of in-ear PPG for monitoring respiration was investigated relative to the finger, with the conclusion that in-ear PPG exhibits far stronger respiration-induced intensity variations and pulse amplitude variations than the finger. All three respiratory modes (intensity, pulse amplitude, and pulse frequency variations) were harnessed through multivariate empirical mode decomposition (MEMD) to produce spirometry-like respiratory waveforms from PPG. It was discovered that these PPG-derived respiratory waveforms can be used to detect obstruction to breathing, both with a novel apparatus for simulating breathing disorders and through the classification of chronic obstructive pulmonary disease (COPD) in the real world. This thesis establishes in-ear pulse oximetry as a wearable technology with the potential for immense societal impact, with applications ranging from the classification of cognitive workload and the prediction of driver fatigue to the detection of chronic obstructive pulmonary disease. The experiments and analysis in this thesis conclusively demonstrate that the widely used techniques of pulse oximetry and photoplethysmography possess a wealth of untapped value, in essence teaching the old PPG sensor new tricks.
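
    For readers unfamiliar with pulse oximetry, the measurement principle sketched at the start of this abstract fits in a few lines. The linear calibration SpO2 ≈ 110 - 25R used below is a common textbook approximation of the "ratio of ratios" method; any real oximeter, including the in-ear sensors studied in the thesis, relies on its own empirically fitted calibration curve.

        import numpy as np

        def spo2_ratio_of_ratios(red_ppg, ir_ppg):
            """Estimate SpO2 (%) from simultaneous red and infrared PPG segments."""
            def perfusion(ppg):
                ac = np.ptp(ppg)    # pulsatile (AC) component, peak-to-peak
                dc = np.mean(ppg)   # static (DC) absorbance level
                return ac / dc
            r = perfusion(red_ppg) / perfusion(ir_ppg)  # the "ratio of ratios"
            return 110.0 - 25.0 * r                     # textbook linear calibration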

    Explainable deep learning solutions for the artifacts correction of EEG signals

    Brain electrical activity can be acquired via electroencephalography (EEG), with electrodes placed on the scalp of the individual. When EEG signals are recorded, artifacts such as muscular activity, eye movements, cardiac activity, power-line electrical noise, and noise from the acquisition device itself can significantly affect the quality of the signals, and their removal is fundamental in many disciplines in order to obtain a clean signal. Machine learning (ML) techniques are one approach used to classify and remove EEG artifacts. Deep learning (DL) is a branch of ML inspired by the architecture of the cerebral cortex, which is formed by a dense network of neurons, the simple processing units of our brain. In this thesis work we use ICLabel, an artificial neural network developed within EEGLAB, which classifies the independent components (ICs) obtained by independent component analysis (ICA) into seven classes: brain, eye, muscle, heart, channel noise, line noise, and other. ICLabel provides the probability that each IC belongs to one of the six artifact classes or is a pure brain component. We created a simple CNN, similar to ICLabel's, that classifies the EEG ICs into two classes, brain and non-brain, and we added an explainability tool, GradCAM, to investigate how the algorithm successfully classifies the ICs. We compared the performance of our simple CNN against that of ICLabel, finding that the CNN reaches satisfactory accuracy (over the two classes, brain/non-brain). We then applied GradCAM to the CNN to understand which parts of the spectrogram the network uses to classify the data, and we can speculate that, as expected, the CNN is driven by components such as power-line noise (50 Hz and higher harmonics) to identify non-brain components, while it focuses on the 1-30 Hz range to identify brain components. Although promising, these results need further investigation. Moreover, GradCAM could later be applied to ICLabel, too, in order to explain the more sophisticated seven-class DL model.
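
    To make the decompose-then-classify pipeline concrete, the sketch below uses scikit-learn's FastICA in place of EEGLAB's ICA and a toy spectral power-ratio rule in place of the thesis's CNN. Real ICLabel and the thesis's network operate on richer IC features (scalp maps, spectra, autocorrelation), so this illustrates the pipeline shape only.

        import numpy as np
        from scipy.signal import welch
        from sklearn.decomposition import FastICA

        def brain_vs_nonbrain(eeg, fs):
            """eeg: (n_channels, n_samples). Returns one bool per IC (True = brain)."""
            ics = FastICA(n_components=eeg.shape[0], random_state=0) \
                .fit_transform(eeg.T).T                 # independent components as rows
            labels = []
            for ic in ics:
                f, pxx = welch(ic, fs=fs)
                band = lambda lo, hi: pxx[(f >= lo) & (f < hi)].sum()
                # Heuristic mirroring the GradCAM finding above: brain ICs carry
                # most power in 1-30 Hz; line noise and muscle dominate above 30 Hz.
                labels.append(band(1.0, 30.0) > band(30.0, f[-1] + 1.0))
            return np.array(labels)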