29 research outputs found

    Intelligent Biosignal Analysis Methods

    This book describes recent efforts to improve intelligent systems for automatic biosignal analysis. It focuses on machine learning and deep learning methods used to classify different organism states and disorders from biomedical signals such as EEG, ECG, and HRV.

    Multivariate multiscale complexity analysis

    Established dynamical complexity analysis measures operate at a single scale and thus fail to quantify the inherent long-range correlations in real-world data, a key feature of complex systems. They are also designed for scalar time series, whereas multivariate observations are common in modern real-world scenarios and their simultaneous analysis is a prerequisite for understanding the underlying signal-generating model. To that end, this thesis first introduces a notion of multivariate sample entropy, extending current univariate complexity analysis to the multivariate case. The proposed multivariate multiscale entropy (MMSE) algorithm is shown to address the dynamical complexity of such data directly in the domain where they reside, and at multiple temporal scales, thus making full use of all the available information, both within and across the multiple data channels. Next, the intrinsic multivariate scales of the input data are generated adaptively via the multivariate empirical mode decomposition (MEMD) algorithm. This allows both for generating comparable scales from multiple data channels and for temporal scales of the same length as the input signal, removing the critical limitation on input data length in current complexity analysis methods. The resulting MEMD-enhanced MMSE method is also shown to be suitable for non-stationary multivariate data analysis owing to the data-driven nature of the MEMD algorithm; non-stationarity is the biggest obstacle to meaningful complexity analysis. This thesis thus presents a significant step forward in the area, introducing robust and physically meaningful complexity estimates of real-world systems, which are typically multivariate, finite in duration, and noisy and heterogeneous in nature. This also allows a better understanding of the complexity of the underlying multivariate model, with more degrees of freedom and rigor in the analysis. Simulations on both synthetic and real-world multivariate data sets support the analysis.
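    The coarse-graining and entropy steps described above can be sketched as follows. This is a minimal illustration rather than the thesis's implementation: it builds composite delay vectors by stacking m consecutive samples from every channel, whereas the full multivariate sample entropy extends one channel at a time and averages over channels; function names, defaults, and the pooled-standard-deviation tolerance are our assumptions.

    ```python
    import numpy as np

    def coarse_grain(x, tau):
        # Average non-overlapping windows of length tau in each channel
        # (x has shape: channels x samples).
        n = x.shape[1] // tau
        return x[:, :n * tau].reshape(x.shape[0], n, tau).mean(axis=2)

    def mv_sample_entropy(x, m=2, r=0.5):
        # Simplified multivariate sample entropy: composite delay vectors
        # stack m consecutive samples from every channel; matches are
        # counted under the Chebyshev (max) norm with tolerance r times
        # the pooled standard deviation.
        c, n = x.shape
        tol = r * x.std()
        def match_count(mm):
            v = np.array([x[:, i:i + mm].ravel() for i in range(n - m)])
            d = np.abs(v[:, None, :] - v[None, :, :]).max(axis=2)
            return (d <= tol).sum() - len(v)  # exclude self-matches
        b, a = match_count(m), match_count(m + 1)
        return -np.log(a / b) if a > 0 else np.inf

    def mmse(x, scales=(1, 2), m=2, r=0.5):
        # Multivariate multiscale entropy: sample entropy of the
        # coarse-grained series at each temporal scale.
        return [mv_sample_entropy(coarse_grain(x, t), m, r) for t in scales]
    ```

    In the standard multiscale-entropy picture, white noise loses entropy as the scale grows while long-range-correlated (e.g. 1/f) signals retain it, which is the kind of signature such measures use to separate structural complexity from mere randomness.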

    Complexity and Entropy in Physiological Signals (CEPS): Resonance Breathing Rate Assessed Using Measures of Fractal Dimension, Heart Rate Asymmetry and Permutation Entropy

    Background: As technology becomes more sophisticated, more accessible methods of interpreting Big Data become essential. We have continued to develop Complexity and Entropy in Physiological Signals (CEPS), an open-access MATLAB® GUI (graphical user interface) providing multiple methods for the modification and analysis of physiological data. Methods: To demonstrate the functionality of the software, data were collected from 44 healthy adults for a study investigating the effects on vagal tone of breathing paced at five different rates, as well as self-paced and un-paced breathing. Five-minute 15-s recordings were used, and results were also compared with those from shorter segments of the data. Electrocardiogram (ECG), electrodermal activity (EDA), and respiration (RSP) data were recorded. Particular attention was paid to COVID risk mitigation and to parameter tuning for the CEPS measures. For comparison, data were processed using the Kubios HRV, RR-APET, and DynamicalSystems.jl software. We also compared findings for ECG RR-interval (RRi) data resampled at 4 Hz (4R) or 10 Hz (10R) with non-resampled data (noR). In total, we used around 190–220 measures from CEPS at various scales, depending on the analysis undertaken, focusing on three families of measures: 22 fractal dimension (FD) measures, 40 heart rate asymmetry (HRA) or Poincaré-plot-derived measures, and 8 measures based on permutation entropy (PE). Results: FDs for the RRi data differentiated strongly between breathing rates, whether data were resampled or not, increasing between 5 and 7 breaths per minute (BrPM). The largest effect sizes for differentiation between breathing rates in the RRi data (4R and noR) were found for the PE-based measures. Measures that both differentiated well between breathing rates and were consistent across different RRi data lengths (1–5 min) included five PE-based measures (noR) and three FDs (4R). Of the top 12 measures whose short-data values stayed consistently within ±5% of their values for the 5-min data, five were FDs, one was PE-based, and none were HRAs. Effect sizes were usually greater for CEPS measures than for those implemented in DynamicalSystems.jl. Conclusion: The updated CEPS software enables visualisation and analysis of multichannel physiological data using a variety of established and recently introduced complexity and entropy measures. Although equal resampling is theoretically important for FD estimation, FD measures may also be usefully applied to non-resampled data.
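    As an illustration of the PE family mentioned above, a standard Bandt–Pompe permutation entropy can be computed in a few lines. This is a generic sketch, not CEPS's MATLAB implementation; the function name and parameter defaults are our choices.

    ```python
    import numpy as np
    from collections import Counter
    from math import log, factorial

    def permutation_entropy(x, order=3, delay=1):
        # Map each window of `order` samples to its ordinal pattern (the
        # permutation that sorts it), then take the normalised Shannon
        # entropy of the pattern distribution (Bandt & Pompe, 2002).
        patterns = Counter(
            tuple(np.argsort(x[i:i + order * delay:delay]))
            for i in range(len(x) - (order - 1) * delay)
        )
        total = sum(patterns.values())
        h = -sum(c / total * log(c / total) for c in patterns.values())
        # 0 = perfectly regular series, 1 = maximally irregular
        return h / log(factorial(order))
    ```

    A monotone ramp yields 0 and white noise approaches 1, which is what makes PE a convenient, parameter-light regularity index for RRi series.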

    Permutation distribution clustering and structural equation model trees

    The primary goal of this thesis is to present novel methodologies for the exploratory analysis of psychological data sets that support researchers in informed theory development. Psychological data analysis bears a long tradition of confirming hypotheses generated prior to data collection. However, in practical research, the following two situations are commonly observed: In the first instance, there are no initial hypotheses about the data. In that case, there is no model available and one has to resort to uninformed methods to reveal structure in the data. In the second instance, existing models that reflect prior hypotheses need to be extended and improved, thereby altering and renewing hypotheses about the data and refining descriptions of the observed phenomena. This dissertation introduces a novel method for the exploratory analysis of psychological data sets for each of the two situations. Both methods focus on time series analysis, which is particularly interesting for the analysis of psychophysiological data and longitudinal data typically collected by developmental psychologists. Nonetheless, the methods are generally applicable and useful for other fields that analyze time series data, e.g., sociology, economics, neuroscience, and genetics. The first part of the dissertation proposes a clustering method for time series. A dissimilarity measure of time series based on the permutation distribution is developed. Employing this measure in a hierarchical scheme allows for a novel clustering method for time series based on their relative complexity: Permutation Distribution Clustering (PDC). Two methods for the determination of the number of distinct clusters are discussed based on a statistical and an information-theoretic criterion. Structural Equation Models (SEMs) constitute a versatile modeling technique, which is frequently employed in psychological research. 
    The second part of the dissertation introduces an extension of SEMs to Structural Equation Modeling Trees (SEM Trees). SEM Trees describe partitions of a covariate space that explain differences in the model parameters. They can provide solutions in situations in which hypotheses in the form of a model exist but may potentially be refined by integrating other variables. By harnessing the full power of SEM, they represent a general data analysis technique that can be used for both time series and non-time-series data. SEM Trees algorithmically refine initial models of the sample and thus support researchers in theory development. This thesis includes demonstrations of the methods on simulated as well as real data sets, including applications of SEM Trees to longitudinal models of cognitive development and cross-sectional cognitive factor models, and applications of PDC to psychophysiological data, including electroencephalographic, electrocardiographic, and genetic data.
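    The PDC dissimilarity described in the first part can be sketched as follows. This is our minimal illustration with hypothetical function names: it uses the squared Hellinger distance between permutation distributions as one reasonable divergence, whereas the actual method offers several choices.

    ```python
    import itertools
    import numpy as np
    from collections import Counter

    def perm_distribution(x, order=3):
        # Relative frequency of each ordinal pattern of length `order`
        # in the embedded time series.
        counts = Counter(
            tuple(np.argsort(x[i:i + order])) for i in range(len(x) - order + 1)
        )
        total = sum(counts.values())
        keys = list(itertools.permutations(range(order)))
        return np.array([counts.get(k, 0) / total for k in keys])

    def pdc_dissimilarity(x, y, order=3):
        # Squared Hellinger distance between the two permutation
        # distributions: 0 for identical, at most 1 for disjoint support.
        p, q = perm_distribution(x, order), perm_distribution(y, order)
        return float(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2) / 2)
    ```

    The resulting pairwise dissimilarity matrix can then be fed to a hierarchical clustering routine (e.g. `scipy.cluster.hierarchy.linkage`) to obtain the complexity-based grouping that PDC produces.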

    Recent Trends in Computational Research on Diseases

    Recent advances in information technology have brought forth a paradigm shift in science, especially in the biological and medical fields. Statistical methodologies based on high-performance computing and big data analysis are now indispensable for the qualitative and quantitative understanding of experimental results. In fact, the last few decades have witnessed drastic improvements in high-throughput experiments in health science, for example mass spectrometry, DNA microarrays, and next-generation sequencing. These methods have been providing massive data covering the four major branches of omics (genomics, transcriptomics, proteomics, and metabolomics). Information about amino acid sequences, protein structures, and molecular structures is fundamental for predicting the bioactivity of chemical compounds when screening drugs. On the other hand, cell imaging, clinical imaging, and personal healthcare devices also provide important data concerning the human body and disease. In parallel, various methods of mathematical modelling, such as machine learning, have developed rapidly. All of these types of data can be utilized in computational approaches to understand disease mechanisms and to support diagnosis, prognosis, drug discovery, drug repositioning, and the identification of disease biomarkers, driver mutations, copy number variations, disease pathways, and much more. In this Special Issue, we have published eight excellent papers dedicated to a variety of computational problems in the biomedical field, from the genomic level to the whole-person physiological level.

    Recent Application in Biometrics

    In recent years, a number of recognition and authentication systems based on biometric measurements have been proposed. Algorithms and sensors have been developed to acquire and process many different biometric traits. Moreover, biometric technology is being used in novel ways, with potential commercial and practical implications for our daily activities. The key objective of the book is to provide a collection of comprehensive references on recent theoretical developments as well as novel applications in biometrics. The topics covered reflect both aspects of development. They include biometric sample quality, privacy-preserving and cancelable biometrics, contactless biometrics, novel and unconventional biometrics, and the technical challenges of implementing the technology in portable devices. The book consists of 15 chapters divided into four sections: biometric applications on mobile platforms, cancelable biometrics, biometric encryption, and other applications. The book was reviewed by the editors, Dr. Jucheng Yang and Dr. Norman Poh. We deeply appreciate the efforts of our guest editors, Dr. Girija Chetty, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park, and Dr. Sook Yoon, as well as a number of anonymous reviewers.

    Sensor Signal and Information Processing II

    In the current age of information explosion, newly invented sensors and software are tightly integrated with our everyday lives. Many sensor processing algorithms incorporate some form of computational intelligence as part of their core framework for problem solving. These algorithms have the capacity to generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves mathematical advancement of nonlinear signal processing theory and its applications, extending far beyond traditional techniques; it bridges the boundary between theory and application, developing novel theoretically inspired methodologies targeting both long-standing and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.