12 research outputs found

    Clustering and Classification for Time Series Data in Visual Analytics: A Survey

    Get PDF
    Visual analytics for time series data has received a considerable amount of attention. Different approaches have been developed to understand the characteristics of the data and obtain meaningful statistics in order to explore the underlying processes, identify and estimate trends, make decisions and predict the future. The machine learning and visualization areas share a focus on extracting information from data. In this paper, we consider not only automatic methods but also interactive exploration. The ability to embed efficient machine learning techniques (clustering and classification) in interactive visualization systems is highly desirable in order to gain the most from both humans and computers. We present a literature review of some of the most important publications in the field and classify over 60 published papers from six different perspectives. This review intends to clarify the major ways in which clustering and classification algorithms are used in visual analytics for time series data and to provide a valuable guide for both new researchers and experts in the emerging field of integrating machine learning techniques into visual analytics.
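
    As a concrete instance of the kind of technique the survey covers (our own minimal sketch, not an example from the paper), the snippet below clusters synthetic time series by combining a hand-rolled dynamic time warping (DTW) distance with SciPy's hierarchical clustering; all data and parameters are illustrative.

```python
# Minimal sketch: DTW distances + average-linkage hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

rng = np.random.default_rng(0)
series = [np.sin(np.linspace(0, 4, 50) + rng.normal(0, 0.3)) for _ in range(10)] \
       + [np.cos(np.linspace(0, 4, 50) + rng.normal(0, 0.3)) for _ in range(10)]

# Condensed pairwise DTW distance matrix, then average-linkage clustering.
n = len(series)
dists = [dtw(series[i], series[j]) for i in range(n) for j in range(i + 1, n)]
labels = fcluster(linkage(dists, method="average"), t=2, criterion="maxclust")
print(labels)  # two groups, one per waveform family
```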

    Piecewise Linear Manifold Clustering

    Full text link
    This work studies the application of topological analysis to non-linear manifold clustering. A novel method that exploits the clustering structure of the data is used to generate a topological representation of the point dataset. An analysis of the topological construction under different simulated conditions is performed to explore the capabilities and limitations of the method, demonstrating statistically significant improvements in performance. Furthermore, we introduce a new information-theoretical validation measure for clustering that exploits geometrical properties of clusters to estimate clustering compressibility, enabling evaluation of the clustering goodness-of-fit without any prior information about true class assignments. We show how the new validation measure, when used as a regularization criterion, allows the creation of clusters that are more informative. A final contribution is a new metaclustering technique that creates model-based clusterings beyond point-shaped and linear structures. Driven by the topological structure and our information-theoretical criteria, this technique provides a structured view of the data at a new, more comprehensive and interpretable level. The improvements of our clustering approach are demonstrated on a variety of synthetic and real datasets, including image and climatological data.
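
    To make the idea of a piecewise linear manifold approximation concrete, here is a toy sketch of ours (not the paper's algorithm, which is driven by topological analysis): points on a noisy circle are partitioned with k-means, and a local line, the first principal direction, is fitted within each cluster.

```python
# Toy sketch: approximate a curved 2-D manifold by local line segments.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 500)
X = np.column_stack([np.cos(t), np.sin(t)]) + rng.normal(0, 0.02, (500, 2))

k = 8
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
for c in range(k):
    P = X[km.labels_ == c]
    centre = P.mean(axis=0)
    # First principal component = the local linear piece of the manifold.
    _, _, vt = np.linalg.svd(P - centre, full_matrices=False)
    direction = vt[0]
    print(f"piece {c}: centre={centre.round(2)}, direction={direction.round(2)}")
```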

    Mathematics and Digital Signal Processing

    Get PDF
    Modern computer technology has opened up new opportunities for the development of digital signal processing methods. The applications of digital signal processing have expanded significantly and today include audio and speech processing, sonar, radar, and other sensor array processing, spectral density estimation, statistical signal processing, digital image processing, signal processing for telecommunications, control systems, biomedical engineering, and seismology, among others. This Special Issue aims at wide coverage of the problems of digital signal processing, from mathematical modeling to the implementation of problem-oriented systems. The basis of digital signal processing is digital filtering. Wavelet analysis implements multiscale signal processing and is used to solve applied problems of de-noising and compression. Processing of visual information, including image and video processing and pattern recognition, is actively used today in robotic systems and in the control of industrial processes. Improving digital signal processing circuits and developing new signal processing systems can improve the technical characteristics of many digital devices. The development of new methods of artificial intelligence, including artificial neural networks and brain-computer interfaces, opens up new prospects for the creation of smart technology. This Special Issue contains the latest technological developments in mathematics and digital signal processing. The stated results are of interest to researchers in the field of applied mathematics and to developers of modern digital signal processing systems.
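
    As a small illustration of the wavelet de-noising mentioned above (our sketch, using the PyWavelets package and the common universal-threshold rule; the signal and parameters are made up):

```python
# Wavelet de-noising sketch: decompose, soft-threshold details, rebuild.
import numpy as np
import pywt

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
noisy = clean + rng.normal(0, 0.4, t.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # robust noise estimate
thresh = sigma * np.sqrt(2 * np.log(noisy.size))      # universal threshold
coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")

print(f"noise std before: {np.std(noisy - clean):.3f}, "
      f"after: {np.std(denoised[:clean.size] - clean):.3f}")
```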

    Machine learning for automatic analysis of affective behaviour

    Get PDF
    The automated analysis of affect has been gaining rapidly increasing attention from researchers over the past two decades, as it constitutes a fundamental step towards achieving next-generation computing technologies and integrating them into everyday life (e.g. via affect-aware, user-adaptive interfaces, medical imaging, health assessment, ambient intelligence etc.). The work presented in this thesis focuses on several fundamental problems on the path towards reliable, accurate and robust affect sensing systems. In more detail, the motivation behind this work lies in recent developments in the field, namely (i) the creation of large, audiovisual databases for affect analysis in the so-called "Big Data" era, along with (ii) the need to deploy systems under demanding, real-world conditions. These developments led to the requirement to analyse emotion expressions continuously in time, instead of merely processing static images, thus unveiling to researchers the wide range of temporal dynamics related to human behaviour. The latter entails another deviation from the traditional line of research in the field: instead of focusing on predicting posed, discrete basic emotions (happiness, surprise etc.), it became necessary to focus on spontaneous, naturalistic expressions captured under settings closer to real-world conditions, utilising more expressive emotion descriptions than a set of discrete labels. To this end, the main motivation of this thesis is to deal with challenges arising from the adoption of continuous dimensional emotion descriptions under naturalistic scenarios, which are considered to capture a much wider spectrum of expressive variability than basic emotions and, most importantly, to model emotional states commonly expressed by humans in their everyday life. In the first part of this thesis, we attempt to demystify the largely unexplored problem of predicting continuous emotional dimensions. This work is amongst the first to explore the problem of predicting emotion dimensions via multi-modal fusion, utilising facial expressions, auditory cues and shoulder gestures. A major contribution of the work presented in this thesis lies in proposing the utilisation of the various relationships exhibited by emotion dimensions in order to improve the prediction accuracy of machine learning methods - an idea which has since been taken up by other researchers in the field. In order to evaluate this experimentally, we extend methods such as Long Short-Term Memory neural networks (LSTM), the Relevance Vector Machine (RVM) and Canonical Correlation Analysis (CCA) to exploit output relationships in learning. As shown, this increases the accuracy of machine learning models applied to this task. The annotation of continuous dimensional emotions is a tedious task, highly prone to the influence of various types of noise. Performed in real time by several annotators (usually experts), the annotation process can be heavily biased by factors such as subjective interpretations of the emotional states observed, the inherent ambiguity of labels related to human behaviour, the varying reaction lags exhibited by each annotator, as well as other factors such as input device noise and annotation errors. In effect, the annotations manifest a strong spatio-temporal annotator-specific bias. Failing to properly deal with annotation bias and noise leads to an inaccurate ground truth, and therefore to ill-generalisable machine learning models.
This makes the proper fusion of multiple annotations, and the inference of a clean, corrected version of the ground truth, one of the most significant challenges in the area. A highly important contribution of this thesis lies in the introduction of Dynamic Probabilistic Canonical Correlation Analysis (DPCCA), a method aimed at fusing noisy continuous annotations. By adopting a private-shared space model, we isolate the individual characteristics that are annotator-specific and not shared, while, most importantly, we model the common, underlying annotation which is shared by annotators (i.e., the derived ground truth). By further learning temporal dynamics and incorporating a time-warping process, we are able to derive a clean version of the ground truth given multiple annotations, eliminating temporal discrepancies and other nuisances. The integration of the temporal alignment process within the proposed private-shared space model makes DPCCA suitable for the problem of temporally aligning human behaviour; that is, given temporally unsynchronised sequences (e.g., videos of two persons smiling), the goal is to generate the temporally synchronised sequences (e.g., the smile apex should co-occur in the videos). Temporal alignment is an important problem for many applications where multiple datasets need to be aligned in time. Furthermore, it is particularly suitable for the analysis of facial expressions, where the activation of facial muscles (Action Units) typically follows a set of predefined temporal phases. A highly challenging scenario arises when the observations are perturbed by gross, non-Gaussian noise (e.g., occlusions), as is often the case when analysing data acquired under real-world conditions. To account for non-Gaussian noise, a robust variant of Canonical Correlation Analysis (RCCA) for robust fusion and temporal alignment is proposed. The model captures the shared, low-rank subspace of the observations, isolating the gross noise in a sparse noise term. RCCA is amongst the first robust variants of CCA proposed in the literature and, as we show in related experiments, outperforms other state-of-the-art methods on related tasks such as the fusion of multiple modalities under gross noise. Beyond private-shared space models, Component Analysis (CA) is an integral component of most computer vision systems, particularly in terms of reducing the usually high-dimensional input spaces in a meaningful manner pertaining to the task at hand (e.g., prediction, clustering). A final, significant contribution of this thesis lies in proposing the first unifying framework for probabilistic component analysis. The proposed framework covers most well-known CA methods, such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Locality Preserving Projections (LPP) and Slow Feature Analysis (SFA), providing further theoretical insights into the workings of CA. Moreover, the proposed framework is highly flexible, enabling novel CA methods to be generated by simply manipulating the connectivity of latent variables (i.e. the latent neighbourhood). As shown experimentally, methods derived via the proposed framework outperform other equivalents in several problems related to affect sensing and facial expression analysis, while providing advantages such as reduced complexity and explicit variance modelling.
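
    As a rough baseline for the shared-space idea behind DPCCA (our sketch; the thesis's models additionally learn temporal dynamics, time warping and robustness to gross noise, none of which is reproduced here), standard CCA from scikit-learn can recover a trace shared by two noisy annotation streams:

```python
# Baseline sketch: CCA recovers a latent trace shared by two noisy streams.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
T = 400
shared = np.cumsum(rng.normal(0, 1, T))                 # latent "true" emotion trace
ann_a = np.column_stack([shared + rng.normal(0, 2, T)   # annotator A: noisy copies
                         for _ in range(3)])
ann_b = np.column_stack([0.5 * shared + rng.normal(0, 2, T)  # annotator B: scaled
                         for _ in range(3)])

cca = CCA(n_components=1).fit(ann_a, ann_b)
proj_a, proj_b = cca.transform(ann_a, ann_b)
# The first canonical variate recovers the common trace up to sign/scale.
corr = np.corrcoef(proj_a.ravel(), shared)[0, 1]
print(f"correlation of canonical variate with latent trace: {abs(corr):.2f}")
```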

    Intelligent Biosignal Processing in Wearable and Implantable Sensors

    Get PDF
    This reprint provides a collection of papers illustrating the state of the art in smart processing of data coming from wearable, implantable or portable sensors. Each paper presents the design, databases used, methodological background, obtained results, and their interpretation for biomedical applications. Revealing examples are brain–machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool based on ECG data to diagnose COVID-19, machine learning-based hypertension risk assessment by means of photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, a thorough analysis of compressive sensing of ECG signals, development of a nanotechnology application for decoding vagus-nerve activity, detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. Thus, this reprint presents significant clinical applications as well as valuable new research issues, providing current illustrations of this new field of research by addressing the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI through 16 different pertinent studies. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or Master's students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine.

    Activity related biometrics for person authentication

    No full text
    One of the major challenges in human-machine interaction has always been the development of techniques that are able to provide accurate human recognition, so as either to offer personalized services or to protect critical infrastructures from unauthorized access. In this direction, a series of well-established and efficient methods have been proposed, mainly based on biometric characteristics of the user. Despite the significant progress that has been achieved recently, there are still many open issues in the area, concerning not only the performance of the systems but also the intrusiveness of the collection methods. The current thesis deals with the investigation of novel, activity-related biometric traits and their potential for multiple and unobtrusive authentication based on the spatiotemporal analysis of human activities. In particular, it starts with an extensive bibliographic review of the most important works in the area of biometrics, illustrating and justifying in parallel the transition from classic biometrics to the new concept of behavioural biometrics. Based on previous works related to human physiology and human motion, and motivated by the intuitive assumption that different body types and different characters would produce distinguishable activity-related traits that are thus valuable for biometric verification, a new type of biometrics, the so-called prehension biometrics (i.e. the combined movement of reaching and grasping activities), is introduced and thoroughly studied herein. The analysis is performed via so-called Activity hyper-Surfaces that form a dynamic movement-related manifold for the extraction of a series of behavioural features. Thereafter, the focus is placed on the extraction of continuous soft biometric features and their efficient combination with state-of-the-art biometric approaches towards increased authentication performance and enhanced security in template storage via Soft biometric Keys. In this context, a novel and generic probabilistic framework is proposed that produces an enhanced matching probability based on the modelling of the systematic error induced during the estimation of the aforementioned soft biometrics and the efficient clustering of the soft biometric feature space. Next, an extensive experimental evaluation of the proposed methodologies follows that effectively illustrates the increased authentication potential of the prehension-related biometrics and the significant advances in recognition performance achieved by the probabilistic framework. In particular, the prehension biometrics are applied on several databases of ~100 different subjects in total performing a great variety of movements. The experiments carried out simulate both episodic and multiple authentication scenarios, while contextual parameters (i.e. the ergonomic-based quality factors of the human body) are also taken into account. Furthermore, the probabilistic framework for augmenting biometric recognition via soft biometrics is applied on top of two state-of-the-art biometric systems, i.e. a gait recognition system (> 100 subjects) and a 3D face recognition system (~55 subjects), significantly advancing their performance. The thesis concludes with an in-depth discussion summarizing the major achievements of the current work, as well as some possible drawbacks and other open issues of the proposed approaches that could be addressed in future works.
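
    To illustrate score-level fusion with soft biometrics (a hypothetical sketch of ours; the thesis's framework additionally models the systematic estimation error and clusters the soft biometric feature space), one can blend a primary matcher score with a Gaussian likelihood over a soft trait such as estimated height; all names and numbers below are invented:

```python
# Hypothetical sketch: blend a matcher score with a soft-biometric likelihood.
import numpy as np

def fused_match_probability(match_score, soft_measured, soft_enrolled,
                            soft_sigma=4.0, weight=0.3):
    """Blend a [0, 1] matcher score with a soft-biometric likelihood.

    soft_sigma models the measurement error (e.g. cm of height);
    weight sets how much the soft trait can move the final score.
    """
    soft_likelihood = np.exp(-0.5 * ((soft_measured - soft_enrolled) / soft_sigma) ** 2)
    return (1 - weight) * match_score + weight * soft_likelihood

# Genuine user: strong match and a consistent height estimate.
print(fused_match_probability(0.82, soft_measured=176, soft_enrolled=178))
# Impostor: similar gait score but a height far from the enrolled value.
print(fused_match_probability(0.55, soft_measured=162, soft_enrolled=178))
```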

    Modelling of Electrical Appliance Signatures for Energy Disaggregation

    Get PDF
    The rapid development of technology in the electrical sector within the last 20 years has led to growing electric power needs through the increased number of electrical appliances and the automation of tasks. In contrast, reduction of the overall energy consumption as well as efficient energy management are needed in order to reduce global warming and meet the global climate protection goals. These requirements have led to the recent adoption of smart meters and smart grids, as well as to the rise of Non-Intrusive Load Monitoring. Non-Intrusive Load Monitoring aims to extract the energy consumption of individual electrical appliances through disaggregation of the total power consumption as measured by a single smart meter at the inlet of a household. Non-Intrusive Load Monitoring is therefore a highly under-determined problem which aims to estimate multiple variables from a single observation and thus cannot be solved analytically. In order to find accurate estimates of the unknown variables, three fundamentally different approaches, namely deep learning, pattern matching and single-channel source separation, have been investigated in the literature to solve the Non-Intrusive Load Monitoring problem. While Non-Intrusive Load Monitoring has multiple areas of application, including energy reduction through consumer awareness, load scheduling for energy cost optimization or reduction of peak demands, the focus of this thesis is especially on the performance of the disaggregation algorithm, the key part of the Non-Intrusive Load Monitoring architecture. In detail, optimizations are proposed for all three architectures, while the focus lies on deep-learning based approaches. Furthermore, the transferability of the deep-learning based approach is investigated and a NILM-specific transfer architecture is proposed. The main contribution of the thesis is threefold. First, with Non-Intrusive Load Monitoring being a time-series problem, the incorporation of temporal information is crucial for accurate modelling of the appliance signatures and the change of signatures over time. Previously published architectures based on deep learning have therefore focused on utilizing regression models which intrinsically incorporate temporal information. In this work, the idea of incorporating temporal information is extended, especially by modelling temporal patterns of appliances not only in the regression stage but also in the input feature vector, i.e. by using fractional calculus, feature concatenation or high-frequency double Fourier integral signatures. Additionally, multi-variance matching is utilized for Non-Intrusive Load Monitoring in order to gain additional degrees of freedom for a pattern-matching based solution. Second, with Non-Intrusive Load Monitoring systems expected to operate in real time as well as to be low-cost applications, computational complexity and storage limitations must be considered. Therefore, an approximation for frequency-domain features is presented in this thesis in order to reduce computational complexity. Furthermore, reduced sampling frequencies and their impact on disaggregation performance have been investigated. Additionally, different elastic matching techniques have been compared in order to reduce training times and utilize models without trainable parameters. Third, in order to fully utilize Non-Intrusive Load Monitoring techniques, accurate transfer models, i.e.
models which are trained on one data domain and tested on a different data domain, are needed. In this context it is crucial to transfer time-variant and manufacturer-dependent appliance signatures to manufacturer-invariant signatures in order to ensure accurate transfer modelling. Therefore, a transfer learning architecture specifically adapted to the needs of Non-Intrusive Load Monitoring is presented. Overall, this thesis contributes to the topic of Non-Intrusive Load Monitoring by improving the performance of the disaggregation stage while comparing three fundamentally different approaches to the disaggregation problem.
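
    The following toy sketch (ours; far simpler than the deep-learning, pattern-matching and source-separation approaches compared in the thesis) conveys the basic disaggregation idea: detect step changes in the aggregate power signal and assign each to the appliance whose nominal power is closest; appliance names and wattages are invented:

```python
# Toy event-based disaggregation: match power steps to nominal signatures.
import numpy as np

appliances = {"kettle": 2000.0, "fridge": 120.0, "lamp": 60.0}

def disaggregate_events(aggregate, min_step=30.0):
    """Match on/off step edges (in watts) to the nearest appliance signature."""
    events = []
    deltas = np.diff(aggregate)
    for t, d in enumerate(deltas, start=1):
        if abs(d) < min_step:        # ignore measurement jitter
            continue
        name = min(appliances, key=lambda a: abs(appliances[a] - abs(d)))
        events.append((t, name, "on" if d > 0 else "off"))
    return events

agg = np.concatenate([np.full(5, 80.0),                   # base load
                      np.full(5, 80.0 + 2000.0),          # kettle switches on
                      np.full(5, 80.0 + 2000.0 + 120.0),  # fridge cycles in
                      np.full(5, 80.0 + 120.0)])          # kettle switches off
print(disaggregate_events(agg))
```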

    Visual analytics methods for retinal layers in optical coherence tomography data

    Get PDF
    Optical coherence tomography is an important imaging technology for the early detection of ocular diseases. Yet, identifying substructural defects in the 3D retinal images is challenging. We therefore present novel visual analytics methods for the exploration of small and localized retinal alterations. Our methods reduce the data complexity and ensure the visibility of relevant information. The results of two cross-sectional studies show that our methods improve the detection of retinal defects, contributing to a deeper understanding of the retinal condition at an early stage of disease.
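
    A schematic sketch of the kind of data reduction involved (our assumption about the data layout, not the thesis's actual pipeline): reduce a segmented OCT volume to a per-layer thickness map and flag positions that deviate strongly from a normative value; all numbers are invented:

```python
# Schematic sketch: flag localized deviations in a retinal thickness map.
import numpy as np

rng = np.random.default_rng(4)
H, W = 64, 64                         # en-face grid of A-scan positions
normative_thickness = 35.0            # hypothetical mean layer thickness (um)
thickness = rng.normal(normative_thickness, 1.5, (H, W))
thickness[20:24, 30:34] -= 8.0        # simulated small localized thinning

# z-score against the normative value; |z| > 3 marks suspicious regions.
z = (thickness - normative_thickness) / 1.5
defect_mask = np.abs(z) > 3.0
ys, xs = np.nonzero(defect_mask)
print(f"flagged {defect_mask.sum()} positions, e.g. at {list(zip(ys, xs))[:3]}")
```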