83 research outputs found

    An Unsupervised Algorithm for Segmenting Categorical Timeseries into Episodes

    This paper describes an unsupervised algorithm for segmenting categorical time series into episodes. The Voting-Experts algorithm first collects statistics about the frequency and boundary entropy of n-grams, then passes a window over the series and has two "expert methods" decide where in the window boundaries should be drawn. The algorithm successfully segments text into words in four languages, and it also segments time series of robot sensor data into subsequences that represent episodes in the life of the robot. We claim that Voting-Experts finds meaningful episodes in categorical time series because it exploits two statistical characteristics of meaningful episodes.
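    As a rough illustration of the mechanism described above, the sketch below implements a simplified two-expert vote over a sliding window. The published algorithm additionally standardizes the experts' scores and organizes statistics in an n-gram trie; the function names and the simplified scoring here are illustrative assumptions.

```python
import math
from collections import Counter, defaultdict

def ngram_stats(seq, max_n):
    """Frequency and boundary (successor) entropy for all n-grams up to max_n."""
    freq, succ = Counter(), defaultdict(Counter)
    for n in range(1, max_n + 1):
        for i in range(len(seq) - n + 1):
            g = tuple(seq[i:i + n])
            freq[g] += 1
            if i + n < len(seq):
                succ[g][seq[i + n]] += 1
    entropy = {}
    for g, counts in succ.items():
        total = sum(counts.values())
        entropy[g] = -sum(c / total * math.log2(c / total) for c in counts.values())
    return freq, entropy

def vote(seq, window=3):
    """Each expert casts one vote per window position for its preferred split."""
    freq, entropy = ngram_stats(seq, window)
    votes = [0] * (len(seq) + 1)
    for start in range(len(seq) - window):
        chunk = seq[start:start + window]
        # Frequency expert: favour splits whose two halves are frequent n-grams.
        f_best = max(range(1, window), key=lambda k:
                     freq[tuple(chunk[:k])] + freq[tuple(chunk[k:])])
        # Entropy expert: favour splits after a high boundary-entropy prefix.
        e_best = max(range(1, window), key=lambda k:
                     entropy.get(tuple(chunk[:k]), 0.0))
        votes[start + f_best] += 1
        votes[start + e_best] += 1
    # Episode boundaries are then drawn at local maxima of the vote profile.
    return votes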

    The Minimum Description Length Principle for Pattern Mining: A Survey

    This is about the Minimum Description Length (MDL) principle applied to pattern mining. The length of this description is kept to the minimum. Mining patterns is a core task in data analysis and, beyond issues of efficient enumeration, the selection of patterns constitutes a major challenge. The MDL principle, a model selection method grounded in information theory, has been applied to pattern mining with the aim of obtaining compact, high-quality sets of patterns. After giving an outline of relevant concepts from information theory and coding, as well as of work on the theory behind the MDL and similar principles, we review MDL-based methods for mining various types of data and patterns. Finally, we open a discussion on some issues regarding these methods, and highlight currently active related data analysis problems.
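    To make the principle concrete, here is a minimal, Krimp-style two-part sketch (not any specific surveyed method): a candidate pattern set is scored by L(model) + L(data | model), and a pattern is kept only if it shortens the total description. The greedy cover and the crude model cost are deliberate simplifying assumptions.

```python
import math
from collections import Counter

def code_table_length(transactions, patterns):
    """Two-part MDL score, Krimp-style: L(code table) + L(data | code table).
    Each transaction is greedily covered by the largest matching pattern."""
    alphabet = set().union(*transactions)
    usage = Counter()
    for t in transactions:
        remaining = set(t)
        for p in sorted(patterns, key=len, reverse=True):
            if p <= remaining:           # pattern covers part of the transaction
                usage[p] += 1
                remaining -= p
        for item in remaining:           # leftovers fall back to singleton codes
            usage[frozenset([item])] += 1
    total = sum(usage.values())
    # L(D|M): Shannon-optimal code lengths from the usage distribution.
    data_bits = sum(u * -math.log2(u / total) for u in usage.values())
    # L(M): crude model cost of log2|alphabet| bits per item per pattern.
    model_bits = sum(len(p) * math.log2(len(alphabet)) for p in patterns)
    return model_bits + data_bits

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "b", "d"}, {"c", "d"}]
# The pattern {a, b} is worth keeping only if it compresses the database:
print(code_table_length(db, [frozenset({"a", "b"})]) < code_table_length(db, []))
```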

    Temporal-spatial Correlation Attention Network for Clinical Data Analysis in Intensive Care Unit

    In recent years, medical information technology has made it possible for electronic health records (EHR) to store fairly complete clinical data, bringing health care into the era of "big data". However, medical data are often sparse and strongly correlated, which makes many medical problems difficult to solve effectively. The rapid development of deep learning in recent years has provided opportunities for the use of big data in healthcare. In this paper, we propose a temporal-spatial correlation attention network (TSCAN) to handle clinical prediction problems such as predicting death, predicting length of stay, detecting physiologic decline, and classifying phenotypes. Based on the design of its attention mechanism, our approach can effectively remove irrelevant items in clinical data and irrelevant time steps according to the task at hand, yielding more accurate predictions. Our method can also identify key clinical indicators of important outcomes that can be used to improve treatment options. Our experiments use the publicly available Medical Information Mart for Intensive Care (MIMIC-IV) database. We achieve a performance benefit of 2.0% over other state-of-the-art prediction methods, reaching 90.7% on mortality prediction and 45.1% on length-of-stay prediction. The source code is available at https://github.com/yuyuheintju/TSCAN.
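    The paper's exact architecture is not reproduced here, but the underlying idea of attending over both time steps and clinical variables can be sketched with generic scaled dot-product attention applied along the two axes of a (time, features) matrix; the fusion by summation is an illustrative assumption.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores, axis=-1) @ V

# x: one ICU stay as a (time_steps, features) matrix of vitals/labs.
rng = np.random.default_rng(0)
x = rng.normal(size=(48, 17))            # e.g. 48 hours, 17 clinical variables

temporal = attention(x, x, x)            # attention weights over time steps
spatial = attention(x.T, x.T, x.T).T     # attention weights over variables
fused = temporal + spatial               # one simple way to combine both views
```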

    Multichannel mixture models for time-series analysis and classification of engagement with multiple health services: An application to psychology and physiotherapy utilization patterns after traffic accidents

    Background: Motor vehicle accidents (MVA) represent a significant burden on health systems globally. Tens of thousands of people are injured in Australia every year and may experience significant disability. Associated economic costs are substantial. There is little literature on the health service utilization patterns of MVA patients. To fill this gap, this study was designed to investigate temporal patterns of psychology and physiotherapy service utilization following transport-related injuries. Method: De-identified compensation data were provided by the Australian Transport Accident Commission. Utilization of physiotherapy and psychology services was analysed. The datasets contained 788 psychology and 3115 physiotherapy claimants, with 22,522 and 118,453 episodes of service utilization, respectively. 582 claimants used both services, and their data were preprocessed to generate multidimensional time series. Time series clustering was applied using a mixture of hidden Markov models to identify the main distinct patterns of service utilization. Combinations of hidden states and clusters were evaluated and optimized using the Bayesian information criterion and interpretability. Cluster membership was further investigated using static covariates and multinomial logistic regression, and classified using high-performing classifiers (extreme gradient boosting machine, random forest and support vector machine) with 5-fold cross-validation. Results: Four clusters of claimants were obtained from the clustering of the time series of service utilization. Service volumes and costs increased progressively from cluster 1 to cluster 4. Membership of cluster 1 was positively associated with nerve damage and negatively associated with severe acquired brain injury (ABI) and spinal injuries. Cluster 3 was positively associated with severe ABI, brain/head injury and psychiatric injury. Cluster 4 was positively associated with internal injuries. The classifiers were capable of classifying cluster membership with moderate to strong performance (AUC: 0.62–0.96). Conclusion: The available time series of post-accident psychology and physiotherapy service utilization were coalesced into four clusters that were clearly distinct in terms of patterns of utilization. In addition, pre-treatment covariates allowed prediction of a claimant's post-accident service utilization with reasonable accuracy. Such results can be useful for a range of decision-making processes, including the design of interventions aimed at improving claimant care and recovery.
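    A minimal sketch of the clustering step, assuming the hmmlearn library and hard cluster assignments (the study's actual mixture estimation and covariate analysis are not reproduced): each multichannel series is assigned to the HMM that scores it best, the HMMs are refit, and a rough BIC is used to compare settings.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def cluster_series(series_list, n_clusters=4, n_states=3, n_iter=10, seed=0):
    """Hard-assignment EM over a mixture of HMMs: assign each multichannel
    time series to its best-scoring HMM, then refit each HMM on its members."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_clusters, size=len(series_list))
    models = []
    for _ in range(n_iter):
        models = []
        for c in range(n_clusters):
            members = [s for s, l in zip(series_list, labels) if l == c]
            if not members:              # keep an emptied cluster alive
                members = [series_list[rng.integers(len(series_list))]]
            m = GaussianHMM(n_components=n_states, random_state=seed)
            m.fit(np.vstack(members), [len(s) for s in members])
            models.append(m)
        labels = np.array([int(np.argmax([m.score(s) for m in models]))
                           for s in series_list])
    return labels, models

def bic(models, series_list, labels):
    """BIC = -2 log L + k log N (rough parameter count), used here to choose
    the number of clusters and hidden states."""
    loglik = sum(models[l].score(s) for s, l in zip(series_list, labels))
    k = sum(m.n_components * (m.n_components + 2 * m.n_features) for m in models)
    n = sum(len(s) for s in series_list)
    return -2 * loglik + k * np.log(n)
```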

    Impact of annotation modality on label quality and model performance in the automatic assessment of laughter in-the-wild

    Laughter is considered one of the most overt signals of joy. It is well recognized as a multimodal phenomenon, yet it is most commonly detected by sensing the sound of laughter. It is unclear how the perception and annotation of laughter differ when it is annotated from other modalities, such as video of the body movements that accompany laughter. In this paper we take a first step in this direction by asking whether, and how well, laughter can be annotated when only audio, only video (containing full-body movement information), or both modalities are available to annotators. We ask whether annotations of laughter are congruent across modalities, and compare the effect that labeling modality has on machine learning model performance. We compare annotations and models for laughter detection, intensity estimation, and segmentation, three tasks common in previous studies of laughter. Our analysis of more than 4000 annotations acquired from 48 annotators revealed evidence of incongruity in the perception of laughter and its intensity between modalities. Further analysis of annotations against consolidated audiovisual reference annotations revealed that recall was lower on average for video than for audio, but tended to increase with the intensity of the laughter samples. Our machine learning experiments compared the performance of state-of-the-art unimodal (audio-based, video-based and acceleration-based) and multimodal models for different combinations of input modalities, training label modality, and testing label modality. Models with video and acceleration inputs had similar performance regardless of training label modality, suggesting that it may be entirely appropriate to train models for laughter detection from body movements using video-acquired labels, despite their lower inter-rater agreement.
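    The recall comparison reported above can be illustrated with frame-level binary labels scored against a consolidated reference; the labels below are hypothetical.

```python
import numpy as np

def frame_recall(reference, annotation):
    """Recall of positive (laughter) frames: the fraction of reference-positive
    frames that the annotator also marked positive."""
    reference, annotation = np.asarray(reference), np.asarray(annotation)
    positives = reference == 1
    return (annotation[positives] == 1).mean() if positives.any() else np.nan

# Hypothetical binary frame labels for one clip (1 = laughter).
reference = [0, 1, 1, 1, 0, 0, 1, 1, 0, 0]
audio_ann = [0, 1, 1, 1, 0, 0, 1, 0, 0, 0]
video_ann = [0, 0, 1, 1, 0, 0, 0, 0, 0, 0]

print(frame_recall(reference, audio_ann))  # 0.8
print(frame_recall(reference, video_ann))  # 0.4 -- lower, as reported for video
```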

    Peptide vocabulary analysis reveals ultra-conservation and homonymity in protein sequences

    A new algorithm is presented for vocabulary analysis (word detection) in texts of human origin. It performs at 60%–70% overall accuracy, greater than 80% accuracy for longer words, and approximately 85% sensitivity on Alice in Wonderland, a considerable improvement on previous methods. When applied to protein sequences, it detects short sequences analogous to words in human texts, i.e. intolerant to changes in spelling (mutation) and relatively context-independent in their meaning (function). Some of these are homonyms of up to 7 amino acids, which can assume different structures in different proteins. Others are ultra-conserved stretches of up to 18 amino acids within proteins of less than 40% overall identity, reflecting extreme constraint or convergent evolution. Different species are found to have qualitatively different major peptide vocabularies, e.g. some are dominated by large gene families, while others are rich in simple repeats or dominated by internally repetitive proteins. This suggests the possibility of a peptide vocabulary signature, analogous to genome signatures in DNA. Homonyms may be useful in detecting convergent evolution and positive selection in protein evolution. Ultra-conserved words may be useful in identifying structures intolerant to substitution over long periods of evolutionary time.
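    The published method is statistical word detection, but the ultra-conserved-word result can be illustrated with a much simpler exact-match scan for k-mers shared across sequences. The sequences below are invented, and a real analysis would also verify that the proteins sharing a word have low (<40%) overall identity.

```python
from collections import defaultdict

def shared_kmers(proteins, k=18, min_proteins=2):
    """Find exact k-mers (candidate 'ultra-conserved words') that occur in
    at least min_proteins different sequences."""
    occurrences = defaultdict(set)
    for name, seq in proteins.items():
        for i in range(len(seq) - k + 1):
            occurrences[seq[i:i + k]].add(name)
    return {kmer: names for kmer, names in occurrences.items()
            if len(names) >= min_proteins}

# Hypothetical sequences sharing one conserved stretch.
proteins = {
    "protA": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "protB": "GGDLSAKQRQISFVKSHFSRQWWPNN",
}
print(shared_kmers(proteins, k=12))
```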

    Teaching a robot manipulation skills through demonstration

    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2004. Includes bibliographical references (p. 127-129). By Jeff Lieberman.

    An automated software system has been developed to allow robots to learn a generalized motor skill from demonstrations given by a human operator. Data is captured using a teleoperation suit as a task is performed repeatedly on Leonardo, the Robotic Life group's anthropomorphic robot, in different parts of his workspace. Stereo vision and tactile feedback data are also captured. Joint and end effector motions are measured through time, and an improved Mean Squared Velocity [MSV] analysis is performed to segment motions into possible goal-directed streams. Further combinatorial selection of subsets of markers allows final episodic boundary selection and time alignment of tasks. The task trials are then analyzed spatially using radial basis functions [RBFs] to interpolate between demonstrations to span his workspace, using the object position as the motion blending parameter. An analysis of the motions in the object coordinate space [with the origin defined at the object] and absolute world-coordinate space [with the origin defined at the base of the robot], and of the motion variances in both coordinate frames, leads to a measure [referred to here as objectivity] of how much any part of an action is absolutely oriented, and how much is object-based. A secondary RBF solution, using end effector paths in the object coordinate frame, provides precise end-effector positioning relative to the object. The objectivity measure is used to blend between these two solutions, using the initial RBF solution to preserve quality of motion, and the secondary end-effector objective RBF solution to increase the robot's capability to engage objects accurately and robustly.
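    A minimal sketch of the blending idea, assuming scipy's RBFInterpolator and treating the objectivity measure as a given scalar (the thesis derives it from motion variances in the two coordinate frames); all data here are placeholders.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical demo data: object position -> end-effector trajectory (flattened).
object_positions = np.array([[0.2, 0.0], [0.4, 0.1], [0.3, -0.2], [0.5, 0.2]])
world_trajs  = np.random.default_rng(0).normal(size=(4, 30))  # world-frame solution
object_trajs = np.random.default_rng(1).normal(size=(4, 30))  # object-frame solution

# One RBF interpolant per coordinate frame, keyed on object position.
world_rbf  = RBFInterpolator(object_positions, world_trajs)
object_rbf = RBFInterpolator(object_positions, object_trajs)

def blended_motion(obj_pos, objectivity):
    """Blend the two interpolated solutions; objectivity in [0, 1] says how
    object-based (vs. absolutely oriented) this part of the action is."""
    q = np.atleast_2d(obj_pos)
    return (1 - objectivity) * world_rbf(q) + objectivity * object_rbf(q)

motion = blended_motion([0.35, 0.0], objectivity=0.7)
```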

    Artificial intelligence for decision making in energy demand-side response

    This thesis examines the role and application of data-driven Artificial Intelligence (AI) approaches for energy demand-side response (DR). It takes the point of view of a service provider company/aggregator looking to support its decision-making and operation. Overall, the study identifies data-driven AI methods as an essential tool and a key enabler for DR. The thesis is organised into two parts. It first provides an overview of AI methods utilised for DR applications, based on a systematic review of over 160 papers, 40 commercial initiatives, and 21 large-scale projects. The reviewed work is categorised based on the type of AI algorithm(s) employed and the DR application area of the AI methods. The first part of the thesis closes with a discussion of the advantages and potential limitations of the reviewed AI techniques for different DR tasks and of how they compare to traditional approaches. The second part of the thesis centres on designing machine learning algorithms for DR. The empirical work undertaken highlights the importance of data quality for providing fair, robust, and safe AI systems in DR, a high-stakes domain. It furthers the state of the art by providing a structured approach to data preparation and data augmentation in DR that minimises propagating effects in the modelling process. The empirical findings on residential response behaviour show stronger responses in households with internet access, air-conditioning systems, power-intensive appliances, and lower gas usage. However, some insights raise questions about whether the reported levels of consumers' engagement in DR schemes translate to actual curtailment behaviour, and about the individual rationale of customer response to DR signals. The thesis also proposes a reinforcement learning framework for the decision problem of an aggregator selecting a set of consumers for DR events. This framework can support an aggregator in leveraging small-scale flexibility resources: it provides an automated end-to-end pipeline for selecting the consumers for demand curtailment during DR events in a dynamic environment, while taking a long-term view of the selection process.
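    The thesis's actual reinforcement learning framework is not specified in this abstract; as one simple stand-in, consumer selection for DR events can be framed as a top-k bandit with epsilon-greedy exploration, updating each consumer's estimated curtailment after every event.

```python
import numpy as np

class ConsumerSelector:
    """Epsilon-greedy bandit sketch: pick k consumers per DR event and update
    each consumer's estimated curtailment from the observed response."""
    def __init__(self, n_consumers, k, epsilon=0.1, seed=0):
        self.estimates = np.zeros(n_consumers)  # running mean curtailment (kWh)
        self.counts = np.zeros(n_consumers)
        self.k, self.epsilon = k, epsilon
        self.rng = np.random.default_rng(seed)

    def select(self):
        if self.rng.random() < self.epsilon:    # explore: random portfolio
            return self.rng.choice(len(self.estimates), self.k, replace=False)
        return np.argsort(self.estimates)[-self.k:]  # exploit: best estimates

    def update(self, chosen, observed_kwh):
        for c, r in zip(chosen, observed_kwh):
            self.counts[c] += 1
            self.estimates[c] += (r - self.estimates[c]) / self.counts[c]

# Per DR event: chosen = selector.select(); observe curtailment; selector.update(...)
```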

    From insights to innovations: data mining, visualization, and user interfaces

    This thesis is about data mining (DM) and visualization methods for gaining insight into multidimensional data. Novel exploratory data analysis tools and adaptive user interfaces are developed by tailoring and combining existing DM and visualization methods in order to advance different applications.

    The thesis presents new visual data mining (VDM) methods that are also implemented in software toolboxes and applied to industrial and biomedical signals. First, we propose a method that has been applied to investigating industrial process data. The self-organizing map (SOM) is combined with scatterplots using traditional color linking or interactive brushing. The original contribution is to apply color-linked or brushed scatterplots and the SOM to visually survey local dependencies between a pair of attributes in different parts of the SOM. Clusters can be visualized on a SOM with different colors, and we also present how a color coding can be obtained automatically by using a proximity-preserving projection of the SOM model vectors. Second, we present a new method for the (interactive) visualization of cluster structures in a SOM. By using a contraction model, the regular grid of a SOM visualization is smoothly changed toward a presentation that better shows the proximities in the data space. Third, we propose a novel VDM method for investigating the reliability of estimates resulting from a stochastic independent component analysis (ICA) algorithm; the method can also be extended to other problems of a similar kind. As a benchmarking task, we rank independent components estimated on a biomedical data set recorded from the brain, with reasonable results.

    We also utilize DM and visualization for mobile-awareness and personalization. We explore how to infer information about the usage context from features derived from sensory signals. The signals originate from a mobile phone with on-board sensors for ambient physical conditions. In previous studies, the signals were transformed into descriptive (fuzzy or binary) context features. In this thesis, we present how the features can be transformed into higher-level patterns, contexts, by rather simple statistical methods: we propose and test using minimum-variance-cost time series segmentation, ICA, and principal component analysis (PCA) for this purpose. Both time series segmentation and PCA revealed meaningful contexts from the features in a visual data exploration. We also present a novel type of adaptive soft keyboard whose aim is an ergonomically better, more comfortable keyboard. The method starts from a conventional keypad layout, but gradually shifts the keys into new positions according to the user's grasp and typing pattern.

    Related to these applications, we present two algorithms that can be used in a general context. First, we describe a binary mixing model for independent binary sources. The model resembles the ordinary ICA model, but the summation is replaced by the Boolean operator OR and the multiplication by AND. We propose a new, heuristic method for estimating the binary mixing matrix and analyze its performance experimentally; the method works for signals that are sparse enough. We also discuss differences in the results when using different objective functions in the FastICA estimation algorithm. Second, we propose "global iterative replacement" (GIR), a novel, greedy variant of a merge-split segmentation method, sketched below. Its performance compares favorably to that of the traditional top-down binary split segmentation algorithm.
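    One plausible reading of GIR, sketched under a minimum-variance segment cost: repeatedly remove a boundary and re-insert it at the globally best position until no move improves the cost. The cost function, initialization, and stopping rule here are assumptions, not the thesis's exact formulation.

```python
import numpy as np

def seg_cost(x, bounds):
    """Total within-segment squared deviation for boundaries [0, ..., len(x)]."""
    return sum(((x[a:b] - x[a:b].mean()) ** 2).sum()
               for a, b in zip(bounds[:-1], bounds[1:]))

def gir(x, n_segments, n_iter=50, seed=0):
    """Greedy 'global iterative replacement': move each internal boundary to
    the globally best position until the segmentation cost stops improving."""
    rng = np.random.default_rng(seed)
    inner = sorted(rng.choice(np.arange(1, len(x)), n_segments - 1, replace=False))
    bounds = [0] + list(inner) + [len(x)]
    for _ in range(n_iter):
        improved = False
        for i in range(1, len(bounds) - 1):
            trial = bounds[:i] + bounds[i + 1:]          # remove boundary i
            best_pos, best_cost = bounds[i], seg_cost(x, bounds)
            for p in range(1, len(x)):                   # try every re-insertion
                if p in trial:
                    continue
                c = seg_cost(x, sorted(trial + [p]))
                if c < best_cost:
                    best_pos, best_cost = p, c
            if best_pos != bounds[i]:
                bounds = sorted(trial + [best_pos])
                improved = True
        if not improved:
            break
    return bounds

x = np.concatenate([np.full(40, 0.0), np.full(30, 3.0), np.full(30, -1.0)])
x += np.random.default_rng(1).normal(scale=0.3, size=x.size)
print(gir(x, n_segments=3))   # boundaries should land near 40 and 70
```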