
    Segment Parameter Labelling in MCMC Mean-Shift Change Detection

    This work addresses the problem of segmenting time series data with respect to a statistical parameter of interest in Bayesian models. It is common to assume that the parameters are distinct within each segment, and as a result many Bayesian change point detection models do not exploit patterns in the segment parameters, even though doing so can improve performance. This work proposes a Bayesian mean-shift change point detection algorithm that makes use of repetition in segment parameters by introducing segment class labels drawn from a Dirichlet process prior. The performance of the proposed approach was assessed on both synthetic and real-world data, highlighting the improved performance obtained when using parameter labelling.
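
    One way to picture the labelling idea is a collapsed Gibbs sweep over segment class assignments under a Chinese Restaurant Process (the Dirichlet process prior mentioned above), where each class corresponds to a shared Gaussian segment mean. The sketch below assumes a fixed segmentation, a zero-mean Gaussian prior on each class mean and a known noise variance; it is a minimal illustration, not the paper's full MCMC change point algorithm.

```python
# Minimal sketch: DP (CRP) labelling of segments that share a common mean.
# Assumes a fixed segmentation; priors and hyperparameters are illustrative.
import numpy as np

def crp_label_segments(segments, alpha=1.0, sigma2=1.0, tau2=10.0, sweeps=50, seed=0):
    """segments: list of 1-D numpy arrays, one per segment."""
    rng = np.random.default_rng(seed)
    labels = list(range(len(segments)))  # start with one class per segment

    def log_marginal(data):
        # log marginal likelihood of data with mean ~ N(0, tau2) and noise variance sigma2
        n, s = len(data), data.sum()
        var_post = 1.0 / (n / sigma2 + 1.0 / tau2)
        ll = -0.5 * n * np.log(2 * np.pi * sigma2) - 0.5 * np.sum(data ** 2) / sigma2
        return ll + 0.5 * np.log(var_post / tau2) + 0.5 * var_post * (s / sigma2) ** 2

    for _ in range(sweeps):
        for i, seg in enumerate(segments):
            others = [labels[j] for j in range(len(segments)) if j != i]
            classes = sorted(set(others))
            logp = []
            for c in classes:  # probability of joining an existing class
                members = np.concatenate([segments[j] for j in range(len(segments))
                                          if j != i and labels[j] == c])
                logp.append(np.log(others.count(c))
                            + log_marginal(np.concatenate([members, seg]))
                            - log_marginal(members))
            logp.append(np.log(alpha) + log_marginal(seg))  # open a new class
            probs = np.exp(np.array(logp) - max(logp))
            probs /= probs.sum()
            choice = rng.choice(len(probs), p=probs)
            labels[i] = classes[choice] if choice < len(classes) else max(labels) + 1
    return labels
```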

    Multimodal federated learning on IoT data

    Federated learning is proposed as an alternative to centralized machine learning, since its client-server structure provides better privacy protection and scalability in real-world applications. In many applications, such as smart homes with Internet-of-Things (IoT) devices, local data on clients are generated from different modalities such as sensory, visual, and audio data. Existing federated learning systems only work on local data from a single modality, which limits the scalability of the systems. In this paper, we propose a multimodal and semi-supervised federated learning framework that trains autoencoders to extract shared or correlated representations from different local data modalities on clients. In addition, we propose a multimodal FedAvg algorithm to aggregate local autoencoders trained on different data modalities. We use the learned global autoencoder for a downstream classification task with the help of auxiliary labelled data on the server. We empirically evaluate our framework on different modalities including sensory data, depth camera videos, and RGB camera videos. Our experimental results demonstrate that introducing data from multiple modalities into federated learning can improve its classification performance. In addition, we can use labelled data from only one modality for supervised learning on the server and apply the learned model to testing data from other modalities to achieve decent F1 scores (with the best performance exceeding 60%), especially when combining contributions from both unimodal clients and multimodal clients.
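
    One way to picture the aggregation step is a FedAvg-style weighted average in which layers belonging to the shared representation are averaged across every client, while modality-specific layers are averaged only within clients of the same modality. The layer names ("latent", "encoder_in", "decoder_out"), the prefix-based grouping and the weighting by local dataset size in the sketch below are illustrative assumptions, not the paper's exact multimodal FedAvg algorithm.

```python
# Minimal sketch of modality-aware FedAvg-style aggregation (illustrative layer names).
import numpy as np

def multimodal_fedavg(updates):
    """updates: list of (modality, n_samples, weights_dict) tuples."""
    total = sum(n for _, n, _ in updates)
    global_model = {}
    # shared-representation layers (assumed to be named 'latent*'): average over all clients
    shared_keys = [k for k in updates[0][2] if k.startswith("latent")]
    for k in shared_keys:
        global_model[k] = sum(w[k] * (n / total) for _, n, w in updates)
    # modality-specific layers: average only within clients of the same modality
    for modality in sorted({m for m, _, _ in updates}):
        group = [(n, w) for m, n, w in updates if m == modality]
        g_total = sum(n for n, _ in group)
        for k in group[0][1]:
            if k not in shared_keys:
                global_model[f"{modality}/{k}"] = sum(w[k] * (n / g_total) for n, w in group)
    return global_model

# Illustrative usage with random weights for two modalities of different input sizes.
rng = np.random.default_rng(0)
make = lambda d_in: {"encoder_in": rng.normal(size=(d_in, 16)),
                     "latent": rng.normal(size=(16, 8)),
                     "decoder_out": rng.normal(size=(16, d_in))}
updates = [("sensor", 120, make(6)), ("sensor", 80, make(6)), ("rgb", 200, make(512))]
global_model = multimodal_fedavg(updates)
```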

    Automated Semantic Knowledge Acquisition From Sensor Data

    The gathering of real-world data is facilitated by many pervasive data sources such as sensor devices and smartphones. The abundance of sensory data raises the need to make the data easily available and understandable to potential users and applications. Semantic enhancement is one approach to structuring and organizing the data and making it processable and interoperable by machines. In particular, ontologies are used to represent information and its relations in machine-interpretable form. In this context, a significant amount of work has been done to create real-world data description ontologies and data description models; however, little effort has gone into creating and constructing meaningful topical ontologies from vast amounts of sensory data by automated processes. Topical ontologies represent the knowledge of a certain domain, providing a basic understanding of the concepts that serve as building blocks for further processing. There is a lack of solutions that construct the structure and relations of ontologies based on real-world data. To address this challenge, we introduce a knowledge acquisition method that processes real-world data to automatically create and evolve topical ontologies based on rules that are automatically extracted from external sources. We use an extended k-means clustering method and apply a statistical model to extract and link relevant concepts from the raw sensor data and represent them in the form of a topical ontology. We use a rule-based system to label the concepts and make them understandable to human users and to semantic analysis and reasoning tools and software. The evaluation of our work shows that the construction of a topical ontology from raw sensor data is achievable with only small construction errors.
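
    A rough sketch of the clustering step might look like the following: k-means groups raw sensor feature vectors into candidate concepts, and concepts whose observations co-occur within a short time window are linked as related, giving the skeleton of a topical ontology. The windowed co-occurrence statistic, the thresholds and the omission of the rule-based labelling step are simplifying assumptions; this is not the paper's exact pipeline.

```python
# Minimal sketch: k-means concepts from sensor features, linked by temporal co-occurrence.
import numpy as np
from sklearn.cluster import KMeans

def build_topical_graph(features, timestamps, k=5, window=60.0, min_link=0.2):
    """features: (n_samples, n_dims) sensor readings; timestamps: (n_samples,) seconds."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)

    # count how often two concepts (clusters) are observed within the same time window
    cooccur = np.zeros((k, k))
    order = np.argsort(timestamps)
    ts, ls = np.asarray(timestamps)[order], labels[order]
    for i in range(len(ts)):
        j = i + 1
        while j < len(ts) and ts[j] - ts[i] <= window:
            cooccur[ls[i], ls[j]] += 1
            cooccur[ls[j], ls[i]] += 1
            j += 1

    # keep strong links as "relatedTo" edges between concepts
    cooccur /= max(cooccur.max(), 1.0)
    edges = [(a, b, cooccur[a, b]) for a in range(k) for b in range(a + 1, k)
             if cooccur[a, b] >= min_link]
    return labels, edges
```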

    A Linked-Data Model for Semantic Sensor Streams

    This paper describes a semantic modelling scheme, a naming convention and a data distribution mechanism for sensor streams. The proposed solutions address important challenges in dealing with large-scale sensor data emerging from Internet of Things resources. While there is a significant body of recent work on semantic sensor networks and on semantic annotation and representation frameworks, there has been less focus on creating efficient and flexible schemes to describe sensor streams and the observation and measurement data provided via these streams, and to name and resolve requests for these data. We present our semantic model for describing sensor streams, demonstrate an annotation and data distribution framework, and evaluate our solutions on a set of sample datasets. The results show that our proposed solutions can scale to large numbers of sensor streams with different types of data and various attributes.
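
    As a loose illustration of what a stream annotation can look like, the sketch below builds a JSON-LD description of a single observation using terms from the W3C SOSA vocabulary. The URN-style stream naming scheme and the particular properties chosen are assumptions made for illustration; they do not reproduce the model or naming convention proposed in the paper.

```python
# Minimal sketch: JSON-LD annotation of one sensor-stream observation (SOSA terms).
import json

def annotate_observation(stream_id, sensor_type, location, value, unit, timestamp):
    """Build a JSON-LD description of a single observation from a sensor stream."""
    return {
        "@context": {"sosa": "http://www.w3.org/ns/sosa/"},
        "@id": f"urn:stream:{location}:{sensor_type}:{stream_id}",  # hypothetical naming scheme
        "@type": "sosa:Observation",
        "sosa:madeBySensor": {"@id": f"urn:sensor:{stream_id}"},
        "sosa:observedProperty": sensor_type,
        "sosa:hasSimpleResult": value,
        "sosa:resultTime": timestamp,
        "unit": unit,  # placeholder key; a full model would use a units vocabulary such as QUDT
    }

print(json.dumps(annotate_observation("42", "temperature", "room-101",
                                      21.5, "Cel", "2024-05-01T12:00:00Z"), indent=2))
```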

    Discovering behavioural patterns using conversational technology for in-home health and well-being monitoring

    Advancements in conversational AI have created unparalleled opportunities to promote the independence and well-being of older adults, including people living with dementia (PLWD). However, conversational agents have yet to demonstrate a direct impact in supporting target populations at home, particularly with long-term user benefits and clinical utility. We introduce an infrastructure fusing in-home activity data captured by Internet of Things (IoT) technologies with voice interactions using conversational technology (Amazon Alexa). We collect 3103 person-days of voice and environmental data across 14 households with PLWD to identify behavioural patterns. Interactions include an automated well-being questionnaire and 10 topics of interest, identified using topic modelling. Although a significant decrease in conversational technology usage was observed after the novelty phase across the cohort, steady-state data acquisition for modelling was sustained. We analyse household activity sequences preceding or following Alexa interactions through pairwise similarity and clustering methods. Our analysis demonstrates the capability to identify individual behavioural patterns, changes in those patterns and the corresponding time periods. We further report that households with PLWD continued using Alexa following clinical events (e.g., hospitalisations), which offers a compelling opportunity for proactive health and well-being data gathering related to medical changes. These results demonstrate the promise of conversational AI in digital health monitoring for ageing and dementia support and offer a basis for tracking health and deterioration as indicated by household activity, which can inform healthcare professionals and relevant stakeholders in making timely interventions. Future work will use the bespoke behavioural patterns extracted to create more personalised AI conversations.
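
    To make the sequence-comparison step concrete, the sketch below encodes the activity sequence around each interaction as a short string of activity codes, computes a normalised edit-distance matrix over all sequences, and clusters it with agglomerative clustering. The activity encoding, the edit-distance measure and the clustering algorithm are illustrative assumptions rather than the paper's exact pairwise similarity and clustering method.

```python
# Minimal sketch: pairwise edit distance between activity sequences, then clustering.
# Requires scikit-learn >= 1.2 (the `metric` argument replaced `affinity`).
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def edit_distance(a, b):
    """Levenshtein distance between two activity-code strings."""
    dp = np.arange(len(b) + 1)
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return int(dp[-1])

def cluster_sequences(sequences, n_clusters=3):
    """sequences: list of strings, e.g. 'KKLBDK', one letter per sensed activity."""
    n = len(sequences)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = edit_distance(sequences[i], sequences[j])
            dist[i, j] = dist[j, i] = d / max(len(sequences[i]), len(sequences[j]), 1)
    model = AgglomerativeClustering(n_clusters=n_clusters, metric="precomputed",
                                    linkage="average")
    return model.fit_predict(dist)
```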