
    SemImput: Bridging Semantic Imputation with Deep Learning for Complex Human Activity Recognition

    The recognition of activities of daily living (ADL) in smart environments is a well-known and important research area that captures the real-time state of humans in pervasive computing. Recognizing human activities generally involves deploying a set of obtrusive and unobtrusive sensors, pre-processing the raw data, and building classification models using machine learning (ML) algorithms. Integrating data from multiple sensors is a challenging task due to the dynamic nature of the data sources, and it is further complicated by semantic and syntactic differences among them. These differences become even more problematic when the generated data is imperfect, which has a direct impact on its usefulness for building an accurate classifier. In this study, we propose a semantic imputation framework that improves the quality of sensor data using ontology-based semantic similarity learning. This is achieved by identifying semantic correlations among sensor events through SPARQL queries and by performing time-series longitudinal imputation. Furthermore, we applied a deep learning (DL) based artificial neural network (ANN) to public datasets to demonstrate the applicability and validity of the proposed approach. The results showed higher accuracy with the semantically imputed datasets using the ANN. We also present a detailed comparative analysis against the state of the art from the literature. The semantically imputed datasets improved classification accuracy, reaching up to 95.78%, which demonstrates the effectiveness and robustness of the learned models.
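
    As a rough illustration of the pipeline the abstract describes, the sketch below assumes a hypothetical smart-home ontology (the sensor_ontology.ttl file, the ex: namespace, and the observesProperty relation are all invented for illustration and are not the authors' actual vocabulary): a SPARQL query retrieves semantically correlated sensors, whose readings fill a gap before falling back to longitudinal (last-observation) imputation.

        # Minimal sketch only; ontology and property names are hypothetical.
        from rdflib import Graph

        g = Graph()
        g.parse("sensor_ontology.ttl")  # assumed sensor ontology

        # Sensors observing the same property as the failed sensor are
        # treated as semantically correlated with it.
        CORRELATED = """
        PREFIX ex: <http://example.org/smart-home#>
        SELECT ?other WHERE {
            ex:%s ex:observesProperty ?p .
            ?other ex:observesProperty ?p .
            FILTER (?other != ex:%s)
        }"""

        def impute(readings, sensor, t):
            """Fill readings[sensor][t]; readings: {sensor: {time: value}}."""
            for row in g.query(CORRELATED % (sensor, sensor)):
                other = str(row.other).rsplit("#", 1)[-1]
                value = readings.get(other, {}).get(t)
                if value is not None:
                    return value                     # semantic imputation
            past = [v for s, v in sorted(readings[sensor].items())
                    if s < t and v is not None]
            return past[-1] if past else None        # longitudinal fallback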

    Strongly possible functional dependencies for SQL

    Missing data is a significant challenge for research and analysis. It reduces statistical power and can introduce selection bias into the data. Many approaches to handling this problem have been proposed; the main ones either ignore (remove) missing values or impute (fill in) new values for them. This paper follows the second approach. Possible worlds and possible and certain keys were introduced by Köhler et al. and by Levene et al. Köhler and Link introduced certain functional dependencies (c-FDs) as a natural complement to Lien's class of possible functional dependencies (p-FDs). Weak and strong functional dependencies were studied by Levene and Loizou. In a preceding paper, we introduced the intermediate concept of strongly possible worlds, which are obtained by imputing values that already exist in the table. This gives rise to strongly possible keys (spKeys) and strongly possible functional dependencies (spFDs). We give a polynomial algorithm to verify a single spKey and show that, in general, it is NP-complete to verify an arbitrary collection of spKeys. We give a graph-theoretical characterization of the validity of a given spFD X →sp Y. We show that verifying a single strongly possible functional dependency is NP-complete in general, and then identify cases in which a single spFD can be verified in polynomial time. As a step toward the axiomatization of spFDs, the rules given for weak and strong functional dependencies are checked, and appropriate weakenings of those that are not sound for spFDs are listed. The interaction among spFDs, spKeys, and certain keys is studied. Furthermore, a graph-theoretical characterization of implication between singular-attribute spFDs is given.
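
    The matching intuition behind the spKey test can be sketched as follows: rows form one side of a bipartite graph, candidate key values assembled from values already present in the table form the other, and the key holds in some strongly possible world exactly when every row can be matched to its own distinct value. The brute-force enumeration below is illustrative only and is not the paper's polynomial algorithm; is_spkey and the None-as-NULL encoding are assumptions of the sketch.

        # Sketch: spKey verification as bipartite matching. Brute-force
        # enumeration of completions, for illustration only.
        from itertools import product
        import networkx as nx

        def is_spkey(table, key):
            """table: list of dicts (None = NULL); key: attribute names."""
            # Imputation pool: values already occurring in each attribute.
            pool = {a: {r[a] for r in table if r[a] is not None}
                    for a in key}
            g = nx.Graph()
            rows = [("row", i) for i in range(len(table))]
            for i, r in enumerate(table):
                opts = [pool[a] if r[a] is None else {r[a]} for a in key]
                if any(not o for o in opts):
                    return False         # a NULL column with an empty pool
                for combo in product(*opts):
                    g.add_edge(("row", i), ("val", combo))
            # The key holds in some strongly possible world iff every row
            # is matched to its own distinct key value.
            match = nx.bipartite.maximum_matching(g, top_nodes=rows)
            return all(v in match for v in rows)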

    Finding Temporal Patterns in Noisy Longitudinal Data: A Study in Diabetic Retinopathy

    This paper describes an approach to temporal pattern mining that uses user-defined temporal prototypes to specify the nature of the trends of interest. The temporal patterns are defined in terms of sequences of support values associated with identified frequent patterns. The prototypes are defined mathematically so that they can be mapped onto the temporal patterns. The focus of the advocated temporal pattern mining process is a large longitudinal patient database collected as part of a diabetic retinopathy screening programme. The data set is itself of interest, as it is very noisy (in common with other similar medical datasets) and does not feature a clear association between specific time stamps and subsets of the data. The diabetic retinopathy application, the data warehousing and cleaning process, and the frequent pattern mining procedure (together with the application of the prototype concept) are all described in the paper. An evaluation of the frequent pattern mining process is also presented.
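
    A minimal sketch of the support-sequence idea, with an invented "increasing" prototype standing in for the paper's user-defined prototypes (support_sequence, matches_increasing, and the toy data are all illustrative assumptions): compute a frequent itemset's support at each time stamp, then test whether the resulting sequence fits the prototype's mathematical definition.

        def support_sequence(episodes_by_time, itemset):
            """episodes_by_time: {timestamp: [set_of_items, ...]}."""
            seq = []
            for t in sorted(episodes_by_time):
                episodes = episodes_by_time[t]
                # Fraction of episodes at time t containing the itemset.
                seq.append(sum(itemset <= e for e in episodes) / len(episodes))
            return seq

        def matches_increasing(seq, min_growth=0.01):
            """Prototype: support grows by at least min_growth per step."""
            return all(b - a >= min_growth for a, b in zip(seq, seq[1:]))

        times = {1: [{"a"}, {"a", "b"}], 2: [{"a", "b"}, {"a", "b"}]}
        seq = support_sequence(times, {"a", "b"})
        print(seq)                       # [0.5, 1.0]
        print(matches_increasing(seq))   # True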