24 research outputs found

    Enabling privacy-aware interoperable and quality IoT data sharing with context

    Funding: This research has been supported by the European Union projects funded under the Horizon 2020 research and innovation programme (smashHit, grant agreement 871477) and in part by the European Union Horizon Europe project UPCAST (grant agreement 101093216).

    Sharing Internet of Things (IoT) data across different sectors, such as in smart cities, is complex due to heterogeneity. This poses challenges related to a lack of interoperability, data quality issues, missing context information, and a lack of data veracity (or accuracy). In addition, there are privacy concerns, as IoT data may contain personally identifiable information. To address these challenges, this paper presents a novel semantic technology-based framework that enables data sharing in a GDPR-compliant manner while ensuring that the shared data is interoperable, contains the required context information, is of acceptable quality, and is accurate and trustworthy. The proposed framework also accounts for the edge/fog, an emerging computing paradigm for the IoT that supports real-time decisions. We evaluate the performance of the proposed framework in two different edge and fog-edge scenarios using resource-constrained IoT devices, such as the Raspberry Pi. In addition, we evaluate the quality, interoperability and veracity of the shared data. Our key finding is that the proposed framework can be employed on IoT devices with limited resources, thanks to its low CPU and memory utilization for analytics, data transformation and migration operations. The low overhead of the framework supports real-time decision making. In addition, the 100% accuracy of our evaluation of data quality and veracity, based on 180 different observations, demonstrates that the proposed framework can guarantee both. Peer reviewed
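
The quality and context checks the abstract describes can be pictured with a minimal sketch. All field names, the required-context set, and the plausibility range are illustrative assumptions, not the framework's actual API:

```python
# Hypothetical sketch: the kind of quality/veracity check such a framework
# could apply to a sensor observation before sharing it.
# REQUIRED_CONTEXT and the valid range are assumptions for illustration.

REQUIRED_CONTEXT = {"sensor_id", "timestamp", "unit"}

def check_observation(obs, valid_range=(-40.0, 85.0)):
    """Return (ok, reasons): completeness and plausibility checks."""
    reasons = []
    missing = REQUIRED_CONTEXT - obs.keys()
    if missing:
        reasons.append(f"missing context: {sorted(missing)}")
    value = obs.get("value")
    if not isinstance(value, (int, float)):
        reasons.append("value is not numeric")
    elif not valid_range[0] <= value <= valid_range[1]:
        reasons.append("value outside plausible range")
    return (not reasons, reasons)

good = {"sensor_id": "s1", "timestamp": "2023-01-01T00:00:00Z",
        "unit": "Cel", "value": 21.5}
bad = {"sensor_id": "s2", "value": 300.0}
```

An observation failing either check would be rejected or flagged before it is shared, which is one way the reported quality and veracity guarantees could be enforced on a resource-constrained device.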

    The smashHitCore Ontology for GDPR-Compliant Sensor Data Sharing in Smart Cities

    The adoption of the General Data Protection Regulation (GDPR) has resulted in a significant shift in how the data of European Union citizens is handled. A variety of data sharing challenges in scenarios such as smart cities have arisen, especially when attempting to semantically represent GDPR legal bases, such as consent and contracts, and the data types and specific sources related to them. Most existing ontologies that model the GDPR focus mainly on consent. To represent other GDPR bases, such as contracts, multiple ontologies need to be simultaneously reused and combined, which can result in inconsistent and conflicting knowledge representation. To address this challenge, we present the smashHitCore ontology. smashHitCore provides a unified and coherent model for both consent and contracts, as well as the sensor data and data processing associated with them. The ontology was developed in response to real-world sensor data sharing use cases in the insurance and smart city domains. It has been successfully utilised to enable GDPR-compliant data sharing in a connected-car insurance use case and in a city feedback system as part of a smart city use case.
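
The idea of modelling consent and contracts in one coherent vocabulary can be sketched with plain subject-predicate-object triples. The class and property names below are illustrative placeholders, not the actual smashHitCore axioms:

```python
# Illustrative sketch (not the real smashHitCore vocabulary): a consent and
# a contract represented as triples, the way an RDF-based ontology models
# GDPR legal bases alongside the data they refer to.

triples = {
    ("ex:consent1", "rdf:type", "core:Consent"),
    ("ex:consent1", "core:givenBy", "ex:dataSubject1"),
    ("ex:consent1", "core:forPurpose", "ex:insurancePricing"),
    ("ex:consent1", "core:refersToData", "ex:carSensorData"),
    ("ex:contract1", "rdf:type", "core:Contract"),
    ("ex:contract1", "core:hasContractor", "ex:insurerA"),
}

def objects(subject, predicate):
    """All objects for a given subject and predicate."""
    return {o for (s, p, o) in triples if s == subject and p == predicate}
```

Because consent and contract share one model, a single query style answers questions about either legal basis, which is the inconsistency problem the ontology is said to address.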

    Data Protection by Design Tool for Automated GDPR Compliance Verification Based on Semantically Modeled Informed Consent

    The enforcement of the GDPR in May 2018 has led to a paradigm shift in data protection. Organizations face significant challenges, such as demonstrating compliance (or auditability) and automated compliance verification, due to the complex and dynamic nature of consent, as well as the scale at which compliance verification must be performed. Furthermore, the GDPR’s promotion of data protection by design, together with industrial interoperability requirements, has created new technical challenges, as they require significant changes in the design and implementation of systems that handle personal data. We present a scalable data protection by design tool for automated compliance verification and auditability based on informed consent that is modeled with a knowledge graph. Automated compliance verification is made possible by implementing a regulation-to-code process that translates GDPR regulations into well-defined technical and organizational measures and, ultimately, software code. We demonstrate the effectiveness of the tool in the insurance and smart cities domains and highlight ways in which it can be adapted to other domains.
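
A regulation-to-code translation can be illustrated in miniature. Taking the obligation "process personal data only under valid, unrevoked consent that covers the purpose" and turning it into a check might look like the following; the consent record's field names are assumptions, not the tool's actual schema:

```python
# Hedged sketch of the "regulation-to-code" idea: one GDPR obligation
# expressed as an executable check. Field names are illustrative.

from datetime import datetime, timezone

def is_processing_compliant(consent, purpose, at=None):
    """True iff consent is unrevoked, unexpired, and covers the purpose."""
    at = at or datetime.now(timezone.utc)
    if consent.get("revoked"):
        return False
    if consent["expires"] < at:
        return False
    return purpose in consent["purposes"]

consent = {
    "purposes": {"insurance_pricing", "road_safety"},
    "expires": datetime(2030, 1, 1, tzinfo=timezone.utc),
    "revoked": False,
}
```

In the tool described above, such checks would be derived systematically from the regulation and evaluated against the consent knowledge graph rather than hand-written per system.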

    HPC Cloud traces for better cloud service reliability

    This dataset includes system metrics (anonymised) such as CPU and memory utilisation, as well as hard drive metrics from SMART (Self-Monitoring, Analysis, and Reporting Technology).

    Improving decision making using semantic web technologies

    With the rapid advance of technology, we are moving towards replacing humans in decision making: the employment of robotics and computerised systems for production and delivery, and of autonomous cars in the travel sector. The focus is placed on techniques such as machine learning and deep learning. However, despite their advances, machine learning and deep learning are incapable of modelling the relationships present in the real world, which are necessary for making a decision. For example, automating sociotechnical systems requires an understanding of both human and technological aspects and how they influence one another. Using machine learning alone, we cannot model the relationships of a sociotechnical system. Semantic Web technologies, which are based on the concept of linked data, can represent relationships more realistically, as they exist in the real world, and can help make better decisions. This study looks at the use of Semantic Web technologies, namely ontologies and knowledge graphs, to improve the decision-making process.
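
The value of explicit relationships for decision making can be shown with a toy graph. The entities, relations, and the sociotechnical scenario below are invented for illustration:

```python
# Minimal sketch of the idea: relationships stored as a knowledge graph let
# a decision procedure follow explicit links (here: which operators are
# affected by a failing machine), context a pure ML model would not encode.
# All entities and relations are illustrative.

edges = [
    ("machine_1", "feeds", "machine_2"),
    ("machine_2", "operatedBy", "operator_A"),
    ("machine_1", "operatedBy", "operator_B"),
]

def related(entity, relation):
    """Objects linked to an entity by a given relation."""
    return {o for (s, r, o) in edges if s == entity and r == relation}

def affected_operators(machine):
    """Operators of the machine and of machines directly downstream."""
    ops = set(related(machine, "operatedBy"))
    for downstream in related(machine, "feeds"):
        ops |= related(downstream, "operatedBy")
    return ops
```

A decision such as "whom to notify when machine_1 fails" follows directly from traversing the graph, whereas a model trained only on sensor readings has no representation of the operatedBy or feeds relationships at all.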

    Survey Data: Towards Improving Prediction Accuracy and User-Level Explainability Using Deep Learning and Knowledge Graphs: A Study on Cassava Disease (Original data)

    This data is part of the paper "Towards improving prediction accuracy and user-level explainability using deep learning and knowledge graphs: A study on cassava disease".

    Range sensor overview and blind-zone reduction of autonomous vehicle shuttles

    In recent years, with advances in sensor technologies, computing technologies and artificial intelligence, the long-sought autonomous vehicles (AVs) have become a reality. Many AVs today are already driving on the roads; still, we have not reached full autonomy. Sensors, which allow AVs to perceive their surroundings, are key to AVs reaching full autonomy. However, this requires an understanding of sensor configurations, performance and sensor placement. In this paper, we present our experience with the sensors of the AV shuttle ise Auto. The shuttle, designed and developed at Tallinn University of Technology, is used as an experimental platform for sensor configuration and set-up.

    Question answering over knowledge graphs: A graph-driven approach

    With the growth of knowledge graphs (KGs), question answering systems make KGs easily accessible to end-users. Question answering over KGs aims to provide crisp answers to natural language questions from facts stored in the KGs. This paper proposes a graph-driven approach to answering questions over a KG in four steps: (1) knowledge subgraph construction, (2) question graph construction, (3) graph matching, and (4) query execution. Given an input question, a knowledge subgraph that is likely to include the answer is extracted to reduce the KG's search space. A graph, named the question graph, is built to represent the question's intention. The question graph is then matched against the knowledge subgraph to find a query graph corresponding to a SPARQL query. Finally, the corresponding SPARQL query is executed to return the answers to the question. The performance of the proposed approach is empirically evaluated using the 6th Question Answering over Linked Data Challenge (QALD-6). Experimental results show that the proposed approach improves on the state of the art in terms of recall, precision, and F1-score.
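
The four steps above can be sketched on a toy KG, using dictionary triples in place of RDF and a simple pattern in place of a generated SPARQL query. The facts and the question are made up for illustration:

```python
# Toy walk-through of the four-step pipeline described above
# (triples instead of RDF, a pattern instead of SPARQL; facts are invented).

kg = [
    ("Tallinn", "capitalOf", "Estonia"),
    ("Estonia", "memberOf", "EU"),
    ("Tallinn", "population", "437000"),
]

def answer(entity, relation):
    # (1) knowledge subgraph: keep only triples mentioning the entity
    subgraph = [t for t in kg if entity in (t[0], t[2])]
    # (2) question graph: a single (entity, relation, ?x) pattern
    pattern = (entity, relation, None)
    # (3) graph matching and (4) query execution: bind ?x in the subgraph
    return [o for (s, r, o) in subgraph
            if s == pattern[0] and r == pattern[1]]
```

For a question like "What is Tallinn the capital of?", the extracted subgraph already excludes the unrelated `memberOf` fact, which is the search-space reduction the abstract describes.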

    CCoDaMiC: a framework for coherent coordination of data migration and computation platforms

    The amount of data generated by millions of connected IoT sensors and devices is growing exponentially. The need to extract relevant information from this data in modern and future-generation computing systems necessitates efficient data handling and processing platforms that can migrate such big data from one location to another seamlessly and securely, and that can preprocess and analyse the data before it reaches its final destination. Various data pipeline architectures have been proposed that allow the data administrator/user to handle the data migration operation efficiently. However, modern data pipeline architectures do not offer built-in functionality for ensuring data veracity, which includes data accuracy, trustworthiness and security. Furthermore, allowing intermediate data to be processed, especially in a serverless computing environment, remains a cumbersome task. To fill this research gap, this paper introduces an efficient and novel data pipeline architecture, named CCoDaMiC (Coherent Coordination of Data Migration and Computation), which brings the data migration operation and its computation together in one place. This also ensures that the data delivered to the next destination/pipeline block is accurate and secure. The proposed framework is implemented in a private OpenStack environment using Apache NiFi.
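
One way to picture the veracity guarantee between pipeline blocks is a checksum sealed at one stage and verified at the next. The stage functions below are an illustrative sketch, not CCoDaMiC's actual interface:

```python
# Hedged sketch of per-block veracity in a data pipeline: each stage
# verifies a checksum computed by the previous stage before accepting
# the data. Function names are illustrative, not CCoDaMiC's API.

import hashlib

def seal(payload: bytes):
    """Attach an integrity digest before the block leaves a stage."""
    return {"payload": payload,
            "sha256": hashlib.sha256(payload).hexdigest()}

def next_stage(block):
    """Accept the block only if its checksum still matches."""
    digest = hashlib.sha256(block["payload"]).hexdigest()
    if digest != block["sha256"]:
        raise ValueError("veracity check failed: payload was altered")
    return block  # safe to process or forward

block = seal(b"sensor batch 42")
```

Real deployments would add authentication (e.g. signed digests) on top of plain hashing, but the accept-only-if-verified handoff is the core of the accuracy-and-security claim above.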

    Knowledge graph based hard drive failure prediction

    The hard drive is one of the important components of a computing system, and its failure can lead to both system failure and data loss. Therefore, the reliability of a hard drive is very important. Recognising this importance, a number of studies have been conducted, and many are still ongoing, to improve hard drive failure prediction. Most of those studies rely solely on machine learning, and a few others on semantic technology. The studies based on machine learning, despite promising results, lack context awareness: for example, how failures are related, or how other factors, such as humidity, influence hard drive failure. Semantic technology, on the other hand, by means of ontologies and knowledge graphs (KGs), is able to provide the context awareness that machine learning-based studies lack. However, the studies based on semantic technology lack the advantages of machine learning, such as the ability to learn a pattern and make predictions based on learned patterns. Therefore, in this paper, leveraging the benefits of both machine learning (ML) and semantic technology, we present our study on knowledge graph-based hard drive failure prediction. The experimental results demonstrate that our proposed method achieves higher accuracy than the current state of the art.
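
The combination of ML features with KG context can be sketched as follows. The graph facts, feature names, and the threshold rule (a toy stand-in for a learned model) are all invented for illustration and are not the paper's method:

```python
# Illustrative sketch (not the paper's model): per-drive SMART features are
# enriched with context drawn from a knowledge graph (e.g. the humidity of
# the rack a drive sits in) before a predictor sees them.
# Facts, names, and thresholds are made up.

kg = {
    ("drive_1", "locatedIn"): "rack_A",
    ("rack_A", "humidity"): 78.0,
    ("drive_2", "locatedIn"): "rack_B",
    ("rack_B", "humidity"): 40.0,
}

def enrich(drive, smart_features):
    """Join SMART features with rack-level context from the KG."""
    rack = kg[(drive, "locatedIn")]
    return {**smart_features, "rack_humidity": kg[(rack, "humidity")]}

def predict_failure(features):
    """Toy stand-in for a learned model: flag high reallocated-sector
    counts, with a lower threshold in humid racks."""
    threshold = 5 if features["rack_humidity"] > 70 else 20
    return features["reallocated_sectors"] >= threshold
```

The same SMART reading can thus yield different predictions depending on graph context, which is the context awareness that a model trained on SMART attributes alone cannot express.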