
    Contributions to Securing Software Updates in IoT

    The Internet of Things (IoT) is a large network of connected devices. In IoT, devices communicate with each other or with back-end systems to transfer data or perform assigned tasks. The communication protocols used in IoT depend on the target application but usually require low bandwidth. At the same time, IoT devices are constrained, with limited memory, power, and computational resources. Given these limitations, it is difficult to implement best security practices in IoT environments, and network attacks can threaten devices or the data they transfer. It is therefore crucial to react quickly to emerging vulnerabilities and to mitigate them securely through firmware or other necessary updates. Since IoT devices usually connect to the network wirelessly, such updates can be performed Over-The-Air (OTA). This dissertation presents contributions to enable secure OTA software updates in IoT. To perform secure updates, vulnerabilities must first be identified and assessed. In this dissertation, we first present our contribution to designing a maturity model for vulnerability handling. Next, we analyze and compare common communication protocols and security practices with respect to energy consumption. Finally, we describe our lightweight protocol for OTA updates targeting constrained IoT devices. IoT devices and back-end systems often use incompatible protocols that cannot interoperate securely. This dissertation therefore also includes our contribution to designing a secure protocol translator for IoT; the translation is performed inside a Trusted Execution Environment (TEE) with TLS interception. The dissertation further contains our contribution to key management and key distribution in IoT networks. When performing secure software updates, IoT devices can be grouped, since the updates target a large number of devices; prior to deploying updates, a group key therefore needs to be established among group members. We present our secure group key establishment scheme for this purpose. Symmetric key cryptography can help save IoT device resources at the cost of increased key management complexity. This trade-off can be improved by integrating IoT networks with cloud computing and Software Defined Networking (SDN). In this dissertation, we use SDN in cloud networks to provision symmetric keys efficiently and securely. Together, these pieces help software developers and maintainers identify vulnerabilities, provision secret keys, and perform lightweight secure OTA updates. Furthermore, they help devices and systems with incompatible protocols interoperate.
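    The abstract describes the OTA update protocol only at a high level. As a rough, hypothetical illustration of the kind of check an OTA update client on a constrained device performs before installing an image, the Python sketch below verifies the version, integrity, and vendor signature recorded in a firmware manifest. The Ed25519 scheme, the manifest fields, and the function name are assumptions made for illustration, not the dissertation's actual protocol.

        import hashlib

        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

        def verify_firmware(image: bytes, manifest: dict, vendor_pub_key: bytes,
                            installed_version: int) -> bool:
            """Accept an OTA image only if it is newer, intact, and vendor-signed.
            Manifest fields (version, sha256, signature) are hypothetical."""
            if manifest["version"] <= installed_version:
                return False  # reject downgrades and replayed updates
            if hashlib.sha256(image).hexdigest() != manifest["sha256"]:
                return False  # image corrupted or swapped in transit
            try:
                Ed25519PublicKey.from_public_bytes(vendor_pub_key).verify(
                    bytes.fromhex(manifest["signature"]),
                    bytes.fromhex(manifest["sha256"]),
                )
            except InvalidSignature:
                return False  # signature does not match the vendor key
            return True

    Ed25519 is used in this sketch only because its small keys and signatures suit constrained devices; a group key established among the target devices would additionally protect the distribution channel itself.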

    Data Trading and Monetization: Challenges and Open Research Directions

    Traditional data monetization approaches face challenges related to data protection and logistics. In response, digital data marketplaces have emerged as intermediaries that simplify data transactions. Despite the growing establishment and acceptance of digital data marketplaces, significant challenges hinder efficient data trading. As a result, few companies can derive tangible value from their data, leading to missed opportunities in understanding customers, pricing decisions, and fraud prevention. In this paper, we explore both the technical and the organizational challenges affecting data monetization. Moreover, we identify areas in need of further research, aiming to expand the boundaries of current knowledge by emphasizing where research is currently limited or lacking. Comment: Paper accepted by the International Conference on Future Networks and Distributed Systems (ICFNDS 2023).

    FreeLib: A Self-Sustainable Peer-to-Peer Digital Library Framework for Evolving Communities

    The need for efficient solutions to the problem of disseminating and sharing data is growing. Digital libraries provide an efficient solution for disseminating and sharing large volumes of data with diverse sets of users. They enable the use of structured and well-defined metadata to provide quality search services. Most digital libraries built so far follow a centralized model. The centralized model is efficient; however, it has some inherent problems. It is not suitable when content contribution is highly distributed over a very large number of participants. It also requires organizational support to provide resources (hardware, software, and network bandwidth) and to manage processes for collecting, ingesting, curating, and maintaining the content. In this research, we develop an alternative, peer-to-peer digital library framework. The framework utilizes resources contributed by participating nodes to achieve self-sustainability. A second key contribution of this research is a significant enhancement of search performance through the novel concept of community evolution. As demonstrated in this thesis, bringing users who share similar interests together in a community significantly enhances search performance. Evolving users into communities is based on a simple analysis of user access patterns, carried out in a completely distributed manner, and the community evolution process is completely transparent to the user. In our framework, the community membership of each node evolves continuously. This allows users to move between communities as their interests shift between topics, enhancing search performance even when their interests change. It also gives our framework great flexibility, as it allows communities to dissolve and new communities to form and evolve over time to reflect the latest user interests. In addition to self-sustainability and performance enhancements, our framework has the potential to build extremely large collections even though every node maintains only a small collection of digital objects.
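    The community evolution mechanism is described above only in outline. As a minimal sketch of the underlying idea, the following Python fragment assigns a node to the community whose aggregate access profile is most similar to the node's own access pattern; the cosine similarity measure, the threshold, and all names are illustrative assumptions rather than the thesis's actual algorithm.

        from collections import Counter
        from math import sqrt

        def cosine(a: Counter, b: Counter) -> float:
            """Cosine similarity between two sparse access-count vectors."""
            dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
            norm_a = sqrt(sum(v * v for v in a.values()))
            norm_b = sqrt(sum(v * v for v in b.values()))
            return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

        def choose_community(node_accesses, community_profiles, threshold=0.3):
            """Join the most similar community, or stay unassigned below the threshold."""
            best, best_sim = None, threshold
            for name, profile in community_profiles.items():
                sim = cosine(node_accesses, profile)
                if sim > best_sim:
                    best, best_sim = name, sim
            return best

        # Example: a node that mostly retrieves physics objects drifts toward
        # the "physics" community as its access counts change.
        node = Counter({"physics": 12, "math": 3})
        profiles = {"physics": Counter({"physics": 90, "math": 10}),
                    "biology": Counter({"biology": 80, "chemistry": 20})}
        print(choose_community(node, profiles))  # -> "physics"

    Because each node only compares its own counts against community profiles, this kind of assignment can run in a fully distributed manner, which is the property the thesis emphasizes.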

    Ground Systems Development Environment (GSDE) interface requirements analysis: Operations scenarios

    This report is a preliminary assessment of the functional and data interface requirements for the link between the GSDE GS/SPF (Amdahl) and the Space Station Control Center (SSCC) and Space Station Training Facility (SSTF) Integration, Verification, and Test Environments (IVTEs). These interfaces will be involved in ground software development for both the control center and the simulation and training systems. Our understanding of the configuration management (CM) interface and the expected functional characteristics of the Amdahl-IVTE interface is described. A set of assumptions and questions that need to be considered and resolved in order to complete the interface functional and data requirements definitions is presented. A listing of information items defined to describe software configuration items in the GSDE CM system is included, along with listings of standard reports of CM information and of CM-related tools in the GSDE.

    Requirements of the SALTY project

    This document is the first external deliverable of the SALTY project (Self-Adaptive very Large disTributed sYstems), funded by the ANR under contract ANR-09-SEGI-012. It is the result of task 1.1 of Work Package (WP) 1: Requirements and Architecture. Its objective is to identify and collect requirements from the use cases that are going to be developed in WP 4 (Use cases and Validation). Based on the study and classification of the use cases, requirements against the envisaged framework are then determined and organized into features. These features will guide and control the advances in all work packages of the project. As a start, the features are classified and briefly described, and the related scenarios in the defined use cases are pinpointed. In the following tasks and deliverables, these features will facilitate design by assigning priorities to them and defining success criteria at a finer grain as the project progresses. This report, as the first external document, has no dependency on any other external document and serves as a reference for future external documents. As it has been built from the use case studies synthesized in two internal documents of the project, extracts from those two documents are made available as appendices (cf. appendices B and C).

    Acta Cybernetica: Volume 25, Number 2.


    Federated Query Processing over Heterogeneous Data Sources in a Semantic Data Lake

    Data provides the basis for emerging scientific and interdisciplinary data-centric applications with the potential of improving the quality of life for citizens. Big Data plays an important role in promoting both manufacturing and scientific development through industrial digitization and emerging interdisciplinary research. Open data initiatives have encouraged the publication of Big Data by exploiting the decentralized nature of the Web, allowing for the availability of heterogeneous data generated and maintained by autonomous data providers. Consequently, the growing volume of data consumed by different applications raises the need for effective data integration approaches able to process large volumes of data represented in different formats, schemas, and models, which may also include sensitive data, e.g., financial transactions, medical procedures, or personal data. Data Lakes are composed of heterogeneous data sources kept in their original format, which reduces the overhead of materialized data integration. Query processing over Data Lakes requires a semantic description of the data collected from the heterogeneous data sources; a Data Lake with such semantic annotations is referred to as a Semantic Data Lake. Transforming Big Data into actionable knowledge demands novel and scalable techniques not only for Big Data ingestion and curation into the Semantic Data Lake, but also for efficient large-scale semantic data integration, exploration, and discovery. Federated query processing techniques utilize source descriptions to find relevant data sources and to devise efficient execution plans that minimize total execution time and maximize the completeness of answers. Existing federated query processing engines employ a coarse-grained description model in which the semantics encoded in data sources are ignored. Such descriptions may lead to the erroneous selection of data sources for a query and to unnecessary retrieval of data, thus affecting the performance of the query processing engine. In this thesis, we address the problem of federated query processing over heterogeneous data sources in a Semantic Data Lake. First, we tackle the challenge of knowledge representation and propose a novel source description model, RDF Molecule Templates, that describes the knowledge available in a Semantic Data Lake. RDF Molecule Templates (RDF-MTs) describe data sources in terms of an abstract description of entities belonging to the same semantic concept. Then, we propose a technique for data source selection and query decomposition, the MULDER approach, and query planning and optimization techniques, Ontario, which exploit the characteristics of heterogeneous data sources described using RDF-MTs and provide uniform access to them. We then address the challenge of enforcing the privacy and access control requirements imposed by data providers. We introduce a privacy-aware federated query technique, BOUNCER, able to enforce privacy and access control regulations during query processing over data sources in a Semantic Data Lake. In particular, BOUNCER exploits RDF-MT-based source descriptions to express privacy and access control policies as well as to enforce them automatically during source selection, query decomposition, and planning. Furthermore, BOUNCER implements query decomposition and optimization techniques able to identify query plans over data sources that not only contain the entities relevant to answering a query, but are also governed by policies that allow these entities to be accessed. Finally, we tackle the problem of interest-based update propagation and co-evolution of data sources. We present a novel approach for interest-based RDF update propagation that consistently maintains full or partial replications of large datasets and deals with co-evolution.
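    The source selection step described above can be pictured with a small, purely illustrative Python sketch: simplified RDF-MT-style descriptions list, per semantic concept, the predicates its instances expose and the sources that contribute them, and a star-shaped subquery is routed to the sources that cover its predicates. The data structures and names below are assumptions for illustration, not the actual MULDER, Ontario, or BOUNCER implementations.

        from dataclasses import dataclass, field

        @dataclass
        class MoleculeTemplate:
            """Simplified RDF-MT-style description of one semantic concept."""
            rdf_class: str
            predicates: set = field(default_factory=set)
            sources: set = field(default_factory=set)

        def select_sources(star_predicates, templates):
            """Prefer sources whose described predicates fully cover the
            star-shaped subquery; otherwise fall back to partial matches."""
            full = {s for t in templates
                    if star_predicates <= t.predicates for s in t.sources}
            if full:
                return full
            return {s for t in templates
                    if star_predicates & t.predicates for s in t.sources}

        # Example with hypothetical concepts: a subquery asking for a drug's
        # name and target is routed only to the source describing ex:Drug.
        templates = [
            MoleculeTemplate("ex:Drug", {"ex:name", "ex:target"}, {"sourceA"}),
            MoleculeTemplate("ex:Gene", {"ex:name", "ex:locus"}, {"sourceB"}),
        ]
        print(select_sources({"ex:name", "ex:target"}, templates))  # -> {'sourceA'}

    A privacy-aware variant in the spirit of BOUNCER would additionally filter the selected sources against access control policies before query decomposition and planning.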