301 research outputs found

    Secure information sharing on Decentralized Social Networks.

    Decentralized Social Networks (DSNs) are web-based platforms built on distributed systems (federations) composed of multiple providers (pods) that run the same social networking service. DSNs have been presented as a valid alternative to Online Social Networks (OSNs), replacing the centralized paradigm of OSNs with a decentralized distribution of the features offered by the social networking platform. Similarly to commercial OSNs, DSNs offer their subscribed users a number of distinctive features, such as the possibility to share resources with other subscribed users or to establish virtual relationships with other DSN users. On the other hand, each DSN user takes part in the service, choosing either to store personal data on his/her own trusted provider inside the federation or to deploy his/her own provider on a private machine. This gives each DSN user direct control over his/her data and prevents the social network provider from performing data mining analysis over this information. Unfortunately, the deployment of a personal DSN pod is not as simple as it sounds. Indeed, each pod's owner has to maintain the security, integrity, and reliability of all the data stored in that provider. Furthermore, given the amount of data produced each day in a social network service, it is reasonable to assume that the majority of users cannot afford the upkeep of hardware capable of handling such an amount of information. As a result, it has been shown that most DSN users prefer to subscribe to an existing provider rather than setting up a new one, bringing about an indirect centralization of data that leads DSNs to suffer from the same issues as centralized social network services. To overcome this issue, in this thesis we have investigated the possibility for DSN providers to lean on modern cloud-based storage services so as to offer a cloud-based information sharing service. This has required dealing with many challenges.
As such, we have investigated the definition of cryptographic protocols enabling DSN users to securely store their resources in the public cloud, along with the definition of communication protocols ensuring that decryption keys are distributed only to authorized users, that is, users who satisfy at least one of the access control policies specified by the data owner according to the Relationship-based Access Control model (RelBAC) [20, 34]. In addition, it has emerged that DSN users have the same difficulties as OSN users in defining RelBAC rules that properly express their attitude towards their own privacy. Indeed, it is nowadays well accepted that the definition of access control policies is an error-prone task. Since misconfigured RelBAC policies may lead to harmful data release and may expose the privacy of others as well, we believe that DSN users should be assisted in the RelBAC policy definition process. To this purpose, we have designed a RelBAC policy recommendation system that learns from DSN users their own attitude towards privacy, and exploits the learned data to assist DSN users in the definition of RelBAC policies by suggesting customized privacy rules. Nevertheless, despite the presence of the above-mentioned policy recommender, it is reasonable to assume that misconfigured RelBAC rules may still appear in the system. However, rather than considering all misconfigured policies as leading to potentially harmful situations, we have considered that they might also lead to an exacerbated data restriction that brings a loss of utility to DSN users. As an example, assuming that a low-resolution and a high-resolution version of the same picture are uploaded to the network, we believe that the low-res version should be granted to all those users who are granted access to the hi-res version, even though, due to a misconfigured system, no policy explicitly authorizes them on the low-res picture.
As such, we have designed a technique capable of exploiting all the existing data dependencies (i.e., any correlation between data) as a means of increasing the system utility, that is, the number of queries that can be safely answered. We have then defined a query rewriting technique capable of extending defined access control policy authorizations by exploiting data dependencies, in order to authorize unauthorized but inferable data. In this thesis we present a complete description of the above-mentioned proposals, along with the experimental results of the tests that have been carried out to verify the feasibility of the presented techniques.
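The low-res/hi-res picture scenario can be sketched as a tiny dependency-aware authorization check. This is an illustrative toy, not the thesis's actual query rewriting protocol; the resource names and the `derives_from` relation are invented for the example:

```python
# Hypothetical sketch: extending access decisions via data dependencies.
# Explicit RelBAC-style authorizations: resource -> set of authorized users.
policy = {
    "photo_hires": {"alice", "bob"},
    "photo_lowres": set(),            # misconfigured: nobody explicitly authorized
}

# Data dependencies: resource -> resources it can be derived (inferred) from.
derives_from = {
    "photo_lowres": {"photo_hires"},  # a low-res copy is inferable from the hi-res one
}

def is_authorized(user, resource):
    """Grant access if an explicit policy allows it, or if the resource is
    derivable from some resource the user is already authorized on."""
    if user in policy.get(resource, set()):
        return True
    return any(is_authorized(user, src) for src in derives_from.get(resource, ()))

print(is_authorized("alice", "photo_lowres"))  # True: inferable from the hi-res photo
print(is_authorized("carol", "photo_lowres"))  # False: no explicit or inferable grant
```

Rewriting the query against the hi-res resource, rather than denying it outright, is what recovers the utility lost to the misconfigured low-res policy.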

    Hypergraph Partitioning in the Cloud

    The thesis investigates the partitioning and load balancing problem, which has many applications in High Performance Computing (HPC). The application to be partitioned is described with a graph or hypergraph. The latter is of greater interest since hypergraphs, compared to graphs, have a more general structure and can be used to model more complex relationships between groups of objects, such as non-symmetric dependencies. Optimal graph and hypergraph partitioning is known to be NP-hard, but good polynomial-time heuristic algorithms have been proposed. In this thesis, we propose two multi-level hypergraph partitioning algorithms based on rough set clustering techniques. The first, a serial algorithm, obtains high-quality partitionings and improves the partitioning cut by up to 71% compared to state-of-the-art serial hypergraph partitioning algorithms. However, the capacity of serial algorithms is limited due to the rapid growth in problem sizes of distributed applications. Consequently, we also propose a parallel hypergraph partitioning algorithm. Given the generality of the hypergraph model, designing a parallel algorithm is difficult, and the available parallel hypergraph algorithms offer less scalability than their graph counterparts. The issue is twofold: the parallel algorithm itself and the complexity of the hypergraph structure. Our parallel algorithm provides a trade-off between global and local vertex clustering decisions. By employing novel techniques and approaches, it achieves better scalability than the state-of-the-art parallel hypergraph partitioner in the Zoltan tool on a set of benchmarks, especially those with irregular structure. Furthermore, recent advances in cloud computing and the services it provides have led to a trend of moving HPC and large-scale distributed applications into the cloud.
Despite its advantages, some aspects of the cloud, such as limited network resources, present a challenge to running communication-intensive applications and make them non-scalable in the cloud. While hypergraph partitioning is proposed as a solution for decreasing the communication overhead within parallel distributed applications, it can also offer advantages for running these applications in the cloud. The partitioning is usually done as a pre-processing step before running the parallel application. As parallel hypergraph partitioning is itself a communication-intensive operation, running it in the cloud is hard and suffers from poor scalability. The thesis therefore also investigates the scalability of parallel hypergraph partitioning algorithms in the cloud and the challenges they face, and proposes solutions to improve the cost/performance ratio of running the partitioning problem in the cloud. Our algorithms are implemented as a new hypergraph partitioning package within Zoltan, an open-source, Linux-based toolkit for parallel partitioning, load balancing and data management designed at Sandia National Labs. The algorithms are known as the FEHG and PFEHG algorithms.
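As a rough illustration of the objective such partitioners minimize, the hyperedge cut of a toy hypergraph can be computed as follows. The data and the simple cut metric are purely illustrative; the thesis's FEHG/PFEHG algorithms use multi-level rough set clustering, which is not shown here:

```python
# Minimal sketch of the hyperedge-cut objective minimized by hypergraph
# partitioners. A hypergraph is a list of hyperedges (vertex sets); a
# partition maps each vertex to a part.

def cut_size(hyperedges, part):
    """Count the hyperedges spanning more than one part (the 'cut')."""
    cut = 0
    for edge in hyperedges:
        parts_touched = {part[v] for v in edge}
        if len(parts_touched) > 1:
            cut += 1
    return cut

# Hyperedges can connect more than two vertices, unlike ordinary graph edges.
hyperedges = [{0, 1, 2}, {2, 3}, {4, 5}, {1, 4}]
partition = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(cut_size(hyperedges, partition))  # 2: edges {2,3} and {1,4} are cut
```

Lower cut values correspond to lower communication volume between the parts of a distributed application, which is why the reported 71% cut improvement matters.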

    Resource allocation for fog computing based on software-defined networks

    With the emergence of cloud computing as a processing backbone for the Internet of Things (IoT), fog computing has been proposed as a solution for delay-sensitive applications. In fog computing, this is achieved by placing computing servers near the IoT devices. IoT networks are inherently very dynamic, and their topology and resources may change drastically in a short period. Thus, using the traditional networking paradigm to build their communication backbone may lower network performance and increase network configuration convergence latency. It therefore seems more beneficial to employ the software-defined networking paradigm to implement their communication network. In software-defined networking (SDN), separating the network's control and data forwarding planes makes it possible to manage the network in a centralized way. Managing a network using a centralized controller can make it more flexible and agile in response to any possible network topology and state changes. This paper presents a software-defined fog platform to host real-time applications in IoT. The effectiveness of the mechanism has been evaluated by conducting a series of simulations. The results show that the proposed mechanism is able to find near-optimal solutions in a much lower execution time compared to the brute-force method.
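The brute-force baseline the paper compares against can be sketched as exhaustive service-to-node placement, next to a greedy heuristic of the kind such mechanisms approximate. This is a generic illustration, not the paper's actual mechanism; the latency matrix and capacities are made up:

```python
# Illustrative sketch: placing delay-sensitive services onto fog nodes to
# minimize total latency, comparing brute force with a cheap greedy heuristic.
from itertools import product

latency = [   # latency[service][node], in ms (invented numbers)
    [5, 12, 9],
    [11, 4, 8],
    [7, 10, 3],
]
capacity = [1, 1, 1]  # each fog node hosts at most one service here

def brute_force(latency, capacity):
    """Try every placement; exponential in the number of services."""
    best, best_cost = None, float("inf")
    for placement in product(range(len(capacity)), repeat=len(latency)):
        load = [placement.count(n) for n in range(len(capacity))]
        if any(l > c for l, c in zip(load, capacity)):
            continue  # capacity violated
        cost = sum(latency[s][n] for s, n in enumerate(placement))
        if cost < best_cost:
            best, best_cost = placement, cost
    return best, best_cost

def greedy(latency, capacity):
    """Assign each service to its cheapest node with free capacity."""
    free, placement = list(capacity), []
    for row in latency:
        node = min((n for n in range(len(free)) if free[n] > 0),
                   key=lambda n: row[n])
        free[node] -= 1
        placement.append(node)
    return placement, sum(latency[s][n] for s, n in enumerate(placement))

print(brute_force(latency, capacity))  # ((0, 1, 2), 12)
print(greedy(latency, capacity))       # also cost 12 on this easy input
```

On larger, harder instances the greedy answer can deviate from the optimum; the paper's point is precisely this trade-off of near-optimal quality for much lower execution time.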

    Rough Set-hypergraph-based Feature Selection Approach for Intrusion Detection Systems

    Immense growth in network-based services has resulted in an upsurge of internet users, security threats and cyber-attacks. Intrusion detection systems (IDSs) have become an essential component of any network architecture, in order to secure an IT infrastructure from the malicious activities of intruders. An efficient IDS should be able to detect, identify and track the malicious attempts made by intruders. With many IDSs available in the literature, the most common challenge due to voluminous network traffic patterns is the curse of dimensionality. This scenario emphasizes the importance of a feature selection algorithm, which can identify the relevant features and ignore the rest without any information loss. In this paper, a novel rough set κ-Helly property technique (RSKHT) feature selection algorithm is proposed to identify the key features for network IDSs. Experiments carried out using the benchmark KDD Cup 1999 dataset were found to be promising when compared with existing feature selection algorithms with respect to reduct size, classifier performance and time complexity. RSKHT was found to be computationally attractive and flexible for massive datasets.
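The rough-set notion underlying such reducts can be illustrated with a greedy sketch: keep the smallest feature subset that still discerns the decision classes. This generic elimination loop is not the RSKHT algorithm (which additionally exploits the κ-Helly property and hypergraphs); the toy dataset is invented:

```python
# Hedged sketch of the reduct idea from rough set theory: drop features whose
# removal keeps rows of different classes distinguishable.

def discerns(rows, labels, features):
    """True if rows agreeing on all selected features share one label."""
    seen = {}
    for row, label in zip(rows, labels):
        key = tuple(row[f] for f in features)
        if seen.setdefault(key, label) != label:
            return False  # two classes collide on these features
    return True

def reduct(rows, labels):
    """Greedily drop features whose removal preserves discernibility."""
    kept = list(range(len(rows[0])))
    for f in list(kept):
        trial = [g for g in kept if g != f]
        if trial and discerns(rows, labels, trial):
            kept = trial
    return kept

rows = [(0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1)]
labels = ["a", "a", "b", "b"]
print(reduct(rows, labels))  # [1]: feature 1 alone discerns the two classes
```

A smaller reduct means fewer features fed to the IDS classifier, which is exactly the dimensionality reduction the paper evaluates.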

    Matching cloud services with TOSCA

    The OASIS TOSCA specification aims at enhancing the portability of cloud-based applications by defining a language to describe and manage service orchestrations across heterogeneous clouds. A service template is defined as an orchestration of typed nodes, which can be instantiated by matching other service templates. In this thesis, after defining the notion of exact matching between TOSCA service templates and node types, we define three other types of matching (plug-in, flexible and white-box), each of which permits ignoring larger sets of non-relevant syntactic differences when type-checking service templates with respect to node types. We also describe how service templates that plug-in, flexibly or white-box match node types can be suitably adapted so as to exactly match them.
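The intuition behind exact versus plug-in matching can be reduced to a comparison of feature-name sets. This drastically simplifies the actual TOSCA notions, which compare capabilities, requirements, properties and policies; the feature names below are invented:

```python
# Toy sketch of matching a service template against a node type, modelling
# each as the set of named features it exposes.

def exact_match(template, node_type):
    """Exact matching: the template exposes precisely the node type's features."""
    return template == node_type

def plugin_match(template, node_type):
    """Plug-in matching: the template offers at least what the type requires;
    extra, non-relevant features are ignored."""
    return node_type <= template

node_type = {"host", "endpoint"}
template = {"host", "endpoint", "monitoring"}  # one extra, non-relevant feature

print(exact_match(template, node_type))   # False
print(plugin_match(template, node_type))  # True
```

Adapting a plug-in-matching template to exactly match the node type then amounts to hiding the extra features, mirroring the adaptation step described in the thesis.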

    Supporting Application Requirements in Cloud-based IoT Information Processing

    IoT infrastructures can be seen as an interconnected network of sources of data, whose analysis and processing can be beneficial for our society. Since IoT devices are limited in storage and computation capabilities, relying on external cloud providers has recently been identified as a promising solution for storing and managing IoT data. Due to the heterogeneity of IoT data and application scenarios, the cloud service delivery should be driven by the requirements of the specific IoT applications. In this paper, we propose a novel approach for supporting application requirements (typically related to security, due to the inevitable concerns arising whenever data are stored and managed at external third parties) in cloud-based IoT data processing. Our solution allows a subject with authority over an IoT infrastructure to formulate conditions that the provider must satisfy in service provisioning, and computes an SLA based on these conditions while accounting for possible dependencies among them. We also illustrate a CSP-based formulation of the problem of computing an SLA, which can be solved by adopting off-the-shelf CSP solvers.
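The flavor of such a CSP formulation can be sketched as choosing which requested conditions the provider commits to, subject to dependencies between them. The tiny brute-force search below stands in for an off-the-shelf CSP solver, and the condition names are invented for illustration, not taken from the paper:

```python
# Hedged sketch: SLA computation as a constraint satisfaction problem over
# boolean "commit to this condition" variables.
from itertools import product

conditions = ["encrypt_at_rest", "eu_storage", "audit_log"]
# Dependency constraints: committing to the first requires the second.
depends = [("audit_log", "encrypt_at_rest")]
mandatory = {"eu_storage"}  # conditions the authority insists on

def solve(conditions, depends, mandatory):
    """Return the largest condition set satisfying all constraints."""
    best = None
    for bits in product([False, True], repeat=len(conditions)):
        sla = {c for c, b in zip(conditions, bits) if b}
        if not mandatory <= sla:
            continue  # a mandatory condition is missing
        if any(a in sla and b not in sla for a, b in depends):
            continue  # a dependency is violated
        if best is None or len(sla) > len(best):
            best = sla  # prefer the SLA committing to the most conditions
    return best

print(sorted(solve(conditions, depends, mandatory)))
# ['audit_log', 'encrypt_at_rest', 'eu_storage']
```

A real deployment would hand the same variables and constraints to a dedicated CSP solver rather than enumerating assignments, but the model is the same.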

    Reproducible geoscientific modelling with hypergraphs

    Reproducing the construction of a geoscientific model is a hard task. It requires the availability of all required data and an exact description of how the construction was performed. In practice, data availability and the exactness of the description are often lacking. As part of this thesis I introduce a conceptual framework describing how geoscientific model constructions can be represented as directed acyclic hypergraphs, how such recorded construction graphs can be used to reconstruct the model, and how repeated constructions can be used to verify the reproducibility of a geoscientific model construction process. In addition I present a software prototype implementing these concepts. The prototype is tested with three different case studies, including a geophysical measurement analysis, a subsurface model construction and the calculation of a hydrological balance model.

    Table of contents:
    1. Introduction
    1.1. Survey on Reproducibility and Automation for Geoscientific Model Construction
    1.2. Motivating Example
    1.3. Previous Work
    1.4. Problem Description
    1.5. Structure of this Thesis
    1.6. Results Accomplished by this Thesis
    2. Terms, Definitions and Requirements
    2.1. Terms and Definitions
    2.1.1. Geoscientific model
    2.1.2. Reproducibility
    2.1.3. Realisation
    2.2. Requirements
    3. Related Work
    3.1. Overview
    3.2. Geoscientific Data Storage Systems
    3.2.1. PostGIS and Similar Systems
    3.2.2. Geoscience in Space and Time (GST)
    3.3. Geoscientific Modelling Software
    3.3.1. gOcad
    3.3.2. GemPy
    3.4. Experimentation Management Software
    3.4.1. DataLad
    3.4.2. Data Version Control (DVC)
    3.5. Reproducible Software Builds
    3.6. Summarised Related Work
    4. Concept
    4.1. Construction Hypergraphs
    4.1.1. Reproducibility Based on Construction Hypergraphs
    4.1.2. Equality definitions
    4.1.3. Design Constraints
    4.2. Data Handling
    5. Design
    5.1. Application Structure
    5.1.1. Choice of Application Architecture for GeoHub
    5.2. Extension Mechanisms
    5.2.1. Overview
    5.2.2. A Shared Library Based Extension System
    5.2.3. Inter-Process Communication Based Extension System
    5.2.4. An Extension System Based on a Scripting Language
    5.2.5. An Extension System Based on a WebAssembly Interface
    5.2.6. Comparison
    5.3. Data Storage
    5.3.1. Overview
    5.3.2. Stored Data
    5.3.3. Potential Solutions
    5.3.4. Model Versioning
    5.3.5. Transactional security
    6. Implementation
    6.1. General Application Structure
    6.2. Data Storage
    6.2.1. Database
    6.2.2. User-provided Data-processing Extensions
    6.3. Operation Executor
    6.3.1. Construction Step Descriptions
    6.3.2. Construction Step Scheduling
    6.3.3. Construction Step Execution
    7. Case Studies
    7.1. Overview
    7.2. Geophysical Model of the BHMZ block
    7.2.1. Provided Data and Initial Situation
    7.2.2. Construction Process Description
    7.2.3. Reproducibility
    7.2.4. Identified Problems and Construction Process Improvements
    7.2.5. Recommendations
    7.3. Three-Dimensional Subsurface Model of the Kolhberg Region
    7.3.1. Provided Data and Initial Situation
    7.3.2. Construction Process Description
    7.3.3. Reproducibility
    7.3.4. Identified Problems and Construction Process Improvements
    7.3.5. Recommendations
    7.4. Hydrologic Balance Model of a Saxonian Stream
    7.4.1. Provided Data and Initial Situation
    7.4.2. Construction Process Description
    7.4.3. Reproducibility
    7.4.4. Identified Problems and Construction Process Improvements
    7.4.5. Recommendations
    7.5. Lessons Learned
    8. Conclusions
    8.1. Summary
    8.2. Outlook
    8.2.1. Parametric Model Construction Process
    8.2.2. Pull and Push Nodes
    8.2.3. Parallelize Single Construction Steps
    8.2.4. Provable Model Construction Process Attestation
    References
    Appendix