
    An approach to building a secure and persistent distributed object management system

    The Common Object Request Broker Architecture (CORBA), proposed by the Object Management Group (OMG), is a widely accepted standard that provides a system-level framework for the design and implementation of distributed objects. The core of the Object Management Architecture (OMA) is the Object Request Broker (ORB), which provides transparency of object location, activation, and communication. However, the specification provided by the OMG is not sufficient: for instance, there is no security specification for handling object requests through the ORBs. The lack of such a security service prevents CORBA from being used to handle sensitive data such as personal and corporate financial information. In view of the above, this thesis identifies, explores, and provides an approach to handling secure objects in a distributed environment, along with a persistent object service, using the CORBA specification. The research specifically involves the design and implementation of a secure distributed object service. This object service requires a persistent service and object storage for storing and retrieving security-specific information. To provide a secure distributed object environment, a secure object service using the specifications provided by the OMG has been designed and implemented. In addition, to preserve the persistence of secure information, an object service has been implemented to provide a persistent data store. The secure object service can provide a framework for handling distributed objects in applications requiring security clearance, such as distributed banking, online stock trading, internet shopping, and geographic and medical information systems.
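
    The thesis's implementation is not reproduced here, but a minimal sketch can make the idea concrete: a broker that checks a caller's clearance against persistently stored security attributes before dispatching a request. All names (ClearanceStore, SecureORB, the numeric clearance levels) are hypothetical illustrations, not the OMG Security Service API.

        import shelve

        class ClearanceStore:
            """Persistent store for per-object security attributes (illustrative)."""
            def __init__(self, path="clearance.db"):
                self._db = shelve.open(path)

            def required_level(self, object_id):
                return self._db.get(object_id, 0)

            def set_required_level(self, object_id, level):
                self._db[object_id] = level

        class SecureORB:
            """Toy request broker that gates method dispatch on security clearance."""
            def __init__(self, store):
                self._store = store
                self._objects = {}

            def register(self, object_id, obj, required_level):
                self._objects[object_id] = obj
                self._store.set_required_level(object_id, required_level)

            def invoke(self, caller_level, object_id, method, *args):
                # Reject the request before dispatch if clearance is insufficient.
                if caller_level < self._store.required_level(object_id):
                    raise PermissionError("insufficient clearance for " + object_id)
                return getattr(self._objects[object_id], method)(*args)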

    Towards the Temporal Streaming of Graph Data on Distributed Ledgers

    We present our work-in-progress on handling temporal RDF graph data using the Ethereum distributed ledger. The motivation for this work is scenarios in which multiple distributed consumers of streamed data may need or wish to verify that the data has not been tampered with since it was generated; for example, if the data describes something which can be or has been sold, such as domestically generated electricity. We describe a system in which temporal annotations, and information suitable to validate a given dataset, are stored on a distributed ledger, alongside the results of fixed SPARQL queries executed at the time of data storage. The model adopted implements a graph-based form of temporal RDF, in which time intervals are represented by named graphs corresponding to ledger entries. We conclude by discussing evaluation, what remains to be implemented, and future directions.
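
    A rough sketch of the described model, assuming rdflib and a mocked ledger (the actual system writes to Ethereum, and the fixed SPARQL query shown is an invented placeholder): each time interval becomes a named graph, and the ledger entry records the graph name, a digest for later tamper checking, and the query result computed at storage time.

        import hashlib
        from rdflib import ConjunctiveGraph, URIRef, Namespace  # rdflib >= 6

        EX = Namespace("http://example.org/interval/")

        def store_interval(dataset: ConjunctiveGraph, ledger: list, interval_id: str, triples):
            # One named graph per time interval; its name corresponds to a ledger entry.
            graph = dataset.get_context(URIRef(EX[interval_id]))
            for s, p, o in triples:
                graph.add((s, p, o))
            # A fixed SPARQL query executed at the time of storage (placeholder query).
            rows = dataset.query(
                "SELECT (COUNT(*) AS ?n) WHERE { GRAPH <%s> { ?s ?p ?o } }" % EX[interval_id])
            # Digest of the serialized graph lets consumers check later copies against
            # the ledger; a real system would canonicalize the graph before hashing.
            digest = hashlib.sha256(graph.serialize(format="nt").encode()).hexdigest()
            ledger.append({"graph": str(EX[interval_id]),
                           "sha256": digest,
                           "count_at_storage": [int(row.n) for row in rows]})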

    RoBuSt: A Crash-Failure-Resistant Distributed Storage System

    In this work we present the first distributed storage system that is provably robust against crash failures issued by an adaptive adversary, i.e., for each batch of requests the adversary can decide, based on the entire system state, which servers will be unavailable for that batch. Despite up to γ n^{1/log log n} crashed servers, with γ > 0 constant and n denoting the number of servers, our system can correctly process any batch of lookup and write requests (with at most a polylogarithmic number of requests issued at each non-crashed server) in at most a polylogarithmic number of communication rounds, with at most polylogarithmic time and work at each server and only a logarithmic storage overhead. Our system is based on previous work by Eikel and Scheideler (SPAA 2013), who presented IRIS, a distributed information system that is provably robust against the same kind of crash failures. However, IRIS is only able to serve lookup requests; handling both lookup and write requests has turned out to require major changes in the design of IRIS.
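
    The actual construction is considerably more involved, but a toy sketch can illustrate the request model the system must serve: each key maps to several independent hash positions, and a batch of lookup and write requests is processed using whichever of those positions are not crashed. Plain replication as below tolerates far fewer crashes than the paper's γ n^{1/log log n} bound; it only makes the interface concrete, and all names are invented.

        import hashlib

        def positions(key, n, copies=3):
            # Several independent hash positions per key (toy placement scheme).
            return [int(hashlib.sha256(f"{key}:{i}".encode()).hexdigest(), 16) % n
                    for i in range(copies)]

        def write(stores, crashed, key, value, n):
            live = [p for p in positions(key, n) if p not in crashed]
            if not live:
                raise RuntimeError("all replicas for this key are crashed in this batch")
            for p in live:
                stores[p][key] = value

        def lookup(stores, crashed, key, n):
            for p in positions(key, n):
                if p not in crashed and key in stores[p]:
                    return stores[p][key]
            raise KeyError(key)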

    Distributed human computation framework for linked data co-reference resolution

    Distributed Human Computation (DHC) is a technique used to solve computational problems by incorporating the collaborative effort of a large number of humans. It is also a solution to AI-complete problems such as natural language processing. The Semantic Web, with its roots in AI, is envisioned to be a decentralised, world-wide information space for sharing machine-readable data with minimal integration costs. Many research problems in the Semantic Web are considered AI-complete. An example is co-reference resolution, which involves determining whether different URIs refer to the same entity; this is considered a significant hurdle to overcome in the realisation of large-scale Semantic Web applications. In this paper, we propose a framework for building a DHC system on top of the Linked Data Cloud to solve various computational problems. To demonstrate the concept, we focus on co-reference resolution in the Semantic Web when integrating distributed datasets. The traditional way to solve this problem is to design machine-learning algorithms, but these are often computationally expensive, error-prone, and do not scale. We designed a DHC system named iamResearcher, which solves the scientific-publication author-identity co-reference problem when integrating distributed bibliographic datasets. In our system, we aggregated 6 million bibliographic records from various publication repositories. Users can sign up to the system to audit and align their own publications, thus solving the co-reference problem in a distributed manner. The aggregated results are published to the Linked Data Cloud.
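
    The abstract does not detail how the distributed human judgements are combined; as an illustrative assumption, a thresholded majority vote over users' judgements could turn that input into accepted co-reference pairs, ready to be published as owl:sameAs links.

        from collections import Counter

        def aggregate_coreference(votes, threshold=0.7, min_votes=3):
            """votes: iterable of (uri_a, uri_b, is_same) judgements from users.
            Returns URI pairs accepted as co-referent (hypothetical aggregation rule)."""
            tally = {}
            for a, b, is_same in votes:
                pair = tuple(sorted((a, b)))
                tally.setdefault(pair, Counter())[is_same] += 1
            accepted = []
            for pair, counts in tally.items():
                total = counts[True] + counts[False]
                if total >= min_votes and counts[True] / total >= threshold:
                    accepted.append(pair)  # publish as <a> owl:sameAs <b>
            return accepted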

    A Globally Distributed System for Job, Data, and Information Handling for High Energy Physics


    Hadoop-based File Monitoring System for Processing Image Data

    This paper presents a file monitoring system based on the Hadoop framework, specifically designed for image data processing. The system comprises a Hadoop cluster and a client, where the Hadoop cluster includes a name node module, a name node agent module, data node modules, a matching module, and a response algorithm module. The name node agent module acts as an intermediary between the client and the name node module, forwarding function information and acquiring configuration information. The system provides comprehensive monitoring of the distributed file system, enabling requests and messages to be handled in real time.
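
    A minimal sketch of the agent's role as described (the method names and caching behaviour are assumptions, not the system's actual interface): it forwards client calls to the name node and serves configuration information from a local cache.

        class NameNodeAgent:
            """Intermediary between clients and the name node (illustrative)."""
            def __init__(self, name_node):
                self._name_node = name_node
                self._config_cache = {}

            def forward(self, function_name, *args):
                # Forward function information from the client to the name node.
                return getattr(self._name_node, function_name)(*args)

            def get_config(self, key):
                # Acquire configuration once, then answer repeat requests locally.
                if key not in self._config_cache:
                    self._config_cache[key] = self._name_node.get_config(key)
                return self._config_cache[key]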

    On Minimizing Data-read and Download for Storage-Node Recovery

    We consider the problem of efficient recovery of the data stored in any individual node of a distributed storage system, from the rest of the nodes. Applications include handling failures and degraded reads. We measure efficiency in terms of the amount of data read and the amount downloaded. To minimize the download, we focus on the minimum-bandwidth setting of the 'regenerating codes' model for distributed storage. Under this model, the system has a total of n nodes, and the data stored in any node must be (efficiently) recoverable from any d of the other (n-1) nodes. Lower bounds on the two metrics under this model were derived previously; it has also been shown that these bounds are achievable for both the amount of data read and the download when d = n-1, and for the amount of download alone when d < n-1. In this paper, we complete this picture by proving the converse result: when d < n-1, these lower bounds are strictly loose with respect to the amount of read required. The proof is information-theoretic, and hence applies to non-linear codes as well. We also show that under two (practical) relaxations of the problem setting, these lower bounds can be met for both read and download simultaneously.
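
    For concreteness, the minimum-bandwidth point of the regenerating-codes model fixes the quantities the bounds refer to: with each of d helper nodes contributing β units, a repair downloads dβ units in total, per-node storage is α = dβ, and the stored file has size B = (kd - k(k-1)/2)β. The helper below evaluates these standard formulas from the regenerating-codes literature; it is not code from the paper.

        def mbr_params(n, k, d, beta=1):
            """Parameters at the minimum-bandwidth regenerating (MBR) point."""
            assert k <= d <= n - 1
            alpha = d * beta                       # storage per node
            download = d * beta                    # total download for one node repair
            B = (k * d - k * (k - 1) // 2) * beta  # file size stored across the system
            return alpha, download, B

        # Example: n=5 nodes, any d=4 helpers repair a node, data recoverable from k=3.
        # mbr_params(5, 3, 4) -> (4, 4, 9): a repair downloads 4 units to restore a
        # node holding 4 units, for a stored file of size 9.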

    Web-based Information Systems: Data Monitoring, Analysis and Reporting for Measurements and Simulations

    This paper describes the concept, implementation and application of the Web-based Information System 'Turtle' for data monitoring, analysis, reporting and management in engineering projects. The system uses a generalised object-oriented approach for information modelling of physical state variables from measurements and simulations by sets of tensor objects and is implemented platform-independently as a Web application. This leads to a more flexible handling of measurement and simulation information in distributed and interdisciplinary engineering projects based on the concept of information sharing. The potential and advantages of Web-based information systems like 'Turtle' are described for one selected application example: a measurement programme dealing with the physical limnology of Lake Constance.
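
    A brief sketch of the modelling idea (field names and schema are assumptions, not 'Turtle''s actual object model): each physical state variable, whether measured or simulated, is wrapped as a tensor object carrying its metadata, so it can be exchanged uniformly over the Web application.

        from dataclasses import dataclass, field
        from typing import List, Tuple

        @dataclass
        class TensorVariable:
            """A physical state variable modelled as a tensor object (illustrative)."""
            name: str                  # e.g. "water_temperature"
            unit: str                  # e.g. "degC"
            timestamp: str             # ISO 8601 acquisition or simulation time
            shape: Tuple[int, ...]     # tensor extents, e.g. (depth_levels,)
            values: List[float] = field(default_factory=list)

            def to_record(self):
                # Flat record suitable for exchange between project partners.
                return {"name": self.name, "unit": self.unit, "time": self.timestamp,
                        "shape": list(self.shape), "values": self.values}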

    The Data Breach Dilemma: Proactive Solutions for Protecting Consumers' Personal Information

    Data breaches are an increasingly common part of consumers' lives. No institution is immune to the possibility of an attack. Each breach inevitably risks the release of consumers' personally identifiable information and the strong possibility of identity theft. Unfortunately, current solutions for handling these incidents are woefully inadequate. Private litigation such as consumer class actions and shareholder lawsuits faces substantive legal and procedural barriers. States have their own data security and breach notification laws, but there is currently no unifying piece of legislation or strong enforcement mechanism. This Note argues that proactive solutions are required. First, a national data security law, one that sets minimum data security standards, regulates the use and storage of personal information, and expands the enforcement role of the Federal Trade Commission, is imperative to protect consumers' data. Second, a proactive solution requires reconsidering how to minimize the problem by going to its source: the collection of personally identifiable information in the first place. This Note suggests regulating companies' collection of Social Security numbers and, eventually, using a system based on distributed ledger technology to replace the ubiquity of Social Security numbers.