626 research outputs found

    A Fully Data On-chain Solution for Tracking Provenance of Handcrafted Jewellery

    Blockchain and smart contracts have been widely adopted in a number of business domains, recent ones including medical records management, diamond tracking and many more. Blockchain is used because it enhances trust through transparency, and data stored on the blockchain is resistant to tampering. Based on this premise, this paper aims to build an application that uses blockchain to track the supply chain of handcrafted jewellery. Tracking the supply chain of a product involves storing complex data at each and every stage of production, and therefore may require databases that can store complex data structures in order to capture all the details. However, most blockchain platforms can only store data in key-value databases, where data can only be saved under a key and data operations such as aggregation, which are of great importance when making business decisions, are impossible. Hyperledger Fabric is an enterprise blockchain that can be extended from a key-value database to CouchDB, a NoSQL database with the ability to support complex queries. In this paper we investigate the capability of Hyperledger Fabric's database by storing the provenance data for handcrafted jewellery on the blockchain (on-chain). We present the case of Soko, a company that sells handcrafted products and wants to ensure transparency in the supply chain of its products. Finally, we conclude by discussing our findings and comparing our solution with the previous solution, which used both a conventional database and a blockchain to store the provenance data.
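    The abstract above does not include the chaincode itself, so the following is only a minimal sketch of the pattern it describes: Go chaincode built with the fabric-contract-api-go library, storing each production-stage record as a JSON document so that, with CouchDB as the state database, it can be retrieved with a rich selector query. The asset fields, function names and the "artisan" selector are illustrative assumptions, not Soko's actual schema.

        package main

        import (
            "encoding/json"
            "fmt"

            "github.com/hyperledger/fabric-contract-api-go/contractapi"
        )

        // ProvenanceContract records and queries production-stage data for jewellery items.
        type ProvenanceContract struct {
            contractapi.Contract
        }

        // StageRecord is a hypothetical shape for one step in the supply chain.
        type StageRecord struct {
            DocType  string `json:"docType"`
            ID       string `json:"id"`
            Artisan  string `json:"artisan"`
            Stage    string `json:"stage"`
            Material string `json:"material"`
        }

        // RecordStage stores one production step as a JSON document in the world state.
        func (c *ProvenanceContract) RecordStage(ctx contractapi.TransactionContextInterface, id, artisan, stage, material string) error {
            rec := StageRecord{DocType: "stage", ID: id, Artisan: artisan, Stage: stage, Material: material}
            raw, err := json.Marshal(rec)
            if err != nil {
                return err
            }
            return ctx.GetStub().PutState(id, raw)
        }

        // QueryByArtisan runs a CouchDB rich query; this is the capability a plain
        // key-value state database does not offer.
        func (c *ProvenanceContract) QueryByArtisan(ctx contractapi.TransactionContextInterface, artisan string) ([]*StageRecord, error) {
            selector := fmt.Sprintf(`{"selector":{"docType":"stage","artisan":"%s"}}`, artisan)
            iter, err := ctx.GetStub().GetQueryResult(selector)
            if err != nil {
                return nil, err
            }
            defer iter.Close()

            var results []*StageRecord
            for iter.HasNext() {
                kv, err := iter.Next()
                if err != nil {
                    return nil, err
                }
                var rec StageRecord
                if err := json.Unmarshal(kv.Value, &rec); err != nil {
                    return nil, err
                }
                results = append(results, &rec)
            }
            return results, nil
        }

        func main() {
            cc, err := contractapi.NewChaincode(&ProvenanceContract{})
            if err != nil {
                panic(err)
            }
            if err := cc.Start(); err != nil {
                panic(err)
            }
        }

    Note that GetQueryResult is only supported when CouchDB is configured as the state database; on the default LevelDB backend such selector queries are not available.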

    A Brief Study of Open Source Graph Databases

    With the proliferation of large irregular sparse relational datasets, new storage and analysis platforms have arisen to fill gaps in performance and capability left by conventional approaches built on traditional database technologies and query languages. Many of these platforms apply graph structures and analysis techniques to enable users to ingest, update, query and compute on the topological structure of these relationships represented as set(s) of edges between set(s) of vertices. To store and process Facebook-scale datasets, they must be able to support data sources with billions of edges, update rates of millions of updates per second, and complex analysis kernels. These platforms must provide intuitive interfaces that enable graph experts and novice programmers to write implementations of common graph algorithms. In this paper, we explore a variety of graph analysis and storage platforms. We compare their capabilities, interfaces, and performance by implementing and computing a set of real-world graph algorithms on synthetic graphs with up to 256 million edges. In the spirit of full disclosure, several authors are affiliated with the development of STINGER.
    Comment: WSSSPE13, 4 pages, 18 pages with appendix, 25 figures
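    The abstract names the workload but not the kernels themselves; as a rough illustration of the kind of algorithm such comparisons implement over an edge list, here is a small, self-contained connected-components sketch in Go using union-find. The toy graph is an assumption for illustration; the synthetic graphs in the paper would simply be a much larger edge list.

        package main

        import "fmt"

        // find returns the root of x in the union-find forest, compressing the path as it goes.
        func find(parent []int, x int) int {
            for parent[x] != x {
                parent[x] = parent[parent[x]] // path halving
                x = parent[x]
            }
            return x
        }

        // union merges the components containing a and b.
        func union(parent []int, a, b int) {
            ra, rb := find(parent, a), find(parent, b)
            if ra != rb {
                parent[ra] = rb
            }
        }

        func main() {
            // A toy edge list standing in for a synthetic graph.
            edges := [][2]int{{0, 1}, {1, 2}, {3, 4}}
            numVertices := 5

            parent := make([]int, numVertices)
            for i := range parent {
                parent[i] = i
            }
            for _, e := range edges {
                union(parent, e[0], e[1])
            }

            // Count distinct component roots.
            roots := make(map[int]bool)
            for v := 0; v < numVertices; v++ {
                roots[find(parent, v)] = true
            }
            fmt.Println("connected components:", len(roots)) // prints 2
        }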

    Big Data Management Challenges, Approaches, Tools and their limitations

    Big Data is the buzzword everyone talks about. Independently of the application domain, today there is a consensus about the V's characterizing Big Data: Volume, Variety, and Velocity. By focusing on data management issues and past experiences in the area of database systems, this chapter examines the main challenges involved in the three V's of Big Data. It then reviews the main characteristics of existing solutions for addressing each of the V's (e.g., NoSQL, parallel RDBMS, stream data management systems and complex event processing systems). Finally, it provides a classification of the different functions offered by NewSQL systems and discusses their benefits and limitations for processing Big Data.

    Creating a Relational Distributed Object Store

    In and of itself, data storage has apparent business utility. But when we can convert data to information, the utility of stored data increases dramatically. It is the layering of relation atop the data mass that is the engine for such conversion. Frank relation amongst discrete objects sporadically ingested is rare, making the process of synthesizing such relation all the more challenging, but the challenge must be met if we are ever to see an equivalent business value for unstructured data as we already have with structured data. This paper describes a novel construct, referred to as a relational distributed object store (RDOS), that seeks to solve the twin problems of how to persistently and reliably store petabytes of unstructured data while simultaneously creating and persisting relations amongst billions of objects.
    Comment: 12 pages, 5 figures

    Using Blockchain to support Data & Service Monetization

    Two required features of a data monetization platform are query and retrieval of the metadata of the resources to be monetized. Centralized platforms rely on the maturity of traditional NoSQL database systems to support these features. These databases, for example MongoDB, allow for very efficient query and retrieval of the data they store. However, centralized platforms come with a host of security and privacy concerns, making them not the ideal approach for a data monetization platform. On the other hand, most existing decentralized platforms are only partially decentralized. In this research, I developed Cowry, a platform for publishing metadata describing available resources (data or services) and for discovering published metadata, including fast search and filtering. My main contribution is a fully decentralized architecture that combines a blockchain and a traditional distributed database to gain additional features such as efficient query and retrieval of the metadata stored on the blockchain.
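    The abstract describes the architecture only at a high level, so the following Go sketch is just one plausible shape for the hybrid it mentions: the blockchain holds the authoritative copy of each metadata record, a distributed database mirrors it for fast search and filtering, and a record read back from the index can be re-checked against the chain. The interfaces, field names and verification scheme are assumptions for illustration, not Cowry's actual API.

        // Package cowrysketch illustrates a blockchain-plus-index layout for resource metadata.
        package cowrysketch

        import (
            "crypto/sha256"
            "encoding/json"
        )

        // ResourceMetadata is a hypothetical record describing a monetizable resource.
        type ResourceMetadata struct {
            ID          string `json:"id"`
            Name        string `json:"name"`
            Description string `json:"description"`
            Provider    string `json:"provider"`
        }

        // Chain stands in for the blockchain layer: tamper-resistant, authoritative storage.
        type Chain interface {
            Put(key string, value []byte) error
            Get(key string) ([]byte, error)
        }

        // Index stands in for the distributed database layer: fast search and filtering.
        type Index interface {
            Add(id string, doc ResourceMetadata) error
            Search(query string) ([]ResourceMetadata, error)
        }

        // Publish stores the metadata on chain and mirrors it into the query index.
        func Publish(c Chain, ix Index, m ResourceMetadata) error {
            raw, err := json.Marshal(m)
            if err != nil {
                return err
            }
            if err := c.Put(m.ID, raw); err != nil {
                return err
            }
            return ix.Add(m.ID, m)
        }

        // Verify re-reads a search hit from the chain and checks that the indexed copy
        // has not diverged from the ledger copy.
        func Verify(c Chain, m ResourceMetadata) (bool, error) {
            onChain, err := c.Get(m.ID)
            if err != nil {
                return false, err
            }
            indexed, err := json.Marshal(m)
            if err != nil {
                return false, err
            }
            return sha256.Sum256(onChain) == sha256.Sum256(indexed), nil
        }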

    Big Data Security (Volume 3)

    After a short description of the key concepts of big data, the book explores the secrecy and security threats posed especially by cloud-based data storage. It delivers conceptual frameworks and models along with case studies of recent technology.

    Access control technologies for Big Data management systems: literature review and future trends

    Data security and privacy issues are magnified by the volume, the variety, and the velocity of Big Data, and by the lack, up to now, of a reference data model and related data manipulation languages. In this paper, we focus on one of the key data security services, that is, access control, by highlighting the differences with traditional data management systems and describing a set of requirements that any access control solution for Big Data platforms may fulfill. We then describe the state of the art and discuss open research issues.
