82 research outputs found

    Blockchain Application - Case Study on Hyperledger Fabric

    To enable a software platform to be used without a trusted third party, one possibility is to use blockchain technology and smart contracts. One of the latest such platforms is the open-source Hyperledger Fabric, a modular system that uses general-purpose programming languages for its smart contracts, which opens up vast possibilities for using it in product-centric enterprise systems. In this paper we compare the platform to conventional solutions and study the challenges presented by the new blockchain-based system and its smart contracts, called chaincode. We implement a parking spot application for a multi-sided market using a smart contract written in the Go programming language. The result is a working prototype, with solutions to the technical problems encountered, that covers the predetermined use cases.
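    The chaincode itself is not reproduced in the abstract, but the idea can be sketched in plain Go. The sketch below is illustrative only: it models the world state as an in-memory map instead of Fabric's `GetState`/`PutState` stub calls, and the asset schema and function names are assumptions, not the thesis's actual code.

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// ParkingSpot models the asset stored on the ledger (illustrative schema).
type ParkingSpot struct {
	ID       string `json:"id"`
	Owner    string `json:"owner"`
	Reserved bool   `json:"reserved"`
}

// Ledger stands in for Fabric's world state; real chaincode would call
// stub.GetState / stub.PutState via the contract API instead.
type Ledger map[string][]byte

// RegisterSpot writes a new parking spot to the ledger.
func RegisterSpot(l Ledger, id, owner string) error {
	if _, ok := l[id]; ok {
		return fmt.Errorf("spot %s already exists", id)
	}
	b, err := json.Marshal(ParkingSpot{ID: id, Owner: owner})
	if err != nil {
		return err
	}
	l[id] = b
	return nil
}

// ReserveSpot marks a spot reserved if it is currently free.
func ReserveSpot(l Ledger, id string) error {
	b, ok := l[id]
	if !ok {
		return errors.New("spot not found")
	}
	var s ParkingSpot
	if err := json.Unmarshal(b, &s); err != nil {
		return err
	}
	if s.Reserved {
		return errors.New("spot already reserved")
	}
	s.Reserved = true
	l[id], _ = json.Marshal(s)
	return nil
}

func main() {
	ledger := Ledger{}
	if err := RegisterSpot(ledger, "A1", "alice"); err != nil {
		panic(err)
	}
	if err := ReserveSpot(ledger, "A1"); err != nil {
		panic(err)
	}
	fmt.Println("spot A1 reserved")
}
```

    In real Fabric chaincode these functions would be methods on a contract type and the JSON-marshalled asset would be persisted through the transaction stub rather than a map.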

    Adaptive Big Data Pipeline

    Over the past three decades, data has evolved exponentially from being a simple software by-product to one of a company’s most important assets, used to understand customers and foresee trends. Deep learning has demonstrated that big volumes of clean data generally provide more flexibility and accuracy when modeling a phenomenon. However, handling ever-increasing data volumes entails new challenges: the lack of expertise to select the appropriate big data tools for the processing pipelines, as well as the speed at which engineers can reliably take such pipelines into production, leveraging the cloud. We introduce a system called Adaptive Big Data Pipelines: a platform to automate data pipeline creation. It provides an interface to capture the data sources, transformations, destinations and execution schedule. The system builds up the cloud infrastructure, schedules and fine-tunes the transformations, and creates the data lineage graph. The system has been tested on data sets of 50 gigabytes, processing them in just a few minutes without user intervention.
    ITESO, A.C.
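    The interface for capturing sources, transformations, destinations and a schedule could look roughly like the following Go sketch. All type and field names here are hypothetical; the actual system's API is not described in the abstract.

```go
package main

import (
	"fmt"
	"strings"
)

// PipelineSpec captures the four inputs the platform asks for
// (sources, transformations, destinations, schedule).
type PipelineSpec struct {
	Source      []string                  // input records
	Transforms  []func([]string) []string // ordered transformation steps
	Destination *[]string                 // where results are written
	Schedule    string                    // e.g. a cron expression; unused in this sketch
}

// Run applies the transformations in order and writes the result
// to the destination.
func (p PipelineSpec) Run() {
	data := p.Source
	for _, t := range p.Transforms {
		data = t(data)
	}
	*p.Destination = data
}

func main() {
	var out []string
	p := PipelineSpec{
		Source: []string{" alice ", " bob "},
		Transforms: []func([]string) []string{
			func(rs []string) []string { // trim whitespace
				for i, r := range rs {
					rs[i] = strings.TrimSpace(r)
				}
				return rs
			},
			func(rs []string) []string { // uppercase
				for i, r := range rs {
					rs[i] = strings.ToUpper(r)
				}
				return rs
			},
		},
		Destination: &out,
		Schedule:    "0 * * * *",
	}
	p.Run()
	fmt.Println(out) // [ALICE BOB]
}
```

    A real implementation would additionally provision cloud resources and record lineage for each transformation, which this sketch omits.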

    Narrative Variation: Visualizing Media Narratives (Narratiivin variaatio: mediakertomusten visualisointi)

    The media plays an increasingly large role in shaping social reality, and even small shifts in its narrative content or tone can have widespread repercussions in the public’s perception of past and present phenomena. Being able to track changes in media coverage over time, particularly visually, could have many conceivable applications and offers the potential for aiding social change in journalism. This case study explores how data visualization could be used to examine differences in media narrative patterns over time and across publications. The findings indicate that while there are many existing means of visualizing patterns in such narrative data on a timeline axis, few if any address the aspect of co-occurrence of variables. Comparing co-occurrence chronologically, particularly when applied to word and topic choices in media coverage, can shed more light on currents in public opinion than simply counting the occurrence of terms independently. Furthermore, the findings suggest that visualizing such patterns in this case could best be accomplished using a form of set visualization, specifically a simplified vertical version of linear diagrams repeated horizontally across parallel timeline axes. This case study also outlines the methods, ethical considerations, and examples of employing such a visualization prototype using a sample dataset of full-text news articles.

    Big Data

    This thesis aims to analyze the big data market; it covers the providers along with some interesting use cases. Nowadays, the term big data draws a lot of attention from both a business and a personal perspective. For decades, companies have been making business decisions through their Business Intelligence departments, based on transactional data that was essentially stored in relational databases. However, regulatory compliance, increased competition, and other pressures have created an insatiable need for companies to accumulate and analyze large, fast-growing quantities of data beyond that critical transactional data.

    Big Data Now, 2015 Edition

    Now in its fifth year, O’Reilly’s annual Big Data Now report recaps the trends, tools, applications, and forecasts we’ve talked about over the past year. For 2015, we’ve included a collection of blog posts, authored by leading thinkers and experts in the field, that reflect a unique set of themes we’ve identified as gaining significant attention and traction. Our list of 2015 topics includes: data-driven cultures; data science; data pipelines; big data architecture and infrastructure; the Internet of Things and real time; applications of big data; and security, ethics, and governance. Is your organization on the right track? Get a hold of this free report now and stay in tune with the latest significant developments in big data.

    Customized Interfaces for Modern Storage Devices

    In the past decade, we have seen two major evolutions in storage technologies: flash storage and non-volatile memory. These storage technologies are vastly different, in both their properties and their implementations, from the disk-based storage devices that current software stacks and applications have been built for and optimized over several decades. The second major trend the industry has been witnessing is new classes of applications that are moving away from conventional ACID (SQL) database access to storage. The resulting new class of NoSQL and in-memory storage applications consumes storage through entirely new application programmer interfaces compared to its predecessors. The most significant outcome of these trends is a great mismatch, in terms of both application access interfaces and storage stack implementations, when consuming these new technologies. In this work, we study the unique, intrinsic properties of current and next-generation storage technologies and propose new interfaces that allow application developers to get the most out of these technologies without having to become storage experts themselves. We first build a new type of NoSQL key-value (KV) store that is FTL-aware rather than flash-optimized. Our novel FTL-cooperative design for the KV store proved to simplify development and outperformed state-of-the-art KV stores while reducing write amplification. Next, to address the growing relevance of byte-addressable persistent memory, we build a new type of KV store that is customized and optimized for persistent memory. The resulting KV store illustrates how to program persistent memory effectively while exposing a simpler interface and performing better than more general solutions. As the final component of the thesis, we build a generic, native storage solution for byte-addressable persistent memory. This new solution provides the most generic interface to applications, allowing them to store and manipulate arbitrarily structured data with strong durability and consistency properties. With this new solution, existing applications as well as new “green field” applications will experience native performance and interfaces customized for the next storage technology evolution.
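    The contrast with SQL access that the abstract describes comes down to a minimal put/get interface. A hedged sketch in Go, with an in-memory map standing in for a flash- or persistent-memory-backed engine (the type and method names are illustrative, not the thesis's API):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// KVStore sketches the minimal interface that NoSQL storage applications
// consume, in contrast to SQL/ACID access. The in-memory map is a stand-in
// for an FTL-aware or persistent-memory-backed engine.
type KVStore struct {
	mu   sync.RWMutex
	data map[string][]byte
}

func NewKVStore() *KVStore {
	return &KVStore{data: make(map[string][]byte)}
}

// Put stores a copy of value under key.
func (s *KVStore) Put(key string, value []byte) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[key] = append([]byte(nil), value...) // copy: caller may reuse its buffer
}

// Get returns the value for key, or an error if it is absent.
func (s *KVStore) Get(key string) ([]byte, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.data[key]
	if !ok {
		return nil, errors.New("key not found")
	}
	return v, nil
}

func main() {
	kv := NewKVStore()
	kv.Put("user:1", []byte("alice"))
	v, _ := kv.Get("user:1")
	fmt.Println(string(v)) // alice
}
```

    The thesis's contribution lies beneath this interface: mapping such puts and gets onto the FTL or onto byte-addressable persistent memory with durability guarantees, which this sketch does not attempt.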

    Multi-tenant hybrid cloud architecture

    This paper examines the challenges associated with the multi-tenant hybrid cloud architecture and describes how this architectural approach was applied in two software development projects. The motivation for using this architectural approach is to allow developing new features on top of monolithic legacy systems – which are still in production use – but without using legacy technologies. The architectural approach considers these legacy systems as master systems that can be extended with multi-tenant cloud-based add-on applications. In general, legacy systems run in customer-operated environments, whereas add-on applications can be deployed to cloud platforms. It is thus imperative to have a means of connectivity between these environments over the internet. The technology stack used within the scope of this thesis is limited to the offering of the .NET Core ecosystem and Microsoft Azure. In the first part of the thesis work, a literature review was carried out. The literature review focused on the challenges associated with the architectural approach, and as a result, a list of challenges was formed. This list was utilized in the software development projects of the second part of the thesis. It should be noted that very few high-quality papers were available focusing exactly on the multi-tenant hybrid cloud architecture, so, in the end, source material for the review was searched separately for multi-tenant and for hybrid cloud design challenges. This factor is noted in the evaluation of the review. In the second part of the thesis work, the architectural approach was applied in two software development projects. Goals were set for the architectural approach: the add-on applications should be developed with modern technology stacks; their delivery should be automated; their subscription should be straightforward for customer organizations; and they should leverage multi-tenant resource sharing.
    In the first project, a data quality management tool was developed on top of a legacy dealership management system. Due to database connectivity challenges, the confidentiality of customer data and authentication requirements, the implemented solution does not fully utilize the architectural approach, as hosting the add-on application in the customer environment was the most reasonable solution. Despite this, the add-on application was developed with a modern technology stack and its delivery is automated. The subscription process does involve certain manual steps, and if the customer infrastructure changes over time, these steps must be repeated by the developers. This decreases the scalability of the overall delivery model. In the second project, a PDA application was developed on top of a legacy vehicle maintenance and tire hotel system. The final implementation fully utilizes the architectural approach. Support for multi-tenancy was implemented using ASP.NET Core dependency injection and the Finbuckle.MultiTenant library. Azure Relay Hybrid Connections was used for hybrid cloud connectivity between the add-on application and the master system. The delivery model incorporates the same challenges regarding subscription and customer infrastructure changes as the delivery model of the data quality management tool. However, the manual steps associated with these challenges must be performed only once per customer – not once per customer per application. In addition, the delivery model could be improved to support customer self-service governance, enabling the delegation of customer environment installations to the customers themselves. Going further, a customer environment installation could potentially cover an entire product family: for example, instead of providing access only for the PDA application, the installation could provide access for all add-on applications in the vehicle maintenance family. This would make customer environment management easier and the development of new add-on applications faster.

    A pattern recognition system for the automation of learning organisation – Learning Organisation Information System (LOIS)

    This thesis is about finding a solution to automate the Learning Organisation Information System, code-named LOIS, with which large companies will be able to instigate a culture of team learning based on the analysis of the events that occur in their respective businesses. They will be able to categorise data and formulate measures that can be implemented to reinforce and adapt their business models to accommodate changes. Nevertheless, this can only work if the automation offers non-intrusiveness, ease of use and adaptability for big companies. The main idea behind implementing LOIS is to provide a platform for employees to express their concerns and to predict the number of leavers, so that organisation management can formulate measures to retain those employees. The solution involves two algorithms: first, the K-means algorithm, which is used to cluster the data; and second, Time Series Prediction, which is used in conjunction with it to make predictions on the clustered data. The thesis explains how the K-means algorithm runs through all the data until no point changes cluster membership, and how Time Series Prediction is applied to the clustered data by first normalising the data set and then varying the number of nodes (layers). The main idea of implementing Time Series Prediction in the system is to predict the number of employees that will potentially leave the organisation over a certain period. An architectural framework is incorporated within this thesis and built around a case study designed specifically to implement the framework, whereby results are generated to be analysed and reviewed and measures formulated. The thesis explains in detail all the different components of the framework, the process flow and the deployment architecture, concluding with an organisational framework process flow. Furthermore, it explains and shows the different graphical user interfaces that the system offers to employees to help their day-to-day life within a company. The thesis concludes by comparing a large amount of historical and actual data in the system to determine whether there are any improvements in the processes.
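    The K-means stopping rule described above (iterate until no point changes cluster membership) and the min-max normalisation applied before prediction can be sketched as follows. The 1-D data and function names are illustrative only, not the thesis's implementation.

```go
package main

import (
	"fmt"
	"math"
)

// Normalize scales values to [0,1] (min-max scaling), as the thesis
// describes doing before time-series prediction.
func Normalize(xs []float64) []float64 {
	min, max := xs[0], xs[0]
	for _, x := range xs {
		if x < min { min = x }
		if x > max { max = x }
	}
	out := make([]float64, len(xs))
	if max == min {
		return out
	}
	for i, x := range xs {
		out[i] = (x - min) / (max - min)
	}
	return out
}

// KMeans clusters 1-D points, iterating until no point changes cluster
// membership - the stopping rule described in the abstract.
func KMeans(points []float64, centroids []float64) []int {
	assign := make([]int, len(points))
	for {
		changed := false
		// Assignment step: each point joins its nearest centroid.
		for i, p := range points {
			best := 0
			for c := range centroids {
				if math.Abs(p-centroids[c]) < math.Abs(p-centroids[best]) {
					best = c
				}
			}
			if assign[i] != best {
				assign[i] = best
				changed = true
			}
		}
		if !changed {
			return assign // no membership changed: converged
		}
		// Update step: each centroid becomes the mean of its members.
		sum := make([]float64, len(centroids))
		n := make([]int, len(centroids))
		for i, p := range points {
			sum[assign[i]] += p
			n[assign[i]]++
		}
		for c := range centroids {
			if n[c] > 0 {
				centroids[c] = sum[c] / float64(n[c])
			}
		}
	}
}

func main() {
	data := Normalize([]float64{1, 2, 3, 10, 11, 12})
	fmt.Println(KMeans(data, []float64{0.0, 1.0})) // [0 0 0 1 1 1]
}
```

    In the thesis's setting, the clustered, normalised data would then feed the time-series prediction model, whose node (layer) count is tuned separately.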