4,931 research outputs found

    Application-agnostic Personal Storage for Linked Data

    Get PDF
    Recent advances in cloud-based applications and services have led to the continuous replacement of traditional desktop applications with corresponding SaaS solutions. These cloud applications are provided by different service providers and typically manage the identity and personal data, such as contact details, of their users by their own means. As a result, the identities and personal data of users have been spread over different applications and servers, each capturing a partial snapshot of user data at a certain moment in time, and larger service providers, which have more services and users, gain an advantage over smaller ones in offering added value, such as analytics, on top of user data. This, however, has made the maintenance of personal data difficult and resource-consuming for service providers. Furthermore, such data segregation has an overall negative effect on the user experience of end-users, who need to repeatedly re-enter and maintain in parallel the same data to gain the maximum benefit out of their applications. Finally, from an integration point of view, the sealing of user data has led to the adoption of point-to-point integration models between service providers, which limits the evolution of application ecosystems compared to models with content aggregators and brokers. In this thesis, we develop an application-agnostic personal storage that allows sharing user data among applications. This is achieved by extending the AppScale app store identity infrastructure with a personal data storage that can be easily accessed by any application in the cloud and remains under the control of the user. Usability of the data is ensured by adopting linked data principles.
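
    As an illustration of the linked data principle mentioned above, the sketch below (in Python with rdflib) stores a user's contact details once as an RDF graph that any authorized application could consume; the vocabulary (FOAF) and the base URI are illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch: a user's contact details kept once in a personal data
# space as linked data (FOAF), so any authorized application can read the
# same record instead of keeping its own copy. Vocabulary and URIs are
# hypothetical, not taken from the thesis.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

STORE = Namespace("https://storage.example.org/users/")  # hypothetical base URI

g = Graph()
g.bind("foaf", FOAF)

me = STORE["alice#me"]
g.add((me, RDF.type, FOAF.Person))
g.add((me, FOAF.name, Literal("Alice Example")))
g.add((me, FOAF.mbox, Literal("mailto:alice@example.org")))

# Applications consume the same graph via standard serializations.
print(g.serialize(format="turtle"))
```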

    Data Spaces

    Get PDF
    This open access book aims to educate data space designers to understand what is required to create a successful data space. It explores cutting-edge theory, technologies, methodologies, and best practices for data spaces for both industrial and personal data and provides the reader with a basis for understanding the design, deployment, and future directions of data spaces. The book captures the early lessons and experience in creating data spaces. It arranges these contributions into three parts covering design, deployment, and future directions respectively. The first part explores the design space of data spaces. Its chapters detail the organisational design for data spaces, data platforms, data governance, federated learning, personal data sharing, data marketplaces, and hybrid artificial intelligence for data spaces. The second part describes the use of data spaces within real-world deployments. Its chapters are co-authored with industry experts and include case studies of data spaces in sectors including Industry 4.0, food safety, FinTech, health care, and energy. The third and final part details future directions for data spaces, including challenges and opportunities for common European data spaces and privacy-preserving techniques for trustworthy data sharing. The book is of interest to two primary audiences: first, researchers interested in data management and data sharing, and second, practitioners and industry experts engaged in data-driven systems where the sharing and exchange of data within an ecosystem are critical.

    Orchestration of distributed ingestion and processing of IoT data for fog platforms

    Get PDF
    In recent years there has been an extraordinary growth of the Internet of Things (IoT) and its protocols. The increasing diffusion of electronic devices with identification, computing, and communication capabilities is laying the ground for the emergence of a highly distributed service and networking environment. This situation implies an increasing demand for advanced IoT data management and processing platforms. Such platforms require support for multiple protocols at the edge for extended connectivity with objects, but also need to exhibit uniform internal data organization and advanced data processing capabilities to fulfill the demands of the applications and services that consume IoT data. One of the initial approaches to address this demand is the integration of IoT with the Cloud computing paradigm. There are many benefits to integrating IoT with Cloud computing: the IoT generates massive amounts of data, and Cloud computing provides a pathway for that data to travel to its destination. But today's Cloud computing models do not quite fit the volume, variety, and velocity of data that the IoT generates. Among the new technologies emerging around the Internet of Things, the Fog Computing paradigm has become the most relevant. Fog computing was introduced a few years ago in response to challenges posed by many IoT applications, including requirements such as very low latency, real-time operation, large geo-distribution, and mobility. These low-latency, geo-distributed, and mobile environments are also covered by the MEC (Mobile Edge Computing) network architecture, which provides an IT service environment and Cloud computing capabilities at the edge of the mobile network, within the Radio Access Network (RAN) and in close proximity to mobile subscribers. Fog computing addresses use cases with requirements far beyond the capabilities of Cloud-only solutions. The interplay between Cloud and Fog computing is crucial for the evolution of the IoT, but the reach and specification of this interplay remain an open problem. This thesis aims to find the right techniques and design decisions to build a scalable distributed system for the IoT under the Fog Computing paradigm to ingest and process data. The final goal is to explore the trade-offs and challenges in the design of a solution from Edge to Cloud that addresses the opportunities current and future technologies will bring in an integrated way. The thesis describes an architectural approach to some of the technical challenges behind the convergence of IoT, Cloud, and Fog, with special focus on bridging the gap between Cloud and Fog, and introduces new models and techniques to explore solutions for IoT environments. It contributes to architectural proposals for IoT ingestion and data processing by 1) characterizing a platform for hosting IoT workloads in the Cloud that provides multi-tenant data stream processing capabilities and interfaces over an advanced data-centric technology, including the construction of a state-of-the-art infrastructure to evaluate performance and validate the proposed solution; 2) studying an architectural approach following the Fog paradigm that addresses some of the technical challenges found in the first contribution, extending the model to tackle central challenges behind the convergence of Fog and IoT; and 3) designing a distributed and scalable platform to perform IoT operations in a moving-data environment. Having studied data processing in the Cloud and the suitability of the Fog paradigm for addressing IoT challenges close to the Edge, the final contribution defines the protocols, interfaces, and data management needed to ingest and process data in a distributed and orchestrated manner under the Fog Computing paradigm for IoT in a moving-data environment.
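
    The edge-versus-cloud split described above can be illustrated with a small sketch: a fog node pre-aggregates raw readings and forwards only compact summaries upstream. This is not the thesis's platform; the window size, data shapes, and upload hook are hypothetical.

```python
# Illustrative sketch (not the thesis's platform): an edge/fog node that
# aggregates raw sensor readings locally and forwards only compact summaries
# to the cloud, reducing upstream volume and latency sensitivity.
import statistics
import time
from collections import defaultdict

WINDOW_S = 5.0  # aggregation window; a trade-off between freshness and volume

def edge_aggregate(readings):
    """Group raw (sensor_id, value) readings and return per-sensor summaries."""
    by_sensor = defaultdict(list)
    for sensor_id, value in readings:
        by_sensor[sensor_id].append(value)
    return {
        sensor_id: {
            "count": len(values),
            "mean": statistics.fmean(values),
            "max": max(values),
            "window_end": time.time(),
        }
        for sensor_id, values in by_sensor.items()
    }

def forward_to_cloud(summaries):
    """Placeholder for the upstream call (e.g. HTTPS or a message broker)."""
    print(f"uploading {len(summaries)} summaries instead of the raw stream")

raw = [("temp-1", 21.3), ("temp-1", 21.9), ("hum-7", 48.0), ("temp-1", 22.4)]
forward_to_cloud(edge_aggregate(raw))
```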

    “Envisioning Digital Sanctuaries”: An Exploration of Virtual Collectives for Nurturing Professional Development of Women in Technical Domains

    Get PDF
    Work and learning are essential facets of our existence, yet sociocultural barriers have historically limited access and opportunity for women in multiple contexts, including their professional pursuits. Such sociocultural barriers are particularly pronounced in technical domains and have relegated minoritized voices to the margins. As a result of these barriers, those affected have suffered strife, turmoil, and subjugation. Hence, it is important to investigate how women can subvert such structural limitations and find channels through which they can seek support and guidance to navigate their careers. With the proliferation of modern communication infrastructure, virtual forums of conversation such as Reddit have emerged as key spaces that allow knowledge-sharing, provide opportunities for mobilizing collective action, and constitute sanctuaries of support and companionship. Yet, recent scholarship points to the negative ramifications of such channels in perpetuating social prejudice, directed particularly at members from historically underrepresented communities. Using a novel comparative multi-method, multi-level empirical approach comprising content analysis, social network analysis, and psycholinguistic analysis, I explore the way in which virtual forums engender community and foster avenues for everyday resilience and collective care through the analysis of 400,267 conversational traces collected from three subreddits (r/cscareerquestions, r/girlsgonewired & r/careerwoman). Blending the empirical analysis with a novel theoretical apparatus that integrates insights from social constructivist frameworks, feminist data studies, computer-supported collaborative work, and computer-mediated communication, I highlight how gender, care, and community building intertwine and collectively impact the emergent conversational habits of these online enclaves. Key results indicate six content themes ranging from discussions on knowledge advancement to scintillating ethical probes regarding disparities manifesting in the technical workplace. Further, psycholinguistic and network insights reveal four pivotal roles that support and enrich the communities in different ways. Taken together, these insights help to postulate an emergent spectrum of relationality ranging from a more agentic to a more communal pattern of affinity building. Network insights also yield valuable inferences regarding the role of automated agents in community dynamics across the forums. A discussion is presented regarding the emergent routines of care, collective empowerment, empathy-building tactics, community sustenance initiatives, and ethical perspectives in relation to the involvement of automated agents. This dissertation contributes to the theory and practice of how virtual collectives can be designed and sustained to offer spaces for enrichment, empowerment, and advocacy, focusing on the professional development of historically underrepresented voices such as women.
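
    As a rough illustration of the social network analysis step mentioned above, the sketch below builds a directed reply graph from conversational traces and ranks members by degree centrality; the data and names are invented, and the dissertation's actual pipeline is not reproduced here.

```python
# Minimal sketch of a network-analysis step: build a directed reply graph
# from (author, replied_to) pairs and rank members by degree centrality as a
# rough proxy for who anchors the conversation. All data is made up.
import networkx as nx

replies = [
    ("mentor_a", "newcomer_1"),
    ("mentor_a", "newcomer_2"),
    ("newcomer_1", "mentor_a"),
    ("peer_b", "newcomer_2"),
]

g = nx.DiGraph()
g.add_edges_from(replies)

for user, score in sorted(nx.degree_centrality(g).items(), key=lambda kv: -kv[1]):
    print(f"{user}: {score:.2f}")
```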

    A Taxonomy of Workflow Management Systems for Grid Computing

    Full text link
    With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. We also survey several representative Grid workflow systems developed by various projects world-wide to demonstrate the comprehensiveness of the taxonomy. The taxonomy not only highlights the design and engineering similarities and differences of the state of the art in Grid workflow systems, but also identifies the areas that need further research. Comment: 29 pages, 15 figures

    Dwarna: a blockchain solution for dynamic consent in biobanking

    Get PDF
    Dynamic consent aims to empower research partners and facilitate active participation in the research process. Used within the context of biobanking, it gives individuals access to information and control to determine how and where their biospecimens and data should be used. We present Dwarna, a web portal for dynamic consent that acts as a hub connecting the different stakeholders of the Malta Biobank: biobank managers, researchers, research partners, and the general public. The portal stores research partners' consent in a blockchain to create an immutable audit trail of research partners' consent changes. Dwarna's structure also presents a solution to the European Union's General Data Protection Regulation's right to erasure, a right that is seemingly incompatible with the blockchain model. Dwarna's transparent structure increases trustworthiness in the biobanking process by giving research partners more control over which research studies they participate in, by facilitating the withdrawal of consent, and by making it possible to request that the biospecimen and associated data are destroyed. Peer reviewed.
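
    One commonly cited pattern for reconciling an immutable ledger with the right to erasure, sketched below, keeps personal data off-chain and anchors only a salted hash on-chain, so that destroying the off-chain record leaves the on-chain entry unlinkable. This is a generic illustration, not necessarily Dwarna's exact design.

```python
# Sketch of a widely used off-chain/on-chain pattern (not necessarily
# Dwarna's design): consent details live off-chain, and only a salted hash
# is anchored on-chain as the immutable audit trail. Destroying the
# off-chain record and its salt leaves the on-chain entry unlinkable.
import hashlib
import secrets

off_chain_db = {}   # stand-in for the biobank's off-chain store
ledger = []         # stand-in for the blockchain

def record_consent(partner_id: str, study: str, granted: bool) -> str:
    salt = secrets.token_hex(16)
    payload = f"{partner_id}|{study}|{granted}|{salt}"
    digest = hashlib.sha256(payload.encode()).hexdigest()
    off_chain_db[digest] = {"partner": partner_id, "study": study,
                            "granted": granted, "salt": salt}
    ledger.append(digest)           # immutable record of the consent change
    return digest

def erase(digest: str) -> None:
    off_chain_db.pop(digest, None)  # off-chain data and salt destroyed; the
                                    # bare hash left on-chain reveals nothing

ref = record_consent("partner-42", "study-osteoporosis", True)
erase(ref)
```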

    Data Assets: Tokenization and Valuation

    Get PDF
    Your data (the new gold, the new oil) is hugely valuable (estimated at $13T globally) but is not a 'balance-sheet' asset. Tokenization, used by banks for payments and settlement, lets you manage, value, and monetize your data. Data is the ultimate commodity industry. This position paper outlines our vision and a general framework for tokenizing data and managing data assets and data liquidity, allowing individuals and organizations in the public and private sectors to capture the economic value of data while facilitating its responsible and ethical use. We examine the challenges associated with developing and securing a data economy, as well as the potential applications and opportunities of a decentralised, data-tokenized economy. We also discuss the ethical considerations needed to promote the responsible exchange and use of data to fuel innovation and progress.
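
    A toy sketch of the tokenization idea follows: a registry mints a token that references a dataset by content hash and records ownership transfers, so the asset can be held and exchanged without copying the data. The paper proposes a vision rather than this scheme; all names here are hypothetical.

```python
# Toy illustration of data tokenization (not the paper's scheme): a registry
# mints a token referencing a dataset by content hash and tracks ownership
# transfers, so the data asset can be exchanged without moving the data.
import hashlib
from dataclasses import dataclass, field

@dataclass
class DataToken:
    token_id: str
    content_hash: str          # fingerprint of the underlying dataset
    owner: str
    history: list = field(default_factory=list)

class Registry:
    def __init__(self):
        self.tokens = {}

    def mint(self, data: bytes, owner: str) -> DataToken:
        content_hash = hashlib.sha256(data).hexdigest()
        token = DataToken(token_id=f"tok-{len(self.tokens) + 1}",
                          content_hash=content_hash, owner=owner)
        self.tokens[token.token_id] = token
        return token

    def transfer(self, token_id: str, new_owner: str) -> None:
        token = self.tokens[token_id]
        token.history.append(token.owner)
        token.owner = new_owner

reg = Registry()
t = reg.mint(b"sensor readings, Q1", owner="data-producer")
reg.transfer(t.token_id, "analytics-buyer")
```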

    Cognitive Machine Individualism in a Symbiotic Cybersecurity Policy Framework for the Preservation of Internet of Things Integrity: A Quantitative Study

    Get PDF
    This quantitative study examined the complex nature of modern cyber threats to propose the establishment of cyber as an interdisciplinary field of public policy initiated through the creation of a symbiotic cybersecurity policy framework. For the public good (and maintaining ideological balance), there must be recognition that public policies are at a transition point where the digital public square is a tangible reality that is more than a collection of technological widgets. The academic contribution of this research project is the fusion of humanistic principles with Internet of Things (IoT) technologies that alters our perception of the machine from an instrument of human engineering into a thinking peer, elevating cyber from technical esoterism into an interdisciplinary field of public policy. The contribution to the US national cybersecurity policy body of knowledge is a unified policy framework (manifested in the symbiotic cybersecurity policy triad) that could transform cybersecurity policies from network-based to entity-based. A correlational archival data design was used with the frequency of malicious software attacks as the dependent variable and the diversity of intrusion techniques as the independent variable for RQ1. For RQ2, the frequency of detection events was the dependent variable and the diversity of intrusion techniques was the independent variable. Self-Determination Theory provides the theoretical framework, as the cognitive machine can recognize, self-endorse, and maintain its own identity based on a sense of self-motivation that is progressively shaped by the machine's ability to learn. The transformation of cyber policies from technical esoterism into an interdisciplinary field of public policy starts with the recognition that the cognitive machine is an independent consumer of, advisor into, and influenced by public policy theories, philosophical constructs, and societal initiatives.
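
    The correlational design described above can be illustrated with a short sketch: per-period intrusion-technique counts are summarized with a Shannon diversity index and correlated with attack frequency. The numbers are made up and stand in for the study's archival data.

```python
# Sketch of the correlational design (illustrative numbers, not the study's
# archival data): per-period diversity of intrusion techniques (Shannon
# index) correlated with the frequency of malicious-software attacks.
import numpy as np

def shannon_diversity(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

# Per-period attack counts broken down by intrusion technique (made up).
technique_counts = [
    [12, 3, 1, 0],
    [8, 8, 6, 4],
    [20, 1, 0, 0],
    [10, 9, 7, 5],
]
attack_frequency = [sum(c) for c in technique_counts]                    # dependent variable
technique_diversity = [shannon_diversity(c) for c in technique_counts]  # independent variable

r = np.corrcoef(technique_diversity, attack_frequency)[0, 1]
print(f"Pearson r between technique diversity and attack frequency: {r:.2f}")
```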

    The impact of a memory-centric radio network on data access performance

    Get PDF
    Future 5G-based mobile networks will be largely defined by virtualized network functions (VNFs). The related computing is being moved to the cloud, where a set of servers is provided to run all the software components of the VNFs. Such a software component can run on any server in the mobile network cloud infrastructure, and the servers conventionally communicate over a TCP/IP network. To realize the planned low-latency use cases in 5G, some servers are placed in data centers near the end users (edge clouds). Many of these use cases involve data accesses from one VNF to another, or to other network elements, and these accesses should take as little time as possible to stay within the stringent latency requirements of the new use cases. As a possible approach to achieving this, a novel memory-centric platform was studied. The main ideas of the memory-centric platform are to collapse the hierarchy between volatile and persistent memory by utilizing non-volatile memory (NVM) and to use memory-semantic communication between computer components. In this work, a surrogate memory-centric platform was set up as storage for VNFs, and the latency of data accesses from a VNF application was measured in different experiments. Measurements against a current platform showed that the memory-centric platform was significantly faster to access than the current TCP/IP-based platform. Measurements of RAM accesses with different memory bandwidths within the memory-centric platform showed that the latency was of roughly the same order regardless of the available memory bandwidth. These results suggest that the memory-centric platform is a promising alternative storage system for edge clouds. However, more research is needed to study how other service qualities, such as low latency variation, are fulfilled on a memory-centric platform in a VNF environment.
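
    The kind of comparison reported above can be sketched with a toy microbenchmark that times a small read over a loopback TCP round-trip against a direct in-memory access, mimicking the network-attached versus memory-semantic data paths; it is an illustration only, not the thesis's test setup.

```python
# Rough illustration (not the thesis's setup): compare the latency of a small
# read over a loopback TCP round-trip with a direct in-memory access.
import socket
import threading
import time

PAYLOAD = b"x" * 64
N = 1000

def echo_server(sock):
    conn, _ = sock.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname())

t0 = time.perf_counter()
for _ in range(N):
    client.sendall(PAYLOAD)
    client.recv(64)
tcp_us = (time.perf_counter() - t0) / N * 1e6

store = {i: PAYLOAD for i in range(N)}  # stand-in for memory-semantic access
t0 = time.perf_counter()
for i in range(N):
    _ = store[i]
mem_us = (time.perf_counter() - t0) / N * 1e6

print(f"TCP loopback round-trip: {tcp_us:.1f} us, in-memory access: {mem_us:.3f} us")
client.close()
```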