1,522 research outputs found

    Mapping the Path to a Health Data Marketplace in Norway: An Exploratory Case Study

    This Master's thesis explores the complex dynamics of health data in the digital age, focusing on its secure and efficient management and the ethical considerations involved. It investigates the potential of implementing a Health Data Marketplace (HDM) in the Norwegian e-health sector, aiming to construct a seamless health data exchange platform. The study proposes integrating an existing health data gateway, the Egde Health Gateway (EHG), with the HDM. The research offers an in-depth analysis of the limitations of existing health data exchange systems in Norway and addresses current research gaps concerning Data Marketplaces, Business Models, Gateways, and the Norwegian e-health context. Guided by two central research questions, the thesis identifies the essential components required to implement an HDM in Norway successfully and examines how such a marketplace could be established on an existing data platform. Significantly, the thesis underscores the pivotal role of the HDM's primary stakeholders: Platform Operators, Platform Users, and Legal Authorities. The exploration reveals that Platform Operators are vital influencers, fostering collaboration and innovation within the ecosystem, while Platform Users and Legal Authorities ensure the marketplace's innovation and regulatory compliance. Additionally, the study identifies essential components for integrating an HDM into an existing health data platform, including Data Standardization, Interoperability, Integration, Security, Trust, and Legal Frameworks, among others. The thesis marks a significant step towards realizing an HDM in the Norwegian e-health sector and invites future research to broaden stakeholder perspectives, examine the economic aspects of the HDM, and delve into ethical considerations and technological innovations. The findings serve as a catalyst for leveraging health data effectively, securely, and ethically, contributing to improved healthcare outcomes, research, and innovation in Norway and beyond.

    A Contribution to Stimulating the Use of Cloud Computing Solutions: Design of a Cloud Service Broker to Foster Trustworthy, Interoperable and Legally Compliant Distributed Digital Ecosystems. Application in Multi-cloud Environments.

    The aim of the research presented in this thesis is to help developers and operators of applications deployed across multiple clouds to discover and manage the different cloud services, supporting their reuse and combination in order to build a network of interoperable services that comply with the law and whose service-level agreements can be evaluated continuously. One of the contributions of this thesis is the design and development of a cloud service broker called ACSmI (Advanced Cloud Services meta-Intermediator). ACSmI makes it possible to assess compliance with service-level agreements, including applicable legislation. ACSmI also provides an intermediate abstraction layer for cloud services through which developers can easily access a catalogue of accredited services compatible with the established non-functional requirements. In addition, this research proposes a characterisation of multi-cloud native applications and the concept of "extended DevOps", conceived specifically for this type of application. The "extended DevOps" concept aims to solve some of the current problems in the design, development, deployment and adaptation of multi-cloud applications, providing a novel, extended DevOps approach that adapts current DevOps practices to the multi-cloud paradigm.

    Successful Collaboration in Global Production Networks - fair, secured, connected

    In today's world, manufacturing companies face many challenges due to the uncertainty and complexity of environmental influences as well as increasing competitive pressure. The pandemic has clearly illustrated how volatile and fragile our supply chains have become. One way to overcome these challenges together is to collaborate with other companies in the value network. Collaborations, i.e. successful cooperation with strategic partners and customers to achieve common goals, will continue to gain in importance. Instead of individual companies, entire value chains and networks will therefore compete with each other in the future. This will require a shift toward fast and seamless data exchange between the players in the value network. Advancing digitization, and thus a generally more networked world, increasingly supports such collaborations, as the sharing and collaborative use of data is becoming much simpler. At the same time, the right handling of data will be decisive for competitiveness. Digitization is moving from being a driver of change to an enabler of change. Innovative business models and the exploitation of the potential hidden in data will make it possible to realize reliable, flexible and, at the same time, resource-conserving value creation. The number of cloud-based collaboration platforms is growing steadily. Small and medium-sized enterprises in particular have to serve many different customer platforms at the same time, while they themselves are still struggling with internal digitization challenges. Standardization initiatives for secure data spaces in industry, such as GAIA-X, therefore hold great potential. In addition to these fundamental infrastructural issues, there are further challenges with regard to collaboration projects. Particular importance is attached to the competent handling of data protection and data security. There are often reservations that the disclosure of data and information will result in the loss of hard-earned expertise and competitive advantages built up over time. At the same time, users from an engineering environment are often able to assess the risks of digital collaboration only to a limited extent. To secure one's own competitive position in the long term, digital competencies must therefore be built up and barriers to collaboration overcome. Success stories and clear recommendations for action can provide important impetus for the implementation of successful collaboration projects, showing how collaborations can be approached in practice and what added value they generate. That is why we would like to provide manufacturing companies with such guidance in the form of this action guide. The collaboration projects explained below, and the best practices derived from them, are intended to help companies find their own strategies on the path to more collaboration. We hope you enjoy reading this guide and are always available for questions and discussions.

    Towards a human-centric data economy

    Spurred by widespread adoption of artificial intelligence and machine learning, “data” is becoming a key production factor, comparable in importance to capital, land, or labour in an increasingly digital economy. In spite of an ever-growing demand for third-party data in the B2B market, firms are generally reluctant to share their information. This is due to the unique characteristics of “data” as an economic good (a freely replicable, non-depletable asset holding a highly combinatorial and context-specific value), which moves digital companies to hoard and protect their “valuable” data assets, and to integrate across the whole value chain seeking to monopolise the provision of innovative services built upon them. As a result, most of those valuable assets still remain unexploited in corporate silos. This situation is shaping the so-called data economy around a small number of champions, and it is hampering the benefits of a global data exchange on a large scale. Some analysts have estimated the potential value of the data economy at US$2.5 trillion globally by 2025. Not surprisingly, unlocking the value of data has become a central policy of the European Union, which estimated the size of the data economy at €827 billion for the EU27 over the same period. Within the scope of the European Data Strategy, the European Commission is also steering initiatives aimed at identifying relevant cross-industry use cases involving different verticals, and at enabling sovereign data exchanges to realise them. Among individuals, the massive collection and exploitation of personal data by digital firms in exchange for services, often with little or no consent, has raised a general concern about privacy and data protection.
Apart from spurring recent legislative developments in this direction, this concern has raised some voices warning against the unsustainability of the existing digital economics (few digital champions, potential negative impact on employment, growing inequality), some of which propose that people be paid for their data in a sort of worldwide data labour market as a potential solution to this dilemma [114, 115, 155]. From a technical perspective, we are far from having the required technology and algorithms that would enable such a human-centric data economy. Even its scope is still blurry, and the question of the value of data is, at the least, controversial. Research works from different disciplines have studied the data value chain, different approaches to the value of data, how to price data assets, and novel data marketplace designs. At the same time, complex legal and ethical issues with respect to the data economy have arisen around privacy, data protection, and ethical AI practices. In this dissertation, we start by exploring the data value chain and how entities trade data assets over the Internet. We carry out what is, to the best of our understanding, the most thorough survey of commercial data marketplaces. In this work, we have catalogued and characterised ten different business models, including those of personal information management systems, companies born in the wake of recent data protection regulations that aim to empower end users to take control of their data. We have also identified the challenges faced by different types of entities, and what kind of solutions and technology they are using to provide their services. Then we present a first-of-its-kind measurement study that sheds light on the prices of data in the market using a novel methodology. We study how ten commercial data marketplaces categorise and classify data assets, and which categories of data command higher prices.
We also develop classifiers for comparing data products across different marketplaces, and we study the characteristics of the most valuable data assets and the features that specific vendors use to set the price of their data products. Based on this information, and adding data products offered by 33 other data providers, we develop a regression analysis that reveals which features correlate with the prices of data products. As a result, we also implement the basic building blocks of a novel data pricing tool capable of providing a hint of the market price of a new data product using just its metadata as input. Such a tool would provide more transparency on the prices of data products in the market, helping to price data assets and to avoid the price fluctuations inherent to nascent markets. Next we turn to topics related to data marketplace design. In particular, we study how buyers can select and purchase suitable data for their tasks without requiring a priori access to such data in order to make a purchase decision, and how marketplaces can distribute the payoff for a data transaction combining data from different sources among the corresponding providers, be they individuals or firms. The difficulty of both problems is further exacerbated in a human-centric data economy, where buyers have to choose among the data of thousands of individuals, and where marketplaces have to distribute payoffs to thousands of people contributing personal data to a specific transaction. Regarding the selection process, we compare different purchase strategies depending on the level of information available to data buyers at the time of making decisions. A first methodological contribution of our work is proposing a data evaluation stage prior to datasets being selected and purchased by buyers in a marketplace.
We show that buyers can significantly improve the performance of the purchasing process simply by being provided with a measurement of the performance of their models when trained by the marketplace on each individual eligible dataset. We design purchase strategies that exploit this functionality and call the resulting algorithm Try Before You Buy; our work demonstrates over synthetic and real datasets that it can lead to near-optimal data purchasing in only O(N) time instead of the exponential O(2^N) execution time needed to calculate the optimal purchase. With regard to the payoff distribution problem, we focus on computing the relative value of spatio-temporal datasets combined in marketplaces for predicting transportation demand and travel time in metropolitan areas. Using large datasets of taxi rides from Chicago, Porto and New York, we show that the value of data is different for each individual and cannot be approximated by its volume. Our results reveal that even more complex approaches based on the “leave-one-out” value are inaccurate. Instead, more sophisticated and acknowledged notions of value from economics and game theory, such as the Shapley value, need to be employed if one wishes to capture the complex effects of mixing different datasets on the accuracy of forecasting algorithms. However, the Shapley value entails serious computational challenges: its exact calculation requires repetitively training and evaluating a model on every combination of data sources, and hence O(N!) or O(2^N) computational time, which is unfeasible for complex models or thousands of individuals. Moreover, our work paves the way to new methods of measuring the value of spatio-temporal data. We identify heuristics such as entropy or similarity to the average that show a significant correlation with the Shapley value and can therefore be used to overcome the significant computational challenges posed by Shapley approximation algorithms in this specific context.
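As a concrete illustration of why the exact Shapley value is intractable at scale, the sketch below computes it by enumerating all N! orderings for three hypothetical data sources A, B, and C. The coalition utilities (model accuracies) are invented for illustration and are not results from the dissertation:

```python
from itertools import permutations
from math import factorial

# Hypothetical utility v(S): accuracy of a model trained on coalition S of
# data sources. Illustrative numbers only.
v = {
    frozenset(): 0.00,
    frozenset("A"): 0.60, frozenset("B"): 0.55, frozenset("C"): 0.50,
    frozenset("AB"): 0.80, frozenset("AC"): 0.70, frozenset("BC"): 0.65,
    frozenset("ABC"): 0.90,
}

def shapley(players, v):
    """Exact Shapley value: average marginal contribution over all N! orderings.

    Each lookup of v stands for a full model training-and-evaluation run,
    so the cost grows factorially with the number of data sources.
    """
    phi = dict.fromkeys(players, 0.0)
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]  # marginal contribution
            coalition = coalition | {p}
    n = factorial(len(players))
    return {p: total / n for p, total in phi.items()}

payoffs = shapley("ABC", v)
# Efficiency property: the payoffs split exactly v({A, B, C}) among the sources.
```

With N sources there are N! orderings (equivalently 2^N distinct coalitions to train on), which is precisely why cheap proxies correlated with the Shapley value, such as the entropy or similarity-to-average heuristics mentioned above, matter in practice.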
We conclude with a number of open issues and propose further research directions that leverage the contributions and findings of this dissertation. These include monitoring data transactions to better measure data markets, and complementing market data with actual transaction prices to build a more accurate data pricing tool. A human-centric data economy would also require that the contributions of thousands of individuals to machine learning tasks be calculated daily. For that to be feasible, we need to further optimise the efficiency of the data purchasing and payoff calculation processes in data marketplaces. In that direction, we also point to some alternatives to repetitively training and evaluating a model when selecting data with Try Before You Buy and when approximating the Shapley value. Finally, we discuss the challenges and potential technologies that can help build a federation of standardised data marketplaces. The data economy will develop fast in the upcoming years, and researchers from different disciplines will work together to unlock the value of data and make the most out of it. Maybe the proposal of getting paid for our data and our contribution to the data economy finally flies, or maybe other proposals, such as the robot tax, are finally used to balance the power between individuals and tech firms in the digital economy. Still, we hope our work sheds light on the value of data, and contributes to making the price of data more transparent and, eventually, to moving towards a human-centric data economy.
This work has been supported by IMDEA Networks Institute. Programa de Doctorado en Ingeniería Telemática, Universidad Carlos III de Madrid. Committee chair: Georgios Smaragdakis; secretary: Ángel Cuevas Rumín; member: Pablo Rodríguez Rodrígue
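As a rough illustration of the metadata-based price-hint idea described in this abstract, the sketch below fits a log-log least-squares regression of price against a single metadata feature, the number of records. The products, prices, and the one-feature design are all invented simplifications of the multi-feature regression the dissertation describes:

```python
import math

# Hypothetical training set: (number of records, observed price in USD) for
# data products listed on marketplaces. Illustrative values only.
products = [(1_000, 50.0), (10_000, 200.0), (100_000, 900.0), (1_000_000, 4_000.0)]

# Fit log10(price) = a + b * log10(records) by ordinary least squares.
xs = [math.log10(n) for n, _ in products]
ys = [math.log10(p) for _, p in products]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

def hint_price(records: int) -> float:
    """Market-price hint for a new data product given only its metadata."""
    return 10 ** (a + b * math.log10(records))
```

A real pricing tool would regress over many metadata features (category, update frequency, geographic scope, and so on) rather than volume alone, which, as the abstract notes, is a poor proxy for value on its own.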

    The Impact Of The Development Of ICT In Several Hungarian Economic Sectors

    As the author could not find a reassuring mathematical and statistical method in the literature for studying the effect of information and communication technology on enterprises, he proposes a new research and analysis method, which he also used to study the Hungarian economic sectors. The question of which factors affect their net income is vital for enterprises. At first, the author studied some potential indicators related to economic sectors; those indicators were then compared to the net income of the surveyed enterprises. The resulting data showed that the growing penetration of electronic marketplaces contributed to the change in the net income of enterprises to the greatest extent. Furthermore, among all the potential indicators, it was the only one directly influencing the net income of enterprises. With the help of the compound indicator and the financial data of the studied economic sectors, the author attempted to find a connection between the development level of ICT and profitability. Profitability and productivity are influenced by many other factors as well. As the effect of those other factors could not be measured, the results (shown in a coordinate system) are not complete but are informative. The highest increment of specific Gross Value Added was produced by ‘Manufacturing’, ‘Electricity, gas and water supply’, ‘Transport, storage and communication’ and ‘Financial intermediation’. With the exception of ‘Electricity, gas and water supply’, these economic sectors belong to the group of underdeveloped branches (below 50 percent). On the other hand, ‘Construction’, ‘Health and social work’ and ‘Hotels and restaurants’ can be seen as laggards, so they fall into the lower left part of the coordinate system.
‘Agriculture, hunting and forestry’ can also be classified as a laggard economic sector, but as the effect of the compound indicator on the increment of Gross Value Added was less significant, it can be found in the upper left part of the coordinate system. Fitting a trend line to the points makes it clear that the gradient is positive; that is, the higher the usage of ICT devices, the higher the improvement detected in the specific Gross Value Added.