
    Towards trustworthy computing on untrustworthy hardware

    Historically, hardware was considered inherently secure and trusted owing to its obscurity and the isolated nature of its design and manufacturing. In the last two decades, however, hardware trust and security have emerged as pressing issues. Modern hardware faces threats manifested mainly in undesired modifications by untrusted parties in its supply chain, unauthorized and pirated selling, injected faults, and system- and microarchitecture-level attacks. If realized, these threats can push hardware into abnormal and unexpected behaviour, causing real-life damage and significantly undermining our trust in the electronic and computing systems we use in daily life and in safety-critical applications. A large number of detective and preventive countermeasures have been proposed in the literature. Our knowledge of the potential consequences of real-life threats to hardware trust is nevertheless limited, given the small number of real-life reports and the plethora of ways in which hardware trust can be undermined. With this in mind, run-time monitoring of hardware combined with active mitigation of attacks, referred to as trustworthy computing on untrustworthy hardware, is proposed as the last line of defence. This last line of defence allows us to confront live hardware mistrust rather than turning a blind eye to it or being helpless once it occurs.

    This thesis proposes three frameworks towards trustworthy computing on untrustworthy hardware. The presented frameworks are adaptable to different applications, independent of the design of the monitored elements, based on autonomous security elements, and computationally lightweight. The first framework is concerned with explicit violations and breaches of trust at run-time, with an untrustworthy on-chip communication interconnect presented as a potential offender. It is based on the guiding principles of component guarding, data tagging, and event verification. The second framework targets hardware elements with inherently variable and unpredictable operational latency and proposes a machine-learning-based characterization of these latencies to infer undesired latency extensions or denial-of-service attacks. The framework is implemented on a DDR3 DRAM after demonstrating its vulnerability to obscured latency-extension attacks. The third framework studies the possible deployment of untrustworthy hardware elements in the analog front end, and the integrity issues that might consequently arise at the analog-digital boundary of systems on chip. It uses machine-learning methods and the unique temporal and arithmetic features of signals at this boundary to monitor their integrity and assess their trust level.
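The second framework's idea of learning nominal latency behaviour and flagging deviations can be sketched briefly. The thesis does not specify a particular model, so the one-class detector, window size, feature set, and synthetic traces below are illustrative assumptions rather than the author's implementation:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def window_features(latencies_ns, window=256):
    """Summarize raw access latencies into per-window statistics
    (mean, std, 99th percentile, max) -- an assumed feature set."""
    feats = []
    for i in range(0, len(latencies_ns) - window + 1, window):
        w = latencies_ns[i:i + window]
        feats.append([w.mean(), w.std(), np.percentile(w, 99), w.max()])
    return np.array(feats)

rng = np.random.default_rng(0)
# Nominal DRAM read latencies (synthetic stand-in for measured traces).
nominal = rng.normal(50.0, 4.0, 100_000).clip(min=30.0)
# Attack trace: every 20th access stretched by an extra hidden delay.
attacked = nominal.copy()
attacked[::20] += rng.normal(80.0, 10.0, attacked[::20].shape)

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(window_features(nominal))          # learn attack-free behaviour

flags = detector.predict(window_features(attacked))  # -1 = anomalous window
print(f"{(flags == -1).mean():.1%} of windows flagged as suspicious")
```

The detector is trained only on attack-free traces; windows whose latency statistics fall outside the learned envelope are flagged as possible obscured latency-extension or denial-of-service activity.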

    Towards a human-centric data economy

    Spurred by the widespread adoption of artificial intelligence and machine learning, “data” is becoming a key production factor, comparable in importance to capital, land, or labour in an increasingly digital economy. In spite of an ever-growing demand for third-party data in the B2B market, firms are generally reluctant to share their information. This is due to the unique characteristics of “data” as an economic good (a freely replicable, non-depletable asset holding a highly combinatorial and context-specific value), which move digital companies to hoard and protect their “valuable” data assets, and to integrate across the whole value chain, seeking to monopolise the provision of innovative services built upon them. As a result, most of those valuable assets still remain unexploited in corporate silos. This situation is shaping the so-called data economy around a handful of champions, and it is hampering the benefits of a global, large-scale data exchange. Some analysts have estimated the potential value of the data economy at US$2.5 trillion globally by 2025. Not surprisingly, unlocking the value of data has become a central policy of the European Union, which estimated the size of the data economy at €827 billion for the EU27 over the same period. Within the scope of the European Data Strategy, the European Commission is also steering initiatives aimed at identifying relevant cross-industry use cases involving different verticals and at enabling sovereign data exchanges to realise them. Among individuals, the massive collection and exploitation of personal data by digital firms in exchange for services, often with little or no consent, has raised general concern about privacy and data protection. Apart from spurring recent legislative developments in this direction, this concern has prompted warnings about the unsustainability of the existing digital economics (few digital champions, potential negative impact on employment, growing inequality); some voices propose that people be paid for their data in a sort of worldwide data labour market as a potential solution to this dilemma [114, 115, 155]. From a technical perspective, we are far from having the technology and algorithms required to enable such a human-centric data economy. Even its scope is still blurry, and the question of the value of data is, at the least, controversial. Research from different disciplines has studied the data value chain, different approaches to the value of data, how to price data assets, and novel data marketplace designs. At the same time, complex legal and ethical issues with respect to the data economy have arisen around privacy, data protection, and ethical AI practices. In this dissertation, we start by exploring the data value chain and how entities trade data assets over the Internet. We carry out what is, to the best of our knowledge, the most thorough survey of commercial data marketplaces. In this work, we have catalogued and characterised ten different business models, including those of personal information management systems: companies born in the wake of recent data protection regulations that aim to empower end users to take control of their data. We have also identified the challenges faced by different types of entities, and what kinds of solutions and technology they use to provide their services. We then present a first-of-its-kind measurement study that sheds light on the prices of data in the market using a novel methodology.
We study how ten commercial data marketplaces categorise and classify data assets, and which categories of data command higher prices. We also develop classifiers for comparing data products across different marketplaces, and we study the characteristics of the most valuable data assets and the features that specific vendors use to set the prices of their data products. Based on this information, and adding data products offered by 33 other data providers, we develop a regression analysis to reveal the features that correlate with the prices of data products. As a result, we also implement the basic building blocks of a novel data pricing tool capable of suggesting the market price of a new data product using just its metadata as input. Such a tool would bring more transparency to the prices of data products in the market, helping to price data assets and to avoid the price fluctuation inherent in nascent markets. Next we turn to topics related to data marketplace design. In particular, we study how buyers can select and purchase data suitable for their tasks without requiring a priori access to that data in order to make a purchase decision, and how marketplaces can distribute the payoff of a data transaction that combines data from different sources among the corresponding providers, be they individuals or firms. Both problems become harder still in a human-centric data economy, where buyers have to choose among the data of thousands of individuals, and where marketplaces have to distribute payoffs to the thousands of people contributing personal data to a specific transaction. Regarding the selection process, we compare different purchase strategies depending on the level of information available to data buyers at the time of making decisions. A first methodological contribution of our work is to propose a data evaluation stage prior to datasets being selected and purchased in a marketplace. We show that buyers can significantly improve the performance of the purchasing process simply by being provided with a measurement of the performance of their models when trained by the marketplace on each individual eligible dataset. We design purchase strategies that exploit this functionality, calling the resulting algorithm Try Before You Buy, and demonstrate over synthetic and real datasets that it can lead to near-optimal data purchasing in only O(N) evaluations instead of the exponential O(2^N) execution time needed to compute the optimal purchase. With regard to the payoff distribution problem, we focus on computing the relative value of spatio-temporal datasets combined in marketplaces for predicting transportation demand and travel time in metropolitan areas. Using large datasets of taxi rides from Chicago, Porto, and New York, we show that the value of data is different for each individual and cannot be approximated by its volume. Our results reveal that even more elaborate approaches based on the “leave-one-out” value are inaccurate. Instead, more complex and acknowledged notions of value from economics and game theory, such as the Shapley value, need to be employed to capture the complex effects that mixing different datasets has on the accuracy of forecasting algorithms. However, the Shapley value entails serious computational challenges: its exact calculation requires repeatedly training and evaluating every combination of data sources, and hence O(N!)
or O(2^N) computational time, which is infeasible for complex models or thousands of individuals. Moreover, our work paves the way to new methods of measuring the value of spatio-temporal data. We identify heuristics, such as entropy or similarity to the average, that show a significant correlation with the Shapley value and can therefore be used to overcome the significant computational challenges posed by Shapley approximation algorithms in this specific context. We conclude with a number of open issues and propose further research directions that leverage the contributions and findings of this dissertation. These include monitoring data transactions to better measure data markets, and complementing market data with actual transaction prices to build a more accurate data pricing tool. A human-centric data economy would also require that the contributions of thousands of individuals to machine learning tasks be calculated daily. For that to be feasible, we need to further optimise the efficiency of the data purchasing and payoff calculation processes in data marketplaces. In that direction, we also point to alternatives to repetitively training and evaluating a model when selecting data with Try Before You Buy and when approximating the Shapley value. Finally, we discuss the challenges and potential technologies that can help build a federation of standardised data marketplaces. The data economy will develop fast in the coming years, and researchers from different disciplines will work together to unlock the value of data and make the most of it. Perhaps the proposal that people get paid for their data and their contribution to the data economy will finally fly, or perhaps other proposals, such as the robot tax, will ultimately be used to balance the power between individuals and tech firms in the digital economy. Either way, we hope our work sheds light on the value of data and contributes to making the price of data more transparent and, eventually, to moving towards a human-centric data economy.
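The Try Before You Buy idea can be made concrete with a small sketch. The evaluation function, dataset names, and scores below are toy assumptions, not the dissertation's method; the point is the complexity contrast between ranking datasets by individual marketplace-side evaluations (O(N) calls) and exhaustively searching subsets (O(2^N) calls):

```python
from itertools import combinations

# Toy stand-in for "train the buyer's model with this subset and report
# validation accuracy". In a marketplace, the platform runs this on behalf
# of the buyer, who has no a priori access to the data itself.
def evaluate(subset):
    base = {"A": 0.61, "B": 0.58, "C": 0.55, "D": 0.50}
    score = max((base[d] for d in subset), default=0.5)
    return score + 0.02 * (len(subset) - 1)  # mild synergy when combining

def try_before_you_buy(datasets, k):
    """Rank datasets by their individual evaluation (N calls in total),
    then purchase the top k."""
    ranked = sorted(datasets, key=lambda d: evaluate([d]), reverse=True)
    return ranked[:k]

def optimal_purchase(datasets, k):
    """Exhaustive search over all subsets of size <= k: O(2^N) calls."""
    candidates = (s for r in range(1, k + 1)
                  for s in combinations(datasets, r))
    return max(candidates, key=evaluate)

datasets = ["A", "B", "C", "D"]
print(try_before_you_buy(datasets, k=2))  # 4 evaluations
print(optimal_purchase(datasets, k=2))    # up to 2^N - 1 evaluations
```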
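Because exact Shapley computation needs every coalition, sampling approximations are the practical route. Below is a generic Monte Carlo permutation estimator of data Shapley values; the utility function and provider names are invented for illustration, and the heuristics the dissertation identifies (entropy, similarity to the average) would complement or replace such sampling:

```python
import random

def shapley_monte_carlo(players, utility, samples=2000, seed=0):
    """Estimate each data source's Shapley value by sampling random
    orderings and averaging marginal contributions, instead of
    enumerating all O(N!) orderings (or O(2^N) coalitions)."""
    rng = random.Random(seed)
    values = {p: 0.0 for p in players}
    for _ in range(samples):
        order = players[:]
        rng.shuffle(order)
        coalition, prev = [], utility([])
        for p in order:
            coalition.append(p)
            cur = utility(coalition)   # one (re)training + evaluation
            values[p] += cur - prev
            prev = cur
    return {p: v / samples for p, v in values.items()}

# Toy utility: forecasting accuracy gained by combining providers' data.
def utility(coalition):
    gains = {"alice": 0.10, "bob": 0.05, "carol": 0.05}
    total = sum(gains[p] for p in coalition)
    if {"bob", "carol"} <= set(coalition):
        total += 0.04  # complementary datasets are worth more together
    return total

print(shapley_monte_carlo(["alice", "bob", "carol"], utility))
# ~{'alice': 0.10, 'bob': 0.07, 'carol': 0.07}: value is not volume
```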

    BDTS: Blockchain-based Data Trading System

    Achieving “fair exchange” when trading data through blockchain platforms is hard. The reasons are twofold. First, guaranteeing fairness between sellers and consumers is challenging because deception by any participating party is risk-free. This leads to the second issue: judging the behaviour of data executors (such as cloud service providers) among mutually distrustful parties is impractical under traditional trading protocols. To fill these gaps, this paper presents BDTS, a blockchain-based data trading system. BDTS implements a fair-exchange protocol in which benign behaviour is rewarded while dishonest behaviour is punished. Our scheme requires the seller to provide consumers with the correct encryption keys for proper execution, and encourages a rational data executor to behave faithfully to maximise its reward. We analyse the strategies of consumers, sellers, and dealers in the trading game and show that honesty is in every party's own interest, so the game reaches a Nash equilibrium. Evaluations demonstrate efficiency and practicability. (ICICS 2023, Best Paper Award)
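The incentive argument can be checked mechanically on a payoff matrix. The payoff numbers below are invented for illustration (the paper derives incentives from its own rewards and punishments); the sketch merely verifies that mutual honesty is a Nash equilibrium, i.e., no single party gains by deviating unilaterally:

```python
from itertools import product

# A simplified two-party view of the trading game.
PLAYERS = ("seller", "consumer")
STRATEGIES = ("honest", "cheat")

# Assumed payoffs (seller, consumer): honest trades pay out, while the
# protocol punishes cheating (e.g., deposit slashing), so deviating hurts.
PAYOFFS = {
    ("honest", "honest"): (3, 3),
    ("honest", "cheat"):  (1, -2),
    ("cheat",  "honest"): (-2, 1),
    ("cheat",  "cheat"):  (-1, -1),
}

def is_nash(profile):
    """A profile is a Nash equilibrium if no player can improve its own
    payoff by unilaterally switching strategy."""
    for i, _ in enumerate(PLAYERS):
        for alt in STRATEGIES:
            deviated = list(profile)
            deviated[i] = alt
            if PAYOFFS[tuple(deviated)][i] > PAYOFFS[profile][i]:
                return False
    return True

for profile in product(STRATEGIES, repeat=2):
    print(profile, "-> Nash equilibrium" if is_nash(profile) else "")
# Only ('honest', 'honest') is a Nash equilibrium under these payoffs.
```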

    FlexiChain 2.0: NodeChain Assisting Integrated Decentralized Vault for Effective Data Authentication and Device Integrity in Complex Cyber-Physical Systems

    Distributed Ledger Technology (DLT) was introduced, using the most common consensus algorithms, either as an electronic cash system or as a decentralized programmable-asset platform providing general services. Most established, reliable networks are not suitable for every application, in particular smart-city, Internet of Things (IoT), and Cyber-Physical Systems (CPS) applications. The purpose of this paper is to provide a DLT suited to IoT and CPS that satisfies their requirements. The proposed work has been designed around the requirements of cyber-physical systems. FlexiChain is proposed as a layer-zero network that can be formed from independent blockchains. In addition, NodeChain is introduced as a distributed unique-identifier (UID) aggregation vault that secures all nodes' UIDs. NodeChain mainly serves FlexiChain for all node security requirements, targeting the security and integrity of each node. The linked UIDs create a chain of narration that keeps track not merely of assets but also of who authenticated them. The security results show higher resistance against four types of attacks, and the strength of the network is present from its early stages compared to blockchain and central-authority baselines. FlexiChain is introduced as a layer-zero network for all CPS decentralized applications, taking their requirements into account; it relies on lightweight processing mechanisms and introduces additional methods to increase security.
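The "chain of narration" over UIDs can be illustrated with a minimal hash-linked record structure. The field names and hashing scheme below are assumptions for illustration, not FlexiChain's actual format: each record binds a node's UID to the authenticator that vouched for it and to the previous record's hash, so tampering anywhere breaks the chain.

```python
import hashlib
import json

def record_hash(record):
    # Canonical JSON so the hash is stable across runs.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_uid(chain, uid, authenticator):
    """Append a UID record linked to the previous one, recording both the
    asset's UID and who authenticated it (the 'chain of narration')."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"uid": uid, "authenticated_by": authenticator, "prev": prev}
    chain.append({**body, "hash": record_hash(body)})

def verify(chain):
    """Recompute every link; any modified UID or authenticator is detected."""
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev or record_hash(body) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = []
append_uid(chain, uid="device-42", authenticator="gateway-7")
append_uid(chain, uid="sensor-13", authenticator="device-42")
print(verify(chain))                      # True
chain[0]["authenticated_by"] = "mallory"  # tamper with the narration
print(verify(chain))                      # False
```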

    Plastic circular economy in the EU: Material Flow Analysis and Transition Analysis

    Plastic is valued for its versatility, but concerns have been raised over the environmental impacts of plastic waste. A more in-depth investigation of the plastic system is still needed to understand current flows and the factors required to close the plastic cycle. This research applied material flow analysis (MFA) and transition analysis (TA), using a multilevel perspective, to the plastic circular economy transition in the EU. The MFA covers over 400 categories of plastic-containing products, with a detailed analysis of the final destination of waste. The TA identifies the interaction of barriers to and drivers of the use of secondary plastics, with a focus on the regime level along the plastic value chain. The MFA results indicate that the EU produced over 66 million tonnes (Mt) of plastic polymers/fibres in 2016, with an estimated consumption of plastic products of 73 Mt. Plastic waste generation amounted to over 37 Mt, and a significant amount of plastic waste was not recovered back into plastics in the EU. The uncertainty analysis of the MFA highlights important data quality issues that need to be addressed. To understand why using secondary plastics is challenging, the TA mapped the factors, across policies and standards, markets and business models, technology, and consumer preferences and behaviours, that create a web of constraints and a web of drivers. The TA results highlight that data-information-knowledge is the key gap, as most of the aspects are cross-cutting. Different actors are involved in new business networks and play multiple roles in driving the co-evolutionary dynamic. The thesis concludes that significant data gaps call for MFA-based knowledge to inform policies that address the barriers and the potential socio-technical changes that can reshape plastic flows. The cases, playing out across the whole value chain and four different application areas, provide insights that are potentially applicable more widely to circular economy transition processes in Europe.

    Autonomy, Efficiency, Privacy and Traceability in Blockchain-enabled IoT Data Marketplace

    Personal data generated by IoT devices is a new economic asset that individuals can trade to generate revenue on emerging data marketplaces. Blockchain technology can disrupt the data marketplace and make trading more democratic, trustworthy, transparent, and secure. Nevertheless, adopting blockchain for an IoT data marketplace requires consideration of autonomy and efficiency, privacy, and traceability. Conventional centralized approaches are built around a trusted third party that conducts and controls all management operations, such as managing contracts, pricing, billing, and reputation mechanisms, raising the concern that providers lose control over their data. Tackling this issue requires an efficient, autonomous, and fully functional marketplace system with no trusted third party involved in operational tasks. Moreover, inefficient allocation of buyers' demands on battery-operated IoT devices makes it hard for providers to serve multiple buyers' demands simultaneously in real time without violating their service level agreements (SLAs). Furthermore, a poor privacy decision that makes personal data accessible to unknown or arbitrary buyers may have adverse consequences and lead to privacy violations for providers. Lastly, a buyer could buy data from one marketplace and, without the provider's knowledge, resell it to users registered in other marketplaces, leading to monetary loss or privacy violations for the provider. Addressing this requires a data ownership traceability mechanism that can track changes in data ownership caused by trading within and across marketplace systems. However, data ownership traceability is hard because of ownership ambiguity, undisclosed reselling, and the dispersal of ownership across multiple marketplaces. This thesis makes the following novel contributions. First, we propose MartChain, an autonomous and efficient IoT data marketplace offering key mechanisms that leverage smart contracts to record agreement details, participant ratings, and data prices in the blockchain without involving any mediator. Second, MartChain is underpinned by an Energy-aware Demand Selection and Allocation (EDSA) mechanism for optimally selecting and allocating buyers' demands on the provider's IoT devices while satisfying battery, quality, and allocation constraints. EDSA maximizes the provider's revenue while meeting the buyers' requirements and ensuring the completion of the selected demands without interruption. A proof-of-concept implementation on the Ethereum blockchain shows that our approach is viable and benefits both provider and buyer by creating an autonomous and efficient real-time data trading model. Next, we propose KYBChain, a Know-Your-Buyer mechanism for a privacy-aware decentralized IoT data marketplace that performs a multi-faceted assessment of buyers' characteristics and evaluates their privacy rating. The privacy rating empowers providers to make privacy-aware, informed decisions about data sharing. A quantitative analysis of the utility of the privacy rating demonstrates that its use by providers decreases both data leakage risk and generated revenue, in line with the classical risk-utility trade-off.
Evaluation of KYBChain on Ethereum reveals that the overheads in gas consumption, throughput, and latency introduced by our privacy rating mechanism, relative to a marketplace without one, are insignificant compared to its privacy gains. Finally, we propose TrailChain, which generates a trusted trade trail for tracking data ownership across multiple decentralized marketplaces. Our solution includes mechanisms for detecting unauthorized data reselling to prevent privacy violations, and a fair resell-payment-sharing scheme to distribute payments among data owners for authorized reselling. We performed qualitative and quantitative evaluations demonstrating the effectiveness of TrailChain in tracking data ownership using four private Ethereum networks. A qualitative security analysis shows that TrailChain is resilient against several malicious activities and security attacks. Simulations show that our method detects undisclosed reselling both within the same marketplace and across different marketplaces, identifies whether the provider authorized the reselling, and fairly distributes the revenue among the data owners at marginal overhead.
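EDSA's selection step can be viewed as constrained revenue maximization. The sketch below is a deliberate simplification assumed for illustration, a 0/1 knapsack over a single device's battery budget solved by dynamic programming; the thesis's actual mechanism additionally handles quality and allocation constraints across multiple devices:

```python
def select_demands(demands, battery_budget):
    """Pick the subset of buyers' demands that maximizes revenue without
    exceeding the device's battery budget -- a 0/1 knapsack solved by
    dynamic programming. demands: list of (name, revenue, energy_cost)."""
    # best[b] = (revenue, chosen demand names) using at most b energy units
    best = [(0, [])] * (battery_budget + 1)
    for name, revenue, energy in demands:
        for b in range(battery_budget, energy - 1, -1):
            cand_rev = best[b - energy][0] + revenue
            if cand_rev > best[b][0]:
                best[b] = (cand_rev, best[b - energy][1] + [name])
    return best[battery_budget]

demands = [              # (buyer demand, revenue, energy cost) -- toy numbers
    ("temp@1Hz", 8, 5),
    ("gps@0.1Hz", 11, 7),
    ("cam-snapshot", 14, 9),
]
revenue, chosen = select_demands(demands, battery_budget=14)
print(revenue, chosen)   # 22 ['temp@1Hz', 'cam-snapshot']
```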

    Digital Copyright Protection: Focus on Some Relevant Solutions

    Copyright protection of digital content is a significant problem on the current Internet, since content digitalization and high-performance interconnection networks have greatly increased the possibilities for reproducing and distributing digital content. Digital Rights Management (DRM) systems try to prevent the inappropriate or illegal use of copyrighted digital content. They are promoted by the major global media players, but they are also perceived as proprietary solutions that give rise to classic problems of privacy and fair use. Watermarking protocols, on the other hand, have become a possible solution to the problem of copyright protection. They have evolved over the last decade, and interesting proposals have been designed. This paper first presents current trends in the most significant solutions to the problem of copyright protection based on DRM systems, and then focuses on the most promising approaches in the field of watermarking protocols. The examined protocols are discussed in order to identify which of them best represent the right trade-off between opposing goals, such as security and ease of use, so as to show that it is possible to implement open solutions compatible with the current web context without resorting to proprietary architectures or impairing the protection of copyrighted digital content.

    Vbswp-CeaH: Vigorous Buyer-Seller Watermarking Protocol without Trusted Certificate Authority for Copyright Protection in Cloud Environment through Additive Homomorphism

    Cloud-based storage ensures the secure dissemination of media, and authentication and integrity are important aspects of digital media distribution. Encryption-based techniques protect media between the communicating parties involved in a transaction, but the challenge is how to restrict digital media that is illegally redistributed by authorized users; digital watermarking and encryption-based methods alone are not sufficient to provide copyright protection. Watermarking protocols are used to protect the intellectual property of both the customer and the service provider. This paper provides a vigorous buyer-seller watermarking protocol without a trusted certificate authority for copyright protection in the cloud environment. The work uses the cloud as an infrastructure-as-a-service provider for storing credentials, such as public and private keys and the digital certificates of the interacting parties. The scheme uses additively homomorphic encryption with an effective key exchange algorithm for exchanging digital media. The proposed approach addresses the problems of anonymity and copy deterrence and protects the digital rights of buyer and seller, which are pressing issues in information security. Furthermore, the experimental results show that the proposed protocol is flexible and secure even over a non-secure communication channel. We used performance measures such as PSNR, NCC, and time cost to check the integrity of the proposed protocol; the conducted experiments show strong robustness and high imperceptibility of the watermark and the watermarked images.
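The role of additive homomorphism in this protocol family can be sketched with textbook Paillier encryption: the seller can add the buyer's watermark to the content entirely in the encrypted domain, so neither party sees the other's plaintext. The parameters below are toy-sized and the scheme is unpadded, purely to illustrate the E(a) * E(b) mod n^2 = E(a + b) property such protocols rely on; it is not the paper's actual construction:

```python
from math import gcd

# Textbook Paillier with toy primes (insecure; illustration only).
p, q = 1009, 1013
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                            # valid since g = n + 1

def encrypt(m, r):
    # c = (1 + n)^m * r^n mod n^2, with gcd(r, n) = 1
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    u = pow(c, lam, n2)
    return ((u - 1) // n) * mu % n

# Buyer encrypts its secret watermark value; seller encrypts a pixel of the
# content and embeds the watermark homomorphically, never seeing it.
watermark, pixel = 7, 200
c_wm = encrypt(watermark, r=12345)
c_px = encrypt(pixel, r=54321)
c_marked = (c_px * c_wm) % n2     # E(pixel) * E(watermark) = E(pixel + watermark)

print(decrypt(c_marked))          # 207 = 200 + 7
```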