285 research outputs found
The Public Performance Of Sanctions In Insolvency Cases: The Dark, Humiliating, And Ridiculous Side Of The Law Of Debt In The Italian Experience. A Historical Overview Of Shaming Practices
This study provides a diachronic comparative overview of how the law of debt has been applied by certain institutions in Italy. Specifically, it offers historical and comparative insights into the public performance of sanctions for insolvency through shaming and customary practices in Roman Imperial Law, in the Middle Ages, and in later periods.
The first part of the essay focuses on the Roman bonorum cessio culo nudo super lapidem and on the medieval customary institution called pietra della vergogna (stone of shame), which originates from the Roman model.
The second part of the essay analyzes the social function of the zecca and the pittima Veneziana during the Republic of Venice, and of the practice of lu soldate a castighe (no translation is possible).
The author adopts a functionalist approach, applying arguments and concepts from the contemporary context to this historical analysis of ancient institutions that we would now consider ridiculous.
The article shows that the customary norms that play a crucial regulatory role in online interactions today also operated in the public square of the past. One such tool is shaming. As in contemporary online settings, shaming practices in the public square of historic periods were used to enforce the rules of civility in a given community. Such practices can be seen as virtuous when they are used as a tool to pursue positive change in forces entrenched in the culture, and thus to address social wrongs considered beyond the reach of the law, or to address human rights abuses.
Trustworthy Decentralized Last Mile Delivery Framework Using Blockchain
Fierce competition and a rapidly growing eCommerce market are painful headaches for logistics companies. In 2021, Canada Post’s parcel volume peaked at 361 million units, at a minimum charge of $10 per parcel. Last-Mile Delivery (LMD) is the final leg of the supply chain, ending with the package at the customer’s doorstep. LMD involves moving small shipments to geographically dispersed locations with high expectations on service levels and precise time windows. It is therefore the most complex and costly logistics process, accounting for more than 50% of the overall supply chain cost. Crowdshipping innovations such as Uber and Amazon Flex help overcome this inefficiency and provide an outstanding delivery experience by enlisting freelancers who are willing to deliver packages when they are nearby. However, apart from the centralized nature of crowdshipping platforms, retailers pay a rising fee for outsourcing the delivery process. Moreover, these platforms lack transparency, and most of them, if not all, are platform monopolies in the making.
New technologies such as blockchain have recently introduced an opportunity to improve logistics and LMD operations, and several papers in the literature have suggested employing blockchain and other cryptographic techniques for parcel delivery. Hence, this thesis presents a blockchain-based, intermediary-free crowd-logistics model and investigates the challenges that could hinder the adoption of this solution, such as user trust, data safety, security of transactions, and tracking service quality. Our framework includes a security assessment that examines the possible vulnerabilities of the proposed design, together with suggestions for mitigation and protection. It also encourages couriers to act honestly by using a decentralized reputation model that rates couriers based on their past behavior. A security analysis of our proposed system is provided, and the complete code of the smart contract has been made publicly available on GitHub.
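The thesis's smart contract is available on GitHub but not reproduced in this abstract; as a hedged illustration of how a courier reputation model of this kind can work, here is a minimal Python sketch in which only verified participants in a completed delivery may rate the courier, and reputation is the average of past ratings. All names and rules below are hypothetical, not the thesis's exact contract interface.

from collections import defaultdict

class ReputationLedger:
    def __init__(self):
        self.ratings = defaultdict(list)   # courier -> list of (rater, score)
        self.completed = set()             # (rater, courier, delivery_id)

    def record_delivery(self, rater, courier, delivery_id):
        """Only parties to a completed delivery may later rate it."""
        self.completed.add((rater, courier, delivery_id))

    def rate(self, rater, courier, delivery_id, score):
        if (rater, courier, delivery_id) not in self.completed:
            raise PermissionError("rater was not part of this delivery")
        if not 1 <= score <= 5:
            raise ValueError("score must be between 1 and 5")
        # Remove the entry so each delivery can be rated exactly once.
        self.completed.remove((rater, courier, delivery_id))
        self.ratings[courier].append((rater, score))

    def reputation(self, courier):
        """Average of all verified past ratings, or None if unrated."""
        scores = [s for _, s in self.ratings[courier]]
        return sum(scores) / len(scores) if scores else None

Binding each rating to a completed delivery mirrors the abstract's goal of grounding reputation in verifiable past behavior rather than arbitrary reviews.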
Hardening Tor Hidden Services
Tor is an overlay anonymization network that provides anonymity for clients surfing the web, but it also allows hosting anonymous services, called hidden services. These enable whistleblowers and political activists to express their opinions and resist censorship. Administering a hidden service is not trivial and requires extensive knowledge, because Tor uses a comprehensive protocol and relies on volunteers. Meanwhile, attackers can spend significant resources to decloak hidden services. This thesis aims to improve the security of hidden services by providing practical guidelines and a theoretical architecture. First, vulnerabilities specific to hidden services are analyzed through an academic literature review. To model realistic real-world attackers, court documents are analyzed to determine their procedures. Both reviews classify the identified vulnerabilities into general categories.
Afterward, a risk assessment process is introduced, and existing risks for hidden services and their operators are determined. The main contributions of this thesis are practical guidelines for hidden service operators and a theoretical architecture. The former provide operators with a good overview of practices to mitigate attacks. The latter is a comprehensive infrastructure that significantly increases the security of hidden services and alleviates problems in the Tor protocol. Limitations and the transfer into practice are then analyzed. Finally, possibilities for future research are outlined.
Blockchain-Coordinated Frameworks for Scalable and Secure Supply Chain Networks
Supply chains have progressed through time from a few regional traders to complicated business networks. As a result, supply chain management (SCM) systems now rely significantly on the digital revolution for the privacy and security of data. Due to key qualities of blockchain, such as transparency, immutability, and decentralization, it has recently gained a lot of interest as a way to solve security, privacy, and scalability problems in supply chains. However, conventional blockchains are not appropriate for supply chain ecosystems because they are computationally costly, have limited potential to scale, and fail to provide trust. Consequently, owing to this lack of trust and coordination, supply chains tend to fail to foster trust among the network’s participants. Assuring data privacy in a supply chain ecosystem is another challenge: if information is shared with a large number of participants without establishing data privacy, access control risks arise in the network. Protecting data privacy is a particular concern when sending corporate data such as locations, manufacturing supplies, and demand information. The third challenge in supply chain management is scalability, which continues to be a significant barrier to adoption: the number of transactions in a supply chain tends to grow along with the number of nodes in the network, so scalability is essential for blockchain adoption in supply chain networks. This thesis seeks to address the challenges of privacy, scalability, and trust by providing frameworks for how to effectively combine blockchains with supply chains. It makes four novel contributions.

First, to solve the data privacy challenge, it develops AccessChain, a blockchain-based framework with an Attribute-Based Access Control (ABAC) model that enables distributed, fine-grained, dynamic access control management for SCM. The AccessChain model uses two types of ledgers: local ledgers, which store business contracts between stakeholders and manage the ABAC model, and a global ledger, which records transaction data. Combined with the ABAC model and blockchain technology (BCT), AccessChain enables decentralized, fine-grained, and dynamic access control management in SCM. The framework enables a systematic approach that benefits the supply chain, and the experiments yield convincing results. Furthermore, performance monitoring shows that AccessChain’s response time with four local ledgers is acceptable, so it provides significantly greater scalability.

Next, a framework for reducing the bullwhip effect (BWE) in SCM is proposed, combining data visibility with trust. BWE is first observed in the supply chain, and a blockchain architecture design is then used to minimize it. Full sharing of demand data has been shown to improve the robustness of overall performance in a multi-echelon supply chain environment, especially for BWE mitigation and cumulative cost reduction. When it comes to providing access to data, information sharing using a blockchain has some obvious benefits in a supply chain. Furthermore, when data sharing is distributed, parties in the supply chain have fair access to other parties’ data, even when they are farther downstream.
Sharing customer demand is important in a supply chain to enhance decision-making, reduce costs, and promote the final end product. This work also explores the ability of BCT, as a distributed ledger approach, to create a trust-enhanced environment in which stakeholders can share their information effectively. To provide visibility and coordination through the blockchain consensus process, a new consensus algorithm, Reputation-based Proof-of-Cooperation (RPoC), is proposed for blockchain-based SCM; it does not require validators to solve a mathematical puzzle before storing a new block. RPoC is an efficient and scalable consensus algorithm that selects consensus nodes dynamically and permits a large number of nodes to participate in the consensus process. The algorithm decreases the workload on individual nodes while increasing consensus performance by allocating the transaction verification process to specific nodes. Extensive theoretical analysis and experimentation ground the suitability of the proposed algorithm in terms of scalability and efficiency.
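The abstract does not spell out RPoC's exact selection rule, so the following Python sketch only illustrates the general idea of reputation-weighted, dynamic selection of consensus nodes, with reputations updated from observed behavior. The weighting and update rules are assumptions for illustration, not the thesis's algorithm.

import random

class Node:
    def __init__(self, node_id, reputation=0.5):
        self.node_id = node_id
        self.reputation = reputation  # score in [0, 1]

def select_consensus_nodes(nodes, k, seed=None):
    """Pick k distinct consensus nodes, weighting the draw by reputation."""
    rng = random.Random(seed)
    pool = list(nodes)
    chosen = []
    for _ in range(min(k, len(pool))):
        weights = [n.reputation for n in pool]
        pick = rng.choices(pool, weights=weights, k=1)[0]
        chosen.append(pick)
        pool.remove(pick)  # without replacement
    return chosen

def update_reputation(node, behaved_honestly, alpha=0.1):
    """Exponential moving average over observed (dis)honest behavior."""
    observed = 1.0 if behaved_honestly else 0.0
    node.reputation = (1 - alpha) * node.reputation + alpha * observed

Weighted sampling without replacement keeps selection dynamic while still letting many nodes participate over time, which matches the scalability goal stated above.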
The thesis concludes with a blockchain-enabled framework that addresses the issue of preserving privacy and security in an open-bid auction system. This work implements a bid management system in a private blockchain environment to provide a secure bidding scheme. The novelty of this framework derives from an enhanced approach to integrating blockchain structures that replaces the original chain structure with a tree structure. Throughout the online world, user privacy is a primary concern, because the electronic environment enables the collection of personal data. Hence, a suitable cryptographic protocol for an open-bid auction atop a blockchain is proposed. The primary aim is to achieve security and privacy with greater efficiency, which largely depends on the effectiveness of the encryption algorithms used by the blockchain. Essentially, this work considers Elliptic Curve Cryptography (ECC) and a dynamic cryptographic accumulator encryption algorithm to enhance security between auctioneer and bidder. The proposed e-bidding scheme and the findings from this study should foster the further growth of blockchain strategies.
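As a rough, toy-scale illustration of what a dynamic cryptographic accumulator provides (using an RSA-style construction for brevity, not the exact scheme of this work), the Python sketch below shows how membership of an accumulated bid can be proven and verified, and how new elements can be added dynamically. The parameters are insecure toy values, and mapping bids to distinct primes is assumed.

from math import prod

N = 3233  # toy modulus (61 * 53); a real scheme needs a 2048-bit+ RSA modulus
g = 2     # public base

def accumulate(elements):
    """Accumulator value over a set of prime-represented elements."""
    return pow(g, prod(elements), N)

def add(acc, x):
    """Dynamically add a new element without recomputing from scratch."""
    return pow(acc, x, N)

def witness(elements, x):
    """Membership witness for x: the accumulator of all other elements."""
    others = [e for e in elements if e != x]
    return pow(g, prod(others), N)

def verify(acc, x, wit):
    """x is in the accumulated set iff wit^x equals the accumulator."""
    return pow(wit, x, N) == acc

bids = [101, 103, 107]           # bids mapped to distinct primes
acc = accumulate(bids)
w = witness(bids, 103)
assert verify(acc, 103, w)       # valid membership proof
assert not verify(acc, 109, w)   # 109 was never accumulated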
Resilient and Scalable Forwarding for Software-Defined Networks with P4-Programmable Switches
Traditional networking devices support only fixed features and limited configurability.
Network softwarization leverages programmable software and hardware platforms to remove those limitations.
In this context, the concept of programmable data planes allows one to directly program the packet processing pipeline of networking devices and to create custom control plane algorithms.
This flexibility enables the design of novel networking mechanisms where the status quo struggles to meet the high demands of next-generation networks like 5G, the Internet of Things, cloud computing, and Industry 4.0.
P4 is the most popular technology to implement programmable data planes.
However, programmable data planes, and in particular, the P4 technology, emerged only recently.
Thus, P4 support for some well-established networking concepts is still lacking and several issues remain unsolved due to the different characteristics of programmable data planes in comparison to traditional networking.
The research of this thesis focuses on two open issues of programmable data planes.
First, it develops resilient and efficient forwarding mechanisms for the P4 data plane, as there are no satisfactory state-of-the-art best practices yet.
Second, it enables BIER in high-performance P4 data planes.
BIER is a novel, scalable, and efficient transport mechanism for IP multicast traffic that so far has only very limited support on high-performance forwarding platforms.
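For readers unfamiliar with BIER (RFC 8279): each packet carries a bitstring with one bit per egress router, and a forwarder replicates the packet once per next hop while masking each copy's bitstring so every egress is served exactly once. The thesis's contribution is realizing such forwarding in high-performance P4 data planes; the minimal Python sketch below only illustrates the replicate-and-mask loop itself, with a hypothetical bit-indexed forwarding table.

# Bit Index Forwarding Table: bit position -> (next-hop neighbor, forwarding
# bit mask covering every egress reachable via that neighbor).
BIFT = {
    0: ("neighbor_A", 0b0011),
    1: ("neighbor_A", 0b0011),
    2: ("neighbor_B", 0b0100),
    3: ("neighbor_C", 0b1000),
}

def forward_bier(bitstring: int, send) -> None:
    """Replicate a packet toward every egress whose bit is set."""
    remaining = bitstring
    while remaining:
        bit = remaining & -remaining          # lowest set bit
        pos = bit.bit_length() - 1
        neighbor, fbm = BIFT[pos]
        # Mask the copy's bitstring so each egress is served exactly once.
        send(neighbor, remaining & fbm)
        remaining &= ~fbm                     # clear bits this copy handles

forward_bier(0b1011, lambda nh, bs: print(f"to {nh}: {bs:04b}"))
# prints: to neighbor_A: 0011, then to neighbor_C: 1000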
The main results of this thesis are published as eight peer-reviewed publications and one post-publication peer-reviewed publication. The results cover the development of suitable resilience mechanisms for P4 data planes, the development and implementation of resilient BIER forwarding in P4, and extensive evaluations of all developed and implemented mechanisms. Furthermore, the results contain a comprehensive P4 literature study.
Two more peer-reviewed papers contain additional content that is not directly related to the main results.
They implement congestion avoidance mechanisms in P4 and develop a scheduling concept to find cost-optimized load schedules based on day-ahead forecasts.
Towards a human-centric data economy
Spurred by widespread adoption of artificial intelligence and machine learning, “data” is becoming a key production factor, comparable in importance to capital, land, or labour in an increasingly digital economy. In spite of an ever-growing demand for third-party data in the B2B market, firms are generally reluctant to share their information. This is due to the unique characteristics of “data” as an economic good (a freely replicable, non-depletable asset holding a highly combinatorial and context-specific value), which moves digital companies to hoard and protect their “valuable” data assets, and to integrate across the whole value chain seeking to monopolise the provision of innovative services built upon them. As a result, most of those valuable assets still remain unexploited in corporate silos nowadays.
This situation is shaping the so-called data economy around a number of champions, and it is hampering the benefits of a global data exchange on a large scale. Some analysts have estimated the potential value of the data economy at US$2.5 trillion globally by 2025. Not surprisingly, unlocking the value of data has become a central policy of the European Union, which estimated the size of the data economy at €827 billion for the EU27 in the same period. Within the scope of the European Data Strategy, the European Commission is also steering relevant initiatives aimed at identifying relevant cross-industry use cases involving different verticals, and at enabling sovereign data exchanges to realise them.
Among individuals, the massive collection and exploitation of personal data by digital firms in exchange for services, often with little or no consent, has raised a general concern about privacy and data protection. Apart from spurring recent legislative developments in this direction, this concern has prompted warnings against the unsustainability of the existing digital economics (few digital champions, potential negative impact on employment, growing inequality), some of which propose paying people for their data in a sort of worldwide data labour market as a potential solution to this dilemma [114, 115, 155].
From a technical perspective, we are far from having the technology and algorithms required to enable such a human-centric data economy. Even its scope is still blurry, and the question of the value of data is, at the very least, controversial. Research works from different disciplines have studied the data value chain, different approaches to the value of data, how to price data assets, and novel data marketplace designs. At the same time, complex legal and ethical issues with respect to the data economy have arisen around privacy, data protection, and ethical AI practices.

In this dissertation, we start by exploring the data value chain and how entities trade data assets over the Internet. We carry out what is, to the best of our understanding, the most thorough survey of commercial data marketplaces. In this work, we have catalogued and characterised ten different business models, including those of personal information management systems: companies born in the wake of recent data protection regulations and aiming to empower end users to take control of their data. We have also identified the challenges faced by different types of entities, and what kind of solutions and technology they are using to provide their services.
We then present a first-of-its-kind measurement study that sheds light on the prices of data in the market, using a novel methodology. We study how ten commercial data marketplaces categorise and classify data assets, and which categories of data command higher prices. We also develop classifiers for comparing data products across different marketplaces, and we study the characteristics of the most valuable data assets and the features that specific vendors use to set the price of their data products. Based on this information, and adding data products offered by 33 other data providers, we develop a regression analysis to reveal features that correlate with the prices of data products. As a result, we also implement the basic building blocks of a novel data pricing tool capable of providing a hint of the market price of a new data product using just its metadata as input. This tool would provide more transparency on the prices of data products in the market, which will help in pricing data assets and in avoiding the inherent price fluctuation of nascent markets.
Next we turn to topics related to data marketplace design. In particular, we study how buyers can select and purchase suitable data for their tasks without requiring a priori access to such data in order to make a purchase decision, and how marketplaces can distribute payoffs for a data transaction combining data from different sources among the corresponding providers, be they individuals or firms. The difficulty of both problems is further exacerbated in a human-centric data economy, where buyers have to choose among data of thousands of individuals, and where marketplaces have to distribute payoffs to thousands of people contributing personal data to a specific transaction.
Regarding the selection process, we compare different purchase strategies depending on the level of information available to data buyers at the time of making decisions. A first methodological contribution of our work is proposing a data evaluation stage prior to datasets being selected and purchased by buyers in a marketplace. We show that buyers can significantly improve the performance of the purchasing process simply by being provided with a measurement of the performance of their models when trained by the marketplace with individual eligible datasets. We design purchase strategies that exploit such functionality, and we call the resulting algorithm Try Before You Buy. Our work demonstrates over synthetic and real datasets that it can lead to near-optimal data purchasing with only O(N) execution time instead of the exponential O(2^N) time needed to calculate the optimal purchase.
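As an illustration under simplifying assumptions, a Try Before You Buy-style strategy can be sketched in Python as follows. The marketplace-side evaluate() oracle (returning one validation score per candidate dataset) and the budgeted greedy selection rule are assumptions for this sketch, not the dissertation's exact algorithm.

def try_before_you_buy(candidates, evaluate, budget):
    """Select datasets with O(N) oracle calls instead of O(2^N).

    candidates: list of (dataset_id, price) pairs, with price > 0
    evaluate:   dataset_id -> validation score measured by the marketplace
    """
    # One evaluation per dataset: O(N) oracle calls in total.
    scored = [(evaluate(ds), ds, price) for ds, price in candidates]
    # Prefer datasets with the best score per unit of price.
    scored.sort(key=lambda t: t[0] / t[2], reverse=True)
    purchased, spent = [], 0.0
    for score, ds, price in scored:
        if spent + price <= budget:
            purchased.append(ds)
            spent += price
    return purchased

The key point is that the evaluation stage replaces combinatorial trial of dataset subsets with a single marketplace-side measurement per dataset.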
With regard to the payoff distribution problem, we focus on computing the relative value of spatio-temporal datasets combined in marketplaces for predicting transportation demand and travel time in metropolitan areas. Using large datasets of taxi rides from Chicago, Porto, and New York, we show that the value of data differs for each individual and cannot be approximated by its volume. Our results reveal that even more complex approaches based on the “leave-one-out” value are inaccurate. Instead, richer, well-established notions of value from economics and game theory, such as the Shapley value, need to be employed if one wishes to capture the complex effects of mixing different datasets on the accuracy of forecasting algorithms.
However, the Shapley value entails serious computational challenges: its exact calculation requires repetitively training and evaluating every combination of data sources, and hence O(N!) or O(2^N) computational time, which is unfeasible for complex models or thousands of individuals. Moreover, our work paves the way to new methods of measuring the value of spatio-temporal data. We identify heuristics, such as entropy or similarity to the average, that show a significant correlation with the Shapley value and can therefore be used to overcome the significant computational challenges posed by Shapley approximation algorithms in this specific context.
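To make the computational trade-off concrete, here is a standard Monte Carlo permutation-sampling estimator of the Shapley value, the usual way to sidestep the O(N!) exact computation. The toy additive utility in the example stands in for the expensive train-and-evaluate step and is purely illustrative.

import random

def shapley_mc(sources, utility, samples=1000, seed=0):
    """Estimate Shapley values by averaging marginal contributions
    over randomly sampled orderings of the data sources."""
    rng = random.Random(seed)
    value = {s: 0.0 for s in sources}
    for _ in range(samples):
        perm = rng.sample(sources, len(sources))  # random permutation
        coalition, prev = [], utility([])
        for s in perm:
            coalition.append(s)
            curr = utility(coalition)
            value[s] += curr - prev   # marginal contribution of s
            prev = curr
    return {s: v / samples for s, v in value.items()}

# Toy additive utility (real model training is far costlier):
vals = shapley_mc(["alice", "bob"],
                  lambda c: 2 * ("alice" in c) + 1 * ("bob" in c))
# vals converges to {"alice": 2.0, "bob": 1.0}

Each sampled permutation still costs N utility evaluations, which is why cheap correlated heuristics such as entropy are attractive in this setting.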
We conclude with a number of open issues and propose further research directions that leverage the contributions and findings of this dissertation. These include monitoring data transactions to better measure data markets, and complementing market data with actual transaction prices to build a more accurate data pricing tool. A human-centric data economy would also require that the contributions of thousands of individuals to machine learning tasks be calculated daily. For that to be feasible, we need to further optimise the efficiency of the data purchasing and payoff calculation processes in data marketplaces. In that direction, we also point to some alternatives to repetitively training and evaluating a model in order to select data based on Try Before You Buy and to approximate the Shapley value. Finally, we discuss the challenges and potential technologies that could help build a federation of standardised data marketplaces.
The data economy will develop fast in the upcoming years, and researchers from different disciplines will work together to unlock the value of data and make the most out of it. Maybe the proposal of getting paid for our data and our contribution to the data economy will finally take off, or maybe other proposals, such as the robot tax, will ultimately be used to balance the power between individuals and tech firms in the digital economy. Still, we hope our work sheds light on the value of data, and contributes to making the price of data more transparent and, eventually, to moving towards a human-centric data economy.
A Generic Construction of an Anonymous Reputation System and Instantiations from Lattices
With an anonymous reputation system, one can realize the process of rating sellers anonymously in an online shop.
While raters can stay anonymous, sellers still have the guarantee that they can only be reviewed by raters who bought their product.
We present the first generic construction of a reputation system from basic building blocks, namely digital signatures, encryption schemes, non-interactive zero-knowledge proofs, and linking indistinguishable tags.
We then show the security of the reputation system in a strong security model.
Among others, we instantiate the generic construction with building blocks based on lattice problems, leading to the first module lattice-based reputation system.
Research Philosophy of Modern Cryptography
Proposing novel cryptography schemes (e.g., encryption, signatures, and protocols) is one of the main research goals in modern cryptography. In this paper, based on more than 800 research papers since 1976 that we have surveyed, we introduce the research philosophy of cryptography behind these papers. We use “benefits” and “novelty” as the keywords to introduce the research philosophy of proposing new schemes, assuming that there is already one scheme proposed for a cryptography notion. Next, we introduce how benefits were explored in the literature, categorizing the methodology into 3 ways of pursuing benefits, 6 types of benefits, and 17 benefit areas. As examples, we introduce 40 research strategies within these benefit areas that were invented in the literature. The introduced research strategies cover most cryptography schemes published in top-tier cryptography conferences.
Ibex: Privacy-preserving ad conversion tracking and bidding (full version)
This paper introduces Ibex, an advertising system that reduces the amount of data collected about users while still allowing advertisers to bid in real-time ad auctions and measure the effectiveness of their ad campaigns. Specifically, Ibex addresses an issue in recent proposals such as Google’s Privacy Sandbox Topics API, in which browsers send information about topics that are of interest to a user to advertisers and demand-side platforms (DSPs). DSPs use this information to (1) determine how much to bid in the auction for a user who is interested in particular topics, and (2) measure how well their ad campaign does for a given audience (i.e., measure conversions). While Topics and related proposals reduce the amount of user information that is exposed, they still reveal user preferences. In Ibex, browsers send user information in an encrypted form that still allows DSPs and advertisers to measure conversions, compute aggregate statistics such as histograms about users and their interests, and obliviously bid in auctions without learning for whom they are bidding. Our implementation of Ibex shows that creating histograms is 1.7–2.5× more expensive for browsers than disclosing user information, and Ibex’s oblivious bidding protocol can finish auctions within 550 ms. We think this makes Ibex capable of preserving a good user experience while improving privacy.
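Ibex's actual protocols are cryptographic and more involved; purely to illustrate the flavor of aggregate measurement without per-user disclosure, here is a minimal two-server additive secret-sharing sketch in Python, in which each server sees only a random-looking share yet the aggregated shares combine into the topic histogram. This is a conceptual stand-in, not Ibex's design.

import random

P = 2**61 - 1  # public prime modulus

def share(one_hot):
    """Split a browser's one-hot topic vector into two random shares."""
    s1 = [random.randrange(P) for _ in one_hot]
    s2 = [(v - r) % P for v, r in zip(one_hot, s1)]
    return s1, s2

def aggregate(shares):
    """Each server sums the shares it received, coordinate-wise."""
    return [sum(col) % P for col in zip(*shares)]

def combine(agg1, agg2):
    """Recombine the two aggregated shares into the final histogram."""
    return [(a + b) % P for a, b in zip(agg1, agg2)]

# Three browsers, four topics:
browsers = [[0, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]]
shares = [share(v) for v in browsers]
hist = combine(aggregate([s1 for s1, _ in shares]),
               aggregate([s2 for _, s2 in shares]))
assert hist == [0, 2, 0, 1]  # per-topic counts, no per-user vector revealed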
Archives and Records
This open access book addresses the protection of privacy and personality rights in public records, records management, historical sources, and archives, and historical and current access to them, in a broad international comparative perspective. Considering the question “can archiving pose a security risk to the protection of sensitive data and human rights?”, it analyses data security and presents several significant cases of the misuse of sensitive personal data, such as census data or medical records. It examines archival inflation and the minimisation and reduction of data in public records and archives, including data anonymisation and pseudonymisation, and the risks of deanonymisation and reidentification of persons. The book looks at post-mortem privacy protection and the relationship between the right to know and the right to be forgotten, and introduces a specific model of four categories of the right to be forgotten. In its conclusion, the book presents a set of recommendations for archives and records management.