Spectrum auctions: designing markets to benefit the public, industry and the economy
Access to the radio spectrum is vital for modern digital communication. It is an essential component for smartphone capabilities, the Cloud, the Internet of Things, autonomous vehicles, and multiple other new technologies. Governments use spectrum auctions to decide which companies should use which parts of the radio spectrum. Successful auctions can fuel rapid innovation in products and services, unlock substantial economic benefits, build comparative advantage across all regions, and create billions of dollars in government revenue. Poor auction strategies can leave bandwidth unsold and delay innovation, sell national assets to firms too cheaply, or create uncompetitive markets with high mobile prices and patchy coverage that stifle economic growth. Corporate bidders regularly complain that auctions raise their costs, while government critics argue that insufficient revenues are raised. The cross-national record shows many examples of both highly successful auctions and miserable failures. Drawing on experience from the UK and other countries, senior regulator Geoffrey Myers explains how to optimise the regulatory design of auctions, from initial planning to final implementation. Spectrum Auctions offers unrivalled expertise for regulators and economists engaged in practical auction design or company executives planning bidding strategies. For applied economists, teachers, and advanced students this book provides unrivalled insights into market design and public management. Providing clear analytical frameworks, case studies of auctions, and stage-by-stage advice, it is essential reading for anyone interested in designing successful, public-interest spectrum auctions.
Towards a human-centric data economy
Spurred by widespread adoption of artificial intelligence and machine learning, “data” is becoming
a key production factor, comparable in importance to capital, land, or labour in an increasingly
digital economy. In spite of an ever-growing demand for third-party data in the B2B
market, firms are generally reluctant to share their information. This is due to the unique characteristics
of “data” as an economic good (a freely replicable, non-depletable asset holding a highly
combinatorial and context-specific value), which moves digital companies to hoard and protect
their “valuable” data assets, and to integrate across the whole value chain seeking to monopolise
the provision of innovative services built upon them. As a result, most of these valuable assets remain unexploited in corporate silos today.
This situation is shaping the so-called data economy around a handful of champions, and it is hampering the benefits of a global data exchange on a large scale. Some analysts have estimated the potential value of the data economy at US$2.5 trillion globally by 2025. Not surprisingly, unlocking the value of data has become a central policy of the European Union, which also estimated the size of the data economy at €827 billion for the EU27 over the same period. Within the scope of the European Data Strategy, the European Commission is also steering initiatives aimed at identifying relevant cross-industry use cases involving different verticals, and at enabling sovereign data exchanges to realise them.
Among individuals, the massive collection and exploitation of personal data by digital firms in exchange for services, often with little or no consent, has raised general concern about privacy and data protection. Apart from spurring recent legislative developments in this direction, this concern has prompted warnings about the unsustainability of the existing digital economics (few digital champions, potential negative impact on employment, growing inequality), with some proposing that people be paid for their data in a sort of worldwide data labour market as a potential solution to this dilemma [114, 115, 155].
From a technical perspective, we are far from having the required technology and algorithms
that will enable such a human-centric data economy. Even its scope is still blurry, and the question of the value of data remains, at the very least, controversial. Research from different disciplines has studied the data value chain, different approaches to the value of data, how to price data assets, and novel data marketplace designs. At the same time, complex legal and ethical issues with respect to the data economy have arisen around privacy, data protection, and ethical AI practices.

In this dissertation, we start by exploring the data value chain and how entities trade data assets
over the Internet. We carry out what is, to the best of our understanding, the most thorough survey
of commercial data marketplaces. In this work, we have catalogued and characterised ten different
business models, including those of personal information management systems: companies born in the wake of recent data protection regulations that aim to empower end users to take control of their data. We have also identified the challenges faced by different types of entities,
and what kind of solutions and technology they are using to provide their services.
Then we present a first-of-its-kind measurement study that sheds light on the prices of data in the market using a novel methodology. We study how ten commercial data marketplaces categorise and classify data assets, and which categories of data command higher prices. We also develop classifiers for comparing data products across different marketplaces, and we study the characteristics of the most valuable data assets and the features that specific vendors use to set the price of their data products. Based on this information, and adding data products offered by 33 other data providers, we develop a regression analysis revealing features that correlate with the prices of data products. As a result, we also implement the basic building blocks of a novel data pricing tool capable of providing an estimate of the market price of a new data product using just its metadata as input. Such a tool would bring more transparency to the prices of data products in the market, helping to price data assets and to avoid the inherent price fluctuation of nascent markets.
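As an illustration of this kind of regression-based pricing tool, the following is a minimal sketch. The metadata features (record count, update frequency, geographic coverage) and the toy prices are hypothetical assumptions for illustration; the actual feature set and model of the dissertation may differ.

```python
# Sketch of a metadata-based data-pricing regressor.
# Features and prices below are illustrative, not real marketplace data.
import numpy as np

# Each row: [log(num_records), updates_per_month, num_countries_covered]
X = np.array([
    [np.log(1e4), 1, 1],
    [np.log(1e5), 4, 3],
    [np.log(1e6), 30, 10],
    [np.log(1e7), 30, 50],
], dtype=float)
# Prices are log-transformed: data-product prices span orders of magnitude.
log_price = np.log(np.array([50.0, 400.0, 3000.0, 20000.0]))

# Ordinary least squares with an intercept column.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, log_price, rcond=None)

def estimate_price(metadata_row):
    """Return a market-price hint computed from metadata alone."""
    return float(np.exp(np.append(metadata_row, 1.0) @ coef))

# A new product's metadata yields a price hint without seeing the data itself.
hint = estimate_price([np.log(5e5), 10, 5])
```

The key design point is that only metadata enters the model, so a seller can obtain a price hint before listing, and without exposing the underlying data.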
Next we turn to topics related to data marketplace design. In particular, we study how buyers
can select and purchase suitable data for their tasks without requiring a priori access to such
data in order to make a purchase decision, and how marketplaces can distribute payoffs for a
data transaction combining data of different sources among the corresponding providers, be they
individuals or firms. The difficulty of both problems is further exacerbated in a human-centric
data economy where buyers have to choose among data of thousands of individuals, and where
marketplaces have to distribute payoffs to thousands of people contributing personal data to a
specific transaction.
Regarding the selection process, we compare different purchase strategies depending on the level of information available to data buyers at the time of making decisions. A first methodological contribution of our work is proposing a data-evaluation stage before datasets are selected and purchased by buyers in a marketplace. We show that buyers can significantly improve the performance of the purchasing process simply by being provided with a measurement of the performance of their models when trained by the marketplace on individual eligible datasets. We design purchase strategies that exploit this functionality, and we call the resulting algorithm Try Before You Buy. Our work demonstrates over synthetic and real datasets that it can lead to near-optimal data purchasing in only O(N) execution time, instead of the exponential O(2^N) time needed to compute the optimal purchase.

With regard to the payoff distribution problem, we focus on computing the relative value
of spatio-temporal datasets combined in marketplaces for predicting transportation demand and
travel time in metropolitan areas. Using large datasets of taxi rides from Chicago, Porto and
New York, we show that the value of data is different for each individual and cannot be approximated by its volume. Our results reveal that even more complex approaches based on the “leave-one-out” value are inaccurate. Instead, well-established notions of value from economics and game theory, such as the Shapley value, need to be employed if one wishes to capture the complex effects of mixing different datasets on the accuracy of forecasting algorithms. However, the Shapley value entails serious computational challenges. Its exact calculation requires repetitively training and evaluating every combination of data sources, and hence O(N!) or O(2^N) computational time, which is infeasible for complex models or thousands of individuals.
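To make the combinatorial cost concrete, here is a minimal sketch of exact Shapley computation over data sources. A toy diminishing-returns utility stands in for training and evaluating forecasting models, and the source names are illustrative assumptions, not the thesis pipeline. The inner loops visit every coalition of the remaining sources, which is exactly why exact computation breaks down for thousands of contributors.

```python
# Exact Shapley values for data sources (toy utility, for illustration only).
from itertools import combinations
from math import factorial

def shapley_values(sources, utility):
    """Exact Shapley value per source, given utility(frozenset) -> float.
    Enumerates all 2^(N-1) coalitions for each source: exponential cost."""
    n = len(sources)
    values = {}
    for i in sources:
        rest = [s for s in sources if s != i]
        total = 0.0
        for k in range(n):
            for coalition in combinations(rest, k):
                s = frozenset(coalition)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Weighted marginal contribution of source i to this coalition.
                total += weight * (utility(s | {i}) - utility(s))
        values[i] = total
    return values

# Toy utility: model accuracy grows with data coverage, with diminishing returns.
coverage = {"alice": 3.0, "bob": 1.0, "carol": 1.0}
def utility(coalition):
    return sum(coverage[s] for s in coalition) ** 0.5

payoffs = shapley_values(list(coverage), utility)
```

By the efficiency property, the payoffs sum exactly to the grand-coalition utility, which is what makes the Shapley value attractive for splitting a transaction's revenue among contributors.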
Moreover, our work paves the way for new methods of measuring the value of spatio-temporal data. We identify heuristics, such as entropy or similarity to the average, that show a significant correlation with the Shapley value and can therefore be used to sidestep the heavy computational cost of Shapley approximation algorithms in this specific context.
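A minimal sketch of the entropy heuristic follows, assuming each contributor's records are discretised into (zone, hour) cells; the exact discretisation used in the dissertation may differ. The point is that this score costs time linear in the number of records per source, rather than requiring any model retraining.

```python
# Entropy of a source's spatio-temporal footprint as a cheap value proxy.
# The (zone, hour) cell scheme is an illustrative assumption.
from collections import Counter
from math import log2

def entropy_score(events):
    """Shannon entropy of a source's distribution over (zone, hour) cells."""
    counts = Counter(events)
    n = sum(counts.values())
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A driver covering many zones and hours scores higher than a repetitive one.
diverse = [("z1", 8), ("z2", 9), ("z3", 18), ("z4", 22)]
repetitive = [("z1", 8)] * 4
```

Here `entropy_score(diverse)` exceeds `entropy_score(repetitive)`, matching the intuition that varied spatio-temporal coverage contributes more to demand-forecasting accuracy.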
We conclude with a number of open issues and propose further research directions that leverage
the contributions and findings of this dissertation. These include monitoring data transactions
to better measure data markets, and complementing market data with actual transaction prices
to build a more accurate data pricing tool. A human-centric data economy would also require
that the contributions of thousands of individuals to machine learning tasks are calculated daily.
For that to be feasible, we need to further optimise the efficiency of data purchasing and payoff
calculation processes in data marketplaces. In that direction, we also point to alternatives to repetitively training and evaluating a model when selecting data with Try Before You Buy and when approximating the Shapley value. Finally, we discuss the challenges and potential technologies that could help build a federation of standardised data marketplaces.
The data economy will develop fast in the upcoming years, and researchers from different
disciplines will work together to unlock the value of data and make the most of it. Perhaps the proposal that people be paid for their data and their contribution to the data economy will finally take off, or perhaps other proposals, such as the robot tax, will instead be used to balance the power between individuals and tech firms in the digital economy. Still, we hope our work sheds light on the value of data, and contributes to making the price of data more transparent and, eventually, to moving towards a human-centric data economy.

This work has been supported by IMDEA Networks Institute. Doctoral Programme in Telematics Engineering, Universidad Carlos III de Madrid. Thesis committee: President: Georgios Smaragdakis; Secretary: Ángel Cuevas Rumín; Member: Pablo Rodríguez Rodrígue
Open Problems in DAOs
Decentralized autonomous organizations (DAOs) are a new, rapidly-growing
class of organizations governed by smart contracts. Here we describe how
researchers can contribute to the emerging science of DAOs and other
digitally-constituted organizations. From granular privacy primitives to
mechanism designs to model laws, we identify high-impact problems in the DAO
ecosystem where existing gaps might be tackled through a new data set or by
applying tools and ideas from existing research fields such as political
science, computer science, economics, law, and organizational science. Our
recommendations encompass exciting research questions as well as promising
business opportunities. We call on the wider research community to join the
global effort to invent the next generation of organizations.
On the centrality analysis of covert networks using games with externalities
The identification of the most potentially hazardous agents in a terrorist organisation helps to prevent further attacks by effectively allocating surveillance resources and destabilising the covert network to which they belong. In this paper, several mechanisms for the overall ranking of covert-network members in a general framework are addressed, based on their contribution to overall relative effectiveness in the event of a merger. In addition, the possible organisation of agents outside each possible merger naturally influences their relative effectiveness score, which motivates the innovative use of games in partition function form and specific ranking indices for individuals. Finally, we apply these methods to analyse the effectiveness of the hijackers of the covert network supporting the 9/11 attacks.

This work is part of the R+D+I project grants MTM2017-87197-C3-3-P and PID2021-124030NB-C32, funded by MCIN/AEI/10.13039/501100011033/ and by “ERDF A way of making Europe”/EU. This research was also funded by Grupos de Referencia Competitiva ED431C-2021/24 from the Consellería de Cultura, Educación e Universidades, Xunta de Galicia.
Reform for Sale : a Common Agency Model with Moral Hazard Frictions
Lobbying competition is viewed as a delegated common agency game under moral hazard. Several interest groups try to influence a policy-maker who exerts effort to increase the probability that a reform be implemented. With no restriction on the space of contribution schedules, all equilibria perfectly reflect the principals’ preferences over alternatives; as a result, lobbying competition reaches efficiency. Unfortunately, such equilibria require that the policy-maker pay an interest group when the latter is hurt by the reform. When payments must remain non-negative, inducing effort requires leaving a moral hazard rent to the decision-maker. Contribution schedules no longer reflect the principals’ preferences, and the unique equilibrium is inefficient. Free-riding across congruent groups arises, and the set of groups active at equilibrium is endogenously derived. Allocative efficiency and redistribution of the aggregate surplus are linked, and both depend on the set of active principals as well as on the groups’ size.
The Political Elite, Self-Interest and Democratization: The case of the Netherlands, 1870-1920
This dissertation consists of various studies that investigate the influence of political elites' incentives on their decision-making. I investigate the relationship between politicians and the pursuit of self-interest by focusing on arguably the most obvious proxy for self-interest: politicians' personal wealth. In chapter 2, I introduce the setting for the remainder of the dissertation: the Dutch political elite in the late nineteenth and early twentieth century. This period saw radical economic as well as political change, and arguably represents the country's transition from 'extractive' to 'inclusive' institutions, featuring rapid economic growth and democratizing institutions. This chapter introduces the data on the wealth of the Dutch political elite, coming from newly collected archival data on probate inventories. I document a pattern of extremely high wealth among politicians up until the 1890s, after which the political elite's wealth declines slowly over time. This change in politicians' personal wealth coincides with the acceptance of important fiscal reforms. Even after several decades, and several suffrage extensions, the political elite remains extremely wealthy in comparison to the general population. The next chapter investigates the influence of politicians' personal wealth on their tendency to vote in favor of various far-reaching reforms. In particular, I focus on fiscal reforms and on suffrage extensions. I leverage the fact that the fiscal reforms were progressive, such that wealthier politicians' expected future tax burden was higher than that of less wealthy politicians. I hypothesize that wealthier politicians are less likely to accept these laws. In the case of suffrage extensions, I hypothesize there is no effect of personal wealth. To establish causality, I make use of variation in the expected inheritance among politicians, and use exogenous variation in politicians' fathers' profession.
The analyses show that there is an influence of personal wealth on the tendency to vote in favor of fiscal legislation, but there is no effect for suffrage extensions. The magnitude of the results is such that, had politicians collectively been wealthier by a factor of approximately 5 at the time of voting, many of the currently accepted laws would have been rejected. The counterfactuals imply that some rejected laws would have been accepted if parliament had been poorer at the time. Chapter 4 exploits a setting to look at the influence of a political career on politicians' personal wealth. Using data on candidates over a period of around 70 years, I investigate the influence of being elected an additional time on personal wealth, and I use a method to decompose these effects into ceteris paribus effects of the additional term in political office, and averages of future incumbency advantages and ceteris paribus effects. My results show that politicians are only able to accrue returns in the first period. In subsequent periods, there is no additional financial benefit to a political career. I dispel various alternative explanations, such as the selection of fair politicians, changes in consumption patterns, or career paths into lucrative functions post-political career. I provide evidence that the establishment of political parties has disciplined politicians.
Operational research: methods and applications
Throughout its history, Operational Research has evolved to include a variety of methods, models and algorithms that have been applied to a diverse and wide range of contexts. This encyclopedic article consists of two main sections: methods and applications. The first aims to summarise the up-to-date knowledge and provide an overview of the state-of-the-art methods and key developments in the various subdomains of the field. The second offers a wide-ranging list of areas where Operational Research has been applied. The article is meant to be read in a nonlinear fashion. It should be used as a point of reference or first port of call for a diverse pool of readers: academics, researchers, students, and practitioners. The entries within the methods and applications sections are presented in alphabetical order.