
    Applications of econometrics and machine learning to development and international economics

    In the first chapter, I explore whether features derived from high-resolution satellite images of Sri Lanka can predict poverty or income in local areas. From the satellite imagery I extract area-specific indicators of economic well-being, including the number of cars, the type and extent of crops, the length and type of roads, roof extent and roof type, building height, and the number of buildings. The estimated models explain between 60 and 65 percent of the village-level variation in poverty and in the average level of log income. The second chapter investigates the effects of preferential trade programs such as the U.S. African Growth and Opportunity Act (AGOA) on the direction of African countries’ exports. While these programs are intended to promote African exports, textbook models of trade suggest that such asymmetric tariff reductions could divert African exports from other destinations to the tariff-reducing economy. I examine the import patterns of 177 countries and estimate the diversion effect using a triple-difference estimation strategy that exploits time variation in the product and country coverage of AGOA. I find no evidence of systematic trade diversion within Africa, but I do find evidence of diversion from other industrialized destinations, particularly for apparel products. In the third chapter I apply three model selection methods (Lasso-regularized regression, Bayesian Model Averaging, and Extreme Bounds Analysis) to candidate variables in a gravity model of trade. Using a panel dataset of 198 countries covering the years 1970 to 2000, I find that the model selection methods identify many fewer robust variables than the null-hypothesis-rejection methodology based on ordinary least squares.
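
    The third-chapter exercise can be illustrated with a minimal sketch of cross-validated Lasso applied to candidate gravity-model covariates. The input file, column names, and covariate list below are hypothetical, chosen only to show how Lasso shrinks non-robust covariates to zero; this is not the dissertation's actual data or code.

        # Minimal sketch: Lasso-based variable selection for a gravity model.
        # The CSV and column names are illustrative placeholders.
        import pandas as pd
        from sklearn.linear_model import LassoCV
        from sklearn.preprocessing import StandardScaler

        df = pd.read_csv("gravity_panel.csv")               # hypothetical input
        candidates = ["log_gdp_o", "log_gdp_d", "log_distance",
                      "common_border", "common_language", "colonial_tie",
                      "rta", "common_currency"]              # illustrative covariates
        X = StandardScaler().fit_transform(df[candidates])
        y = df["log_trade_flow"]

        # Covariates whose cross-validated Lasso coefficients shrink to zero are
        # treated as non-robust, mirroring the model selection idea above.
        lasso = LassoCV(cv=5, random_state=0).fit(X, y)
        robust = [c for c, b in zip(candidates, lasso.coef_) if abs(b) > 1e-8]
        print("Covariates retained by Lasso:", robust)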

    Towards a human-centric data economy

    Spurred by the widespread adoption of artificial intelligence and machine learning, “data” is becoming a key production factor, comparable in importance to capital, land, or labour in an increasingly digital economy. In spite of an ever-growing demand for third-party data in the B2B market, firms are generally reluctant to share their information. This is due to the unique characteristics of “data” as an economic good (a freely replicable, non-depletable asset holding a highly combinatorial and context-specific value), which lead digital companies to hoard and protect their “valuable” data assets and to integrate across the whole value chain in order to monopolise the provision of innovative services built upon them. As a result, most of these valuable assets still remain unexploited in corporate silos. This situation is shaping the so-called data economy around a small number of champions, and it is hampering the benefits of a global, large-scale data exchange. Some analysts have estimated the potential value of the data economy at US$2.5 trillion globally by 2025. Not surprisingly, unlocking the value of data has become a central policy of the European Union, which has estimated the size of the data economy at €827 billion for the EU27 over the same period. Within the scope of the European Data Strategy, the European Commission is also steering initiatives aimed at identifying relevant cross-industry use cases involving different verticals and at enabling sovereign data exchanges to realise them.

    Among individuals, the massive collection and exploitation of personal data by digital firms in exchange for services, often with little or no consent, has raised a general concern about privacy and data protection. Apart from spurring recent legislative developments in this direction, this concern has prompted warnings about the unsustainability of the existing digital economics (a few digital champions, a potential negative impact on employment, growing inequality), some of which propose that people be paid for their data in a sort of worldwide data labour market as a potential solution to this dilemma [114, 115, 155].

    From a technical perspective, we are far from having the technology and algorithms required to enable such a human-centric data economy. Even its scope is still blurry, and the question of how to value data is, at the very least, controversial. Research from different disciplines has studied the data value chain, different approaches to the value of data, how to price data assets, and novel data marketplace designs. At the same time, complex legal and ethical issues with respect to the data economy have arisen around privacy, data protection, and ethical AI practices.

    In this dissertation, we start by exploring the data value chain and how entities trade data assets over the Internet. We carry out what is, to the best of our knowledge, the most thorough survey of commercial data marketplaces. In this work, we have catalogued and characterised ten different business models, including those of personal information management systems, companies born in the wake of recent data protection regulations that aim to empower end users to take control of their data. We have also identified the challenges faced by different types of entities, and the kinds of solutions and technology they use to provide their services. We then present a first-of-its-kind measurement study that sheds light on the prices of data in the market using a novel methodology.
    We study how ten commercial data marketplaces categorise and classify data assets, and which categories of data command higher prices. We also develop classifiers for comparing data products across different marketplaces, and we study the characteristics of the most valuable data assets and the features that specific vendors use to set the price of their data products. Based on this information, and adding data products offered by 33 other data providers, we develop a regression analysis that reveals which features correlate with the prices of data products. As a result, we also implement the basic building blocks of a novel data pricing tool capable of providing a first estimate of the market price of a new data product using only its metadata as input. Such a tool would bring more transparency to the prices of data products in the market, helping providers price their data assets and avoiding the price fluctuations inherent to nascent markets.

    Next we turn to topics related to data marketplace design. In particular, we study how buyers can select and purchase suitable data for their tasks without requiring a priori access to such data in order to make a purchase decision, and how marketplaces can distribute the payoff of a data transaction that combines data from different sources among the corresponding providers, be they individuals or firms. The difficulty of both problems is further exacerbated in a human-centric data economy, where buyers have to choose among the data of thousands of individuals, and where marketplaces have to distribute payoffs to thousands of people contributing personal data to a specific transaction.

    Regarding the selection process, we compare different purchase strategies depending on the level of information available to data buyers at the time of making decisions. A first methodological contribution of our work is the proposal of a data evaluation stage before datasets are selected and purchased by buyers in a marketplace. We show that buyers can significantly improve the performance of the purchasing process simply by being provided with a measurement of the performance of their models when trained by the marketplace with each individual eligible dataset. We design purchase strategies that exploit this functionality, calling the resulting algorithm Try Before You Buy, and we demonstrate on synthetic and real datasets that it can lead to near-optimal data purchases in only O(N) evaluations instead of the exponential O(2^N) needed to compute the optimal purchase.

    With regard to the payoff distribution problem, we focus on computing the relative value of spatio-temporal datasets combined in marketplaces for predicting transportation demand and travel time in metropolitan areas. Using large datasets of taxi rides from Chicago, Porto and New York, we show that the value of data is different for each individual and cannot be approximated by its volume. Our results reveal that even more elaborate approaches based on the “leave-one-out” value are inaccurate. Instead, richer and well-established notions of value from economics and game theory, such as the Shapley value, need to be employed if one wishes to capture the complex effects of mixing different datasets on the accuracy of forecasting algorithms. However, the Shapley value entails serious computational challenges: its exact calculation requires repeatedly training and evaluating a model on every combination of data sources, and hence O(N!) or O(2^N) computational time, which is infeasible for complex models or for thousands of individuals.
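
    Approximation is therefore the practical route. The sketch below shows the standard permutation-sampling (Monte Carlo) estimator of the Shapley value; the utility function is a toy placeholder standing in for training and evaluating a forecasting model on a coalition of data sources, and it does not reflect the dissertation's actual implementation.

        # Minimal sketch: permutation-sampling estimate of Shapley values.
        # `utility(coalition)` is a placeholder; in the setting above it would be
        # the accuracy of a demand or travel-time model trained on that coalition.
        import random

        def shapley_estimate(sources, utility, num_permutations=1000, seed=0):
            rng = random.Random(seed)
            phi = {s: 0.0 for s in sources}
            for _ in range(num_permutations):
                order = sources[:]
                rng.shuffle(order)
                coalition, prev_value = [], utility([])
                for s in order:
                    coalition.append(s)
                    value = utility(coalition)
                    phi[s] += value - prev_value      # marginal contribution of s
                    prev_value = value
            return {s: v / num_permutations for s, v in phi.items()}

        # Toy usage: a utility with diminishing returns in total data volume.
        volumes = {"a": 10, "b": 5, "c": 1}
        toy_utility = lambda coalition: sum(volumes[s] for s in coalition) ** 0.5
        print(shapley_estimate(list(volumes), toy_utility, num_permutations=500))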
    Moreover, our work paves the way towards new methods of measuring the value of spatio-temporal data. We identify heuristics, such as entropy or similarity to the average, that show a significant correlation with the Shapley value and can therefore be used to overcome the computational challenges posed by Shapley approximation algorithms in this specific context.

    We conclude with a number of open issues and propose further research directions that leverage the contributions and findings of this dissertation. These include monitoring data transactions to better measure data markets, and complementing market data with actual transaction prices to build a more accurate data pricing tool. A human-centric data economy would also require that the contributions of thousands of individuals to machine learning tasks be calculated daily. For that to be feasible, we need to further optimise the efficiency of the data purchasing and payoff calculation processes in data marketplaces. In that direction, we also point to alternatives to repeatedly training and evaluating a model in order to select data with Try Before You Buy and to approximate the Shapley value. Finally, we discuss the challenges of, and potential technologies for, building a federation of standardised data marketplaces.

    The data economy will develop fast in the coming years, and researchers from different disciplines will have to work together to unlock the value of data and make the most of it. Perhaps the proposal that people be paid for their data and their contribution to the data economy will finally take off, or perhaps other proposals, such as a robot tax, will ultimately be used to rebalance power between individuals and tech firms in the digital economy. In any case, we hope our work sheds light on the value of data, contributes to making the price of data more transparent and, eventually, helps move towards a human-centric data economy.

    This work has been supported by IMDEA Networks Institute. PhD Programme in Telematic Engineering, Universidad Carlos III de Madrid. Committee: Chair: Georgios Smaragdakis; Secretary: Ángel Cuevas Rumín; Member: Pablo Rodríguez Rodrígue
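
    As a rough illustration of the kind of metadata-based pricing model discussed in the abstract above, the following sketch regresses log prices on simple catalogue metadata. The input file, feature names, and model choice are illustrative assumptions, not the dissertation's pipeline.

        # Minimal sketch: relating data-product prices to catalogue metadata.
        # The CSV and feature names are hypothetical placeholders.
        import numpy as np
        import pandas as pd
        from sklearn.compose import ColumnTransformer
        from sklearn.linear_model import Ridge
        from sklearn.pipeline import Pipeline
        from sklearn.preprocessing import OneHotEncoder, StandardScaler

        df = pd.read_csv("data_products.csv")                     # hypothetical input
        cat_cols = ["category", "update_frequency", "geographic_scope"]
        num_cols = ["history_years", "num_records"]
        X, y = df[cat_cols + num_cols], np.log1p(df["price_usd"])  # log-price target

        pre = ColumnTransformer([
            ("cat", OneHotEncoder(handle_unknown="ignore"), cat_cols),
            ("num", StandardScaler(), num_cols)])
        model = Pipeline([("pre", pre), ("reg", Ridge(alpha=1.0))]).fit(X, y)

        # Inspect which metadata features move prices up or down (on the log scale).
        names = model.named_steps["pre"].get_feature_names_out()
        coefs = model.named_steps["reg"].coef_
        for name, beta in sorted(zip(names, coefs), key=lambda t: -abs(t[1]))[:10]:
            print(f"{name:35s} {beta:+.3f}")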

    Research on formation of strategic alliance and its effect on container lines’ efficiency


    Internal migration, remittances and household welfare: evidence from South Africa

    Includes bibliographical references. In this thesis, I investigate the economic linkages between internal labour migration and the welfare of migrant-sending households and communities. The analysis is couched in the new economics of labour migration theory, which recognises the familial participation in migration decisions and therefore the potential role of economic linkages between migrants and their original households.

    Interdependencies: essays on cross-shareholdings, social networks, and sectoral linkages


    The Pearson Commission, Aid Diplomacy and the Rise of the World Bank, 1966-1970

    This thesis uses a focus on the Pearson Commission to explore some of the policy and institutional dynamics of international development aid during the late 1960s and early 1970s. It sets these explorations within a theoretical framework and an historical context. Firstly, it draws on the theory of ‘international regimes’ created by international relations scholars. While acknowledging the importance of economic and military power balances, regime theorists argue that the nature of international policy-making is also partially defined by ‘principles, rules, norms and processes’ which shape how policy-makers act. Using political science theory, the thesis identifies three groups who create and shape these regimes: elites, epistemic communities and bureaucrats. Through a close focus on the dynamics at play within the Pearson Commission’s creation, operation and reception, the main body of the thesis identifies how a small group of individuals, such as William Clark and Barbara Ward, acted to coordinate sections of these three groups within an ‘aid community’ as the international aid regime changed in the late 1960s and early 1970s. It is argued that specific changes within this regime, including the emergence of the World Bank as a technical leader on aid matters, the establishment of the 0.7 per cent aid volume target, and the creation of a definition of official development assistance (ODA), can be attributed to the workings of this community. This concept of a fractious and fragile aid community is used to challenge accounts of this period which emphasise the inexorability of the rise of the World Bank, or which prioritise the importance of ideas and knowledge in explaining the changes in the aid regime.

    Inequality in the Developing World

    Inequality has emerged as a key development challenge. It holds implications for economic growth and redistribution, and it translates into power asymmetries that can endanger human rights, create conflict, and embed social exclusion and chronic poverty. For these reasons it underpins intense public and academic debates and has become a dominant policy concern within many countries and in all multilateral agencies. It is at the core of the seventeen goals of the UN 2030 Agenda for Sustainable Development. This book contributes to this important discussion by presenting assessments of the measurement and analysis of global inequality by leading inequality scholars, aligning these with comprehensive reviews of inequality trends in five of the world’s largest developing countries: Brazil, China, India, Mexico, and South Africa. Each is a persistently high or newly high-inequality context, and, against the backdrop of the changing global inequality situation, the country chapters investigate the main factors shaping their different inequality dynamics. Particular attention is paid to how broader societal inequalities arising outside the labour market have intersected with the rapidly changing labour market milieus of the last few decades. Collectively, these chapters provide a nuanced discussion of key distributive phenomena such as the high concentration of income among the most affluent, gender inequalities, and social mobility. The substantive tax and social benefit policies that each country has implemented to mitigate these inequality dynamics are assessed in detail. The book takes the lessons from these contexts back into the global analysis of inequality and social mobility and of the policies needed to address inequality.

    Achieving the Circular Economy

    Urbanisation and climate change are pushing cities to find novel pathways towards a sustainable future. The urban context can be viewed as a new experimentation space for accelerating the transition to a circular economy. Urban symbiosis and the circular economy are emerging concepts that are attracting more and more attention in the urban context. Moreover, new business models are emerging around sharing and peer-to-peer practices, challenging the existing roles of actors in society. These developments are having an important impact on resource flows and on the use of city infrastructure, and each research area has taken a different perspective in analysing such impacts. This Special Issue explores what a “circular city” could constitute and how and why cities engage in circularity. It includes seven high-quality papers on the theories and practices of circular cities. The actors, concepts, methods, and tools involved, as well as the barriers to and enablers of circular cities, are discussed, providing a solid base and inspiration for the future development of circular cities.

    Measuring the Business Value of Cloud Computing

    The importance of demonstrating the value achieved from IT investments is long established in the Computer Science (CS) and Information Systems (IS) literature. However, emerging technologies such as the ever-changing, complex area of cloud computing present new challenges and opportunities for demonstrating how IT investments lead to business value. Recent reviews of the extant literature highlight the need for multi-disciplinary research that explores and further develops the conceptualization of value in cloud computing research. In addition, there is a need for research that investigates how IT value manifests itself across the chain of service provision and in inter-organizational scenarios. This open access book reviews the state of the art from an IS, Computer Science and Accounting perspective, introduces and discusses the main techniques for measuring the business value of cloud computing in a variety of scenarios, and illustrates these with mini-case studies.
