
    Statistical Arbitrage Mining for Display Advertising

    We study and formulate arbitrage in display advertising. Real-Time Bidding (RTB) mimics stock spot exchanges and utilises computers to algorithmically buy display ads per impression via a real-time auction. Despite the new automation, the ad markets are still informationally inefficient due to the heavily fragmented marketplaces. Two display impressions with similar or identical effectiveness (e.g., measured by conversion or click-through rates for a targeted audience) may sell for quite different prices in different market segments or under different pricing schemes. In this paper, we propose a novel data mining paradigm called Statistical Arbitrage Mining (SAM), focused on mining and exploiting price discrepancies between two pricing schemes. In essence, our SAMer is a meta-bidder that hedges advertisers' risk between CPA (cost per action)-based campaigns and CPM (cost per mille impressions)-based ad inventories; it statistically assesses the potential profit and cost of an incoming CPM bid request against a portfolio of CPA campaigns, based on the estimated conversion rate, bid landscape and other statistics learned from historical data. In SAM, (i) functional optimisation is used to seek the optimal bid that maximises the expected arbitrage net profit, and (ii) a portfolio-based risk management solution is leveraged to reallocate bid volume and budget across the set of campaigns so as to trade off risk and return. We propose to jointly optimise both components in an EM fashion with high efficiency, helping the meta-bidder catch the transient statistical arbitrage opportunities in RTB. Both offline experiments on a real-world large-scale dataset and online A/B tests on a commercial platform demonstrate the effectiveness of the proposed solution in exploiting arbitrage under various model settings and market environments.
    Comment: In the proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2015).
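    To make the arbitrage calculation concrete, the following is a minimal, hypothetical Python sketch of the core quantity the abstract describes: the expected net profit of a CPM bid given an estimated conversion value and a bid landscape. The log-normal landscape, second-price payment assumption, and the toy numbers are our own illustration, not the paper's actual formulation.

```python
# Hypothetical sketch: expected arbitrage net profit of a CPM bid against
# a CPA payout, under an ASSUMED log-normal bid landscape and second-price
# payments. Numbers and model choices are illustrative only.
import numpy as np
from scipy import stats

def win_prob(bid, landscape):
    """P(winning price < bid) under the assumed bid landscape."""
    return landscape.cdf(bid)

def expected_cost_given_win(bid, landscape, n=10_000):
    """Expected second-price payment conditional on winning (Monte Carlo)."""
    prices = landscape.rvs(n)
    won = prices < bid
    return prices[won].mean() if won.any() else 0.0

def expected_net_profit(bid, cvr, cpa_payout, landscape):
    """w(b) * (estimated CPA value - expected cost paid on winning)."""
    value = cvr * cpa_payout  # expected revenue per impression
    w = win_prob(bid, landscape)
    return w * (value - expected_cost_given_win(bid, landscape))

# Toy example: 0.1% conversion rate, $50 CPA payout, per-impression prices.
landscape = stats.lognorm(s=0.5, scale=0.03)
bids = np.linspace(0.001, 0.2, 200)
profits = [expected_net_profit(b, cvr=0.001, cpa_payout=50.0,
                               landscape=landscape) for b in bids]
best = bids[int(np.argmax(profits))]
print(f"profit-maximising bid ~ ${best:.3f} per impression")
```

    In the paper's full setting, this per-request optimisation is coupled with the portfolio step that reallocates volume and budget across CPA campaigns; the sketch covers only the single-request profit term.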

    A Model of Vertical Oligopolistic Competition

    This paper develops a model of successive oligopolies with endogenous market entry, allowing for varying degrees of product differentiation and entry costs in both markets. Our analysis shows that the downstream conditions dominate the overall profitability of the two-tier structure, while the upstream conditions mainly affect the distribution of profits. We compare the welfare effects of upstream versus downstream deregulation policies and show that the impact of deregulation may be overvalued when feedback effects from the other market are ignored. Furthermore, we analyze how different forms of vertical restraints influence the endogenous market structure and show when they are welfare enhancing.

    License prices for financially constrained firms

    It is often alleged that high auction prices inhibit service deployment. We investigate this claim in the extreme case of financially constrained bidders. If demand is just slightly elastic, auctions maximize consumer surplus if consumer surplus is a convex function of quantity (a common assumption), or if consumer surplus is concave and the proportion of expenditure spent on deployment is greater than one over the elasticity of demand. The latter condition appears to hold for most of the large telecom auctions in the US and Europe. Thus, even if high auction prices inhibit service deployment, auctions appear to be optimal from the consumers' point of view.
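    Stated formally, the second condition can be read as follows. This is a hedged restatement: the symbols (CS for consumer surplus, s for the deployment share, epsilon for the demand elasticity) are our notation introduced for illustration, not necessarily the paper's.

```latex
% Restatement of the abstract's condition in symbols (notation is ours).
Auctions maximise consumer surplus if $CS(q)$ is convex in quantity $q$,
or if $CS(q)$ is concave and
\[
  s > \frac{1}{\varepsilon},
  \qquad
  s = \frac{\text{expenditure on deployment}}{\text{total expenditure}},
  \qquad
  \varepsilon = -\frac{p}{q}\,\frac{\mathrm{d}q}{\mathrm{d}p}.
\]
```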

    The Business of Process Integration


    Towards a synthesized critique of neoliberal biodiversity conservation

    During the last three decades, the arena of biodiversity conservation has largely aligned itself with the globally dominant political ideology of neoliberalism and its associated governmentalities. Schemes such as payments for ecological services are promoted to reach the multiple 'wins' so desired: improved biodiversity conservation, economic development, (international) cooperation and poverty alleviation, amongst others. While critical scholarship on the linkages between neoliberalism, capitalism and the environment has a long tradition, a synthesized critique of neoliberal conservation - the ideology (and related practices) that the salvation of nature requires capitalist expansion - remains lacking. This paper aims to provide such a critique. We commence with the assertion that there has been a conflation between 'economics' and neoliberal ideology in conservation thinking and implementation. As a result, we argue, it becomes easier to distinguish the main problems that neoliberal win-win models pose for biodiversity conservation. These are framed around three points: the stimulation of contradictions; appropriation and misrepresentation; and the disciplining of dissent. Inspired by Bruno Latour's recent 'compositionist manifesto', the conclusion outlines some ideas for moving beyond critique.

    Does ‘bigger’ mean ‘better’? Pitfalls and shortcuts associated with big data for social research

    ‘Big data is here to stay.’ This key statement has a double value: it is an assumption as well as the reason why a theoretical reflection is needed. Furthermore, Big data is gaining visibility and success in the social sciences, even overcoming the division between the humanities and computer science. In this contribution, some considerations on the presence and the certain persistence of Big data as a socio-technical assemblage will be outlined, followed by the intriguing opportunities for social research linked to this interaction between practices and technological development. However, despite a promissory rhetoric fostered by several scholars since the birth of Big data as a labelled concept, some risks are just around the corner. The claims for the methodological power of ever bigger datasets, as well as increasing speed in analysis and data collection, are creating a real hype in social research. Particular attention is needed in order to avoid some pitfalls. These risks will be analysed with regard to the validity of research results obtained through Big data. After this pars destruens, the contribution will conclude with a pars construens: building on the previous critiques, a mixed methods research design will be described as a general proposal, with the objective of stimulating a debate on the integration of Big data in complex research projects.

    Social Interactions vs Revisions, What is important for Promotion in Wikipedia?

    In epistemic communities, people are said to be selected on their knowledge contribution to the project (articles, code, etc.). However, the socialization process is an important factor for inclusion, sustainability as a contributor, and promotion. So what matters for being promoted: being a good contributor, being a good animator, or knowing the boss? We explore this question by looking at the process of election to administrator in the English Wikipedia community. We model the candidates according to their revisions and/or social attributes. These attributes are used to construct a predictive model of promotion success, based on the candidates' past behaviour and computed with a random forest algorithm. Our model, combining knowledge-contribution variables and social-networking variables, successfully explains 78% of the outcomes, which is better than previous models. It also helps to refine the criteria for election. While the number of knowledge contributions is the most important element, social interactions come a close second in explaining election results. Being connected with one's future peers (the admins) can make the difference between success and failure, making this epistemic community a very social community too. A minimal sketch of this kind of model follows below.
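    For illustration, here is a minimal sketch of the kind of classifier the abstract describes: a random forest predicting adminship-election success from contribution and social features. The feature set and the synthetic data are our assumptions for a runnable demo, not the paper's actual variables or results.

```python
# Hypothetical sketch: random forest predicting RfA (adminship) success
# from contribution and social features. Features and data are ASSUMED,
# synthetic stand-ins, not the paper's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.poisson(8_000, n),  # number of revisions (knowledge contribution)
    rng.poisson(300, n),    # edits to talk/user-talk pages (social)
    rng.poisson(40, n),     # distinct admins interacted with (social)
])
# Synthetic labels only, so the demo runs end to end.
y = ((X[:, 0] > 7_900) & (X[:, 2] > 38)).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cv accuracy:", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)
print("feature importances:", clf.feature_importances_)
```

    The feature importances reported by the forest are the natural place to compare contribution against social variables, mirroring the abstract's finding that revisions rank first with social interactions a close second.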