
    Beyond the Last Touch: Attribution in Online Advertising

    Online advertisers often use multiple publishers to deliver ads to multi-homing consumers. These ads often generate externalities, and their exposure is uncertain, both of which affect advertising effectiveness across publishers. We analytically characterize the inefficiencies created by externalities and uncertainty when information is symmetric between advertisers and publishers, in contrast to most previous research, which assumes information asymmetry. Although these inefficiencies cannot be resolved through publisher-side actions, attribution methods that measure campaign uncertainty can serve as an alternative solution to help advertisers adjust their strategies. Attribution creates a virtual competition between publishers, resulting in a team compensation problem. The equilibrium may increase the aggressiveness of advertiser bidding, leading to increased advertiser profits. The popular last-touch method is shown to over-incentivize ad exposures, often lowering advertiser profits. The Shapley value achieves an increase in profits compared to last-touch. Popular publishers and those that appear early in the conversion funnel benefit the most from advertisers using last-touch attribution. The increase in advertiser profits comes at the expense of total publisher profits and often results in decreased ad allocation efficiency. We also find that the prices paid in the market decrease when more sophisticated attribution methods are adopted.
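    The Shapley-value attribution that the abstract compares against last-touch splits conversion credit by averaging each publisher's marginal contribution across all coalitions. A minimal sketch of that computation; the two-publisher coalition value function below is an invented illustration, not data from the paper:

```python
from itertools import combinations
from math import factorial

def shapley_attribution(publishers, v):
    """Shapley-value credit per publisher, given a coalition value
    function v mapping a frozenset of publishers to the conversion
    probability that coalition generates."""
    n = len(publishers)
    credit = {}
    for p in publishers:
        others = [q for q in publishers if q != p]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                s = frozenset(s)
                # Weight |S|! (n-|S|-1)! / n! times p's marginal contribution.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(s | {p}) - v(s))
        credit[p] = total
    return credit

# Hypothetical conversion probabilities: search converts 10% alone,
# display 5% alone, 18% together (a positive externality).
values = {frozenset(): 0.0,
          frozenset({"search"}): 0.10,
          frozenset({"display"}): 0.05,
          frozenset({"search", "display"}): 0.18}
credit = shapley_attribution(["search", "display"], values.__getitem__)
```

    By construction the credits sum to the full-coalition conversion probability, so the externality is shared rather than awarded entirely to the last touchpoint.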

    Principal Stratification for Advertising Experiments

    Advertising experiments often suffer from noisy responses, making precise estimation of the average treatment effect (ATE) and evaluation of ROI difficult. We develop a principal stratification model that improves the precision of the ATE by dividing customers into three strata: those who buy regardless of ad exposure, those who buy only if exposed to ads, and those who do not buy regardless. The method decreases the variance of the ATE by separating out the typically large share of customers who never buy and therefore have individual treatment effects that are exactly zero. Applying the procedure to five catalog mailing experiments with sample sizes around 140,000 shows a reduction of 36-57% in the variance of the estimate. When we include pre-randomization covariates that predict stratum membership, we find that estimates of customers' past response to similar advertising are a good predictor of stratum membership, even if such estimates are biased because past advertising was targeted. Customers who have not purchased recently are also more likely to be in the "never purchase" stratum. We provide simple summary statistics that firms can compute from their own experiment data to determine if the procedure is expected to be beneficial before applying it.
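    The intuition that never-buyers (whose individual treatment effect is exactly zero) inflate the variance of a naive ATE estimate can be seen in a small simulation. The strata shares, the oracle stratum labels, and the true ATE of 0.20 below are invented for illustration and are not the paper's data or estimator:

```python
import random

random.seed(0)
N, REPS = 2000, 500
mean = lambda xs: sum(xs) / len(xs)
var = lambda xs: sum((x - mean(xs)) ** 2 for x in xs) / len(xs)

naive_estimates, strat_estimates = [], []
for _ in range(REPS):
    # Simulate customers: 70% never buy, 10% always buy,
    # 20% buy only if exposed to the ad (so the true ATE is 0.20).
    customers = []
    for _ in range(N):
        u = random.random()
        stratum = "never" if u < 0.7 else "always" if u < 0.8 else "persuadable"
        t = random.randrange(2)  # randomized ad exposure
        y = int(stratum == "always" or (stratum == "persuadable" and t == 1))
        customers.append((stratum, t, y))

    # Naive ATE: difference in purchase rates between arms.
    naive_estimates.append(
        mean([y for _, t, y in customers if t == 1])
        - mean([y for _, t, y in customers if t == 0]))

    # Stratified ATE: drop never-buyers (oracle labels for simplicity),
    # estimate the effect among the rest, and rescale by their share.
    rest = [(t, y) for s, t, y in customers if s != "never"]
    share = len(rest) / N
    strat_estimates.append(
        share * (mean([y for t, y in rest if t == 1])
                 - mean([y for t, y in rest if t == 0])))
```

    Both estimators are centered on the true ATE, but the stratified one has markedly lower variance because the zero-effect mass contributes no noise, which mirrors the direction of the 36-57% reductions the abstract reports.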

    Influence or Advertise: The Role of Social Learning in Influencer Marketing

    We compare influencer marketing to targeted advertising from information aggregation and product awareness perspectives. Influencer marketing leverages network effects by allowing consumers to socially learn from each other about their experienced content utility, but consumers may not know whether to attribute promotional post popularity to high content quality or high product quality. If the quality of a product is uncertain (e.g., it belongs to an unknown brand), then a mega-influencer with consistent content quality fosters more information aggregation than a targeted ad and thereby yields higher profits. When we compare influencer marketing to untargeted ad campaigns, or if the product has low quality uncertainty (e.g., it belongs to an established brand), then many micro-influencers with inconsistent content quality create more consumer awareness and yield higher profits. For products with low quality uncertainty, the firm wants to avoid information aggregation, as it disperses consumers' posterior beliefs and leads to fewer purchases at the optimal price. Our model can also explain why influencer campaigns either go viral or go bust, and how, for niche products, micro-influencers with consistent content quality can be a valuable marketing tool.

    Zero-Knowledge Proofs of Proximity

    Interactive proofs of proximity (IPPs) are interactive proofs in which the verifier runs in time sub-linear in the input length. Since the verifier cannot even read the entire input, following the property testing literature, we only require that the verifier reject inputs that are far from the language (and, as usual, accept inputs that are in the language). In this work, we initiate the study of zero-knowledge proofs of proximity (ZKPPs). A ZKPP convinces a sub-linear time verifier that the input is close to the language (similarly to an IPP) while simultaneously guaranteeing a natural zero-knowledge property. Specifically, the verifier learns nothing beyond (1) the fact that the input is in the language, and (2) what it could additionally infer by reading a few bits of the input. Our main focus is the setting of statistical zero-knowledge, where we show that the following hold unconditionally (where N denotes the input length):
    - Statistical ZKPPs can be sub-exponentially more efficient than property testers (or even non-interactive IPPs): we show a natural property which has a statistical ZKPP with a polylog(N) time verifier, but requires Omega(sqrt(N)) queries (and hence also runtime) for every property tester.
    - Statistical ZKPPs can be sub-exponentially less efficient than IPPs: we show a property which has an IPP with a polylog(N) time verifier, but cannot have a statistical ZKPP with even an N^(o(1)) time verifier.
    - Statistical ZKPPs exist for some graph-based properties, such as promise versions of expansion and bipartiteness, in the bounded-degree graph model, with polylog(N) time verifiers.
    Lastly, we also consider the computational setting, where we show that:
    - Assuming the existence of one-way functions, every language computable either in (logspace-uniform) NC or in SC has a computational ZKPP with a (roughly) sqrt(N) time verifier.
    - Assuming the existence of collision-resistant hash functions, every language in NP has a statistical zero-knowledge argument of proximity with a polylog(N) time verifier.

    Panel 13: How Will Mega-Packages Change the Shape of Computing and Organizations?

    SAP and other enterprise resource planning packages are rapidly penetrating many corporate environments. These packages are already dominant in several industries, e.g., petrochemicals, semiconductors, personal computers, and consumer products, and they are entering several others, such as financial services, health care, and the public sector. These "mega-packages" have the potential to help organizations achieve higher levels of cross-functional and cross-geography integration than ever before. They also present technological, project management, and organizational change challenges that most organizations have never faced.

    National Commission on Social, Emotional, and Academic Development: A Practice Agenda in Support of How Learning Happens

    Learning is a social, emotional, and academic endeavor, but the ways we approach learning and development do not always reflect this reality. This document features practice recommendations that seek to provide a framework through which key voices - students, teachers, families, after-school and youth development organizations - can work together to create learning environments that foster the comprehensive development of all young people.